threads |
---|
[
{
"msg_contents": "Hi all,\n\nWe are working on a patch which targets the overhead of spinlock in buffer cache.In this regard, we need to exercise buffer spinlock contention on a 16 core machine.\n\nCould anyone please suggest some good methods for exercising the same?\n\nRegards,\n\nAtri\n\nSent from my iPad\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Apr 2013 11:39:24 +0530",
"msg_from": "Atri Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice on testing buffer spin lock contention"
},
{
"msg_contents": "> You will want a select only workload in which all data fits in\n> shared_buffers, and that doesn't do a round trip to some external driver\n> program for each row selected.\n>\n> I proposed a patch for pgbench to add a new transaction of this nature last\n> year under the subject \"pgbench--new transaction type\". Heikki also\n> described how you can make a custom pgbench -f file to accomplish the same\n> thing without having to change and recompile pgbench.\n>\n>\n> There was also a length discussion about nailing certain pages so they don't\n> need to get pinned and unpinned all the time, under \"9.2beta1, parallel\n> queries, ReleasePredicateLocks, CheckForSerializableConflictIn in the\n> oprofile\"\n>\n\nThanks a ton, we will try it out.\n\nRegards,\n\nAtri\n\n\n--\nRegards,\n\nAtri\nl'apprenant\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Apr 2013 10:54:30 -0700",
"msg_from": "Atri Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Advice on testing buffer spin lock contention"
}
] |
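
A minimal sketch of the custom pgbench -f script mentioned in the reply above (no script appears in the thread itself; the table, range size, and client counts are illustrative). A SELECT-only transaction that aggregates over a range touches many shared buffers per call while returning only a single row, so per-row client round trips do not dominate:

    -- buffer_scan.sql: hypothetical pgbench custom script (not from the thread).
    -- Pins and unpins many buffer pages per transaction; run against a pgbench
    -- database whose data fits entirely in shared_buffers.
    \setrandom aid 1 100000
    SELECT count(*) FROM pgbench_accounts WHERE aid BETWEEN :aid AND :aid + 1000;

Something like `pgbench -n -f buffer_scan.sql -c 16 -j 16 -T 300` would then drive one client per core on the 16-core machine.
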
[
{
"msg_contents": "Hi all,\nI have this simple query that has performance issue. I am assuming there is something wrong in our configuration. Can someone point me to the right direction? (our work_mem is set to 64MB)\n\nsrdb=> explain analyze 013-04-15 16:51:20,223 INFO [com.vasoftware.sf.server.common.querygenerator.Query] (http--127.0.0.1-8080-11) Query: [Id: 4652] [duration: 51999][debug sql] SELECT\npsrdb(> MAX(length(discussion_post.id)) AS maxLength\npsrdb(> FROM\npsrdb(> discussion_post discussion_post^C\npsrdb=> explain analyze SELECT\npsrdb-> MAX(length(discussion_post.id)) AS maxLength\npsrdb-> FROM\npsrdb-> discussion_post discussion_post\npsrdb-> ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=10000043105.13..10000043105.14 rows=1 width=10) (actual time=52150.015..52150.015 rows=1 loops=1)\n -> Seq Scan on discussion_post (cost=10000000000.00..10000041980.90 rows=449690 width=10) (actual time=0.006..51981.746 rows=449604 loops=1)\nTotal runtime: 52150.073 ms\n(3 rows)\n\nThanks a lot,\nAnne\n\n\n\n\n\n\n\n\n\nHi all,\nI have this simple query that has performance issue. I am assuming there is something wrong in our configuration. Can someone point me to the right direction? (our work_mem is set to 64MB)\n \nsrdb=> explain analyze 013-04-15 16:51:20,223 INFO [com.vasoftware.sf.server.common.querygenerator.Query] (http--127.0.0.1-8080-11) Query: [Id: 4652] [duration: 51999][debug sql] SELECT\npsrdb(> MAX(length(discussion_post.id)) AS maxLength\npsrdb(> FROM\npsrdb(> discussion_post discussion_post^C\npsrdb=> explain analyze SELECT\npsrdb-> MAX(length(discussion_post.id)) AS maxLength\npsrdb-> FROM\npsrdb-> discussion_post discussion_post\npsrdb-> ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=10000043105.13..10000043105.14 rows=1 width=10) (actual time=52150.015..52150.015 rows=1 loops=1)\n -> Seq Scan on discussion_post (cost=10000000000.00..10000041980.90 rows=449690 width=10) (actual time=0.006..51981.746 rows=449604 loops=1)\nTotal runtime: 52150.073 ms\n(3 rows)\n \nThanks a lot,\nAnne",
"msg_date": "Mon, 15 Apr 2013 20:59:50 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance on an aggregate query"
},
{
"msg_contents": "On Mon, Apr 15, 2013 at 1:59 PM, Anne Rosset <[email protected]> wrote:\n> I have this simple query that has performance issue. I am assuming there is\n> something wrong in our configuration. Can someone point me to the right\n> direction? (our work_mem is set to 64MB)\n\nWhy don't you try creating a functional index on length(discussion_post.id)?\n\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Apr 2013 14:04:43 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance on an aggregate query"
}
] |
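
A sketch of the expression index Peter suggests (hypothetical; not part of the thread). PostgreSQL can satisfy a bare MIN/MAX over an indexed expression with a single index probe instead of the sequential scan shown in the plan above:

    -- Hypothetical illustration of the suggestion above.
    CREATE INDEX discussion_post_id_length_idx
        ON discussion_post (length(id));

    -- With the index in place, the planner can rewrite MAX(...) into an
    -- ORDER BY ... LIMIT 1 probe of the index instead of scanning the table:
    EXPLAIN ANALYZE
    SELECT MAX(length(discussion_post.id)) AS maxLength
    FROM discussion_post;
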
[
{
"msg_contents": "Dear All,\n\nCan any one please help me to fix this issue, i am getting this error from\nour application, currently Database is running on 9.2.\n\n2013-04-17 11:37:25:151 - {ERROR} database.ConnectionManager Thread\n[http-8080-1]; --- getConnection() Exception:\norg.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool\nerror Timeout waiting for idle object\n at\norg.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)\n at\norg.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)\n at\ncom.tenkinfo.b2g.database.ConnectionManager.getConnection(ConnectionManager.java:39)\n at\ncom.tenkinfo.b2g.usermanagement.dao.UserManagementDAOImpl.getSessionData(UserManagementDAOImpl.java:228)\n at\ncom.tenkinfo.mapnsav.common.action.BaseAction.execute(BaseAction.java:156)\n at\norg.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)\n at\norg.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)\n at\norg.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)\n at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)\n at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)\n at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)\n at\norg.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)\n at\norg.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)\n at\norg.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)\n at\norg.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)\n\nDo i have to set tcp _keepalive * paramter to less sec, or need to kill the\nidle connection ?\n\nRegards,\nItishree\n\nDear All,\n \nCan any one please help me to fix this issue, i am getting this error from our application, currently Database is running on 9.2.\n \n2013-04-17 11:37:25:151 - {ERROR} database.ConnectionManager Thread [http-8080-1]; --- getConnection() Exception:org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object\n at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114) at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) at com.tenkinfo.b2g.database.ConnectionManager.getConnection(ConnectionManager.java:39)\n at com.tenkinfo.b2g.usermanagement.dao.UserManagementDAOImpl.getSessionData(UserManagementDAOImpl.java:228) at com.tenkinfo.mapnsav.common.action.BaseAction.execute(BaseAction.java:156) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)\n at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)\n at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)\n at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176) at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)\n \nDo i have to set tcp _keepalive * paramter to less sec, or need to kill the idle connection ?\n \nRegards,\nItishree",
"msg_date": "Wed, 17 Apr 2013 21:31:31 +0530",
"msg_from": "itishree sukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQLNestedException: Cannot get a connection, pool error Timeout\n\twaiting for idle object"
},
{
"msg_contents": "Hi Itsrhree\n\n From the machine where is running the tomcat, do you check that you can connect to postgresql server (remember check parameters of connection, user, password, ip)?\n\nHaving this first step tested, then:\n\nDo you have the correct connection pool configured on Catalina (Tomcat) and let this software to configure the pool of database connections?\n\nGood luck :)\n\nEl 17/04/2013, a las 17:01, itishree sukla <[email protected]> escribió:\n\n> Dear All,\n> \n> Can any one please help me to fix this issue, i am getting this error from our application, currently Database is running on 9.2.\n> \n> 2013-04-17 11:37:25:151 - {ERROR} database.ConnectionManager Thread [http-8080-1]; --- getConnection() Exception:\n> org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object\n> at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)\n> at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)\n> at com.tenkinfo.b2g.database.ConnectionManager.getConnection(ConnectionManager.java:39)\n> at com.tenkinfo.b2g.usermanagement.dao.UserManagementDAOImpl.getSessionData(UserManagementDAOImpl.java:228)\n> at com.tenkinfo.mapnsav.common.action.BaseAction.execute(BaseAction.java:156)\n> at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)\n> at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)\n> at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)\n> at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)\n> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)\n> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)\n> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)\n> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)\n> at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)\n> at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)\n> \n> Do i have to set tcp _keepalive * paramter to less sec, or need to kill the idle connection ?\n> \n> Regards,\n> Itishree\n\nAlfonso Afonso\n(personal)\n\n\n\n\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Wed, 17 Apr 2013 19:48:42 +0100",
"msg_from": "Alfonso Afonso <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQLNestedException: Cannot get a connection,\n\tpool error Timeout waiting for idle object"
},
{
"msg_contents": "I'd say you either have overloaded application (try increasing timeout),\ntoo small pool (increase pool) or connection leaks (find and fix).\n18 квіт. 2013 23:45, \"itishree sukla\" <[email protected]> напис.\n\n> Dear All,\n>\n> Can any one please help me to fix this issue, i am getting this error from\n> our application, currently Database is running on 9.2.\n>\n> 2013-04-17 11:37:25:151 - {ERROR} database.ConnectionManager Thread\n> [http-8080-1]; --- getConnection() Exception:\n> org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool\n> error Timeout waiting for idle object\n> at\n> org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)\n> at\n> org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)\n> at\n> com.tenkinfo.b2g.database.ConnectionManager.getConnection(ConnectionManager.java:39)\n> at\n> com.tenkinfo.b2g.usermanagement.dao.UserManagementDAOImpl.getSessionData(UserManagementDAOImpl.java:228)\n> at\n> com.tenkinfo.mapnsav.common.action.BaseAction.execute(BaseAction.java:156)\n> at\n> org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)\n> at\n> org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)\n> at\n> org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196)\n> at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)\n> at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)\n> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)\n> at\n> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)\n> at\n> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)\n> at\n> org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)\n> at\n> org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)\n>\n> Do i have to set tcp _keepalive * paramter to less sec, or need to kill\n> the idle connection ?\n>\n> Regards,\n> Itishree\n>\n\nI'd say you either have overloaded application (try increasing timeout), too small pool (increase pool) or connection leaks (find and fix).\n18 квіт. 
2013 23:45, \"itishree sukla\" <[email protected]> напис.\nDear All,\n \nCan any one please help me to fix this issue, i am getting this error from our application, currently Database is running on 9.2.\n \n2013-04-17 11:37:25:151 - {ERROR} database.ConnectionManager Thread [http-8080-1]; --- getConnection() Exception:org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object\n\n at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114) at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) at com.tenkinfo.b2g.database.ConnectionManager.getConnection(ConnectionManager.java:39)\n\n at com.tenkinfo.b2g.usermanagement.dao.UserManagementDAOImpl.getSessionData(UserManagementDAOImpl.java:228) at com.tenkinfo.mapnsav.common.action.BaseAction.execute(BaseAction.java:156) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)\n\n at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1196) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)\n\n at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)\n\n at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176) at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)\n \nDo i have to set tcp _keepalive * paramter to less sec, or need to kill the idle connection ?\n \nRegards,\nItishree",
"msg_date": "Fri, 19 Apr 2013 00:17:39 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] SQLNestedException: Cannot get a connection, pool error\n\tTimeout waiting for idle object"
}
] |
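
The replies above point at pool exhaustion or a connection leak; a hypothetical server-side check (not from the thread) is to look at what the pool's connections are doing in pg_stat_activity, which on 9.2 reports per-connection state and how long each connection has been in that state:

    -- Hypothetical diagnostic query (PostgreSQL 9.2 column names).
    SELECT pid, usename, client_addr, state,
           now() - state_change AS in_state_for,
           left(query, 60)      AS last_query
    FROM pg_stat_activity
    WHERE state = 'idle'
    ORDER BY in_state_for DESC;

Many long-idle connections from the application host would suggest a leak or an oversized pool rather than a database-side problem.
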
[
{
"msg_contents": "Hello all,\n\nI'm running into an issue when joining between to tables that are partitioned by month. At this point I'm leaning towards it being a bug in the planner but it could be due to something I'm not doing properly as well. Each parent table is empty and has about 30 child tables, and there are between 2 and 10 mil rows total in each set of partitions. When selecting a particular day's data, indexes and constraint exclusion are used if queried individually and the results return in under a second. However, querying from the two tables inner/natural joined together with a single day in the where clause results in a full sequential scan on the second table, so the query takes a ridiculous amount of time. Changing the order of the tables in the join changes which table is fully scanned.\n\nAll child tables have been recently vacuum analyzed. I've played around with this every which way, and not been able to get the planner to make a more reasonable decision. I have several different boxes with varying physical specs running either CentOS 5.8 or 6.4, and Postgres 8.4.17, and they all exhibit the same behavior, so I've ruled out the possibility that it's related to a particular quirk in one database. I didn't notice the issue at first because the tables weren't large enough for it to cause any serious performance issues. Now that the tables have grown, queries involving a join no longer finish in any reasonable number of hours.\n\nI've been able to reproduce the issue in a generic environment and posted the code to create this environment on my GitHub at https://github.com/mikeokner/pgsql_test. The query plans demonstrating this issue are pasted here: http://bpaste.net/show/92138/. I've poked around on IRC and no one seems to think this is normal behavior. Is it in fact a bug or is there something I should be doing to fix this behavior?\n\nRegards,\nMike\nHello all,I'm running into an issue when joining between to tables that are partitioned by month. At this point I'm leaning towards it being a bug in the planner but it could be due to something I'm not doing properly as well. Each parent table is empty and has about 30 child tables, and there are between 2 and 10 mil rows total in each set of partitions. When selecting a particular day's data, indexes and constraint exclusion are used if queried individually and the results return in under a second. However, querying from the two tables inner/natural joined together with a single day in the where clause results in a full sequential scan on the second table, so the query takes a ridiculous amount of time. Changing the order of the tables in the join changes which table is fully scanned.All child tables have been recently vacuum analyzed. I've played around with this every which way, and not been able to get the planner to make a more reasonable decision. I have several different boxes with varying physical specs running either CentOS 5.8 or 6.4, and Postgres 8.4.17, and they all exhibit the same behavior, so I've ruled out the possibility that it's related to a particular quirk in one database. I didn't notice the issue at first because the tables weren't large enough for it to cause any serious performance issues. Now that the tables have grown, queries involving a join no longer finish in any reasonable number of hours.I've been able to reproduce the issue in a generic environment and posted the code to create this environment on my GitHub at https://github.com/mikeokner/pgsql_test. 
The query plans demonstrating this issue are pasted here: http://bpaste.net/show/92138/. I've poked around on IRC and no one seems to think this is normal behavior. Is it in fact a bug or is there something I should be doing to fix this behavior?Regards,Mike",
"msg_date": "Wed, 17 Apr 2013 11:59:20 -0500",
"msg_from": "Michael Okner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner ignoring constraints on partitioned tables when joining"
},
{
"msg_contents": "Michael Okner <[email protected]> writes:\n> I've been able to reproduce the issue in a generic environment and posted the code to create this environment on my GitHub at https://github.com/mikeokner/pgsql_test. The query plans demonstrating this issue are pasted here: http://bpaste.net/show/92138/. I've poked around on IRC and no one seems to think this is normal behavior. Is it in fact a bug or is there something I should be doing to fix this behavior?\n\nIt's not a bug, though I can see why you'd like to wish it was.\n\nWhat you've essentially got is\n\nWHERE\n\t(group_bbb_one.start_time = group_bbb_two.start_time)\n\tAND\n\t(group_bbb_one.start_time >= '2013-02-04 00:00:00'\n\t AND group_bbb_one.start_time < '2013-02-05 00:00:00');\n\nwhere the first clause is expanded out from the NATURAL JOIN, and the\nrest is the way the parser interprets the references to the natural\njoin's outputs. So you have fixed constraints only on\ngroup_bbb_one.start_time, which is why constraint exclusion triggers for\nthat table hierarchy and not the other one.\n\nThe only convenient way to fix this is to explicitly repeat the\nconstraints for each side of the join, eg\n\nSELECT * FROM group_bbb_one NATURAL JOIN group_bbb_two\nWHERE (group_bbb_one.start_time >= '2013-02-24 00:00:00'\n AND group_bbb_one.start_time < '2013-02-25 00:00:00')\n AND (group_bbb_two.start_time >= '2013-02-24 00:00:00'\n AND group_bbb_two.start_time < '2013-02-25 00:00:00');\n\nNow I can see why you might think this is a bug, because you don't have\nto do it when the WHERE constraint is a simple equality. Then you\nwould have, in effect,\n\nWHERE\n\t(group_bbb_one.start_time = group_bbb_two.start_time)\n\tAND\n\t(group_bbb_one.start_time = '2013-02-04 00:00:00');\n\nwhich the planner's equivalence-class mechanism replaces with\n\nWHERE\n\t(group_bbb_one.start_time = '2013-02-04 00:00:00')\n\tAND\n\t(group_bbb_two.start_time = '2013-02-04 00:00:00');\n\nand so you get fixed constraints on both tables without having to write\nit out explicitly. But that only works for equality conditions.\n\nOne could imagine adding planner logic that would make inferences of a\nsimilar sort for equalities combined with inequalities, but it would be\nvastly more complicated, and would provide useful results in vastly\nfewer queries, than the equality-propagation logic. So don't hold your\nbreath waiting for something like that to happen.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Apr 2013 17:42:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner ignoring constraints on partitioned tables when\n\tjoining"
},
{
"msg_contents": "On 18 April 2013 22:42, Tom Lane <[email protected]> wrote:\n\n> One could imagine adding planner logic that would make inferences of a\n> similar sort for equalities combined with inequalities, but it would be\n> vastly more complicated, and would provide useful results in vastly\n> fewer queries, than the equality-propagation logic. So don't hold your\n> breath waiting for something like that to happen.\n\nI'll take note that we need to make partitioning work for merge joins also.\n\nOn a more general note, it would be good to be able to look at the\nstarting value from the driving table of the join and use that as a\nconstraint in the scan on the second table. We rely on that mechanism\nfor nested loop joins, so we could do with that here also.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 May 2013 13:41:14 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner ignoring constraints on partitioned\n\ttables when joining"
}
] |
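
A hypothetical way to verify Tom's workaround (not from the thread; table and column names follow his example): repeat the range predicate for both sides of the join and check with EXPLAIN that constraint exclusion now prunes the children of both hierarchies:

    -- constraint_exclusion = partition has been the default since 8.4;
    -- SHOW is just a sanity check here.
    SHOW constraint_exclusion;

    EXPLAIN
    SELECT *
    FROM group_bbb_one NATURAL JOIN group_bbb_two
    WHERE group_bbb_one.start_time >= '2013-02-24 00:00:00'
      AND group_bbb_one.start_time <  '2013-02-25 00:00:00'
      AND group_bbb_two.start_time >= '2013-02-24 00:00:00'
      AND group_bbb_two.start_time <  '2013-02-25 00:00:00';
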
[
{
"msg_contents": "Hello,\n\nI recently stumbled upon on what could be a planner bug or a corner case.\nIf \"<false condition> OR ...\" is added to WHERE clause of SELECT query,\nthen the planner chooses a very inefficient plan. Consider a query:\n\nSELECT count(k0.id)\nFROM k0\nWHERE 1 = 2\n OR k0.id IN (\n SELECT k1.k0_id\n FROM k1\n WHERE k1.k1k2_id IN (\n SELECT k2.k1k2_id\n FROM k2\n WHERE k2.t = 2\n AND (coalesce(k2.z, '')) LIKE '%12%'\n )\n );\n\nEXPLAIN (ANALYZE, BUFFERS) for this query:\nhttp://explain.depesz.com/s/tcn\nExecution time: 2037872.420 ms (almost 34 minutes!!)\n\nIf I comment out \"1=2 OR\", then the plan changes dramatically:\nhttp://explain.depesz.com/s/5rsW\nExecution time: 617.778 ms\n\n\nI know LEFT JOIN or EXISTS instead of NOT IN in this case will give better\nplans. What bothers me is not performance of this particular query, but the\nstrange behavior of query planner. Is this behavior considered normal, or\nshould I file a bug?\n\ndatabase schema: http://pgsql.privatepaste.com/b297e685c5\nversion: PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit\npackage: postgresql91-server.x86_64 9.1.9-1PGDG.rhel6\nos: Scientific Linux 6.3\npostgresql.conf: http://pgsql.privatepaste.com/e3e75bb789\n\n--\n\nHello,I recently stumbled upon on what could be a planner bug or a corner case. If \"<false condition> OR ...\" is added to WHERE clause of SELECT query, then the planner chooses a very inefficient plan. Consider a query:\nSELECT count(k0.id)FROM k0WHERE 1 = 2 OR k0.id IN ( SELECT k1.k0_id FROM k1 WHERE k1.k1k2_id IN ( SELECT k2.k1k2_id\n FROM k2 WHERE k2.t = 2 AND (coalesce(k2.z, '')) LIKE '%12%' ) );EXPLAIN (ANALYZE, BUFFERS) for this query:\nhttp://explain.depesz.com/s/tcn\nExecution time: 2037872.420 ms (almost 34 minutes!!)If I comment out \"1=2 OR\", then the plan changes dramatically:http://explain.depesz.com/s/5rsW\nExecution time: 617.778 msI know LEFT JOIN or EXISTS instead of NOT IN in this case will give better plans. What bothers me is not performance of this particular query, but the strange behavior of query planner. Is this behavior considered normal, or should I file a bug? \ndatabase schema: http://pgsql.privatepaste.com/b297e685c5\nversion: PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit\npackage: postgresql91-server.x86_64 9.1.9-1PGDG.rhel6os: Scientific Linux 6.3postgresql.conf: http://pgsql.privatepaste.com/e3e75bb789\n--",
"msg_date": "Thu, 18 Apr 2013 18:20:48 +0400",
"msg_from": "dmitry potapov <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"WHERE 1 = 2 OR ...\" makes planner choose a very inefficient plan"
},
{
"msg_contents": "On 18/04/13 15:20, dmitry potapov wrote:\n> Hello,\n>\n> I recently stumbled upon on what could be a planner bug or a corner\n> case. If \"<false condition> OR ...\" is added to WHERE clause of SELECT\n> query, then the planner chooses a very inefficient plan. Consider a query:\n\n> If I comment out \"1=2 OR\", then the plan changes dramatically:\n\nWhat happens if you substitute:\n1. 1=3 OR\n2. false OR\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Apr 2013 15:43:20 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"WHERE 1 = 2 OR ...\" makes planner choose a very inefficient plan"
},
{
"msg_contents": "dmitry potapov <[email protected]> writes:\n> I recently stumbled upon on what could be a planner bug or a corner case.\n> If \"<false condition> OR ...\" is added to WHERE clause of SELECT query,\n> then the planner chooses a very inefficient plan. Consider a query:\n\n> SELECT count(k0.id)\n> FROM k0\n> WHERE 1 = 2\n> OR k0.id IN (\n> SELECT k1.k0_id\n> FROM k1\n> WHERE k1.k1k2_id IN (\n> SELECT k2.k1k2_id\n> FROM k2\n> WHERE k2.t = 2\n> AND (coalesce(k2.z, '')) LIKE '%12%'\n> )\n> );\n\nPerhaps you should fix your application to not generate such incredibly\nsilly SQL. Figuring out that 1=2 is constant false and throwing it away\ncosts the server easily a thousand times as many instructions as it\nwould take for the client to not emit that in the first place.\n\nThe reason you don't get a nice semijoin plan when you do that is that\nconversion of IN clauses to semijoins happens before\nconstant-subexpression simplification. So the planner hasn't yet\nfigured out that the OR is useless when it would need to know that to\nproduce a good plan. (And no, we can't just flip the order of those two\nsteps. Doing two rounds of const-simplification wouldn't be a good\nanswer either, because it would penalize well-written queries to benefit\nbadly-written ones.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Apr 2013 10:46:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"WHERE 1 = 2 OR ...\" makes planner choose a very inefficient plan"
},
{
"msg_contents": "On 18 April 2013 15:46, Tom Lane <[email protected]> wrote:\n> dmitry potapov <[email protected]> writes:\n>> I recently stumbled upon on what could be a planner bug or a corner case.\n>> If \"<false condition> OR ...\" is added to WHERE clause of SELECT query,\n>> then the planner chooses a very inefficient plan. Consider a query:\n>\n>> SELECT count(k0.id)\n>> FROM k0\n>> WHERE 1 = 2\n>> OR k0.id IN (\n>> SELECT k1.k0_id\n>> FROM k1\n>> WHERE k1.k1k2_id IN (\n>> SELECT k2.k1k2_id\n>> FROM k2\n>> WHERE k2.t = 2\n>> AND (coalesce(k2.z, '')) LIKE '%12%'\n>> )\n>> );\n>\n> Perhaps you should fix your application to not generate such incredibly\n> silly SQL. Figuring out that 1=2 is constant false and throwing it away\n> costs the server easily a thousand times as many instructions as it\n> would take for the client to not emit that in the first place.\n>\n> The reason you don't get a nice semijoin plan when you do that is that\n> conversion of IN clauses to semijoins happens before\n> constant-subexpression simplification. So the planner hasn't yet\n> figured out that the OR is useless when it would need to know that to\n> produce a good plan. (And no, we can't just flip the order of those two\n> steps. Doing two rounds of const-simplification wouldn't be a good\n> answer either, because it would penalize well-written queries to benefit\n> badly-written ones.)\n\nThe situation shown could be the result of SQL injection attack.\n\nIt would be nice to have a switch to do additional checks on SQL\nqueries to ensure such injections don't cause long runtimes to return\nuseless answers.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 May 2013 13:48:34 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"WHERE 1 = 2 OR ...\" makes planner choose a very inefficient plan"
},
{
"msg_contents": "On Thu, May 2, 2013 at 9:48 AM, Simon Riggs <[email protected]> wrote:\n>>> SELECT count(k0.id)\n>>> FROM k0\n>>> WHERE 1 = 2\n>>> OR k0.id IN (\n>>> SELECT k1.k0_id\n>>> FROM k1\n>>> WHERE k1.k1k2_id IN (\n>>> SELECT k2.k1k2_id\n>>> FROM k2\n>>> WHERE k2.t = 2\n>>> AND (coalesce(k2.z, '')) LIKE '%12%'\n>>> )\n>>> );\n>>\n...\n>\n> The situation shown could be the result of SQL injection attack.\n>\n> It would be nice to have a switch to do additional checks on SQL\n> queries to ensure such injections don't cause long runtimes to return\n> useless answers.\n\nHow could that be the case without becoming much much worse than large runtimes?\n\nI don't think it's the place of the database to worry about SQL injection.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 May 2013 12:00:28 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"WHERE 1 = 2 OR ...\" makes planner choose a very inefficient plan"
}
] |
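
A hypothetical rewrite along the lines the original poster mentions (EXISTS instead of IN; not part of the thread, table and column names follow the posted query). Dropping the constant-false arm and expressing the lookup as EXISTS gives the planner a semijoin to work with:

    SELECT count(k0.id)
    FROM k0
    WHERE EXISTS (
        SELECT 1
        FROM k1
        JOIN k2 ON k2.k1k2_id = k1.k1k2_id
        WHERE k1.k0_id = k0.id
          AND k2.t = 2
          AND coalesce(k2.z, '') LIKE '%12%'
    );
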
[
{
"msg_contents": "Hi guys.\nI'm tuning my postgres server and I faced these 2 parameters.\nToday they have default values.\nIs default enough?\nHow should I tune these values?\n\nHi guys.I'm tuning my postgres server and I faced these 2 parameters.Today they have default values.Is default enough?How should I tune these values?",
"msg_date": "Fri, 19 Apr 2013 18:55:18 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "maintenance_work_mem and autovacuum_max_workers"
}
] |
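
A hypothetical first step for the question above (not from the thread): confirm what the server is currently using. Both settings live in postgresql.conf, autovacuum_max_workers only changes with a server restart, and maintenance_work_mem can also be raised per session for a manual VACUUM or CREATE INDEX:

    SHOW maintenance_work_mem;     -- 16MB by default on releases of that era
    SHOW autovacuum_max_workers;   -- 3 by default

    -- Per-session override for a one-off maintenance command (value and
    -- table name are illustrative):
    SET maintenance_work_mem = '256MB';
    VACUUM ANALYZE some_large_table;
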
[
{
"msg_contents": "Hi,\n\nMy databases are updated regularly so I am vacuuming frequently (every one\nhour). Recently i also added template1 database to avoid over wrapping\nproblem. But somehow i am seeing strange behavior.\n\nMost of the time all db vacuuming finish in 30 secs.\n\nbut once in a day or two\n- My actual DB is taking less than 30 secs for vacuuming.\n- Sometime template1 is taking 5 mins for vacuuming.\n- Queries become exceptionally slow at that time for 5 mins ( specially\nduring the end).\n\nI am wondering what could be the reason of long time of template1 vacumming\nsometime and slow query at end of vacumming.\n\nDo we need to template1 analyze regularly? What is ideal frequency of\ntemplate1 vacuuming only and analyze?\n\nMy DB version is little old - 8.1.18.\n\nHi, My databases are updated regularly so I am vacuuming frequently (every one hour). Recently i also added template1 database to avoid over wrapping problem. But somehow i am seeing strange behavior. \nMost of the time all db vacuuming finish in 30 secs. but once in a day or two - My actual DB is taking less than 30 secs for vacuuming. - Sometime template1 is taking 5 mins for vacuuming. \n- Queries become exceptionally slow at that time for 5 mins ( specially during the end). I am wondering what could be the reason of long time of template1 vacumming sometime and slow query at end of vacumming. \nDo we need to template1 analyze regularly? What is ideal frequency of template1 vacuuming only and analyze?My DB version is little old - 8.1.18.",
"msg_date": "Sun, 21 Apr 2013 21:46:50 +0900",
"msg_from": "sunil virmani <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "On Sun, Apr 21, 2013 at 5:46 AM, sunil virmani <[email protected]> wrote:\n\n> My DB version is little old - 8.1.18.\n>\n\nYour db is exceptionally old and very much unsupported. Vacuum has\nmassively improved since 8.1 .\n\nSee http://www.postgresql.org/support/versioning/ regarding supported\nversions.\n\nOn Sun, Apr 21, 2013 at 5:46 AM, sunil virmani <[email protected]> wrote:\nMy DB version is little old - 8.1.18. \n\nYour db is exceptionally old and very much unsupported. Vacuum has massively improved since 8.1 .\n\nSee http://www.postgresql.org/support/versioning/ regarding supported versions.",
"msg_date": "Sun, 21 Apr 2013 10:00:14 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Sun, Apr 21, 2013 at 6:46 AM, sunil virmani <[email protected]> wrote:\n> Hi,\n>\n> My databases are updated regularly so I am vacuuming frequently (every one\n> hour). Recently i also added template1 database to avoid over wrapping\n> problem. But somehow i am seeing strange behavior.\n>\n> Most of the time all db vacuuming finish in 30 secs.\n>\n> but once in a day or two\n> - My actual DB is taking less than 30 secs for vacuuming.\n> - Sometime template1 is taking 5 mins for vacuuming.\n> - Queries become exceptionally slow at that time for 5 mins ( specially\n> during the end).\n>\n> I am wondering what could be the reason of long time of template1 vacumming\n> sometime and slow query at end of vacumming.\n>\n> Do we need to template1 analyze regularly? What is ideal frequency of\n> template1 vacuuming only and analyze?\n>\n> My DB version is little old - 8.1.18.\n\nWell upgrade as soon as possible. 9.1 is pretty darn stable.\n\nThere are two possible things that cause this kind of slowdown. One is\na checkpoint. This is where postgresql writes out its own dirty\nbuffers, and the other is a back OS level write flush. Both of these\nwill cause your system to slow to a crawl. The fix for checkpointing\nis to adjust your postgresql.conf file's completion target and other\nsettings, many of which, like completion target, do not exist in 8.1.\nIncreasing checkpoint segments and checkpoint timeouts may help here.\n\nDepending on your OS you may or may not be able to reduce the two\ndirty*ratio settings, vm.dirty_background_ratio and vm.dirty_ratio. On\nmany servers reducing these to 0 or something under 5 is a good first\nstep. In almost no circumstance is a high setting good for large\nmemory, database, or file server machines.\n\nAnother possibility is that your kswap daemon is going nuts and\nswapping for no reason. Turning off swap can stop it. You'll see lots\nof so/si in iostat when that's happening, but no real reason for it.\n(i.e. no memory pressure, plenty free memory etc)\n\nI'm gonna just assume since you're running an old postgres you're\nprobably not on more modern numa hardware and don't have an issue with\nzone_reclaim_mode = 1 that I've seen before.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 21 Apr 2013 11:28:18 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "I was gonna tell you to turn off full page writes, but in 8.1 that\nsetting is ignored. For reference, here's the pages that you should\nlook at for this:\n\nhttp://www.postgresql.org/docs/8.1/static/runtime-config-wal.html\nhttp://www.postgresql.org/docs/8.1/static/wal-configuration.html\n\nFor 9.1, the same pages:\n\nhttp://www.postgresql.org/docs/9.1/static/runtime-config-wal.html\nhttp://www.postgresql.org/docs/9.1/static/wal-configuration.html\n\nFor future reference, if you want to learn more about performance\ntuning postgresql, Performance PostgreSQL by Greg Smith is THE book to\nhave.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 21 Apr 2013 11:32:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "W dniu 2013-04-21 19:28, Scott Marlowe pisze:\n>> My DB version is little old - 8.1.18.\n> Well upgrade as soon as possible. 9.1 is pretty darn stable.\n\nScott,\n\nexcuse me this somewhat off-topic question.\n\nGood to hear that 9.1 is so stable, because this is what I currently use in production. But why I \nstill use it, is only because I failed to manage my task to migrate to 9.2, so far. Anyway, this \ntask in on my long-term agenda.\n\nSo, let me understand why, as you recommend the OP upgrading, you only mention the 9.1, while there \nhave already been a few releases of 9.2. Isn't 9.2 stable enough?\n\n\nBest regards\nIrek.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 10:26:08 +0200",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: - why only 9.1?"
}
] |
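
A hypothetical check related to the wraparound concern that prompted the hourly template1 vacuum (not from the thread): age(datfrozenxid) shows how far each database, template1 included, is from transaction-ID wraparound, and should work on 8.1 as well as on current releases:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;
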
[
{
"msg_contents": "Folks,\n\nI've heard a rumor that the most recent update of OSX \"mountain lion\"\nlowers the installed SHMMAX to 4MB, which prevents PostgreSQL from\ninstalling. Can anyone verify this?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 21 Apr 2013 14:29:57 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issues with OSX and SHMMAX?"
},
{
"msg_contents": "\nOn Apr 22, 2013, at 1:29 AM, Josh Berkus <[email protected]> wrote:\n\n> Folks,\n> \n> I've heard a rumor that the most recent update of OSX \"mountain lion\"\n> lowers the installed SHMMAX to 4MB, which prevents PostgreSQL from\n> installing. Can anyone verify this?\n> \n\nkern.sysv.shmmax: 4194304\n\nmac os x 10.8.3\n> -- \n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 01:33:09 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issues with OSX and SHMMAX?"
},
{
"msg_contents": "On 04/21/2013 02:33 PM, Evgeny Shishkin wrote:\n> \n> On Apr 22, 2013, at 1:29 AM, Josh Berkus <[email protected]> wrote:\n> \n>> Folks,\n>>\n>> I've heard a rumor that the most recent update of OSX \"mountain lion\"\n>> lowers the installed SHMMAX to 4MB, which prevents PostgreSQL from\n>> installing. Can anyone verify this?\n>>\n> \n> kern.sysv.shmmax: 4194304\n\nThat would be 4MB. Can anyone else verify this?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 11:19:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issues with OSX and SHMMAX?"
},
{
"msg_contents": "\nOn Apr 22, 2013, at 11:19 AM, Josh Berkus <[email protected]> wrote:\n\n> On 04/21/2013 02:33 PM, Evgeny Shishkin wrote:\n>> \n>> On Apr 22, 2013, at 1:29 AM, Josh Berkus <[email protected]> wrote:\n>> \n>>> Folks,\n>>> \n>>> I've heard a rumor that the most recent update of OSX \"mountain lion\"\n>>> lowers the installed SHMMAX to 4MB, which prevents PostgreSQL from\n>>> installing. Can anyone verify this?\n>>> \n>> \n>> kern.sysv.shmmax: 4194304\n> \n> That would be 4MB. Can anyone else verify this?\n\nIt's the default setting on my 10.8.3 box. (I'm setting it higher in /etc/sysctl.conf\nand have no problems, but it stopped postgres.app starting when I removed\nthat).\n\nCheers,\n Steve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 15:30:26 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issues with OSX and SHMMAX?"
},
{
"msg_contents": "Steve Atkins <[email protected]> writes:\n> On Apr 22, 2013, at 11:19 AM, Josh Berkus <[email protected]> wrote:\n>> On 04/21/2013 02:33 PM, Evgeny Shishkin wrote:\n>>> On Apr 22, 2013, at 1:29 AM, Josh Berkus <[email protected]> wrote:\n>>>> I've heard a rumor that the most recent update of OSX \"mountain lion\"\n>>>> lowers the installed SHMMAX to 4MB, which prevents PostgreSQL from\n>>>> installing. Can anyone verify this?\n\n>>> kern.sysv.shmmax: 4194304\n\n>> That would be 4MB. Can anyone else verify this?\n\n> It's the default setting on my 10.8.3 box. (I'm setting it higher in\n> /etc/sysctl.conf and have no problems, but it stopped postgres.app\n> starting when I removed that).\n\nAFAIR, the default setting has been that or lower in every previous\nversion of OSX, so it seems unlikely that mountain lion per se broke\nanything. It might've appeared that way if you did a reinstall and\nforgot to copy your old /etc/sysctl.conf.\n\nA different theory, if things used to work and now don't, is that\nsomewhere recently we crossed a threshold in memory usage between\n\"will start at 4MB\" and \"won't start at 4MB\". If so, 9.3 ought to\nmake things better (since we're mostly getting out from under SysV\nshared memory limits), but in existing releases manually fixing\nthe limit will be the only recourse.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 20:39:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issues with OSX and SHMMAX?"
}
] |
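
A hypothetical /etc/sysctl.conf sketch of the fix Steve describes (the values are illustrative, not from the thread); kern.sysv.shmall is counted in 4 kB pages, so the pair below allows roughly 1.5 GB of SysV shared memory:

    # illustrative values, not a recommendation from the thread
    kern.sysv.shmmax=1610612736
    kern.sysv.shmall=393216
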
[
{
"msg_contents": "Hi,\n\n\nI am using postgresql 8.1 DB.\nI have a large DB and i run the vacuum every hour with \"vocuumdb\n--analyze -a\" option.\nSome times template1 consumes much time in comparison to my own DB.I\nthink at these times the load on OS is higher. But from iostat logs i\ncan see that the load on the partition where DB resides is not too\nmuch.\nI would like to know why such thing happens?\nWhat are the processing that is carried out with the template1 vacuuming.\nWhen the entries in template1 is updated and inserted?\nWhat should be the frequency of template1 vacuuming?\nIs template1 is updated as frequent as my own DB is updated?\n\n\n\nThanks,\nPradeep\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 20:31:13 +0900",
"msg_from": "pradeep singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "template1 vacuuming consuming much time compared to another\n\tproduction DBs"
},
{
"msg_contents": "\nOn 04/22/2013 07:31 AM, pradeep singh wrote:\n> Hi,\n>\n>\n> I am using postgresql 8.1 DB.\n\nWhy are you using a release of Postgres that is way out of date and \nunsupported?\n\ncheers\n\nandrew\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 08:08:19 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: template1 vacuuming consuming much time compared to\n\tanother production DBs"
},
{
"msg_contents": "I am using this DB since last 3 or 4 years. And suddenly we can't update it.\nWe are planning it at the end of year. Recently we faced the template1\nwraparound issue. So we added template1 also for vacuuming. We are\nvacuuming the DB each hour with 'vacuumdb --analyze -a' options.\nNow the vacuuming of template1 consumes time. And the queries become slow.\n From the logs i find that during this we period there are lot of backend\nprocesses in startup state. so i think connection open is slow. They are\nwaiting on WCHAN log_wait_.\nSo could you please recommend what may be the problem. FYI there is much\nload on OS this time.\nAnd why vacuuming of template1 only consumes time not other DBs?\n\n\nOn Mon, Apr 22, 2013 at 9:08 PM, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 04/22/2013 07:31 AM, pradeep singh wrote:\n>\n>> Hi,\n>>\n>>\n>> I am using postgresql 8.1 DB.\n>>\n>\n> Why are you using a release of Postgres that is way out of date and\n> unsupported?\n>\n> cheers\n>\n> andrew\n>\n>\n>\n>\n\n\n-- \npradeep singh\nbiet jhansi\n\nI am using this DB since last 3 or 4 years. And suddenly we can't update it.We are planning it at the end of year. Recently we faced the template1 wraparound issue. So we added template1 also for vacuuming. We are vacuuming the DB each hour with 'vacuumdb --analyze -a' options.\nNow the vacuuming of template1 consumes time. And the queries become slow. From the logs i find that during this we period there are lot of backend processes in startup state. so i think connection open is slow. They are waiting on WCHAN log_wait_.\nSo could you please recommend what may be the problem. FYI there is much load on OS this time.And why vacuuming of template1 only consumes time not other DBs?\nOn Mon, Apr 22, 2013 at 9:08 PM, Andrew Dunstan <[email protected]> wrote:\n\nOn 04/22/2013 07:31 AM, pradeep singh wrote:\n\nHi,\n\n\nI am using postgresql 8.1 DB.\n\n\nWhy are you using a release of Postgres that is way out of date and unsupported?\n\ncheers\n\nandrew\n\n\n\n-- pradeep singh biet jhansi",
"msg_date": "Tue, 23 Apr 2013 10:49:40 +0900",
"msg_from": "pradeep singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: template1 vacuuming consuming much time compared to\n\tanother production DBs"
}
] |
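
A hypothetical follow-up to the question above (not from the thread): connect to template1 itself and look at which system catalogs the hourly vacuum is actually spending its time on; pg_class carries the relpages and reltuples estimates on 8.1 as well:

    -- run while connected to template1
    SELECT relname, relpages, reltuples
    FROM pg_class
    ORDER BY relpages DESC
    LIMIT 10;
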
[
{
"msg_contents": "Hi,\nWe are seeing some overall performance degradation in our application since we installed the security release. Other commits were also done at the same time in the application so we don't know yet if the degradation has any relationship with the security release.\nWhile we are digging into this, I would like to know if it is possible that the release has some impact on performance. After reading this \"It was created as a side effect of a refactoring effort to make establishing new connections to a PostgreSQL server faster, and the associated code more maintainable.\", I am thinking it is quite possible.\n\nPlease let me know. Thanks,\nAnne\n\n\n\n\n\n\n\n\n\nHi,\nWe are seeing some overall performance degradation in our application since we installed the security release. Other commits were also done at the same time in the application so we don’t know yet if the degradation has any relationship\n with the security release.\nWhile we are digging into this, I would like to know if it is possible that the release has some impact on performance. After reading this “It was created as a side effect of a refactoring effort to make establishing new connections to\n a PostgreSQL server faster, and the associated code more maintainable.”, I am thinking it is quite possible.\n \nPlease let me know. Thanks,\nAnne",
"msg_date": "Mon, 22 Apr 2013 16:48:55 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance with the new security release"
},
{
"msg_contents": "On 22.04.2013 19:48, Anne Rosset wrote:\n> Hi,\n> We are seeing some overall performance degradation in our application since we installed the security release. Other commits were also done at the same time in the application so we don't know yet if the degradation has any relationship with the security release.\n> While we are digging into this, I would like to know if it is possible that the release has some impact on performance. After reading this \"It was created as a side effect of a refactoring effort to make establishing new connections to a PostgreSQL server faster, and the associated code more maintainable.\", I am thinking it is quite possible.\n\nI doubt that particular commit, the one that fixed the security issue, \ncould cause any meaningful slowdown. But it's not impossible that some \nother fix included in the release would cause a regression, although we \ntry to be careful to avoid that. If you narrow the culprit down to the \nnew PostgreSQL version, we're going to need more details to find the \nroot cause.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Apr 2013 20:41:41 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with the new security release"
}
] |
[
{
"msg_contents": "Hi all,\n\n\n\n I have a general question about network traffic between\nPostgreSQL’s client and server:\nwhat determines the network bandwidth usage or data transferring rate\nbetween a client and a server\nwhen network bandwidth is enough?\n\n\n\nFor example, I ran queries on two tables, lineitem and partsupp in\nTPCH benchmark\n(with scaling factor 5). Lineitem table is 4630 MB and has 30000000 rows.\nPartsupp table is 693 MB and has 4000000 rows. Their definitions are\nshown below:\n\n\n\n Table \"public.lineitem\"\n\n Column | Type | Modifiers\n\n-----------------+-----------------------+-----------\n\n l_orderkey | integer | not null\n\n l_partkey | integer | not null\n\n l_suppkey | integer | not null\n\n l_linenumber | integer | not null\n\n l_quantity | numeric(15,2) | not null\n\n l_extendedprice | numeric(15,2) | not null\n\n l_discount | numeric(15,2) | not null\n\n l_tax | numeric(15,2) | not null\n\n l_returnflag | character(1) | not null\n\n l_linestatus | character(1) | not null\n\n l_shipdate | date | not null\n\n l_commitdate | date | not null\n\n l_receiptdate | date | not null\n\n l_shipinstruct | character(25) | not null\n\n l_shipmode | character(10) | not null\n\n l_comment | character varying(44) | not null\n\n\n\n\n\n Table \"public.partsupp\"\n\n Column | Type | Modifiers\n\n---------------+------------------------+-----------\n\n ps_partkey | integer | not null\n\n ps_suppkey | integer | not null\n\n ps_availqty | integer | not null\n\n ps_supplycost | numeric(15,2) | not null\n\n ps_comment | character varying(199) | not null\n\n\n\nFor different queries, I observe different data transferring rate\nbetween a client and a server (client and server are on different\nphysical machines)\nusing tcpdump as shown below:\n\n\n\nQuery 1: select * from lineitem;\n\n\n\nSeq Scan on lineitem (cost=0.00..892562.86 rows=29998686 width=125)\n\n\n\nAverage network usage: *42.5MB/s*\n\n\n\n\n\nQuery 2: select * from partsupp;\n\n\n\nSeq Scan on partsupp (cost=0.00..128685.81 rows=4001181 width=146)\n\n\n\nAverage network usage: *95.9MB/s*\n\n\n\n\n\nQuery 3: select * from lineitem, partsupp where l_partkey=ps_partkey;\n\n\n\nHash Join (cost=178700.57..17307550.15 rows=116194700 width=271)\n\n Hash Cond: (lineitem.l_partkey = partsupp.ps_partkey)\n\n -> Seq Scan on lineitem (cost=0.00..892562.86 rows=29998686 width=125)\n\n -> Hash (cost=128685.81..128685.81 rows=4001181 width=146)\n\n -> Seq Scan on partsupp (cost=0.00..128685.81 rows=4001181 width=146)\n\n\n\nAverage network usage: *53.1MB/s*\n\n\n\nIn all the experiments, the lineitem and partsupp tables reside in memory\nbecause there is no io activities observed from iotop.\nSince there is enough network bandwidth (1Gb/s or 128MB/s) between\nclient and server,\nI would like to know what determines the data transferring rate or the\nnetwork bandwidth usage\nbetween a client and a server when network bandwidth is enough.\nFor example, given that the size of each tuple of lineitem table is\n88% of that of partsupp,\nwhy the average network usage for sequential scan of lineitem table is only 50%\nthat of partsupp table? 
And why the average network usage of their\njoin is higher\nthan that of sequential scan of lineitem but lower than that of\nsequential scan of partsupp table?\n\nThanks!\n\nKelphet Xiong\n\nHi all, I have a general question about network traffic between PostgreSQL’s client and server: what determines the network bandwidth usage or data transferring rate between a client and a server \nwhen network bandwidth is enough? For example, I ran queries on two tables, lineitem and partsupp in TPCH benchmark (with scaling factor 5). Lineitem table is 4630 MB and has 30000000 rows. Partsupp table is 693 MB and has 4000000 rows. Their definitions are shown below:\n Table \"public.lineitem\" Column | Type | Modifiers\n-----------------+-----------------------+----------- l_orderkey | integer | not null l_partkey | integer | not null\n l_suppkey | integer | not null l_linenumber | integer | not null l_quantity | numeric(15,2) | not null\n l_extendedprice | numeric(15,2) | not null l_discount | numeric(15,2) | not null l_tax | numeric(15,2) | not null\n l_returnflag | character(1) | not null l_linestatus | character(1) | not null l_shipdate | date | not null\n l_commitdate | date | not null l_receiptdate | date | not null l_shipinstruct | character(25) | not null\n l_shipmode | character(10) | not null l_comment | character varying(44) | not null Table \"public.partsupp\"\n Column | Type | Modifiers---------------+------------------------+----------- ps_partkey | integer | not null\n ps_suppkey | integer | not null ps_availqty | integer | not null ps_supplycost | numeric(15,2) | not null\n ps_comment | character varying(199) | not null For different queries, I observe different data transferring rate between a client and a server (client and server are on different physical machines)\nusing tcpdump as shown below: Query 1: select * from lineitem; Seq Scan on lineitem (cost=0.00..892562.86 rows=29998686 width=125) Average network usage: 42.5MB/s\n Query 2: select * from partsupp; Seq Scan on partsupp (cost=0.00..128685.81 rows=4001181 width=146) Average network usage: 95.9MB/s\n Query 3: select * from lineitem, partsupp where l_partkey=ps_partkey; Hash Join (cost=178700.57..17307550.15 rows=116194700 width=271) Hash Cond: (lineitem.l_partkey = partsupp.ps_partkey)\n -> Seq Scan on lineitem (cost=0.00..892562.86 rows=29998686 width=125) -> Hash (cost=128685.81..128685.81 rows=4001181 width=146)\n -> Seq Scan on partsupp (cost=0.00..128685.81 rows=4001181 width=146) Average network usage: 53.1MB/s\n In all the experiments, the lineitem and partsupp tables reside in memory because there is no io activities observed from iotop. Since there is enough network bandwidth (1Gb/s or 128MB/s) between client and server, \nI would like to know what determines the data transferring rate or the network bandwidth usage between a client and a server when network bandwidth is enough. For example, given that the size of each tuple of lineitem table is 88% of that of partsupp,\nwhy the average network usage for sequential scan of lineitem table is only 50% that of partsupp table? And why the average network usage of their join is higher than that of sequential scan of lineitem but lower than that of sequential scan of partsupp table?\nThanks!Kelphet Xiong",
"msg_date": "Wed, 24 Apr 2013 16:56:24 -0700",
"msg_from": "Kelphet Xiong <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?windows-1252?Q?Question_about_network_bandwidth_usage_between_Postg?=\n\t=?windows-1252?Q?reSQL=92s_client_and_server?="
},
{
"msg_contents": "On 25.04.2013 02:56, Kelphet Xiong wrote:\n> In all the experiments, the lineitem and partsupp tables reside in memory\n> because there is no io activities observed from iotop.\n> Since there is enough network bandwidth (1Gb/s or 128MB/s) between\n> client and server,\n> I would like to know what determines the data transferring rate or the\n> network bandwidth usage\n> between a client and a server when network bandwidth is enough.\n\nSince there's enough network bandwidth available, the bottleneck is \nelsewhere. I don't know what it is in your example - maybe it's the I/O \ncapacity, or CPU required to process the result in the server before \nit's sent over the network. It could also be in the client, on how fast \nit can process the results coming from the server.\n\nI'd suggest running 'top' on the server while the query is executed, and \nkeeping an eye on the CPU usage. If it's pegged at 100%, the bottleneck \nis the server's CPU.\n\n> For example, given that the size of each tuple of lineitem table is\n> 88% of that of partsupp,\n> why the average network usage for sequential scan of lineitem table is only 50%\n> that of partsupp table? And why the average network usage of their\n> join is higher\n> than that of sequential scan of lineitem but lower than that of\n> sequential scan of partsupp table?\n\nHere's a wild guess: the query on lineitem is bottlenecked by CPU usage \nin the server. A lot of CPU could be spent on converting the date fields \nfrom on-disk format to the text representation that's sent over the \nnetwork; I've seen that conversion use up a lot of CPU time on some test \nworkloads. Try leaving out the date columns from the query to test that \ntheory.\n\nIf that's the bottleneck, you could try fetching the result in binary \nformat, that should consume less CPU in the server. You didn't mention \nwhat client library you're using, but e.g with libpq, see the manual on \nPQexecParams on how to set the result format.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Apr 2013 14:26:00 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?windows-1252?Q?Re=3A_=5BPERFORM=5D_Question_about_netw?=\n\t=?windows-1252?Q?ork_bandwidth_usage_between_PostgreSQL=92s_cl?=\n\t=?windows-1252?Q?ient_and_server?="
}
] |
[
{
"msg_contents": "Hi,\n\nWe again have problems with query planer... (Ubuntu, pg 9.1)\n\nUp to now - solution was \"rephrase the question\", but for next thing we are\nnot sure what would be best solution...\n\nthe whole story is too complex... but simplified:\n\nWe have tables:\n\nthings (thing_id int pk... other columns...)\nactivities (activity_id int pk, date, thing_id.... other columns...)\n\nSo, for each day we track main activities about things...\n\nNow... each activity... could have 0 or more additional info about\nactivity... if that happened at all that day...\n\nSo we have:\nadditional_activities (id serial pk, activity_id int fk,... other\ncolumns...)\n\n\nNow, what creates problems...\n\nWe need a view\nwhat shows all info about things and activities...\n\nbut just 1 row per activity...\n\nso:\n\ndate, thing columns, activity columns... and now last 7 columns are from\nadditional_activities table... what can have 0 or more rows related to the\nactivity - but we need just one...\nif it has more than 1 row - we should show:\n-actual values from the first row (related to the activity) + last two\ncolumns: sum value and total number of additinal info relateed to the\nactivity...\n\nSo we have make a view:\n\n\nWITH main_id AS (\n SELECT min(id) AS id, sum(value) AS total_value, count(1) AS\ntotal_additional_info\n FROM additional_activities\n GROUP BY activity_id\n )\n SELECT *\n FROM main_id\n JOIN additional_activities USING (id);\n\n\nWhat actually returns first row values about thing + summarized values...\n\nthen left join to that view - and we get result what we want...\n\nwith my_view:\n\nSELECT *\nFROM things\nJOIN activities USING (thing_id)\nLEFT JOIN additional_activities_view USING (thing_id)\n\n\nUsual query on that view is:\n\nSELECT * FROM my_view WHERE thing_id = $1 AND date BETWEEN $2 AND $3\n\nAnd now comes problems:\nQuery1:\nSELECT * FROM my_view WHERE thing_id = 321 AND date BETWEEN '20130301' AND\n'20130331'\n\ntakes more then 20s and uses very bad plan:\nhttp://explain.depesz.com/s/CLh\n\nbut Query2:\nSELECT * FROM my_view WHERE thing_id = 321 AND date BETWEEN '20130201' AND\n'20130331\n\nWhat returns even more rows then query1, (Changed just from date 1st Feb\ninstead of 1st March)\n\ntakes less then 2 seconds!?\nhttp://explain.depesz.com/s/9QP\n\n\nAny suggestions?\n\nMany Thanks,\n\nMisa\n\nHi,We again have problems with query planer... (Ubuntu, pg 9.1)Up to now - solution was \"rephrase the question\", but for next thing we are not sure what would be best solution...\nthe whole story is too complex... but simplified:We have tables:things (thing_id int pk... other columns...)\nactivities (activity_id int pk, date, thing_id.... other columns...)So, for each day we track main activities about things...Now... each activity... could have 0 or more additional info about activity... if that happened at all that day...\nSo we have:additional_activities (id serial pk, activity_id int fk,... other columns...)Now, what creates problems...\nWe need a viewwhat shows all info about things and activities...but just 1 row per activity...so:\ndate, thing columns, activity columns... and now last 7 columns are from additional_activities table... 
what can have 0 or more rows related to the activity - but we need just one...\nif it has more than 1 row - we should show:-actual values from the first row (related to the activity) + last two columns: sum value and total number of additinal info relateed to the activity...\nSo we have make a view:WITH main_id AS ( SELECT min(id) AS id, sum(value) AS total_value, count(1) AS total_additional_info\n FROM additional_activities GROUP BY activity_id ) SELECT * FROM main_id JOIN additional_activities USING (id);\nWhat actually returns first row values about thing + summarized values...then left join to that view - and we get result what we want...\nwith my_view:SELECT * FROM things JOIN activities USING (thing_id) LEFT JOIN additional_activities_view USING (thing_id)\nUsual query on that view is:SELECT * FROM my_view WHERE thing_id = $1 AND date BETWEEN $2 AND $3\nAnd now comes problems:Query1:SELECT * FROM my_view WHERE thing_id = 321 AND date BETWEEN '20130301' AND '20130331'takes more then 20s and uses very bad plan:\nhttp://explain.depesz.com/s/CLhbut Query2:SELECT * FROM my_view WHERE thing_id = 321 AND date BETWEEN '20130201' AND '20130331\nWhat returns even more rows then query1, (Changed just from date 1st Feb instead of 1st March)takes less then 2 seconds!?http://explain.depesz.com/s/9QP\nAny suggestions?Many Thanks,Misa",
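Not something proposed in the message itself, but as an illustration of the "first row per activity plus totals" requirement described above, the same result can be written in a single pass with DISTINCT ON and window aggregates (both available on 9.1). Whether this changes the plan behaviour reported here is not established; it is only a sketch of the requirement:

SELECT DISTINCT ON (activity_id)
       activity_id,
       id,                                                  -- the first (smallest id) row per activity
       sum(value) OVER (PARTITION BY activity_id) AS total_value,
       count(*)   OVER (PARTITION BY activity_id) AS total_additional_info
FROM additional_activities
ORDER BY activity_id, id;

Any other columns of that first row can simply be added to the select list.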
"msg_date": "Thu, 25 Apr 2013 14:18:27 +0200",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "different plans for the same query - different filter values"
}
] |
[
{
"msg_contents": "Recently we encountered the following unhappy sequence of events:\n\n1/ system running happily\n2/ batch load into table begins\n3/ very quickly (some) preexisting queries on said table go orders of \nmagnitude slower\n4/ database instance becomes unresponsive\n5/ application outage\n\nAfter looking down a few false leads, We've isolated the cause to the \nfollowing:\n\nThe accumulating in-progress row changes are such that previously \noptimal plans are optimal no longer. Now this situation will fix itself \nwhen the next autoanalyze happens (and new plan will be chosen) - \nhowever that cannot occur until the batch load is completed and \ncommitted (approx 70 seconds). However during that time there is enough \nof a performance degradation for queries still using the old plan to \ncripple the server.\n\nNow that we know what is happening we can work around it. But I'm \nwondering - is there any way (or if not should there be one) to let \npostgres handle this automatically? I experimented with a quick hack to \nsrc/backend/commands/analyze.c (attached) that lets another session's \nANALYZE see in progress rows - which works but a) may cause other \nproblems and b) does not help autoaanalyze which has to wait for COMMIT \n+ stats message.\n\nI've attached a (synthetic) test case that shows the issue, I'll \nreproduce the output below to hopefully make the point obvious:\n\n\n Table \"public.plan\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n id | integer | not null\n typ | integer | not null\n dat | timestamp without time zone |\n val | text | not null\nIndexes:\n \"plan_id\" UNIQUE, btree (id)\n \"plan_dat\" btree (dat)\n \"plan_typ\" btree (typ)\n\n\n[Session 1]\nEXPLAIN ANALYZE\nSELECT * FROM plan\nWHERE typ = 3 AND dat IS NOT NULL;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Index Scan using plan_dat on plan (cost=0.00..265.47 rows=55 \nwidth=117) (actual time=0.130..4.409 rows=75 loops=1)\n Index Cond: (dat IS NOT NULL)\n Filter: (typ = 3)\n Rows Removed by Filter: 5960\n Total runtime: 4.440 ms\n(5 rows)\n\n[Session 2]\n\nBEGIN;\nINSERT INTO plan\nSELECT id + 2000001,typ,current_date + id * '1 seconds'::interval ,val\nFROM plan\n;\n\n[Session 1]\nEXPLAIN ANALYZE\nSELECT * FROM plan\nWHERE typ = 3 AND dat IS NOT NULL;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Index Scan using plan_dat on plan (cost=0.00..551.35 rows=91 \nwidth=117) (actual time=0.131..202.699 rows=75 loops=1)\n Index Cond: (dat IS NOT NULL)\n Filter: (typ = 3)\n Rows Removed by Filter: 5960\n Total runtime: 202.729 ms\n(5 rows)\n[Session 2]\nCOMMIT;\n\n[Session 1...wait for autoanalyze to finish then]\n\nEXPLAIN ANALYZE\nSELECT * FROM plan\nWHERE typ = 3 AND dat IS NOT NULL;\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on plan (cost=407.87..44991.95 rows=10116 width=117) \n(actual time=2.692..6.582 rows=75 loops=1)\n Recheck Cond: (typ = 3)\n Filter: (dat IS NOT NULL)\n Rows Removed by Filter: 19925\n -> Bitmap Index Scan on plan_typ (cost=0.00..405.34 rows=20346 \nwidth=0) (actual time=2.573..2.573 rows=20000 loops=1)\n Index Cond: (typ = 3)\n Total runtime: 6.615 ms\n\n\nRegards\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make 
changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
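For the "wait for autoanalyze to finish" step in the test script above, one way to see whether it has fired yet is to watch the statistics view for the table; a small sketch:

SELECT relname, last_analyze, last_autoanalyze, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'plan';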
"msg_date": "Fri, 26 Apr 2013 14:33:31 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 26/04/13 14:56, Gavin Flower wrote:\n> On 26/04/13 14:33, Mark Kirkwood wrote:\n>> Recently we encountered the following unhappy sequence of events:\n>>\n>> 1/ system running happily\n>> 2/ batch load into table begins\n>> 3/ very quickly (some) preexisting queries on said table go orders of \n>> magnitude slower\n>> 4/ database instance becomes unresponsive\n>> 5/ application outage\n>>\n>> After looking down a few false leads, We've isolated the cause to the \n>> following:\n>>\n>> The accumulating in-progress row changes are such that previously \n>> optimal plans are optimal no longer. Now this situation will fix \n>> itself when the next autoanalyze happens (and new plan will be \n>> chosen) - however that cannot occur until the batch load is completed \n>> and committed (approx 70 seconds). However during that time there is \n>> enough of a performance degradation for queries still using the old \n>> plan to cripple the server.\n>>\n>> Now that we know what is happening we can work around it. But I'm \n>> wondering - is there any way (or if not should there be one) to let \n>> postgres handle this automatically? I experimented with a quick hack \n>> to src/backend/commands/analyze.c (attached) that lets another \n>> session's ANALYZE see in progress rows - which works but a) may cause \n>> other problems and b) does not help autoaanalyze which has to wait \n>> for COMMIT + stats message.\n>>\n>> I've attached a (synthetic) test case that shows the issue, I'll \n>> reproduce the output below to hopefully make the point obvious:\n>>\n>>\n>> Table \"public.plan\"\n>> Column | Type | Modifiers\n>> --------+-----------------------------+-----------\n>> id | integer | not null\n>> typ | integer | not null\n>> dat | timestamp without time zone |\n>> val | text | not null\n>> Indexes:\n>> \"plan_id\" UNIQUE, btree (id)\n>> \"plan_dat\" btree (dat)\n>> \"plan_typ\" btree (typ)\n>>\n>>\n>> [Session 1]\n>> EXPLAIN ANALYZE\n>> SELECT * FROM plan\n>> WHERE typ = 3 AND dat IS NOT NULL;\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------- \n>>\n>> Index Scan using plan_dat on plan (cost=0.00..265.47 rows=55 \n>> width=117) (actual time=0.130..4.409 rows=75 loops=1)\n>> Index Cond: (dat IS NOT NULL)\n>> Filter: (typ = 3)\n>> Rows Removed by Filter: 5960\n>> Total runtime: 4.440 ms\n>> (5 rows)\n>>\n>> [Session 2]\n>>\n>> BEGIN;\n>> INSERT INTO plan\n>> SELECT id + 2000001,typ,current_date + id * '1 seconds'::interval ,val\n>> FROM plan\n>> ;\n>>\n>> [Session 1]\n>> EXPLAIN ANALYZE\n>> SELECT * FROM plan\n>> WHERE typ = 3 AND dat IS NOT NULL;\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------- \n>>\n>> Index Scan using plan_dat on plan (cost=0.00..551.35 rows=91 \n>> width=117) (actual time=0.131..202.699 rows=75 loops=1)\n>> Index Cond: (dat IS NOT NULL)\n>> Filter: (typ = 3)\n>> Rows Removed by Filter: 5960\n>> Total runtime: 202.729 ms\n>> (5 rows)\n>> [Session 2]\n>> COMMIT;\n>>\n>> [Session 1...wait for autoanalyze to finish then]\n>>\n>> EXPLAIN ANALYZE\n>> SELECT * FROM plan\n>> WHERE typ = 3 AND dat IS NOT NULL;\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------------- \n>>\n>> Bitmap Heap Scan on plan (cost=407.87..44991.95 rows=10116 \n>> width=117) (actual time=2.692..6.582 rows=75 loops=1)\n>> Recheck Cond: (typ = 
3)\n>> Filter: (dat IS NOT NULL)\n>> Rows Removed by Filter: 19925\n>> -> Bitmap Index Scan on plan_typ (cost=0.00..405.34 rows=20346 \n>> width=0) (actual time=2.573..2.573 rows=20000 loops=1)\n>> Index Cond: (typ = 3)\n>> Total runtime: 6.615 ms\n>>\n>>\n>> Regards\n>>\n>> Mark\n>>\n>>\n> Hmm...\n>\n> You need to specify:\n>\n> 1. version of Postgres\n> 2. Operating system\n> 3. changes to postgresql.conf\n> 4. CPU/RAM etc\n> 5. anything else that might be relevant\n>\n>\n\n\nWhile in general you are quite correct - in the above case (particularly \nas I've supplied a test case) it should be pretty obvious that any \nmoderately modern version of postgres on any supported platform will \nexhibit this.\n\nI produced the above test case on Postgres 9.2.4 Ubuntu 13.04, with no \nchanges to the default postgresql.conf\n\n\nNow our actual production server is a 32 CPU box with 512GB RAM, and 16 \nSAS SSD running Postgres 9.2.4 on Ubuntu 12.04. And yes there are quite \na few changes from the defaults there - and I wasted quite a lot of time \nchasing issues with high CPU and RAM, and changing various configs to \nsee if they helped - before identifying that the issue was in progress \nrow changes and planner statistics. Also in the \"real\" case with much \nbigger datasets the difference between the plan being optimal and it \n*not* being optimal is a factor of 2000x elapsed time instead of a mere \n50x !\n\nregards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 Apr 2013 15:19:53 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 26/04/13 15:34, Gavin Flower wrote:\n> On 26/04/13 15:19, Mark Kirkwood wrote:\n>> While in general you are quite correct - in the above case\n>> (particularly as I've supplied a test case) it should be pretty\n>> obvious that any moderately modern version of postgres on any\n>> supported platform will exhibit this.\n >\n> While I admit that I did not look closely at your test case - I am aware\n> that several times changes to Postgres from one minor version to\n> another, can have drastic unintended side effects (which might, or might\n> not, be relevant to your situation). Besides, it helps sets the scene,\n> and is one less thing that needs to be deduced.\n>\n\nIndeed - however, my perhaps slightly grumpy reply to your email was \nbased on an impression of over keen-ness to dismiss my message without \nreading it (!) and a - dare I say it - one size fits all presentation of \n\"here are the hoops to jump through\". Now I spent a reasonable amount of \ntime preparing the message and its attendant test case - and a comment \nsuch as your based on *not reading it* ...errrm... well lets say I think \nwe can/should do better.\n\nI am concerned that the deafening lack of any replies to my original \nmessage is a result of folk glancing at your original quick reply and \nthinking... incomplete problem spec...ignore... when that is not that \ncase - yes I should have muttered \"9.2\" in the original email, but we \nhave covered that now.\n\nRegards\n\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 May 2013 21:43:36 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> I am concerned that the deafening lack of any replies to my original \n> message is a result of folk glancing at your original quick reply and \n> thinking... incomplete problem spec...ignore... when that is not that \n> case - yes I should have muttered \"9.2\" in the original email, but we \n> have covered that now.\n\nNo, I think it's more that we're trying to get to beta, and so anything\nthat looks like new development is getting shuffled to folks' \"to\nlook at later\" queues. The proposed patch is IMO a complete nonstarter\nanyway; but I'm not sure what a less bogus solution would look like.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 May 2013 10:06:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 02/05/13 02:06, Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n>> I am concerned that the deafening lack of any replies to my original\n>> message is a result of folk glancing at your original quick reply and\n>> thinking... incomplete problem spec...ignore... when that is not that\n>> case - yes I should have muttered \"9.2\" in the original email, but we\n>> have covered that now.\n> No, I think it's more that we're trying to get to beta, and so anything\n> that looks like new development is getting shuffled to folks' \"to\n> look at later\" queues. The proposed patch is IMO a complete nonstarter\n> anyway; but I'm not sure what a less bogus solution would look like.\n>\n\nYeah, I did think that beta might be consuming everyone's attention (of \ncourse immediately *after* sending the email)!\n\nAnd yes, the patch was merely to illustrate the problem rather than any \nserious attempt at a solution.\n\nRegards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 02 May 2013 12:49:17 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 2 May 2013 01:49, Mark Kirkwood <[email protected]> wrote:\n> On 02/05/13 02:06, Tom Lane wrote:\n>>\n>> Mark Kirkwood <[email protected]> writes:\n>>>\n>>> I am concerned that the deafening lack of any replies to my original\n>>> message is a result of folk glancing at your original quick reply and\n>>> thinking... incomplete problem spec...ignore... when that is not that\n>>> case - yes I should have muttered \"9.2\" in the original email, but we\n>>> have covered that now.\n>>\n>> No, I think it's more that we're trying to get to beta, and so anything\n>> that looks like new development is getting shuffled to folks' \"to\n>> look at later\" queues. The proposed patch is IMO a complete nonstarter\n>> anyway; but I'm not sure what a less bogus solution would look like.\n>>\n>\n> Yeah, I did think that beta might be consuming everyone's attention (of\n> course immediately *after* sending the email)!\n>\n> And yes, the patch was merely to illustrate the problem rather than any\n> serious attempt at a solution.\n\nI think we need a problem statement before we attempt a solution,\nwhich is what Tom is alluding to.\n\nISTM that you've got a case where the plan is very sensitive to a\ntable load. Which is a pretty common situation and one that can be\nsolved in various ways. I don't see much that Postgres can do because\nit can't know ahead of time you're about to load rows. We could\nimagine an optimizer that set thresholds on plans that caused the\nwhole plan to be recalculated half way thru a run, but that would be a\nlot of work to design and implement and even harder to test. Having\nstatic plans at least allows us to discuss what it does after the fact\nwith some ease.\n\nThe plan is set using stats that are set when there are very few\nnon-NULL rows, and those increase massively on load. The way to cope\nis to run the ANALYZE immediately after the load and then don't allow\nauto-ANALYZE to reset them later.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 May 2013 13:27:28 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 03/05/13 00:27, Simon Riggs wrote:\n> On 2 May 2013 01:49, Mark Kirkwood<[email protected]> wrote:\n>> On 02/05/13 02:06, Tom Lane wrote:\n>>> Mark Kirkwood<[email protected]> writes:\n>>>> I am concerned that the deafening lack of any replies to my original\n>>>> message is a result of folk glancing at your original quick reply and\n>>>> thinking... incomplete problem spec...ignore... when that is not that\n>>>> case - yes I should have muttered \"9.2\" in the original email, but we\n>>>> have covered that now.\n>>> No, I think it's more that we're trying to get to beta, and so anything\n>>> that looks like new development is getting shuffled to folks' \"to\n>>> look at later\" queues. The proposed patch is IMO a complete nonstarter\n>>> anyway; but I'm not sure what a less bogus solution would look like.\n>>>\n>> Yeah, I did think that beta might be consuming everyone's attention (of\n>> course immediately *after* sending the email)!\n>>\n>> And yes, the patch was merely to illustrate the problem rather than any\n>> serious attempt at a solution.\n> I think we need a problem statement before we attempt a solution,\n> which is what Tom is alluding to.\n>\n> ISTM that you've got a case where the plan is very sensitive to a\n> table load. Which is a pretty common situation and one that can be\n> solved in various ways. I don't see much that Postgres can do because\n> it can't know ahead of time you're about to load rows. We could\n> imagine an optimizer that set thresholds on plans that caused the\n> whole plan to be recalculated half way thru a run, but that would be a\n> lot of work to design and implement and even harder to test. Having\n> static plans at least allows us to discuss what it does after the fact\n> with some ease.\n>\n> The plan is set using stats that are set when there are very few\n> non-NULL rows, and those increase massively on load. The way to cope\n> is to run the ANALYZE immediately after the load and then don't allow\n> auto-ANALYZE to reset them later.\n>\n> --\n> Simon Riggshttp://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\nWould be practicable to have a facility for telling Postgres in advance \nwhat you intend to do, so it can create plans accordingly?\n\nI won't try and invent syntax, but it would be good to tell the system \nthat you intend to:\ninsert a million rows sequentially\nor\ninsert ten million rows randomly\nor\nupdate two million rows in the primary key range 'AA00000' to 'PP88877'\netc.\n\nThough, sometime it may be useful to give a little more \ndetail,especially if you have a good estimate of the distribution of \nprimary keys, e.g.:\nAA000000\n20%\nAA000456\n 0%\nKE700999\n 30%\nNN400005\n 35%\nPA000001\n 15%\nPP808877\n\n\nI figure that if the planner had more information about what one intends \nto do, then it could combine that with the statistics it knows, to come \nup with a more realistic plan.\n\n\nCheers,\nGavin\n\n\n\n\n\n\n\n\n\nOn 03/05/13 00:27, Simon Riggs wrote:\n\n\nOn 2 May 2013 01:49, Mark Kirkwood <[email protected]> wrote:\n\n\nOn 02/05/13 02:06, Tom Lane wrote:\n\n\nMark Kirkwood <[email protected]> writes:\n\n\nI am concerned that the deafening lack of any replies to my original\nmessage is a result of folk glancing at your original quick reply and\nthinking... incomplete problem spec...ignore... 
when that is not that\ncase - yes I should have muttered \"9.2\" in the original email, but we\nhave covered that now.\n\n\nNo, I think it's more that we're trying to get to beta, and so anything\nthat looks like new development is getting shuffled to folks' \"to\nlook at later\" queues. The proposed patch is IMO a complete nonstarter\nanyway; but I'm not sure what a less bogus solution would look like.\n\n\n\nYeah, I did think that beta might be consuming everyone's attention (of\ncourse immediately *after* sending the email)!\n\nAnd yes, the patch was merely to illustrate the problem rather than any\nserious attempt at a solution.\n\n\nI think we need a problem statement before we attempt a solution,\nwhich is what Tom is alluding to.\n\nISTM that you've got a case where the plan is very sensitive to a\ntable load. Which is a pretty common situation and one that can be\nsolved in various ways. I don't see much that Postgres can do because\nit can't know ahead of time you're about to load rows. We could\nimagine an optimizer that set thresholds on plans that caused the\nwhole plan to be recalculated half way thru a run, but that would be a\nlot of work to design and implement and even harder to test. Having\nstatic plans at least allows us to discuss what it does after the fact\nwith some ease.\n\nThe plan is set using stats that are set when there are very few\nnon-NULL rows, and those increase massively on load. The way to cope\nis to run the ANALYZE immediately after the load and then don't allow\nauto-ANALYZE to reset them later.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\nWould be practicable to have a facility for telling\n Postgres in advance what you intend to do,\n so it can create plans accordingly?\n\n I won't try and invent syntax, but it would be good to tell the\n system that you intend to:\n insert a million rows sequentially\n or\n insert ten million rows randomly\n or\n update two million rows in the primary key range 'AA00000' to\n 'PP88877'\n etc.\n\n Though, sometime it may be useful to give a little more\n detail,especially if you have a good estimate of the distribution of\n primary keys, e.g.:\nAA000000\n 20%\nAA000456\n 0%\nKE700999\n 30%\n NN400005\n 35%\n PA000001\n 15%\n PP808877\n\n\n I figure that if the planner had more information about what one\n intends to do, then it could combine that with the statistics it\n knows, to come up with a more realistic plan.\n\n\n Cheers,\n Gavin",
"msg_date": "Fri, 03 May 2013 09:06:03 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "> On 2 May 2013 01:49, Mark Kirkwood <[email protected]> wrote:\n>\n> I think we need a problem statement before we attempt a solution,\n> which is what Tom is alluding to.\n>\n\nActually no - I think Tom (quite correctly) was saying that the patch was\nnot a viable solution. With which I agree.\n\nI believe the title of this thread is the problem statement.\n\n> ISTM that you've got a case where the plan is very sensitive to a\n> table load. Which is a pretty common situation and one that can be\n> solved in various ways. I don't see much that Postgres can do because\n> it can't know ahead of time you're about to load rows. We could\n> imagine an optimizer that set thresholds on plans that caused the\n> whole plan to be recalculated half way thru a run, but that would be a\n> lot of work to design and implement and even harder to test. Having\n> static plans at least allows us to discuss what it does after the fact\n> with some ease.\n>\n> The plan is set using stats that are set when there are very few\n> non-NULL rows, and those increase massively on load. The way to cope\n> is to run the ANALYZE immediately after the load and then don't allow\n> auto-ANALYZE to reset them later.\n\nNo. We do run analyze immediately after the load. The surprise was that\nthis was not sufficient - the (small) amount of time where non optimal\nplans were being used due to the in progress row activity was enough to\ncripple the system - that is the problem. The analysis of why not led to\nthe test case included in the original email. And sure it is deliberately\ncrafted to display the issue, and is therefore open to criticism for being\nartificial. However it was purely meant to make it easy to see what I was\ntalking about.\n\n\nCurrently we are working around this by coercing one of the predicates in\nthe query to discourage the attractive looking but dangerous index.\n\nI think the idea of telling postgres that we are doing a load is probably\nthe wrong way to go about this. We have a framework that tries to\nautomatically figure out the best plans...I think some more thought about\nhow to make that understand some of the more subtle triggers for a\ntime-to-do-new-plans moment is the way to go. I understand this is\nprobably hard - and may imply some radical surgery to how the stats\ncollector and planner interact.\n\nRegards\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 May 2013 10:19:18 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "[email protected] wrote on 03.05.2013 00:19:\n> I think the idea of telling postgres that we are doing a load is probably\n> the wrong way to go about this. We have a framework that tries to\n> automatically figure out the best plans...I think some more thought about\n> how to make that understand some of the more subtle triggers for a\n> time-to-do-new-plans moment is the way to go. I understand this is\n> probably hard - and may imply some radical surgery to how the stats\n> collector and planner interact.\n\nI wonder if \"freezing\" (analyze, then disable autovacuum) the statistics for the large number of rows would work.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 03 May 2013 00:32:20 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "> [email protected] wrote on 03.05.2013 00:19:\n>> I think the idea of telling postgres that we are doing a load is\n>> probably\n>> the wrong way to go about this. We have a framework that tries to\n>> automatically figure out the best plans...I think some more thought\n>> about\n>> how to make that understand some of the more subtle triggers for a\n>> time-to-do-new-plans moment is the way to go. I understand this is\n>> probably hard - and may imply some radical surgery to how the stats\n>> collector and planner interact.\n>\n> I wonder if \"freezing\" (analyze, then disable autovacuum) the statistics\n> for the large number of rows would work.\n>\n>\n>\n\nI'm thinking that the issue is actually the opposite - it is that a new\nplan is needed because the new (uncomitted) rows are changing the data\ndistribution. So we want more plan instability rather than plan stability\n:-)\n\nCheers\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 May 2013 10:59:31 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 2 May 2013 23:19, <[email protected]> wrote:\n>> On 2 May 2013 01:49, Mark Kirkwood <[email protected]> wrote:\n>>\n>> I think we need a problem statement before we attempt a solution,\n>> which is what Tom is alluding to.\n>>\n>\n> Actually no - I think Tom (quite correctly) was saying that the patch was\n> not a viable solution. With which I agree.\n>\n> I believe the title of this thread is the problem statement.\n>\n>> ISTM that you've got a case where the plan is very sensitive to a\n>> table load. Which is a pretty common situation and one that can be\n>> solved in various ways. I don't see much that Postgres can do because\n>> it can't know ahead of time you're about to load rows. We could\n>> imagine an optimizer that set thresholds on plans that caused the\n>> whole plan to be recalculated half way thru a run, but that would be a\n>> lot of work to design and implement and even harder to test. Having\n>> static plans at least allows us to discuss what it does after the fact\n>> with some ease.\n>>\n>> The plan is set using stats that are set when there are very few\n>> non-NULL rows, and those increase massively on load. The way to cope\n>> is to run the ANALYZE immediately after the load and then don't allow\n>> auto-ANALYZE to reset them later.\n>\n> No. We do run analyze immediately after the load. The surprise was that\n> this was not sufficient - the (small) amount of time where non optimal\n> plans were being used due to the in progress row activity was enough to\n> cripple the system - that is the problem. The analysis of why not led to\n> the test case included in the original email. And sure it is deliberately\n> crafted to display the issue, and is therefore open to criticism for being\n> artificial. However it was purely meant to make it easy to see what I was\n> talking about.\n\nI had another look at this and see I that I read the second explain incorrectly.\n\nThe amount of data examined and returned is identical in both plans.\nThe only difference is the number of in-progress rows seen by the\nsecond query. Looking at the numbers some more, it looks like 6000\nin-progress rows are examined in addition to the data. It might be\nworth an EXPLAIN patch to put instrumentation in to show that, but its\nnot that interesting.\n\nIt would be useful to force the indexscan into a bitmapscan to check\nthat the cost isn't attributable to the plan but to other overheads.\n\nWhat appears to be happening is we're spending a lot of time in\nTransactionIdIsInProgress() so we can set hints and then when we find\nit is still in progress we then spend more time in XidIsInSnapshot()\nwhile we check that it is still invisible to us. Even if the\ntransaction we see repeatedly ends, we will still pay the cost in\nXidIsInSnapshot repeatedly as we execute.\n\nGiven that code path, I would expect it to suck worse on a live system\nwith many sessions, and even worse with many subtransactions.\n\n(1) A proposed fix is attached, but its only a partial one and barely tested.\n\nDeeper fixes might be\n\n(2) to sort the xid array if we call XidIsInSnapshot too many times\nin a transaction. I don't think that is worth it, because a long\nrunning snapshot may be examined many times, but is unlikely to see\nmultiple in-progress xids repeatedly. 
Whereas your case seems\nreasonably common.\n\n(3) to make the check on TransactionIdIsInProgress() into a heuristic,\nsince we don't *need* to check that, so if we keep checking the same\nxid repeatedly we can reduce the number of checks or avoid xids that\nseem to be long running. That's slightly more coding than my quick\nhack here but seems worth it.\n\nI think we need both (1) and (3) but the attached patch does just (1).\n\nThis is a similar optimisation to the one I introduced for\nTransactionIdIsKnownCompleted(), except this applies to repeated\nchecking of as yet-incomplete xids, and to bulk concurrent\ntransactions.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
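The bitmap comparison suggested above can be forced from the session running the test query with the usual planner switches; a small sketch:

SET enable_indexscan = off;   -- plain index scans off; bitmap index/heap scans stay available
EXPLAIN ANALYZE
SELECT * FROM plan WHERE typ = 3 AND dat IS NOT NULL;
RESET enable_indexscan;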
"msg_date": "Fri, 3 May 2013 13:41:31 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n\n> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n> since we don't *need* to check that, so if we keep checking the same\n> xid repeatedly we can reduce the number of checks or avoid xids that\n> seem to be long running. That's slightly more coding than my quick\n> hack here but seems worth it.\n>\n> I think we need both (1) and (3) but the attached patch does just (1).\n>\n> This is a similar optimisation to the one I introduced for\n> TransactionIdIsKnownCompleted(), except this applies to repeated\n> checking of as yet-incomplete xids, and to bulk concurrent\n> transactions.\n\nISTM we can improve performance of TransactionIdIsInProgress() by\ncaching the procno of our last xid.\n\nMark, could you retest with both these patches? Thanks.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 4 May 2013 13:49:39 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 05/05/13 00:49, Simon Riggs wrote:\n> On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n>\n>> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n>> since we don't *need* to check that, so if we keep checking the same\n>> xid repeatedly we can reduce the number of checks or avoid xids that\n>> seem to be long running. That's slightly more coding than my quick\n>> hack here but seems worth it.\n>>\n>> I think we need both (1) and (3) but the attached patch does just (1).\n>>\n>> This is a similar optimisation to the one I introduced for\n>> TransactionIdIsKnownCompleted(), except this applies to repeated\n>> checking of as yet-incomplete xids, and to bulk concurrent\n>> transactions.\n>\n> ISTM we can improve performance of TransactionIdIsInProgress() by\n> caching the procno of our last xid.\n>\n> Mark, could you retest with both these patches? Thanks.\n>\n\nThanks Simon, will do and report back.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 May 2013 13:51:50 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 6 May 2013 02:51, Mark Kirkwood <[email protected]> wrote:\n> On 05/05/13 00:49, Simon Riggs wrote:\n>>\n>> On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n>>\n>>> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n>>> since we don't *need* to check that, so if we keep checking the same\n>>> xid repeatedly we can reduce the number of checks or avoid xids that\n>>> seem to be long running. That's slightly more coding than my quick\n>>> hack here but seems worth it.\n>>>\n>>> I think we need both (1) and (3) but the attached patch does just (1).\n>>>\n>>> This is a similar optimisation to the one I introduced for\n>>> TransactionIdIsKnownCompleted(), except this applies to repeated\n>>> checking of as yet-incomplete xids, and to bulk concurrent\n>>> transactions.\n>>\n>>\n>> ISTM we can improve performance of TransactionIdIsInProgress() by\n>> caching the procno of our last xid.\n>>\n>> Mark, could you retest with both these patches? Thanks.\n>>\n>\n> Thanks Simon, will do and report back.\n\nOK, here's a easily reproducible test...\n\nPrep:\nDROP TABLE IF EXISTS plan;\nCREATE TABLE plan\n(\n id INTEGER NOT NULL,\n typ INTEGER NOT NULL,\n dat TIMESTAMP,\n val TEXT NOT NULL\n);\ninsert into plan select generate_series(1,100000), 0,\ncurrent_timestamp, 'some texts';\nCREATE UNIQUE INDEX plan_id ON plan(id);\nCREATE INDEX plan_dat ON plan(dat);\n\ntestcase.pgb\nselect count(*) from plan where dat is null and typ = 3;\n\nSession 1:\npgbench -n -f testcase.pgb -t 100\n\nSession 2:\nBEGIN; insert into plan select 1000000 + generate_series(1, 100000),\n3, NULL, 'b';\n\nTransaction rate in Session 1: (in tps)\n(a) before we run Session 2:\nCurrent: 5600tps\nPatched: 5600tps\n\n(b) after Session 2 has run, yet before transaction end\nCurrent: 56tps\nPatched: 65tps\n\n(c ) after Session 2 has aborted\nCurrent/Patched: 836, 1028, 5400tps\nVACUUM improves timing again\n\nNew version of patch attached which fixes a few bugs.\n\nPatch works and improves things, but we're still swamped by the block\naccesses via the index.\n\nWhich brings me back to Mark's original point, which is that we are\nx100 times slower in this case and it *is* because the choice of\nIndexScan is a bad one for this situation.\n\nAfter some thought on this, I do think we need to do something about\nit directly, rather than by tuning infrastructire (as I just\nattempted). The root cause here is that IndexScan plans are sensitive\nto mistakes in data distribution, much more so than other plan types.\n\nThe two options, broadly, are to either\n\n1. avoid IndexScans in the planner unless they have a *significantly*\nbetter cost. At the moment we use IndexScans if cost is lowest, even\nif that is only by a whisker.\n\n2. make IndexScans adaptive so that they switch to other plan types\nmid-way through execution.\n\n(2) seems fairly hard generically, since we'd have to keep track of\nthe tids returned from the IndexScan to allow us to switch to a\ndifferent plan and avoid re-issuing rows that we've already returned.\nBut maybe if we adapted the IndexScan plan type so that it adopted a\nmore page oriented approach internally, it could act like a\nbitmapscan. Anyway, that would need some proof that it would work and\nsounds like a fair task.\n\n(1) sounds more easily possible and plausible. At the moment we have\nenable_indexscan = off. 
If we had something like\nplan_cost_weight_indexscan = N, we could selectively increase the cost\nof index scans so that they would be less likely to be selected. i.e.\nplan_cost_weight_indexscan = 2 would mean an indexscan would need to\nbe half the cost of any other plan before it was selected. (parameter\nname selected so it could apply to all parameter types). The reason to\napply this weighting would be to calculate \"risk adjusted cost\" not\njust estimated cost.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 6 May 2013 09:14:01 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "\n> \n> (2) seems fairly hard generically, since we'd have to keep track of\n> the tids returned from the IndexScan to allow us to switch to a\n> different plan and avoid re-issuing rows that we've already returned.\n> But maybe if we adapted the IndexScan plan type so that it adopted a\n> more page oriented approach internally, it could act like a\n> bitmapscan. Anyway, that would need some proof that it would work and\n> sounds like a fair task.\n> \n> (1) sounds more easily possible and plausible. At the moment we have\n> enable_indexscan = off. If we had something like\n> plan_cost_weight_indexscan = N, we could selectively increase the cost\n> of index scans so that they would be less likely to be selected. i.e.\n> plan_cost_weight_indexscan = 2 would mean an indexscan would need to\n> be half the cost of any other plan before it was selected. (parameter\n> name selected so it could apply to all parameter types). The reason to\n> apply this weighting would be to calculate \"risk adjusted cost\" not\n> just estimated cost.\n> \n> --\n> Simon Riggs http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\nAnother option would be for the bulk insert/update/delete to track the\ndistribution stats as the operation progresses and if it detects that it\nis changing the distribution of data beyond a certain threshold it would\nupdate the pg stats accordingly.\n\n-- \nMatt Clarkson\nCatalyst.Net Limited\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 May 2013 08:18:52 +1200",
"msg_from": "Matt Clarkson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "> Simon Riggs wrote:\n>\n> Patch works and improves things, but we're still swamped by the block\n> accesses via the index.\n\nWhich *might* be enough to stop it making the server go unresponsive,\nwe'll look at the effect of this in the next few days, nice work!\n\n>\n> Which brings me back to Mark's original point, which is that we are\n> x100 times slower in this case and it *is* because the choice of\n> IndexScan is a bad one for this situation.\n>\n> After some thought on this, I do think we need to do something about\n> it directly, rather than by tuning infrastructire (as I just\n> attempted). The root cause here is that IndexScan plans are sensitive\n> to mistakes in data distribution, much more so than other plan types.\n>\n> The two options, broadly, are to either\n>\n> 1. avoid IndexScans in the planner unless they have a *significantly*\n> better cost. At the moment we use IndexScans if cost is lowest, even\n> if that is only by a whisker.\n>\n> 2. make IndexScans adaptive so that they switch to other plan types\n> mid-way through execution.\n>\n> (2) seems fairly hard generically, since we'd have to keep track of\n> the tids returned from the IndexScan to allow us to switch to a\n> different plan and avoid re-issuing rows that we've already returned.\n> But maybe if we adapted the IndexScan plan type so that it adopted a\n> more page oriented approach internally, it could act like a\n> bitmapscan. Anyway, that would need some proof that it would work and\n> sounds like a fair task.\n>\n> (1) sounds more easily possible and plausible. At the moment we have\n> enable_indexscan = off. If we had something like\n> plan_cost_weight_indexscan = N, we could selectively increase the cost\n> of index scans so that they would be less likely to be selected. i.e.\n> plan_cost_weight_indexscan = 2 would mean an indexscan would need to\n> be half the cost of any other plan before it was selected. (parameter\n> name selected so it could apply to all parameter types). The reason to\n> apply this weighting would be to calculate \"risk adjusted cost\" not\n> just estimated cost.\n>\n\nI'm thinking that a variant of (2) might be simpler to inplement:\n\n(I think Matt C essentially beat me to this suggestion - he originally\ndiscovered this issue). It is probably good enough for only *new* plans to\nreact to the increased/increasing number of in progress rows. So this\nwould require backends doing significant numbers of row changes to either\ndirectly update pg_statistic or report their in progress numbers to the\nstats collector. The key change here is the partial execution numbers\nwould need to be sent. Clearly one would need to avoid doing this too\noften (!) - possibly only when number of changed rows >\nautovacuum_analyze_scale_factor proportion of the relation concerned or\nsimilar.\n\nregards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 May 2013 12:23:58 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 7 May 2013 01:23, <[email protected]> wrote:\n\n> I'm thinking that a variant of (2) might be simpler to inplement:\n>\n> (I think Matt C essentially beat me to this suggestion - he originally\n> discovered this issue). It is probably good enough for only *new* plans to\n> react to the increased/increasing number of in progress rows. So this\n> would require backends doing significant numbers of row changes to either\n> directly update pg_statistic or report their in progress numbers to the\n> stats collector. The key change here is the partial execution numbers\n> would need to be sent. Clearly one would need to avoid doing this too\n> often (!) - possibly only when number of changed rows >\n> autovacuum_analyze_scale_factor proportion of the relation concerned or\n> similar.\n\nAre you loading using COPY? Why not break down the load into chunks?\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 May 2013 07:10:26 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 07/05/13 18:10, Simon Riggs wrote:\n> On 7 May 2013 01:23, <[email protected]> wrote:\n>\n>> I'm thinking that a variant of (2) might be simpler to inplement:\n>>\n>> (I think Matt C essentially beat me to this suggestion - he originally\n>> discovered this issue). It is probably good enough for only *new* plans to\n>> react to the increased/increasing number of in progress rows. So this\n>> would require backends doing significant numbers of row changes to either\n>> directly update pg_statistic or report their in progress numbers to the\n>> stats collector. The key change here is the partial execution numbers\n>> would need to be sent. Clearly one would need to avoid doing this too\n>> often (!) - possibly only when number of changed rows >\n>> autovacuum_analyze_scale_factor proportion of the relation concerned or\n>> similar.\n>\n> Are you loading using COPY? Why not break down the load into chunks?\n>\n\nINSERT - but we could maybe workaround by chunking the INSERT. However \nthat *really* breaks the idea that in SQL you just say what you want, \nnot how the database engine should do it! And more practically means \nthat the most obvious and clear way to add your new data has nasty side \neffects, and you have to tip toe around muttering secret incantations to \nmake things work well :-)\n\nI'm still thinking that making postgres smarter about having current \nstats for getting the actual optimal plan is the best solution.\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 May 2013 18:32:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "\nOn Tue, 2013-05-07 at 18:32 +1200, Mark Kirkwood wrote:\n> On 07/05/13 18:10, Simon Riggs wrote:\n> > On 7 May 2013 01:23, <[email protected]> wrote:\n> >\n> >> I'm thinking that a variant of (2) might be simpler to inplement:\n> >>\n> >> (I think Matt C essentially beat me to this suggestion - he originally\n> >> discovered this issue). It is probably good enough for only *new* plans to\n> >> react to the increased/increasing number of in progress rows. So this\n> >> would require backends doing significant numbers of row changes to either\n> >> directly update pg_statistic or report their in progress numbers to the\n> >> stats collector. The key change here is the partial execution numbers\n> >> would need to be sent. Clearly one would need to avoid doing this too\n> >> often (!) - possibly only when number of changed rows >\n> >> autovacuum_analyze_scale_factor proportion of the relation concerned or\n> >> similar.\n> >\n> > Are you loading using COPY? Why not break down the load into chunks?\n> >\n> \n> INSERT - but we could maybe workaround by chunking the INSERT. However \n> that *really* breaks the idea that in SQL you just say what you want, \n> not how the database engine should do it! And more practically means \n> that the most obvious and clear way to add your new data has nasty side \n> effects, and you have to tip toe around muttering secret incantations to \n> make things work well :-)\n\nWe also had the same problem with an UPDATE altering the data\ndistribution in such a way that trivial but frequently executed queries\ncause massive server load until auto analyze sorted out the stats.\n\n-- \nMatt Clarkson\nCatalyst.Net Limited\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 May 2013 19:19:25 +1200",
"msg_from": "Matt Clarkson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 7 May 2013 07:32, Mark Kirkwood <[email protected]> wrote:\n> On 07/05/13 18:10, Simon Riggs wrote:\n>>\n>> On 7 May 2013 01:23, <[email protected]> wrote:\n>>\n>>> I'm thinking that a variant of (2) might be simpler to inplement:\n>>>\n>>> (I think Matt C essentially beat me to this suggestion - he originally\n>>> discovered this issue). It is probably good enough for only *new* plans\n>>> to\n>>> react to the increased/increasing number of in progress rows. So this\n>>> would require backends doing significant numbers of row changes to either\n>>> directly update pg_statistic or report their in progress numbers to the\n>>> stats collector. The key change here is the partial execution numbers\n>>> would need to be sent. Clearly one would need to avoid doing this too\n>>> often (!) - possibly only when number of changed rows >\n>>> autovacuum_analyze_scale_factor proportion of the relation concerned or\n>>> similar.\n>>\n>>\n>> Are you loading using COPY? Why not break down the load into chunks?\n>>\n>\n> INSERT - but we could maybe workaround by chunking the INSERT. However that\n> *really* breaks the idea that in SQL you just say what you want, not how the\n> database engine should do it! And more practically means that the most\n> obvious and clear way to add your new data has nasty side effects, and you\n> have to tip toe around muttering secret incantations to make things work\n> well :-)\n\nYes, we'd need to break up SQL statements into pieces and use external\ntransaction snapshots to do that.\n\n> I'm still thinking that making postgres smarter about having current stats\n> for getting the actual optimal plan is the best solution.\n\nI agree.\n\nThe challenge now is to come up with something that actually works;\nmost of the ideas have been very vague and ignore the many downsides.\nThe hard bit is the analysis and balanced thinking, not the\ndeveloping.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 May 2013 08:33:36 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
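A minimal sketch of the chunked-INSERT workaround being discussed in the message above, using the test table from this thread (column order id, typ, dat, val as in the quoted test case); the chunk size of 25000 is an arbitrary choice for illustration. Run outside an explicit transaction, each chunk commits on its own, so statistics and other sessions' plans can catch up between chunks, at the cost of the declarative single-statement load (and its atomicity) that Mark defends above.

INSERT INTO plan SELECT 1000000 + generate_series(1,     25000), 3, NULL, 'b';
INSERT INTO plan SELECT 1000000 + generate_series(25001, 50000), 3, NULL, 'b';
INSERT INTO plan SELECT 1000000 + generate_series(50001, 75000), 3, NULL, 'b';
INSERT INTO plan SELECT 1000000 + generate_series(75001, 100000), 3, NULL, 'b';
ANALYZE plan;   -- refresh the column statistics once the whole load is in place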
{
"msg_contents": "On 07/05/13 19:33, Simon Riggs wrote:\n> On 7 May 2013 07:32, Mark Kirkwood <[email protected]> wrote:\n>> On 07/05/13 18:10, Simon Riggs wrote:\n>>>\n>>> On 7 May 2013 01:23, <[email protected]> wrote:\n>>>\n>>>> I'm thinking that a variant of (2) might be simpler to inplement:\n>>>>\n>>>> (I think Matt C essentially beat me to this suggestion - he originally\n>>>> discovered this issue). It is probably good enough for only *new* plans\n>>>> to\n>>>> react to the increased/increasing number of in progress rows. So this\n>>>> would require backends doing significant numbers of row changes to either\n>>>> directly update pg_statistic or report their in progress numbers to the\n>>>> stats collector. The key change here is the partial execution numbers\n>>>> would need to be sent. Clearly one would need to avoid doing this too\n>>>> often (!) - possibly only when number of changed rows >\n>>>> autovacuum_analyze_scale_factor proportion of the relation concerned or\n>>>> similar.\n>>>\n>>>\n>>> Are you loading using COPY? Why not break down the load into chunks?\n>>>\n>>\n>> INSERT - but we could maybe workaround by chunking the INSERT. However that\n>> *really* breaks the idea that in SQL you just say what you want, not how the\n>> database engine should do it! And more practically means that the most\n>> obvious and clear way to add your new data has nasty side effects, and you\n>> have to tip toe around muttering secret incantations to make things work\n>> well :-)\n>\n> Yes, we'd need to break up SQL statements into pieces and use external\n> transaction snapshots to do that.\n>\n>> I'm still thinking that making postgres smarter about having current stats\n>> for getting the actual optimal plan is the best solution.\n>\n> I agree.\n>\n> The challenge now is to come up with something that actually works;\n> most of the ideas have been very vague and ignore the many downsides.\n> The hard bit is the analysis and balanced thinking, not the\n> developing.\n>\n\nYeah - seeing likely downsides can be a bit tricky too. I'll have a play \nwith some prototyping ideas, since this is actually an area of postgres \n(analyze/stats collector) that I've fiddled with before :-)\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 May 2013 20:17:51 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "Well, could you write a trigger that would do what you need? AFAIR analyze\ndata is stored no matter transaction boundaries. You could store some\ncounters in session vars and issue an explicit analyze when enough rows\nwere added.\n7 трав. 2013 08:33, \"Mark Kirkwood\" <[email protected]> напис.\n\n> On 07/05/13 18:10, Simon Riggs wrote:\n>\n>> On 7 May 2013 01:23, <[email protected]**> wrote:\n>>\n>> I'm thinking that a variant of (2) might be simpler to inplement:\n>>>\n>>> (I think Matt C essentially beat me to this suggestion - he originally\n>>> discovered this issue). It is probably good enough for only *new* plans\n>>> to\n>>> react to the increased/increasing number of in progress rows. So this\n>>> would require backends doing significant numbers of row changes to either\n>>> directly update pg_statistic or report their in progress numbers to the\n>>> stats collector. The key change here is the partial execution numbers\n>>> would need to be sent. Clearly one would need to avoid doing this too\n>>> often (!) - possibly only when number of changed rows >\n>>> autovacuum_analyze_scale_**factor proportion of the relation concerned\n>>> or\n>>> similar.\n>>>\n>>\n>> Are you loading using COPY? Why not break down the load into chunks?\n>>\n>>\n> INSERT - but we could maybe workaround by chunking the INSERT. However\n> that *really* breaks the idea that in SQL you just say what you want, not\n> how the database engine should do it! And more practically means that the\n> most obvious and clear way to add your new data has nasty side effects, and\n> you have to tip toe around muttering secret incantations to make things\n> work well :-)\n>\n> I'm still thinking that making postgres smarter about having current stats\n> for getting the actual optimal plan is the best solution.\n>\n> Cheers\n>\n> Mark\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nWell, could you write a trigger that would do what you need? AFAIR analyze data is stored no matter transaction boundaries. You could store some counters in session vars and issue an explicit analyze when enough rows were added.\n7 трав. 2013 08:33, \"Mark Kirkwood\" <[email protected]> напис.\nOn 07/05/13 18:10, Simon Riggs wrote:\n\nOn 7 May 2013 01:23, <[email protected]> wrote:\n\n\nI'm thinking that a variant of (2) might be simpler to inplement:\n\n(I think Matt C essentially beat me to this suggestion - he originally\ndiscovered this issue). It is probably good enough for only *new* plans to\nreact to the increased/increasing number of in progress rows. So this\nwould require backends doing significant numbers of row changes to either\ndirectly update pg_statistic or report their in progress numbers to the\nstats collector. The key change here is the partial execution numbers\nwould need to be sent. Clearly one would need to avoid doing this too\noften (!) - possibly only when number of changed rows >\nautovacuum_analyze_scale_factor proportion of the relation concerned or\nsimilar.\n\n\nAre you loading using COPY? Why not break down the load into chunks?\n\n\n\nINSERT - but we could maybe workaround by chunking the INSERT. However that *really* breaks the idea that in SQL you just say what you want, not how the database engine should do it! 
And more practically means that the most obvious and clear way to add your new data has nasty side effects, and you have to tip toe around muttering secret incantations to make things work well :-)\n\nI'm still thinking that making postgres smarter about having current stats for getting the actual optimal plan is the best solution.\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 10 May 2013 13:48:49 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "(See below for the reply)\n\nOn 10/05/13 22:48, Vitalii Tymchyshyn wrote:\n> Well, could you write a trigger that would do what you need? AFAIR\n> analyze data is stored no matter transaction boundaries. You could store\n> some counters in session vars and issue an explicit analyze when enough\n> rows were added.\n>\n> 7 трав. 2013 08:33, \"Mark Kirkwood\" <[email protected]\n> <mailto:[email protected]>> напис.\n>\n> On 07/05/13 18:10, Simon Riggs wrote:\n>\n> On 7 May 2013 01:23, <[email protected]\n> <mailto:[email protected]>__> wrote:\n>\n> I'm thinking that a variant of (2) might be simpler to\n> inplement:\n>\n> (I think Matt C essentially beat me to this suggestion - he\n> originally\n> discovered this issue). It is probably good enough for only\n> *new* plans to\n> react to the increased/increasing number of in progress\n> rows. So this\n> would require backends doing significant numbers of row\n> changes to either\n> directly update pg_statistic or report their in progress\n> numbers to the\n> stats collector. The key change here is the partial\n> execution numbers\n> would need to be sent. Clearly one would need to avoid doing\n> this too\n> often (!) - possibly only when number of changed rows >\n> autovacuum_analyze_scale___factor proportion of the relation\n> concerned or\n> similar.\n>\n>\n> Are you loading using COPY? Why not break down the load into chunks?\n>\n>\n> INSERT - but we could maybe workaround by chunking the INSERT.\n> However that *really* breaks the idea that in SQL you just say what\n> you want, not how the database engine should do it! And more\n> practically means that the most obvious and clear way to add your\n> new data has nasty side effects, and you have to tip toe around\n> muttering secret incantations to make things work well :-)\n>\n> I'm still thinking that making postgres smarter about having current\n> stats for getting the actual optimal plan is the best solution.\n\nUnfortunately a trigger will not really do the job - analyze ignores in \nprogress rows (unless they were added by the current transaction), and \nthen the changes made by analyze are not seen by any other sessions. So \nno changes to plans until the entire INSERT is complete and COMMIT \nhappens (which could be a while - too long in our case).\n\nFiguring out how to improve on this situation is tricky.\n\n\nCheers\n\nMark\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 11 May 2013 00:51:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> Unfortunately a trigger will not really do the job - analyze ignores in \n> progress rows (unless they were added by the current transaction), and \n> then the changes made by analyze are not seen by any other sessions. So \n> no changes to plans until the entire INSERT is complete and COMMIT \n> happens (which could be a while - too long in our case).\n\nI'm not sure I believe the thesis that plans won't change at all.\nThe planner will notice that the physical size of the table is growing.\nThat may not be enough, if the table-contents statistics are missing\nor completely unreflective of reality, but it's something.\n\nIt is true that *already cached* plans won't change until after an\nANALYZE is done (the key point there being that ANALYZE sends out a\nshared-inval message to force replanning of plans for the table).\nConceivably you could issue concurrent ANALYZEs occasionally while\nthe INSERT is running, not so much to update the stats --- because\nthey wouldn't --- as to force cached-plan invalidation.\n\n\t\t\tregards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 May 2013 09:30:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
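A rough illustration (not part of any patch in this thread) of Tom's suggestion above, using the thread's test table: an ANALYZE issued from a second session does not sample the rows of the still-open bulk transaction, but it does send the shared-inval message that makes other backends drop their cached plans for the table.

-- Session A: bulk load, transaction deliberately left open
BEGIN;
INSERT INTO plan SELECT 1000000 + generate_series(1, 100000), 3, NULL, 'b';
-- ... not yet committed ...

-- Session B: run occasionally while session A is still in progress
ANALYZE plan;   -- column stats stay as they were (in-progress rows are
                -- ignored), but cached plans touching "plan" are invalidated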
{
"msg_contents": "On 11/05/13 01:30, Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n>> Unfortunately a trigger will not really do the job - analyze ignores in\n>> progress rows (unless they were added by the current transaction), and\n>> then the changes made by analyze are not seen by any other sessions. So\n>> no changes to plans until the entire INSERT is complete and COMMIT\n>> happens (which could be a while - too long in our case).\n>\n> I'm not sure I believe the thesis that plans won't change at all.\n> The planner will notice that the physical size of the table is growing.\n> That may not be enough, if the table-contents statistics are missing\n> or completely unreflective of reality, but it's something.\n>\n> It is true that *already cached* plans won't change until after an\n> ANALYZE is done (the key point there being that ANALYZE sends out a\n> shared-inval message to force replanning of plans for the table).\n> Conceivably you could issue concurrent ANALYZEs occasionally while\n> the INSERT is running, not so much to update the stats --- because\n> they wouldn't --- as to force cached-plan invalidation.\n\nYeah - true, I was focusing on the particular type of query illustrated \nin the test case - pretty much entirely needing updated selectivity \nstats for a column, which wouldn't change unfortunately.\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 11 May 2013 16:58:47 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 03.05.2013 15:41, Simon Riggs wrote:\n> What appears to be happening is we're spending a lot of time in\n> TransactionIdIsInProgress() so we can set hints and then when we find\n> it is still in progress we then spend more time in XidIsInSnapshot()\n> while we check that it is still invisible to us. Even if the\n> transaction we see repeatedly ends, we will still pay the cost in\n> XidIsInSnapshot repeatedly as we execute.\n>\n> Given that code path, I would expect it to suck worse on a live system\n> with many sessions, and even worse with many subtransactions.\n>\n> (1) A proposed fix is attached, but its only a partial one and barely tested.\n>\n> Deeper fixes might be\n>\n> (2) to sort the xid array if we call XidIsInSnapshot too many times\n> in a transaction. I don't think that is worth it, because a long\n> running snapshot may be examined many times, but is unlikely to see\n> multiple in-progress xids repeatedly. Whereas your case seems\n> reasonably common.\n\nYeah, sorting would be a waste of time most of the time.\n\nInstead of adding a new cache field, how about just swapping the matched \nXID to the beginning of the array?\n\nDid you have some simple performance test script for this?\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 16 Jun 2013 18:04:53 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 06.05.2013 04:51, Mark Kirkwood wrote:\n> On 05/05/13 00:49, Simon Riggs wrote:\n>> On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n>>\n>>> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n>>> since we don't *need* to check that, so if we keep checking the same\n>>> xid repeatedly we can reduce the number of checks or avoid xids that\n>>> seem to be long running. That's slightly more coding than my quick\n>>> hack here but seems worth it.\n>>>\n>>> I think we need both (1) and (3) but the attached patch does just (1).\n>>>\n>>> This is a similar optimisation to the one I introduced for\n>>> TransactionIdIsKnownCompleted(), except this applies to repeated\n>>> checking of as yet-incomplete xids, and to bulk concurrent\n>>> transactions.\n>>\n>> ISTM we can improve performance of TransactionIdIsInProgress() by\n>> caching the procno of our last xid.\n>>\n>> Mark, could you retest with both these patches? Thanks.\n>>\n>\n> Thanks Simon, will do and report back.\n\nDid anyone ever try (3) ?\n\nI'm not sure if this the same idea as (3) above, but ISTM that \nHeapTupleSatisfiesMVCC doesn't actually need to call \nTransactionIdIsInProgress(), because it checks XidInMVCCSnapshot(). The \ncomment at the top of tqual.c says:\n\n> * NOTE: must check TransactionIdIsInProgress (which looks in PGXACT array)\n> * before TransactionIdDidCommit/TransactionIdDidAbort (which look in\n> * pg_clog). Otherwise we have a race condition: we might decide that a\n> * just-committed transaction crashed, because none of the tests succeed.\n> * xact.c is careful to record commit/abort in pg_clog before it unsets\n> * MyPgXact->xid in PGXACT array. That fixes that problem, but it also\n> * means there is a window where TransactionIdIsInProgress and\n> * TransactionIdDidCommit will both return true. If we check only\n> * TransactionIdDidCommit, we could consider a tuple committed when a\n> * later GetSnapshotData call will still think the originating transaction\n> * is in progress, which leads to application-level inconsistency.\tThe\n> * upshot is that we gotta check TransactionIdIsInProgress first in all\n> * code paths, except for a few cases where we are looking at\n> * subtransactions of our own main transaction and so there can't be any\n> * race condition.\n\nIf TransactionIdIsInProgress() returns true for a given XID, then surely \nit was also running when the snapshot was taken (or had not even began \nyet). In which case the XidInMVCCSnapshot() call will also return true. \nAm I missing something?\n\nThere's one little problem: we currently only set the hint bits when \nTransactionIdIsInProgress() returns false. If we do that earlier, then \neven though HeapTupleSatisfiesMVCC works correctly thanks to the \nXidInMVCCSnapshot call, other HeapTupleSatisfies* functions that don't \ncall XIdInMVCCSnapshot might see the tuple as committed or aborted too \nearly, if they see the hint bit as set while the transaction is still \nin-progress according to the proc array. Would have to check all the \ncallers of those other HeapTupleSatisfies* functions to verify if that's OK.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 16 Jun 2013 18:23:20 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> I'm not sure if this the same idea as (3) above, but ISTM that \n> HeapTupleSatisfiesMVCC doesn't actually need to call \n> TransactionIdIsInProgress(), because it checks XidInMVCCSnapshot(). The \n> comment at the top of tqual.c says:\n\n>> * NOTE: must check TransactionIdIsInProgress (which looks in PGXACT array)\n>> * before TransactionIdDidCommit/TransactionIdDidAbort (which look in\n>> * pg_clog). Otherwise we have a race condition: we might decide that a\n>> * just-committed transaction crashed, because none of the tests succeed.\n>> * xact.c is careful to record commit/abort in pg_clog before it unsets\n>> * MyPgXact->xid in PGXACT array. That fixes that problem, but it also\n>> * means there is a window where TransactionIdIsInProgress and\n>> * TransactionIdDidCommit will both return true. If we check only\n>> * TransactionIdDidCommit, we could consider a tuple committed when a\n>> * later GetSnapshotData call will still think the originating transaction\n>> * is in progress, which leads to application-level inconsistency.\tThe\n>> * upshot is that we gotta check TransactionIdIsInProgress first in all\n>> * code paths, except for a few cases where we are looking at\n>> * subtransactions of our own main transaction and so there can't be any\n>> * race condition.\n\n> If TransactionIdIsInProgress() returns true for a given XID, then surely \n> it was also running when the snapshot was taken (or had not even began \n> yet). In which case the XidInMVCCSnapshot() call will also return true. \n> Am I missing something?\n\nYes, you're failing to understand the nature of the race condition.\nWhat we're concerned about is that if tqual says a tuple is committed,\nits transaction must be committed (not still in progress) according to\nany *subsequently taken* snapshot. This is not about the contents of\nthe snapshot we're currently consulting; it's about not wanting a tuple\nto be thought committed if anyone could possibly later decide its\ntransaction is still in progress.\n\nIt's possible that this issue would be moot if the only use of\ntransaction-in-progress data were in tqual.c (so that we could assume\nall later tests use the same logic you propose here), but I doubt that\nthat's true.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 16 Jun 2013 12:59:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 16 June 2013 16:04, Heikki Linnakangas <[email protected]> wrote:\n> On 03.05.2013 15:41, Simon Riggs wrote:\n>>\n>> What appears to be happening is we're spending a lot of time in\n>> TransactionIdIsInProgress() so we can set hints and then when we find\n>> it is still in progress we then spend more time in XidIsInSnapshot()\n>> while we check that it is still invisible to us. Even if the\n>> transaction we see repeatedly ends, we will still pay the cost in\n>> XidIsInSnapshot repeatedly as we execute.\n>>\n>> Given that code path, I would expect it to suck worse on a live system\n>> with many sessions, and even worse with many subtransactions.\n>>\n>> (1) A proposed fix is attached, but its only a partial one and barely\n>> tested.\n>>\n>> Deeper fixes might be\n>>\n>> (2) to sort the xid array if we call XidIsInSnapshot too many times\n>> in a transaction. I don't think that is worth it, because a long\n>> running snapshot may be examined many times, but is unlikely to see\n>> multiple in-progress xids repeatedly. Whereas your case seems\n>> reasonably common.\n>\n>\n> Yeah, sorting would be a waste of time most of the time.\n>\n> Instead of adding a new cache field, how about just swapping the matched XID\n> to the beginning of the array?\n\nDo you think that is significantly different from what I've done?\n\n> Did you have some simple performance test script for this?\n\nFiles attached to set up and tear down the test. Needs\nmax_prepared_transactions = 100\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sun, 16 Jun 2013 18:25:25 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
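For anyone rerunning the attached scripts: max_prepared_transactions defaults to 0 on recent releases and only takes effect after a restart, so the setup presumably needs something along these lines in postgresql.conf first (the value 100 is the one Simon states above).

# postgresql.conf -- needs a server restart to take effect
max_prepared_transactions = 100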
{
"msg_contents": "On 16 June 2013 16:23, Heikki Linnakangas <[email protected]> wrote:\n> On 06.05.2013 04:51, Mark Kirkwood wrote:\n>>\n>> On 05/05/13 00:49, Simon Riggs wrote:\n>>>\n>>> On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n>>>\n>>>> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n>>>> since we don't *need* to check that, so if we keep checking the same\n>>>> xid repeatedly we can reduce the number of checks or avoid xids that\n>>>> seem to be long running. That's slightly more coding than my quick\n>>>> hack here but seems worth it.\n>>>>\n>>>> I think we need both (1) and (3) but the attached patch does just (1).\n>>>>\n>>>> This is a similar optimisation to the one I introduced for\n>>>> TransactionIdIsKnownCompleted(), except this applies to repeated\n>>>> checking of as yet-incomplete xids, and to bulk concurrent\n>>>> transactions.\n>>>\n>>>\n>>> ISTM we can improve performance of TransactionIdIsInProgress() by\n>>> caching the procno of our last xid.\n>>>\n>>> Mark, could you retest with both these patches? Thanks.\n>>>\n>>\n>> Thanks Simon, will do and report back.\n>\n>\n> Did anyone ever try (3) ?\n\nNo, because my other patch meant I didn't need to. In other words, my\nother patch speeded up repeated access enough I didn't care about (3)\nanymore.\n\n\n> I'm not sure if this the same idea as (3) above, but ISTM that\n> HeapTupleSatisfiesMVCC doesn't actually need to call\n> TransactionIdIsInProgress(), because it checks XidInMVCCSnapshot(). The\n> comment at the top of tqual.c says:\n>\n>> * NOTE: must check TransactionIdIsInProgress (which looks in PGXACT\n>> array)\n>> * before TransactionIdDidCommit/TransactionIdDidAbort (which look in\n>> * pg_clog). Otherwise we have a race condition: we might decide that a\n>> * just-committed transaction crashed, because none of the tests succeed.\n>> * xact.c is careful to record commit/abort in pg_clog before it unsets\n>> * MyPgXact->xid in PGXACT array. That fixes that problem, but it also\n>> * means there is a window where TransactionIdIsInProgress and\n>> * TransactionIdDidCommit will both return true. If we check only\n>> * TransactionIdDidCommit, we could consider a tuple committed when a\n>> * later GetSnapshotData call will still think the originating transaction\n>> * is in progress, which leads to application-level inconsistency.\n>> The\n>> * upshot is that we gotta check TransactionIdIsInProgress first in all\n>> * code paths, except for a few cases where we are looking at\n>> * subtransactions of our own main transaction and so there can't be any\n>> * race condition.\n>\n>\n> If TransactionIdIsInProgress() returns true for a given XID, then surely it\n> was also running when the snapshot was taken (or had not even began yet). In\n> which case the XidInMVCCSnapshot() call will also return true. Am I missing\n> something?\n>\n> There's one little problem: we currently only set the hint bits when\n> TransactionIdIsInProgress() returns false. If we do that earlier, then even\n> though HeapTupleSatisfiesMVCC works correctly thanks to the\n> XidInMVCCSnapshot call, other HeapTupleSatisfies* functions that don't call\n> XIdInMVCCSnapshot might see the tuple as committed or aborted too early, if\n> they see the hint bit as set while the transaction is still in-progress\n> according to the proc array. 
Would have to check all the callers of those\n> other HeapTupleSatisfies* functions to verify if that's OK.\n\nWell, I looked at that and its too complex and fiddly to be worth it, IMHO.\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 16 Jun 2013 18:28:43 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "(Cc: to pgsql-performance dropped, pgsql-hackers added.)\n\nAt 2013-05-06 09:14:01 +0100, [email protected] wrote:\n>\n> New version of patch attached which fixes a few bugs.\n\nI read the patch, but only skimmed the earlier discussion about it. In\nisolation, I can say that the patch applies cleanly and looks sensible\nfor what it does (i.e., cache pgprocno to speed up repeated calls to\nTransactionIdIsInProgress(somexid)).\n\nIn that sense, it's ready for committer, but I don't know if there's a\nbetter/more complete/etc. way to address the original problem.\n\n-- Abhijit\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Mon, 24 Jun 2013 10:13:50 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 06/23/2013 09:43 PM, Abhijit Menon-Sen wrote:\n> (Cc: to pgsql-performance dropped, pgsql-hackers added.)\n> \n> At 2013-05-06 09:14:01 +0100, [email protected] wrote:\n>>\n>> New version of patch attached which fixes a few bugs.\n> \n> I read the patch, but only skimmed the earlier discussion about it. In\n> isolation, I can say that the patch applies cleanly and looks sensible\n> for what it does (i.e., cache pgprocno to speed up repeated calls to\n> TransactionIdIsInProgress(somexid)).\n> \n> In that sense, it's ready for committer, but I don't know if there's a\n> better/more complete/etc. way to address the original problem.\n\nHas this patch had performance testing? Because of the list crossover I\ndon't have any information on that.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Mon, 08 Jul 2013 10:11:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On 07/08/2013 10:11 AM, Josh Berkus wrote:\n> On 06/23/2013 09:43 PM, Abhijit Menon-Sen wrote:\n>> (Cc: to pgsql-performance dropped, pgsql-hackers added.)\n>>\n>> At 2013-05-06 09:14:01 +0100, [email protected] wrote:\n>>>\n>>> New version of patch attached which fixes a few bugs.\n>>\n>> I read the patch, but only skimmed the earlier discussion about it. In\n>> isolation, I can say that the patch applies cleanly and looks sensible\n>> for what it does (i.e., cache pgprocno to speed up repeated calls to\n>> TransactionIdIsInProgress(somexid)).\n>>\n>> In that sense, it's ready for committer, but I don't know if there's a\n>> better/more complete/etc. way to address the original problem.\n> \n> Has this patch had performance testing? Because of the list crossover I\n> don't have any information on that.\n\nDue to the apparent lack of performance testing, I'm setting this back\nto \"needs review\".\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 10 Jul 2013 09:47:34 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "At 2013-07-10 09:47:34 -0700, [email protected] wrote:\n>\n> Due to the apparent lack of performance testing, I'm setting this back\n> to \"needs review\".\n\nThe original submission (i.e. the message linked from the CF page)\nincludes test results that showed a clear performance improvement.\nHere's an excerpt:\n\n> OK, here's a easily reproducible test...\n> \n> Prep:\n> DROP TABLE IF EXISTS plan;\n> CREATE TABLE plan\n> (\n> id INTEGER NOT NULL,\n> typ INTEGER NOT NULL,\n> dat TIMESTAMP,\n> val TEXT NOT NULL\n> );\n> insert into plan select generate_series(1,100000), 0,\n> current_timestamp, 'some texts';\n> CREATE UNIQUE INDEX plan_id ON plan(id);\n> CREATE INDEX plan_dat ON plan(dat);\n> \n> testcase.pgb\n> select count(*) from plan where dat is null and typ = 3;\n> \n> Session 1:\n> pgbench -n -f testcase.pgb -t 100\n> \n> Session 2:\n> BEGIN; insert into plan select 1000000 + generate_series(1, 100000),\n> 3, NULL, 'b';\n> \n> Transaction rate in Session 1: (in tps)\n> (a) before we run Session 2:\n> Current: 5600tps\n> Patched: 5600tps\n> \n> (b) after Session 2 has run, yet before transaction end\n> Current: 56tps\n> Patched: 65tps\n> \n> (c ) after Session 2 has aborted\n> Current/Patched: 836, 1028, 5400tps\n> VACUUM improves timing again\n> \n> New version of patch attached which fixes a few bugs.\n> \n> Patch works and improves things, but we're still swamped by the block\n> accesses via the index.\n\n-- Abhijit\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 11 Jul 2013 10:39:58 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
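A sketch of how the quoted before/during/after numbers can be reproduced by hand, assuming the prep SQL and testcase.pgb from the excerpt above; the important detail is that session 2 must stay inside its open transaction (an interactive psql) while session 1 is re-run.

-- Terminal 1 ("before"):          pgbench -n -f testcase.pgb -t 100
--
-- Terminal 2, interactive psql -- leave the transaction open:
BEGIN;
INSERT INTO plan SELECT 1000000 + generate_series(1, 100000), 3, NULL, 'b';
--
-- Terminal 1 again ("during"):    pgbench -n -f testcase.pgb -t 100
--
-- Terminal 2, to get the "after Session 2 has aborted" case:
ROLLBACK;
--
-- Terminal 1 again ("after"):     pgbench -n -f testcase.pgb -t 100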
{
"msg_contents": "On 07/10/2013 10:09 PM, Abhijit Menon-Sen wrote:\n> At 2013-07-10 09:47:34 -0700, [email protected] wrote:\n>>\n>> Due to the apparent lack of performance testing, I'm setting this back\n>> to \"needs review\".\n> \n> The original submission (i.e. the message linked from the CF page)\n> includes test results that showed a clear performance improvement.\n> Here's an excerpt:\n\nI didn't see that, and nobody replied to my email.\n\nSo, where are we with this patch, then?\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 11 Jul 2013 17:47:58 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "At 2013-07-11 17:47:58 -0700, [email protected] wrote:\n>\n> So, where are we with this patch, then?\n\nIt's ready for committer.\n\n-- Abhijit\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 12 Jul 2013 11:48:36 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On Mon, May 6, 2013 at 1:14 AM, Simon Riggs <[email protected]> wrote:\n> On 6 May 2013 02:51, Mark Kirkwood <[email protected]> wrote:\n>> On 05/05/13 00:49, Simon Riggs wrote:\n>>>\n>>> On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n>>>\n>>>> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n>>>> since we don't *need* to check that, so if we keep checking the same\n>>>> xid repeatedly we can reduce the number of checks or avoid xids that\n>>>> seem to be long running. That's slightly more coding than my quick\n>>>> hack here but seems worth it.\n>>>>\n>>>> I think we need both (1) and (3) but the attached patch does just (1).\n>>>>\n>>>> This is a similar optimisation to the one I introduced for\n>>>> TransactionIdIsKnownCompleted(), except this applies to repeated\n>>>> checking of as yet-incomplete xids, and to bulk concurrent\n>>>> transactions.\n>>>\n>>>\n>>> ISTM we can improve performance of TransactionIdIsInProgress() by\n>>> caching the procno of our last xid.\n>>>\n>>> Mark, could you retest with both these patches? Thanks.\n>>>\n>>\n>> Thanks Simon, will do and report back.\n>\n> OK, here's a easily reproducible test...\n>\n> Prep:\n> DROP TABLE IF EXISTS plan;\n> CREATE TABLE plan\n> (\n> id INTEGER NOT NULL,\n> typ INTEGER NOT NULL,\n> dat TIMESTAMP,\n> val TEXT NOT NULL\n> );\n> insert into plan select generate_series(1,100000), 0,\n> current_timestamp, 'some texts';\n> CREATE UNIQUE INDEX plan_id ON plan(id);\n> CREATE INDEX plan_dat ON plan(dat);\n>\n> testcase.pgb\n> select count(*) from plan where dat is null and typ = 3;\n>\n> Session 1:\n> pgbench -n -f testcase.pgb -t 100\n>\n> Session 2:\n> BEGIN; insert into plan select 1000000 + generate_series(1, 100000),\n> 3, NULL, 'b';\n>\n> Transaction rate in Session 1: (in tps)\n> (a) before we run Session 2:\n> Current: 5600tps\n> Patched: 5600tps\n>\n> (b) after Session 2 has run, yet before transaction end\n> Current: 56tps\n> Patched: 65tps\n\n\nWhen I run this test case in single-client mode, I don't see nearly\nthat much speedup, it just goes from 38.99 TPS to 40.12 TPS. But\nstill, it is a speedup, and very reproducible (t-test p-val < 1e-40, n\nof 21 for both)\n\nBut I also tried it with 4 pgbench clients, and ran into a collapse of\nthe performance, TPS dropping down to ~8 TPS. It is too variable to\nfigure out how reliable the speed-up with this patch is, so far.\nApparently they are all fighting over the spinlock on the\nProcArrayLock.\n\nThis is a single quad core, \"Intel(R) Xeon(R) CPU X3210 @ 2.13GHz\"\n\nSo I agree with (3) above, about not checking\nTransactionIdIsInProgress repeatedly. Or could we change the order of\noperations so that TransactionIdIsInProgress is checked only after\nXidInMVCCSnapshot?\n\nOr perhaps the comment \"XXX Can we test without the lock first?\" could\nbe implemented and save the day here?\n\nLooking at the code, there is something that bothers me about this part:\n\n pxid = cxid = InvalidTransactionId;\n return false;\n\nIf it is safe to return false at this point (as determined by the\nstale values of pxid and cxid) then why would we clear out the stale\nvalues so they can't be used in the future to also short circuit\nthings? On the other hand, if the stale values need to be cleared so\nthey are not misleading to future invocations, why is it safe for this\ninvocation to have made a decision based on them? 
Maybe with more\nthought I will see why this is so.\n\n....\n\n\n>\n> Which brings me back to Mark's original point, which is that we are\n> x100 times slower in this case and it *is* because the choice of\n> IndexScan is a bad one for this situation.\n>\n> After some thought on this, I do think we need to do something about\n> it directly, rather than by tuning infrastructire (as I just\n> attempted). The root cause here is that IndexScan plans are sensitive\n> to mistakes in data distribution, much more so than other plan types.\n>\n> The two options, broadly, are to either\n>\n> 1. avoid IndexScans in the planner unless they have a *significantly*\n> better cost. At the moment we use IndexScans if cost is lowest, even\n> if that is only by a whisker.\n\nThis wouldn't work well in Mark's specific case, because the problem\nis that it is using the wrong index, not that it is using an index at\nall. There are two candidate indexes, and one looks slightly better\nbut then gets ruined by the in-progress insert, while the other looks\nslightly worse but would be resistant to the in-progress insert.\nSwitching from the bad index to the seq scan is not going to fix\nthings. I don't think there is any heuristic solution here other than\nto keep track of the invisible data distribution as well as the\nvisible data distribution.\n\nThe more I've thought about it, the more I see the charm of Mark's\noriginal proposal. Why not build the statistics assuming that the\nin-progress insert will commit? It is not a complete solution,\nbecause autoanalyze will not get triggered until the transaction\ncompletes. But why not let the manual ANALYZE get the benefit of\nseeing them?\n\nThe statistics serve two masters. One is to estimate how many rows\nwill actually be returned. The other is to estimate how much work it\nwill take to return them (including the work of groveling through a\nlist of in-process tuples). Right now those are implicitly considered\nthe same thing--we could change that and keep separate sets of\nstatistics, but I think we could improve things some without doing\nthat. For the first case of estimating actual rows returned, I think\ncounting in-progress rows is a wash. It seems just about as likely\nthat the bulk transaction which was in progress at the time of the\nlast ANALYZE has already committed at the time of the planning (but\nnot yet completed the autoanalyze) as it is that the bulk transaction\nis still in progress at the time of the planning; which means counting\nthe rows as if they committed is sometimes right and sometimes wrong\nbut in about equal measure. But for the second master, counting the\nin progress rows seems like a win all the time. Either we actually\nsee them and do the work, or we refuse to see them but still have to\ndo the work to figure that out.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jul 2013 15:47:13 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
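The four-client run Jeff mentions is presumably just the same custom script driven by multiple pgbench connections, something like the line below (only -c 4 is implied by his text; the other flags are carried over from Simon's single-client command).

pgbench -n -f testcase.pgb -c 4 -t 100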
{
"msg_contents": "On Wed, Jul 10, 2013 at 10:09 PM, Abhijit Menon-Sen <[email protected]> wrote:\n> At 2013-07-10 09:47:34 -0700, [email protected] wrote:\n>>\n>> Due to the apparent lack of performance testing, I'm setting this back\n>> to \"needs review\".\n>\n> The original submission (i.e. the message linked from the CF page)\n> includes test results that showed a clear performance improvement.\n\nI think the reviewer of a performance patch should do some independent\ntesting of the performance, to replicate the author's numbers; and\nhopefully with a few different scenarios.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 12 Jul 2013 16:25:14 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "At 2013-07-12 16:25:14 -0700, [email protected] wrote:\n>\n> I think the reviewer of a performance patch should do some independent\n> testing of the performance, to replicate the author's numbers; and\n> hopefully with a few different scenarios.\n\nYou're quite right. I apologise for being lazy; doubly so because I\ncan't actually see any difference while running the test case with\nthe patches applied.\n\nunpatched:\n before: 1629.831391, 1559.793758, 1498.765018, 1639.384038\n during: 37.434492, 37.044989, 37.112422, 36.950895\n after : 46.591688, 46.341256, 46.042169, 46.260684\n\npatched:\n before: 1813.091975, 1798.923524, 1629.301356, 1606.849033\n during: 37.344987, 37.207359, 37.406788, 37.316925\n after : 46.657747, 46.537420, 46.746377, 46.577052\n\n(\"before\" is before starting session 2; \"during\" is after session 2\ninserts, but before it commits; \"after\" is after session 2 issues a\nrollback.)\n\nThe timings above are with both xid_in_snapshot_cache.v1.patch and\ncache_TransactionIdInProgress.v2.patch applied, but the numbers are\nnot noticeably different with only the first patch applied. After I\n\"vacuum plan\", the timings in both cases return to normal.\n\nIn a quick test with gdb (and also in perf report output), I didn't see\nthe following block in procarray.c being entered at all:\n\n+ if (max_prepared_xacts == 0 && pgprocno >= 0 &&\n+ (TransactionIdEquals(xid, pxid) || TransactionIdEquals(xid, cxid)))\n+ {\n …\n\nI'll keep looking, but comments are welcome. I'm setting this back to\n\"Needs Review\" in the meantime.\n\n-- Abhijit\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Sat, 13 Jul 2013 14:19:23 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "At 2013-07-13 14:19:23 +0530, [email protected] wrote:\n>\n> The timings above are with both xid_in_snapshot_cache.v1.patch and\n> cache_TransactionIdInProgress.v2.patch applied\n\nFor anyone who wants to try to reproduce the results, here's the patch I\nused, which is both patches above plus some typo fixes in comments.\n\n-- Abhijit\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Sat, 13 Jul 2013 14:41:57 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On Sat, Jul 13, 2013 at 1:47 AM, Jeff Janes <[email protected]> wrote:\n> But I also tried it with 4 pgbench clients, and ran into a collapse of\n> the performance, TPS dropping down to ~8 TPS. It is too variable to\n> figure out how reliable the speed-up with this patch is, so far.\n> Apparently they are all fighting over the spinlock on the\n> ProcArrayLock.\n>\n> This is a single quad core, \"Intel(R) Xeon(R) CPU X3210 @ 2.13GHz\"\n>\n> So I agree with (3) above, about not checking\n> TransactionIdIsInProgress repeatedly. Or could we change the order of\n> operations so that TransactionIdIsInProgress is checked only after\n> XidInMVCCSnapshot?\n\nI haven't checked the patch in detail, but it sounds like my proposal\nfor CSN based snapshots[1] could help here. Using it\nTransactionIdIsInProgress can be done completely lock-free. It would\ninclude a direct dense array lookup, read barrier and a check of the\ndense/sparse horizon, and if necessary a binary search in the sparse\narray and another read barrier and check for sparse array version\ncounter.\n\nI plan to start working on the patch next week. I hope to have a first\ncut available for CF2.\n\n[1] http://www.postgresql.org/message-id/CA+CSw_tEpJ=md1zgxPkjH6CWDnTDft4gBi=+P9SnoC+Wy3pKdA@mail.gmail.com\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Jul 2013 16:41:38 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
},
{
"msg_contents": "On Sunday, June 16, 2013, Heikki Linnakangas wrote:\n\n> On 06.05.2013 04:51, Mark Kirkwood wrote:\n>\n>> On 05/05/13 00:49, Simon Riggs wrote:\n>>\n>>> On 3 May 2013 13:41, Simon Riggs <[email protected]> wrote:\n>>>\n>>> (3) to make the check on TransactionIdIsInProgress() into a heuristic,\n>>>> since we don't *need* to check that, so if we keep checking the same\n>>>> xid repeatedly we can reduce the number of checks or avoid xids that\n>>>> seem to be long running. That's slightly more coding than my quick\n>>>> hack here but seems worth it.\n>>>>\n>>>> I think we need both (1) and (3) but the attached patch does just (1).\n>>>>\n>>>> This is a similar optimisation to the one I introduced for\n>>>> TransactionIdIsKnownCompleted(**), except this applies to repeated\n>>>> checking of as yet-incomplete xids, and to bulk concurrent\n>>>> transactions.\n>>>>\n>>>\n>>> ISTM we can improve performance of TransactionIdIsInProgress() by\n>>> caching the procno of our last xid.\n>>>\n>>> Mark, could you retest with both these patches? Thanks.\n>>>\n>>>\n>> Thanks Simon, will do and report back.\n>>\n>\n> Did anyone ever try (3) ?\n>\n> I'm not sure if this the same idea as (3) above, but ISTM that\n> HeapTupleSatisfiesMVCC doesn't actually need to call\n> TransactionIdIsInProgress(), because it checks XidInMVCCSnapshot(). The\n> comment at the top of tqual.c says:\n>\n\nOr at least, it doesn't need to call TransactionIdIsInProgress() if\nXidInMVCCSnapshot() returned true.\n\n\n\n>\n> * NOTE: must check TransactionIdIsInProgress (which looks in PGXACT\n>> array)\n>> * before TransactionIdDidCommit/**TransactionIdDidAbort (which look in\n>> * pg_clog). Otherwise we have a race condition: we might decide that a\n>> * just-committed transaction crashed, because none of the tests succeed.\n>> * xact.c is careful to record commit/abort in pg_clog before it unsets\n>> * MyPgXact->xid in PGXACT array. That fixes that problem, but it also\n>> * means there is a window where TransactionIdIsInProgress and\n>> * TransactionIdDidCommit will both return true. If we check only\n>> * TransactionIdDidCommit, we could consider a tuple committed when a\n>> * later GetSnapshotData call will still think the originating transaction\n>> * is in progress, which leads to application-level inconsistency.\n>> The\n>> * upshot is that we gotta check TransactionIdIsInProgress first in all\n>> * code paths, except for a few cases where we are looking at\n>> * subtransactions of our own main transaction and so there can't be any\n>> * race condition.\n>>\n>\n> If TransactionIdIsInProgress() returns true for a given XID, then surely\n> it was also running when the snapshot was taken (or had not even began\n> yet). In which case the XidInMVCCSnapshot() call will also return true. Am\n> I missing something?\n>\n> There's one little problem: we currently only set the hint bits when\n> TransactionIdIsInProgress() returns false. If we do that earlier,\n\n\nBut why would we do that earlier? If we never bother to call\nTransactionIdIsInProgress(), then we just don't set the hint bits, because\nwe don't know what to set them to. 
It can't matter what order we call\nTransactionIdIsInProgress and TransactionIdDidCommit in if we don't call\neither of them at all during this invocation.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 13 Jul 2013 14:29:20 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In progress INSERT wrecks plans on table"
}
] |
[
{
"msg_contents": "Hi,\n\nthis is more of a report than a question, because we thought this\nwould be interesting to share.\n\nWe recently (finally) migrated an Request Tracker 3.4 database running\non 8.1.19 to 9.2.4. The queries used by rt3.4 are sometimes weird, but\n8.1 coped without too much tuning. The schema looks like this:\n\nhttp://bestpractical.com/rt/3.4-schema.png\n\nOne query that took about 80ms on 8.1.19 took 8s on 9.2.4:\n\nSELECT DISTINCT main.* FROM Users main , Principals Principals_1, ACL ACL_2, Groups Groups_3, CachedGroupMembers CachedGroupMembers_4\n WHERE ((ACL_2.RightName = 'OwnTicket'))\n AND ((CachedGroupMembers_4.MemberId = Principals_1.id))\n AND ((Groups_3.id = CachedGroupMembers_4.GroupId))\n AND ((Principals_1.Disabled = '0') OR (Principals_1.Disabled = '0'))\n AND ((Principals_1.id != '1'))\n AND ((main.id = Principals_1.id))\n AND (\n ( ACL_2.PrincipalId = Groups_3.id AND ACL_2.PrincipalType = 'Group'\n AND ( Groups_3.Domain = 'SystemInternal' OR Groups_3.Domain = 'UserDefined' OR Groups_3.Domain = 'ACLEquivalence'))\n OR ( ( (Groups_3.Domain = 'RT::Queue-Role' AND Groups_3.Instance = 10) OR ( Groups_3.Domain = 'RT::Ticket-Role' AND Groups_3.Instance = 999028) ) AND Groups_3.Type = ACL_2.PrincipalType)\n )\n AND (ACL_2.ObjectType = 'RT::System' OR (ACL_2.ObjectType = 'RT::Queue' AND ACL_2.ObjectId = 10) )\n ORDER BY main.Name ASC;\n\n\n8.1 plan: (http://explain.depesz.com/s/gZ6)\n\n Unique (cost=1117.67..1118.46 rows=9 width=1115) (actual time=82.646..85.695 rows=439 loops=1)\n -> Sort (cost=1117.67..1117.70 rows=9 width=1115) (actual time=82.645..82.786 rows=1518 loops=1)\n Sort Key: main.name, main.id, main.\"password\", main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.emailencoding, main.webencoding, main.externalcontactinfoid, main.contactinfosystem, main.externalauthid, main.authsystem, main.gecos, main.homephone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.pgpkey, main.creator, main.created, main.lastupdatedby, main.lastupdated\n -> Nested Loop (cost=10.51..1117.53 rows=9 width=1115) (actual time=0.205..23.688 rows=1518 loops=1)\n -> Nested Loop (cost=10.51..1087.81 rows=9 width=1119) (actual time=0.193..13.495 rows=1600 loops=1)\n -> Nested Loop (cost=10.51..1060.15 rows=9 width=4) (actual time=0.175..3.307 rows=1635 loops=1)\n -> Nested Loop (cost=10.51..536.13 rows=4 width=4) (actual time=0.161..1.057 rows=23 loops=1)\n Join Filter: (((\"outer\".principalid = \"inner\".id) AND ((\"outer\".principaltype)::text = 'Group'::text) AND (((\"inner\".\"domain\")::text = 'SystemInternal'::text) OR ((\"inner\".\"domain\")::text = 'UserDefined'::text) OR ((\"inner\".\"domain\")::text = 'ACLEquivalence'::text))) OR (((((\"inner\".\"domain\")::text = 'RT::Queue-Role'::text) AND (\"inner\".instance = 10)) OR (((\"inner\".\"domain\")::text = 'RT::Ticket-Role'::text) AND (\"inner\".instance = 999028))) AND ((\"inner\".\"type\")::text = (\"outer\".principaltype)::text)))\n -> Bitmap Heap Scan on acl acl_2 (cost=4.24..61.15 rows=33 width=13) (actual time=0.107..0.141 rows=22 loops=1)\n Recheck Cond: ((((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text)) OR (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10)))\n -> BitmapOr (cost=4.24..4.24 rows=34 width=0) (actual time=0.097..0.097 
rows=0 loops=1)\n -> Bitmap Index Scan on acl1 (cost=0.00..2.13 rows=22 width=0) (actual time=0.054..0.054 rows=8 loops=1)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text))\n -> Bitmap Index Scan on acl1 (cost=0.00..2.11 rows=13 width=0) (actual time=0.041..0.041 rows=14 loops=1)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10))\n -> Bitmap Heap Scan on groups groups_3 (cost=6.27..14.32 rows=2 width=36) (actual time=0.036..0.036 rows=1 loops=22)\n Recheck Cond: ((\"outer\".principalid = groups_3.id) OR ((((groups_3.\"type\")::text = (\"outer\".principaltype)::text) AND (groups_3.instance = 10) AND ((groups_3.\"domain\")::text = 'RT::Queue-Role'::text)) OR (((groups_3.\"type\")::text = (\"outer\".principaltype)::text) AND (groups_3.instance = 999028) AND ((groups_3.\"domain\")::text = 'RT::Ticket-Role'::text))))\n Filter: (((\"domain\")::text = 'SystemInternal'::text) OR ((\"domain\")::text = 'UserDefined'::text) OR ((\"domain\")::text = 'ACLEquivalence'::text) OR (((\"domain\")::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (instance = 999028))\n -> BitmapOr (cost=6.27..6.27 rows=2 width=0) (actual time=0.033..0.033 rows=0 loops=22)\n -> Bitmap Index Scan on groups_pkey (cost=0.00..2.00 rows=1 width=0) (actual time=0.006..0.006 rows=1 loops=22)\n Index Cond: (\"outer\".principalid = groups_3.id)\n -> BitmapOr (cost=4.02..4.02 rows=1 width=0) (actual time=0.025..0.025 rows=0 loops=22)\n -> Bitmap Index Scan on groups2 (cost=0.00..2.01 rows=1 width=0) (actual time=0.013..0.013 rows=0 loops=22)\n Index Cond: (((groups_3.\"type\")::text = (\"outer\".principaltype)::text) AND (groups_3.instance = 10) AND ((groups_3.\"domain\")::text = 'RT::Queue-Role'::text))\n -> Bitmap Index Scan on groups2 (cost=0.00..2.01 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=22)\n Index Cond: (((groups_3.\"type\")::text = (\"outer\".principaltype)::text) AND (groups_3.instance = 999028) AND ((groups_3.\"domain\")::text = 'RT::Ticket-Role'::text))\n -> Index Scan using cachedgroupmembers3 on cachedgroupmembers cachedgroupmembers_4 (cost=0.00..130.13 rows=70 width=8) (actual time=0.007..0.074 rows=71 loops=23)\n Index Cond: (\"outer\".id = cachedgroupmembers_4.groupid)\n -> Index Scan using users_pkey on users main (cost=0.00..3.06 rows=1 width=1115) (actual time=0.004..0.005 rows=1 loops=1635)\n Index Cond: (main.id = \"outer\".memberid)\n -> Index Scan using principals_pkey on principals principals_1 (cost=0.00..3.29 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1600)\n Index Cond: (\"outer\".memberid = principals_1.id)\n Filter: ((disabled = 0) AND (id <> 1))\n Total runtime: 86.293 ms\n(34 Zeilen)\n\n\nUntuned 9.2 plan: (http://explain.depesz.com/s/mQw)\n\n Unique (cost=784205.94..796940.08 rows=145533 width=1061) (actual time=9710.683..9713.175 rows=439 loops=1)\n -> Sort (cost=784205.94..784569.77 rows=145533 width=1061) (actual time=9710.682..9710.792 rows=1518 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.emailencoding, main.webencoding, main.externalcontactinfoid, main.contactinfosystem, main.externalauthid, main.authsystem, main.gecos, main.homephone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.pgpkey, 
main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 569kB\n -> Hash Join (cost=379261.01..568771.27 rows=145533 width=1061) (actual time=6432.551..9673.393 rows=1518 loops=1)\n Hash Cond: (principals_1.id = main.id)\n -> Seq Scan on principals principals_1 (cost=0.00..111112.14 rows=4970343 width=4) (actual time=0.024..1903.364 rows=4970357 loops=1)\n Filter: ((id <> 1) AND (disabled = 0))\n Rows Removed by Filter: 149\n -> Hash (cost=357969.80..357969.80 rows=145537 width=1065) (actual time=5887.121..5887.121 rows=1600 loops=1)\n Buckets: 1024 Batches: 256 Memory Usage: 17kB\n -> Merge Join (cost=327489.90..357969.80 rows=145537 width=1065) (actual time=5618.604..5880.608 rows=1600 loops=1)\n Merge Cond: (main.id = cachedgroupmembers_4.memberid)\n -> Index Scan using users_pkey on users main (cost=0.00..27100.40 rows=389108 width=1061) (actual time=0.032..205.696 rows=383693 loops=1)\n -> Materialize (cost=327350.03..328077.71 rows=145536 width=4) (actual time=5618.545..5619.315 rows=1635 loops=1)\n -> Sort (cost=327350.03..327713.87 rows=145536 width=4) (actual time=5618.539..5618.940 rows=1635 loops=1)\n Sort Key: cachedgroupmembers_4.memberid\n Sort Method: quicksort Memory: 125kB\n -> Hash Join (cost=1868.02..312878.08 rows=145536 width=4) (actual time=0.890..5617.609 rows=1635 loops=1)\n Hash Cond: (cachedgroupmembers_4.groupid = groups_3.id)\n -> Seq Scan on cachedgroupmembers cachedgroupmembers_4 (cost=0.00..185630.60 rows=10696560 width=8) (actual time=0.018..2940.137 rows=10696622 loops=1)\n -> Hash (cost=844.83..844.83 rows=62335 width=4) (actual time=0.760..0.760 rows=23 loops=1)\n Buckets: 4096 Batches: 4 Memory Usage: 12kB\n -> Nested Loop (cost=24.57..844.83 rows=62335 width=4) (actual time=0.109..0.633 rows=23 loops=1)\n -> Bitmap Heap Scan on acl acl_2 (cost=8.90..61.36 rows=33 width=10) (actual time=0.070..0.112 rows=22 loops=1)\n Recheck Cond: ((((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text)) OR (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10)))\n -> BitmapOr (cost=8.90..8.90 rows=35 width=0) (actual time=0.064..0.064 rows=0 loops=1)\n -> Bitmap Index Scan on acl1 (cost=0.00..4.47 rows=22 width=0) (actual time=0.036..0.036 rows=8 loops=1)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text))\n -> Bitmap Index Scan on acl1 (cost=0.00..4.41 rows=13 width=0) (actual time=0.026..0.026 rows=14 loops=1)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10))\n -> Bitmap Heap Scan on groups groups_3 (cost=15.67..23.73 rows=1 width=30) (actual time=0.022..0.023 rows=1 loops=22)\n Recheck Cond: ((acl_2.principalid = id) OR ((((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text)) OR (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))))\n Filter: ((((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text) OR (((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND (((acl_2.principalid = id) AND ((acl_2.principaltype)::text = 'Group'::text) AND (((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 
'ACLEquivalence'::text))) OR (((((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND ((type)::text = (acl_2.principaltype)::text))))\n -> BitmapOr (cost=15.67..15.67 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=22)\n -> Bitmap Index Scan on groups_pkey (cost=0.00..4.76 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=22)\n Index Cond: (acl_2.principalid = id)\n -> BitmapOr (cost=10.66..10.66 rows=1 width=0) (actual time=0.013..0.013 rows=0 loops=22)\n -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text))\n -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))\n Total runtime: 9713.547 ms\n(43 Zeilen)\n\n\nThings got a lot better with enable_seqscan=off: (http://explain.depesz.com/s/WPt)\n\n Unique (cost=1509543.77..1522277.91 rows=145533 width=1061) (actual time=306.972..309.551 rows=439 loops=1)\n -> Sort (cost=1509543.77..1509907.60 rows=145533 width=1061) (actual time=306.971..307.108 rows=1518 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.emailencoding, main.webencoding, main.externalcontactinfoid, main.contactinfosystem, main.externalauthid, main.authsystem, main.gecos, main.homephone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.pgpkey, main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 569kB\n -> Nested Loop (cost=828855.15..1294109.10 rows=145533 width=1061) (actual time=2.951..267.996 rows=1518 loops=1)\n Join Filter: (main.id = principals_1.id)\n -> Merge Join (cost=828855.15..858971.23 rows=145537 width=1065) (actual time=2.940..260.852 rows=1600 loops=1)\n Merge Cond: (cachedgroupmembers_4.memberid = main.id)\n -> Sort (cost=828715.29..829079.13 rows=145537 width=4) (actual time=2.903..3.321 rows=1635 loops=1)\n Sort Key: cachedgroupmembers_4.memberid\n Sort Method: quicksort Memory: 125kB\n -> Nested Loop (cost=15.67..814243.24 rows=145537 width=4) (actual time=0.234..2.407 rows=1635 loops=1)\n -> Nested Loop (cost=15.67..1108.61 rows=62334 width=4) (actual time=0.219..0.903 rows=23 loops=1)\n -> Index Only Scan using acl1 on acl acl_2 (cost=0.00..325.14 rows=33 width=10) (actual time=0.121..0.367 rows=22 loops=1)\n Index Cond: (rightname = 'OwnTicket'::text)\n Filter: (((objecttype)::text = 'RT::System'::text) OR (((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10)))\n Rows Removed by Filter: 220\n Heap Fetches: 242\n -> Bitmap Heap Scan on groups groups_3 (cost=15.67..23.73 rows=1 width=30) (actual time=0.023..0.023 rows=1 loops=22)\n Recheck Cond: ((acl_2.principalid = id) OR ((((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text)) OR (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))))\n Filter: ((((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR 
((domain)::text = 'ACLEquivalence'::text) OR (((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND (((acl_2.principalid = id) AND ((acl_2.principaltype)::text = 'Group'::text) AND (((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text))) OR (((((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND ((type)::text = (acl_2.principaltype)::text))))\n -> BitmapOr (cost=15.67..15.67 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=22)\n -> Bitmap Index Scan on groups_pkey (cost=0.00..4.76 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=22)\n Index Cond: (acl_2.principalid = id)\n -> BitmapOr (cost=10.66..10.66 rows=1 width=0) (actual time=0.013..0.013 rows=0 loops=22)\n -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text))\n -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))\n -> Index Scan using cachedgroupmembers3 on cachedgroupmembers cachedgroupmembers_4 (cost=0.00..12.89 rows=15 width=8) (actual time=0.009..0.049 rows=71 loops=23)\n Index Cond: (groupid = groups_3.id)\n -> Index Scan using users_pkey on users main (cost=0.00..27100.40 rows=389108 width=1061) (actual time=0.030..201.696 rows=384832 loops=1)\n -> Index Scan using principals_pkey on principals principals_1 (cost=0.00..2.98 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1600)\n Index Cond: (id = cachedgroupmembers_4.memberid)\n Filter: ((id <> 1) AND (disabled = 0))\n Rows Removed by Filter: 0\n Total runtime: 309.868 ms\n(37 Zeilen)\n\n\nA similar result was with seqscans re-enabled, but effective_cache_size=32GB\n(anything >= 2GB worked), and the cachedgroupmembers.memberis stats target set\nto 500: (http://explain.depesz.com/s/GJa)\n\n Unique (cost=422364.07..434891.88 rows=143175 width=1085) (actual time=313.184..315.682 rows=439 loops=1)\n -> Sort (cost=422364.07..422722.01 rows=143175 width=1085) (actual time=313.181..313.301 rows=1518 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.emailencoding, main.webencoding, main.externalcontactinfoid, main.contactinfosystem, main.externalauthid, main.authsystem, main.gecos, main.homephone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.pgpkey, main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 569kB\n -> Nested Loop (cost=197638.24..342080.48 rows=143175 width=1085) (actual time=4.382..274.157 rows=1518 loops=1)\n Join Filter: (main.id = principals_1.id)\n -> Merge Join (cost=197638.24..220156.59 rows=143179 width=1089) (actual time=4.369..267.021 rows=1600 loops=1)\n Merge Cond: (main.id = cachedgroupmembers_4.memberid)\n -> Index Scan using users_pkey on users main (cost=0.00..19537.00 rows=389111 width=1085) (actual time=0.033..206.621 rows=383693 loops=1)\n -> 
Sort (cost=197499.17..197857.11 rows=143178 width=4) (actual time=4.326..4.737 rows=1635 loops=1)\n Sort Key: cachedgroupmembers_4.memberid\n Sort Method: quicksort Memory: 125kB\n -> Nested Loop (cost=15.67..185237.80 rows=143178 width=4) (actual time=0.088..3.749 rows=1635 loops=1)\n -> Nested Loop (cost=15.67..937.99 rows=61327 width=4) (actual time=0.073..2.047 rows=23 loops=1)\n -> Seq Scan on acl acl_2 (cost=0.00..154.68 rows=33 width=10) (actual time=0.022..1.485 rows=22 loops=1)\n Filter: (((rightname)::text = 'OwnTicket'::text) AND (((objecttype)::text = 'RT::System'::text) OR (((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10))))\n Rows Removed by Filter: 4912\n -> Bitmap Heap Scan on groups groups_3 (cost=15.67..23.73 rows=1 width=30) (actual time=0.024..0.024 rows=1 loops=22)\n Recheck Cond: ((acl_2.principalid = id) OR ((((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text)) OR (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))))\n Filter: ((((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text) OR (((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (instance = 999028)) AND (((acl_2.principalid = id) AND ((acl_2.principaltype)::text = 'Group'::text) AND (((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text))) OR (((((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND ((type)::text = (acl_2.principaltype)::text))))\n -> BitmapOr (cost=15.67..15.67 rows=2 width=0) (actual time=0.020..0.020 rows=0 loops=22)\n -> Bitmap Index Scan on groups_pkey (cost=0.00..4.76 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=22)\n Index Cond: (acl_2.principalid = id)\n -> BitmapOr (cost=10.66..10.66 rows=1 width=0) (actual time=0.014..0.014 rows=0 loops=22)\n -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text))\n -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))\n -> Index Scan using cachedgroupmembers3 on cachedgroupmembers cachedgroupmembers_4 (cost=0.00..2.86 rows=15 width=8) (actual time=0.009..0.055 rows=71 loops=23)\n Index Cond: (groupid = groups_3.id)\n -> Index Scan using principals_pkey on principals principals_1 (cost=0.00..0.84 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1600)\n Index Cond: (id = cachedgroupmembers_4.memberid)\n Filter: ((id <> 1) AND (disabled = 0))\n Rows Removed by Filter: 0\n Total runtime: 316.054 ms\n(35 Zeilen)\n\n\nThis plan is still slower than the origial 80ms, due to the Index Scan on\nusers_pkey. 
Setting cpu_tuple_cost = 0.06 fixed this:\n(http://explain.depesz.com/s/R0g)\n\n Unique (cost=430656.51..443500.29 rows=146786 width=1065) (actual time=50.400..52.876 rows=441 loops=1)\n -> Sort (cost=430656.51..431023.48 rows=146786 width=1065) (actual time=50.399..50.504 rows=1522 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.emailencoding, main.webencoding, main.externalcontactinfoid, main.contactinfosystem, main.externalauthid, main.authsystem, main.gecos, main.homephone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.pgpkey, main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 570kB\n -> Nested Loop (cost=8.36..368962.31 rows=146786 width=1065) (actual time=0.326..15.253 rows=1522 loops=1)\n -> Nested Loop (cost=8.36..309751.83 rows=146786 width=8) (actual time=0.232..9.224 rows=1551 loops=1)\n -> Nested Loop (cost=8.36..211474.01 rows=146790 width=4) (actual time=0.225..2.590 rows=1639 loops=1)\n -> Nested Loop (cost=8.36..550.15 rows=63034 width=4) (actual time=0.213..0.919 rows=23 loops=1)\n -> Index Only Scan using acl1 on acl acl_2 (cost=0.00..133.99 rows=33 width=10) (actual time=0.117..0.370 rows=22 loops=1)\n Index Cond: (rightname = 'OwnTicket'::text)\n Filter: (((objecttype)::text = 'RT::System'::text) OR (((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10)))\n Rows Removed by Filter: 220\n Heap Fetches: 242\n -> Bitmap Heap Scan on groups groups_3 (cost=8.36..12.55 rows=1 width=30) (actual time=0.023..0.023 rows=1 loops=22)\n Recheck Cond: ((acl_2.principalid = id) OR ((((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text)) OR (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))))\n Filter: ((((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text) OR (((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (instance = 999028)) AND (((acl_2.principalid = id) AND ((acl_2.principaltype)::text = 'Group'::text) AND (((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text))) OR (((((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND ((type)::text = (acl_2.principaltype)::text))))\n -> BitmapOr (cost=8.36..8.36 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=22)\n -> Bitmap Index Scan on groups_pkey (cost=0.00..2.51 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=22)\n Index Cond: (acl_2.principalid = id)\n -> BitmapOr (cost=5.60..5.60 rows=1 width=0) (actual time=0.014..0.014 rows=0 loops=22)\n -> Bitmap Index Scan on groups2 (cost=0.00..2.80 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text))\n -> Bitmap Index Scan on groups2 (cost=0.00..2.80 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=22)\n Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))\n -> Index Scan using cachedgroupmembers3 on 
cachedgroupmembers cachedgroupmembers_4 (cost=0.00..2.45 rows=15 width=8) (actual time=0.008..0.054 rows=71 loops=23)\n Index Cond: (groupid = groups_3.id)\n -> Index Scan using principals_pkey on principals principals_1 (cost=0.00..0.61 rows=1 width=4) (actual time=0.003..0.004 rows=1 loops=1639)\n Index Cond: (id = cachedgroupmembers_4.memberid)\n Filter: ((id <> 1) AND (disabled = 0))\n Rows Removed by Filter: 0\n -> Index Scan using users_pkey on users main (cost=0.00..0.34 rows=1 width=1065) (actual time=0.003..0.003 rows=1 loops=1551)\n Index Cond: (id = cachedgroupmembers_4.memberid)\n Total runtime: 53.174 ms\n(33 Zeilen)\n\n\nThe tipping point for cpu_tuple_cost was 0.05, 0.04 didn't have any effect.\n\n\nOld 8.1 config:\nshared_buffers = 262144 # min 16 or max_connections*2, 8KB each\ntemp_buffers = 65536 # min 100, 8KB each\nwork_mem = 24576 # min 64, size in KB\nmaintenance_work_mem = 65536\neffective_cache_size = 786432\ndefault_statistics_target = 100\n\nNew 9.2 config:\nshared_buffers = 4GB\nwork_mem = 32MB\nrandom_page_cost = 2.0 (after tuning, but didn't change anything)\neffective_cache_size = 32GB (after tuning)\n\nChristoph\n-- \[email protected] | http://www.df7cb.de/",
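A minimal sketch, not part of the original report: all of the settings tried above are user-settable, so they can be experimented with per-session before touching postgresql.conf; the column name cachedgroupmembers.memberid is taken from the plans above, everything else is stock PostgreSQL syntax.

BEGIN;
SET LOCAL effective_cache_size = '32GB';
SET LOCAL random_page_cost = 2.0;
SET LOCAL cpu_tuple_cost = 0.05;              -- the reported tipping point
-- re-run EXPLAIN (ANALYZE, BUFFERS) of the RT query here and compare plans
ROLLBACK;

-- raise the statistics target for just the skewed column instead of
-- bumping default_statistics_target globally:
ALTER TABLE cachedgroupmembers ALTER COLUMN memberid SET STATISTICS 500;
ANALYZE cachedgroupmembers;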
"msg_date": "Tue, 30 Apr 2013 13:20:55 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "RT3.4 query needed a lot more tuning with 9.2 than it did with 8.1"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> We recently (finally) migrated an Request Tracker 3.4 database running\n> on 8.1.19 to 9.2.4. The queries used by rt3.4 are sometimes weird, but\n> 8.1 coped without too much tuning. The schema looks like this:\n\nThe newer rowcount estimates are much further away from reality:\n\n> Unique (cost=1117.67..1118.46 rows=9 width=1115) (actual time=82.646..85.695 rows=439 loops=1)\n\n> Unique (cost=784205.94..796940.08 rows=145533 width=1061) (actual time=9710.683..9713.175 rows=439 loops=1)\n\nHas the new DB been analyzed? Maybe you had custom stats targets in\nthe old DB that didn't get copied to the new one?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 May 2013 12:38:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "On Tue, Apr 30, 2013 at 7:20 AM, Christoph Berg\n<[email protected]> wrote:\n> -> Nested Loop (cost=24.57..844.83 rows=62335 width=4) (actual time=0.109..0.633 rows=23 loops=1)\n> -> Bitmap Heap Scan on acl acl_2 (cost=8.90..61.36 rows=33 width=10) (actual time=0.070..0.112 rows=22 loops=1)\n> Recheck Cond: ((((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text)) OR (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10)))\n> -> BitmapOr (cost=8.90..8.90 rows=35 width=0) (actual time=0.064..0.064 rows=0 loops=1)\n> -> Bitmap Index Scan on acl1 (cost=0.00..4.47 rows=22 width=0) (actual time=0.036..0.036 rows=8 loops=1)\n> Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text))\n> -> Bitmap Index Scan on acl1 (cost=0.00..4.41 rows=13 width=0) (actual time=0.026..0.026 rows=14 loops=1)\n> Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 10))\n> -> Bitmap Heap Scan on groups groups_3 (cost=15.67..23.73 rows=1 width=30) (actual time=0.022..0.023 rows=1 loops=22)\n> Recheck Cond: ((acl_2.principalid = id) OR ((((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text)) OR (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))))\n> Filter: ((((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text) OR (((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND (((acl_2.principalid = id) AND ((acl_2.principaltype)::text = 'Group'::text) AND (((domain)::text = 'SystemInternal'::text) OR ((domain)::text = 'UserDefined'::text) OR ((domain)::text = 'ACLEquivalence'::text))) OR (((((domain)::text = 'RT::Queue-Role'::text) AND (instance = 10)) OR (((domain)::text = 'RT::Ticket-Role'::text) AND (instance = 999028))) AND ((type)::text = (acl_2.principaltype)::text))))\n> -> BitmapOr (cost=15.67..15.67 rows=2 width=0) (actual time=0.019..0.019 rows=0 loops=22)\n> -> Bitmap Index Scan on groups_pkey (cost=0.00..4.76 rows=1 width=0) (actual time=0.005..0.005 rows=1 loops=22)\n> Index Cond: (acl_2.principalid = id)\n> -> BitmapOr (cost=10.66..10.66 rows=1 width=0) (actual time=0.013..0.013 rows=0 loops=22)\n> -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=22)\n> Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 10) AND ((domain)::text = 'RT::Queue-Role'::text))\n> -> Bitmap Index Scan on groups2 (cost=0.00..5.33 rows=1 width=0) (actual time=0.006..0.006 rows=0 loops=22)\n> Index Cond: (((type)::text = (acl_2.principaltype)::text) AND (instance = 999028) AND ((domain)::text = 'RT::Ticket-Role'::text))\n\nThe planner is estimating this the outer side of this nested loop will\nproduce 33 rows and that the inner side will produce 1. One would\nassume that the row estimate for the join product couldn't be more\nthan 33 * 1 = 33 rows, but the planner is estimating 62335 rows, which\nseems like nonsense. 
The actual result cardinality is 23.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 15:58:50 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "On Tue, 30 Apr 2013 06:20:55 -0500, Christoph Berg \n<[email protected]> wrote:\n\n> Hi,\n> this is more of a report than a question, because we thought this\n> would be interesting to share.\n> We recently (finally) migrated an Request Tracker 3.4 database running\n> on 8.1.19 to 9.2.4. The queries used by rt3.4 are sometimes weird, but\n> 8.1 coped without too much tuning. The schema looks like this:\n\nWhat version of DBIx-SearchBuilder do you have on that server? The RT guys \nusually recommend you have the latest possible so RT is performing the \nmost sane/optimized queries possible for your database. I honestly don't \nknow if it will make a difference for you, but it's worth a shot.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 15:08:23 -0500",
"msg_from": "\"Mark Felder\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> The planner is estimating this the outer side of this nested loop will\n> produce 33 rows and that the inner side will produce 1. One would\n> assume that the row estimate for the join product couldn't be more\n> than 33 * 1 = 33 rows, but the planner is estimating 62335 rows, which\n> seems like nonsense.\n\nYou know, of course, that the join size estimate isn't arrived at that\nway. Still, this point does make it seem more like a planner bug and\nless like bad input stats. It would be nice to see a self-contained\nexample ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 16:14:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "On Mon, May 13, 2013 at 4:14 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> The planner is estimating this the outer side of this nested loop will\n>> produce 33 rows and that the inner side will produce 1. One would\n>> assume that the row estimate for the join product couldn't be more\n>> than 33 * 1 = 33 rows, but the planner is estimating 62335 rows, which\n>> seems like nonsense.\n>\n> You know, of course, that the join size estimate isn't arrived at that\n> way. Still, this point does make it seem more like a planner bug and\n> less like bad input stats. It would be nice to see a self-contained\n> example ...\n\nYeah, I remember there have been examples like this that have come up\nbefore. Unfortunately, I haven't fully grokked what's actually going\non here that allows this kind of thing to happen. Refresh my memory\non where the relevant code is?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 16:29:29 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, May 13, 2013 at 4:14 PM, Tom Lane <[email protected]> wrote:\n>> You know, of course, that the join size estimate isn't arrived at that\n>> way. Still, this point does make it seem more like a planner bug and\n>> less like bad input stats. It would be nice to see a self-contained\n>> example ...\n\n> Yeah, I remember there have been examples like this that have come up\n> before. Unfortunately, I haven't fully grokked what's actually going\n> on here that allows this kind of thing to happen. Refresh my memory\n> on where the relevant code is?\n\nThe point is that we estimate the size of a joinrel independently of\nany particular input paths for it, and indeed before we've built any\nsuch paths. So this seems like a bug somewhere in selectivity\nestimation, but I'm not prepared to speculate as to just where.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 16:33:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "On Mon, May 13, 2013 at 4:33 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Mon, May 13, 2013 at 4:14 PM, Tom Lane <[email protected]> wrote:\n>>> You know, of course, that the join size estimate isn't arrived at that\n>>> way. Still, this point does make it seem more like a planner bug and\n>>> less like bad input stats. It would be nice to see a self-contained\n>>> example ...\n>\n>> Yeah, I remember there have been examples like this that have come up\n>> before. Unfortunately, I haven't fully grokked what's actually going\n>> on here that allows this kind of thing to happen. Refresh my memory\n>> on where the relevant code is?\n>\n> The point is that we estimate the size of a joinrel independently of\n> any particular input paths for it, and indeed before we've built any\n> such paths. So this seems like a bug somewhere in selectivity\n> estimation, but I'm not prepared to speculate as to just where.\n\nHmm. I went looking for the relevant code and found\ncalc_joinrel_size_estimate(). If that's actually the right place to\nbe looking, it's hard to escape the conclusion that pselec > 1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 08:12:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "Re: Tom Lane 2013-05-06 <[email protected]>\n> The newer rowcount estimates are much further away from reality:\n> \n> > Unique (cost=1117.67..1118.46 rows=9 width=1115) (actual time=82.646..85.695 rows=439 loops=1)\n> \n> > Unique (cost=784205.94..796940.08 rows=145533 width=1061) (actual time=9710.683..9713.175 rows=439 loops=1)\n> \n> Has the new DB been analyzed? Maybe you had custom stats targets in\n> the old DB that didn't get copied to the new one?\n\nThe new DB was analyzed with various stats targets, including values\nthat were higher that anything we would have used in 8.1. I don't\nthink we had per-table settings in there (the actual DB is now gone\nfor good).\n\nChristoph\n-- \[email protected] | http://www.df7cb.de/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 23:48:20 -0700",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "Re: Mark Felder 2013-05-13 <[email protected]>\n> What version of DBIx-SearchBuilder do you have on that server? The\n> RT guys usually recommend you have the latest possible so RT is\n> performing the most sane/optimized queries possible for your\n> database. I honestly don't know if it will make a difference for\n> you, but it's worth a shot.\n\nThat's a \"never touch a running system\" kind of machine there, we are\nhappy that they let us finally upgrade at least the PostgreSQL part of\nthe setup, so changing any perl libs there is out of the question.\n\nThe version is libdbix-searchbuilder-perl 1.26-1 from Debian Sarge/3.1\n*cough*.\n\nChristoph\n-- \[email protected] | http://www.df7cb.de/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 23:52:29 -0700",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
},
{
"msg_contents": "On Tue, May 14, 2013 at 11:52:29PM -0700, Christoph Berg wrote:\n> Re: Mark Felder 2013-05-13 <[email protected]>\n> > What version of DBIx-SearchBuilder do you have on that server? The\n> > RT guys usually recommend you have the latest possible so RT is\n> > performing the most sane/optimized queries possible for your\n> > database. I honestly don't know if it will make a difference for\n> > you, but it's worth a shot.\n> \n> That's a \"never touch a running system\" kind of machine there, we are\n> happy that they let us finally upgrade at least the PostgreSQL part of\n> the setup, so changing any perl libs there is out of the question.\n> \n> The version is libdbix-searchbuilder-perl 1.26-1 from Debian Sarge/3.1\n> *cough*.\n> \n> Christoph\n> -- \n\nHi Christoph,\n\nI understand the sentiment but you really should consider upgrading. I\nthink the current release is 1.63 and since it is the DB interface it\ncould have a positive effect on your problem not to mention that they\ndo fix bugs. :)\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 08:52:28 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RT3.4 query needed a lot more tuning with 9.2 than it did with\n 8.1"
}
] |
[
{
"msg_contents": "I have a Hibernate-generated query (That's not going to change, so let's\njust focus on the Postgres side for now) like this:\n\nSELECT *\nfrom PERSON p\nwhere p.PERSON_ID in (\n select distinct p2.PERSON_ID\n from PERSON p2\n left outer join PERSON_ALIAS pa on\n p2.PERSON_ID = pa.PERSON_ID\n where (lower(p1.SURNAME) = 'duck' or\n lower(pa.SURNAME) = 'duck') and\n (lower(p1.FORENAME) = 'donald' or\n lower(pa.FORENAME) = 'donald')\n )\norder by p.PERSON_ID asc;\n\nThere are function-based indexes on PERSON and PERSON_ALIAS as follows:\n\nCREATE INDEX PERSON_FORENAME_LOWER_FBIDX ON PERSON (LOWER(FORENAME) VARCHAR\n_PATTERN_OPS);\nCREATE INDEX PERSON_SURNAME_LOWER_FBIDX ON PERSON (LOWER(SURNAME) VARCHAR\n_PATTERN_OPS);\nCREATE INDEX PERSON_ALIAS_FORENAME_LOWER_FBIDX ON PERSON_ALIAS\n(LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_ALIAS_SURNAME_LOWER_FBIDX ON PERSON_ALIAS\n(LOWER(SURNAME) VARCHAR_PATTERN_OPS);\n\nThe problem is that the above query doesn't use the indexes. The \"or\"\nclauses across the outer-join seem to be the culprit. If I rewrite the\nquery as follows, Postgres will use the index:\n\nSELECT *\nfrom PERSON p\nwhere (p.PERSON_ID in (\n select p2.PERSON_ID\n from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID =\n pa.PERSON_ID\n where lower(p2.SURNAME) = 'duck' and\n lower(pa.FORENAME) = 'donald'\n ) or\n p.PERSON_ID in (\n select p2.PERSON_ID\n from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID =\n pa.PERSON_ID\n where lower(pa.SURNAME) = 'duck' and\n lower(p2.FORENAME) = 'donald'\n ) or\n p.PERSON_ID in (\n select p2.PERSON_ID\n from TRAVELER.PERSON p2\n where lower(p2.SURNAME) = 'duck' and\n lower(p2.FORENAME) = 'donald'\n ) or\n p.PERSON_ID in (\n select p2.PERSON_ID\n from TRAVELER.OTHER_NAME pa\n where lower(pa.SURNAME) = 'duck' and\n lower(pa.FORENAME) = 'donald'\n ))\norder by p.PERSON_ID asc;\n\nSo my question is this: Is there a way to get the Postgres optimizer\n\"rewrite\" the query execution plan to use the equivalent, but much more\nefficient latter form?\n\nAnd before you ask; yes, there are better ways of writing this query. But\nwe're dealing with Java developers and Hibernate here. It's a legacy\nsystem, and the policy is to avoid hand-written SQL, so for the moment\nlet's not go down that rabbit hole, and focus on the issue of what the\noptimizer can and cannot do.\n\nI have a Hibernate-generated query (That's not going to change, so let's just focus on the Postgres side for now) like this:SELECT *from PERSON p\nwhere p.PERSON_ID in ( select distinct p2.PERSON_ID from PERSON p2 left outer join PERSON_ALIAS pa on\n p2.PERSON_ID = pa.PERSON_ID where (lower(p1.SURNAME) = 'duck' or lower(pa.SURNAME) = 'duck') and\n (lower(p1.FORENAME) = 'donald' or lower(pa.FORENAME) = 'donald') )order by p.PERSON_ID asc;\nThere are function-based indexes on PERSON and PERSON_ALIAS as follows:CREATE INDEX PERSON_FORENAME_LOWER_FBIDX ON PERSON (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_SURNAME_LOWER_FBIDX ON PERSON (LOWER(SURNAME) VARCHAR_PATTERN_OPS);CREATE INDEX PERSON_ALIAS_FORENAME_LOWER_FBIDX ON PERSON_ALIAS (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_ALIAS_SURNAME_LOWER_FBIDX ON PERSON_ALIAS (LOWER(SURNAME) VARCHAR_PATTERN_OPS);The problem is that the above query doesn't use the indexes. The \"or\" clauses across the outer-join seem to be the culprit. 
If I rewrite the query as follows, Postgres will use the index:\nSELECT *from PERSON pwhere (p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID = pa.PERSON_ID where lower(p2.SURNAME) = 'duck' and\n lower(pa.FORENAME) = 'donald' ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID = pa.PERSON_ID where lower(pa.SURNAME) = 'duck' and lower(p2.FORENAME) = 'donald'\n ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2 where lower(p2.SURNAME) = 'duck' and\n lower(p2.FORENAME) = 'donald' ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.OTHER_NAME pa\n where lower(pa.SURNAME) = 'duck' and lower(pa.FORENAME) = 'donald' ))order by p.PERSON_ID asc;\nSo my question is this: Is there a way to get the Postgres optimizer \"rewrite\" the query execution plan to use the equivalent, but much more efficient latter form?\nAnd before you ask; yes, there are better ways of writing this query. But we're dealing with Java developers and Hibernate here. It's a legacy system, and the policy is to avoid hand-written SQL, so for the moment let's not go down that rabbit hole, and focus on the issue of what the optimizer can and cannot do.",
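For comparison, a hedged sketch (not taken from the thread) of the same four-branch predicate folded into a single IN over a UNION, which also avoids the OR across the outer join; it assumes PERSON_ID is non-nullable (it is the join key above) and reuses the PERSON/PERSON_ALIAS names from the first query rather than the TRAVELER.* names from the rewrite:

SELECT *
FROM PERSON p
WHERE p.PERSON_ID IN (
        SELECT p2.PERSON_ID
        FROM PERSON p2
        JOIN PERSON_ALIAS pa ON p2.PERSON_ID = pa.PERSON_ID
        WHERE lower(p2.SURNAME) = 'duck' AND lower(pa.FORENAME) = 'donald'
      UNION
        SELECT p2.PERSON_ID
        FROM PERSON p2
        JOIN PERSON_ALIAS pa ON p2.PERSON_ID = pa.PERSON_ID
        WHERE lower(pa.SURNAME) = 'duck' AND lower(p2.FORENAME) = 'donald'
      UNION
        SELECT p2.PERSON_ID
        FROM PERSON p2
        WHERE lower(p2.SURNAME) = 'duck' AND lower(p2.FORENAME) = 'donald'
      UNION
        SELECT pa.PERSON_ID
        FROM PERSON_ALIAS pa
        WHERE lower(pa.SURNAME) = 'duck' AND lower(pa.FORENAME) = 'donald'
      )
ORDER BY p.PERSON_ID ASC;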
"msg_date": "Tue, 30 Apr 2013 13:13:22 -0400",
"msg_from": "Mark Hampton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad Execution Plan with \"OR\" Clauses Across Outer-Joined Tables"
},
{
"msg_contents": "What I can say is that hibernate has \"exists\" in both HQL and criteria API\n(e.g. see\nhttp://www.cereslogic.com/pages/2008/09/22/hibernate-criteria-subqueries-exists/\nfor\ncriteria). So, may be it's easier for you to tune your hibernate query to\nuse exists\n\n\n2013/4/30 Mark Hampton <[email protected]>\n\n> I have a Hibernate-generated query (That's not going to change, so let's\n> just focus on the Postgres side for now) like this:\n>\n> SELECT *\n> from PERSON p\n> where p.PERSON_ID in (\n> select distinct p2.PERSON_ID\n> from PERSON p2\n> left outer join PERSON_ALIAS pa on\n> p2.PERSON_ID = pa.PERSON_ID\n> where (lower(p1.SURNAME) = 'duck' or\n> lower(pa.SURNAME) = 'duck') and\n> (lower(p1.FORENAME) = 'donald' or\n> lower(pa.FORENAME) = 'donald')\n> )\n> order by p.PERSON_ID asc;\n>\n> There are function-based indexes on PERSON and PERSON_ALIAS as follows:\n>\n> CREATE INDEX PERSON_FORENAME_LOWER_FBIDX ON PERSON (LOWER(FORENAME)\n> VARCHAR_PATTERN_OPS);\n> CREATE INDEX PERSON_SURNAME_LOWER_FBIDX ON PERSON (LOWER(SURNAME) VARCHAR\n> _PATTERN_OPS);\n> CREATE INDEX PERSON_ALIAS_FORENAME_LOWER_FBIDX ON PERSON_ALIAS\n> (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\n> CREATE INDEX PERSON_ALIAS_SURNAME_LOWER_FBIDX ON PERSON_ALIAS\n> (LOWER(SURNAME) VARCHAR_PATTERN_OPS);\n>\n> The problem is that the above query doesn't use the indexes. The \"or\"\n> clauses across the outer-join seem to be the culprit. If I rewrite the\n> query as follows, Postgres will use the index:\n>\n> SELECT *\n> from PERSON p\n> where (p.PERSON_ID in (\n> select p2.PERSON_ID\n> from TRAVELER.PERSON p2\n> join TRAVELER.OTHER_NAME pa on p2.PERSON_ID =\n> pa.PERSON_ID\n> where lower(p2.SURNAME) = 'duck' and\n> lower(pa.FORENAME) = 'donald'\n> ) or\n> p.PERSON_ID in (\n> select p2.PERSON_ID\n> from TRAVELER.PERSON p2\n> join TRAVELER.OTHER_NAME pa on p2.PERSON_ID =\n> pa.PERSON_ID\n> where lower(pa.SURNAME) = 'duck' and\n> lower(p2.FORENAME) = 'donald'\n> ) or\n> p.PERSON_ID in (\n> select p2.PERSON_ID\n> from TRAVELER.PERSON p2\n> where lower(p2.SURNAME) = 'duck' and\n> lower(p2.FORENAME) = 'donald'\n> ) or\n> p.PERSON_ID in (\n> select p2.PERSON_ID\n> from TRAVELER.OTHER_NAME pa\n> where lower(pa.SURNAME) = 'duck' and\n> lower(pa.FORENAME) = 'donald'\n> ))\n> order by p.PERSON_ID asc;\n>\n> So my question is this: Is there a way to get the Postgres optimizer\n> \"rewrite\" the query execution plan to use the equivalent, but much more\n> efficient latter form?\n>\n> And before you ask; yes, there are better ways of writing this query. But\n> we're dealing with Java developers and Hibernate here. It's a legacy\n> system, and the policy is to avoid hand-written SQL, so for the moment\n> let's not go down that rabbit hole, and focus on the issue of what the\n> optimizer can and cannot do.\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nWhat I can say is that hibernate has \"exists\" in both HQL and criteria API (e.g. see http://www.cereslogic.com/pages/2008/09/22/hibernate-criteria-subqueries-exists/ for criteria). 
So, may be it's easier for you to tune your hibernate query to use exists\n2013/4/30 Mark Hampton <[email protected]>\nI have a Hibernate-generated query (That's not going to change, so let's just focus on the Postgres side for now) like this:SELECT *from PERSON p\nwhere p.PERSON_ID in ( select distinct p2.PERSON_ID from PERSON p2 left outer join PERSON_ALIAS pa on\n\n p2.PERSON_ID = pa.PERSON_ID where (lower(p1.SURNAME) = 'duck' or lower(pa.SURNAME) = 'duck') and\n\n (lower(p1.FORENAME) = 'donald' or lower(pa.FORENAME) = 'donald') )order by p.PERSON_ID asc;\nThere are function-based indexes on PERSON and PERSON_ALIAS as follows:CREATE INDEX PERSON_FORENAME_LOWER_FBIDX ON PERSON (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_SURNAME_LOWER_FBIDX ON PERSON (LOWER(SURNAME) VARCHAR_PATTERN_OPS);CREATE INDEX PERSON_ALIAS_FORENAME_LOWER_FBIDX ON PERSON_ALIAS (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_ALIAS_SURNAME_LOWER_FBIDX ON PERSON_ALIAS (LOWER(SURNAME) VARCHAR_PATTERN_OPS);The problem is that the above query doesn't use the indexes. The \"or\" clauses across the outer-join seem to be the culprit. If I rewrite the query as follows, Postgres will use the index:\nSELECT *from PERSON pwhere (p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID = pa.PERSON_ID where lower(p2.SURNAME) = 'duck' and\n\n lower(pa.FORENAME) = 'donald' ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID = pa.PERSON_ID where lower(pa.SURNAME) = 'duck' and\n lower(p2.FORENAME) = 'donald'\n ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2 where lower(p2.SURNAME) = 'duck' and\n lower(p2.FORENAME) = 'donald' ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.OTHER_NAME pa\n where lower(pa.SURNAME) = 'duck' and lower(pa.FORENAME) = 'donald' ))order by p.PERSON_ID asc;\nSo my question is this: Is there a way to get the Postgres optimizer \"rewrite\" the query execution plan to use the equivalent, but much more efficient latter form?\nAnd before you ask; yes, there are better ways of writing this query. But we're dealing with Java developers and Hibernate here. It's a legacy system, and the policy is to avoid hand-written SQL, so for the moment let's not go down that rabbit hole, and focus on the issue of what the optimizer can and cannot do.\n\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Tue, 30 Apr 2013 22:24:43 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad Execution Plan with \"OR\" Clauses Across\n Outer-Joined Tables"
},
{
"msg_contents": "It's an interesting idea, however when I rewrote the original query to use\n\"WHERE EXISTS\" rather than \"WHERE IN\", I get the same bad execution plan.\n I think this really has to do with the Postgres optimizer's limitations\nwith respect to outer joins.\n\nIn my case it's certainly possible to rewrite the query by hand to\neliminate the outer join and get the same results.\n\nAnd after posting the original problem, I have also found that with some\nwork it's possible to make Hibernate generate a query that eliminates the\nouter join and get the same results.\n\nBut I think improving the Postgres optimizer to handle such cases would be\na nice improvement. Then again, having lived through many years of Oracle\noptimizer bugs, it might be easier said than done.\n\n\nOn Tue, Apr 30, 2013 at 3:24 PM, Vitalii Tymchyshyn <[email protected]>wrote:\n\n> What I can say is that hibernate has \"exists\" in both HQL and criteria API\n> (e.g. see\n> http://www.cereslogic.com/pages/2008/09/22/hibernate-criteria-subqueries-exists/ for\n> criteria). So, may be it's easier for you to tune your hibernate query to\n> use exists\n>\n>\n> 2013/4/30 Mark Hampton <[email protected]>\n>\n>> I have a Hibernate-generated query (That's not going to change, so let's\n>> just focus on the Postgres side for now) like this:\n>>\n>> SELECT *\n>> from PERSON p\n>> where p.PERSON_ID in (\n>> select distinct p2.PERSON_ID\n>> from PERSON p2\n>> left outer join PERSON_ALIAS pa on\n>> p2.PERSON_ID = pa.PERSON_ID\n>> where (lower(p1.SURNAME) = 'duck' or\n>> lower(pa.SURNAME) = 'duck') and\n>> (lower(p1.FORENAME) = 'donald' or\n>> lower(pa.FORENAME) = 'donald')\n>> )\n>> order by p.PERSON_ID asc;\n>>\n>> There are function-based indexes on PERSON and PERSON_ALIAS as follows:\n>>\n>> CREATE INDEX PERSON_FORENAME_LOWER_FBIDX ON PERSON (LOWER(FORENAME)\n>> VARCHAR_PATTERN_OPS);\n>> CREATE INDEX PERSON_SURNAME_LOWER_FBIDX ON PERSON (LOWER(SURNAME) VARCHAR\n>> _PATTERN_OPS);\n>> CREATE INDEX PERSON_ALIAS_FORENAME_LOWER_FBIDX ON PERSON_ALIAS\n>> (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\n>> CREATE INDEX PERSON_ALIAS_SURNAME_LOWER_FBIDX ON PERSON_ALIAS\n>> (LOWER(SURNAME) VARCHAR_PATTERN_OPS);\n>>\n>> The problem is that the above query doesn't use the indexes. The \"or\"\n>> clauses across the outer-join seem to be the culprit. 
If I rewrite the\n>> query as follows, Postgres will use the index:\n>>\n>> SELECT *\n>> from PERSON p\n>> where (p.PERSON_ID in (\n>> select p2.PERSON_ID\n>> from TRAVELER.PERSON p2\n>> join TRAVELER.OTHER_NAME pa on p2.PERSON_ID =\n>> pa.PERSON_ID\n>> where lower(p2.SURNAME) = 'duck' and\n>> lower(pa.FORENAME) = 'donald'\n>> ) or\n>> p.PERSON_ID in (\n>> select p2.PERSON_ID\n>> from TRAVELER.PERSON p2\n>> join TRAVELER.OTHER_NAME pa on p2.PERSON_ID =\n>> pa.PERSON_ID\n>> where lower(pa.SURNAME) = 'duck' and\n>> lower(p2.FORENAME) = 'donald'\n>> ) or\n>> p.PERSON_ID in (\n>> select p2.PERSON_ID\n>> from TRAVELER.PERSON p2\n>> where lower(p2.SURNAME) = 'duck' and\n>> lower(p2.FORENAME) = 'donald'\n>> ) or\n>> p.PERSON_ID in (\n>> select p2.PERSON_ID\n>> from TRAVELER.OTHER_NAME pa\n>> where lower(pa.SURNAME) = 'duck' and\n>> lower(pa.FORENAME) = 'donald'\n>> ))\n>> order by p.PERSON_ID asc;\n>>\n>> So my question is this: Is there a way to get the Postgres optimizer\n>> \"rewrite\" the query execution plan to use the equivalent, but much more\n>> efficient latter form?\n>>\n>> And before you ask; yes, there are better ways of writing this query.\n>> But we're dealing with Java developers and Hibernate here. It's a legacy\n>> system, and the policy is to avoid hand-written SQL, so for the moment\n>> let's not go down that rabbit hole, and focus on the issue of what the\n>> optimizer can and cannot do.\n>>\n>\n>\n>\n> --\n> Best regards,\n> Vitalii Tymchyshyn\n>\n\nIt's an interesting idea, however when I rewrote the original query to use \"WHERE EXISTS\" rather than \"WHERE IN\", I get the same bad execution plan. I think this really has to do with the Postgres optimizer's limitations with respect to outer joins. \nIn my case it's certainly possible to rewrite the query by hand to eliminate the outer join and get the same results.And after posting the original problem, I have also found that with some work it's possible to make Hibernate generate a query that eliminates the outer join and get the same results.\nBut I think improving the Postgres optimizer to handle such cases would be a nice improvement. Then again, having lived through many years of Oracle optimizer bugs, it might be easier said than done. \nOn Tue, Apr 30, 2013 at 3:24 PM, Vitalii Tymchyshyn <[email protected]> wrote:\nWhat I can say is that hibernate has \"exists\" in both HQL and criteria API (e.g. see http://www.cereslogic.com/pages/2008/09/22/hibernate-criteria-subqueries-exists/ for criteria). 
So, may be it's easier for you to tune your hibernate query to use exists\n2013/4/30 Mark Hampton <[email protected]>\nI have a Hibernate-generated query (That's not going to change, so let's just focus on the Postgres side for now) like this:SELECT *from PERSON p\nwhere p.PERSON_ID in ( select distinct p2.PERSON_ID from PERSON p2 left outer join PERSON_ALIAS pa on\n\n\n p2.PERSON_ID = pa.PERSON_ID where (lower(p1.SURNAME) = 'duck' or lower(pa.SURNAME) = 'duck') and\n\n (lower(p1.FORENAME) = 'donald' or lower(pa.FORENAME) = 'donald') )order by p.PERSON_ID asc;\nThere are function-based indexes on PERSON and PERSON_ALIAS as follows:CREATE INDEX PERSON_FORENAME_LOWER_FBIDX ON PERSON (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_SURNAME_LOWER_FBIDX ON PERSON (LOWER(SURNAME) VARCHAR_PATTERN_OPS);CREATE INDEX PERSON_ALIAS_FORENAME_LOWER_FBIDX ON PERSON_ALIAS (LOWER(FORENAME) VARCHAR_PATTERN_OPS);\nCREATE INDEX PERSON_ALIAS_SURNAME_LOWER_FBIDX ON PERSON_ALIAS (LOWER(SURNAME) VARCHAR_PATTERN_OPS);The problem is that the above query doesn't use the indexes. The \"or\" clauses across the outer-join seem to be the culprit. If I rewrite the query as follows, Postgres will use the index:\nSELECT *from PERSON pwhere (p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID = pa.PERSON_ID where lower(p2.SURNAME) = 'duck' and\n\n\n lower(pa.FORENAME) = 'donald' ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2\n join TRAVELER.OTHER_NAME pa on p2.PERSON_ID = pa.PERSON_ID where lower(pa.SURNAME) = 'duck' and\n\n lower(p2.FORENAME) = 'donald'\n ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.PERSON p2 where lower(p2.SURNAME) = 'duck' and\n lower(p2.FORENAME) = 'donald' ) or p.PERSON_ID in ( select p2.PERSON_ID from TRAVELER.OTHER_NAME pa\n where lower(pa.SURNAME) = 'duck' and lower(pa.FORENAME) = 'donald' ))order by p.PERSON_ID asc;\nSo my question is this: Is there a way to get the Postgres optimizer \"rewrite\" the query execution plan to use the equivalent, but much more efficient latter form?\nAnd before you ask; yes, there are better ways of writing this query. But we're dealing with Java developers and Hibernate here. It's a legacy system, and the policy is to avoid hand-written SQL, so for the moment let's not go down that rabbit hole, and focus on the issue of what the optimizer can and cannot do.\n\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Tue, 30 Apr 2013 16:03:21 -0400",
"msg_from": "Mark Hampton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad Execution Plan with \"OR\" Clauses Across\n Outer-Joined Tables"
}
] |
[
{
"msg_contents": "Hi all,\nWe are running a stress test that executes one select query with multiple threads.\nThe query executes very fast (10ms). It returns 100 rows. I see deterioration in the performance when we have multiple threads executing the query. With 100 threads, the query takes between 3s and 8s.\n\nI suppose there is a way to tune our database. What are the parameters I should look into? (shared_buffer?, wal_buffer?)\n\n\nsrdb=> explain analyze SELECT\npsrdb-> artifact.id AS id,\npsrdb-> artifact.priority AS priority,\npsrdb-> project.path AS projectPathString,\npsrdb-> project.title AS projectTitle,\npsrdb-> folder.project_id AS projectId,\npsrdb-> folder.path AS folderPathString,\npsrdb-> folder.title AS folderTitle,\npsrdb-> item.folder_id AS folderId,\npsrdb-> item.planning_folder_id AS planningFolderId,\npsrdb-> item.title AS title,\npsrdb-> item.name AS name,\npsrdb-> artifact.description AS description,\npsrdb-> field_value.value AS artifactGroup,\npsrdb-> field_value2.value AS status,\npsrdb-> field_value2.value_class AS statusClass,\npsrdb-> field_value3.value AS category,\npsrdb-> field_value4.value AS customer,\npsrdb-> sfuser.username AS submittedByUsername,\npsrdb-> sfuser.full_name AS submittedByFullname,\npsrdb-> item.date_created AS submittedDate,\npsrdb-> artifact.close_date AS closeDate,\npsrdb-> sfuser2.username AS assignedToUsername,\npsrdb-> sfuser2.full_name AS assignedToFullname,\npsrdb-> item.date_last_modified AS lastModifiedDate,\npsrdb-> artifact.estimated_effort AS estimatedEffort,\npsrdb-> artifact.actual_effort AS actualEffort,\npsrdb-> artifact.remaining_effort AS remainingEffort,\npsrdb-> artifact.points AS points,\npsrdb-> artifact.autosumming AS autosumming,\npsrdb-> item.version AS version\npsrdb-> FROM\npsrdb-> field_value field_value2,\npsrdb-> sfuser sfuser2,\npsrdb-> field_value field_value3,\npsrdb-> field_value field_value,\npsrdb-> field_value field_value4,\npsrdb-> item item,\npsrdb-> project project,\npsrdb-> relationship relationship,\npsrdb-> artifact artifact,\npsrdb-> sfuser sfuser,\npsrdb-> folder folder\npsrdb-> WHERE\npsrdb-> artifact.id=item.id\npsrdb-> AND item.folder_id=folder.id\npsrdb-> AND folder.project_id=project.id\npsrdb-> AND artifact.group_fv=field_value.id\npsrdb-> AND artifact.status_fv=field_value2.id\npsrdb-> AND artifact.category_fv=field_value3.id\npsrdb-> AND artifact.customer_fv=field_value4.id\npsrdb-> AND item.created_by_id=sfuser.id\npsrdb-> AND relationship.is_deleted=false\npsrdb-> AND relationship.relationship_type_name='ArtifactAssignment'\npsrdb->\npsrdb-> AND relationship.origin_id=sfuser2.id\npsrdb-> AND artifact.id=relationship.target_id\npsrdb-> AND item.is_deleted=false\npsrdb-> AND folder.is_deleted=false\npsrdb-> AND folder.project_id='proj1032'\npsrdb-> AND item.folder_id='tracker1213'\npsrdb-> AND folder.path='tracker.trackerName';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nNested Loop (cost=0.00..117.32 rows=3 width=1272) (actual time=7.003..9.684 rows=100 loops=1)\n -> Nested Loop (cost=0.00..116.69 rows=2 width=1271) (actual time=6.987..8.820 rows=100 loops=1)\n Join Filter: ((item.created_by_id)::text = (sfuser.id)::text)\n -> Seq Scan on sfuser (cost=0.00..7.65 rows=65 width=30) (actual time=0.013..0.053 rows=65 loops=1)\n -> Materialize (cost=0.00..107.10 rows=2 width=1259) (actual time=0.005..0.100 rows=100 loops=65)\n -> 
Nested Loop (cost=0.00..107.09 rows=2 width=1259) (actual time=0.307..5.667 rows=100 loops=1)\n -> Nested Loop (cost=0.00..106.45 rows=2 width=1263) (actual time=0.294..4.841 rows=100 loops=1)\n -> Nested Loop (cost=0.00..105.82 rows=2 width=1267) (actual time=0.281..3.988 rows=100 loops=1)\n -> Nested Loop (cost=0.00..105.18 rows=2 width=1271) (actual time=0.239..3.132 rows=100 loops=1)\n -> Nested Loop (cost=0.00..104.61 rows=2 width=1259) (actual time=0.223..2.457 rows=100 loops=1)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=1099) (actual time=0.095..0.096 rows=1 loops=1)\n -> Index Scan using project_pk on project (cost=0.00..8.27 rows=1 width=1114) (actual time=0.039..0.039 rows=1 loops=1)\n Index Cond: ((id)::text = 'proj1032'::text)\n -> Index Scan using folder_pk on folder (cost=0.00..8.27 rows=1 width=67) (actual time=0.054..0.055 rows=1 loops=1)\n Index Cond: ((folder.id)::text = 'tracker1213'::text)\n Filter: ((NOT folder.is_deleted) AND ((folder.project_id)::text = 'proj1032'::text) AND (folder.path = 'tracker.trackerName'::text))\n -> 
Nested Loop (cost=0.00..88.04 rows=2 width=169) (actual time=0.127..2.323 rows=100 loops=1)\n -> Nested Loop (cost=0.00..63.19 rows=3 width=168) (actual time=0.090..1.309 rows=100 loops=1)\n -> Index Scan using item_folder on item (cost=0.00..21.78 rows=5 width=77) (actual time=0.046..0.265 rows=100 loops=1)\n Index Cond: ((folder_id)::text = 'tracker1213'::text)\n Filter: (NOT is_deleted)\n -> Index Scan using artifact_pk on artifact (cost=0.00..8.27 rows=1 width=91) (actual time=0.009..0.009 rows=1 loops=100)\n Index Cond: ((artifact.id)::text = (item.id)::text)\n -> Index Scan using relation_target on relationship (cost=0.00..8.27 rows=1 width=18) (actual time=0.009..0.009 rows=1 loops=100)\n Index Cond: ((relationship.target_id)::text = (artifact.id)::text)\n Filter: ((NOT relationship.is_deleted) AND ((relationship.relationship_type_name)::text = 'ArtifactAssignment'::text))\n -> Index Scan using sfuser_pk on sfuser sfuser2 (cost=0.00..0.27 rows=1 width=30) (actual time=0.005..0.005 rows=1 loops=100)\n Index Cond: ((sfuser2.id)::text = (relationship.origin_id)::text)\n -> 
Index Scan using field_value_pk on field_value field_value3 (cost=0.00..0.30 rows=1 width=14) (actual time=0.007..0.007 rows=1 loops=100)\n Index Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Index Scan using field_value_pk on field_value (cost=0.00..0.30 rows=1 width=14) (actual time=0.007..0.007 rows=1 loops=100)\n Index Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Index Scan using field_value_pk on field_value field_value4 (cost=0.00..0.30 rows=1 width=14) (actual time=0.007..0.007 rows=1 loops=100)\n Index Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)\n -> Index Scan using field_value_pk on field_value field_value2 (cost=0.00..0.30 rows=1 width=19) (actual time=0.007..0.007 rows=1 loops=100)\n Index Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n Total runtime: 10.210 ms\n(37 rows)\n\n\nThanks for your help,\nAnne",
"msg_date": "Wed, 1 May 2013 05:50:59 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deterioration in performance when query executed in multi threads"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 8130\nLogged by: Stefan de Konink\nEmail address: [email protected]\nPostgreSQL version: 9.2.4\nOperating system: Linux\nDescription: \n\nWe figured out that two very close query give a massive difference\nperformance between using select * vs select id.\n\nSELECT *\nFROM ambit_privateevent_calendars AS a\n\t,ambit_privateevent AS b\n\t,ambit_calendarsubscription AS c\n\t,ambit_calendar AS d\nWHERE c.calendar_id = d.id\n\tAND a.privateevent_id = b.id\n\tAND c.user_id = 1270\n\tAND c.calendar_id = a.calendar_id\n\tAND c.STATUS IN (\n\t\t1\n\t\t,8\n\t\t,2\n\t\t,15\n\t\t,18\n\t\t,4\n\t\t,12\n\t\t,20\n\t\t)\n\tAND NOT b.main_recurrence = true;\n\nWith some help on IRC we figured out that \"there was a bugfix in hash\nestimation recently and I was hoping you were older than that\", but since we\nare not:\nPostgreSQL 9.2.4 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc\n(Gentoo 4.7.2-r1 p1.6, pie-0.5.5) 4.7.2, 64-bit\n\n...there might still be a bug around.\n\nWe compare:\nhttp://explain.depesz.com/s/jRx\nhttp://explain.depesz.com/s/eKE\n\nBy setting \"set enable_hashjoin = off;\" performance in our entire\napplication increased 30 fold in throughput, which was a bit unexpected but\nhighly appreciated. The result of the last query:\n\nhttp://explain.depesz.com/s/AWB\n\nWhat can we do to provide a bit more of information?\n\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Wed, 01 May 2013 12:48:07 +0000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "BUG #8130: Hashjoin still gives issues"
},
{
"msg_contents": "[email protected] writes:\n> By setting \"set enable_hashjoin = off;\" performance in our entire\n> application increased 30 fold in throughput, which was a bit unexpected but\n> highly appreciated. The result of the last query:\n\nAt least in this example, the query appears to be fully cached and so\nyou would need a random_page_cost near 1 to reflect the system's\nbehavior properly. If your DB fits mostly in RAM, adjusting the cost\nparameters is a much better idea than fooling with the enable_\nparameters.\n\n> What can we do to provide a bit more of information?\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nThere is no particularly good reason to think this is a bug; please\ntake it up on pgsql-performance if you have more questions.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Wed, 01 May 2013 11:10:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #8130: Hashjoin still gives issues"
},
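As a concrete illustration of the cost-parameter tuning Tom suggests, the experiment could look like the sketch below; the specific numbers are assumptions for a mostly-cached database, not values taken from this thread.

    -- session-level experiment; move to postgresql.conf only if plans and timings improve
    SET random_page_cost = 1.1;        -- close to seq_page_cost when the working set is in RAM
    SET effective_cache_size = '3GB';  -- rough guess at shared_buffers plus OS cache
    -- then re-run the original statement under EXPLAIN ANALYZE and compare the chosen plan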
{
"msg_contents": "Dear Tom,\n\n\nOn Wed, 1 May 2013, Tom Lane wrote:\n\n>> What can we do to provide a bit more of information?\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> There is no particularly good reason to think this is a bug; please\n> take it up on pgsql-performance if you have more questions.\n\nI beg to disagree, the performance of a select * query and the select b.id \nquery are both \"hot\". The result in a fundamentally different query plan \n(and performance). Combined with the recent bugfix regarding hash \nestimation, it gives me a good indication that there might be a bug.\n\nI am not deep into the query optimiser of PostgreSQL but given the above \nsame were different selections can change an entire query plan (and * is \nin fact out of the box 30 times faster than b.id) it does. When hash is \ndisabled the entire query is -depending on the system checked- 2 to \n30x faster.\n\n\nThe original query:\n\nselect * from ambit_privateevent_calendars as a, ambit_privateevent as b, \nambit_calendarsubscription as c, ambit_calendar as d where c.calendar_id = \nd.id and a.privateevent_id = b.id and c.user_id = 1270 and c.calendar_id \n= a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4, 12, 20) and not \nb.main_recurrence = true;\n\nselect b.id from ambit_privateevent_calendars as a, ambit_privateevent as \nb, ambit_calendarsubscription as c, ambit_calendar as d where c.calendar_id = \nd.id and a.privateevent_id = b.id and c.user_id = 1270 and c.calendar_id \n= a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4, 12, 20) and not \nb.main_recurrence = true;\n\n(select * => select b.id, the star query is *fastest*)\n\nWe compare:\nhttp://explain.depesz.com/s/jRx\nhttp://explain.depesz.com/s/eKE\n\n\nBy setting \"set enable_hashjoin = off;\" performance in our entire\napplication increased 30 fold in throughput, which was a bit unexpected \nbut highly appreciated. The result of the last query switch the mergejoin:\n\nhttp://explain.depesz.com/s/AWB\n\nIt is also visible that after hashjoin is off, the b.id query is faster \nthan the * query (what would be expected).\n\n\nOur test machine is overbudgetted, 4x the memory of the entire database \n~4GB, and uses the PostgreSQL stock settings.\n\n\nStefan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 17:44:54 +0200 (CEST)",
"msg_from": "Stefan de Konink <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] BUG #8130: Hashjoin still gives issues"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> \n\n> \n> The original query:\n> \n> select * from ambit_privateevent_calendars as a, ambit_privateevent as\n> b, ambit_calendarsubscription as c, ambit_calendar as d where\n> c.calendar_id = d.id and a.privateevent_id = b.id and c.user_id = 1270\n> and c.calendar_id = a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4,\n> 12, 20) and not b.main_recurrence = true;\n> \n> select b.id from ambit_privateevent_calendars as a, ambit_privateevent\n> as b, ambit_calendarsubscription as c, ambit_calendar as d where\n> c.calendar_id = d.id and a.privateevent_id = b.id and c.user_id = 1270\n> and c.calendar_id = a.calendar_id and c.STATUS IN (1, 8, 2, 15, 18, 4,\n> 12, 20) and not b.main_recurrence = true;\n> \n> (select * => select b.id, the star query is *fastest*)\n> \n> We compare:\n> http://explain.depesz.com/s/jRx\n> http://explain.depesz.com/s/eKE\n> \n> \n> By setting \"set enable_hashjoin = off;\" performance in our entire\n> application increased 30 fold in throughput, which was a bit unexpected\n> but highly appreciated. The result of the last query switch the\n> mergejoin:\n> \n> http://explain.depesz.com/s/AWB\n> \n> It is also visible that after hashjoin is off, the b.id query is faster\n> than the * query (what would be expected).\n> \n> \n> Our test machine is overbudgetted, 4x the memory of the entire database\n> ~4GB, and uses the PostgreSQL stock settings.\n> \n> \n> Stefan\n> \n\nI'd suggest that you adjust Postgres configuration, specifically memory settings (buffer_cache, work_mem, effective_cache_size), to reflect your hardware config, and see how it affects your query.\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 17:59:22 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] BUG #8130: Hashjoin still gives issues"
},
{
"msg_contents": "On Wed, 2013-05-01 at 17:44 +0200, Stefan de Konink wrote:\n> Combined with the recent bugfix regarding hash \n> estimation, it gives me a good indication that there might be a bug.\n\nTo which recent bugfix are you referring?\n\nThe best venue for fixing an issue like this is pgsql-performance -- it\ndoesn't make too much difference whether it's a \"bug\" or not.\nPerformance problems sometimes end up as bugs and sometimes end up being\ntreated more like an enhancement; but most of the progress is made on\npgsql-performance regardless.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 May 2013 13:07:34 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] BUG #8130: Hashjoin still gives issues"
}
] |
[
{
"msg_contents": "Hi all,\nWe are running a stress test that executes one select query with multiple threads.\nThe query executes very fast (10ms). It returns 100 rows. I see deterioration in the performance when we have multiple threads executing the query. With 100 threads, the query takes between 3s and 8s.\n\nI suppose there is a way to tune our database. What are the parameters I should look into? (shared_buffer?, wal_buffer?)\n\n\n\n\nThanks for your help,\nAnne\n\n\n\n\n\n\n\n\n\nHi all,\nWe are running a stress test that executes one select query with multiple threads.\nThe query executes very fast (10ms). It returns 100 rows. I see deterioration in the performance when we have multiple threads executing the query. With 100 threads, the query takes between 3s and 8s.\n \nI suppose there is a way to tune our database. What are the parameters I should look into? (shared_buffer?, wal_buffer?)\n \n \n \n \nThanks for your help,\nAnne",
"msg_date": "Wed, 1 May 2013 14:05:06 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "On Wed, May 01, 2013 at 02:05:06PM +0000, Anne Rosset wrote:\n> Hi all,\n> We are running a stress test that executes one select query with multiple threads.\n> The query executes very fast (10ms). It returns 100 rows. I see deterioration in the performance when we have multiple threads executing the query. With 100 threads, the query takes between 3s and 8s.\n> \n> I suppose there is a way to tune our database. What are the parameters I should look into? (shared_buffer?, wal_buffer?)\n> \n> Thanks for your help,\n> Anne\n\nTry a connection pooler like pgbouncer to keep the number of simultaneous queries\nbounded to a reasonable number. You will actually get better performance.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 09:12:36 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
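For reference, a minimal pgbouncer.ini sketch for fronting a database like this with server-side pooling; every name, port and size below is an assumption rather than something specified in the thread.

    [databases]
    srdb = host=127.0.0.1 port=5432 dbname=srdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; let the application open many client connections, but only a
    ; bounded number of server connections run queries at once
    pool_mode = transaction
    max_client_conn = 400
    default_pool_size = 20

The application would then point its JDBC URL at port 6432 instead of connecting to Postgres directly.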
{
"msg_contents": "Hi Ken,\nThanks for your answer. My test is actually running with jboss 7/jdbc and the connection pool is defined with min-pool-size =10 and max-pool-size=400.\n\nWhy would you think it is an issue with the connection pool?\n\nThanks,\nAnne\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: Wednesday, May 01, 2013 7:13 AM\nTo: Anne Rosset\nCc: [email protected]\nSubject: Re: [PERFORM] Deterioration in performance when query executed in multi threads\n\nOn Wed, May 01, 2013 at 02:05:06PM +0000, Anne Rosset wrote:\n> Hi all,\n> We are running a stress test that executes one select query with multiple threads.\n> The query executes very fast (10ms). It returns 100 rows. I see deterioration in the performance when we have multiple threads executing the query. With 100 threads, the query takes between 3s and 8s.\n> \n> I suppose there is a way to tune our database. What are the parameters \n> I should look into? (shared_buffer?, wal_buffer?)\n> \n> Thanks for your help,\n> Anne\n\nTry a connection pooler like pgbouncer to keep the number of simultaneous queries bounded to a reasonable number. You will actually get better performance.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 16:07:55 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "On Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:\n> Hi Ken,\n> Thanks for your answer. My test is actually running with jboss 7/jdbc and the connection pool is defined with min-pool-size =10 and max-pool-size=400.\n> \n> Why would you think it is an issue with the connection pool?\n> \n> Thanks,\n> Anne\n> \n\nHi Anne,\n\nYou want to be able to run as many jobs productively at once as your hardware is\ncapable of supporting. Usually something starting a 2 x number of CPUs is best.\nIf you make several runs increasing the size of the pool each time, you will\nsee a maximum throughput somewhere near there and then the performance will\ndecrease as you add more and more connections. You can then use that sweet spot.\nYour test harness should make that pretty easy to find.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 11:26:50 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
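On the application side the same idea amounts to capping the JBoss pool near that sweet spot instead of at 400. A sketch of the datasource definition (the JNDI name, URL and the 10/20 figures are assumptions, sized for an 8-core box):

    <datasource jndi-name="java:jboss/datasources/srdbDS" pool-name="srdbDS">
        <connection-url>jdbc:postgresql://dbhost:5432/srdb</connection-url>
        <driver>postgresql</driver>
        <pool>
            <min-pool-size>10</min-pool-size>
            <!-- roughly 2 x CPU cores rather than 400 -->
            <max-pool-size>20</max-pool-size>
        </pool>
    </datasource>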
{
"msg_contents": "On Wed, May 1, 2013 at 10:26 AM, [email protected] <[email protected]> wrote:\n> On Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:\n>> Hi Ken,\n>> Thanks for your answer. My test is actually running with jboss 7/jdbc and the connection pool is defined with min-pool-size =10 and max-pool-size=400.\n>>\n>> Why would you think it is an issue with the connection pool?\n>>\n>> Thanks,\n>> Anne\n>>\n>\n> Hi Anne,\n>\n> You want to be able to run as many jobs productively at once as your hardware is\n> capable of supporting. Usually something starting a 2 x number of CPUs is best.\n> If you make several runs increasing the size of the pool each time, you will\n> see a maximum throughput somewhere near there and then the performance will\n> decrease as you add more and more connections. You can then use that sweet spot.\n> Your test harness should make that pretty easy to find.\n\nHere's a graph of tps from pgbench on a 48 core / 32 drive battery\nbacked cache RAID machine:\nhttps://plus.google.com/u/0/photos/117090950881008682691/albums/5537418842370875697/5537418902326245874\nNote that on that machine, the peak is between 40 and 50 clients at once.\nNote also the asymptote levelling off at 2800tps. This is a good\nindication of how the machine will behave if overloaded / connection\npooling goes crazy etc.\nSo yeah I suggest Anne do what you're saying and chart it. It should\nbe obvious where the sweet spot is.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 11:07:43 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
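A curve like the one Scott links can be reproduced on one's own hardware with a pgbench sweep over client counts; a rough sketch (the scale factor, run length and client counts are arbitrary assumptions):

    # one-time initialisation of the pgbench tables
    pgbench -i -s 100 srdb
    # select-only runs at increasing concurrency; plot tps against clients
    for c in 8 16 32 48 64 96 128; do
        pgbench -S -c $c -j 8 -T 60 srdb
    done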
{
"msg_contents": "Thanks Ken. I am going to test with different pool sizes and see if I see any improvements.\nAre there other configuration options I should look like? I was thinking of playing with shared_buffer.\n\nThanks,\nAnne\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] \nSent: Wednesday, May 01, 2013 9:27 AM\nTo: Anne Rosset\nCc: [email protected]\nSubject: Re: [PERFORM] Deterioration in performance when query executed in multi threads\n\nOn Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:\n> Hi Ken,\n> Thanks for your answer. My test is actually running with jboss 7/jdbc and the connection pool is defined with min-pool-size =10 and max-pool-size=400.\n> \n> Why would you think it is an issue with the connection pool?\n> \n> Thanks,\n> Anne\n> \n\nHi Anne,\n\nYou want to be able to run as many jobs productively at once as your hardware is capable of supporting. Usually something starting a 2 x number of CPUs is best.\nIf you make several runs increasing the size of the pool each time, you will see a maximum throughput somewhere near there and then the performance will decrease as you add more and more connections. You can then use that sweet spot.\nYour test harness should make that pretty easy to find.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 17:10:09 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Anne Rosset\n> Sent: Wednesday, May 01, 2013 1:10 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Deterioration in performance when query executed\n> in multi threads\n> \n> Thanks Ken. I am going to test with different pool sizes and see if I\n> see any improvements.\n> Are there other configuration options I should look like? I was\n> thinking of playing with shared_buffer.\n> \n> Thanks,\n> Anne\n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Wednesday, May 01, 2013 9:27 AM\n> To: Anne Rosset\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Deterioration in performance when query executed\n> in multi threads\n> \n> On Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:\n> > Hi Ken,\n> > Thanks for your answer. My test is actually running with jboss 7/jdbc\n> and the connection pool is defined with min-pool-size =10 and max-\n> pool-size=400.\n> >\n> > Why would you think it is an issue with the connection pool?\n> >\n> > Thanks,\n> > Anne\n> >\n> \n> Hi Anne,\n> \n> You want to be able to run as many jobs productively at once as your\n> hardware is capable of supporting. Usually something starting a 2 x\n> number of CPUs is best.\n> If you make several runs increasing the size of the pool each time, you\n> will see a maximum throughput somewhere near there and then the\n> performance will decrease as you add more and more connections. You can\n> then use that sweet spot.\n> Your test harness should make that pretty easy to find.\n> \n> Regards,\n> Ken\n> \n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nAnne,\n\nBefore expecting advice on specific changes to Postgres configuration parameters,\nYou should provide this list with your hardware configuration, Postgres version, your current Postgres configuration parameters (at least those that changed from defaults).\nAnd, if you do the testing using specific query, would be nice if you provide the results of:\n\nExplain analyze <your_select>;\n\nalong with the definition of database objects (tables, indexes) involved in this select.\n\nAlso, you mention client-side connection pooler. In my experience, server-side poolers, such as PgBouncer mentioned earlier, are much more effective.\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 May 2013 17:26:01 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
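A low-effort way to gather the configuration details Igor asks for is to let the server report everything that differs from its built-in defaults (a generic catalog query, not something prescribed in the thread):

    -- parameters changed from their defaults
    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default'
    ORDER BY name;
    -- and run the problem statement once under EXPLAIN ANALYZE to capture the plan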
{
"msg_contents": "We saw a little bit improvement by increasing the min_pool_size but again I see a bigvariation in the time the query is executed. Here is the query:\n\nsrdb=> explain analyze SELECT\npsrdb-> artifact.id AS id,\npsrdb-> artifact.priority AS priority,\npsrdb-> project.path AS projectPathString,\npsrdb-> project.title AS projectTitle, \npsrdb-> folder.project_id AS projectId, \npsrdb-> folder.title AS folderTitle, \npsrdb-> item.folder_id AS folderId, \npsrdb-> item.title AS title, \npsrdb-> item.name AS name, \npsrdb-> field_value2.value AS status, \npsrdb-> field_value3.value AS category, \npsrdb-> sfuser.username AS submittedByUsername,\npsrdb-> sfuser.full_name AS submittedByFullname,\npsrdb-> sfuser2.username AS assignedToUsername, \npsrdb-> sfuser2.full_name AS assignedToFullname,\npsrdb-> item.version AS version, \npsrdb-> CASE when ((SELECT \npsrdb(> mntr_subscription.user_id AS userId \npsrdb(> FROM \npsrdb(> mntr_subscription mntr_subscription \npsrdb(> WHERE \npsrdb(> artifact.id=mntr_subscription.object_key\npsrdb(> AND mntr_subscription.user_id='user1439'\npsrdb(> )= 'user1439') THEN 'user1439' ELSE null END AS monitoringUserId,\npsrdb-> tracker.icon AS trackerIcon, \npsrdb-> tracker.remaining_effort_disabled AS remainingEffortDisabled,\npsrdb-> tracker.actual_effort_disabled AS actualEffortDisabled, \npsrdb-> tracker.estimated_effort_disabled AS estimatedEffortDisabled \npsrdb-> FROM \npsrdb-> field_value field_value2, \npsrdb-> field_value field_value, \npsrdb-> sfuser sfuser2, \npsrdb-> field_value field_value3, \npsrdb-> field_value field_value4, \npsrdb-> item item, \npsrdb-> project project, \npsrdb-> relationship relationship, \npsrdb-> tracker tracker, \npsrdb-> artifact artifact, \npsrdb-> sfuser sfuser, \npsrdb-> folder folder \npsrdb-> WHERE \npsrdb-> artifact.id=item.id \npsrdb-> AND item.folder_id=folder.id \npsrdb-> AND folder.project_id=project.id \npsrdb-> AND artifact.group_fv=field_value.id \npsrdb-> AND artifact.status_fv=field_value2.id \npsrdb-> AND artifact.category_fv=field_value3.id \npsrdb-> AND artifact.customer_fv=field_value4.id \npsrdb-> AND item.created_by_id=sfuser.id \npsrdb-> AND relationship.is_deleted=false \npsrdb-> AND relationship.relationship_type_name='ArtifactAssignment'\npsrdb-> AND relationship.origin_id=sfuser2.id \npsrdb-> AND artifact.id=relationship.target_id \npsrdb-> AND item.is_deleted=false \npsrdb-> AND ((artifact.priority=3)) \npsrdb-> AND (project.path='projects.psr-pub-13') \npsrdb-> AND item.folder_id=tracker.id \npsrdb-> ; \n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------\n Nested Loop (cost=0.00..272.62 rows=1 width=181) (actual time=805.934..1792.596 rows=177 loops=1) \n \n -> Nested Loop (cost=0.00..263.87 rows=1 width=167) (actual time=707.739..1553.348 rows=177 loops=1) \n \n -> Nested Loop (cost=0.00..263.58 rows=1 width=153) (actual time=653.053..1496.839 rows=177 loops=1) \n \n -> Nested Loop (cost=0.00..262.50 rows=1 width=154) (actual time=565.627..1385.667 rows=177 loops=1) \n \n -> Nested Loop (cost=0.00..262.08 rows=1 width=163) (actual time=565.605..1383.686 rows=177 loops\n=1) \n -> Nested Loop (cost=0.00..261.67 rows=1 width=166) (actual time=530.928..1347.053 rows=177\n loops=1) \n -> Nested Loop (cost=0.00..261.26 rows=1 width=175) (actual time=530.866..1345.032 \nrows=177 loops=1) \n -> Nested Loop (cost=0.00..260.84 rows=1 
width=178) (actual time=372.825..1184.\n668 rows=177 loops=1) \n -> Nested Loop (cost=0.00..250.33 rows=29 width=128) (actual time=317.897\n..534.645 rows=1011 loops=1) \n -> Nested Loop (cost=0.00..207.56 rows=3 width=92) (actual time=251\n.014..408.868 rows=10 loops=1) \n -> Nested Loop (cost=0.00..163.54 rows=155 width=65) (actual \ntime=146.176..382.023 rows=615 loops=1) \n -> Index Scan using project_path on project (cost=0.00.\n.8.27 rows=1 width=42) (actual time=76.581..76.583 rows=1 loops=1) \n Index Cond: ((path)::text = 'projects.psr-pub-13'::\ntext) \n -> Index Scan using folder_project on folder (cost=0.00\n..153.26 rows=161 width=32) (actual time=69.564..305.083 rows=615 loops=1) \n Index Cond: ((folder.project_id)::text = (project.\nid)::text) \n -> Index Scan using tracker_pk on tracker (cost=0.00..0.27 \nrows=1 width=27) (actual time=0.043..0.043 rows=0 loops=615) \n Index Cond: ((tracker.id)::text = (folder.id)::text)\n -> Index Scan using item_folder on item (cost=0.00..14.11 rows=12 \nwidth=58) (actual time=7.603..12.532 rows=101 loops=10)\n Index Cond: ((item.folder_id)::text = (folder.id)::text)\n Filter: (NOT item.is_deleted)\n -> Index Scan using artifact_pk on artifact (cost=0.00..0.35 rows=1 width\n=50) (actual time=0.642..0.642 rows=0 loops=1011)\n Index Cond: ((artifact.id)::text = (item.id)::text)\n Filter: (artifact.priority = 3)\n -> Index Scan using field_value_pk on field_value field_value2 (cost=0.00..0.40\n rows=1 width=15) (actual time=0.904..0.905 rows=1 loops=177)\n Index Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Index Scan using field_value_pk on field_value (cost=0.00..0.40 rows=1 width=9) \n(actual time=0.010..0.010 rows=1 loops=177)\n Index Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Index Scan using field_value_pk on field_value field_value3 (cost=0.00..0.40 rows=1 \nwidth=15) (actual time=0.205..0.206 rows=1 loops=177)\n Index Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Index Scan using field_value_pk on field_value field_value4 (cost=0.00..0.40 rows=1 width=9) \n(actual time=0.010..0.010 rows=1 loops=177)\n Index Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)\n -> Index Scan using relation_target on relationship (cost=0.00..1.07 rows=1 width=19) (actual time=0.\n627..0.627 rows=1 loops=177)\n Index Cond: ((relationship.target_id)::text = (artifact.id)::text)\n Filter: ((NOT relationship.is_deleted) AND ((relationship.relationship_type_name)::text = \n'ArtifactAssignment'::text))\n -> Index Scan using sfuser_pk on sfuser sfuser2 (cost=0.00..0.28 rows=1 width=32) (actual time=0.318..0.318 \nrows=1 loops=177)\n Index Cond: ((sfuser2.id)::text = (relationship.origin_id)::text)\n -> Index Scan using sfuser_pk on sfuser (cost=0.00..0.27 rows=1 width=32) (actual time=0.178..0.179 rows=1 loops=\n177)\n Index Cond: ((sfuser.id)::text = (item.created_by_id)::text)\n SubPlan 1\n -> Index Scan using mntr_subscr_user on mntr_subscription (cost=0.00..8.47 rows=1 width=9) (actual time=1.170..1.\n170 rows=0 loops=177)\n Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text))\n Total runtime: 1793.203 ms\n(42 rows)\n\n\nWork_mem is set to 64MB\t\nShared_buffer to 240MB\nSegment_size is 1GB\nWal_buffer is 10MB\n\nIf you can give me some pointers, I would really appreciate.\nThanks,\nAnne\n\n\n-----Original Message-----\nFrom: Igor Neyman [mailto:[email protected]] \nSent: Wednesday, May 01, 2013 10:26 AM\nTo: Anne Rosset; [email protected]\nCc: 
[email protected]\nSubject: RE: [PERFORM] Deterioration in performance when query executed in multi threads\n\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:pgsql-performance- [email protected]] On Behalf Of Anne \n> Rosset\n> Sent: Wednesday, May 01, 2013 1:10 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Deterioration in performance when query \n> executed in multi threads\n> \n> Thanks Ken. I am going to test with different pool sizes and see if I \n> see any improvements.\n> Are there other configuration options I should look like? I was \n> thinking of playing with shared_buffer.\n> \n> Thanks,\n> Anne\n> \n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Wednesday, May 01, 2013 9:27 AM\n> To: Anne Rosset\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Deterioration in performance when query \n> executed in multi threads\n> \n> On Wed, May 01, 2013 at 04:07:55PM +0000, Anne Rosset wrote:\n> > Hi Ken,\n> > Thanks for your answer. My test is actually running with jboss \n> > 7/jdbc\n> and the connection pool is defined with min-pool-size =10 and max- \n> pool-size=400.\n> >\n> > Why would you think it is an issue with the connection pool?\n> >\n> > Thanks,\n> > Anne\n> >\n> \n> Hi Anne,\n> \n> You want to be able to run as many jobs productively at once as your \n> hardware is capable of supporting. Usually something starting a 2 x \n> number of CPUs is best.\n> If you make several runs increasing the size of the pool each time, \n> you will see a maximum throughput somewhere near there and then the \n> performance will decrease as you add more and more connections. You \n> can then use that sweet spot.\n> Your test harness should make that pretty easy to find.\n> \n> Regards,\n> Ken\n> \n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nAnne,\n\nBefore expecting advice on specific changes to Postgres configuration parameters, You should provide this list with your hardware configuration, Postgres version, your current Postgres configuration parameters (at least those that changed from defaults).\nAnd, if you do the testing using specific query, would be nice if you provide the results of:\n\nExplain analyze <your_select>;\n\nalong with the definition of database objects (tables, indexes) involved in this select.\n\nAlso, you mention client-side connection pooler. In my experience, server-side poolers, such as PgBouncer mentioned earlier, are much more effective.\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 May 2013 20:52:19 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Anne Rosset [mailto:[email protected]]\n> Sent: Friday, May 03, 2013 4:52 PM\n> To: Igor Neyman; [email protected]\n> Cc: [email protected]\n> Subject: RE: [PERFORM] Deterioration in performance when query executed\n> in multi threads\n> \n> We saw a little bit improvement by increasing the min_pool_size but\n> again I see a bigvariation in the time the query is executed. Here is\n> the query:\n> \n> srdb=> explain analyze SELECT\n> psrdb-> artifact.id AS id,\n> psrdb-> artifact.priority AS priority,\n> psrdb-> project.path AS projectPathString,\n> psrdb-> project.title AS projectTitle,\n> psrdb-> folder.project_id AS projectId,\n> psrdb-> folder.title AS folderTitle,\n> psrdb-> item.folder_id AS folderId,\n> psrdb-> item.title AS title,\n> psrdb-> item.name AS name,\n> psrdb-> field_value2.value AS status,\n> psrdb-> field_value3.value AS category,\n> psrdb-> sfuser.username AS submittedByUsername,\n> psrdb-> sfuser.full_name AS submittedByFullname,\n> psrdb-> sfuser2.username AS assignedToUsername,\n> psrdb-> sfuser2.full_name AS assignedToFullname,\n> psrdb-> item.version AS version,\n> psrdb-> CASE when ((SELECT\n> psrdb(> mntr_subscription.user_id AS userId\n> psrdb(> FROM\n> psrdb(> mntr_subscription mntr_subscription\n> psrdb(> WHERE\n> psrdb(> artifact.id=mntr_subscription.object_key\n> psrdb(> AND mntr_subscription.user_id='user1439'\n> psrdb(> )= 'user1439') THEN 'user1439' ELSE null END AS\n> monitoringUserId,\n> psrdb-> tracker.icon AS trackerIcon,\n> psrdb-> tracker.remaining_effort_disabled AS\n> remainingEffortDisabled,\n> psrdb-> tracker.actual_effort_disabled AS actualEffortDisabled,\n> psrdb-> tracker.estimated_effort_disabled AS\n> estimatedEffortDisabled\n> psrdb-> FROM\n> psrdb-> field_value field_value2,\n> psrdb-> field_value field_value,\n> psrdb-> sfuser sfuser2,\n> psrdb-> field_value field_value3,\n> psrdb-> field_value field_value4,\n> psrdb-> item item,\n> psrdb-> project project,\n> psrdb-> relationship relationship,\n> psrdb-> tracker tracker,\n> psrdb-> artifact artifact,\n> psrdb-> sfuser sfuser,\n> psrdb-> folder folder\n> psrdb-> WHERE\n> psrdb-> artifact.id=item.id\n> psrdb-> AND item.folder_id=folder.id\n> psrdb-> AND folder.project_id=project.id\n> psrdb-> AND artifact.group_fv=field_value.id\n> psrdb-> AND artifact.status_fv=field_value2.id\n> psrdb-> AND artifact.category_fv=field_value3.id\n> psrdb-> AND artifact.customer_fv=field_value4.id\n> psrdb-> AND item.created_by_id=sfuser.id\n> psrdb-> AND relationship.is_deleted=false\n> psrdb-> AND\n> relationship.relationship_type_name='ArtifactAssignment'\n> psrdb-> AND relationship.origin_id=sfuser2.id\n> psrdb-> AND artifact.id=relationship.target_id\n> psrdb-> AND item.is_deleted=false\n> psrdb-> AND ((artifact.priority=3))\n> psrdb-> AND (project.path='projects.psr-pub-13')\n> psrdb-> AND item.folder_id=tracker.id\n> psrdb-> ;\n> \n> QUERY PLAN\n> \n> -----------------------------------------------------------------------\n> -------------------------------------------------\n> -----------------------------------------------------------------------\n> ----\n> Nested Loop (cost=0.00..272.62 rows=1 width=181) (actual\n> time=805.934..1792.596 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..263.87 rows=1 width=167) (actual\n> time=707.739..1553.348 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..263.58 rows=1 width=153) (actual\n> time=653.053..1496.839 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..262.50 rows=1 width=154)\n> 
(actual time=565.627..1385.667 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..262.08 rows=1\n> width=163) (actual time=565.605..1383.686 rows=177 loops\n> =1)\n> -> Nested Loop (cost=0.00..261.67 rows=1\n> width=166) (actual time=530.928..1347.053 rows=177\n> loops=1)\n> -> Nested Loop (cost=0.00..261.26\n> rows=1 width=175) (actual time=530.866..1345.032\n> rows=177 loops=1)\n> -> Nested Loop\n> (cost=0.00..260.84 rows=1 width=178) (actual time=372.825..1184.\n> 668 rows=177 loops=1)\n> -> Nested Loop\n> (cost=0.00..250.33 rows=29 width=128) (actual time=317.897\n> ..534.645 rows=1011 loops=1)\n> -> Nested Loop\n> (cost=0.00..207.56 rows=3 width=92) (actual time=251\n> .014..408.868 rows=10 loops=1)\n> -> Nested\n> Loop (cost=0.00..163.54 rows=155 width=65) (actual\n> time=146.176..382.023 rows=615 loops=1)\n> ->\n> Index Scan using project_path on project (cost=0.00.\n> .8.27 rows=1 width=42) (actual time=76.581..76.583 rows=1 loops=1)\n> \n> Index Cond: ((path)::text = 'projects.psr-pub-13'::\n> text)\n> ->\n> Index Scan using folder_project on folder (cost=0.00\n> ..153.26 rows=161 width=32) (actual time=69.564..305.083 rows=615\n> loops=1)\n> \n> Index Cond: ((folder.project_id)::text = (project.\n> id)::text)\n> -> Index Scan\n> using tracker_pk on tracker (cost=0.00..0.27\n> rows=1 width=27) (actual time=0.043..0.043 rows=0 loops=615)\n> Index\n> Cond: ((tracker.id)::text = (folder.id)::text)\n> -> Index Scan using\n> item_folder on item (cost=0.00..14.11 rows=12\n> width=58) (actual time=7.603..12.532 rows=101 loops=10)\n> Index Cond:\n> ((item.folder_id)::text = (folder.id)::text)\n> Filter: (NOT\n> item.is_deleted)\n> -> Index Scan using\n> artifact_pk on artifact (cost=0.00..0.35 rows=1 width\n> =50) (actual time=0.642..0.642 rows=0 loops=1011)\n> Index Cond:\n> ((artifact.id)::text = (item.id)::text)\n> Filter:\n> (artifact.priority = 3)\n> -> Index Scan using\n> field_value_pk on field_value field_value2 (cost=0.00..0.40\n> rows=1 width=15) (actual time=0.904..0.905 rows=1 loops=177)\n> Index Cond:\n> ((field_value2.id)::text = (artifact.status_fv)::text)\n> -> Index Scan using field_value_pk on\n> field_value (cost=0.00..0.40 rows=1 width=9)\n> (actual time=0.010..0.010 rows=1 loops=177)\n> Index Cond:\n> ((field_value.id)::text = (artifact.group_fv)::text)\n> -> Index Scan using field_value_pk on\n> field_value field_value3 (cost=0.00..0.40 rows=1\n> width=15) (actual time=0.205..0.206 rows=1 loops=177)\n> Index Cond: ((field_value3.id)::text =\n> (artifact.category_fv)::text)\n> -> Index Scan using field_value_pk on field_value\n> field_value4 (cost=0.00..0.40 rows=1 width=9)\n> (actual time=0.010..0.010 rows=1 loops=177)\n> Index Cond: ((field_value4.id)::text =\n> (artifact.customer_fv)::text)\n> -> Index Scan using relation_target on relationship\n> (cost=0.00..1.07 rows=1 width=19) (actual time=0.\n> 627..0.627 rows=1 loops=177)\n> Index Cond: ((relationship.target_id)::text =\n> (artifact.id)::text)\n> Filter: ((NOT relationship.is_deleted) AND\n> ((relationship.relationship_type_name)::text =\n> 'ArtifactAssignment'::text))\n> -> Index Scan using sfuser_pk on sfuser sfuser2\n> (cost=0.00..0.28 rows=1 width=32) (actual time=0.318..0.318\n> rows=1 loops=177)\n> Index Cond: ((sfuser2.id)::text =\n> (relationship.origin_id)::text)\n> -> Index Scan using sfuser_pk on sfuser (cost=0.00..0.27 rows=1\n> width=32) (actual time=0.178..0.179 rows=1 loops=\n> 177)\n> Index Cond: ((sfuser.id)::text = (item.created_by_id)::text)\n> SubPlan 1\n> -> Index Scan using 
mntr_subscr_user on mntr_subscription\n> (cost=0.00..8.47 rows=1 width=9) (actual time=1.170..1.\n> 170 rows=0 loops=177)\n> Index Cond: ((($0)::text = (object_key)::text) AND\n> ((user_id)::text = 'user1439'::text))\n> Total runtime: 1793.203 ms\n> (42 rows)\n> \n> \n> Work_mem is set to 64MB\n> Shared_buffer to 240MB\n> Segment_size is 1GB\n> Wal_buffer is 10MB\n> \n> If you can give me some pointers, I would really appreciate.\n> Thanks,\n> Anne\n> \n>\n\nAnne,\n\nSo, results of \"explain analyze\" that you provided - is this the case, when the query considered \"slow\" (when you have many threads running)?\n\nLooks like optimizer clearly favors \"nested loops\" (never hash joins). What are the sizes of tables involved in this query?\n\nYou never told us about your server hardware configuration: # of CPUs, RAM size? Version of Postgres that you are using?\n\nAnd, (again) did you consider switching from \"client-side polling\" to using PgBouncer for pooling purposes? It is very \"light-weight\" tool and very easy to install/configure.\n\nRegards,\nIgor Neyman\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 14:05:53 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
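The table sizes Igor asks about can be read straight from the catalogs; for example (this assumes the tables live in the public schema, and the list of names is just taken from the query):

    SELECT c.relname,
           c.reltuples::bigint AS approx_rows,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'public'
      AND c.relname IN ('artifact', 'item', 'folder', 'project', 'relationship',
                        'field_value', 'mntr_subscription', 'sfuser', 'tracker')
    ORDER BY pg_total_relation_size(c.oid) DESC;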
{
"msg_contents": "On Fri, May 3, 2013 at 3:52 PM, Anne Rosset <[email protected]> wrote:\n> We saw a little bit improvement by increasing the min_pool_size but again I see a bigvariation in the time the query is executed. Here is the query:\n>\n> srdb=> explain analyze SELECT\n> psrdb-> artifact.id AS id,\n> psrdb-> artifact.priority AS priority,\n> psrdb-> project.path AS projectPathString,\n> psrdb-> project.title AS projectTitle,\n> psrdb-> folder.project_id AS projectId,\n> psrdb-> folder.title AS folderTitle,\n> psrdb-> item.folder_id AS folderId,\n> psrdb-> item.title AS title,\n> psrdb-> item.name AS name,\n> psrdb-> field_value2.value AS status,\n> psrdb-> field_value3.value AS category,\n> psrdb-> sfuser.username AS submittedByUsername,\n> psrdb-> sfuser.full_name AS submittedByFullname,\n> psrdb-> sfuser2.username AS assignedToUsername,\n> psrdb-> sfuser2.full_name AS assignedToFullname,\n> psrdb-> item.version AS version,\n> psrdb-> CASE when ((SELECT\n> psrdb(> mntr_subscription.user_id AS userId\n> psrdb(> FROM\n> psrdb(> mntr_subscription mntr_subscription\n> psrdb(> WHERE\n> psrdb(> artifact.id=mntr_subscription.object_key\n> psrdb(> AND mntr_subscription.user_id='user1439'\n> psrdb(> )= 'user1439') THEN 'user1439' ELSE null END AS monitoringUserId,\n> psrdb-> tracker.icon AS trackerIcon,\n> psrdb-> tracker.remaining_effort_disabled AS remainingEffortDisabled,\n> psrdb-> tracker.actual_effort_disabled AS actualEffortDisabled,\n> psrdb-> tracker.estimated_effort_disabled AS estimatedEffortDisabled\n> psrdb-> FROM\n> psrdb-> field_value field_value2,\n> psrdb-> field_value field_value,\n> psrdb-> sfuser sfuser2,\n> psrdb-> field_value field_value3,\n> psrdb-> field_value field_value4,\n> psrdb-> item item,\n> psrdb-> project project,\n> psrdb-> relationship relationship,\n> psrdb-> tracker tracker,\n> psrdb-> artifact artifact,\n> psrdb-> sfuser sfuser,\n> psrdb-> folder folder\n> psrdb-> WHERE\n> psrdb-> artifact.id=item.id\n> psrdb-> AND item.folder_id=folder.id\n> psrdb-> AND folder.project_id=project.id\n> psrdb-> AND artifact.group_fv=field_value.id\n> psrdb-> AND artifact.status_fv=field_value2.id\n> psrdb-> AND artifact.category_fv=field_value3.id\n> psrdb-> AND artifact.customer_fv=field_value4.id\n> psrdb-> AND item.created_by_id=sfuser.id\n> psrdb-> AND relationship.is_deleted=false\n> psrdb-> AND relationship.relationship_type_name='ArtifactAssignment'\n> psrdb-> AND relationship.origin_id=sfuser2.id\n> psrdb-> AND artifact.id=relationship.target_id\n> psrdb-> AND item.is_deleted=false\n> psrdb-> AND ((artifact.priority=3))\n> psrdb-> AND (project.path='projects.psr-pub-13')\n> psrdb-> AND item.folder_id=tracker.id\n> psrdb-> ;\n\n\n(*please* stop top-posting).\n\nWhat is the cpu profile of the machine while you are threading the\nquery out? if all cpu peggged @ or near 100%, it's possible seeing\nspinlock contention on some of the key index buffers -- but that's a\nlong shot. More likely it's planner malfeasance. Are you running\nthis *exact* query across all threads or are the specific parameters\nchanging (and if so, maybe instead the problem is that specific\narguments sets providing bad plans?)\n\nThis is a classic case of surrogate key design run amok, leading to\nbad performance via difficult to plan queries and/or poorly utilized\nindexes.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 11:11:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
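A crude way to answer Merlin's questions while the stress test is running is to watch what the backends are doing (the column names below are the 9.0 ones; later releases renamed current_query and waiting):

    -- run repeatedly during the test: how many backends are busy, and are any blocked?
    SELECT waiting, count(*)
    FROM pg_stat_activity
    WHERE current_query NOT IN ('<IDLE>', '<IDLE> in transaction')
    GROUP BY waiting;

Pair that with top or vmstat on the database host to see whether all eight cores are actually pegged.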
{
"msg_contents": "Hi Igor,\nThe explain analyze is from when there was no load.\n\nArtifact table: 251831 rows\nField_value table: 77378 rows\nMntr_subscription: 929071 rows\nRelationship: 270478 row\nFolder: 280356 rows\nItem: 716465 rows\nSfuser: 5733 rows\nProject: 1817 rows\n\n8CPUs\nRAM: 8GB\n\nPostgres version: 9.0.13\n\n And no we haven't switched or tested yet with pgbouncer. We would like to do a bit more analysis before trying this.\n\nThanks for your help,\nAnne\n\n\n-----Original Message-----\nFrom: Igor Neyman [mailto:[email protected]] \nSent: Monday, May 06, 2013 7:06 AM\nTo: Anne Rosset; [email protected]\nCc: [email protected]\nSubject: RE: [PERFORM] Deterioration in performance when query executed in multi threads\n\n\n\n> -----Original Message-----\n> From: Anne Rosset [mailto:[email protected]]\n> Sent: Friday, May 03, 2013 4:52 PM\n> To: Igor Neyman; [email protected]\n> Cc: [email protected]\n> Subject: RE: [PERFORM] Deterioration in performance when query \n> executed in multi threads\n> \n> We saw a little bit improvement by increasing the min_pool_size but \n> again I see a bigvariation in the time the query is executed. Here is \n> the query:\n> \n> srdb=> explain analyze SELECT\n> psrdb-> artifact.id AS id,\n> psrdb-> artifact.priority AS priority,\n> psrdb-> project.path AS projectPathString,\n> psrdb-> project.title AS projectTitle,\n> psrdb-> folder.project_id AS projectId,\n> psrdb-> folder.title AS folderTitle,\n> psrdb-> item.folder_id AS folderId,\n> psrdb-> item.title AS title,\n> psrdb-> item.name AS name,\n> psrdb-> field_value2.value AS status,\n> psrdb-> field_value3.value AS category,\n> psrdb-> sfuser.username AS submittedByUsername,\n> psrdb-> sfuser.full_name AS submittedByFullname,\n> psrdb-> sfuser2.username AS assignedToUsername,\n> psrdb-> sfuser2.full_name AS assignedToFullname,\n> psrdb-> item.version AS version,\n> psrdb-> CASE when ((SELECT\n> psrdb(> mntr_subscription.user_id AS userId\n> psrdb(> FROM\n> psrdb(> mntr_subscription mntr_subscription\n> psrdb(> WHERE\n> psrdb(> artifact.id=mntr_subscription.object_key\n> psrdb(> AND mntr_subscription.user_id='user1439'\n> psrdb(> )= 'user1439') THEN 'user1439' ELSE null END AS \n> monitoringUserId,\n> psrdb-> tracker.icon AS trackerIcon,\n> psrdb-> tracker.remaining_effort_disabled AS\n> remainingEffortDisabled,\n> psrdb-> tracker.actual_effort_disabled AS actualEffortDisabled,\n> psrdb-> tracker.estimated_effort_disabled AS\n> estimatedEffortDisabled\n> psrdb-> FROM\n> psrdb-> field_value field_value2,\n> psrdb-> field_value field_value,\n> psrdb-> sfuser sfuser2,\n> psrdb-> field_value field_value3,\n> psrdb-> field_value field_value4,\n> psrdb-> item item,\n> psrdb-> project project,\n> psrdb-> relationship relationship,\n> psrdb-> tracker tracker,\n> psrdb-> artifact artifact,\n> psrdb-> sfuser sfuser,\n> psrdb-> folder folder\n> psrdb-> WHERE\n> psrdb-> artifact.id=item.id\n> psrdb-> AND item.folder_id=folder.id\n> psrdb-> AND folder.project_id=project.id\n> psrdb-> AND artifact.group_fv=field_value.id\n> psrdb-> AND artifact.status_fv=field_value2.id\n> psrdb-> AND artifact.category_fv=field_value3.id\n> psrdb-> AND artifact.customer_fv=field_value4.id\n> psrdb-> AND item.created_by_id=sfuser.id\n> psrdb-> AND relationship.is_deleted=false\n> psrdb-> AND\n> relationship.relationship_type_name='ArtifactAssignment'\n> psrdb-> AND relationship.origin_id=sfuser2.id\n> psrdb-> AND artifact.id=relationship.target_id\n> psrdb-> AND item.is_deleted=false\n> psrdb-> AND 
((artifact.priority=3))\n> psrdb-> AND (project.path='projects.psr-pub-13')\n> psrdb-> AND item.folder_id=tracker.id ;\n> \n> QUERY PLAN\n> \n> ----------------------------------------------------------------------\n> -\n> -------------------------------------------------\n> ----------------------------------------------------------------------\n> -\n> ----\n> Nested Loop (cost=0.00..272.62 rows=1 width=181) (actual\n> time=805.934..1792.596 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..263.87 rows=1 width=167) (actual\n> time=707.739..1553.348 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..263.58 rows=1 width=153) (actual\n> time=653.053..1496.839 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..262.50 rows=1 width=154) \n> (actual time=565.627..1385.667 rows=177 loops=1)\n> \n> -> Nested Loop (cost=0.00..262.08 rows=1\n> width=163) (actual time=565.605..1383.686 rows=177 loops\n> =1)\n> -> Nested Loop (cost=0.00..261.67 rows=1\n> width=166) (actual time=530.928..1347.053 rows=177\n> loops=1)\n> -> Nested Loop (cost=0.00..261.26\n> rows=1 width=175) (actual time=530.866..1345.032\n> rows=177 loops=1)\n> -> Nested Loop\n> (cost=0.00..260.84 rows=1 width=178) (actual time=372.825..1184.\n> 668 rows=177 loops=1)\n> -> Nested Loop\n> (cost=0.00..250.33 rows=29 width=128) (actual time=317.897\n> ..534.645 rows=1011 loops=1)\n> -> Nested Loop\n> (cost=0.00..207.56 rows=3 width=92) (actual time=251\n> .014..408.868 rows=10 loops=1)\n> -> Nested \n> Loop (cost=0.00..163.54 rows=155 width=65) (actual\n> time=146.176..382.023 rows=615 loops=1)\n> -> \n> Index Scan using project_path on project (cost=0.00.\n> .8.27 rows=1 width=42) (actual time=76.581..76.583 rows=1 loops=1)\n> \n> Index Cond: ((path)::text = 'projects.psr-pub-13'::\n> text)\n> -> \n> Index Scan using folder_project on folder (cost=0.00\n> ..153.26 rows=161 width=32) (actual time=69.564..305.083 rows=615\n> loops=1)\n> \n> Index Cond: ((folder.project_id)::text = (project.\n> id)::text)\n> -> Index \n> Scan using tracker_pk on tracker (cost=0.00..0.27\n> rows=1 width=27) (actual time=0.043..0.043 rows=0 loops=615)\n> Index\n> Cond: ((tracker.id)::text = (folder.id)::text)\n> -> Index Scan \n> using item_folder on item (cost=0.00..14.11 rows=12\n> width=58) (actual time=7.603..12.532 rows=101 loops=10)\n> Index Cond:\n> ((item.folder_id)::text = (folder.id)::text)\n> Filter: (NOT\n> item.is_deleted)\n> -> Index Scan using \n> artifact_pk on artifact (cost=0.00..0.35 rows=1 width\n> =50) (actual time=0.642..0.642 rows=0 loops=1011)\n> Index Cond:\n> ((artifact.id)::text = (item.id)::text)\n> Filter:\n> (artifact.priority = 3)\n> -> Index Scan using \n> field_value_pk on field_value field_value2 (cost=0.00..0.40\n> rows=1 width=15) (actual time=0.904..0.905 rows=1 loops=177)\n> Index Cond:\n> ((field_value2.id)::text = (artifact.status_fv)::text)\n> -> Index Scan using field_value_pk \n> on field_value (cost=0.00..0.40 rows=1 width=9) (actual \n> time=0.010..0.010 rows=1 loops=177)\n> Index Cond:\n> ((field_value.id)::text = (artifact.group_fv)::text)\n> -> Index Scan using field_value_pk on \n> field_value field_value3 (cost=0.00..0.40 rows=1\n> width=15) (actual time=0.205..0.206 rows=1 loops=177)\n> Index Cond: ((field_value3.id)::text \n> =\n> (artifact.category_fv)::text)\n> -> Index Scan using field_value_pk on \n> field_value\n> field_value4 (cost=0.00..0.40 rows=1 width=9) (actual \n> time=0.010..0.010 rows=1 loops=177)\n> Index Cond: ((field_value4.id)::text =\n> (artifact.customer_fv)::text)\n> -> Index Scan 
using relation_target on relationship\n> (cost=0.00..1.07 rows=1 width=19) (actual time=0.\n> 627..0.627 rows=1 loops=177)\n> Index Cond: ((relationship.target_id)::text =\n> (artifact.id)::text)\n> Filter: ((NOT relationship.is_deleted) AND \n> ((relationship.relationship_type_name)::text =\n> 'ArtifactAssignment'::text))\n> -> Index Scan using sfuser_pk on sfuser sfuser2\n> (cost=0.00..0.28 rows=1 width=32) (actual time=0.318..0.318\n> rows=1 loops=177)\n> Index Cond: ((sfuser2.id)::text =\n> (relationship.origin_id)::text)\n> -> Index Scan using sfuser_pk on sfuser (cost=0.00..0.27 rows=1\n> width=32) (actual time=0.178..0.179 rows=1 loops=\n> 177)\n> Index Cond: ((sfuser.id)::text = (item.created_by_id)::text)\n> SubPlan 1\n> -> Index Scan using mntr_subscr_user on mntr_subscription\n> (cost=0.00..8.47 rows=1 width=9) (actual time=1.170..1.\n> 170 rows=0 loops=177)\n> Index Cond: ((($0)::text = (object_key)::text) AND \n> ((user_id)::text = 'user1439'::text)) Total runtime: 1793.203 ms\n> (42 rows)\n> \n> \n> Work_mem is set to 64MB\n> Shared_buffer to 240MB\n> Segment_size is 1GB\n> Wal_buffer is 10MB\n> \n> If you can give me some pointers, I would really appreciate.\n> Thanks,\n> Anne\n> \n>\n\nAnne,\n\nSo, results of \"explain analyze\" that you provided - is this the case, when the query considered \"slow\" (when you have many threads running)?\n\nLooks like optimizer clearly favors \"nested loops\" (never hash joins). What are the sizes of tables involved in this query?\n\nYou never told us about your server hardware configuration: # of CPUs, RAM size? Version of Postgres that you are using?\n\nAnd, (again) did you consider switching from \"client-side polling\" to using PgBouncer for pooling purposes? It is very \"light-weight\" tool and very easy to install/configure.\n\nRegards,\nIgor Neyman\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 17:00:58 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
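A minimal sketch of how the row counts and table sizes Igor asks for can be gathered in one query; the table names are the ones visible in Anne's plan, and n_live_tup is only the statistics collector's estimate, so an exact figure would need count(*).

-- Approximate row counts and on-disk sizes for the tables in the plan
SELECT relname,
       n_live_tup AS approx_rows,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE relname IN ('artifact', 'field_value', 'mntr_subscription',
                  'relationship', 'folder', 'item', 'sfuser', 'project')
ORDER BY pg_total_relation_size(relid) DESC;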
{
"msg_contents": "Anne Rosset, 06.05.2013 19:00:\n> Postgres version: 9.0.13\n>\n>> Work_mem is set to 64MB\n>> Shared_buffer to 240MB\n>> Segment_size is 1GB\n>> Wal_buffer is 10MB\n>\n> Artifact table: 251831 rows\n> Field_value table: 77378 rows\n> Mntr_subscription: 929071 rows\n> Relationship: 270478 row\n> Folder: 280356 rows\n> Item: 716465 rows\n> Sfuser: 5733 rows\n> Project: 1817 rows\n>\n> 8CPUs\n> RAM: 8GB\n>\n\nWith 8GB RAM you should be able to increase shared_buffer to 1GB or maybe even higher especially if this is a dedicated server.\n240MB is pretty conservative for a server with that amount of RAM (unless you have many other applications running on that box)\n\nAlso what are the values for\n\ncpu_tuple_cost\nseq_page_cost\nrandom_page_cost\neffective_cache_size\n\nWhat kind of harddisk is in the server? SSD? Regular ones (spinning disks)?\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 May 2013 19:11:43 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
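One way to answer Thomas's settings questions in a single query instead of several SHOW commands; this is only a convenience sketch, and the IN list can be extended with any other parameter of interest.

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('cpu_tuple_cost', 'seq_page_cost', 'random_page_cost',
               'effective_cache_size', 'shared_buffers', 'work_mem');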
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Anne Rosset [mailto:[email protected]]\n> Sent: Monday, May 06, 2013 1:01 PM\n> To: Igor Neyman; [email protected]\n> Cc: [email protected]\n> Subject: RE: [PERFORM] Deterioration in performance when query executed\n> in multi threads\n> \n> Hi Igor,\n> The explain analyze is from when there was no load.\n> \n> Artifact table: 251831 rows\n> Field_value table: 77378 rows\n> Mntr_subscription: 929071 rows\n> Relationship: 270478 row\n> Folder: 280356 rows\n> Item: 716465 rows\n> Sfuser: 5733 rows\n> Project: 1817 rows\n> \n> 8CPUs\n> RAM: 8GB\n> \n> Postgres version: 9.0.13\n> \n> And no we haven't switched or tested yet with pgbouncer. We would\n> like to do a bit more analysis before trying this.\n> \n> Thanks for your help,\n> Anne\n> \n> \n\n\nAnne,\n\nJust as a quick test, try in the psql session/connection locally change enable_nestloop setting and run your query:\n\nset enable_nestloop = off;\nexplain analyze <your_query>;\n\njust to see if different execution plan will be better and optimizer needs to be \"convinced\" to use this different plan.\nPlease post what you get with the modified setting.\n\nAlso, what is the setting for effective_cache_size in postgresql.conf?\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 17:12:44 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
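A transaction-scoped variation of the test Igor describes, so the override cannot leak into other work on the same connection; the SELECT is only a stand-in (the thread shows the plan, not the full statement), so the real query should be pasted in its place.

BEGIN;
SET LOCAL enable_nestloop = off;   -- reverts at ROLLBACK/COMMIT
EXPLAIN ANALYZE
SELECT artifact.id                 -- placeholder query
FROM artifact
JOIN item ON item.id = artifact.id
WHERE artifact.priority = 3;
ROLLBACK;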
{
"msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] On Behalf Of Thomas Kellerer\r\n> Sent: Monday, May 06, 2013 1:12 PM\r\n> To: [email protected]\r\n> Subject: Re: [PERFORM] Deterioration in performance when query executed\r\n> in multi threads\r\n> \r\n> Anne Rosset, 06.05.2013 19:00:\r\n> > Postgres version: 9.0.13\r\n> >\r\n> >> Work_mem is set to 64MB\r\n> >> Shared_buffer to 240MB\r\n> >> Segment_size is 1GB\r\n> >> Wal_buffer is 10MB\r\n> >\r\n> > Artifact table: 251831 rows\r\n> > Field_value table: 77378 rows\r\n> > Mntr_subscription: 929071 rows\r\n> > Relationship: 270478 row\r\n> > Folder: 280356 rows\r\n> > Item: 716465 rows\r\n> > Sfuser: 5733 rows\r\n> > Project: 1817 rows\r\n> >\r\n> > 8CPUs\r\n> > RAM: 8GB\r\n> >\r\n> \r\n> With 8GB RAM you should be able to increase shared_buffer to 1GB or\r\n> maybe even higher especially if this is a dedicated server.\r\n> 240MB is pretty conservative for a server with that amount of RAM\r\n> (unless you have many other applications running on that box)\r\n> \r\n> Also what are the values for\r\n> \r\n> cpu_tuple_cost\r\n> seq_page_cost\r\n> random_page_cost\r\n> effective_cache_size\r\n> \r\n> What kind of harddisk is in the server? SSD? Regular ones (spinning\r\n> disks)?\r\n> \r\n> \r\n> \r\n\r\n\r\nAlso, with 8 CPUs, your max connection_pool size shouldn't much bigger than 20.\r\n\r\nIgor Neyman\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 17:25:13 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
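A rough, database-side check of how many backends the client pool is really holding open versus how many are busy at a given instant, which is the number the ~20 figure is about; the column names are those of 9.0/9.1 (current_query became query in 9.2), and idle-in-transaction sessions are counted as active here.

SELECT datname,
       count(*) AS backends,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
FROM pg_stat_activity
GROUP BY datname;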
{
"msg_contents": "Hi Thomas,\r\nIt is not a dedicated box (we have Jboss running too).\r\n\r\ncpu_tuple_cost | 0.01\r\nseq_page_cost | 1\r\nrandom_page_cost | 4\r\neffective_cache_size | 512MB\r\n\r\nWe have the data directory on nfs (rw,intr,hard,tcp,rsize=32768,wsize=32768,nfsvers=3,tcp). Note that we have also tested putting the data directory on local disk and didn't find a big improvement.\r\n\r\nThanks,\r\nAnne\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Thomas Kellerer\r\nSent: Monday, May 06, 2013 10:12 AM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Deterioration in performance when query executed in multi threads\r\n\r\nAnne Rosset, 06.05.2013 19:00:\r\n> Postgres version: 9.0.13\r\n>\r\n>> Work_mem is set to 64MB\r\n>> Shared_buffer to 240MB\r\n>> Segment_size is 1GB\r\n>> Wal_buffer is 10MB\r\n>\r\n> Artifact table: 251831 rows\r\n> Field_value table: 77378 rows\r\n> Mntr_subscription: 929071 rows\r\n> Relationship: 270478 row\r\n> Folder: 280356 rows\r\n> Item: 716465 rows\r\n> Sfuser: 5733 rows\r\n> Project: 1817 rows\r\n>\r\n> 8CPUs\r\n> RAM: 8GB\r\n>\r\n\r\nWith 8GB RAM you should be able to increase shared_buffer to 1GB or maybe even higher especially if this is a dedicated server.\r\n240MB is pretty conservative for a server with that amount of RAM (unless you have many other applications running on that box)\r\n\r\nAlso what are the values for\r\n\r\ncpu_tuple_cost\r\nseq_page_cost\r\nrandom_page_cost\r\neffective_cache_size\r\n\r\nWhat kind of harddisk is in the server? SSD? Regular ones (spinning disks)?\r\n\r\n\r\n\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 21:46:52 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "Hi Igor,\nResult with enable_nestloop off:\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------\n Hash Join (cost=49946.49..58830.02 rows=1 width=181) (actual time=2189.474..2664.888 rows=180 loops=1)\n Hash Cond: ((item.created_by_id)::text = (sfuser.id)::text)\n -> Hash Join (cost=49470.50..58345.53 rows=1 width=167) (actual time=1931.870..2404.745 rows=180 loops=1)\n Hash Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)\n -> Hash Join (cost=48994.51..57869.52 rows=1 width=153) (actual time=1927.603..2400.334 rows=180 loops=1)\n Hash Cond: ((relationship.target_id)::text = (artifact.id)::text)\n -> Seq Scan on relationship (cost=0.00..7973.38 rows=240435 width=19) (actual time=0.036..492.442 rows=241285 loops=1)\n Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))\n -> Hash (cost=48994.49..48994.49 rows=1 width=154) (actual time=1858.350..1858.350 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Hash Join (cost=47260.54..48994.49 rows=1 width=154) (actual time=1836.495..1858.151 rows=180 loops=1)\n Hash Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)\n -> Seq Scan on field_value field_value4 (cost=0.00..1443.78 rows=77378 width=9) (actual time=22.104..30.694 rows=77378 loops=1)\n -> Hash (cost=47260.52..47260.52 rows=1 width=163) (actual time=1814.005..1814.005 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 35kB\n -> Hash Join (cost=45526.57..47260.52 rows=1 width=163) (actual time=1790.908..1813.780 rows=180 loops=1)\n Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Seq Scan on field_value field_value3 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)\n -> Hash (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 36kB\n -> Hash Join (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)\n Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Seq Scan on field_value (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)\n -> Hash (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)\n Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Seq Scan on field_value field_value2 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)\n -> Hash (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)\n Hash Cond: ((item.id)::text = (artifact.id)::text)\n -> Hash Join (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)\n Hash Cond: ((item.folder_id)::text = (folder.id)::text)\n -> Seq Scan on item (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=12112.27..12112.27 rows=3 width=92) (actual time=189.285..189.285 rows=10 
loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Hash Join (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)\n Hash Cond: ((folder.id)::text = (tracker.id)::text)\n -> Hash Join (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Seq Scan on folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using project_path on project (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=\n1 loops=1)\n Index Cond: ((path)::text = 'projects.psr-pub-13'::text)\n -> Hash (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 267kB\n -> Seq Scan on tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)\n-> Hash Join (cost=45526.57..47260.52 rows=1 width=163) (actual time=1790.908..1813.780 rows=180 loops=1)\n Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Seq Scan on field_value field_value3 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)\n -> Hash (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 36kB\n -> Hash Join (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)\n Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Seq Scan on field_value (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)\n -> Hash (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)\n Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Seq Scan on field_value field_value2 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)\n -> Hash (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)\n Hash Cond: ((item.id)::text = (artifact.id)::text)\n -> Hash Join (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)\n Hash Cond: ((item.folder_id)::text = (folder.id)::text)\n -> Seq Scan on item (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=12112.27..12112.27 rows=3 width=92) (actual time=189.285..189.285 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Hash Join (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)\n Hash Cond: ((folder.id)::text = (tracker.id)::text)\n -> Hash Join (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Seq Scan on folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)\n Buckets: 1024 
Batches: 1 Memory Usage: 1kB\n -> Index Scan using project_path on project (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=\n1 loops=1)\n Index Cond: ((path)::text = 'projects.psr-pub-13'::text)\n -> Hash (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 267kB\n -> Seq Scan on tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)\n -> Hash (cost=5527.28..5527.28 rows=32000 width=50) (actual time=788.470..788.470 rows=31183 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 2662kB\n -> Bitmap Heap Scan on artifact (cost=628.28..5527.28 rows=32000 width=50) (actual time=83.568..771.651 rows=31183 loops=1)\n Recheck Cond: (priority = 3)\n -> Bitmap Index Scan on artifact_priority (cost=0.00..620.28 rows=32000 width=0) (actual time=82.877..82.877 rows=31260 loops=1\n)\n Index Cond: (priority = 3)\n -> Hash (cost=404.33..404.33 rows=5733 width=32) (actual time=4.232..4.232 rows=5733 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 364kB\n -> Seq Scan on sfuser sfuser2 (cost=0.00..404.33 rows=5733 width=32) (actual time=0.006..1.941 rows=5733 loops=1)\n -> Hash (cost=404.33..404.33 rows=5733 width=32) (actual time=257.485..257.485 rows=5733 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 364kB\n -> Seq Scan on sfuser (cost=0.00..404.33 rows=5733 width=32) (actual time=9.555..255.085 rows=5733 loops=1)\n SubPlan 1\n -> Index Scan using mntr_subscr_user on mntr_subscription (cost=0.00..8.47 rows=1 width=9) (actual time=0.013..0.013 rows=0 loops=180)\n Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text))\n Total runtime: 2666.011 ms\n\n\neffective_cache_size | 512MB\n\nThanks,\nAnne\n\n-----Original Message-----\nFrom: Igor Neyman [mailto:[email protected]] \nSent: Monday, May 06, 2013 10:13 AM\nTo: Anne Rosset; [email protected]\nCc: [email protected]\nSubject: RE: [PERFORM] Deterioration in performance when query executed in multi threads\n\n\n\n> -----Original Message-----\n> From: Anne Rosset [mailto:[email protected]]\n> Sent: Monday, May 06, 2013 1:01 PM\n> To: Igor Neyman; [email protected]\n> Cc: [email protected]\n> Subject: RE: [PERFORM] Deterioration in performance when query \n> executed in multi threads\n> \n> Hi Igor,\n> The explain analyze is from when there was no load.\n> \n> Artifact table: 251831 rows\n> Field_value table: 77378 rows\n> Mntr_subscription: 929071 rows\n> Relationship: 270478 row\n> Folder: 280356 rows\n> Item: 716465 rows\n> Sfuser: 5733 rows\n> Project: 1817 rows\n> \n> 8CPUs\n> RAM: 8GB\n> \n> Postgres version: 9.0.13\n> \n> And no we haven't switched or tested yet with pgbouncer. We would \n> like to do a bit more analysis before trying this.\n> \n> Thanks for your help,\n> Anne\n> \n> \n\n\nAnne,\n\nJust as a quick test, try in the psql session/connection locally change enable_nestloop setting and run your query:\n\nset enable_nestloop = off;\nexplain analyze <your_query>;\n\njust to see if different execution plan will be better and optimizer needs to be \"convinced\" to use this different plan.\nPlease post what you get with the modified setting.\n\nAlso, what is the setting for effective_cache_size in postgresql.conf?\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 May 2013 21:51:55 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "Anne, please read the comment at the bottom of this post!\n\n\nOn 07/05/13 09:46, Anne Rosset wrote:\n> Hi Thomas,\n> It is not a dedicated box (we have Jboss running too).\n>\n> cpu_tuple_cost | 0.01\n> seq_page_cost | 1\n> random_page_cost | 4\n> effective_cache_size | 512MB\n>\n> We have the data directory on nfs (rw,intr,hard,tcp,rsize=32768,wsize=32768,nfsvers=3,tcp). Note that we have also tested putting the data directory on local disk and didn't find a big improvement.\n>\n> Thanks,\n> Anne\n>\n>\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Thomas Kellerer\n> Sent: Monday, May 06, 2013 10:12 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Deterioration in performance when query executed in multi threads\n>\n> Anne Rosset, 06.05.2013 19:00:\n>> Postgres version: 9.0.13\n>>\n>>> Work_mem is set to 64MB\n>>> Shared_buffer to 240MB\n>>> Segment_size is 1GB\n>>> Wal_buffer is 10MB\n>> Artifact table: 251831 rows\n>> Field_value table: 77378 rows\n>> Mntr_subscription: 929071 rows\n>> Relationship: 270478 row\n>> Folder: 280356 rows\n>> Item: 716465 rows\n>> Sfuser: 5733 rows\n>> Project: 1817 rows\n>>\n>> 8CPUs\n>> RAM: 8GB\n>>\n> With 8GB RAM you should be able to increase shared_buffer to 1GB or maybe even higher especially if this is a dedicated server.\n> 240MB is pretty conservative for a server with that amount of RAM (unless you have many other applications running on that box)\n>\n> Also what are the values for\n>\n> cpu_tuple_cost\n> seq_page_cost\n> random_page_cost\n> effective_cache_size\n>\n> What kind of harddisk is in the server? SSD? Regular ones (spinning disks)?\n>\n>\nThe policy on this list is to add comments at the bottom, so people can \nfirst read what you are replying to.\n\nThough you can intersperse comments where that is apprporiate.\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 May 2013 10:11:30 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
{
"msg_contents": "\n________________________________________\nFrom: Anne Rosset [[email protected]]\nSent: Monday, May 06, 2013 5:51 PM\nTo: Igor Neyman; [email protected]\nCc: [email protected]\nSubject: RE: [PERFORM] Deterioration in performance when query executed in multi threads\n\nHi Igor,\nResult with enable_nestloop off:\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------\n Hash Join (cost=49946.49..58830.02 rows=1 width=181) (actual time=2189.474..2664.888 rows=180 loops=1)\n Hash Cond: ((item.created_by_id)::text = (sfuser.id)::text)\n -> Hash Join (cost=49470.50..58345.53 rows=1 width=167) (actual time=1931.870..2404.745 rows=180 loops=1)\n Hash Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)\n -> Hash Join (cost=48994.51..57869.52 rows=1 width=153) (actual time=1927.603..2400.334 rows=180 loops=1)\n Hash Cond: ((relationship.target_id)::text = (artifact.id)::text)\n -> Seq Scan on relationship (cost=0.00..7973.38 rows=240435 width=19) (actual time=0.036..492.442 rows=241285 loops=1)\n Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))\n -> Hash (cost=48994.49..48994.49 rows=1 width=154) (actual time=1858.350..1858.350 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Hash Join (cost=47260.54..48994.49 rows=1 width=154) (actual time=1836.495..1858.151 rows=180 loops=1)\n Hash Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)\n -> Seq Scan on field_value field_value4 (cost=0.00..1443.78 rows=77378 width=9) (actual time=22.104..30.694 rows=77378 loops=1)\n -> Hash (cost=47260.52..47260.52 rows=1 width=163) (actual time=1814.005..1814.005 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 35kB\n -> Hash Join (cost=45526.57..47260.52 rows=1 width=163) (actual time=1790.908..1813.780 rows=180 loops=1)\n Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Seq Scan on field_value field_value3 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)\n -> Hash (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 36kB\n -> Hash Join (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)\n Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Seq Scan on field_value (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)\n -> Hash (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)\n Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Seq Scan on field_value field_value2 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)\n -> Hash (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)\n Hash Cond: ((item.id)::text = (artifact.id)::text)\n -> Hash Join (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)\n Hash Cond: 
((item.folder_id)::text = (folder.id)::text)\n -> Seq Scan on item (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=12112.27..12112.27 rows=3 width=92) (actual time=189.285..189.285 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Hash Join (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)\n Hash Cond: ((folder.id)::text = (tracker.id)::text)\n -> Hash Join (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Seq Scan on folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using project_path on project (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=\n1 loops=1)\n Index Cond: ((path)::text = 'projects.psr-pub-13'::text)\n -> Hash (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 267kB\n -> Seq Scan on tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)\n-> Hash Join (cost=45526.57..47260.52 rows=1 width=163) (actual time=1790.908..1813.780 rows=180 loops=1)\n Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Seq Scan on field_value field_value3 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)\n -> Hash (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 36kB\n -> Hash Join (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)\n Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Seq Scan on field_value (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)\n -> Hash (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)\n Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Seq Scan on field_value field_value2 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)\n -> Hash (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)\n Hash Cond: ((item.id)::text = (artifact.id)::text)\n -> Hash Join (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)\n Hash Cond: ((item.folder_id)::text = (folder.id)::text)\n -> Seq Scan on item (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=12112.27..12112.27 rows=3 width=92) (actual time=189.285..189.285 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Hash Join (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)\n Hash Cond: ((folder.id)::text = (tracker.id)::text)\n -> Hash Join (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)\n 
Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Seq Scan on folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using project_path on project (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=\n1 loops=1)\n Index Cond: ((path)::text = 'projects.psr-pub-13'::text)\n -> Hash (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 267kB\n -> Seq Scan on tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)\n -> Hash (cost=5527.28..5527.28 rows=32000 width=50) (actual time=788.470..788.470 rows=31183 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 2662kB\n -> Bitmap Heap Scan on artifact (cost=628.28..5527.28 rows=32000 width=50) (actual time=83.568..771.651 rows=31183 loops=1)\n Recheck Cond: (priority = 3)\n -> Bitmap Index Scan on artifact_priority (cost=0.00..620.28 rows=32000 width=0) (actual time=82.877..82.877 rows=31260 loops=1\n)\n Index Cond: (priority = 3)\n -> Hash (cost=404.33..404.33 rows=5733 width=32) (actual time=4.232..4.232 rows=5733 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 364kB\n -> Seq Scan on sfuser sfuser2 (cost=0.00..404.33 rows=5733 width=32) (actual time=0.006..1.941 rows=5733 loops=1)\n -> Hash (cost=404.33..404.33 rows=5733 width=32) (actual time=257.485..257.485 rows=5733 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 364kB\n -> Seq Scan on sfuser (cost=0.00..404.33 rows=5733 width=32) (actual time=9.555..255.085 rows=5733 loops=1)\n SubPlan 1\n -> Index Scan using mntr_subscr_user on mntr_subscription (cost=0.00..8.47 rows=1 width=9) (actual time=0.013..0.013 rows=0 loops=180)\n Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text))\n Total runtime: 2666.011 ms\n\n\neffective_cache_size | 512MB\n\nThanks,\nAnne\n\n--------------------------------------------------------------\n\nAnne,\n\nSo, this shows that original execution plan (with nested loops) was not that bad.\n\nStill, considering your hardware config (and I think, you mentioned it's a \"dedicated\" database server), I'd set:\n\nbuffer_cache = 3GB\neffective_cache_size = 7GB\n\nbuffer_cache is \"global\" setting and requires Postgres restart after changing in postgresql.conf.\n\nBut, besides postgres config parameters, it seems your particular problem is in correct setting of conection pooler.\n\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 May 2013 02:03:56 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
},
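For reference, buffer_cache above presumably means shared_buffers, which is the actual parameter name; shared_buffers can only be raised in postgresql.conf followed by a restart on 9.0, while effective_cache_size can also be tried per session. Since Anne mentioned earlier that JBoss shares the 8 GB box, the suggested figures may need scaling down.

SHOW shared_buffers;
SHOW effective_cache_size;
SET effective_cache_size = '7GB';   -- session-level trial of the suggested value
-- postgresql.conf equivalents (restart needed for shared_buffers):
--   shared_buffers = 3GB
--   effective_cache_size = 7GB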
{
"msg_contents": "Thanks Igor.\nI am going to test with pgbouncer. Will let you know.\n\nThanks,\nAnne\n\n\n\n-----Original Message-----\nFrom: Igor Neyman [mailto:[email protected]] \nSent: Monday, May 06, 2013 7:04 PM\nTo: Anne Rosset; [email protected]\nCc: [email protected]\nSubject: RE: [PERFORM] Deterioration in performance when query executed in multi threads\n\n\n________________________________________\nFrom: Anne Rosset [[email protected]]\nSent: Monday, May 06, 2013 5:51 PM\nTo: Igor Neyman; [email protected]\nCc: [email protected]\nSubject: RE: [PERFORM] Deterioration in performance when query executed in multi threads\n\nHi Igor,\nResult with enable_nestloop off:\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------\n Hash Join (cost=49946.49..58830.02 rows=1 width=181) (actual time=2189.474..2664.888 rows=180 loops=1)\n Hash Cond: ((item.created_by_id)::text = (sfuser.id)::text)\n -> Hash Join (cost=49470.50..58345.53 rows=1 width=167) (actual time=1931.870..2404.745 rows=180 loops=1)\n Hash Cond: ((relationship.origin_id)::text = (sfuser2.id)::text)\n -> Hash Join (cost=48994.51..57869.52 rows=1 width=153) (actual time=1927.603..2400.334 rows=180 loops=1)\n Hash Cond: ((relationship.target_id)::text = (artifact.id)::text)\n -> Seq Scan on relationship (cost=0.00..7973.38 rows=240435 width=19) (actual time=0.036..492.442 rows=241285 loops=1)\n Filter: ((NOT is_deleted) AND ((relationship_type_name)::text = 'ArtifactAssignment'::text))\n -> Hash (cost=48994.49..48994.49 rows=1 width=154) (actual time=1858.350..1858.350 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Hash Join (cost=47260.54..48994.49 rows=1 width=154) (actual time=1836.495..1858.151 rows=180 loops=1)\n Hash Cond: ((field_value4.id)::text = (artifact.customer_fv)::text)\n -> Seq Scan on field_value field_value4 (cost=0.00..1443.78 rows=77378 width=9) (actual time=22.104..30.694 rows=77378 loops=1)\n -> Hash (cost=47260.52..47260.52 rows=1 width=163) (actual time=1814.005..1814.005 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 35kB\n -> Hash Join (cost=45526.57..47260.52 rows=1 width=163) (actual time=1790.908..1813.780 rows=180 loops=1)\n Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Seq Scan on field_value field_value3 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)\n -> Hash (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 36kB\n -> Hash Join (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)\n Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Seq Scan on field_value (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)\n -> Hash (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)\n Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Seq Scan on field_value field_value2 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)\n -> Hash (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 
rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)\n Hash Cond: ((item.id)::text = (artifact.id)::text)\n -> Hash Join (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)\n Hash Cond: ((item.folder_id)::text = (folder.id)::text)\n -> Seq Scan on item (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=12112.27..12112.27 rows=3 width=92) (actual time=189.285..189.285 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Hash Join (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)\n Hash Cond: ((folder.id)::text = (tracker.id)::text)\n -> Hash Join (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Seq Scan on folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using project_path on project (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=\n1 loops=1)\n Index Cond: ((path)::text = 'projects.psr-pub-13'::text)\n -> Hash (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 267kB\n -> Seq Scan on tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)\n-> Hash Join (cost=45526.57..47260.52 rows=1 width=163) (actual \n-> time=1790.908..1813.780 rows=180 loops=1)\n Hash Cond: ((field_value3.id)::text = (artifact.category_fv)::text)\n -> Seq Scan on field_value field_value3 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..9.262 rows=77378 loops=1)\n -> Hash (cost=45526.55..45526.55 rows=1 width=166) (actual time=1790.505..1790.505 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 36kB\n -> Hash Join (cost=43792.60..45526.55 rows=1 width=166) (actual time=1768.362..1790.304 rows=180 loops=1)\n Hash Cond: ((field_value.id)::text = (artifact.group_fv)::text)\n -> Seq Scan on field_value (cost=0.00..1443.78 rows=77378 width=9) (actual time=0.002..8.687 rows=77378 loops=1)\n -> Hash (cost=43792.58..43792.58 rows=1 width=175) (actual time=1767.928..1767.928 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=42058.63..43792.58 rows=1 width=175) (actual time=1499.822..1767.734 rows=180 loops=1)\n Hash Cond: ((field_value2.id)::text = (artifact.status_fv)::text)\n -> Seq Scan on field_value field_value2 (cost=0.00..1443.78 rows=77378 width=15) (actual time=0.002..261.082 rows=77378 loops=1)\n -> Hash (cost=42058.61..42058.61 rows=1 width=178) (actual time=1492.707..1492.707 rows=180 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 38kB\n -> Hash Join (cost=18039.59..42058.61 rows=1 width=178) (actual time=1175.659..1492.482 rows=180 loops=1)\n Hash Cond: ((item.id)::text = (artifact.id)::text)\n -> Hash Join (cost=12112.31..36130.95 rows=30 width=128) (actual time=304.035..702.745 rows=1015 loops=1)\n Hash Cond: ((item.folder_id)::text = (folder.id)::text)\n -> Seq Scan on item (cost=0.00..21381.10 rows=703322 width=58) (actual time=0.020..382.847 rows=704018 loops=1)\n Filter: (NOT is_deleted)\n -> Hash (cost=12112.27..12112.27 rows=3 width=92) 
(actual time=189.285..189.285 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Hash Join (cost=195.86..12112.27 rows=3 width=92) (actual time=31.304..189.269 rows=10 loops=1)\n Hash Cond: ((folder.id)::text = (tracker.id)::text)\n -> Hash Join (cost=8.28..11923.31 rows=155 width=65) (actual time=3.195..161.619 rows=612 loops=1)\n Hash Cond: ((folder.project_id)::text = (project.id)::text)\n -> Seq Scan on folder (cost=0.00..10858.71 rows=281271 width=32) (actual time=0.017..75.451 rows=280356 loops=1)\n -> Hash (cost=8.27..8.27 rows=1 width=42) (actual time=0.041..0.041 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using project_path on project (cost=0.00..8.27 rows=1 width=42) (actual time=0.038..0.039 rows=\n1 loops=1)\n Index Cond: ((path)::text = 'projects.psr-pub-13'::text)\n -> Hash (cost=129.48..129.48 rows=4648 width=27) (actual time=27.439..27.439 rows=4648 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 267kB\n -> Seq Scan on tracker (cost=0.00..129.48 rows=4648 width=27) (actual time=19.880..25.635 rows=4648 loops=1)\n -> Hash (cost=5527.28..5527.28 rows=32000 width=50) (actual time=788.470..788.470 rows=31183 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 2662kB\n -> Bitmap Heap Scan on artifact (cost=628.28..5527.28 rows=32000 width=50) (actual time=83.568..771.651 rows=31183 loops=1)\n Recheck Cond: (priority = 3)\n -> Bitmap Index Scan on artifact_priority (cost=0.00..620.28 rows=32000 width=0) (actual time=82.877..82.877 rows=31260 loops=1\n)\n Index Cond: (priority = 3)\n -> Hash (cost=404.33..404.33 rows=5733 width=32) (actual time=4.232..4.232 rows=5733 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 364kB\n -> Seq Scan on sfuser sfuser2 (cost=0.00..404.33 rows=5733 width=32) (actual time=0.006..1.941 rows=5733 loops=1)\n -> Hash (cost=404.33..404.33 rows=5733 width=32) (actual time=257.485..257.485 rows=5733 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 364kB\n -> Seq Scan on sfuser (cost=0.00..404.33 rows=5733 width=32) (actual time=9.555..255.085 rows=5733 loops=1)\n SubPlan 1\n -> Index Scan using mntr_subscr_user on mntr_subscription (cost=0.00..8.47 rows=1 width=9) (actual time=0.013..0.013 rows=0 loops=180)\n Index Cond: ((($0)::text = (object_key)::text) AND ((user_id)::text = 'user1439'::text)) Total runtime: 2666.011 ms\n\n\neffective_cache_size | 512MB\n\nThanks,\nAnne\n\n--------------------------------------------------------------\n\nAnne,\n\nSo, this shows that original execution plan (with nested loops) was not that bad.\n\nStill, considering your hardware config (and I think, you mentioned it's a \"dedicated\" database server), I'd set:\n\nbuffer_cache = 3GB\neffective_cache_size = 7GB\n\nbuffer_cache is \"global\" setting and requires Postgres restart after changing in postgresql.conf.\n\nBut, besides postgres config parameters, it seems your particular problem is in correct setting of conection pooler.\n\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 May 2013 17:53:41 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Deterioration in performance when query executed in multi threads"
}
] |
[
{
"msg_contents": "I have two tables that are nearly identical, yet the same query runs 100x\nslower on the newer one. The two tables have the same number of rows (+/-\nabout 1%), and are roughly the same size:\n\ndb=> SELECT relname AS table_name,\ndb-> pg_size_pretty(pg_relation_size(oid)) AS table_size,\ndb-> pg_size_pretty(pg_total_relation_size(oid)) AS total_size\ndb-> FROM pg_class\ndb-> WHERE relkind in ('r','i')\ndb-> ORDER BY pg_relation_size(oid) DESC;\n table_name | table_size | total_size\n----------------------------------------+------------+------------\n old_str_conntab | 26 GB | 27 GB\n str_conntab | 20 GB | 20 GB\n\nBoth tables have a single index, the primary key. The new table has\nseveral more columns, but they're mostly empty (note that the new table is\nSMALLER, yet it is 100x slower).\n\nI've already tried \"reindex table ...\" and \"analyze table\". No difference.\n\nThis is running on PG 8.4.17 and Ubuntu 10.04. Data is in a RAID10 (8\ndisks), and WAL is on a RAID1, both controlled by an LSI 3WARE 9650SE-12ML\nwith BBU.\n\nIf I re-run the same query, both the old and new tables drop to about 35\nmsec. But the question is, why is the initial query so fast on the old\ntable, and so slow on the new table? I have three other servers with\nsimilar or identical hardware/software, and this happens on all of them,\nincluding on a 9.1.2 version of Postgres.\n\nThanks in advance...\nCraig\n\n\ndb=> explain analyze select id, 1 from str_conntab\nwhere (id >= 12009977 and id <= 12509976) order by id;\n\n Index Scan using new_str_conntab_pkey_3217 on str_conntab\n (cost=0.00..230431.33 rows=87827 width=4)\n (actual time=65.771..51341.899 rows=48613 loops=1)\n Index Cond: ((id >= 12009977) AND (id <= 12509976))\n Total runtime: 51350.556 ms\n\ndb=> explain analyze select id, 1 from old_str_conntab\nwhere (id >= 12009977 and id <= 12509976) order by id;\n\n Index Scan using str_conntab_pkey on old_str_conntab\n (cost=0.00..82262.56 rows=78505 width=4)\n (actual time=38.327..581.235 rows=48725 loops=1)\n Index Cond: ((id >= 12009977) AND (id <= 12509976))\n Total runtime: 586.071 ms\n\ndb=> \\d str_conntab\n Table \"registry.str_conntab\"\n Column | Type | Modifiers\n------------------+---------+-----------\n id | integer | not null\n contab_len | integer |\n contab_data | text |\n orig_contab_len | integer |\n orig_contab_data | text |\n normalized | text |\nIndexes:\n \"new_str_conntab_pkey_3217\" PRIMARY KEY, btree (id)\nReferenced by:\n TABLE \"parent\" CONSTRAINT \"fk_parent_str_conntab_parent_id_3217\"\nFOREIGN KEY (parent_id) REFERENCES str_conntab(id)\n TABLE \"version\" CONSTRAINT \"fk_version_str_conntab_version_id_3217\"\nFOREIGN KEY (version_id) REFERENCES str_conntab(id)\n\ndb=> \\d old_str_conntab\n Table \"registry.old_str_conntab\"\n Column | Type | Modifiers\n-------------+---------+-----------\n id | integer | not null\n contab_len | integer |\n contab_data | text |\nIndexes:\n \"str_conntab_pkey\" PRIMARY KEY, btree (id)\n\nI have two tables that are nearly identical, yet the same query runs 100x slower on the newer one. 
",
"msg_date": "Wed, 1 May 2013 16:35:33 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "100x slowdown for nearly identical tables"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> I have two tables that are nearly identical, yet the same query runs 100x\n> slower on the newer one. ...\n\n> db=> explain analyze select id, 1 from str_conntab\n> where (id >= 12009977 and id <= 12509976) order by id;\n\n> Index Scan using new_str_conntab_pkey_3217 on str_conntab\n> (cost=0.00..230431.33 rows=87827 width=4)\n> (actual time=65.771..51341.899 rows=48613 loops=1)\n> Index Cond: ((id >= 12009977) AND (id <= 12509976))\n> Total runtime: 51350.556 ms\n\n> db=> explain analyze select id, 1 from old_str_conntab\n> where (id >= 12009977 and id <= 12509976) order by id;\n\n> Index Scan using str_conntab_pkey on old_str_conntab\n> (cost=0.00..82262.56 rows=78505 width=4)\n> (actual time=38.327..581.235 rows=48725 loops=1)\n> Index Cond: ((id >= 12009977) AND (id <= 12509976))\n> Total runtime: 586.071 ms\n\nIt looks like old_str_conntab is more or less clustered by \"id\",\nand str_conntab not so much. You could try EXPLAIN (ANALYZE, BUFFERS)\n(on newer PG versions) to verify how many distinct pages are getting\ntouched during the indexscan.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 May 2013 20:18:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100x slowdown for nearly identical tables"
},
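The form Tom refers to, using the statement from the first post; the BUFFERS option exists from 9.0 on, so it would have to be run on the 9.1.2 box Craig mentions rather than the 8.4.17 one.

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, 1
FROM str_conntab
WHERE id >= 12009977 AND id <= 12509976
ORDER BY id;
-- The Buffers: shared hit/read figures under the index scan show how many
-- distinct pages the range actually touched.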
{
"msg_contents": "On Wed, May 1, 2013 at 5:18 PM, Tom Lane <[email protected]> wrote:\n\n> Craig James <[email protected]> writes:\n> > I have two tables that are nearly identical, yet the same query runs 100x\n> > slower on the newer one. ...\n>\n> > db=> explain analyze select id, 1 from str_conntab\n> > where (id >= 12009977 and id <= 12509976) order by id;\n>\n> > Index Scan using new_str_conntab_pkey_3217 on str_conntab\n> > (cost=0.00..230431.33 rows=87827 width=4)\n> > (actual time=65.771..51341.899 rows=48613 loops=1)\n> > Index Cond: ((id >= 12009977) AND (id <= 12509976))\n> > Total runtime: 51350.556 ms\n>\n> > db=> explain analyze select id, 1 from old_str_conntab\n> > where (id >= 12009977 and id <= 12509976) order by id;\n>\n> > Index Scan using str_conntab_pkey on old_str_conntab\n> > (cost=0.00..82262.56 rows=78505 width=4)\n> > (actual time=38.327..581.235 rows=48725 loops=1)\n> > Index Cond: ((id >= 12009977) AND (id <= 12509976))\n> > Total runtime: 586.071 ms\n>\n> It looks like old_str_conntab is more or less clustered by \"id\",\n> and str_conntab not so much. You could try EXPLAIN (ANALYZE, BUFFERS)\n> (on newer PG versions) to verify how many distinct pages are getting\n> touched during the indexscan.\n>\n\nYeah, now that you say it, it's obvious. The original table was built with\nID from a sequence, so it's going to be naturally clustered by ID. The new\ntable was built by reloading the data in alphabetical order by supplier\nname, so it would have scattered the IDs all over the place.\n\nI guess I could actually cluster the new table, but since that one table\nholds about 90% of the total data in the database, that would be a chore.\nProbably better to find a more efficient way to do the query.\n\nThanks,\nCraig\n\n\n>\n> regards, tom lane\n>\n\nOn Wed, May 1, 2013 at 5:18 PM, Tom Lane <[email protected]> wrote:\nCraig James <[email protected]> writes:\n> I have two tables that are nearly identical, yet the same query runs 100x\n> slower on the newer one. ...\n\n> db=> explain analyze select id, 1 from str_conntab\n> where (id >= 12009977 and id <= 12509976) order by id;\n\n> Index Scan using new_str_conntab_pkey_3217 on str_conntab\n> (cost=0.00..230431.33 rows=87827 width=4)\n> (actual time=65.771..51341.899 rows=48613 loops=1)\n> Index Cond: ((id >= 12009977) AND (id <= 12509976))\n> Total runtime: 51350.556 ms\n\n> db=> explain analyze select id, 1 from old_str_conntab\n> where (id >= 12009977 and id <= 12509976) order by id;\n\n> Index Scan using str_conntab_pkey on old_str_conntab\n> (cost=0.00..82262.56 rows=78505 width=4)\n> (actual time=38.327..581.235 rows=48725 loops=1)\n> Index Cond: ((id >= 12009977) AND (id <= 12509976))\n> Total runtime: 586.071 ms\n\nIt looks like old_str_conntab is more or less clustered by \"id\",\nand str_conntab not so much. You could try EXPLAIN (ANALYZE, BUFFERS)\n(on newer PG versions) to verify how many distinct pages are getting\ntouched during the indexscan.Yeah, now that you say it, it's obvious. The original table was built with ID from a sequence, so it's going to be naturally clustered by ID. The new table was built by reloading the data in alphabetical order by supplier name, so it would have scattered the IDs all over the place.\nI guess I could actually cluster the new table, but since that one table holds about 90% of the total data in the database, that would be a chore. Probably better to find a more efficient way to do the query.\nThanks,Craig \n\n regards, tom lane",
"msg_date": "Wed, 1 May 2013 17:45:11 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100x slowdown for nearly identical tables"
},
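A sketch of how the clustering diagnosis can be confirmed from the statistics, and of the heavyweight fix: pg_stats.correlation sits near 1.0 when heap order follows the column and near 0 when rows are scattered, and CLUSTER rewrites the whole table under an exclusive lock, so on a 20 GB table it means a real maintenance window.

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('str_conntab', 'old_str_conntab')
  AND attname = 'id';

-- Physically reorder the new table by its primary key (index name taken
-- from the \d output in the first post), then refresh statistics:
CLUSTER str_conntab USING new_str_conntab_pkey_3217;
ANALYZE str_conntab;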
{
"msg_contents": "Craig James <[email protected]> writes:\n> On Wed, May 1, 2013 at 5:18 PM, Tom Lane <[email protected]> wrote:\n>> It looks like old_str_conntab is more or less clustered by \"id\",\n>> and str_conntab not so much. You could try EXPLAIN (ANALYZE, BUFFERS)\n>> (on newer PG versions) to verify how many distinct pages are getting\n>> touched during the indexscan.\n\n> Yeah, now that you say it, it's obvious. The original table was built with\n> ID from a sequence, so it's going to be naturally clustered by ID. The new\n> table was built by reloading the data in alphabetical order by supplier\n> name, so it would have scattered the IDs all over the place.\n\n> I guess I could actually cluster the new table, but since that one table\n> holds about 90% of the total data in the database, that would be a chore.\n> Probably better to find a more efficient way to do the query.\n\nJust out of curiosity, you could try forcing a bitmap indexscan to see\nhow much that helps. The planner evidently thinks \"not at all\", but\nit's been wrong before ;-)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 May 2013 21:51:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100x slowdown for nearly identical tables"
}
] |
[
{
"msg_contents": "Hello,\n\nWe are in the fortunate situation of having more money than time to help solve our PostgreSQL 9.1 performance problem.\n\nOur server hosts databases that are about 1 GB in size with the largest tables having order 10 million 20-byte indexed records. The data are loaded once and then read from a web app and other client programs. Some of the queries execute ORDER BY on the results. There are typically less than a dozen read-only concurrent connections to any one database.\n\nSELECTs for data are taking 10s of seconds. We'd like to reduce this to web app acceptable response times (less than 1 second). If this is successful then the size of the database will grow by a factor of ten - we will still want sub-second response times. We are in the process of going through the excellent suggestions in the \"PostgreSQL 9.0 High Performance\" book to identify the bottleneck (we have reasonable suspicions that we are I/O bound), but would also like to place an order soon for the dedicated server which will host the production databases. Here are the specs of a server that we are considering with a budget of $13k US:\n\nHP ProLiant DL360p Gen 8\nDual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n64GB RAM\n2x146GB 15K SAS hard drives\n3x200GB SATA SLC SSDs\n+ the usual accessories (optical drive, rail kit, dual power supplies)\n\nOpinions?\n\nThanks in advance for any suggestions you have.\n\n-Mike\n\n--\nMike McCann\nSoftware Engineer\nMonterey Bay Aquarium Research Institute\n7700 Sandholdt Road\nMoss Landing, CA 95039-9644\nVoice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\n\n\nHello,We are in the fortunate situation of having more money than time to help solve our PostgreSQL 9.1 performance problem.Our server hosts databases that are about 1 GB in size with the largest tables having order 10 million 20-byte indexed records. The data are loaded once and then read from a web app and other client programs. Some of the queries execute ORDER BY on the results. There are typically less than a dozen read-only concurrent connections to any one database.SELECTs for data are taking 10s of seconds. We'd like to reduce this to web app acceptable response times (less than 1 second). If this is successful then the size of the database will grow by a factor of ten - we will still want sub-second response times. We are in the process of going through the excellent suggestions in the \"PostgreSQL 9.0 High Performance\" book to identify the bottleneck (we have reasonable suspicions that we are I/O bound), but would also like to place an order soon for the dedicated server which will host the production databases. Here are the specs of a server that we are considering with a budget of $13k US:HP ProLiant DL360p Gen 8Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs64GB RAM2x146GB 15K SAS hard drives3x200GB SATA SLC SSDs+ the usual accessories (optical drive, rail kit, dual power supplies)Opinions?Thanks in advance for any suggestions you have.-Mike\n--Mike McCannSoftware EngineerMonterey Bay Aquarium Research Institute7700 Sandholdt RoadMoss Landing, CA 95039-9644Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org",
"msg_date": "Thu, 2 May 2013 16:11:15 -0700",
"msg_from": "Mike McCann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware suggestions for maximum read performance"
},
{
"msg_contents": "On Thu, May 2, 2013 at 5:11 PM, Mike McCann <[email protected]> wrote:\n> Hello,\n>\n> We are in the fortunate situation of having more money than time to help\n> solve our PostgreSQL 9.1 performance problem.\n>\n> Our server hosts databases that are about 1 GB in size with the largest\n> tables having order 10 million 20-byte indexed records. The data are loaded\n> once and then read from a web app and other client programs. Some of the\n> queries execute ORDER BY on the results. There are typically less than a\n> dozen read-only concurrent connections to any one database.\n>\n> SELECTs for data are taking 10s of seconds. We'd like to reduce this to web\n> app acceptable response times (less than 1 second). If this is successful\n> then the size of the database will grow by a factor of ten - we will still\n> want sub-second response times. We are in the process of going through the\n> excellent suggestions in the \"PostgreSQL 9.0 High Performance\" book to\n> identify the bottleneck (we have reasonable suspicions that we are I/O\n> bound), but would also like to place an order soon for the dedicated server\n> which will host the production databases. Here are the specs of a server\n> that we are considering with a budget of $13k US:\n>\n> HP ProLiant DL360p Gen 8\n> Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> 64GB RAM\n> 2x146GB 15K SAS hard drives\n> 3x200GB SATA SLC SSDs\n> + the usual accessories (optical drive, rail kit, dual power supplies)\n\nIf your DB is 1G, and will grow to 10G then the IO shouldn't be any\nproblem, as the whole db should be cached in memory. I'd look at\nwhether or not you've got good query plans or not, and tuning them.\nThings like setting random_cost to 1.something might be a good start,\nand cranking up work mem to ~16M or so.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 May 2013 19:35:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
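A minimal sketch of trying the two settings Scott mentions in a single session before touching postgresql.conf; the exact values and the query are illustrative:

SET random_page_cost = 1.1;
SET work_mem = '16MB';
EXPLAIN ANALYZE SELECT * FROM some_large_table ORDER BY some_column;   -- placeholder query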
{
"msg_contents": "3x200GB suggests you want to use RAID5?\n\nPerhaps you should just pick 2x200GB and set them to RAID1. With roughly \n200GB of storage, that should still easily house your \"potentially \n10GB\"-database with ample of room to allow the SSD's to balance the \nwrites. But you save the investment and its probably a bit faster with \nwrites (although your raid-card may reduce or remove the differences \nwith your workload).\n\nYou can then either keep the money or invest in faster cpu's. With few \nconcurrent connections the E5-2643 (also a quad core, but with 3.3GHz \ncores rather than 2.4GHz) may be interesting.\nIts obviously a bit of speculation to see whether that would help, but \nit should speed up sorts and other in-memory/cpu-operations (even if \nyou're not - and never will be - cpu-bound right now).\n\nBest regards,\n\nArjen\n\nOn 3-5-2013 1:11 Mike McCann wrote:\n> Hello,\n>\n> We are in the fortunate situation of having more money than time to help\n> solve our PostgreSQL 9.1 performance problem.\n>\n> Our server hosts databases that are about 1 GB in size with the largest\n> tables having order 10 million 20-byte indexed records. The data are\n> loaded once and then read from a web app and other client programs.\n> Some of the queries execute ORDER BY on the results. There are\n> typically less than a dozen read-only concurrent connections to any one\n> database.\n>\n> SELECTs for data are taking 10s of seconds. We'd like to reduce this to\n> web app acceptable response times (less than 1 second). If this is\n> successful then the size of the database will grow by a factor of ten -\n> we will still want sub-second response times. We are in the process of\n> going through the excellent suggestions in the \"PostgreSQL 9.0 High\n> Performance\" book to identify the bottleneck (we have reasonable\n> suspicions that we are I/O bound), but would also like to place an order\n> soon for the dedicated server which will host the production databases.\n> Here are the specs of a server that we are considering with a budget of\n> $13k US:\n>\n> HP ProLiant DL360p Gen 8\n> Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> 64GB RAM\n> 2x146GB 15K SAS hard drives\n> 3x200GB SATA SLC SSDs\n> + the usual accessories (optical drive, rail kit, dual power supplies)\n>\n> Opinions?\n>\n> Thanks in advance for any suggestions you have.\n>\n> -Mike\n>\n> --\n> Mike McCann\n> Software Engineer\n> Monterey Bay Aquarium Research Institute\n> 7700 Sandholdt Road\n> Moss Landing, CA 95039-9644\n> Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 03 May 2013 08:16:54 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
{
"msg_contents": "Note that with linux (and a few other OSes) you can use RAID-1E\nhttp://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID_1E\nwith an odd number of drives.\n\nOn Fri, May 3, 2013 at 12:16 AM, Arjen van der Meijden\n<[email protected]> wrote:\n> 3x200GB suggests you want to use RAID5?\n>\n> Perhaps you should just pick 2x200GB and set them to RAID1. With roughly\n> 200GB of storage, that should still easily house your \"potentially\n> 10GB\"-database with ample of room to allow the SSD's to balance the writes.\n> But you save the investment and its probably a bit faster with writes\n> (although your raid-card may reduce or remove the differences with your\n> workload).\n>\n> You can then either keep the money or invest in faster cpu's. With few\n> concurrent connections the E5-2643 (also a quad core, but with 3.3GHz cores\n> rather than 2.4GHz) may be interesting.\n> Its obviously a bit of speculation to see whether that would help, but it\n> should speed up sorts and other in-memory/cpu-operations (even if you're not\n> - and never will be - cpu-bound right now).\n>\n> Best regards,\n>\n> Arjen\n>\n>\n> On 3-5-2013 1:11 Mike McCann wrote:\n>>\n>> Hello,\n>>\n>> We are in the fortunate situation of having more money than time to help\n>> solve our PostgreSQL 9.1 performance problem.\n>>\n>> Our server hosts databases that are about 1 GB in size with the largest\n>> tables having order 10 million 20-byte indexed records. The data are\n>> loaded once and then read from a web app and other client programs.\n>> Some of the queries execute ORDER BY on the results. There are\n>> typically less than a dozen read-only concurrent connections to any one\n>> database.\n>>\n>> SELECTs for data are taking 10s of seconds. We'd like to reduce this to\n>> web app acceptable response times (less than 1 second). If this is\n>> successful then the size of the database will grow by a factor of ten -\n>> we will still want sub-second response times. We are in the process of\n>> going through the excellent suggestions in the \"PostgreSQL 9.0 High\n>> Performance\" book to identify the bottleneck (we have reasonable\n>> suspicions that we are I/O bound), but would also like to place an order\n>> soon for the dedicated server which will host the production databases.\n>> Here are the specs of a server that we are considering with a budget of\n>> $13k US:\n>>\n>> HP ProLiant DL360p Gen 8\n>> Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n>> 64GB RAM\n>> 2x146GB 15K SAS hard drives\n>> 3x200GB SATA SLC SSDs\n>> + the usual accessories (optical drive, rail kit, dual power supplies)\n>>\n>> Opinions?\n>>\n>> Thanks in advance for any suggestions you have.\n>>\n>> -Mike\n>>\n>> --\n>> Mike McCann\n>> Software Engineer\n>> Monterey Bay Aquarium Research Institute\n>> 7700 Sandholdt Road\n>> Moss Landing, CA 95039-9644\n>> Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\n>>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n--\nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 May 2013 04:39:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
{
"msg_contents": "On 05/03/2013 01:11, Mike McCann wrote:\n> Hello,\n>\n\nHello,\n\n> We are in the fortunate situation of having more money than time to \n> help solve our PostgreSQL 9.1 performance problem.\n>\n> Our server hosts databases that are about 1 GB in size with the \n> largest tables having order 10 million 20-byte indexed records. The \n> data are loaded once and then read from a web app and other client \n> programs. Some of the queries execute ORDER BY on the results. There \n> are typically less than a dozen read-only concurrent connections to \n> any one database.\n>\n\nI would first check the spurious queries .. 10 millions rows isn't that \nhuge. Perhaps you could paste your queries and an explain analyze of \nthem ..? You could also log slow queries and use the auto_explain module\n\n> SELECTs for data are taking 10s of seconds. We'd like to reduce this \n> to web app acceptable response times (less than 1 second). If this is \n> successful then the size of the database will grow by a factor of ten \n> - we will still want sub-second response times. We are in the process \n> of going through the excellent suggestions in the \"PostgreSQL 9.0 High \n> Performance\" book to identify the bottleneck (we have reasonable \n> suspicions that we are I/O bound), but would also like to place an \n> order soon for the dedicated server which will host the production \n> databases. Here are the specs of a server that we are considering with \n> a budget of $13k US:\n>\n> HP ProLiant DL360p Gen 8\n> Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> 64GB RAM\n> 2x146GB 15K SAS hard drives\n> 3x200GB SATA SLC SSDs\n> + the usual accessories (optical drive, rail kit, dual power supplies)\n>\n> Opinions?\n>\n> Thanks in advance for any suggestions you have.\n>\n> -Mike\n>\n> --\n> Mike McCann\n> Software Engineer\n> Monterey Bay Aquarium Research Institute\n> 7700 Sandholdt Road\n> Moss Landing, CA 95039-9644\n> Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n\n\n\n\n\nOn 05/03/2013 01:11, Mike McCann wrote:\n\nHello,\n \n\n\n\n Hello,\n\n\nWe are in the fortunate situation of having more money than\n time to help solve our PostgreSQL 9.1 performance problem.\n\n\nOur server hosts databases that are about 1 GB in size with\n the largest tables having order 10 million 20-byte indexed\n records. The data are loaded once and then read from a web app\n and other client programs. Some of the queries execute ORDER BY\n on the results. There are typically less than a dozen read-only\n concurrent connections to any one database.\n\n\n\n\n I would first check the spurious queries .. 10 millions rows isn't\n that huge. Perhaps you could paste your queries and an explain\n analyze of them ..? You could also log slow queries and use the\n auto_explain module\n\n\nSELECTs for data are taking 10s of seconds. We'd like to\n reduce this to web app acceptable response times (less than 1\n second). If this is successful then the size of the database\n will grow by a factor of ten - we will still want sub-second\n response times. We are in the process of going through the\n excellent suggestions in the \"PostgreSQL 9.0 High Performance\"\n book to identify the bottleneck (we have reasonable suspicions\n that we are I/O bound), but would also like to place an order\n soon for the dedicated server which will host the production\n databases. 
Here are the specs of a server that we are\n considering with a budget of $13k US:\n\n\n\n\nHP ProLiant DL360p Gen 8\n\n\nDual Intel Xeon 2.4GHz 4-core E5-2609\n CPUs\n\n\n64GB RAM\n\n\n2x146GB 15K SAS hard drives\n\n\n3x200GB SATA SLC SSDs\n\n\n+ the usual accessories (optical\n drive, rail kit, dual power supplies)\n\n\n\n\nOpinions?\n\n\n Thanks in advance for any suggestions you have.\n \n -Mike\n \n\n\n--\n Mike McCann\n Software Engineer\n Monterey Bay Aquarium Research Institute\n 7700 Sandholdt Road\n Moss Landing, CA 95039-9644\n Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\n\n\n\n\n\n\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Fri, 03 May 2013 14:49:40 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
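A sketch of the slow-query logging and auto_explain setup Julien suggests; the 1s thresholds are illustrative. auto_explain is a contrib module, so it is either preloaded in postgresql.conf or LOADed per session by a superuser (on 9.1, setting its parameters in postgresql.conf may also require custom_variable_classes = 'auto_explain'):

-- postgresql.conf:
--   shared_preload_libraries = 'auto_explain'
--   log_min_duration_statement = 1000          # log any statement slower than 1s
--   auto_explain.log_min_duration = '1s'
--   auto_explain.log_analyze = on
-- or, for a single session:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s';
SET auto_explain.log_analyze = on;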
{
"msg_contents": "Mike,\n\nAccording to your budget the following or similar might be useful for\nyou:\n\nHP 365GB Multi Level Cell G2 PCIe ioDrive2 for ProLiant Servers\n\n \n\nThis PCIe card-based direct-attach solid state storage technology\nsolutions for application performance enhancement. I believe you can\nfind cheaper solutions on the market that will provide same performance\ncharacteristics (935,000 write IOPS, up to 892,000 read IOPS, up to 3\nGB/s Bandwidth). \n\n \n\n \n\nSincerely yours,\n\n \n\n \n\nYuri Levinsky, DBA\n\nCelltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\n\nMobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222\n\n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mike McCann\nSent: Friday, May 03, 2013 2:11 AM\nTo: [email protected]\nSubject: [PERFORM] Hardware suggestions for maximum read performance\n\n \n\nHello,\n\n \n\nWe are in the fortunate situation of having more money than time to help\nsolve our PostgreSQL 9.1 performance problem.\n\n \n\nOur server hosts databases that are about 1 GB in size with the largest\ntables having order 10 million 20-byte indexed records. The data are\nloaded once and then read from a web app and other client programs.\nSome of the queries execute ORDER BY on the results. There are typically\nless than a dozen read-only concurrent connections to any one database.\n\n \n\nSELECTs for data are taking 10s of seconds. We'd like to reduce this to\nweb app acceptable response times (less than 1 second). If this is\nsuccessful then the size of the database will grow by a factor of ten -\nwe will still want sub-second response times. We are in the process of\ngoing through the excellent suggestions in the \"PostgreSQL 9.0 High\nPerformance\" book to identify the bottleneck (we have reasonable\nsuspicions that we are I/O bound), but would also like to place an order\nsoon for the dedicated server which will host the production databases.\nHere are the specs of a server that we are considering with a budget of\n$13k US:\n\n \n\n\tHP ProLiant DL360p Gen 8\n\n\tDual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n\n\t64GB RAM\n\n\t2x146GB 15K SAS hard drives\n\n\t3x200GB SATA SLC SSDs\n\n\t+ the usual accessories (optical drive, rail kit, dual power\nsupplies)\n\n\t \n\nOpinions?\n\n \n\nThanks in advance for any suggestions you have.\n\n\n-Mike\n\n \n\n--\nMike McCann\nSoftware Engineer\nMonterey Bay Aquarium Research Institute\n7700 Sandholdt Road\nMoss Landing, CA 95039-9644\nVoice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org \n\n \n\n\n\nThis mail was received via Mail-SeCure System.\n\n=",
"msg_date": "Mon, 6 May 2013 09:51:02 +0300",
"msg_from": "\"Yuri Levinsky\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
{
"msg_contents": "On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, May 2, 2013 at 5:11 PM, Mike McCann <[email protected]> wrote:\n> > Hello,\n> >\n> > We are in the fortunate situation of having more money than time to help\n> > solve our PostgreSQL 9.1 performance problem.\n> >\n> > Our server hosts databases that are about 1 GB in size with the largest\n> > tables having order 10 million 20-byte indexed records. The data are\n> loaded\n> > once and then read from a web app and other client programs. Some of the\n> > queries execute ORDER BY on the results. There are typically less than a\n> > dozen read-only concurrent connections to any one database.\n>\n\nI wouldn't count on this being a problem that can be fixed merely by\nthrowing money at it.\n\nHow many rows does any one of these queries need to access and then ORDER\nBY?\n\n...\n\n>\n> > HP ProLiant DL360p Gen 8\n> > Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> > 64GB RAM\n> > 2x146GB 15K SAS hard drives\n> > 3x200GB SATA SLC SSDs\n> > + the usual accessories (optical drive, rail kit, dual power supplies)\n>\n> If your DB is 1G, and will grow to 10G then the IO shouldn't be any\n> problem, as the whole db should be cached in memory.\n\n\n\nBut it can take a surprisingly long time to get it cached in the first\nplace, from a cold start.\n\nIf that is the problem, pg_prewarm could help.\n\n\nCheers,\n\nJeff\n\nOn Thu, May 2, 2013 at 6:35 PM, Scott Marlowe <[email protected]> wrote:\nOn Thu, May 2, 2013 at 5:11 PM, Mike McCann <[email protected]> wrote:\n\n> Hello,\n>\n> We are in the fortunate situation of having more money than time to help\n> solve our PostgreSQL 9.1 performance problem.\n>\n> Our server hosts databases that are about 1 GB in size with the largest\n> tables having order 10 million 20-byte indexed records. The data are loaded\n> once and then read from a web app and other client programs. Some of the\n> queries execute ORDER BY on the results. There are typically less than a\n> dozen read-only concurrent connections to any one database.I wouldn't count on this being a problem that can be fixed merely by throwing money at it.\nHow many rows does any one of these queries need to access and then ORDER BY?...\n>\n> HP ProLiant DL360p Gen 8\n> Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> 64GB RAM\n> 2x146GB 15K SAS hard drives\n> 3x200GB SATA SLC SSDs\n> + the usual accessories (optical drive, rail kit, dual power supplies)\n\nIf your DB is 1G, and will grow to 10G then the IO shouldn't be any\nproblem, as the whole db should be cached in memory.But it can take a surprisingly long time to get it cached in the first place, from a cold start.\nIf that is the problem, pg_prewarm could help. Cheers,Jeff",
"msg_date": "Tue, 7 May 2013 16:21:53 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
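pg_prewarm, as Jeff mentions, can pull a relation into cache after a cold start. It ships as a contrib extension only from 9.4 on and was an external module before that, so the first two lines are a sketch for installations that have it; the table name is taken from later posts in this thread:

CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('stoqs_measuredparameter');
-- cruder stand-in that also drags the table through the cache:
SELECT count(*) FROM stoqs_measuredparameter;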
{
"msg_contents": "On May 7, 2013, at 4:21 PM, Jeff Janes wrote:\n\n> On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, May 2, 2013 at 5:11 PM, Mike McCann <[email protected]> wrote:\n> > Hello,\n> >\n> > We are in the fortunate situation of having more money than time to help\n> > solve our PostgreSQL 9.1 performance problem.\n> >\n> > Our server hosts databases that are about 1 GB in size with the largest\n> > tables having order 10 million 20-byte indexed records. The data are loaded\n> > once and then read from a web app and other client programs. Some of the\n> > queries execute ORDER BY on the results. There are typically less than a\n> > dozen read-only concurrent connections to any one database.\n> \n> I wouldn't count on this being a problem that can be fixed merely by throwing money at it.\n> \n> How many rows does any one of these queries need to access and then ORDER BY?\n> \n> ...\n> \n> >\n> > HP ProLiant DL360p Gen 8\n> > Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> > 64GB RAM\n> > 2x146GB 15K SAS hard drives\n> > 3x200GB SATA SLC SSDs\n> > + the usual accessories (optical drive, rail kit, dual power supplies)\n> \n> If your DB is 1G, and will grow to 10G then the IO shouldn't be any\n> problem, as the whole db should be cached in memory.\n> \n> \n> But it can take a surprisingly long time to get it cached in the first place, from a cold start.\n> \n> If that is the problem, pg_prewarm could help. \n> \n> \n> Cheers,\n> \n> Jeff\n\nThank you everyone for your suggestions.\n\nIt's clear that our current read performance was not limited by hardware. An 'explain analyze' for a sample query is:\n\nstoqs_march2013_s=# show work_mem;\n work_mem \n----------\n 1MB\n(1 row)\n\nstoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=541002.15..549456.68 rows=3381814 width=20) (actual time=6254.780..7244.074 rows=3381814 loops=1)\n Sort Key: datavalue\n Sort Method: external merge Disk: 112424kB\n -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.011..354.385 rows=3381814 loops=1)\n Total runtime: 7425.854 ms\n(5 rows)\n\n\nIncreasing work_mem to 355 MB improves the performance by a factor of 2:\n\nstoqs_march2013_s=# set work_mem='355MB';\nSET\nstoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1)\n Sort Key: datavalue\n Sort Method: quicksort Memory: 362509kB\n -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)\n Total runtime: 3094.601 ms\n(5 rows)\n\nI tried changing random_page_cost to from 4 to 1 and saw no change.\n\nI'm wondering now what changes might get this query to run in less than one second. 
If all the data is in memory, then will faster CPU and memory be the things that help?\n\nWe have an alternate (a bit more conventional) server configuration that we are considering:\n\nHP ProLiant DL360p Gen 8\nDual Intel Xeon 3.3GHz 4-core E5-2643 CPUs\n128GB PC3-12800 RAM\n16x146GB 15K SAS hard drives\nHP Smart Array P822/2GB FBWC controller + P420i w/ 2GB FBWC\n+ the usual accessories (optical drive, rail kit, dual power supplies)\n\n\nAll suggestions welcomed!\n\n-Mike\n\n--\nMike McCann\nSoftware Engineer\nMonterey Bay Aquarium Research Institute\n7700 Sandholdt Road\nMoss Landing, CA 95039-9644\nVoice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\n\n\nOn May 7, 2013, at 4:21 PM, Jeff Janes wrote:On Thu, May 2, 2013 at 6:35 PM, Scott Marlowe <[email protected]> wrote:\nOn Thu, May 2, 2013 at 5:11 PM, Mike McCann <[email protected]> wrote:\n\n> Hello,\n>\n> We are in the fortunate situation of having more money than time to help\n> solve our PostgreSQL 9.1 performance problem.\n>\n> Our server hosts databases that are about 1 GB in size with the largest\n> tables having order 10 million 20-byte indexed records. The data are loaded\n> once and then read from a web app and other client programs. Some of the\n> queries execute ORDER BY on the results. There are typically less than a\n> dozen read-only concurrent connections to any one database.I wouldn't count on this being a problem that can be fixed merely by throwing money at it.\nHow many rows does any one of these queries need to access and then ORDER BY?...\n>\n> HP ProLiant DL360p Gen 8\n> Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs\n> 64GB RAM\n> 2x146GB 15K SAS hard drives\n> 3x200GB SATA SLC SSDs\n> + the usual accessories (optical drive, rail kit, dual power supplies)\n\nIf your DB is 1G, and will grow to 10G then the IO shouldn't be any\nproblem, as the whole db should be cached in memory.But it can take a surprisingly long time to get it cached in the first place, from a cold start.\nIf that is the problem, pg_prewarm could help. Cheers,Jeff\nThank you everyone for your suggestions.It's clear that our current read performance was not limited by hardware. 
An 'explain analyze' for a sample query is:stoqs_march2013_s=# show work_mem; work_mem ---------- 1MB(1 row)stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ Sort (cost=541002.15..549456.68 rows=3381814 width=20) (actual time=6254.780..7244.074 rows=3381814 loops=1) Sort Key: datavalue Sort Method: external merge Disk: 112424kB -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.011..354.385 rows=3381814 loops=1) Total runtime: 7425.854 ms(5 rows)Increasing work_mem to 355 MB improves the performance by a factor of 2:stoqs_march2013_s=# set work_mem='355MB';SETstoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1) Sort Key: datavalue Sort Method: quicksort Memory: 362509kB -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1) Total runtime: 3094.601 ms(5 rows)I tried changing random_page_cost to from 4 to 1 and saw no change.I'm wondering now what changes might get this query to run in less than one second. If all the data is in memory, then will faster CPU and memory be the things that help?We have an alternate (a bit more conventional) server configuration that we are considering:HP ProLiant DL360p Gen 8Dual Intel Xeon 3.3GHz 4-core E5-2643 CPUs128GB PC3-12800 RAM16x146GB 15K SAS hard drivesHP Smart Array P822/2GB FBWC controller + P420i w/ 2GB FBWC+ the usual accessories (optical drive, rail kit, dual power supplies)All suggestions welcomed!-Mike--Mike McCannSoftware EngineerMonterey Bay Aquarium Research Institute7700 Sandholdt RoadMoss Landing, CA 95039-9644Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org",
"msg_date": "Mon, 13 May 2013 15:36:03 -0700",
"msg_from": "Mike McCann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
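A sketch of carrying the work_mem experiment above beyond a single psql session. work_mem is allocated per sort/hash node per backend, so a value this large is only reasonable with the handful of connections described earlier; the role name is hypothetical:

ALTER ROLE stoqs_webapp SET work_mem = '355MB';   -- hypothetical role the web app connects as
-- or server-wide in postgresql.conf:  work_mem = 355MB
SELECT pg_reload_conf();                          -- pick up the postgresql.conf change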
{
"msg_contents": "On Mon, May 13, 2013 at 3:36 PM, Mike McCann <[email protected]> wrote:\n\n>\n> Increasing work_mem to 355 MB improves the performance by a factor of 2:\n>\n> stoqs_march2013_s=# set work_mem='355MB';\n> SET\n> stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter\n> order by datavalue;\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual\n> time=2503.078..2937.130 rows=3381814 loops=1)\n> Sort Key: datavalue\n> Sort Method: quicksort Memory: 362509kB\n> -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14\n> rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)\n> Total runtime: 3094.601 ms\n> (5 rows)\n>\n>\n> I tried changing random_page_cost to from 4 to 1 and saw no change.\n>\n> I'm wondering now what changes might get this query to run in less than\n> one second.\n>\n\n\nI think you are worrying about the wrong thing here. What is a web app\ngoing to do with 3,381,814 rows, once it obtains them? Your current\ntesting is not testing the time it takes to stream that data to the client,\nor for the client to do something meaningful with that data.\n\nIf you only plan to actually fetch a few dozen of those rows, then you\nprobably need to incorporate that into your test, either by using a LIMIT,\nor by using a mock-up of the actual application to do some timings.\n\nAlso, what is the type and collation of the column you are sorting on?\n non-'C' collations of text columns sort about 3 times slower than 'C'\ncollation does.\n\n\n\n> If all the data is in memory, then will faster CPU and memory be the\n> things that help?\n>\n\nYes, those would help (it is not clear to me which of the two would help\nmore), but I think you need to rethink your design of sending the entire\ndatabase table to the application server for each page-view.\n\n\nCheers,\n\nJeff\n\nOn Mon, May 13, 2013 at 3:36 PM, Mike McCann <[email protected]> wrote:\n\nIncreasing work_mem to 355 MB improves the performance by a factor of 2:stoqs_march2013_s=# set work_mem='355MB';\nSETstoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue;\n QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1) Sort Key: datavalue\n Sort Method: quicksort Memory: 362509kB -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)\n Total runtime: 3094.601 ms(5 rows)I tried changing random_page_cost to from 4 to 1 and saw no change.\nI'm wondering now what changes might get this query to run in less than one second. I think you are worrying about the wrong thing here. What is a web app going to do with 3,381,814 rows, once it obtains them? Your current testing is not testing the time it takes to stream that data to the client, or for the client to do something meaningful with that data.\nIf you only plan to actually fetch a few dozen of those rows, then you probably need to incorporate that into your test, either by using a LIMIT, or by using a mock-up of the actual application to do some timings.\nAlso, what is the type and collation of the column you are sorting on? 
non-'C' collations of text columns sort about 3 times slower than 'C' collation does.\n If all the data is in memory, then will faster CPU and memory be the things that help?\nYes, those would help (it is not clear to me which of the two would help more), but I think you need to rethink your design of sending the entire database table to the application server for each page-view.\nCheers,Jeff",
"msg_date": "Mon, 13 May 2013 16:24:44 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
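A sketch of the two checks Jeff raises: time the query the way the application would actually consume it, and look at the collation that would govern any text sort keys (the next post shows datavalue is double precision, so collation only matters for text columns here):

EXPLAIN ANALYZE
SELECT * FROM stoqs_measuredparameter ORDER BY datavalue LIMIT 50;   -- page-sized fetch

SELECT datname, datcollate, datctype
FROM pg_database WHERE datname = current_database();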
{
"msg_contents": "On May 13, 2013, at 4:24 PM, Jeff Janes wrote:\n\n> On Mon, May 13, 2013 at 3:36 PM, Mike McCann <[email protected]> wrote:\n> \n> Increasing work_mem to 355 MB improves the performance by a factor of 2:\n> \n> stoqs_march2013_s=# set work_mem='355MB';\n> SET\n> stoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1)\n> Sort Key: datavalue\n> Sort Method: quicksort Memory: 362509kB\n> -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)\n> Total runtime: 3094.601 ms\n> (5 rows)\n> \n> I tried changing random_page_cost to from 4 to 1 and saw no change.\n> \n> I'm wondering now what changes might get this query to run in less than one second. \n> \n> \n> I think you are worrying about the wrong thing here. What is a web app going to do with 3,381,814 rows, once it obtains them? Your current testing is not testing the time it takes to stream that data to the client, or for the client to do something meaningful with that data.\n> \n> If you only plan to actually fetch a few dozen of those rows, then you probably need to incorporate that into your test, either by using a LIMIT, or by using a mock-up of the actual application to do some timings.\n> \n> Also, what is the type and collation of the column you are sorting on? non-'C' collations of text columns sort about 3 times slower than 'C' collation does.\n> \n> \n> If all the data is in memory, then will faster CPU and memory be the things that help?\n> \n> Yes, those would help (it is not clear to me which of the two would help more), but I think you need to rethink your design of sending the entire database table to the application server for each page-view.\n> \n> \n> Cheers,\n> \n> Jeff\n\nHi Jeff,\n\nThe datavalue column is double precision:\n\nstoqs_march2013_s=# \\d+ stoqs_measuredparameter\n Table \"public.stoqs_measuredparameter\"\n Column | Type | Modifiers | Storage | Description \n----------------+------------------+----------------------------------------------------------------------+---------+-------------\n id | integer | not null default nextval('stoqs_measuredparameter_id_seq'::regclass) | plain | \n measurement_id | integer | not null | plain | \n parameter_id | integer | not null | plain | \n datavalue | double precision | not null | plain | \nIndexes:\n \"stoqs_measuredparameter_pkey\" PRIMARY KEY, btree (id)\n \"stoqs_measuredparameter_measurement_id_parameter_id_key\" UNIQUE CONSTRAINT, btree (measurement_id, parameter_id)\n \"stoqs_measuredparameter_datavalue\" btree (datavalue)\n \"stoqs_measuredparameter_measurement_id\" btree (measurement_id)\n \"stoqs_measuredparameter_parameter_id\" btree (parameter_id)\nForeign-key constraints:\n \"stoqs_measuredparameter_measurement_id_fkey\" FOREIGN KEY (measurement_id) REFERENCES stoqs_measurement(id) DEFERRABLE INITIALLY DEFERRED\n \"stoqs_measuredparameter_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES stoqs_parameter(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\n\nThanks for the suggestion and advice to examine the web app performance. We've actually taken quite a few steps to optimize how the web app works. 
The example query I provided is a simple worst-case one that we can use to help us decide on the proper hardware. An actual query performed by the web app is:\n\nstoqs_march2013_s=# explain analyze SELECT stoqs_measuredparameter.id,\nstoqs_march2013_s-# stoqs_parameter.name AS parameter__name,\nstoqs_march2013_s-# stoqs_parameter.standard_name AS parameter__standard_name,\nstoqs_march2013_s-# stoqs_measurement.depth AS measurement__depth,\nstoqs_march2013_s-# stoqs_measurement.geom AS measurement__geom,\nstoqs_march2013_s-# stoqs_instantpoint.timevalue AS measurement__instantpoint__timevalue,\nstoqs_march2013_s-# stoqs_platform.name AS measurement__instantpoint__activity__platform__name,\nstoqs_march2013_s-# stoqs_measuredparameter.datavalue AS datavalue,\nstoqs_march2013_s-# stoqs_parameter.units AS parameter__units\nstoqs_march2013_s-# FROM stoqs_parameter p1,\nstoqs_march2013_s-# stoqs_measuredparameter\nstoqs_march2013_s-# INNER JOIN stoqs_measurement ON (stoqs_measuredparameter.measurement_id = stoqs_measurement.id)\nstoqs_march2013_s-# INNER JOIN stoqs_instantpoint ON (stoqs_measurement.instantpoint_id = stoqs_instantpoint.id)\nstoqs_march2013_s-# INNER JOIN stoqs_parameter ON (stoqs_measuredparameter.parameter_id = stoqs_parameter.id)\nstoqs_march2013_s-# INNER JOIN stoqs_activity ON (stoqs_instantpoint.activity_id = stoqs_activity.id)\nstoqs_march2013_s-# INNER JOIN stoqs_platform ON (stoqs_activity.platform_id = stoqs_platform.id)\nstoqs_march2013_s-# INNER JOIN stoqs_measuredparameter mp1 ON mp1.measurement_id = stoqs_measuredparameter.measurement_id\nstoqs_march2013_s-# WHERE (p1.name = 'sea_water_sigma_t')\nstoqs_march2013_s-# AND (mp1.datavalue > 25.19)\nstoqs_march2013_s-# AND (mp1.datavalue < 26.01)\nstoqs_march2013_s-# AND (mp1.parameter_id = p1.id)\nstoqs_march2013_s-# AND (stoqs_instantpoint.timevalue <= '2013-03-17 19:05:06'\nstoqs_march2013_s(# AND stoqs_instantpoint.timevalue >= '2013-03-17 15:35:13'\nstoqs_march2013_s(# AND stoqs_parameter.name IN ('fl700_uncorr')\nstoqs_march2013_s(# AND stoqs_measurement.depth >= -1.88\nstoqs_march2013_s(# AND stoqs_platform.name IN ('dorado')\nstoqs_march2013_s(# AND stoqs_measurement.depth <= 83.57)\nstoqs_march2013_s-# ORDER BY stoqs_activity.name ASC, stoqs_instantpoint.timevalue ASC;\n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n--------------------------\n Sort (cost=10741.41..10741.42 rows=1 width=1282) (actual time=770.211..770.211 rows=0 loops=1)\n Sort Key: stoqs_activity.name, stoqs_instantpoint.timevalue\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=3002.89..10741.40 rows=1 width=1282) (actual time=770.200..770.200 rows=0 loops=1)\n Hash Cond: (stoqs_instantpoint.activity_id = stoqs_activity.id)\n -> Nested Loop (cost=2983.69..10722.19 rows=3 width=954) (actual time=770.036..770.036 rows=0 loops=1)\n -> Nested Loop (cost=2983.69..9617.36 rows=191 width=946) (actual time=91.369..680.072 rows=20170 loops=1)\n -> Hash Join (cost=2983.69..8499.07 rows=193 width=842) (actual time=91.346..577.633 rows=20170 loops=1)\n Hash Cond: (stoqs_measuredparameter.parameter_id = stoqs_parameter.id)\n -> Nested Loop (cost=2982.38..8478.47 rows=4628 width=24) (actual time=91.280..531.408 rows=197746 loops=1)\n -> Nested Loop (cost=2982.38..4862.37 rows=512 width=4) (actual time=91.202..116.140 rows=20170 loops=1)\n -> Seq Scan on stoqs_parameter p1 (cost=0.00..1.30 rows=1 
width=4) (actual time=0.002..0.011 rows=1 loops=1)\n Filter: ((name)::text = 'sea_water_sigma_t'::text)\n -> Bitmap Heap Scan on stoqs_measuredparameter mp1 (cost=2982.38..4854.40 rows=534 width=8) (actual time=91.194..109.846 rows=20170 loop\ns=1)\n Recheck Cond: ((datavalue > 25.19::double precision) AND (datavalue < 26.01::double precision) AND (parameter_id = p1.id))\n -> BitmapAnd (cost=2982.38..2982.38 rows=534 width=0) (actual time=90.794..90.794 rows=0 loops=1)\n -> Bitmap Index Scan on stoqs_measuredparameter_datavalue (cost=0.00..259.54 rows=12292 width=0) (actual time=62.769..62.769\n rows=23641 loops=1)\n Index Cond: ((datavalue > 25.19::double precision) AND (datavalue < 26.01::double precision))\n -> Bitmap Index Scan on stoqs_measuredparameter_parameter_id (cost=0.00..2719.38 rows=147035 width=0) (actual time=27.412..2\n7.412 rows=34750 loops=1)\n Index Cond: (parameter_id = p1.id)\n -> Index Scan using stoqs_measuredparameter_measurement_id on stoqs_measuredparameter (cost=0.00..6.98 rows=7 width=20) (actual time=0.008..0.\n017 rows=10 loops=20170)\n Index Cond: (measurement_id = mp1.measurement_id)\n -> Hash (cost=1.30..1.30 rows=1 width=826) (actual time=0.012..0.012 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Seq Scan on stoqs_parameter (cost=0.00..1.30 rows=1 width=826) (actual time=0.007..0.010 rows=1 loops=1)\n Filter: ((name)::text = 'fl700_uncorr'::text)\n -> Index Scan using stoqs_measurement_pkey on stoqs_measurement (cost=0.00..5.78 rows=1 width=116) (actual time=0.004..0.004 rows=1 loops=20170)\n Index Cond: (id = stoqs_measuredparameter.measurement_id)\n Filter: ((depth >= (-1.88)::double precision) AND (depth <= 83.57::double precision))\n -> Index Scan using stoqs_instantpoint_pkey on stoqs_instantpoint (cost=0.00..5.77 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=20170)\n Index Cond: (id = stoqs_measurement.instantpoint_id)\n Filter: ((timevalue <= '2013-03-17 19:05:06-07'::timestamp with time zone) AND (timevalue >= '2013-03-17 15:35:13-07'::timestamp with time zone))\n -> Hash (cost=18.82..18.82 rows=30 width=336) (actual time=0.151..0.151 rows=7 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Hash Join (cost=1.09..18.82 rows=30 width=336) (actual time=0.035..0.145 rows=7 loops=1)\n Hash Cond: (stoqs_activity.platform_id = stoqs_platform.id)\n -> Seq Scan on stoqs_activity (cost=0.00..16.77 rows=177 width=66) (actual time=0.005..0.069 rows=177 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 width=278) (actual time=0.014..0.014 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Seq Scan on stoqs_platform (cost=0.00..1.07 rows=1 width=278) (actual time=0.008..0.012 rows=1 loops=1)\n Filter: ((name)::text = 'dorado'::text)\n Total runtime: 770.445 ms\n(42 rows)\n\n\nWe assume that steps taken to improve the worst-case query scenario will also improve these kind of queries. 
If anything above pops out as needing better planning please let us know that too!\n\nThanks,\nMike\n\n--\nMike McCann\nSoftware Engineer\nMonterey Bay Aquarium Research Institute\n7700 Sandholdt Road\nMoss Landing, CA 95039-9644\nVoice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org\nOn May 13, 2013, at 4:24 PM, Jeff Janes wrote:On Mon, May 13, 2013 at 3:36 PM, Mike McCann <[email protected]> wrote:\n\nIncreasing work_mem to 355 MB improves the performance by a factor of 2:stoqs_march2013_s=# set work_mem='355MB';\nSETstoqs_march2013_s=# explain analyze select * from stoqs_measuredparameter order by datavalue;\n QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual time=2503.078..2937.130 rows=3381814 loops=1) Sort Key: datavalue\n Sort Method: quicksort Memory: 362509kB -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14 rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)\n Total runtime: 3094.601 ms(5 rows)I tried changing random_page_cost to from 4 to 1 and saw no change.\nI'm wondering now what changes might get this query to run in less than one second. I think you are worrying about the wrong thing here. What is a web app going to do with 3,381,814 rows, once it obtains them? Your current testing is not testing the time it takes to stream that data to the client, or for the client to do something meaningful with that data.\nIf you only plan to actually fetch a few dozen of those rows, then you probably need to incorporate that into your test, either by using a LIMIT, or by using a mock-up of the actual application to do some timings.\nAlso, what is the type and collation of the column you are sorting on? non-'C' collations of text columns sort about 3 times slower than 'C' collation does.\n If all the data is in memory, then will faster CPU and memory be the things that help?\nYes, those would help (it is not clear to me which of the two would help more), but I think you need to rethink your design of sending the entire database table to the application server for each page-view.\nCheers,Jeff\nHi Jeff,The datavalue column is double precision:stoqs_march2013_s=# \\d+ stoqs_measuredparameter Table \"public.stoqs_measuredparameter\" Column | Type | Modifiers | Storage | Description ----------------+------------------+----------------------------------------------------------------------+---------+------------- id | integer | not null default nextval('stoqs_measuredparameter_id_seq'::regclass) | plain | measurement_id | integer | not null | plain | parameter_id | integer | not null | plain | datavalue | double precision | not null | plain | Indexes: \"stoqs_measuredparameter_pkey\" PRIMARY KEY, btree (id) \"stoqs_measuredparameter_measurement_id_parameter_id_key\" UNIQUE CONSTRAINT, btree (measurement_id, parameter_id) \"stoqs_measuredparameter_datavalue\" btree (datavalue) \"stoqs_measuredparameter_measurement_id\" btree (measurement_id) \"stoqs_measuredparameter_parameter_id\" btree (parameter_id)Foreign-key constraints: \"stoqs_measuredparameter_measurement_id_fkey\" FOREIGN KEY (measurement_id) REFERENCES stoqs_measurement(id) DEFERRABLE INITIALLY DEFERRED \"stoqs_measuredparameter_parameter_id_fkey\" FOREIGN KEY (parameter_id) REFERENCES stoqs_parameter(id) DEFERRABLE INITIALLY DEFERREDHas OIDs: noThanks for the suggestion and advice to examine the web app performance. 
We've actually taken quite a few steps to optimize how the web app works. The example query I provided is a simple worst-case one that we can use to help us decide on the proper hardware. An actual query performed by the web app is:stoqs_march2013_s=# explain analyze SELECT stoqs_measuredparameter.id,stoqs_march2013_s-# stoqs_parameter.name AS parameter__name,stoqs_march2013_s-# stoqs_parameter.standard_name AS parameter__standard_name,stoqs_march2013_s-# stoqs_measurement.depth AS measurement__depth,stoqs_march2013_s-# stoqs_measurement.geom AS measurement__geom,stoqs_march2013_s-# stoqs_instantpoint.timevalue AS measurement__instantpoint__timevalue,stoqs_march2013_s-# stoqs_platform.name AS measurement__instantpoint__activity__platform__name,stoqs_march2013_s-# stoqs_measuredparameter.datavalue AS datavalue,stoqs_march2013_s-# stoqs_parameter.units AS parameter__unitsstoqs_march2013_s-# FROM stoqs_parameter p1,stoqs_march2013_s-# stoqs_measuredparameterstoqs_march2013_s-# INNER JOIN stoqs_measurement ON (stoqs_measuredparameter.measurement_id = stoqs_measurement.id)stoqs_march2013_s-# INNER JOIN stoqs_instantpoint ON (stoqs_measurement.instantpoint_id = stoqs_instantpoint.id)stoqs_march2013_s-# INNER JOIN stoqs_parameter ON (stoqs_measuredparameter.parameter_id = stoqs_parameter.id)stoqs_march2013_s-# INNER JOIN stoqs_activity ON (stoqs_instantpoint.activity_id = stoqs_activity.id)stoqs_march2013_s-# INNER JOIN stoqs_platform ON (stoqs_activity.platform_id = stoqs_platform.id)stoqs_march2013_s-# INNER JOIN stoqs_measuredparameter mp1 ON mp1.measurement_id = stoqs_measuredparameter.measurement_idstoqs_march2013_s-# WHERE (p1.name = 'sea_water_sigma_t')stoqs_march2013_s-# AND (mp1.datavalue > 25.19)stoqs_march2013_s-# AND (mp1.datavalue < 26.01)stoqs_march2013_s-# AND (mp1.parameter_id = p1.id)stoqs_march2013_s-# AND (stoqs_instantpoint.timevalue <= '2013-03-17 19:05:06'stoqs_march2013_s(# AND stoqs_instantpoint.timevalue >= '2013-03-17 15:35:13'stoqs_march2013_s(# AND stoqs_parameter.name IN ('fl700_uncorr')stoqs_march2013_s(# AND stoqs_measurement.depth >= -1.88stoqs_march2013_s(# AND stoqs_platform.name IN ('dorado')stoqs_march2013_s(# AND stoqs_measurement.depth <= 83.57)stoqs_march2013_s-# ORDER BY stoqs_activity.name ASC, stoqs_instantpoint.timevalue ASC; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort (cost=10741.41..10741.42 rows=1 width=1282) (actual time=770.211..770.211 rows=0 loops=1) Sort Key: stoqs_activity.name, stoqs_instantpoint.timevalue Sort Method: quicksort Memory: 25kB -> Hash Join (cost=3002.89..10741.40 rows=1 width=1282) (actual time=770.200..770.200 rows=0 loops=1) Hash Cond: (stoqs_instantpoint.activity_id = stoqs_activity.id) -> Nested Loop (cost=2983.69..10722.19 rows=3 width=954) (actual time=770.036..770.036 rows=0 loops=1) -> Nested Loop (cost=2983.69..9617.36 rows=191 width=946) (actual time=91.369..680.072 rows=20170 loops=1) -> Hash Join (cost=2983.69..8499.07 rows=193 width=842) (actual time=91.346..577.633 rows=20170 loops=1) Hash Cond: (stoqs_measuredparameter.parameter_id = stoqs_parameter.id) -> Nested Loop (cost=2982.38..8478.47 rows=4628 width=24) (actual time=91.280..531.408 rows=197746 loops=1) -> Nested Loop (cost=2982.38..4862.37 rows=512 width=4) (actual time=91.202..116.140 rows=20170 loops=1) -> Seq Scan on stoqs_parameter p1 (cost=0.00..1.30 rows=1 width=4) (actual 
time=0.002..0.011 rows=1 loops=1) Filter: ((name)::text = 'sea_water_sigma_t'::text) -> Bitmap Heap Scan on stoqs_measuredparameter mp1 (cost=2982.38..4854.40 rows=534 width=8) (actual time=91.194..109.846 rows=20170 loops=1) Recheck Cond: ((datavalue > 25.19::double precision) AND (datavalue < 26.01::double precision) AND (parameter_id = p1.id)) -> BitmapAnd (cost=2982.38..2982.38 rows=534 width=0) (actual time=90.794..90.794 rows=0 loops=1) -> Bitmap Index Scan on stoqs_measuredparameter_datavalue (cost=0.00..259.54 rows=12292 width=0) (actual time=62.769..62.769 rows=23641 loops=1) Index Cond: ((datavalue > 25.19::double precision) AND (datavalue < 26.01::double precision)) -> Bitmap Index Scan on stoqs_measuredparameter_parameter_id (cost=0.00..2719.38 rows=147035 width=0) (actual time=27.412..27.412 rows=34750 loops=1) Index Cond: (parameter_id = p1.id) -> Index Scan using stoqs_measuredparameter_measurement_id on stoqs_measuredparameter (cost=0.00..6.98 rows=7 width=20) (actual time=0.008..0.017 rows=10 loops=20170) Index Cond: (measurement_id = mp1.measurement_id) -> Hash (cost=1.30..1.30 rows=1 width=826) (actual time=0.012..0.012 rows=1 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 1kB -> Seq Scan on stoqs_parameter (cost=0.00..1.30 rows=1 width=826) (actual time=0.007..0.010 rows=1 loops=1) Filter: ((name)::text = 'fl700_uncorr'::text) -> Index Scan using stoqs_measurement_pkey on stoqs_measurement (cost=0.00..5.78 rows=1 width=116) (actual time=0.004..0.004 rows=1 loops=20170) Index Cond: (id = stoqs_measuredparameter.measurement_id) Filter: ((depth >= (-1.88)::double precision) AND (depth <= 83.57::double precision)) -> Index Scan using stoqs_instantpoint_pkey on stoqs_instantpoint (cost=0.00..5.77 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=20170) Index Cond: (id = stoqs_measurement.instantpoint_id) Filter: ((timevalue <= '2013-03-17 19:05:06-07'::timestamp with time zone) AND (timevalue >= '2013-03-17 15:35:13-07'::timestamp with time zone)) -> Hash (cost=18.82..18.82 rows=30 width=336) (actual time=0.151..0.151 rows=7 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 1kB -> Hash Join (cost=1.09..18.82 rows=30 width=336) (actual time=0.035..0.145 rows=7 loops=1) Hash Cond: (stoqs_activity.platform_id = stoqs_platform.id) -> Seq Scan on stoqs_activity (cost=0.00..16.77 rows=177 width=66) (actual time=0.005..0.069 rows=177 loops=1) -> Hash (cost=1.07..1.07 rows=1 width=278) (actual time=0.014..0.014 rows=1 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 1kB -> Seq Scan on stoqs_platform (cost=0.00..1.07 rows=1 width=278) (actual time=0.008..0.012 rows=1 loops=1) Filter: ((name)::text = 'dorado'::text) Total runtime: 770.445 ms(42 rows)We assume that steps taken to improve the worst-case query scenario will also improve these kind of queries. If anything above pops out as needing better planning please let us know that too!Thanks,Mike--Mike McCannSoftware EngineerMonterey Bay Aquarium Research Institute7700 Sandholdt RoadMoss Landing, CA 95039-9644Voice: 831.775.1769 Fax: 831.775.1736 http://www.mbari.org",
"msg_date": "Mon, 13 May 2013 16:58:17 -0700",
"msg_from": "Mike McCann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
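Not something suggested in the thread, but worth noting from the plan above: the mp1 scan BitmapAnds two single-column indexes (datavalue and parameter_id). A composite index covering that predicate is one cheap thing to test; the index name here is made up:

CREATE INDEX CONCURRENTLY stoqs_mp_parameter_datavalue
    ON stoqs_measuredparameter (parameter_id, datavalue);
ANALYZE stoqs_measuredparameter;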
{
"msg_contents": "On Mon, May 13, 2013 at 5:58 PM, Mike McCann <[email protected]> wrote:\n\n> We assume that steps taken to improve the worst-case query scenario will\n> also improve these kind of queries. If anything above pops out as needing\n> better planning please let us know that too!\n\nBad assumption. If your real workload will be queries like the one\nhere that takes 700 ms, but you'll be running 10,000 of them a second,\nyou're tuning / hardware choices are going to be much different then\nif your query is going to be the previous 7 second one. Use realistic\nqueries, not ones that are nothing like what your real ones will be.\nthen use pgbench and its ability to run custom sql scripts to get a\nREAL idea how your hardware performs. Note that if you will run the\nslow query you posted like once a minute and roll it up or cache it\nthen don't get too worried about it. Pay attention to the queries that\nwill add up, in aggregate, to your greatest load.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 19:09:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
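A sketch of the pgbench approach Scott describes: put a representative query in a script with randomized parameters and drive it with several clients. The file name and parameter range are illustrative, and \setrandom is the variable syntax of pgbench in the 9.1 era:

$ cat realistic.sql
\setrandom lo 2519 2595
SELECT count(*) FROM stoqs_measuredparameter
 WHERE datavalue > :lo / 100.0 AND datavalue < (:lo + 82) / 100.0;
$ pgbench -n -c 8 -T 60 -f realistic.sql stoqs_march2013_s

The reported tps, not the runtime of any single query, is what to compare across hardware or configuration changes.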
{
"msg_contents": "On 5/13/13 6:36 PM, Mike McCann wrote:\n> stoqs_march2013_s=# explain analyze select * from\n> stoqs_measuredparameter order by datavalue;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual\n> time=2503.078..2937.130 rows=3381814 loops=1)\n> Sort Key: datavalue\n> Sort Method: quicksort Memory: 362509kB\n> -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14\n> rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814 loops=1)\n> Total runtime: 3094.601 ms\n> (5 rows)\n>\n> I tried changing random_page_cost to from 4 to 1 and saw no change.\n\nHave you tried putting an index by datavalue on this table? Once you've \ndone that, then changing random_page_cost will make using that index \nlook less expensive. Sorting chews through a good bit of CPU time, and \nthat's where all of your runtime is being spent at--once you increase \nwork_mem up very high that is.\n\n> I'm wondering now what changes might get this query to run in less than\n> one second. If all the data is in memory, then will faster CPU and\n> memory be the things that help?\n\nYou're trying to fix a fundamental design issue with hardware. That \nusually doesn't go well. Once you get a box big enough to hold the \nwhole database in RAM, beyond that the differences between server \nsystems are relatively small.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 May 2013 22:44:16 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
},
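The \d+ output earlier in the thread shows the datavalue index Greg asks about already exists, so the remaining piece is making the planner prefer it for a query that does not need every row; the cost value and LIMIT are illustrative:

SET random_page_cost = 1.1;
EXPLAIN ANALYZE
SELECT * FROM stoqs_measuredparameter ORDER BY datavalue LIMIT 1000;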
{
"msg_contents": "On Sun, May 19, 2013 at 8:44 PM, Greg Smith <[email protected]> wrote:\n> On 5/13/13 6:36 PM, Mike McCann wrote:\n>>\n>> stoqs_march2013_s=# explain analyze select * from\n>> stoqs_measuredparameter order by datavalue;\n>>\n>> QUERY PLAN\n>>\n>> ------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=422106.15..430560.68 rows=3381814 width=20) (actual\n>> time=2503.078..2937.130 rows=3381814 loops=1)\n>> Sort Key: datavalue\n>> Sort Method: quicksort Memory: 362509kB\n>> -> Seq Scan on stoqs_measuredparameter (cost=0.00..55359.14\n>> rows=3381814 width=20) (actual time=0.016..335.745 rows=3381814\n>> loops=1)\n>> Total runtime: 3094.601 ms\n>> (5 rows)\n>>\n>> I tried changing random_page_cost to from 4 to 1 and saw no change.\n>\n>\n> Have you tried putting an index by datavalue on this table? Once you've\n> done that, then changing random_page_cost will make using that index look\n> less expensive. Sorting chews through a good bit of CPU time, and that's\n> where all of your runtime is being spent at--once you increase work_mem up\n> very high that is.\n\nThis++ plus cluster on that index if you can.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 May 2013 21:57:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware suggestions for maximum read performance"
}
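A sketch of the CLUSTER step Scott adds. It rewrites the whole table in index order and holds an exclusive lock while it runs, which fits the load-once, read-many pattern described at the top of the thread; re-running ANALYZE afterwards keeps the statistics current:

CLUSTER stoqs_measuredparameter USING stoqs_measuredparameter_datavalue;
ANALYZE stoqs_measuredparameter;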
] |
[
{
"msg_contents": "- I have a problem with some files on a postgresql 9.0 on windows:\n\n2013-05-10 12:49:08 EDT ERROR: could not stat file \"base/773074/30352481\": Permission denied\n2013-05-10 12:49:08 EDT STATEMENT: SELECT pg_database_size($1) AS size;\n\nI know what does it means: the statistic pooler can`t access the file.\nIt is a only database server without antivirus (but on a windows cluster machine)\n\n- on disk, the file is shown as a 0-octet file, and there is no security tab when I try to get information.\nIt looks like this file has been created, opened, and not yet close (or written)\n\n- when I try to get more information on the file with `oid2name` it is unable to give me information:\nS:\\PostgreSQL\\9.0\\data\\base>\"C:\\Program Files\\PostgreSQL\\9.0\\bin\\oid2name.exe\" -\nU postgres -d mydb -f 30352481\nPassword:\n From database \"lxcal\":\n Filenode Table Name\n----------------------\n\nCertainly because the pg_stat worker can access it, so don`t have info on it?\n\nI tried also:\n select * from pg_class where oid=30352481;\nbut didn't got anything\n\n- This same file is owned by a postgresql backend thread (with `process explorer`) I see that the file is owned by a postgresql --forkbackend with pid 3520\nI tried to see what the 3520 process is doing. It is in \"<IDLE>\"\nIt is not statistic worker (it is not \"postgresql --backcol\")\n\nI thought it was maybe a file locked, so I check pg_locks with:\n\nselect pg_class.relname, pg_locks.virtualtransaction, pg_locks.mode, pg_locks.granted as \"g\",\nsubstr(pg_stat_activity.current_query,1,30), pg_stat_activity.query_start,\nage(now(),pg_stat_activity.query_start) as \"age\", pg_stat_activity.procpid\nfrom pg_stat_activity,pg_locks\nleft outer join pg_class on\n(pg_locks.relation = pg_class.oid)\nwhere pg_locks.pid=pg_stat_activity.procpid\norder by query_start;\n\nmy process (pid 3520) is not listed has having lock.\n\nHow can I debug to know what is going on?\nThis message may contain confidential and privileged information. If it has been sent to you in error, please reply to advise the sender of the error and then immediately delete it. If you are not the intended recipient, do not read, copy, disclose or otherwise use this message. The sender disclaims any liability for such unauthorized use. PLEASE NOTE that all incoming e-mails sent to Weatherford e-mail accounts will be archived and may be scanned by us and/or by external service providers to detect and prevent threats to our systems, investigate illegal or inappropriate behavior, and/or eliminate unsolicited promotional e-mails (spam). This process could result in deletion of a legitimate e-mail before it is read by its intended recipient at our organization. Moreover, based on the scanning results, the full text of e-mails and attachments may be made available to Weatherford security and other personnel for review and appropriate action. 
If you have any concerns about this process, please contact us at [email protected].\n\n\n\n\n\n\n\n\n- I have a problem with some files on a postgresql 9.0 on windows:\n \n2013-05-10 12:49:08 EDT ERROR: could not stat file \"base/773074/30352481\": Permission denied\n2013-05-10 12:49:08 EDT STATEMENT: SELECT pg_database_size($1) AS size;\n \nI know what does it means: the statistic pooler can`t access the file.\nIt is a only database server without antivirus (but on a windows cluster machine)\n \n- on disk, the file is shown as a 0-octet file, and there is no security tab when I try to get information.\nIt looks like this file has been created, opened, and not yet close (or written)\n \n- when I try to get more information on the file with `oid2name` it is unable to give me information:\nS:\\PostgreSQL\\9.0\\data\\base>\"C:\\Program Files\\PostgreSQL\\9.0\\bin\\oid2name.exe\" -\nU postgres -d mydb -f 30352481\nPassword:\nFrom database \"lxcal\":\n Filenode Table Name\n----------------------\n \nCertainly because the pg_stat worker can access it, so don`t have info on it?\n \nI tried also:\n select * from pg_class where oid=30352481;\nbut didn't got anything\n \n- This same file is owned by a postgresql backend thread (with `process explorer`) I see that the file is owned by a postgresql --forkbackend with pid 3520\nI tried to see what the 3520 process is doing. It is in \"<IDLE>\"\nIt is not statistic worker (it is not \"postgresql --backcol\")\n \nI thought it was maybe a file locked, so I check pg_locks with:\n \nselect pg_class.relname, pg_locks.virtualtransaction, pg_locks.mode, pg_locks.granted as \"g\",\nsubstr(pg_stat_activity.current_query,1,30), pg_stat_activity.query_start,\nage(now(),pg_stat_activity.query_start) as \"age\", pg_stat_activity.procpid\nfrom pg_stat_activity,pg_locks\nleft outer join pg_class on\n(pg_locks.relation = pg_class.oid)\nwhere pg_locks.pid=pg_stat_activity.procpid\norder by query_start;\n \nmy process (pid 3520) is not listed has having lock.\n \nHow can I debug to know what is going on?\n\nThis message may contain confidential and privileged information. If it has been sent to you in error, please reply to advise the sender of the error and then immediately delete it. If you are not the intended recipient, do not read, copy, disclose or otherwise\n use this message. The sender disclaims any liability for such unauthorized use. PLEASE NOTE that all incoming e-mails sent to Weatherford e-mail accounts will be archived and may be scanned by us and/or by external service providers to detect and prevent threats\n to our systems, investigate illegal or inappropriate behavior, and/or eliminate unsolicited promotional e-mails (spam). This process could result in deletion of a legitimate e-mail before it is read by its intended recipient at our organization. Moreover,\n based on the scanning results, the full text of e-mails and attachments may be made available to Weatherford security and other personnel for review and appropriate action. If you have any concerns about this process, please contact us at [email protected].",
"msg_date": "Mon, 13 May 2013 13:05:38 +0000",
"msg_from": "\"Desbiens, Eric\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lock and pg_stat"
},
{
"msg_contents": "On Mon, May 13, 2013 at 9:05 AM, Desbiens, Eric <[email protected]> wrote:\n> I tried also:\n>\n> select * from pg_class where oid=30352481;\n>\n> but didn't got anything\n\nYou probably want where relfilenode=30352481.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 08:44:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock and pg_stat"
}
] |
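A minimal sketch of Robert Haas's suggestion above, using the filenode from the error message in the thread; note that pg_class is per-database, so the query has to be run while connected to the database whose directory holds the file.

    -- Map the on-disk file base/<dboid>/30352481 back to a relation.
    -- relfilenode (the file name on disk) can differ from oid after
    -- TRUNCATE, REINDEX, VACUUM FULL and similar rewrites.
    SELECT oid, relname, relkind
      FROM pg_class
     WHERE relfilenode = 30352481;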
[
{
"msg_contents": "I have a unique constraint on two columns of a supermassive table (est. 1.7\nbn rows) that are the only way the table's ever queried - and it's\nblindingly fast: 51ms to retrieve any single row even non-partitioned.\n\nAnyway: Right now statistics on the two unique constrained columns are set\nto 200 each (database-wide default is 100), and what I'm wondering is, since\nthe unique constraint already covers the whole table and all rows in\nentirety, is it really necessary for statistics to be set that high on\nthose? Or does that only serve to slow down inserts to that table? \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/statistics-target-for-columns-in-unique-constraint-tp5755256.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 08:01:33 -0700 (PDT)",
"msg_from": "ach <[email protected]>",
"msg_from_op": true,
"msg_subject": "statistics target for columns in unique constraint?"
},
{
"msg_contents": "On Mon, May 13, 2013 at 6:01 PM, ach <[email protected]> wrote:\n> what I'm wondering is, since\n> the unique constraint already covers the whole table and all rows in\n> entirety, is it really necessary for statistics to be set that high on\n> those?\n\nAFAIK if there are exact-matching unique constraints/indexes for a\nquery's WHERE clause, the planner will deduce that the query only\nreturns 1 row and won't consult statistics at all.\n\n> Or does that only serve to slow down inserts to that table?\n\nIt doesn't slow down inserts directly. Tables are analyzed in the\nbackground by autovacuum. However, I/O traffic from autovacuum analyze\nmay slow down inserts running concurrently.\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 01:10:06 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: statistics target for columns in unique constraint?"
},
{
"msg_contents": "On 14/05/13 10:10, Marti Raudsepp wrote:\n> On Mon, May 13, 2013 at 6:01 PM, ach <[email protected]> wrote:\n>> what I'm wondering is, since\n>> the unique constraint already covers the whole table and all rows in\n>> entirety, is it really necessary for statistics to be set that high on\n>> those?\n>\n> AFAIK if there are exact-matching unique constraints/indexes for a\n> query's WHERE clause, the planner will deduce that the query only\n> returns 1 row and won't consult statistics at all.\n>\n>> Or does that only serve to slow down inserts to that table?\n>\n> It doesn't slow down inserts directly. Tables are analyzed in the\n> background by autovacuum. However, I/O traffic from autovacuum analyze\n> may slow down inserts running concurrently.\n>\n>\n\nA higher number in stats target means larger stats structures - which in \nturn means that the planning stage of *all* queries may be impacted - \ne.g takes up more memory, slightly slower as these larger structures are \nread, iterated over, free'd etc.\n\nSo if your only access is via a defined unique key, then (as Marti \nsuggests) - a large setting for stats target would seem to be unnecessary.\n\nIf you have access to a test environment I'd recommend you model the \neffect of reducing stats target down (back to the default of 100 or even \nto the old version default of 10).\n\nA little - paranoia - maybe switch on statement logging and ensure that \nthere are no *other* ways this table is accessed...the fact that the \nnumber was cranked up from the default is a little suspicious!\n\nRegards\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 10:42:33 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: statistics target for columns in unique constraint?"
},
{
"msg_contents": "Thanks guys! I'm gonna try tuning the statistics back down to 10 on that\ntable and see what that does to the insertion rates. Oh and for Mark: Not\nto worry - i'd actually tuned the stats there up myself awhile ago in an\nexperiment to see if -that- would've sped insertions some; back before i'd\nhad enough mileage on postgres for it to have occurred to me that might just\nhave been useless ;-)\n\nOne quick follow up since I'm expecting y'all might know: Do the statistics\ntargets actually speed performance on an index search itself; the actual\nlookup? Or are the JUST to inform the planner towards the best pathway\ndecision? In other words if I have statistics set to 1000, say, in one\ncase, and the planner chose the exact same path it would have if they'd just\nbeen set to 100, would the lookup return faster when the stats were at 1000? \nOr would it actually take the same time either way? My hunch is it's the\nlatter...\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/statistics-target-for-columns-in-unique-constraint-tp5755256p5756093.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 May 2013 08:35:51 -0700 (PDT)",
"msg_from": "ach <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: statistics target for columns in unique constraint?"
},
{
"msg_contents": "ach <[email protected]> wrote:\n\n> One quick follow up since I'm expecting y'all might know: Do the\n> statistics targets actually speed performance on an index search\n> itself; the actual lookup? Or are the JUST to inform the planner\n> towards the best pathway decision?\n\nSince the statistics are just a random sampling and generally not\ncompletely up-to-date, they really can't be used for anything other\nthan *estimating* relative costs in order to try to pick the best\nplan. Once a plan is chosen, its execution time is not influenced\nby the statistics. A higher statistics target can increase\nplanning time. In a complex query with many joins and many indexes\non the referenced tables, the increase in planning time can be\nsignificant. I have seen cases where blindly increasing the\ndefault statistics target resulted in planning time which was\nlonger than run time -- without any increase in plan quality.\n\nGenerally when something is configurable, it's because there can be\nbenefit to adjusting it. If there was a single setting which could\nnot be materially improved upon for some cases, we wouldn't expose\na configuration option. This is something which is not only\nglobally adjustable, you can override the setting for individual\ncolumns -- again, we don't go to the trouble of supporting that\nwithout a good reason.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 May 2013 07:12:20 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: statistics target for columns in unique constraint?"
}
] |
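A minimal sketch of the per-column statistics change discussed in the thread above; the table and column names are placeholders (the thread does not give them), and 10 is the old default that Mark Kirkwood mentions as a candidate value.

    ALTER TABLE big_table ALTER COLUMN key_a SET STATISTICS 10;
    ALTER TABLE big_table ALTER COLUMN key_b SET STATISTICS 10;
    -- Re-analyze so the smaller statistics structures take effect.
    ANALYZE big_table;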
[
{
"msg_contents": "Hello,\nI am trying to find predicate information for a given SQL query plan as\nprovided by Oracle using DBMS_XPLAN. I am looking at the EXPLAIN command\nfor getting this query plan information, with no luck so far.\n\nDoes the EXPLAIN command provide predicate information?\n\nThank you\nSameer\n\nHello,I am trying to find predicate information for a given SQL query plan as provided by Oracle using DBMS_XPLAN. I am looking at the EXPLAIN command for getting this query plan information, with no luck so far.\nDoes the EXPLAIN command provide predicate information?Thank youSameer",
"msg_date": "Tue, 14 May 2013 14:53:48 +0530",
"msg_from": "Sameer Thakur <[email protected]>",
"msg_from_op": true,
"msg_subject": "Predicate information in EXPLAIN Command"
},
{
"msg_contents": "On 14.05.2013 12:23, Sameer Thakur wrote:\n> Hello,\n> I am trying to find predicate information for a given SQL query plan as\n> provided by Oracle using DBMS_XPLAN. I am looking at the EXPLAIN command\n> for getting this query plan information, with no luck so far.\n>\n> Does the EXPLAIN command provide predicate information?\n\nSure. For example,\n\npostgres=# explain select * from a where id = 123;\n QUERY PLAN\n---------------------------------------------------\n Seq Scan on a (cost=0.00..40.00 rows=12 width=4)\n Filter: (id = 123)\n(2 rows)\n\nThe predicate is right there on the Filter line. Likewise for a join:\n\npostgres=# explain select * from a, b where a.id = b.id;\n QUERY PLAN\n-----------------------------------------------------------------\n Hash Join (cost=64.00..134.00 rows=2400 width=8)\n Hash Cond: (a.id = b.id)\n -> Seq Scan on a (cost=0.00..34.00 rows=2400 width=4)\n -> Hash (cost=34.00..34.00 rows=2400 width=4)\n -> Seq Scan on b (cost=0.00..34.00 rows=2400 width=4)\n(5 rows)\n\nThe join predicate is on the Hash Cond line.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 14 May 2013 12:29:51 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Predicate information in EXPLAIN Command"
}
] |
[
{
"msg_contents": "Hi!\n\nThis has been a pain point for quite a while. While we've had several \ndiscussions in the area, it always seems to just kinda trail off and \neventually vanish every time it comes up.\n\nA really basic example of how bad the planner is here:\n\nCREATE TABLE foo AS\nSELECT a.id, a.id % 1000 AS col_a, a.id % 1000 AS col_b\n FROM generate_series(1, 1000000) a(id);\n\nCREATE INDEX idx_foo_ab ON foo (col_a, col_b);\n\nANALYZE foo;\n\nEXPLAIN ANALYZE\nSELECT *\n FROM foo\n WHERE col_a = 50\n AND col_b = 50;\n\nIndex Scan using idx_foo_ab on foo (cost=0.00..6.35 rows=1 width=12)\n (actual time=0.030..3.643 rows=1000 loops=1)\n Index Cond: ((col_a = 50) AND (col_b = 50))\n\nHey, look! The row estimate is off by a factor of 1000. This particular \ncase doesn't suffer terribly from the mis-estimation, but others do. \nBoy, do they ever.\n\nSince there really is no fix for this aside from completely rewriting \nthe query or horribly misusing CTEs (to force sequence scans instead of \nbloated nested loops, for example), I recently tried to use some \nexisting ALTER TABLE syntax:\n\nALTER TABLE foo ALTER col_a SET (n_distinct = 1);\nALTER TABLE foo ALTER col_b SET (n_distinct = 1);\n\nThe new explain output:\n\nIndex Scan using idx_foo_ab on foo (cost=0.00..9.37 rows=2 width=12)\n (actual time=0.013..0.871 rows=1000 loops=1)\n Index Cond: ((col_a = 50) AND (col_b = 50))\n\nWell... 2 is closer to 1000 than 1 was, but it's pretty clear the \nhistogram is still the basis for the cost estimates. I'm curious what \nbenefit overriding the n_distinct pg_stats field actually gives us in \npractice.\n\nI'm worried that without an easy answer for cases like this, more DBAs \nwill abuse optimization fences to get what they want and we'll end up in \na scenario that's actually worse than query hints. Theoretically, query \nhints can be deprecated or have GUCs to remove their influence, but CTEs \nare forever, or until the next code refactor.\n\nI've seen conversations on this since at least 2005. There were even \nproposed patches every once in a while, but never any consensus. Anyone \ncare to comment?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 10:31:46 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On 15.05.2013 18:31, Shaun Thomas wrote:\n> I've seen conversations on this since at least 2005. There were even\n> proposed patches every once in a while, but never any consensus. Anyone\n> care to comment?\n\nWell, as you said, there has never been any consensus.\n\nThere are basically two pieces to the puzzle:\n\n1. What metric do you use to represent correlation between columns?\n\n2. How do use collect that statistic?\n\nBased on the prior discussions, collecting the stats seems to be tricky. \nIt's not clear for which combinations of columns it should be collected \n(all possible combinations? That explodes quickly...), or how it can be \ncollected without scanning the whole table.\n\nI think it would be pretty straightforward to use such a statistic, once \nwe have it. So perhaps we should get started by allowing the DBA to set \na correlation metric manually, and use that in the planner.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 18:52:22 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas <[email protected]>wrote:\n\n> [Inefficient plans for correlated columns] has been a pain point for quite\n> a while. While we've had several discussions in the area, it always seems\n> to just kinda trail off and eventually vanish every time it comes up.\n>\n> ...\n> Since there really is no fix for this aside from completely rewriting the\n> query or horribly misusing CTEs (to force sequence scans instead of bloated\n> nested loops, for example)...\n> I'm worried that without an easy answer for cases like this, more DBAs\n> will abuse optimization fences to get what they want and we'll end up in a\n> scenario that's actually worse than query hints. Theoretically, query hints\n> can be deprecated or have GUCs to remove their influence, but CTEs are\n> forever, or until the next code refactor.\n>\n> I've seen conversations on this since at least 2005. There were even\n> proposed patches every once in a while, but never any consensus. Anyone\n> care to comment?\n>\n\nIt's a very hard problem. There's no way you can keep statistics about all\npossible correlations since the number of possibilities is O(N^2) with the\nnumber of columns.\n\nWe have an external search engine (not part of Postgres) for a\ndomain-specific problem that discovers correlations dynamically and does\non-the-fly alterations to its search plan. Our queries are simple\nconjunctions (A & B & C ...), which makes \"planning\" easy: you can do\nindexes in any order you like. So, the system watches each index's\nperformance:\n\n 1. An index that's rarely rejecting anything gets demoted and eventually\nis discarded.\n 2. An index that's rejecting lots of stuff gets promoted.\n\nRule 1: If two indexes are strongly correlated, which ever happens to be\nfirst in the plan will do all the work, so the second one will rarely\nreject anything. By demoting it to later in the plan, it allows more\nselective indexes to be examined first. If an index ends up rejecting\nalmost nothing, it is discarded.\n\nRule 2: If an index rejects lots of stuff and you promote it to the front\nof the plan, then \"anti-correlated\" indexes (those that reject different\nthings) tend to move to the front of the plan, resulting in very efficient\nindexing.\n\nTypically, our query plans start with roughly 20-100 indexes. The nature\nof our domain-specific problem is that the indexes are highly correlated\n(but it's impossible to predict in advance; each query causes unique\ncorrelations.) Within a few thousand rows, it usually has dropped most of\nthem, and finishes the query with 5-10 indexes that are about 95% as\nselective, but much faster, than the original plan.\n\nBut there are two caveats. First, our particular query is a simple\nconjunction (A & B & C ...). It's easy to shuffle the index orders around.\n\nSecond, databases tend to be non-homogeneous. There are clusters of\nsimilar rows. If you blindly optimize by going through a table in storage\norder, you may optimize strongly for a highly similar group of rows, then\ndiscover once you get out of that section of the database that your\n\"optimization\" threw out a bunch of indexes that would be selective in\nother sections of the database. To solve this, you have to add a mechanism\nthat examines the database rows in random order. This tends to minimize\nthe problem (although in theory it could still happen).\n\nA more typical query (i.e. 
anything past a simple conjunction) would be\nmuch more difficult for a dynamic optimizer.\n\nAnd I can't see how you can expect a static planner (one that doesn't do\non-the-fly modifications to the plan) to find correlations.\n\nCraig\n\nOn Wed, May 15, 2013 at 8:31 AM, Shaun Thomas <[email protected]> wrote:\n\n[Inefficient plans for correlated columns] has been a pain point for quite a while. While we've had several discussions in the area, it always seems to just kinda trail off and eventually vanish every time it comes up.\n\n... \nSince there really is no fix for this aside from completely rewriting the query or horribly misusing CTEs (to force sequence scans instead of bloated nested loops, for example)...\nI'm worried that without an easy answer for cases like this, more DBAs will abuse optimization fences to get what they want and we'll end up in a scenario that's actually worse than query hints. Theoretically, query hints can be deprecated or have GUCs to remove their influence, but CTEs are forever, or until the next code refactor.\n\nI've seen conversations on this since at least 2005. There were even proposed patches every once in a while, but never any consensus. Anyone care to comment?\nIt's a very hard problem. There's no way you can keep statistics about all possible correlations since the number of possibilities is O(N^2) with the number of columns.We have an external search engine (not part of Postgres) for a domain-specific problem that discovers correlations dynamically and does on-the-fly alterations to its search plan. Our queries are simple conjunctions (A & B & C ...), which makes \"planning\" easy: you can do indexes in any order you like. So, the system watches each index's performance:\n 1. An index that's rarely rejecting anything gets demoted and eventually is discarded. 2. An index that's rejecting lots of stuff gets promoted.Rule 1: If two indexes are strongly correlated, which ever happens to be first in the plan will do all the work, so the second one will rarely reject anything. By demoting it to later in the plan, it allows more selective indexes to be examined first. If an index ends up rejecting almost nothing, it is discarded.\nRule 2: If an index rejects lots of stuff and you promote it to the front of the plan, then \"anti-correlated\" indexes (those that reject different things) tend to move to the front of the plan, resulting in very efficient indexing.\nTypically, our query plans start with roughly 20-100 indexes. The nature of our domain-specific problem is that the indexes are highly correlated (but it's impossible to predict in advance; each query causes unique correlations.) Within a few thousand rows, it usually has dropped most of them, and finishes the query with 5-10 indexes that are about 95% as selective, but much faster, than the original plan.\nBut there are two caveats. First, our particular query is a simple conjunction (A & B & C ...). It's easy to shuffle the index orders around.Second, databases tend to be non-homogeneous. There are clusters of similar rows. If you blindly optimize by going through a table in storage order, you may optimize strongly for a highly similar group of rows, then discover once you get out of that section of the database that your \"optimization\" threw out a bunch of indexes that would be selective in other sections of the database. To solve this, you have to add a mechanism that examines the database rows in random order. 
This tends to minimize the problem (although in theory it could still happen).\nA more typical query (i.e. anything past a simple conjunction) would be much more difficult for a dynamic optimizer.And I can't see how you can expect a static planner (one that doesn't do on-the-fly modifications to the plan) to find correlations.\nCraig",
"msg_date": "Wed, 15 May 2013 09:23:07 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On 05/15/2013 10:52 AM, Heikki Linnakangas wrote:\n\n> I think it would be pretty straightforward to use such a statistic,\n> once we have it. So perhaps we should get started by allowing the DBA\n> to set a correlation metric manually, and use that in the planner.\n\nThe planner already kinda does this with functional indexes. It would \ndefinitely be nice to override the stats with known correlations when \npossible.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 11:27:29 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "\nOn 05/15/2013 12:23 PM, Craig James wrote:\n> On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> [Inefficient plans for correlated columns] has been a pain point\n> for quite a while. While we've had several discussions in the\n> area, it always seems to just kinda trail off and eventually\n> vanish every time it comes up.\n>\n> ...\n> Since there really is no fix for this aside from completely\n> rewriting the query or horribly misusing CTEs (to force sequence\n> scans instead of bloated nested loops, for example)...\n> I'm worried that without an easy answer for cases like this, more\n> DBAs will abuse optimization fences to get what they want and\n> we'll end up in a scenario that's actually worse than query hints.\n> Theoretically, query hints can be deprecated or have GUCs to\n> remove their influence, but CTEs are forever, or until the next\n> code refactor.\n>\n> I've seen conversations on this since at least 2005. There were\n> even proposed patches every once in a while, but never any\n> consensus. Anyone care to comment?\n>\n>\n> It's a very hard problem. There's no way you can keep statistics \n> about all possible correlations since the number of possibilities is \n> O(N^2) with the number of columns.\n>\n\nI don't see why we couldn't allow the DBA to specify some small subset \nof the combinations of columns for which correlation stats would be \nneeded. I suspect in most cases no more than a handful for any given \ntable would be required.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 12:39:46 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On Wed, May 15, 2013 at 11:52 AM, Heikki Linnakangas <\[email protected]> wrote:\n\n> On 15.05.2013 18:31, Shaun Thomas wrote:\n>\n>> I've seen conversations on this since at least 2005. There were even\n>> proposed patches every once in a while, but never any consensus. Anyone\n>> care to comment?\n>>\n>\n> Well, as you said, there has never been any consensus.\n>\n> There are basically two pieces to the puzzle:\n>\n> 1. What metric do you use to represent correlation between columns?\n>\n> 2. How do use collect that statistic?\n\n\nThe option that always made the most sense to me was having folks ask\npostgres to collect the statistic by running some command that marks two\ncolumns as correlated. This could at least be a starting point.\n\nNik\n\nOn Wed, May 15, 2013 at 11:52 AM, Heikki Linnakangas <[email protected]> wrote:\nOn 15.05.2013 18:31, Shaun Thomas wrote:\n\nI've seen conversations on this since at least 2005. There were even\nproposed patches every once in a while, but never any consensus. Anyone\ncare to comment?\n\n\nWell, as you said, there has never been any consensus.\n\nThere are basically two pieces to the puzzle:\n\n1. What metric do you use to represent correlation between columns?\n\n2. How do use collect that statistic? The option that always made the most sense to me was having folks ask postgres to collect the statistic by running some command that marks two columns as correlated. This could at least be a starting point.\nNik",
"msg_date": "Wed, 15 May 2013 13:30:57 -0400",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On Wed, May 15, 2013 at 01:30:57PM -0400, Nikolas Everett wrote:\n> The option that always made the most sense to me was having folks ask\n> postgres to collect the statistic by running some command that marks two\n> columns as correlated. This could at least be a starting point.\n\nOne suggestion made in the past was to calculate these stats (provided someone\ncomes up with something worthwhile to calculate, that is) for all columns\ninvolved in a multicolumn index. That probably doesn't cover all the places\nwe'd like the planner to know stuff like this, but it's a reasonable start.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com",
"msg_date": "Wed, 15 May 2013 13:23:25 -0600",
"msg_from": "eggyknap <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On 16/05/13 04:23, Craig James wrote:\n> On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> [Inefficient plans for correlated columns] has been a pain point\n> for quite a while. While we've had several discussions in the\n> area, it always seems to just kinda trail off and eventually\n> vanish every time it comes up.\n>\n[...]\n>\n> It's a very hard problem. There's no way you can keep statistics \n> about all possible correlations since the number of possibilities is \n> O(N^2) with the number of columns.\nActually far worse: N!/(N - K)!K! summed over K=1...N, assuming the \norder of columns in the correlation is unimportant (otherwise it is N \nfactorial) - based on my hazy recollection of the relevant maths...\n\n[...]\n\nCheers,\nGavin\n\n\n\n\n\n\n\nOn 16/05/13 04:23, Craig James wrote:\n\n\nOn Wed, May 15, 2013 at 8:31 AM, Shaun\n Thomas <[email protected]>\n wrote:\n\n [Inefficient plans for correlated columns] has been a pain\n point for quite a while. While we've had several discussions\n in the area, it always seems to just kinda trail off and\n eventually vanish every time it comes up.\n\n\n\n\n[...]\n\n\n\n It's a very hard problem. There's no way you can keep\n statistics about all possible correlations since the number of\n possibilities is O(N^2) with the number of columns.\n\n\n\n Actually far worse: N!/(N - K)!K! summed over K=1...N, assuming the\n order of columns in the correlation is unimportant (otherwise it is\n N factorial) - based on my hazy recollection of the relevant\n maths...\n\n [...]\n\n Cheers,\n Gavin",
"msg_date": "Thu, 16 May 2013 08:15:02 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On 16/05/13 03:52, Heikki Linnakangas wrote:\n> On 15.05.2013 18:31, Shaun Thomas wrote:\n>> I've seen conversations on this since at least 2005. There were even\n>> proposed patches every once in a while, but never any consensus. Anyone\n>> care to comment?\n>\n> Well, as you said, there has never been any consensus.\n>\n> There are basically two pieces to the puzzle:\n>\n> 1. What metric do you use to represent correlation between columns?\n>\n> 2. How do use collect that statistic?\n>\n> Based on the prior discussions, collecting the stats seems to be \n> tricky. It's not clear for which combinations of columns it should be \n> collected (all possible combinations? That explodes quickly...), or \n> how it can be collected without scanning the whole table.\n>\n> I think it would be pretty straightforward to use such a statistic, \n> once we have it. So perhaps we should get started by allowing the DBA \n> to set a correlation metric manually, and use that in the planner.\n>\n> - Heikki\n>\n>\nHow about pg comparing actual numbers of rows delivered with the \npredicted number - and if a specified threshold is reached, then \nmaintaining statistics? There is obviously more to it, such as: is this \na relevant query to consider & the size of the tables (no point in \nattempting to optimise tables with only 10 rows for example).\n\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 16/05/13 03:52, Heikki Linnakangas\n wrote:\n\nOn\n 15.05.2013 18:31, Shaun Thomas wrote:\n \nI've seen conversations on this since at\n least 2005. There were even\n \n proposed patches every once in a while, but never any consensus.\n Anyone\n \n care to comment?\n \n\n\n Well, as you said, there has never been any consensus.\n \n\n There are basically two pieces to the puzzle:\n \n\n 1. What metric do you use to represent correlation between\n columns?\n \n\n 2. How do use collect that statistic?\n \n\n Based on the prior discussions, collecting the stats seems to be\n tricky. It's not clear for which combinations of columns it should\n be collected (all possible combinations? That explodes\n quickly...), or how it can be collected without scanning the whole\n table.\n \n\n I think it would be pretty straightforward to use such a\n statistic, once we have it. So perhaps we should get started by\n allowing the DBA to set a correlation metric manually, and use\n that in the planner.\n \n\n - Heikki\n \n\n\n\nHow about pg comparing actual numbers\n of rows delivered with the predicted number - and\n if a specified threshold is reached, then maintaining statistics?\n There is obviously more to it, such as: is this a relevant query to\n consider & the size of the tables (no point in attempting to\n optimise tables with only 10 rows for example).\n\n\n Cheers,\n Gavin",
"msg_date": "Thu, 16 May 2013 08:22:33 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "Shaun Thomas wrote on 15.05.2013 17:31:\n> Hi!\n>\n> This has been a pain point for quite a while. While we've had several\n> discussions in the area, it always seems to just kinda trail off and\n> eventually vanish every time it comes up.\n>\n> A really basic example of how bad the planner is here:\n>\n> CREATE TABLE foo AS\n> SELECT a.id, a.id % 1000 AS col_a, a.id % 1000 AS col_b\n> FROM generate_series(1, 1000000) a(id);\n>\n> CREATE INDEX idx_foo_ab ON foo (col_a, col_b);\n>\n> Index Scan using idx_foo_ab on foo (cost=0.00..6.35 rows=1 width=12)\n> (actual time=0.030..3.643 rows=1000 loops=1)\n> Index Cond: ((col_a = 50) AND (col_b = 50))\n>\n> Hey, look! The row estimate is off by a factor of 1000. This\n> particular case doesn't suffer terribly from the mis-estimation, but\n> others do. Boy, do they ever.\n\nWhat happens if you create one index for each column? (instead of one combined index)\n\nFor your example it does not seem to improve the situation, but maybe things get better with the \"bad\" queries?\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 22:31:46 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On Wed, May 15, 2013 at 1:15 PM, Gavin Flower <[email protected]\n> wrote:\n\n> On 16/05/13 04:23, Craig James wrote:\n>\n> On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas <[email protected]>wrote:\n>\n>> [Inefficient plans for correlated columns] has been a pain point for\n>> quite a while. While we've had several discussions in the area, it always\n>> seems to just kinda trail off and eventually vanish every time it comes up.\n>>\n>> [...]\n>\n>\n> It's a very hard problem. There's no way you can keep statistics about\n> all possible correlations since the number of possibilities is O(N^2) with\n> the number of columns.\n>\n> Actually far worse: N!/(N - K)!K! summed over K=1...N, assuming the order\n> of columns in the correlation is unimportant (otherwise it is N factorial)\n> - based on my hazy recollection of the relevant maths...\n>\n\nRight ... I was only thinking of combinations for two columns.\n\nCraig\n\n\n>\n> [...]\n>\n> Cheers,\n> Gavin\n>\n>\n\nOn Wed, May 15, 2013 at 1:15 PM, Gavin Flower <[email protected]> wrote:\n\nOn 16/05/13 04:23, Craig James wrote:\n\n\nOn Wed, May 15, 2013 at 8:31 AM, Shaun\n Thomas <[email protected]>\n wrote:\n\n [Inefficient plans for correlated columns] has been a pain\n point for quite a while. While we've had several discussions\n in the area, it always seems to just kinda trail off and\n eventually vanish every time it comes up.\n\n\n\n\n[...]\n\n\n\n It's a very hard problem. There's no way you can keep\n statistics about all possible correlations since the number of\n possibilities is O(N^2) with the number of columns.\n\n\n\n Actually far worse: N!/(N - K)!K! summed over K=1...N, assuming the\n order of columns in the correlation is unimportant (otherwise it is\n N factorial) - based on my hazy recollection of the relevant\n maths...Right ... I was only thinking of combinations for two columns.Craig \n\n\n [...]\n\n Cheers,\n Gavin",
"msg_date": "Wed, 15 May 2013 13:54:15 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
},
{
"msg_contents": "On 05/15/2013 03:31 PM, Thomas Kellerer wrote:\n\n> What happens if you create one index for each column? (instead of one\n> combined index)\n\nI just created the combined index to simplify the output. With two \nindexes, it does the usual bitmap index scan. Even then the row estimate \nis off by a factor of 1000, which is the actual problem.\n\nThe other reason I tried the combined index was that there was some \nnoise a while back about having the planner check for possible column \ncorrelations on composite indexes. Clearly that ended up not being the \ncase, but it was worth examination.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 May 2013 07:47:08 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Thinking About Correlated Columns (again)"
}
] |
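A minimal sketch of the single-column-index variant Thomas Kellerer asks about above, reusing the foo table from Shaun Thomas's example; as Shaun notes in his last reply, the plan becomes a bitmap scan but the row estimate stays off by about the same factor, because the planner still multiplies the two per-column selectivities as if the columns were independent.

    CREATE INDEX idx_foo_a ON foo (col_a);
    CREATE INDEX idx_foo_b ON foo (col_b);
    ANALYZE foo;

    EXPLAIN ANALYZE
    SELECT * FROM foo WHERE col_a = 50 AND col_b = 50;
    -- Typically planned as a BitmapAnd of the two indexes, still estimating
    -- roughly 1 row where 1000 are returned.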
[
{
"msg_contents": "Hi all,\n\nOur application is heavy write and IO utilisation has been the problem for\nus for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for\nthe master server. I'm aware of write cache issue on SSDs in case of power\nloss. However, our hosting provider doesn't offer any other choices of SSD\ndrives with supercapacitor. To minimise risk, we will also set up another\nRAID 10 SAS in streaming replication mode. For our application, a few\nseconds of data loss is acceptable.\n\nMy question is, would corrupted data files on the primary server affect the\nstreaming standby? In other word, is this setup acceptable in terms of\nminimising deficiency of SSDs?\n\nCheers,\nCuong\n\nHi all,Our application is heavy write and IO utilisation has been the problem for us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for the master server. I'm aware of write cache issue on SSDs in case of power loss. However, our hosting provider doesn't offer any other choices of SSD drives with supercapacitor. To minimise risk, we will also set up another RAID 10 SAS in streaming replication mode. For our application, a few seconds of data loss is acceptable. \nMy question is, would corrupted data files on the primary server affect the streaming standby? In other word, is this setup acceptable in terms of minimising deficiency of SSDs?\n\nCheers,Cuong",
"msg_date": "Fri, 17 May 2013 00:46:10 +1000",
"msg_from": "Cuong Hoang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Thu, May 16, 2013 at 9:46 AM, Cuong Hoang <[email protected]> wrote:\n> Hi all,\n>\n> Our application is heavy write and IO utilisation has been the problem for\n> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for\n> the master server. I'm aware of write cache issue on SSDs in case of power\n> loss. However, our hosting provider doesn't offer any other choices of SSD\n> drives with supercapacitor. To minimise risk, we will also set up another\n> RAID 10 SAS in streaming replication mode. For our application, a few\n> seconds of data loss is acceptable.\n>\n> My question is, would corrupted data files on the primary server affect the\n> streaming standby? In other word, is this setup acceptable in terms of\n> minimising deficiency of SSDs?\n\nData corruption caused by sudden power event on the master will not\ncross over. Basically with this configuration you must switch over to\nthe standby in that case. Corruption caused by other issues, say a\nfaulty drive, will transfer over however. Block checksum feature of\n9.3 as a strategy to reduce the risk of that class of issue.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 May 2013 12:31:48 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n\n> Hi all,\n>\n> Our application is heavy write and IO utilisation has been the problem for\n> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for\n> the master server. I'm aware of write cache issue on SSDs in case of power\n> loss. However, our hosting provider doesn't offer any other choices of SSD\n> drives with supercapacitor. To minimise risk, we will also set up another\n> RAID 10 SAS in streaming replication mode. For our application, a few\n> seconds of data loss is acceptable.\n>\n> My question is, would corrupted data files on the primary server affect\n> the streaming standby? In other word, is this setup acceptable in terms of\n> minimising deficiency of SSDs?\n>\n\n\nThat seems rather scary to me for two reasons.\n\nIf the data center has a sudden power failure, why would it not take out\nboth machines either simultaneously or in short succession? Can you verify\nthat the hosting provider does not have them on the same UPS (or even\nworse, as two virtual machines on the same physical host)?\n\nThe other issue is that you'd have to make sure the master does not restart\nafter a crash. If your init.d scripts just blindly start postgresql, then\nafter a sudden OS restart it will automatically enter recovery and then\nopen as usual, even though it might be silently corrupt. At that point it\nwill be generating WAL based on corrupt data (and incorrect query results),\nand propagating that to the standby. So you have to be paranoid that if\nthe master ever crashes, it is shot in the head and then reconstructed from\nthe standby.\n\nCheers,\n\nJeff\n\nOn Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\nHi all,Our application is heavy write and IO utilisation has been the problem for us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for the master server. I'm aware of write cache issue on SSDs in case of power loss. However, our hosting provider doesn't offer any other choices of SSD drives with supercapacitor. To minimise risk, we will also set up another RAID 10 SAS in streaming replication mode. For our application, a few seconds of data loss is acceptable. \nMy question is, would corrupted data files on the primary server affect the streaming standby? In other word, is this setup acceptable in terms of minimising deficiency of SSDs?\nThat seems rather scary to me for two reasons. If the data center has a sudden power failure, why would it not take out both machines either simultaneously or in short succession? Can you verify that the hosting provider does not have them on the same UPS (or even worse, as two virtual machines on the same physical host)?\nThe other issue is that you'd have to make sure the master does not restart after a crash. If your init.d scripts just blindly start postgresql, then after a sudden OS restart it will automatically enter recovery and then open as usual, even though it might be silently corrupt. At that point it will be generating WAL based on corrupt data (and incorrect query results), and propagating that to the standby. So you have to be paranoid that if the master ever crashes, it is shot in the head and then reconstructed from the standby.\nCheers,Jeff",
"msg_date": "Thu, 16 May 2013 11:34:40 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Thu, May 16, 2013 at 1:34 PM, Jeff Janes <[email protected]> wrote:\n> On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n>>\n>> Hi all,\n>>\n>> Our application is heavy write and IO utilisation has been the problem for\n>> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for\n>> the master server. I'm aware of write cache issue on SSDs in case of power\n>> loss. However, our hosting provider doesn't offer any other choices of SSD\n>> drives with supercapacitor. To minimise risk, we will also set up another\n>> RAID 10 SAS in streaming replication mode. For our application, a few\n>> seconds of data loss is acceptable.\n>>\n>> My question is, would corrupted data files on the primary server affect\n>> the streaming standby? In other word, is this setup acceptable in terms of\n>> minimising deficiency of SSDs?\n>\n>\n>\n> That seems rather scary to me for two reasons.\n>\n> If the data center has a sudden power failure, why would it not take out\n> both machines either simultaneously or in short succession? Can you verify\n> that the hosting provider does not have them on the same UPS (or even worse,\n> as two virtual machines on the same physical host)?\n\nI took it to mean that his standby's \"raid 10 SAS\" meant disk drive\nbased standby. Agree that server should not be configured to\nautostart through init.d.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 May 2013 13:46:02 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Thu, May 16, 2013 at 11:46 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, May 16, 2013 at 1:34 PM, Jeff Janes <[email protected]> wrote:\n> > On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]>\n> wrote:\n> >>\n> >> Hi all,\n> >>\n> >> Our application is heavy write and IO utilisation has been the problem\n> for\n> >> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro\n> for\n> >> the master server. I'm aware of write cache issue on SSDs in case of\n> power\n> >> loss. However, our hosting provider doesn't offer any other choices of\n> SSD\n> >> drives with supercapacitor. To minimise risk, we will also set up\n> another\n> >> RAID 10 SAS in streaming replication mode. For our application, a few\n> >> seconds of data loss is acceptable.\n> >>\n> >> My question is, would corrupted data files on the primary server affect\n> >> the streaming standby? In other word, is this setup acceptable in terms\n> of\n> >> minimising deficiency of SSDs?\n> >\n> >\n> >\n> > That seems rather scary to me for two reasons.\n> >\n> > If the data center has a sudden power failure, why would it not take out\n> > both machines either simultaneously or in short succession? Can you\n> verify\n> > that the hosting provider does not have them on the same UPS (or even\n> worse,\n> > as two virtual machines on the same physical host)?\n>\n> I took it to mean that his standby's \"raid 10 SAS\" meant disk drive\n> based standby.\n\n\nI had not considered that. If the master can't keep up with IO using disk\ndrives, wouldn't a replica using them probably fall infinitely far behind\ntrying to keep up with the workload?\n\nMaybe the best choice would just be stick with the current set-up (one\nserver, spinning rust) and just turn off synchrounous_commit, since he is\nalready willing to take the loss of a few seconds of transactions.\n\nCheers,\n\nJeff\n\nOn Thu, May 16, 2013 at 11:46 AM, Merlin Moncure <[email protected]> wrote:\nOn Thu, May 16, 2013 at 1:34 PM, Jeff Janes <[email protected]> wrote:\n\n> On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n>>\n>> Hi all,\n>>\n>> Our application is heavy write and IO utilisation has been the problem for\n>> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for\n>> the master server. I'm aware of write cache issue on SSDs in case of power\n>> loss. However, our hosting provider doesn't offer any other choices of SSD\n>> drives with supercapacitor. To minimise risk, we will also set up another\n>> RAID 10 SAS in streaming replication mode. For our application, a few\n>> seconds of data loss is acceptable.\n>>\n>> My question is, would corrupted data files on the primary server affect\n>> the streaming standby? In other word, is this setup acceptable in terms of\n>> minimising deficiency of SSDs?\n>\n>\n>\n> That seems rather scary to me for two reasons.\n>\n> If the data center has a sudden power failure, why would it not take out\n> both machines either simultaneously or in short succession? Can you verify\n> that the hosting provider does not have them on the same UPS (or even worse,\n> as two virtual machines on the same physical host)?\n\nI took it to mean that his standby's \"raid 10 SAS\" meant disk drive\nbased standby. I had not considered that. 
If the master can't keep up with IO using disk drives, wouldn't a replica using them probably fall infinitely far behind trying to keep up with the workload?\nMaybe the best choice would just be stick with the current set-up (one server, spinning rust) and just turn off synchrounous_commit, since he is already willing to take the loss of a few seconds of transactions. \nCheers,Jeff",
"msg_date": "Thu, 16 May 2013 15:39:49 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "Thank you for your advice guys. We'll definitely turn off init.d script for\nPostgreSQL on the master. The standby host will be disk-based so it will be\nless vulnerable to power loss.\n\nI forgot to mention that we'll set up Wal-e <https://github.com/wal-e/wal-e> to\nship base backups and WALs to Amazon S3 continuous as another safety\nmeasure. Again, the lost of a few WALs would not be a big issue for us.\n\nDo you think that this setup will be acceptable for our purposes?\n\nThanks,\nCuong\n\n\nOn Fri, May 17, 2013 at 8:39 AM, Jeff Janes <[email protected]> wrote:\n\n> On Thu, May 16, 2013 at 11:46 AM, Merlin Moncure <[email protected]>wrote:\n>\n>> On Thu, May 16, 2013 at 1:34 PM, Jeff Janes <[email protected]> wrote:\n>> > On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]>\n>> wrote:\n>> >>\n>> >> Hi all,\n>> >>\n>> >> Our application is heavy write and IO utilisation has been the problem\n>> for\n>> >> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840\n>> Pro for\n>> >> the master server. I'm aware of write cache issue on SSDs in case of\n>> power\n>> >> loss. However, our hosting provider doesn't offer any other choices of\n>> SSD\n>> >> drives with supercapacitor. To minimise risk, we will also set up\n>> another\n>> >> RAID 10 SAS in streaming replication mode. For our application, a few\n>> >> seconds of data loss is acceptable.\n>> >>\n>> >> My question is, would corrupted data files on the primary server affect\n>> >> the streaming standby? In other word, is this setup acceptable in\n>> terms of\n>> >> minimising deficiency of SSDs?\n>> >\n>> >\n>> >\n>> > That seems rather scary to me for two reasons.\n>> >\n>> > If the data center has a sudden power failure, why would it not take out\n>> > both machines either simultaneously or in short succession? Can you\n>> verify\n>> > that the hosting provider does not have them on the same UPS (or even\n>> worse,\n>> > as two virtual machines on the same physical host)?\n>>\n>> I took it to mean that his standby's \"raid 10 SAS\" meant disk drive\n>> based standby.\n>\n>\n> I had not considered that. If the master can't keep up with IO using\n> disk drives, wouldn't a replica using them probably fall infinitely far\n> behind trying to keep up with the workload?\n>\n> Maybe the best choice would just be stick with the current set-up (one\n> server, spinning rust) and just turn off synchrounous_commit, since he is\n> already willing to take the loss of a few seconds of transactions.\n>\n> Cheers,\n>\n> Jeff\n>\n\nThank you for your advice guys. We'll definitely turn off init.d script for PostgreSQL on the master. The standby host will be disk-based so it will be less vulnerable to power loss.\n\nI forgot to mention that we'll set up Wal-e to ship base backups and WALs to Amazon S3 continuous as another safety measure. Again, the lost of a few WALs would not be a big issue for us. \nDo you think that this setup will be acceptable for our purposes?Thanks,Cuong\n\nOn Fri, May 17, 2013 at 8:39 AM, Jeff Janes <[email protected]> wrote:\nOn Thu, May 16, 2013 at 11:46 AM, Merlin Moncure <[email protected]> wrote:\n\nOn Thu, May 16, 2013 at 1:34 PM, Jeff Janes <[email protected]> wrote:\n\n\n\n> On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n>>\n>> Hi all,\n>>\n>> Our application is heavy write and IO utilisation has been the problem for\n>> us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840 Pro for\n>> the master server. 
I'm aware of write cache issue on SSDs in case of power\n>> loss. However, our hosting provider doesn't offer any other choices of SSD\n>> drives with supercapacitor. To minimise risk, we will also set up another\n>> RAID 10 SAS in streaming replication mode. For our application, a few\n>> seconds of data loss is acceptable.\n>>\n>> My question is, would corrupted data files on the primary server affect\n>> the streaming standby? In other word, is this setup acceptable in terms of\n>> minimising deficiency of SSDs?\n>\n>\n>\n> That seems rather scary to me for two reasons.\n>\n> If the data center has a sudden power failure, why would it not take out\n> both machines either simultaneously or in short succession? Can you verify\n> that the hosting provider does not have them on the same UPS (or even worse,\n> as two virtual machines on the same physical host)?\n\nI took it to mean that his standby's \"raid 10 SAS\" meant disk drive\nbased standby. I had not considered that. If the master can't keep up with IO using disk drives, wouldn't a replica using them probably fall infinitely far behind trying to keep up with the workload?\nMaybe the best choice would just be stick with the current set-up (one server, spinning rust) and just turn off synchrounous_commit, since he is already willing to take the loss of a few seconds of transactions. \nCheers,Jeff",
"msg_date": "Fri, 17 May 2013 09:52:00 +1000",
"msg_from": "Cuong Hoang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
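Cuong's WAL-E plan above assumes WAL archiving is switched on at the master. A minimal way to confirm that from psql is sketched below; the GUC names are standard PostgreSQL settings, and the wal-e invocation in the comment is only an illustration of what archive_command typically wraps, not something quoted from this thread.

```sql
-- Sketch: confirm the master is configured to archive WAL at all.
-- archive_command would typically wrap something like
-- "envdir /etc/wal-e.d/env wal-e wal-push %p" (illustrative only).
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'archive_mode', 'archive_command');
```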
{
"msg_contents": "Hi,\n\nOn 16.5.2013 16:46, Cuong Hoang wrote:\n> Hi all,\n> \n> Our application is heavy write and IO utilisation has been the problem\n> for us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840\n\nWhat does \"heavy write\" mean in your case? Does that mean a lot of small\ntransactions or few large ones?\n\nWhat have you done to tune the server?\n\n> Pro for the master server. I'm aware of write cache issue on SSDs in\n> case of power loss. However, our hosting provider doesn't offer any\n> other choices of SSD drives with supercapacitor. To minimise risk, we\n> will also set up another RAID 10 SAS in streaming replication mode. For\n> our application, a few seconds of data loss is acceptable.\n\nStreaming replication allows zero data loss if used in synchronous mode.\n\n> My question is, would corrupted data files on the primary server affect\n> the streaming standby? In other word, is this setup acceptable in terms\n> of minimising deficiency of SSDs?\n\nIt should be.\n\nHave you considered using a UPS? That would make the SSDs about as\nreliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\nthe SAS controller.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 May 2013 02:06:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "Hi Tomas,\n\nWe have a lot of small updates and some inserts. The database size is at\n35GB including indexes and TOAST. We think it will keep growing to about\n200GB. We usually have a burst of about 500k writes in about 5-10 minutes\nwhich basically cripples IO on the current servers. I've tried to increase\nthe checkpoint_segments, checkpoint_timeout etc. as recommended in\n\"PostgreSQL 9.0 Performance\" book. However, it seems like our server just\ncouldn't handle the current load.\n\nHere is the server specs:\n\nDual E5620, 32GB RAM, 4x1TB SAS 15k in RAID10\n\nHere are some core PostgreSQL configs:\n\nshared_buffers = 2GB # min 128kB\nwork_mem = 64MB # min 64kB\nmaintenance_work_mem = 1GB # min 1MB\nwal_buffers = 16MB\ncheckpoint_segments = 128\ncheckpoint_timeout = 30min\ncheckpoint_completion_target = 0.7\n\n\nThanks,\nCuong\n\n\nOn Fri, May 17, 2013 at 10:06 AM, Tomas Vondra <[email protected]> wrote:\n\n> Hi,\n>\n> On 16.5.2013 16:46, Cuong Hoang wrote:\n> > Hi all,\n> >\n> > Our application is heavy write and IO utilisation has been the problem\n> > for us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840\n>\n> What does \"heavy write\" mean in your case? Does that mean a lot of small\n> transactions or few large ones?\n>\n> What have you done to tune the server?\n>\n> > Pro for the master server. I'm aware of write cache issue on SSDs in\n> > case of power loss. However, our hosting provider doesn't offer any\n> > other choices of SSD drives with supercapacitor. To minimise risk, we\n> > will also set up another RAID 10 SAS in streaming replication mode. For\n> > our application, a few seconds of data loss is acceptable.\n>\n> Streaming replication allows zero data loss if used in synchronous mode.\n>\n> > My question is, would corrupted data files on the primary server affect\n> > the streaming standby? In other word, is this setup acceptable in terms\n> > of minimising deficiency of SSDs?\n>\n> It should be.\n>\n> Have you considered using a UPS? That would make the SSDs about as\n> reliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\n> the SAS controller.\n>\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Tomas,We have a lot of small updates and some inserts. The database size is at 35GB including indexes and TOAST. We think it will keep growing to about 200GB. We usually have a burst of about 500k writes in about 5-10 minutes which basically cripples IO on the current servers. I've tried to increase the checkpoint_segments, checkpoint_timeout etc. as recommended in \"PostgreSQL 9.0 Performance\" book. However, it seems like our server just couldn't handle the current load.\nHere is the server specs:Dual E5620, 32GB RAM, 4x1TB SAS 15k in RAID10Here are some core PostgreSQL configs:\nshared_buffers = 2GB # min 128kBwork_mem = 64MB # min 64kBmaintenance_work_mem = 1GB # min 1MB\nwal_buffers = 16MBcheckpoint_segments = 128checkpoint_timeout = 30mincheckpoint_completion_target = 0.7\nThanks,CuongOn Fri, May 17, 2013 at 10:06 AM, Tomas Vondra <[email protected]> wrote:\nHi,\n\nOn 16.5.2013 16:46, Cuong Hoang wrote:\n> Hi all,\n>\n> Our application is heavy write and IO utilisation has been the problem\n> for us for a while. We've decided to use RAID 10 of 4x500GB Samsung 840\n\nWhat does \"heavy write\" mean in your case? 
Does that mean a lot of small\ntransactions or few large ones?\n\nWhat have you done to tune the server?\n\n> Pro for the master server. I'm aware of write cache issue on SSDs in\n> case of power loss. However, our hosting provider doesn't offer any\n> other choices of SSD drives with supercapacitor. To minimise risk, we\n> will also set up another RAID 10 SAS in streaming replication mode. For\n> our application, a few seconds of data loss is acceptable.\n\nStreaming replication allows zero data loss if used in synchronous mode.\n\n> My question is, would corrupted data files on the primary server affect\n> the streaming standby? In other word, is this setup acceptable in terms\n> of minimising deficiency of SSDs?\n\nIt should be.\n\nHave you considered using a UPS? That would make the SSDs about as\nreliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\nthe SAS controller.\n\nTomas\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 17 May 2013 10:21:49 +1000",
"msg_from": "Cuong Hoang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
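Whether the checkpoint settings Cuong lists are actually absorbing the write bursts can be sanity-checked from the statistics collector. A rough sketch, assuming a 9.x server with the standard pg_stat_bgwriter view; interpreting the counters is still workload-specific.

```sql
-- Sketch: if checkpoints_req climbs much faster than checkpoints_timed,
-- checkpoint_segments is still too small for the write bursts; a large
-- buffers_backend relative to buffers_checkpoint suggests backends are
-- doing their own writes under pressure.
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend
FROM pg_stat_bgwriter;
```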
{
"msg_contents": "On 17/05/13 12:06, Tomas Vondra wrote:\n> Hi,\n>\n> On 16.5.2013 16:46, Cuong Hoang wrote:\n\n>> Pro for the master server. I'm aware of write cache issue on SSDs in\n>> case of power loss. However, our hosting provider doesn't offer any\n>> other choices of SSD drives with supercapacitor. To minimise risk, we\n>> will also set up another RAID 10 SAS in streaming replication mode. For\n>> our application, a few seconds of data loss is acceptable.\n>\n> Streaming replication allows zero data loss if used in synchronous mode.\n>\n\nI'm not sure synchronous replication is really an option here as it will \nslow the master down to spinning disk io speeds, unless the standby is \nconfigured with SSDs as well - which probably defeats the purpose of \nthis setup.\n\nOn the other hand, if the system is so loaded that a pure SAS (spinning \ndrive) solution can't keen up, then the standby lag may get to be way \nmore than a few seconds...which means look out for huge data loss.\n\nI'd be inclined to apply more leverage to hosting provider to source \nSSDs suitable for your needs, or change hosting providers.\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 May 2013 13:34:40 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
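Mark's concern about the SAS standby falling behind is measurable directly on the master. A sketch using the 9.2-era function and column names (later releases renamed xlog to wal):

```sql
-- Sketch: per-standby replication lag in bytes, as seen from the master.
SELECT client_addr,
       state,
       sync_state,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;
```

If replay_lag_bytes keeps growing during the write bursts, the spinning-disk standby is not keeping up and the "huge data loss" scenario Mark describes becomes a real risk.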
{
"msg_contents": "On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n> For our application, a few seconds of data loss is acceptable.\n\nIf a few seconds of data loss is acceptable, I would seriously look at\nthe synchronous_commit setting and think about turning that off rather\nthan risk silent corruption with non-enterprise SSDs.\n\nhttp://www.postgresql.org/docs/9.2/interactive/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT\n\n\"Unlike fsync, setting this parameter to off does not create any risk\nof database inconsistency: an operating system or database crash might\nresult in some recent allegedly-committed transactions being lost, but\nthe database state will be just the same as if those transactions had\nbeen aborted cleanly. So, turning synchronous_commit off can be a\nuseful alternative when performance is more important than exact\ncertainty about the durability of a transaction.\"\n\nWith a default wal_writer_delay setting of 200ms, you will only be at\nrisk of losing at most 600ms of transactions in the event of an\nunexpected crash or power loss, but write performance should go up a\nhuge amount, especially if they are a lot of small writes as you\ndescribe.\n\n-Dave\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 May 2013 23:34:15 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
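David's suggestion does not have to be applied globally: synchronous_commit can be relaxed per transaction, so only the high-volume, loss-tolerant writes skip the fsync wait while everything else keeps full durability. A minimal sketch; the table and column are made up purely for illustration.

```sql
-- Sketch: asynchronous commit for a single loss-tolerant transaction.
-- A crash can lose roughly the last few wal_writer_delay intervals of
-- such commits, but it cannot corrupt or desynchronize the database.
BEGIN;
SET LOCAL synchronous_commit = off;
UPDATE page_hits SET hits = hits + 1 WHERE page_id = 42;  -- hypothetical table
COMMIT;
```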
{
"msg_contents": "On Fri, May 17, 2013 at 1:34 AM, David Rees <[email protected]> wrote:\n> On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n>> For our application, a few seconds of data loss is acceptable.\n>\n> If a few seconds of data loss is acceptable, I would seriously look at\n> the synchronous_commit setting and think about turning that off rather\n> than risk silent corruption with non-enterprise SSDs.\n\nThat is not going to help. Since the drives lie about fsync, upon a\npower event you must assume the database is corrupt. I think his\nproposed configuration is the best bet (although I would strongly\nconsider putting SSD on the standby as well). Personally, I think\nnon SSD drives are obsolete for database purposes and will not\nrecommend them for any configuration. Ideally though, OP would be\nusing S3700 and we wouldn't be having this conversation.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 May 2013 08:17:52 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Fri, May 17, 2013 at 8:17 AM, Merlin Moncure <[email protected]> wrote:\n> On Fri, May 17, 2013 at 1:34 AM, David Rees <[email protected]> wrote:\n>> On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang <[email protected]> wrote:\n>>> For our application, a few seconds of data loss is acceptable.\n>>\n>> If a few seconds of data loss is acceptable, I would seriously look at\n>> the synchronous_commit setting and think about turning that off rather\n>> than risk silent corruption with non-enterprise SSDs.\n>\n> That is not going to help.\n\n\nwhoops -- misread your post heh (you were suggesting to use classic\nhard drives). yeah, that might work but it only buys you so much\nparticuarly if there is a lot of random activity in the heap.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 May 2013 08:19:17 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 17.5.2013 03:34, Mark Kirkwood wrote:\n> On 17/05/13 12:06, Tomas Vondra wrote:\n>> Hi,\n>>\n>> On 16.5.2013 16:46, Cuong Hoang wrote:\n> \n>>> Pro for the master server. I'm aware of write cache issue on SSDs in\n>>> case of power loss. However, our hosting provider doesn't offer any\n>>> other choices of SSD drives with supercapacitor. To minimise risk, we\n>>> will also set up another RAID 10 SAS in streaming replication mode. For\n>>> our application, a few seconds of data loss is acceptable.\n>>\n>> Streaming replication allows zero data loss if used in synchronous mode.\n>>\n> \n> I'm not sure synchronous replication is really an option here as it will\n> slow the master down to spinning disk io speeds, unless the standby is\n> configured with SSDs as well - which probably defeats the purpose of\n> this setup.\n\nThe master waits for reception of the data, not writing them to the\ndisks. It will have to write them eventually (and that might cause\nissues), but I'm not really sure it's that simple.\n\n> On the other hand, if the system is so loaded that a pure SAS (spinning\n> drive) solution can't keen up, then the standby lag may get to be way\n> more than a few seconds...which means look out for huge data loss.\n\nDon't forget the slave does not perform all the I/O (searching for the\nrow etc.). It's difficult to say how much this will save, though.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 May 2013 00:15:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "Do you really need a running standby for fast failover? What about doing\nplain WAL archiging? I'd definitely consider that, because even if you\nsetup a SAS-based replica, you can't use it for production as it does no\nhandle the load.\n\nI think you could setup WAL archiving and in case of crash just use the\nbase backup and replay the WAL from the archive.\n\nThis means the SAS-based system is purely for WAL archiving, i.e.\nperforms only sequential writes which should not be a big deal.\n\nThe recovery will be performed on the SSD system, which should handle it\nfine. If you need faster recovery, you may perform it incrementally on\nthe SAS system (it will take some time, but it won't influence the\nmaster). You might do that daily or something like that.\n\nThe only problem with this is that this is file based, and could mean\nlag (up to 16MB or archive_timeout). But this should not be problem if\nyou place the WAL on SAS drives with controller. If you use RAID, you\nshould be perfectly fine.\n\nSo this is what I'd suggest:\n\n 1) use SSD for data files, SAS RAID1 for WAL on the master\n 2) setup WAL archiving (base backup + archive on SAS system)\n 3) update the base backup daily (incremental recovery)\n 4) in case of crash, keep WAL from the archive and pg_xlog on the\n SAS RAID (on master)\n\n\nTomas\n\n\nOn 17.5.2013 02:21, Cuong Hoang wrote:\n> Hi Tomas,\n> \n> We have a lot of small updates and some inserts. The database size is at\n> 35GB including indexes and TOAST. We think it will keep growing to about\n> 200GB. We usually have a burst of about 500k writes in about 5-10\n> minutes which basically cripples IO on the current servers. I've tried\n> to increase the checkpoint_segments, checkpoint_timeout etc. as\n> recommended in \"PostgreSQL 9.0 Performance\" book. However, it seems like\n> our server just couldn't handle the current load.\n> \n> Here is the server specs:\n> \n> Dual E5620, 32GB RAM, 4x1TB SAS 15k in RAID10\n> \n> Here are some core PostgreSQL configs:\n> \n> shared_buffers = 2GB # min 128kB\n> work_mem = 64MB # min 64kB\n> maintenance_work_mem = 1GB # min 1MB\n> wal_buffers = 16MB\n> checkpoint_segments = 128\n> checkpoint_timeout = 30min\n> checkpoint_completion_target = 0.7\n> \n> \n> Thanks,\n> Cuong\n> \n> \n> On Fri, May 17, 2013 at 10:06 AM, Tomas Vondra <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Hi,\n> \n> On 16.5.2013 16:46, Cuong Hoang wrote:\n> > Hi all,\n> >\n> > Our application is heavy write and IO utilisation has been the problem\n> > for us for a while. We've decided to use RAID 10 of 4x500GB\n> Samsung 840\n> \n> What does \"heavy write\" mean in your case? Does that mean a lot of small\n> transactions or few large ones?\n> \n> What have you done to tune the server?\n> \n> > Pro for the master server. I'm aware of write cache issue on SSDs in\n> > case of power loss. However, our hosting provider doesn't offer any\n> > other choices of SSD drives with supercapacitor. To minimise risk, we\n> > will also set up another RAID 10 SAS in streaming replication\n> mode. For\n> > our application, a few seconds of data loss is acceptable.\n> \n> Streaming replication allows zero data loss if used in synchronous mode.\n> \n> > My question is, would corrupted data files on the primary server\n> affect\n> > the streaming standby? In other word, is this setup acceptable in\n> terms\n> > of minimising deficiency of SSDs?\n> \n> It should be.\n> \n> Have you considered using a UPS? 
That would make the SSDs about as\n> reliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\n> the SAS controller.\n> \n> Tomas\n> \n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 May 2013 00:34:50 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
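The base-backup half of Tomas's plan can be driven from psql while the actual file copy to the SAS archive host happens out of band. A sketch with an arbitrary label, assuming archive_command is already shipping WAL; pg_basebackup is the more convenient tool, but the low-level form maps directly onto the "update the base backup daily" step.

```sql
-- Sketch: low-level daily base backup wrapped around an external copy
-- of the data directory to the SAS archive host.
SELECT pg_start_backup('daily_base_backup', true);  -- true requests an immediate checkpoint
-- ... rsync/copy $PGDATA to the archive host here, excluding pg_xlog ...
SELECT pg_stop_backup();  -- waits until the WAL needed by the backup is archived
```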
{
"msg_contents": "Thanks for suggestion Tomas. We're about to set up WAL backup to Amazon S3.\nI think this should cover all of our bases. At least for the moment,\nSAS-based standby seems to keep up with the master because that's its sole\npurpose. We're not sending queries to the hot standby. We also consider\nswitching the hot standby to fast failover as you suggested. I guess for\nnow we should stick to streaming replication because the slave is still\nkeeping up with the master.\n\nBtw, after switching to SSD, performance improves vastly. IO utilisation\ndrops from 100% to 6% in peak periods. That's an order of magnitude faster!\n\nCheers,\nCuong\n\n\nOn Mon, May 20, 2013 at 8:34 AM, Tomas Vondra <[email protected]> wrote:\n\n> Do you really need a running standby for fast failover? What about doing\n> plain WAL archiging? I'd definitely consider that, because even if you\n> setup a SAS-based replica, you can't use it for production as it does no\n> handle the load.\n>\n> I think you could setup WAL archiving and in case of crash just use the\n> base backup and replay the WAL from the archive.\n>\n> This means the SAS-based system is purely for WAL archiving, i.e.\n> performs only sequential writes which should not be a big deal.\n>\n> The recovery will be performed on the SSD system, which should handle it\n> fine. If you need faster recovery, you may perform it incrementally on\n> the SAS system (it will take some time, but it won't influence the\n> master). You might do that daily or something like that.\n>\n> The only problem with this is that this is file based, and could mean\n> lag (up to 16MB or archive_timeout). But this should not be problem if\n> you place the WAL on SAS drives with controller. If you use RAID, you\n> should be perfectly fine.\n>\n> So this is what I'd suggest:\n>\n> 1) use SSD for data files, SAS RAID1 for WAL on the master\n> 2) setup WAL archiving (base backup + archive on SAS system)\n> 3) update the base backup daily (incremental recovery)\n> 4) in case of crash, keep WAL from the archive and pg_xlog on the\n> SAS RAID (on master)\n>\n>\n> Tomas\n>\n>\n> On 17.5.2013 02:21, Cuong Hoang wrote:\n> > Hi Tomas,\n> >\n> > We have a lot of small updates and some inserts. The database size is at\n> > 35GB including indexes and TOAST. We think it will keep growing to about\n> > 200GB. We usually have a burst of about 500k writes in about 5-10\n> > minutes which basically cripples IO on the current servers. I've tried\n> > to increase the checkpoint_segments, checkpoint_timeout etc. as\n> > recommended in \"PostgreSQL 9.0 Performance\" book. However, it seems like\n> > our server just couldn't handle the current load.\n> >\n> > Here is the server specs:\n> >\n> > Dual E5620, 32GB RAM, 4x1TB SAS 15k in RAID10\n> >\n> > Here are some core PostgreSQL configs:\n> >\n> > shared_buffers = 2GB # min 128kB\n> > work_mem = 64MB # min 64kB\n> > maintenance_work_mem = 1GB # min 1MB\n> > wal_buffers = 16MB\n> > checkpoint_segments = 128\n> > checkpoint_timeout = 30min\n> > checkpoint_completion_target = 0.7\n> >\n> >\n> > Thanks,\n> > Cuong\n> >\n> >\n> > On Fri, May 17, 2013 at 10:06 AM, Tomas Vondra <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Hi,\n> >\n> > On 16.5.2013 16:46, Cuong Hoang wrote:\n> > > Hi all,\n> > >\n> > > Our application is heavy write and IO utilisation has been the\n> problem\n> > > for us for a while. We've decided to use RAID 10 of 4x500GB\n> > Samsung 840\n> >\n> > What does \"heavy write\" mean in your case? 
Does that mean a lot of\n> small\n> > transactions or few large ones?\n> >\n> > What have you done to tune the server?\n> >\n> > > Pro for the master server. I'm aware of write cache issue on SSDs\n> in\n> > > case of power loss. However, our hosting provider doesn't offer any\n> > > other choices of SSD drives with supercapacitor. To minimise risk,\n> we\n> > > will also set up another RAID 10 SAS in streaming replication\n> > mode. For\n> > > our application, a few seconds of data loss is acceptable.\n> >\n> > Streaming replication allows zero data loss if used in synchronous\n> mode.\n> >\n> > > My question is, would corrupted data files on the primary server\n> > affect\n> > > the streaming standby? In other word, is this setup acceptable in\n> > terms\n> > > of minimising deficiency of SSDs?\n> >\n> > It should be.\n> >\n> > Have you considered using a UPS? That would make the SSDs about as\n> > reliable as SATA/SAS drives - the UPS may fail, but so may a BBU\n> unit on\n> > the SAS controller.\n> >\n> > Tomas\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list\n> > ([email protected]\n> > <mailto:[email protected]>)\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> >\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks for suggestion Tomas. We're about to set up WAL backup to Amazon S3. I think this should cover all of our bases. At least for the moment, SAS-based standby seems to keep up with the master because that's its sole purpose. We're not sending queries to the hot standby. We also consider switching the hot standby to fast failover as you suggested. I guess for now we should stick to streaming replication because the slave is still keeping up with the master.\nBtw, after switching to SSD, performance improves vastly. IO utilisation drops from 100% to 6% in peak periods. That's an order of magnitude faster!Cheers,\nCuongOn Mon, May 20, 2013 at 8:34 AM, Tomas Vondra <[email protected]> wrote:\nDo you really need a running standby for fast failover? What about doing\nplain WAL archiging? I'd definitely consider that, because even if you\nsetup a SAS-based replica, you can't use it for production as it does no\nhandle the load.\n\nI think you could setup WAL archiving and in case of crash just use the\nbase backup and replay the WAL from the archive.\n\nThis means the SAS-based system is purely for WAL archiving, i.e.\nperforms only sequential writes which should not be a big deal.\n\nThe recovery will be performed on the SSD system, which should handle it\nfine. If you need faster recovery, you may perform it incrementally on\nthe SAS system (it will take some time, but it won't influence the\nmaster). You might do that daily or something like that.\n\nThe only problem with this is that this is file based, and could mean\nlag (up to 16MB or archive_timeout). But this should not be problem if\nyou place the WAL on SAS drives with controller. 
If you use RAID, you\nshould be perfectly fine.\n\nSo this is what I'd suggest:\n\n 1) use SSD for data files, SAS RAID1 for WAL on the master\n 2) setup WAL archiving (base backup + archive on SAS system)\n 3) update the base backup daily (incremental recovery)\n 4) in case of crash, keep WAL from the archive and pg_xlog on the\n SAS RAID (on master)\n\n\nTomas\n\n\nOn 17.5.2013 02:21, Cuong Hoang wrote:\n> Hi Tomas,\n>\n> We have a lot of small updates and some inserts. The database size is at\n> 35GB including indexes and TOAST. We think it will keep growing to about\n> 200GB. We usually have a burst of about 500k writes in about 5-10\n> minutes which basically cripples IO on the current servers. I've tried\n> to increase the checkpoint_segments, checkpoint_timeout etc. as\n> recommended in \"PostgreSQL 9.0 Performance\" book. However, it seems like\n> our server just couldn't handle the current load.\n>\n> Here is the server specs:\n>\n> Dual E5620, 32GB RAM, 4x1TB SAS 15k in RAID10\n>\n> Here are some core PostgreSQL configs:\n>\n> shared_buffers = 2GB # min 128kB\n> work_mem = 64MB # min 64kB\n> maintenance_work_mem = 1GB # min 1MB\n> wal_buffers = 16MB\n> checkpoint_segments = 128\n> checkpoint_timeout = 30min\n> checkpoint_completion_target = 0.7\n>\n>\n> Thanks,\n> Cuong\n>\n>\n> On Fri, May 17, 2013 at 10:06 AM, Tomas Vondra <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> On 16.5.2013 16:46, Cuong Hoang wrote:\n> > Hi all,\n> >\n> > Our application is heavy write and IO utilisation has been the problem\n> > for us for a while. We've decided to use RAID 10 of 4x500GB\n> Samsung 840\n>\n> What does \"heavy write\" mean in your case? Does that mean a lot of small\n> transactions or few large ones?\n>\n> What have you done to tune the server?\n>\n> > Pro for the master server. I'm aware of write cache issue on SSDs in\n> > case of power loss. However, our hosting provider doesn't offer any\n> > other choices of SSD drives with supercapacitor. To minimise risk, we\n> > will also set up another RAID 10 SAS in streaming replication\n> mode. For\n> > our application, a few seconds of data loss is acceptable.\n>\n> Streaming replication allows zero data loss if used in synchronous mode.\n>\n> > My question is, would corrupted data files on the primary server\n> affect\n> > the streaming standby? In other word, is this setup acceptable in\n> terms\n> > of minimising deficiency of SSDs?\n>\n> It should be.\n>\n> Have you considered using a UPS? That would make the SSDs about as\n> reliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\n> the SAS controller.\n>\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 20 May 2013 12:41:32 +1000",
"msg_from": "Cuong Hoang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/16/13 8:06 PM, Tomas Vondra wrote:\n> Have you considered using a UPS? That would make the SSDs about as\n> reliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\n> the SAS controller.\n\nThat's not true at all. Any decent RAID controller will have an option \nto stop write-back caching when the battery is bad. Things will slow \nbadly when that happens, but there is zero data risk from a short-term \nBBU failure. The only serious risk with a good BBU setup are that \nyou'll have a power failure lasting so long that the battery runs down \nbefore the cache can be flushed to disk.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 May 2013 23:00:59 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/16/13 7:52 PM, Cuong Hoang wrote:\n> The standby host will be disk-based so it\n> will be less vulnerable to power loss.\n\nIf it can keep up with replay from the faster master, that sounds like a \ndecent backup. Make sure you setup all write caches very carefully on \nthat system, because it's going to be your best hope to come back up \nquickly after a real crash.\n\nAny vendor that pushes Samsung 840 drives for database use should be \nashamed of themselves. Those drives are turning into the new \nincarnation of what we saw with the Intel X25-E/X-25-M: they're very \npopular, but any system built with them will corrupt itself on the first \nfailure. I expect to see a new spike in people needing data recovery \nhelp after losing their Samsung 840 based servers start soon.\n\n> I forgot to mention that we'll set up Wal-e\n> <https://github.com/wal-e/wal-e> to ship base backups and WALs to Amazon\n> S3 continuous as another safety measure. Again, the lost of a few WALs\n> would not be a big issue for us.\n\nThat's a useful plan. Just make sure you ship new base backups fairly \noften. If you have to fall back to that copy of the data, you'll need \nto replay anything that's happened since the last base backup happened. \n That can easily result in a week of downtime if you're only shipping \nbackups once per month, for example.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 May 2013 23:10:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 20.5.2013 05:00, Greg Smith wrote:\n> On 5/16/13 8:06 PM, Tomas Vondra wrote:\n>> Have you considered using a UPS? That would make the SSDs about as\n>> reliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on\n>> the SAS controller.\n> \n> That's not true at all. Any decent RAID controller will have an option\n> to stop write-back caching when the battery is bad. Things will slow\n> badly when that happens, but there is zero data risk from a short-term\n> BBU failure. The only serious risk with a good BBU setup are that\n> you'll have a power failure lasting so long that the battery runs down\n> before the cache can be flushed to disk.\n\nThat's true, no doubt about that. What I was trying to say is that a\ncontroller with BBU (or a SSD with proper write cache protection) is\nabout as safe as an UPS when it comes to power outages. Assuming both\nare properly configured / watched / checked.\n\nSure, there are scenarios where UPS is not going to help (e.g. a PSU\nfailure) so a controller with BBU is better from this point of view.\nI've seen crashes with both options (BBU / UPS), both because of\nmisconfiguration and hw issues. BTW I don't know what controller are we\ntalking about here - it might be as crappy as the SSD drives.\n\nWhat I was thinking about in this case is using two SSD-based systems\nwith UPSes. That'd allow fast failover (which may not be possible with\nthe SAS based replica, as it does not handle the load).\n\nBut yes, I do agree that the provider should be ashamed for not\nproviding reliable SSDs in the first place. Getting reliable SSDs should\nbe the first option - all these suggestions are really just workarounds\nof this rather simple issue.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 May 2013 22:57:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Mon, May 20, 2013 at 3:57 PM, Tomas Vondra <[email protected]> wrote:\n> But yes, I do agree that the provider should be ashamed for not\n> providing reliable SSDs in the first place. Getting reliable SSDs should\n> be the first option - all these suggestions are really just workarounds\n> of this rather simple issue.\n\nAbsolutely. Reliable SSD should be the first and only option. They\nare significantly more expensive (more than 2x) but are worth it.\n\nWhen it comes to databases, particularly in the open source postgres\nworld, hard drives are completely obsolete. SSD are a couple of\norders of magnitude faster and this (while still slow in computer\nterms) is fast enough to put storage into the modern area by anyone\nwho is smart enough to connect a sata cable. While everyone likes to\nobsess over super scalable architectures technology has finally\nadvanced to the point where your typical SMB system can be handled by\na sincle device.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 May 2013 17:32:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/20/13 6:32 PM, Merlin Moncure wrote:\n\n> When it comes to databases, particularly in the open source postgres\n> world, hard drives are completely obsolete. SSD are a couple of\n> orders of magnitude faster and this (while still slow in computer\n> terms) is fast enough to put storage into the modern area by anyone\n> who is smart enough to connect a sata cable.\n\nYou're skirting the edge of vendor Kool-Aid here. I'm working on a very \ndetailed benchmark vs. real world piece centered on Intel's 710 models, \none of the few reliable drives on the market. (Yes, I have a DC S3700 \ntoo, just not as much data yet) While in theory these drives will hit \ntwo orders of magnitude speed improvement, and I have benchmarks where \nthat's the case, in practice I've seen them deliver less than 5X better \ntoo. You get one guess which I'd consider more likely to happen on a \ndifficult database server workload.\n\nThe only really huge gain to be had using SSD is commit rate at a low \nclient count. There you can easily do 5,000/second instead of a \nspinning disk that is closer to 100, for less than what the \nbattery-backed RAID card along costs to speed up mechanical drives. My \ntest server's 100GB DC S3700 was $250. That's still not two orders of \nmagnitude faster though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 20:19:09 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Tue, May 21, 2013 at 7:19 PM, Greg Smith <[email protected]> wrote:\n> On 5/20/13 6:32 PM, Merlin Moncure wrote:\n>\n>> When it comes to databases, particularly in the open source postgres\n>> world, hard drives are completely obsolete. SSD are a couple of\n>> orders of magnitude faster and this (while still slow in computer\n>> terms) is fast enough to put storage into the modern area by anyone\n>> who is smart enough to connect a sata cable.\n>\n>\n> You're skirting the edge of vendor Kool-Aid here. I'm working on a very\n> detailed benchmark vs. real world piece centered on Intel's 710 models, one\n> of the few reliable drives on the market. (Yes, I have a DC S3700 too, just\n> not as much data yet) While in theory these drives will hit two orders of\n> magnitude speed improvement, and I have benchmarks where that's the case, in\n> practice I've seen them deliver less than 5X better too. You get one guess\n> which I'd consider more likely to happen on a difficult database server\n> workload.\n>\n> The only really huge gain to be had using SSD is commit rate at a low client\n> count. There you can easily do 5,000/second instead of a spinning disk that\n> is closer to 100, for less than what the battery-backed RAID card along\n> costs to speed up mechanical drives. My test server's 100GB DC S3700 was\n> $250. That's still not two orders of magnitude faster though.\n\nThat's most certainly *not* the only gain to be had: random read rates\nof large databases (a very important metric for data analysis) can\neasily hit 20k tps. So I'll stand by the figure. Another point: that\n5000k commit raid is sustained, whereas a raid card will spectacularly\ndegrade until the cache overflows; it's not fair to compare burst with\nsustained performance. To hit 5000k sustained commit rate along with\ngood random read performance, you'd need a very expensive storage\nsystem. Right now I'm working (not by choice) with a teir-1 storage\nsystem (let's just say it rhymes with 'weefax') and I would trade it\nfor direct attached SSD in a heartbeat.\n\nAlso, note that 3rd party benchmarking is showing the 3700 completely\nsmoking the 710 in database workloads (for example, see\nhttp://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/6).\n\nAnyways, SSD installation in the post-capactior era has been 100.0%\ncorrelated in my experience (admittedly, around a dozen or so systems)\nwith removal of storage as the primary performance bottleneck, and\nI'll stand by that. I'm not claiming to work with extremely high\ntransaction rate systems but then again neither are most of the people\nreading this list. Disk drives are obsolete for database\ninstallations.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 08:30:45 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 9:30 AM, Merlin Moncure wrote:\n> That's most certainly *not* the only gain to be had: random read rates\n> of large databases (a very important metric for data analysis) can\n> easily hit 20k tps. So I'll stand by the figure.\n\nThey can easily hit that number. Or they can do this:\n\nDevice: r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\nsdd 2702.80 19.40 19.67 0.16 14.91 273.68 71.74 0.37 100.00\nsdd 2707.60 13.00 19.53 0.10 14.78 276.61 90.34 0.37 100.00\n\nThat's an Intel 710 being crushed by a random read database server \nworkload, unable to deliver even 3000 IOPS / 20MB/s. I have hours of \ndata like this from several servers. Yes, the DC S3700 drives are at \nleast twice as fast on average, but I haven't had one for long enough to \nsee what its worst case really looks like yet.\n\nHere's a mechanical drive hitting its limits on the same server as the \nabove:\n\nDevice: r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm \n%util\nsdb 100.80 220.60 1.06 1.79 18.16 228.78 724.11 3.11 \n100.00\nsdb 119.20 220.40 1.09 1.77 17.22 228.36 677.46 2.94 \n100.00\n\nGiving around 3MB/s. I am quite happy saying the SSD is delivering \nabout a single order of magnitude improvement, in both throughput and \nlatency. But that's it, and a single order of magnitude improvement is \nsometimes not good enough to solve all storage issues.\n\nIf all you care about is speed, the main situation where I've found \nthere to still be value in \"tier 1 storage\" are extremely write-heavy \nworkloads. The best write numbers I've seen out of Postgres are still \ngoing into a monster EMC unit, simply because the unit I was working \nwith had 16GB of durable cache. Yes, that only supports burst speeds, \nbut 16GB absorbs a whole lot of writes before it fills. Write \nre-ordering and combining can accelerate traditional disk quite a bit \nwhen it's across a really large horizon like that.\n\n> Anyways, SSD installation in the post-capacitor era has been 100.0%\n> correlated in my experience (admittedly, around a dozen or so systems)\n> with removal of storage as the primary performance bottleneck, and\n> I'll stand by that.\n\nI wish it were that easy for everyone, but that's simply not true. Are \nthere lots of systems where SSD makes storage look almost free it's so \nfast? Sure. But presuming all systems will look like that is \noptimistic, and it sets unreasonable expectations.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 10:18:37 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Wed, May 22, 2013 at 9:18 AM, Greg Smith <[email protected]> wrote:\n> On 5/22/13 9:30 AM, Merlin Moncure wrote:\n>>\n>> That's most certainly *not* the only gain to be had: random read rates\n>> of large databases (a very important metric for data analysis) can\n>> easily hit 20k tps. So I'll stand by the figure.\n>\n>\n> They can easily hit that number. Or they can do this:\n>\n> Device: r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\n> sdd 2702.80 19.40 19.67 0.16 14.91 273.68 71.74 0.37 100.00\n> sdd 2707.60 13.00 19.53 0.10 14.78 276.61 90.34 0.37 100.00\n\nyup -- I've seen this too...the high transaction rates quickly fall\nover when there is concurrent writing (but for bulk 100% read OLAP\nqueries I see the higher figure more often than not). Even so, it's\na huge difference over 100. unfortunately, I don't have a s3700 to\ntest with, but based on everything i've seen it looks like it's a\nmostly solved problem. (for example, see here:\nhttp://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_review).\n Tests that drive the 710 to <3k iops were not able to take the 3700\ndown under 10k at any queue depth. Take a good look at the 8k\npreconditioning curve latency chart -- everything you need to know is\nright there; it's a completely different controller and offers much\nbetter worst case performance.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 10:05:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 11:05 AM, Merlin Moncure wrote:\n> unfortunately, I don't have a s3700 to\n> test with, but based on everything i've seen it looks like it's a\n> mostly solved problem. (for example, see here:\n> http://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_review).\n> Tests that drive the 710 to <3k iops were not able to take the 3700\n> down under 10k at any queue depth.\n\nI have two weeks of real-world data from DC S3700 units in production \nand a pile of synthetic test results. The S3700 drives are at least 2X \nas fast as the 710 models, and there are synthetic tests where it's \ncloser to 10X.\n\nOn a 5,000 IOPS workload that crushed a pair of 710 units, the new \ndrives are only hitting 50% utilization now. Does that make worst-case \n10K? Maybe. I can't just extrapolate from the 50% figures and predict \nthe throughput I'll see at 100% though, so I'm still waiting for more \ndata before I feel comfortable saying exactly what the worst case looks \nlike.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 11:33:16 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 05/22/2013 08:30 AM, Merlin Moncure wrote:\n\n> I'm not claiming to work with extremely high transaction rate systems\n> but then again neither are most of the people reading this list.\n> Disk drives are obsolete for database installations.\n\nWell, you may not be able to make that claim, but I can. While we don't \nuse Intel SSDs, our first-gen FusinoIO cards can deliver about 20k \nPostgreSQL TPS of our real-world data right off the device before \ncaching effects start boosting the numbers. These days, devices like \nthis make our current batch look like rusty old hulks in comparison, so \nthe gap is just widening. Hard drives stand no chance at all.\n\nAn 8-drive 15k RPM RAID-10 gave us about 1800 TPS back when we switched \nto FusionIO about two years ago. So, while Intel drives themselves may \nnot be able to hit sustained 100x speeds over spindles, it's pretty \nclear that that's a firmware or implementation limitation.\n\nThe main \"issue\" is that the sustained sequence scan speeds are \ngenerally less than an order of magnitude faster than drives. So as soon \nas you hit something that isn't limited by random IOPS, spindles get a \nchance to catch up. But those situations are few and far between in a \nheavy transactional setting. Having used NVRAM/SSDs, I could never go \nback so long as the budget allows us to procure them.\n\nA data warehouse? Maybe spindles still have a place there. Heavy \ntransactional system? Not a chance.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 11:56:44 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/2013 8:18 AM, Greg Smith wrote:\n>\n> They can easily hit that number. Or they can do this:\n>\n> Device: r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm \n> %util\n> sdd 2702.80 19.40 19.67 0.16 14.91 273.68 71.74 0.37 100.00\n> sdd 2707.60 13.00 19.53 0.10 14.78 276.61 90.34 0.37 100.00\n>\n> That's an Intel 710 being crushed by a random read database server \n> workload, unable to deliver even 3000 IOPS / 20MB/s. I have hours of \n> data like this from several servers.\n\nThis is interesting. Do you know what it is about the workload that \nleads to the unusually low rps ?\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 11:31:51 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 12:56 PM, Shaun Thomas wrote:\n> Well, you may not be able to make that claim, but I can. While we don't\n> use Intel SSDs, our first-gen FusinoIO cards can deliver about 20k\n> PostgreSQL TPS of our real-world data right off the device before\n> caching effects start boosting the numbers.\n\nI've seen FusionIO hit that 20K commit number, as well as hitting 75K \nIOPS on random reads (600MB/s). They are roughly 5 to 10X faster than \nthe Intel 320/710 drives. There's a corresponding price hit though, and \nhaving to provision PCI-E cards is a pain in some systems.\n\nA claim that a FusionIO drive in particular is capable of 100X the \nperformance of a spinning drive, that I wouldn't dispute. I even made \nthat claim myself with some benchmark numbers to back it up: \nhttp://www.fusionio.com/blog/fusion-io-boosts-postgresql-performance/ \nThat's not just a generic SSD anymore though.\n\n> An 8-drive 15k RPM RAID-10 gave us about 1800 TPS back when we switched\n> to FusionIO about two years ago. So, while Intel drives themselves may\n> not be able to hit sustained 100x speeds over spindles, it's pretty\n> clear that that's a firmware or implementation limitation.\n\n1800 TPS to 20K TPS is just over a 10X speedup.\n\nAs for Intel vs. FusionIO, rather than implementation quality it's more \nwhat architecture you're willing to pay for. If you test a few models \nacross Intel's product line, you can see there's a rough size vs. speed \ncorrelation. The larger units have more channels of flash going at the \nsame time. FusionIO has architected such that there is a wide write \npath even on their smallest cards. That 75K IOPS number I got even out \nof their little 80GB card. (since dropped from the product line)\n\nI can buy a good number of Intel DC S3700 drives for what a FusionIO \ncard costs though.\n\n> The main \"issue\" is that the sustained sequence scan speeds are\n> generally less than an order of magnitude faster than drives. So as soon\n> as you hit something that isn't limited by random IOPS, spindles get a\n> chance to catch up.\n\nI have some moderately fast SSD based transactional systems that are \nstill using traditional drives with battery-backed cache for the \nsequential writes of the WAL volume, where the data volume is on Intel \n710 disks. WAL writes really burn through flash cells, too, so keeping \nthem on traditional drives can be cost effective in a few ways. That \napproach is lucky to hit 10K TPS though, so it can't compete against \nwhat a PCI-E card like the FusionIO drives are capable of.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 14:06:25 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 05/22/2013 01:06 PM, Greg Smith wrote:\n\n> There's a corresponding price hit though, and\n> having to provision PCI-E cards is a pain in some systems.\n\nOh, totally. Specialist devices like RAMSAN, FusionIO, Virident, or \nWhiptail are hideously expensive, even compared to high-end SSDs. I was \njust pointing out out that the technical limitations of the underlying \nchips (NVRAM) can be overcome or augmented in ways Intel isn't doing (yet).\n\n> 1800 TPS to 20K TPS is just over a 10X speedup.\n\nTrue. But in that case, it was a single device pitted against 8 very \nhigh-end 15k RPM spindles. I'd need 80-100 drives in a massive SAN to \nget similar numbers, and at that point, we're not really saving any \nmoney and have a lot more failure points and maintenance.\n\nI guess you get way more space, though. :)\n\n> The larger units have more channels of flash going at the\n> same time. FusionIO has architected such that there is a wide write\n> path even on their smallest cards.\n\nYep. And I've been watching these technologies like a hawk waiting for \nthe new chips and their performance profiles. Some of the newer chips \nhave performance multipliers even on a single die in the larger sizes.\n\n> I can buy a good number of Intel DC S3700 drives for what a FusionIO\n> card costs though.\n\nI know. :(\n\nBut knowing the performance they can deliver, I often dream of a perfect \ndevice comprised of several PCIe-based NVRAM cards in a hot-swap PCIe \nenclosure (they exist!). Something like that in a 3U piece of gear would \nabsolutely annihilate even the largest SAN.\n\nAt the mere cost of a half million or so. :p\n\n> I have some moderately fast SSD based transactional systems that are\n> still using traditional drives with battery-backed cache for the\n> sequential writes of the WAL volume, where the data volume is on\n> Intel 710 disks.\n\nThat sounds like a very sane and recommendable approach, and \ncoincidentally the same we would use if we couldn't afford the FusionIO \ndrives.\n\nI'm actually curious to see how using ZFS with its CoW profile and using \na bundle of SSDs as a ZIL would compare. It's still disk-based, but the \ntransparent SSD layer acting as a gigantic passive read and write cache \nintrigue me. It seems like it would also make a good middle-ground \nconcerning cost vs. performance.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 13:28:02 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 05/22/2013 12:31 PM, David Boreham wrote:\n\n>> Device: r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\n>> sdd 2702.80 19.40 19.67 0.16 14.91 273.68 71.74 0.37 100.00\n>> sdd 2707.60 13.00 19.53 0.10 14.78 276.61 90.34 0.37 100.00\n>>\n>> That's an Intel 710 being crushed by a random read database server\n>> workload, unable to deliver even 3000 IOPS / 20MB/s. I have hours of\n>> data like this from several servers.\n>\n> This is interesting. Do you know what it is about the workload that\n> leads to the unusually low rps ?\n\nThat read rate and that throughput suggest 8k reads. The queue size is \n270+, which is pretty high for a single device, even when it's an SSD. \nSome SSDs seem to break down on queue sizes over 4, and 15 sectors \nspread across a read queue of 270 is pretty hash. The drive tested here \nbasically fell over on servicing a huge diverse read queue, which \nsuggests a firmware issue.\n\nOften this is because the device was optimized for sequential reads and \npost lower IOPS than is theoretically possible so they can advertise \nhigher numbers alongside consumer-grade disks. They're Greg's disks \nthough. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 13:45:06 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "\nOn 05/22/2013 11:06 AM, Greg Smith wrote:\n\n> I have some moderately fast SSD based transactional systems that are\n> still using traditional drives with battery-backed cache for the\n> sequential writes of the WAL volume, where the data volume is on Intel\n> 710 disks. WAL writes really burn through flash cells, too, so keeping\n> them on traditional drives can be cost effective in a few ways. That\n> approach is lucky to hit 10K TPS though, so it can't compete against\n> what a PCI-E card like the FusionIO drives are capable of.\n\nGreg, can you elaborate on the SSD + Xlog issue? What type of burn \nthrough are we talking about?\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 12:06:50 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 3:06 PM, Joshua D. Drake wrote:\n> Greg, can you elaborate on the SSD + Xlog issue? What type of burn\n> through are we talking about?\n\nYou're burning through flash cells at a multiple of the total WAL write \nvolume. The system I gave iostat snapshots from upthread (with the \nIntel 710 hitting its limit) archives about 1TB of WAL each week. The \nactual amount of WAL written in terms of erased flash blocks is even \nhigher though, because sometimes the flash is hit with partial page \nwrites. The write amplification of WAL is much worse than the main \ndatabase.\n\nI gave a rough intro to this on the Intel drives at \nhttp://blog.2ndquadrant.com/intel_ssds_lifetime_and_the_32/ and there's \na nice \"Write endurance\" table at \nhttp://www.tomshardware.com/reviews/ssd-710-enterprise-x25-e,3038-2.html\n\nThe cheapest of the Intel SSDs I have here only guarantees 15TB of total \nwrite endurance. Eliminating >1TB of writes per week by moving the WAL \noff SSD is a pretty significant change, even though the burn rate isn't \na simple linear thing--you won't burn the flash out in only 15 weeks.\n\nThe production server is actually using the higher grade 710 drives that \naim for 900TB instead. But I do have standby servers using the low \ngrade stuff, so anything I can do to decrease SSD burn rate without \ndropping performance is useful. And only the top tier of transaction \nrates will outrun a RAID1 pair of 15K drives dedicated to WAL.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 15:30:30 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Wed, May 22, 2013 at 2:30 PM, Greg Smith <[email protected]> wrote:\n> On 5/22/13 3:06 PM, Joshua D. Drake wrote:\n>>\n>> Greg, can you elaborate on the SSD + Xlog issue? What type of burn\n>> through are we talking about?\n>\n>\n> You're burning through flash cells at a multiple of the total WAL write\n> volume. The system I gave iostat snapshots from upthread (with the Intel\n> 710 hitting its limit) archives about 1TB of WAL each week. The actual\n> amount of WAL written in terms of erased flash blocks is even higher though,\n> because sometimes the flash is hit with partial page writes. The write\n> amplification of WAL is much worse than the main database.\n>\n> I gave a rough intro to this on the Intel drives at\n> http://blog.2ndquadrant.com/intel_ssds_lifetime_and_the_32/ and there's a\n> nice \"Write endurance\" table at\n> http://www.tomshardware.com/reviews/ssd-710-enterprise-x25-e,3038-2.html\n>\n> The cheapest of the Intel SSDs I have here only guarantees 15TB of total\n> write endurance. Eliminating >1TB of writes per week by moving the WAL off\n> SSD is a pretty significant change, even though the burn rate isn't a simple\n> linear thing--you won't burn the flash out in only 15 weeks.\n\nCertainly, intel 320 is not designed for 1tb/week workloads.\n\n> The production server is actually using the higher grade 710 drives that aim\n> for 900TB instead. But I do have standby servers using the low grade stuff,\n> so anything I can do to decrease SSD burn rate without dropping performance\n> is useful. And only the top tier of transaction rates will outrun a RAID1\n> pair of 15K drives dedicated to WAL.\n\ns3700 is rated for 10 drive writes/day for 5 years. so, for 200gb drive, that's\n200gb * 10/day * 365 days * 5, that's 3.65 million gigabytes or ~ 3.5 petabytes.\n\n1tb/week would take 67 years to burn through / whatever you assume for\nwrite amplification / whatever extra penalty you give if you are\nshooting for > 5 year duty cycle (flash degrades faster the older it\nis) *for a single 200gb device*. write endurance is not a problem\nfor this drive, in fact it's a very reasonable assumption that the\nfaster worst case random performance is directly related to reduced\nwrite amplification. btw, cost/pb of this drive is less than half of\nthe 710 (which IMO was obsolete the day the s3700 hit the street).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 14:51:45 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 05/22/2013 02:51 PM, Merlin Moncure wrote:\n\n> s3700 is rated for 10 drive writes/day for 5 years. so, for 200gb\n> drive, that's 200gb * 10/day * 365 days * 5, that's 3.65 million\n> gigabytes or ~ 3.5 petabytes.\n\nNice. And on that note:\n\nhttp://www.tomshardware.com/reviews/ssd-dc-s3700-raid-0-benchmarks,3480.html\n\nThey actually over-saturated the backplane with 24 of these drives in a \ngiant RAID-0, tipping the scales at around 3.1M IOPS. Not bad for \nconsumer-level drives. I'd love to see a RAID-10 of these.\n\nI'm having a hard time coming up with a database workload that would run \ninto performance problems with a (relatively inexpensive) setup like this.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 15:01:31 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 3:51 PM, Merlin Moncure wrote:\n> s3700 is rated for 10 drive writes/day for 5 years. so, for 200gb drive, that's\n> 200gb * 10/day * 365 days * 5, that's 3.65 million gigabytes or ~ 3.5 petabytes.\n\nYes, they've improved on the 1.5PB that the 710 drives topped out at. \nFor that particular drive, this is unlikely to be a problem. But I'm \nnot willing to toss out longevity issues at therefore irrelevant in all \ncases. Some flash still costs a lot more than Intel's SSDs do, like the \nFusionIO products. Chop even a few percent of the wear out of the price \ntag on a RAMSAN and you've saved some real money.\n\nAnd there are some other products with interesting \nprice/performance/capacity combinations that are also sensitive to \nwearout. Seagate's hybrid drives have turned interesting now that they \ncache writes safely for example. There's no cheaper way to get 1TB with \nflash write speeds for small commits than that drive right now. (Test \nresults on that drive coming soon, along with my full DC S3700 review)\n\n> btw, cost/pb of this drive is less than half of\n> the 710 (which IMO was obsolete the day the s3700 hit the street).\n\nYou bet, and I haven't recommended anyone buy a 710 since the \nannouncement. However, \"hit the street\" is still an issue. No one has \nbeen able to keep DC S3700 drives in stock very well yet. It took me \nthree tries through Newegg before my S3700 drive actually shipped.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 16:06:58 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Wed, May 22, 2013 at 3:06 PM, Greg Smith <[email protected]> wrote:\n> You bet, and I haven't recommended anyone buy a 710 since the announcement.\n> However, \"hit the street\" is still an issue. No one has been able to keep\n> DC S3700 drives in stock very well yet. It took me three tries through\n> Newegg before my S3700 drive actually shipped.\n\nWell, let's look a the facts:\n*) >2x write endurance vs 710 (500x 320)\n*) 2-10x performance depending on workload specifics\n*) much better worst case/average latency\n*) half the cost of the 710!?\n\nAfter obsoleting hard drives with the introduction of the 320/710,\nintel managed to obsolete their *own* entire lineup with the s3700\n(with the exception of the pcie devices and the ultra low cost\nnotebook 1$/gb segment). I'm amazed these drives were sold at that\nprice point: they could have been sold at 3-4x the current price and\nstill have a willing market (note, please don't do this). Presumably\nmost of the inventory is being bought up by small channel resellers\nfor a quick profit.\n\nEven by the fast moving standards of the SSD world this product is an\nabsolute game changer and has ushered in the new era of fast storage\nwith a loud 'gong'. Oh, the major vendors will still keep their\nrip-off going on a little longer selling their storage trays, raid\ncontrollers, entry/mid level SANS, SAS HBAs etc at huge markup to\ncustomers who don't need them (some will still need them, but the bar\nsuddenly just got spectacularly raised before you have to look into\nenterprise gear). CRT was overtaken by LCD monitor in mind 2004 in\nterms of sales: I'd say it's late 2002/early 2003, at least for new\ndeployments.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 15:57:31 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On May 22, 2013, at 4:06 PM, Greg Smith wrote:\n\n> And there are some other products with interesting price/performance/capacity combinations that are also sensitive to wearout. Seagate's hybrid drives have turned interesting now that they cache writes safely for example. There's no cheaper way to get 1TB with flash write speeds for small commits than that drive right now. (Test results on that drive coming soon, along with my full DC S3700 review)\n\nI am really looking forward to that. Will you announce here or just post on the 2ndQuadrant blog?\n\nAnother \"hybrid\" solution is to run ZFS on some decent hard drives and then put the ZFS intent log on SSDs. With very synthetic benchmarks, the random write performance is excellent.\n\nAll of these discussions about alternate storage media are great - everyone has different needs and there are certainly a number of deployments that can \"get away\" with spending much less money by adding some solid state storage. There's really an amazing number of options today…\n\nThanks,\n\nCharles\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 17:49:44 -0400",
"msg_from": "CSS <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "\nOn 05/22/2013 01:57 PM, Merlin Moncure wrote:\n>\n> On Wed, May 22, 2013 at 3:06 PM, Greg Smith <[email protected]> wrote:\n>> You bet, and I haven't recommended anyone buy a 710 since the announcement.\n>> However, \"hit the street\" is still an issue. No one has been able to keep\n>> DC S3700 drives in stock very well yet. It took me three tries through\n>> Newegg before my S3700 drive actually shipped.\n>\n> Well, let's look a the facts:\n> *) >2x write endurance vs 710 (500x 320)\n> *) 2-10x performance depending on workload specifics\n> *) much better worst case/average latency\n> *) half the cost of the 710!?\n\nI am curious how the 710 or S3700 stacks up against the new M500 from \nCrucial? I know Intel is kind of the goto for these things but the m500 \nis power off protected and rated at: Endurance: 72TB total bytes written \n(TBW), equal to 40GB per day for 5 years .\n\nGranted it isn't he fasted pig in the poke but it sure seems like a very \nreasonable drive for the price:\n\nhttp://www.newegg.com/Product/Product.aspx?Item=20-148-695&ParentOnly=1&IsVirtualParent=1\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 15:42:36 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake <[email protected]> wrote:\n> I am curious how the 710 or S3700 stacks up against the new M500 from\n> Crucial? I know Intel is kind of the goto for these things but the m500 is\n> power off protected and rated at: Endurance: 72TB total bytes written (TBW),\n> equal to 40GB per day for 5 years .\n\nI don't think the m500 is power safe (nor is any drive at the <1$/gb\nprice point). This drive is positioned as a desktop class disk drive.\n AFAIK, the s3700 strongly outclasses all competitors on price,\nperformance, or both. Once you give up enterprise features of\nendurance and iops you have many options (samsung 840 is another one).\n Pretty soon these types of drives are going to be standard kit in\nworkstations (and we'll be back to the IDE area of corrupted data,\nha!). I would recommend none of them for server class use, they are\ninferior in terms of $/iop and $/gb written.\n\nfor server class drives, see:\nhitachi ssd400m (10$/gb, slower!)\nkingston e100,\netc.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 18:37:39 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "\nOn 05/22/2013 04:37 PM, Merlin Moncure wrote:\n>\n> On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake <[email protected]> wrote:\n>> I am curious how the 710 or S3700 stacks up against the new M500 from\n>> Crucial? I know Intel is kind of the goto for these things but the m500 is\n>> power off protected and rated at: Endurance: 72TB total bytes written (TBW),\n>> equal to 40GB per day for 5 years .\n>\n> I don't think the m500 is power safe (nor is any drive at the <1$/gb\n> price point).\n\nAccording the the data sheet it is power safe.\n\nhttp://investors.micron.com/releasedetail.cfm?ReleaseID=732650\nhttp://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd\n\nSincerely,\n\nJD\n\n\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 18:01:56 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 23/05/13 13:01, Joshua D. Drake wrote:\n>\n> On 05/22/2013 04:37 PM, Merlin Moncure wrote:\n>>\n>> On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake \n>> <[email protected]> wrote:\n>>> I am curious how the 710 or S3700 stacks up against the new M500 from\n>>> Crucial? I know Intel is kind of the goto for these things but the \n>>> m500 is\n>>> power off protected and rated at: Endurance: 72TB total bytes \n>>> written (TBW),\n>>> equal to 40GB per day for 5 years .\n>>\n>> I don't think the m500 is power safe (nor is any drive at the <1$/gb\n>> price point).\n>\n> According the the data sheet it is power safe.\n>\n> http://investors.micron.com/releasedetail.cfm?ReleaseID=732650\n> http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd\n>\n>\n\nYeah - they apparently have a capacitor on board.\n\nTheir write endurance is where they don't compare so favorably to the \nS3700 (they are *much* cheaper mind you):\n\n- M500 120GB drive: 40GB per day for 5 years\n- S3700 100GB drive: 1000GB per day for 5 years\n\nBut great to see more reasonably priced SSD with power off protection.\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 13:32:29 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 6:42 PM, Joshua D. Drake wrote:\n> I am curious how the 710 or S3700 stacks up against the new M500 from\n> Crucial? I know Intel is kind of the goto for these things but the m500\n> is power off protected and rated at: Endurance: 72TB total bytes written\n> (TBW), equal to 40GB per day for 5 years .\n\nThe M500 is fine on paper, I had that one on my list of things to \nevaluate when I can. The general reliability of Crucial's consumer SSD \nhas looked good recently. I'm not going to recommend that one until I \nactually see one work as expected though. I'm waiting for one to pass \nby or I reach a new toy purchasing spree.\n\nWhat makes me step very carefully here is watching what Intel went \nthrough when they released their first supercap drive, the 320 series. \nIf you look at the nastiest of the firmware bugs they had, like the \ninfamous \"8MB bug\", a lot of them were related to the new clean shutdown \nfeature. It's the type of firmware that takes some exposure to the real \nworld to flush out the bugs. The last of the enthusiast SSD players who \ntried to take this job on was OCZ with the Vertex 3 Pro, and they never \ngot that model quite right before abandoning it altogether.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 21:41:32 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 4:57 PM, Merlin Moncure wrote:\n> Oh, the major vendors will still keep their\n> rip-off going on a little longer selling their storage trays, raid\n> controllers, entry/mid level SANS, SAS HBAs etc at huge markup to\n> customers who don't need them (some will still need them, but the bar\n> suddenly just got spectacularly raised before you have to look into\n> enterprise gear).\n\nThe angle to distinguish \"enterprise\" hardware is moving on to error \nrelated capabilities. Soon we'll see SAS drives with the 520 byte \nsectors and checksumming for example.\n\nAnd while SATA drives have advanced a long way, they haven't caught up \nwith SAS for failure handling. It's still far too easy for a single \ncrazy SATA device to force crippling bus resets for example. Individual \nSATA ports don't expect to share things with others, while SAS chains \nhave a much better protocol for handling things.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 21:53:31 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 23/05/13 13:32, Mark Kirkwood wrote:\n> On 23/05/13 13:01, Joshua D. Drake wrote:\n>>\n>> On 05/22/2013 04:37 PM, Merlin Moncure wrote:\n>>>\n>>> On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake \n>>> <[email protected]> wrote:\n>>>> I am curious how the 710 or S3700 stacks up against the new M500 from\n>>>> Crucial? I know Intel is kind of the goto for these things but the \n>>>> m500 is\n>>>> power off protected and rated at: Endurance: 72TB total bytes \n>>>> written (TBW),\n>>>> equal to 40GB per day for 5 years .\n>>>\n>>> I don't think the m500 is power safe (nor is any drive at the <1$/gb\n>>> price point).\n>>\n>> According the the data sheet it is power safe.\n>>\n>> http://investors.micron.com/releasedetail.cfm?ReleaseID=732650\n>> http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd\n>>\n>>\n>\n> Yeah - they apparently have a capacitor on board.\n>\n\nMake that quite a few capacitors (top right corner):\n\nhttp://regmedia.co.uk/2013/05/07/m500_4.jpg\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 14:04:32 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Wednesday, May 22, 2013, Joshua D. Drake <[email protected]> wrote:\n>\n> On 05/22/2013 04:37 PM, Merlin Moncure wrote:\n>>\n>> On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake <[email protected]>\nwrote:\n>>>\n>>> I am curious how the 710 or S3700 stacks up against the new M500 from\n>>> Crucial? I know Intel is kind of the goto for these things but the m500\nis\n>>> power off protected and rated at: Endurance: 72TB total bytes written\n(TBW),\n>>> equal to 40GB per day for 5 years .\n>>\n>> I don't think the m500 is power safe (nor is any drive at the <1$/gb\n>> price point).\n>\n> According the the data sheet it is power safe.\n>\n> http://investors.micron.com/releasedetail.cfm?ReleaseID=732650\n> http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd\n\nWow, that seems like a pretty good deal then assuming it works and performs\ndecently.\n\nmerlin\n\nOn Wednesday, May 22, 2013, Joshua D. Drake <[email protected]> wrote:>> On 05/22/2013 04:37 PM, Merlin Moncure wrote:>>>> On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake <[email protected]> wrote:\n>>>>>> I am curious how the 710 or S3700 stacks up against the new M500 from>>> Crucial? I know Intel is kind of the goto for these things but the m500 is>>> power off protected and rated at: Endurance: 72TB total bytes written (TBW),\n>>> equal to 40GB per day for 5 years .>>>> I don't think the m500 is power safe (nor is any drive at the <1$/gb>> price point).>> According the the data sheet it is power safe.\n>> http://investors.micron.com/releasedetail.cfm?ReleaseID=732650> http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd\nWow, that seems like a pretty good deal then assuming it works and performs decently.merlin",
"msg_date": "Wed, 22 May 2013 21:17:15 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 10:04 PM, Mark Kirkwood wrote:\n> Make that quite a few capacitors (top right corner):\n> http://regmedia.co.uk/2013/05/07/m500_4.jpg\n\nThere are some more shots and descriptions of the internals in the \nexcellent review at \nhttp://techreport.com/review/24666/crucial-m500-ssd-reviewed\n\nThat also highlights the big problem with this drive that's kept me from \nbuying one so far:\n\n\"Unlike rivals Intel and Samsung, Crucial doesn't provide utility \nsoftware with a built-in health indicator. The M500's payload of SMART \nattributes doesn't contain any references to flash wear or bytes \nwritten, either. Several of the SMART attributes are labeled \n\"Vendor-specific,\" but you'll need to guess what they track and read the \nassociated values using third-party software.\"\n\nThat's a serious problem for most business use of this sort of drive.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 22:22:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 23/05/13 14:22, Greg Smith wrote:\n> On 5/22/13 10:04 PM, Mark Kirkwood wrote:\n>> Make that quite a few capacitors (top right corner):\n>> http://regmedia.co.uk/2013/05/07/m500_4.jpg\n>\n> There are some more shots and descriptions of the internals in the \n> excellent review at \n> http://techreport.com/review/24666/crucial-m500-ssd-reviewed\n>\n> That also highlights the big problem with this drive that's kept me \n> from buying one so far:\n>\n> \"Unlike rivals Intel and Samsung, Crucial doesn't provide utility \n> software with a built-in health indicator. The M500's payload of SMART \n> attributes doesn't contain any references to flash wear or bytes \n> written, either. Several of the SMART attributes are labeled \n> \"Vendor-specific,\" but you'll need to guess what they track and read \n> the associated values using third-party software.\"\n>\n> That's a serious problem for most business use of this sort of drive.\n>\n\nAgreed - I was thinking the same thing!\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 14:26:28 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 23/05/13 14:26, Mark Kirkwood wrote:\n> On 23/05/13 14:22, Greg Smith wrote:\n>> On 5/22/13 10:04 PM, Mark Kirkwood wrote:\n>>> Make that quite a few capacitors (top right corner):\n>>> http://regmedia.co.uk/2013/05/07/m500_4.jpg\n>>\n>> There are some more shots and descriptions of the internals in the \n>> excellent review at \n>> http://techreport.com/review/24666/crucial-m500-ssd-reviewed\n>>\n>> That also highlights the big problem with this drive that's kept me \n>> from buying one so far:\n>>\n>> \"Unlike rivals Intel and Samsung, Crucial doesn't provide utility \n>> software with a built-in health indicator. The M500's payload of \n>> SMART attributes doesn't contain any references to flash wear or \n>> bytes written, either. Several of the SMART attributes are labeled \n>> \"Vendor-specific,\" but you'll need to guess what they track and read \n>> the associated values using third-party software.\"\n>>\n>> That's a serious problem for most business use of this sort of drive.\n>>\n>\n> Agreed - I was thinking the same thing!\n>\n>\n\nHaving said that, there does seem to be a wear leveling counter in its \nSMART attributes - but, yes - I'd like to see indicators more similar \nthe level of detail that Intel provides.\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 14:44:18 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "\nOn 05/22/2013 07:17 PM, Merlin Moncure wrote:\n\n> > According the the data sheet it is power safe.\n> >\n> > http://investors.micron.com/releasedetail.cfm?ReleaseID=732650\n> > http://www.micron.com/products/solid-state-storage/client-ssd/m500-ssd\n>\n> Wow, that seems like a pretty good deal then assuming it works and\n> performs decently.\n\nYeah that was my thinking. Sure it isn't an S3700 but for the money it \nis still faster than the comparable spindle configuration.\n\nJD\n\n>\n> merlin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 19:51:45 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 05/22/2013 03:30 PM, Merlin Moncure wrote:\n> On Tue, May 21, 2013 at 7:19 PM, Greg Smith <[email protected]> wrote:\n>> On 5/20/13 6:32 PM, Merlin Moncure wrote:\n\n[cut]\n\n>> The only really huge gain to be had using SSD is commit rate at a low client\n>> count. There you can easily do 5,000/second instead of a spinning disk that\n>> is closer to 100, for less than what the battery-backed RAID card along\n>> costs to speed up mechanical drives. My test server's 100GB DC S3700 was\n>> $250. That's still not two orders of magnitude faster though.\n>\n> That's most certainly *not* the only gain to be had: random read rates\n> of large databases (a very important metric for data analysis) can\n> easily hit 20k tps. So I'll stand by the figure. Another point: that\n> 5000k commit raid is sustained, whereas a raid card will spectacularly\n> degrade until the cache overflows; it's not fair to compare burst with\n> sustained performance. To hit 5000k sustained commit rate along with\n> good random read performance, you'd need a very expensive storage\n> system. Right now I'm working (not by choice) with a teir-1 storage\n> system (let's just say it rhymes with 'weefax') and I would trade it\n> for direct attached SSD in a heartbeat.\n>\n> Also, note that 3rd party benchmarking is showing the 3700 completely\n> smoking the 710 in database workloads (for example, see\n> http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/6).\n\n[cut]\n\nSorry for interrupting but on a related note I would like to know your\nopinions on what the anandtech review said about 3700 poor performance\non \"Oracle Swingbench\", quoting the relevant part that you can find here (*)\n\n<quote>\n\n[..] There are two components to the Swingbench test we're running here:\nthe database itself, and the redo log. The redo log stores all changes that\nare made to the database, which allows the database to be reconstructed in\nthe event of a failure. In good DB design, these two would exist on separate\nstorage systems, but in order to increase IO we combined them both for this test.\nAccesses to the DB end up being 8KB and random in nature, a definite strong suit\nof the S3700 as we've already shown. The redo log however consists of a bunch\nof 1KB - 1.5KB, QD1, sequential accesses. The S3700, like many of the newer\ncontrollers we've tested, isn't optimized for low queue depth, sub-4KB, sequential\nworkloads like this. [..]\n\n</quote>\n\nDoes this kind of scenario apply to postgresql wal files repo ?\n\nThanks\nandrea\n\n\n(*) http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/5\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 08:56:50 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On Thu, May 23, 2013 at 1:56 AM, Andrea Suisani <[email protected]> wrote:\n> On 05/22/2013 03:30 PM, Merlin Moncure wrote:\n>>\n>> On Tue, May 21, 2013 at 7:19 PM, Greg Smith <[email protected]> wrote:\n>>>\n>>> On 5/20/13 6:32 PM, Merlin Moncure wrote:\n>\n>\n> [cut]\n>\n>\n>>> The only really huge gain to be had using SSD is commit rate at a low\n>>> client\n>>> count. There you can easily do 5,000/second instead of a spinning disk\n>>> that\n>>> is closer to 100, for less than what the battery-backed RAID card along\n>>> costs to speed up mechanical drives. My test server's 100GB DC S3700 was\n>>> $250. That's still not two orders of magnitude faster though.\n>>\n>>\n>> That's most certainly *not* the only gain to be had: random read rates\n>> of large databases (a very important metric for data analysis) can\n>> easily hit 20k tps. So I'll stand by the figure. Another point: that\n>> 5000k commit raid is sustained, whereas a raid card will spectacularly\n>> degrade until the cache overflows; it's not fair to compare burst with\n>> sustained performance. To hit 5000k sustained commit rate along with\n>> good random read performance, you'd need a very expensive storage\n>> system. Right now I'm working (not by choice) with a teir-1 storage\n>> system (let's just say it rhymes with 'weefax') and I would trade it\n>> for direct attached SSD in a heartbeat.\n>>\n>> Also, note that 3rd party benchmarking is showing the 3700 completely\n>> smoking the 710 in database workloads (for example, see\n>> http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/6).\n>\n>\n> [cut]\n>\n> Sorry for interrupting but on a related note I would like to know your\n> opinions on what the anandtech review said about 3700 poor performance\n> on \"Oracle Swingbench\", quoting the relevant part that you can find here (*)\n>\n> <quote>\n>\n> [..] There are two components to the Swingbench test we're running here:\n> the database itself, and the redo log. The redo log stores all changes that\n> are made to the database, which allows the database to be reconstructed in\n> the event of a failure. In good DB design, these two would exist on separate\n> storage systems, but in order to increase IO we combined them both for this\n> test.\n> Accesses to the DB end up being 8KB and random in nature, a definite strong\n> suit\n> of the S3700 as we've already shown. The redo log however consists of a\n> bunch\n> of 1KB - 1.5KB, QD1, sequential accesses. The S3700, like many of the newer\n> controllers we've tested, isn't optimized for low queue depth, sub-4KB,\n> sequential\n> workloads like this. [..]\n>\n> </quote>\n>\n> Does this kind of scenario apply to postgresql wal files repo ?\n\nhuh -- I don't think so. wal file segments are 8kb aligned, ditto\nclog, etc. In XLogWrite():\n\n /* OK to write the page(s) */\n from = XLogCtl->pages + startidx * (Size) XLOG_BLCKSZ;\n nbytes = npages * (Size) XLOG_BLCKSZ; <--\n errno = 0;\n if (write(openLogFile, from, nbytes) != nbytes)\n {\n\nAFICT, that's the only way you write out xlog. One thing I would\ndefinitely advise though is to disable partial page writes if it's\nenabled. s3700 is algined on 8kb blocks internally -- hm.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 08:47:13 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 05/23/2013 03:47 PM, Merlin Moncure wrote:\n\n[cut]\n\n>> <quote>\n>>\n>> [..] There are two components to the Swingbench test we're running here:\n>> the database itself, and the redo log. The redo log stores all changes that\n>> are made to the database, which allows the database to be reconstructed in\n>> the event of a failure. In good DB design, these two would exist on separate\n>> storage systems, but in order to increase IO we combined them both for this\n>> test.\n>> Accesses to the DB end up being 8KB and random in nature, a definite strong\n>> suit\n>> of the S3700 as we've already shown. The redo log however consists of a\n>> bunch\n>> of 1KB - 1.5KB, QD1, sequential accesses. The S3700, like many of the newer\n>> controllers we've tested, isn't optimized for low queue depth, sub-4KB,\n>> sequential\n>> workloads like this. [..]\n>>\n>> </quote>\n>>\n>> Does this kind of scenario apply to postgresql wal files repo ?\n>\n> huh -- I don't think so. wal file segments are 8kb aligned, ditto\n> clog, etc. In XLogWrite():\n>\n> /* OK to write the page(s) */\n> from = XLogCtl->pages + startidx * (Size) XLOG_BLCKSZ;\n> nbytes = npages * (Size) XLOG_BLCKSZ; <--\n> errno = 0;\n> if (write(openLogFile, from, nbytes) != nbytes)\n> {\n>\n> AFICT, that's the only way you write out xlog. One thing I would\n> definitely advise though is to disable partial page writes if it's\n> enabled. s3700 is algined on 8kb blocks internally -- hm.\n\nmany thanks merlin for both the explanation and the good advice :)\n\nandrea\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 15:55:08 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
},
{
"msg_contents": "On 5/22/13 2:45 PM, Shaun Thomas wrote:\n> That read rate and that throughput suggest 8k reads. The queue size is\n> 270+, which is pretty high for a single device, even when it's an SSD.\n> Some SSDs seem to break down on queue sizes over 4, and 15 sectors\n> spread across a read queue of 270 is pretty hash. The drive tested here\n> basically fell over on servicing a huge diverse read queue, which\n> suggests a firmware issue.\n\nThat's basically it. I don't know that I'd put the blame specifically \nonto a firmware issue without further evidence that's the case though. \nThe last time I chased down a SSD performance issue like this it ended \nup being a Linux scheduler bug. One thing I plan to do for future SSD \ntests is to try and replicate this issue better, starting by increasing \nthe number of clients to at least 300.\n\nRelated: if anyone read my \"Seeking PostgreSQL\" talk last year, some of \nmy Intel 320 results there were understating the drive's worst-case \nperformance due to a testing setup error. I have a blog entry talking \nabout what was wrong and how it slipped past me at \nhttp://highperfpostgres.com/2013/05/seeking-revisited-intel-320-series-and-ncq/\n\nWith that loose end sorted, I'll be kicking off a brand new round of SSD \ntests on a 24 core server here soon. All those will appear on my blog. \n The 320 drive is returning as the bang for buck champ, along with a DC \nS3700 and a Seagate 1TB Hybrid drive with NAND durable write cache.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 May 2013 08:11:53 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reliability with RAID 10 SSD and Streaming Replication"
}
]
[
{
"msg_contents": "Hi All,\n\nWe've got 3 quite large tables that due to an unexpected surge in\nusage (!) have grown to about 10GB each, with 72, 32 and 31 million\nrows in. I've been tasked with cleaning out about half of them, the\nproblem I've got is that even deleting the first 1,000,000 rows seems\nto take an unreasonable amount of time. Unfortunately this is on quite\nan old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n8.4; which serves other things like our logging systems.\n\nIf I run a sustained (more than about 5 minutes) delete it'll have a\ndetrimental effect on the other services. I'm trying to batch up the\ndeletes into small chunks of approximately 1 month of data ; even this\nseems to take too long, I originally reduced this down to a single\nday's data and had the same problem. I can keep decreasing the size of\nthe window I'm deleting but I feel I must be doing something either\nfundamentally wrong or over-complicating this enormously. I've\nswitched over to retrieving a list of IDs to delete, storing them in\ntemporary tables and deleting based on the primary keys on each of the\ntables with something similar to this:\n\nBEGIN TRANSACTION;\n\nCREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\nCREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n\nINSERT INTO table_a_ids_to_delete\n SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n< '2007-01-01T00:00:00';\n\nINSERT INTO table_b_ids_to_delete\n SELECT table_b_id FROM table_a_table_b_xref\n INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\ntable_a_table_b.quote_id);\n\nDELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n\nDELETE FROM table_b USING table_b_ids_to_delete\n WHERE table_b.id = table_b_ids_to_delete.id;\n\nDELETE FROM table_a USING table_a_ids_to_delete\n WHERE table_a.id = table_a_ids_to_delete.id;\n\nCOMMIT;\n\nThere're indices on table_a on the queried columns, table_b's primary\nkey is it's id, and table_a_table_b_xref has an index on (table_a_id,\ntable_b_id). There're FK defined on the xref table, hence why I'm\ndeleting from it first.\n\nDoes anyone have any ideas as to what I can do to make the deletes any\nfaster? I'm running out of ideas!\n\nThanks in advance,\n\n--\nRob Emery\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 May 2013 12:26:11 +0100",
"msg_from": "Rob Emery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deleting Rows From Large Tables"
},
{
"msg_contents": "Oh, sorry, overlooked that part.\nMaybe refreshing stats with VACUUM FULL ?\n\n\n2013/5/17 Robert Emery <[email protected]>\n\n> Hi Sékine,\n>\n> Unfortunately I'm not trying to empty the table completely, just\n> delete about 10-15% of the data in it.\n>\n> Thanks,\n>\n> On 17 May 2013 14:11, Sékine Coulibaly <[email protected]> wrote:\n> > Rob,\n> >\n> > Did you tried TRUNCATE ?\n> > http://www.postgresql.org/docs/8.4/static/sql-truncate.html\n> >\n> > This is is supposed to be quicker since it does scan the table.\n> >\n> > Regards\n> >\n> >\n> > 2013/5/17 Rob Emery <[email protected]>\n> >>\n> >> Hi All,\n> >>\n> >> We've got 3 quite large tables that due to an unexpected surge in\n> >> usage (!) have grown to about 10GB each, with 72, 32 and 31 million\n> >> rows in. I've been tasked with cleaning out about half of them, the\n> >> problem I've got is that even deleting the first 1,000,000 rows seems\n> >> to take an unreasonable amount of time. Unfortunately this is on quite\n> >> an old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n> >> 8.4; which serves other things like our logging systems.\n> >>\n> >> If I run a sustained (more than about 5 minutes) delete it'll have a\n> >> detrimental effect on the other services. I'm trying to batch up the\n> >> deletes into small chunks of approximately 1 month of data ; even this\n> >> seems to take too long, I originally reduced this down to a single\n> >> day's data and had the same problem. I can keep decreasing the size of\n> >> the window I'm deleting but I feel I must be doing something either\n> >> fundamentally wrong or over-complicating this enormously. I've\n> >> switched over to retrieving a list of IDs to delete, storing them in\n> >> temporary tables and deleting based on the primary keys on each of the\n> >> tables with something similar to this:\n> >>\n> >> BEGIN TRANSACTION;\n> >>\n> >> CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\n> >> CREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n> >>\n> >> INSERT INTO table_a_ids_to_delete\n> >> SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n> >> < '2007-01-01T00:00:00';\n> >>\n> >> INSERT INTO table_b_ids_to_delete\n> >> SELECT table_b_id FROM table_a_table_b_xref\n> >> INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\n> >> table_a_table_b.quote_id);\n> >>\n> >> DELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n> >> WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n> >>\n> >> DELETE FROM table_b USING table_b_ids_to_delete\n> >> WHERE table_b.id = table_b_ids_to_delete.id;\n> >>\n> >> DELETE FROM table_a USING table_a_ids_to_delete\n> >> WHERE table_a.id = table_a_ids_to_delete.id;\n> >>\n> >> COMMIT;\n> >>\n> >> There're indices on table_a on the queried columns, table_b's primary\n> >> key is it's id, and table_a_table_b_xref has an index on (table_a_id,\n> >> table_b_id). There're FK defined on the xref table, hence why I'm\n> >> deleting from it first.\n> >>\n> >> Does anyone have any ideas as to what I can do to make the deletes any\n> >> faster? 
I'm running out of ideas!\n> >>\n> >> Thanks in advance,\n> >>\n> >> --\n> >> Rob Emery\n> >>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> >\n>\n>\n>\n> --\n> Robert Emery\n> Database Administrator\n>\n> | T: 0800 021 0888 | www.codeweavers.net |\n> | Codeweavers Limited | Barn 4 | Dunston Business Village | Dunston | ST18\n> 9AB |\n> | Registered in England and Wales No. 04092394 | VAT registration no.\n> 974 9705 63 |\n>\n> CUSTOMERS' BLOG TWITTER FACEBOOK LINKED IN\n> DEVELOPERS' BLOG YOUTUBE\n>\n\nOh, sorry, overlooked that part.Maybe refreshing stats with VACUUM FULL ?\n2013/5/17 Robert Emery <[email protected]>\n\nHi Sékine,\n\nUnfortunately I'm not trying to empty the table completely, just\ndelete about 10-15% of the data in it.\n\nThanks,\n\nOn 17 May 2013 14:11, Sékine Coulibaly <[email protected]> wrote:\n> Rob,\n>\n> Did you tried TRUNCATE ?\n> http://www.postgresql.org/docs/8.4/static/sql-truncate.html\n>\n> This is is supposed to be quicker since it does scan the table.\n>\n> Regards\n>\n>\n> 2013/5/17 Rob Emery <[email protected]>\n>>\n>> Hi All,\n>>\n>> We've got 3 quite large tables that due to an unexpected surge in\n>> usage (!) have grown to about 10GB each, with 72, 32 and 31 million\n>> rows in. I've been tasked with cleaning out about half of them, the\n>> problem I've got is that even deleting the first 1,000,000 rows seems\n>> to take an unreasonable amount of time. Unfortunately this is on quite\n>> an old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n>> 8.4; which serves other things like our logging systems.\n>>\n>> If I run a sustained (more than about 5 minutes) delete it'll have a\n>> detrimental effect on the other services. I'm trying to batch up the\n>> deletes into small chunks of approximately 1 month of data ; even this\n>> seems to take too long, I originally reduced this down to a single\n>> day's data and had the same problem. I can keep decreasing the size of\n>> the window I'm deleting but I feel I must be doing something either\n>> fundamentally wrong or over-complicating this enormously. I've\n>> switched over to retrieving a list of IDs to delete, storing them in\n>> temporary tables and deleting based on the primary keys on each of the\n>> tables with something similar to this:\n>>\n>> BEGIN TRANSACTION;\n>>\n>> CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\n>> CREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n>>\n>> INSERT INTO table_a_ids_to_delete\n>> SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n>> < '2007-01-01T00:00:00';\n>>\n>> INSERT INTO table_b_ids_to_delete\n>> SELECT table_b_id FROM table_a_table_b_xref\n>> INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\n>> table_a_table_b.quote_id);\n>>\n>> DELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n>> WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n>>\n>> DELETE FROM table_b USING table_b_ids_to_delete\n>> WHERE table_b.id = table_b_ids_to_delete.id;\n>>\n>> DELETE FROM table_a USING table_a_ids_to_delete\n>> WHERE table_a.id = table_a_ids_to_delete.id;\n>>\n>> COMMIT;\n>>\n>> There're indices on table_a on the queried columns, table_b's primary\n>> key is it's id, and table_a_table_b_xref has an index on (table_a_id,\n>> table_b_id). 
There're FK defined on the xref table, hence why I'm\n>> deleting from it first.\n>>\n>> Does anyone have any ideas as to what I can do to make the deletes any\n>> faster? I'm running out of ideas!\n>>\n>> Thanks in advance,\n>>\n>> --\n>> Rob Emery\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n--\nRobert Emery\nDatabase Administrator\n\n| T: 0800 021 0888 | www.codeweavers.net |\n| Codeweavers Limited | Barn 4 | Dunston Business Village | Dunston | ST18 9AB |\n| Registered in England and Wales No. 04092394 | VAT registration no.\n974 9705 63 |\n\n CUSTOMERS' BLOG TWITTER FACEBOOK LINKED IN\nDEVELOPERS' BLOG YOUTUBE",
"msg_date": "Fri, 17 May 2013 16:26:00 +0200",
"msg_from": "=?UTF-8?Q?S=C3=A9kine_Coulibaly?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Rows From Large Tables"
},
{
"msg_contents": "Analyze your temp tables after filling and before using!\n17 трав. 2013 17:27, \"Sékine Coulibaly\" <[email protected]> напис.\n\n> Oh, sorry, overlooked that part.\n> Maybe refreshing stats with VACUUM FULL ?\n>\n>\n> 2013/5/17 Robert Emery <[email protected]>\n>\n>> Hi Sékine,\n>>\n>> Unfortunately I'm not trying to empty the table completely, just\n>> delete about 10-15% of the data in it.\n>>\n>> Thanks,\n>>\n>> On 17 May 2013 14:11, Sékine Coulibaly <[email protected]> wrote:\n>> > Rob,\n>> >\n>> > Did you tried TRUNCATE ?\n>> > http://www.postgresql.org/docs/8.4/static/sql-truncate.html\n>> >\n>> > This is is supposed to be quicker since it does scan the table.\n>> >\n>> > Regards\n>> >\n>> >\n>> > 2013/5/17 Rob Emery <[email protected]>\n>> >>\n>> >> Hi All,\n>> >>\n>> >> We've got 3 quite large tables that due to an unexpected surge in\n>> >> usage (!) have grown to about 10GB each, with 72, 32 and 31 million\n>> >> rows in. I've been tasked with cleaning out about half of them, the\n>> >> problem I've got is that even deleting the first 1,000,000 rows seems\n>> >> to take an unreasonable amount of time. Unfortunately this is on quite\n>> >> an old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n>> >> 8.4; which serves other things like our logging systems.\n>> >>\n>> >> If I run a sustained (more than about 5 minutes) delete it'll have a\n>> >> detrimental effect on the other services. I'm trying to batch up the\n>> >> deletes into small chunks of approximately 1 month of data ; even this\n>> >> seems to take too long, I originally reduced this down to a single\n>> >> day's data and had the same problem. I can keep decreasing the size of\n>> >> the window I'm deleting but I feel I must be doing something either\n>> >> fundamentally wrong or over-complicating this enormously. I've\n>> >> switched over to retrieving a list of IDs to delete, storing them in\n>> >> temporary tables and deleting based on the primary keys on each of the\n>> >> tables with something similar to this:\n>> >>\n>> >> BEGIN TRANSACTION;\n>> >>\n>> >> CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\n>> >> CREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n>> >>\n>> >> INSERT INTO table_a_ids_to_delete\n>> >> SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n>> >> < '2007-01-01T00:00:00';\n>> >>\n>> >> INSERT INTO table_b_ids_to_delete\n>> >> SELECT table_b_id FROM table_a_table_b_xref\n>> >> INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\n>> >> table_a_table_b.quote_id);\n>> >>\n>> >> DELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n>> >> WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n>> >>\n>> >> DELETE FROM table_b USING table_b_ids_to_delete\n>> >> WHERE table_b.id = table_b_ids_to_delete.id;\n>> >>\n>> >> DELETE FROM table_a USING table_a_ids_to_delete\n>> >> WHERE table_a.id = table_a_ids_to_delete.id;\n>> >>\n>> >> COMMIT;\n>> >>\n>> >> There're indices on table_a on the queried columns, table_b's primary\n>> >> key is it's id, and table_a_table_b_xref has an index on (table_a_id,\n>> >> table_b_id). There're FK defined on the xref table, hence why I'm\n>> >> deleting from it first.\n>> >>\n>> >> Does anyone have any ideas as to what I can do to make the deletes any\n>> >> faster? 
I'm running out of ideas!\n>> >>\n>> >> Thanks in advance,\n>> >>\n>> >> --\n>> >> Rob Emery\n>> >>\n>> >>\n>> >> --\n>> >> Sent via pgsql-performance mailing list (\n>> [email protected])\n>> >> To make changes to your subscription:\n>> >> http://www.postgresql.org/mailpref/pgsql-performance\n>> >\n>> >\n>>\n>>\n>>\n>> --\n>> Robert Emery\n>> Database Administrator\n>>\n>> | T: 0800 021 0888 | www.codeweavers.net |\n>> | Codeweavers Limited | Barn 4 | Dunston Business Village | Dunston |\n>> ST18 9AB |\n>> | Registered in England and Wales No. 04092394 | VAT registration no.\n>> 974 9705 63 |\n>>\n>> CUSTOMERS' BLOG TWITTER FACEBOOK LINKED IN\n>> DEVELOPERS' BLOG YOUTUBE\n>>\n>\n>\n\nAnalyze your temp tables after filling and before using!\n17 трав. 2013 17:27, \"Sékine Coulibaly\" <[email protected]> напис.\nOh, sorry, overlooked that part.Maybe refreshing stats with VACUUM FULL ?\n2013/5/17 Robert Emery <[email protected]>\n\n\nHi Sékine,\n\nUnfortunately I'm not trying to empty the table completely, just\ndelete about 10-15% of the data in it.\n\nThanks,\n\nOn 17 May 2013 14:11, Sékine Coulibaly <[email protected]> wrote:\n> Rob,\n>\n> Did you tried TRUNCATE ?\n> http://www.postgresql.org/docs/8.4/static/sql-truncate.html\n>\n> This is is supposed to be quicker since it does scan the table.\n>\n> Regards\n>\n>\n> 2013/5/17 Rob Emery <[email protected]>\n>>\n>> Hi All,\n>>\n>> We've got 3 quite large tables that due to an unexpected surge in\n>> usage (!) have grown to about 10GB each, with 72, 32 and 31 million\n>> rows in. I've been tasked with cleaning out about half of them, the\n>> problem I've got is that even deleting the first 1,000,000 rows seems\n>> to take an unreasonable amount of time. Unfortunately this is on quite\n>> an old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n>> 8.4; which serves other things like our logging systems.\n>>\n>> If I run a sustained (more than about 5 minutes) delete it'll have a\n>> detrimental effect on the other services. I'm trying to batch up the\n>> deletes into small chunks of approximately 1 month of data ; even this\n>> seems to take too long, I originally reduced this down to a single\n>> day's data and had the same problem. I can keep decreasing the size of\n>> the window I'm deleting but I feel I must be doing something either\n>> fundamentally wrong or over-complicating this enormously. 
I've\n>> switched over to retrieving a list of IDs to delete, storing them in\n>> temporary tables and deleting based on the primary keys on each of the\n>> tables with something similar to this:\n>>\n>> BEGIN TRANSACTION;\n>>\n>> CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\n>> CREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n>>\n>> INSERT INTO table_a_ids_to_delete\n>> SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n>> < '2007-01-01T00:00:00';\n>>\n>> INSERT INTO table_b_ids_to_delete\n>> SELECT table_b_id FROM table_a_table_b_xref\n>> INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\n>> table_a_table_b.quote_id);\n>>\n>> DELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n>> WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n>>\n>> DELETE FROM table_b USING table_b_ids_to_delete\n>> WHERE table_b.id = table_b_ids_to_delete.id;\n>>\n>> DELETE FROM table_a USING table_a_ids_to_delete\n>> WHERE table_a.id = table_a_ids_to_delete.id;\n>>\n>> COMMIT;\n>>\n>> There're indices on table_a on the queried columns, table_b's primary\n>> key is it's id, and table_a_table_b_xref has an index on (table_a_id,\n>> table_b_id). There're FK defined on the xref table, hence why I'm\n>> deleting from it first.\n>>\n>> Does anyone have any ideas as to what I can do to make the deletes any\n>> faster? I'm running out of ideas!\n>>\n>> Thanks in advance,\n>>\n>> --\n>> Rob Emery\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n--\nRobert Emery\nDatabase Administrator\n\n| T: 0800 021 0888 | www.codeweavers.net |\n| Codeweavers Limited | Barn 4 | Dunston Business Village | Dunston | ST18 9AB |\n| Registered in England and Wales No. 04092394 | VAT registration no.\n974 9705 63 |\n\n CUSTOMERS' BLOG TWITTER FACEBOOK LINKED IN\nDEVELOPERS' BLOG YOUTUBE",
"msg_date": "Sat, 18 May 2013 10:15:26 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Rows From Large Tables"
},
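In Rob's transaction that means two extra statements, using the table names from his original post: one right after the first INSERT (the second INSERT already joins against that temp table), and one after the second INSERT, before any of the DELETEs:

ANALYZE table_a_ids_to_delete;   -- after filling it, before the second INSERT uses it
ANALYZE table_b_ids_to_delete;   -- after filling it, before the DELETEs

Temporary tables are never touched by autovacuum, so without an explicit ANALYZE the planner has no row counts or statistics for them when it plans the DELETE ... USING joins that follow.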
{
"msg_contents": "On Fri, May 17, 2013 at 4:26 AM, Rob Emery <[email protected]> wrote:\n\n> Hi All,\n>\n> We've got 3 quite large tables that due to an unexpected surge in\n> usage (!) have grown to about 10GB each, with 72, 32 and 31 million\n> rows in. I've been tasked with cleaning out about half of them, the\n> problem I've got is that even deleting the first 1,000,000 rows seems\n> to take an unreasonable amount of time. Unfortunately this is on quite\n> an old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n> 8.4; which serves other things like our logging systems.\n>\n\nHow many Cores do you have? I think the Dell 2950 could have anywhere from\n1 to 8.\n\nPick a smaller number of rows to delete, and run it with \"explain analyze\"\nto see what it is going on. I would say to use \"explain (analyze,\nbuffers)\" with track_io_timing on, but those don't exist back in 8.4.\n\nPerhaps this would be a good excuse to upgrade!\n\nIf I run a sustained (more than about 5 minutes) delete it'll have a\n> detrimental effect on the other services.\n\n\n\nDo you know why? Can you identify the affected queries from those other\nservices and run explain analyze on them?\n\n\n\n\n> I'm trying to batch up the\n> deletes into small chunks of approximately 1 month of data ; even this\n> seems to take too long, I originally reduced this down to a single\n> day's data and had the same problem. I can keep decreasing the size of\n> the window I'm deleting but I feel I must be doing something either\n> fundamentally wrong or over-complicating this enormously.\n\n\n\nIf your server is sized only to do its typical workload, then any\nsubstantial extra work load is going to cause problems. Trying to delete 1\nday's work in a few seconds stills seems like it is very likely excessive.\n Why not jump all the way down to 5 minutes, or limit it to a certain\nnumber of rows from table a, say 100 per unit? If you start large and work\nyour way down, you will often be working in the dark because you won't have\nthe patience to let the large ones run to completion, slowing down the\nwhole system. If you start at the bottom and work up, you will always know\nwhere you are as the previous one ran to completion and you have the\ntimings from it.\n\nHow fast do you need to clean this up? If it took months to get into the\nsituation, can't you take a few weeks to get out of it?\n\n\n\n> I've\n> switched over to retrieving a list of IDs to delete, storing them in\n> temporary tables and deleting based on the primary keys on each of the\n> tables with something similar to this:\n>\n> BEGIN TRANSACTION;\n>\n> CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\n> CREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n>\n> INSERT INTO table_a_ids_to_delete\n> SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n> < '2007-01-01T00:00:00';\n>\n\nI'd probably add a \"LIMIT 100\" in there. Then you can set created_at to\nthe final time point desired, rather than trying to increment it each time\nand deciding how much to increment.\n\n\n>\n> INSERT INTO table_b_ids_to_delete\n> SELECT table_b_id FROM table_a_table_b_xref\n> INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\n> table_a_table_b.quote_id);\n>\n\nDo these to queries slow down other operations? 
Or is it just the deletes?\n\n\n>\n> DELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n> WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n>\n> DELETE FROM table_b USING table_b_ids_to_delete\n> WHERE table_b.id = table_b_ids_to_delete.id;\n>\n> DELETE FROM table_a USING table_a_ids_to_delete\n> WHERE table_a.id = table_a_ids_to_delete.id;\n>\n> COMMIT;\n>\n\nHow much time to do the 3 deletes take relative to each other and to the\ninserts?\n\nCheers,\n\nJef\n\nOn Fri, May 17, 2013 at 4:26 AM, Rob Emery <[email protected]> wrote:\nHi All,\n\nWe've got 3 quite large tables that due to an unexpected surge in\nusage (!) have grown to about 10GB each, with 72, 32 and 31 million\nrows in. I've been tasked with cleaning out about half of them, the\nproblem I've got is that even deleting the first 1,000,000 rows seems\nto take an unreasonable amount of time. Unfortunately this is on quite\nan old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n8.4; which serves other things like our logging systems.How many Cores do you have? I think the Dell 2950 could have anywhere from 1 to 8.\nPick a smaller number of rows to delete, and run it with \"explain analyze\" to see what it is going on. I would say to use \"explain (analyze, buffers)\" with track_io_timing on, but those don't exist back in 8.4.\nPerhaps this would be a good excuse to upgrade!If I run a sustained (more than about 5 minutes) delete it'll have a\n\ndetrimental effect on the other services.Do you know why? Can you identify the affected queries from those other services and run explain analyze on them?\n I'm trying to batch up the\ndeletes into small chunks of approximately 1 month of data ; even this\nseems to take too long, I originally reduced this down to a single\nday's data and had the same problem. I can keep decreasing the size of\nthe window I'm deleting but I feel I must be doing something either\nfundamentally wrong or over-complicating this enormously. If your server is sized only to do its typical workload, then any substantial extra work load is going to cause problems. Trying to delete 1 day's work in a few seconds stills seems like it is very likely excessive. Why not jump all the way down to 5 minutes, or limit it to a certain number of rows from table a, say 100 per unit? If you start large and work your way down, you will often be working in the dark because you won't have the patience to let the large ones run to completion, slowing down the whole system. If you start at the bottom and work up, you will always know where you are as the previous one ran to completion and you have the timings from it.\nHow fast do you need to clean this up? If it took months to get into the situation, can't you take a few weeks to get out of it? \nI've\nswitched over to retrieving a list of IDs to delete, storing them in\ntemporary tables and deleting based on the primary keys on each of the\ntables with something similar to this:\n\nBEGIN TRANSACTION;\n\nCREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\nCREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n\nINSERT INTO table_a_ids_to_delete\n SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n< '2007-01-01T00:00:00';I'd probably add a \"LIMIT 100\" in there. 
Then you can set created_at to the final time point desired, rather than trying to increment it each time and deciding how much to increment.\n \n\nINSERT INTO table_b_ids_to_delete\n SELECT table_b_id FROM table_a_table_b_xref\n INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\ntable_a_table_b.quote_id);Do these to queries slow down other operations? Or is it just the deletes? \n\nDELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n\nDELETE FROM table_b USING table_b_ids_to_delete\n WHERE table_b.id = table_b_ids_to_delete.id;\n\nDELETE FROM table_a USING table_a_ids_to_delete\n WHERE table_a.id = table_a_ids_to_delete.id;\n\nCOMMIT;How much time to do the 3 deletes take relative to each other and to the inserts?Cheers,Jef",
"msg_date": "Sun, 19 May 2013 13:36:50 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Rows From Large Tables"
},
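A sketch of one batch built around that LIMIT, reusing Rob's table and column names; the batch size and the cutoff are placeholders to tune, and only table_a is shown here (in the real job the xref and table_b deletes stay in front of it, as in Rob's version):

BEGIN;

CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);

INSERT INTO table_a_ids_to_delete
    SELECT id FROM table_a
    WHERE purchased = '-infinity'
      AND created_at < '2007-01-01T00:00:00'  -- the final cutoff, no window arithmetic needed
    LIMIT 100;                                -- placeholder batch size

ANALYZE table_a_ids_to_delete;

DELETE FROM table_a USING table_a_ids_to_delete
    WHERE table_a.id = table_a_ids_to_delete.id;

DROP TABLE table_a_ids_to_delete;

COMMIT;

Rerun the block, with a pause in between as suggested later in the thread, until the INSERT reports 0 rows; then raise the LIMIT if the observed timings allow it.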
{
"msg_contents": "Rob,\n\nI'm going to make half of the list cringe at this suggestion though I have\nused it successfully.\n\nIf you can guarantee the table will not be vacuumed during this cleanup or\nrows you want deleted updated, I would suggest using the ctid column to\nfacilitate the delete. Using the simple transaction below, I have\nwitnessed a DELETE move much more quickly than one using a PK or any other\ncolumn with an index.\n\nBEGIN;\nSELECT ctid INTO TEMP TABLE ctids_to_be deleted FROM my_big_table WHERE *delete\ncriteria*;\nDELETE FROM my_big_table bt USING ctids_to_be_deleted dels WHERE bt.ctid =\ndels.ctid;\nCOMMIT;\n\nHTH.\n-Greg\n\n\nOn Fri, May 17, 2013 at 5:26 AM, Rob Emery <[email protected]> wrote:\n\n> Hi All,\n>\n> We've got 3 quite large tables that due to an unexpected surge in\n> usage (!) have grown to about 10GB each, with 72, 32 and 31 million\n> rows in. I've been tasked with cleaning out about half of them, the\n> problem I've got is that even deleting the first 1,000,000 rows seems\n> to take an unreasonable amount of time. Unfortunately this is on quite\n> an old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n> 8.4; which serves other things like our logging systems.\n>\n> If I run a sustained (more than about 5 minutes) delete it'll have a\n> detrimental effect on the other services. I'm trying to batch up the\n> deletes into small chunks of approximately 1 month of data ; even this\n> seems to take too long, I originally reduced this down to a single\n> day's data and had the same problem. I can keep decreasing the size of\n> the window I'm deleting but I feel I must be doing something either\n> fundamentally wrong or over-complicating this enormously. I've\n> switched over to retrieving a list of IDs to delete, storing them in\n> temporary tables and deleting based on the primary keys on each of the\n> tables with something similar to this:\n>\n> BEGIN TRANSACTION;\n>\n> CREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\n> CREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n>\n> INSERT INTO table_a_ids_to_delete\n> SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n> < '2007-01-01T00:00:00';\n>\n> INSERT INTO table_b_ids_to_delete\n> SELECT table_b_id FROM table_a_table_b_xref\n> INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\n> table_a_table_b.quote_id);\n>\n> DELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n> WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n>\n> DELETE FROM table_b USING table_b_ids_to_delete\n> WHERE table_b.id = table_b_ids_to_delete.id;\n>\n> DELETE FROM table_a USING table_a_ids_to_delete\n> WHERE table_a.id = table_a_ids_to_delete.id;\n>\n> COMMIT;\n>\n> There're indices on table_a on the queried columns, table_b's primary\n> key is it's id, and table_a_table_b_xref has an index on (table_a_id,\n> table_b_id). There're FK defined on the xref table, hence why I'm\n> deleting from it first.\n>\n> Does anyone have any ideas as to what I can do to make the deletes any\n> faster? 
I'm running out of ideas!\n>\n> Thanks in advance,\n>\n> --\n> Rob Emery\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nRob,I'm going to make half of the list cringe at this suggestion though I have used it successfully.If you can guarantee the table will not be vacuumed during this cleanup or rows you want deleted updated, I would suggest using the ctid column to facilitate the delete. Using the simple transaction below, I have witnessed a DELETE move much more quickly than one using a PK or any other column with an index.\nBEGIN;SELECT ctid INTO TEMP TABLE ctids_to_be deleted FROM my_big_table WHERE delete criteria;DELETE FROM my_big_table bt USING ctids_to_be_deleted dels WHERE bt.ctid = dels.ctid;\nCOMMIT;HTH.-GregOn Fri, May 17, 2013 at 5:26 AM, Rob Emery <[email protected]> wrote:\nHi All,\n\nWe've got 3 quite large tables that due to an unexpected surge in\nusage (!) have grown to about 10GB each, with 72, 32 and 31 million\nrows in. I've been tasked with cleaning out about half of them, the\nproblem I've got is that even deleting the first 1,000,000 rows seems\nto take an unreasonable amount of time. Unfortunately this is on quite\nan old server (Dell 2950 with a RAID-10 over 6 disks) running Postgres\n8.4; which serves other things like our logging systems.\n\nIf I run a sustained (more than about 5 minutes) delete it'll have a\ndetrimental effect on the other services. I'm trying to batch up the\ndeletes into small chunks of approximately 1 month of data ; even this\nseems to take too long, I originally reduced this down to a single\nday's data and had the same problem. I can keep decreasing the size of\nthe window I'm deleting but I feel I must be doing something either\nfundamentally wrong or over-complicating this enormously. I've\nswitched over to retrieving a list of IDs to delete, storing them in\ntemporary tables and deleting based on the primary keys on each of the\ntables with something similar to this:\n\nBEGIN TRANSACTION;\n\nCREATE TEMPORARY TABLE table_a_ids_to_delete (id INT);\nCREATE TEMPORARY TABLE table_b_ids_to_delete (id INT);\n\nINSERT INTO table_a_ids_to_delete\n SELECT id FROM table_a WHERE purchased ='-infinity' AND created_at\n< '2007-01-01T00:00:00';\n\nINSERT INTO table_b_ids_to_delete\n SELECT table_b_id FROM table_a_table_b_xref\n INNER JOIN table_a_ids_to_delete ON (table_a_ids_to_delete.id =\ntable_a_table_b.quote_id);\n\nDELETE FROM table_a_table_b_xref USING table_a_ids_to_delete\n WHERE table_a_table_b_xref.table_a_id = table_a_ids_to_delete.id;\n\nDELETE FROM table_b USING table_b_ids_to_delete\n WHERE table_b.id = table_b_ids_to_delete.id;\n\nDELETE FROM table_a USING table_a_ids_to_delete\n WHERE table_a.id = table_a_ids_to_delete.id;\n\nCOMMIT;\n\nThere're indices on table_a on the queried columns, table_b's primary\nkey is it's id, and table_a_table_b_xref has an index on (table_a_id,\ntable_b_id). There're FK defined on the xref table, hence why I'm\ndeleting from it first.\n\nDoes anyone have any ideas as to what I can do to make the deletes any\nfaster? I'm running out of ideas!\n\nThanks in advance,\n\n--\nRob Emery\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sun, 19 May 2013 15:14:07 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Rows From Large Tables"
},
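A cleaned-up copy of that transaction for reference: the temp table name above has a space where an underscore was meant, and the selected ctid needs an alias because a user column may not be named ctid. my_big_table and the WHERE clause are placeholders, and Greg's caveat (no VACUUM and no updates of the affected rows in the meantime) is what keeps the saved ctids valid:

BEGIN;

-- ctid identifies the current physical row version; it stays valid only while
-- the row is neither updated nor reclaimed by VACUUM.
SELECT ctid AS row_tid
INTO TEMP TABLE ctids_to_be_deleted
FROM my_big_table
WHERE created_at < '2007-01-01';   -- placeholder delete criteria

DELETE FROM my_big_table bt
USING ctids_to_be_deleted dels
WHERE bt.ctid = dels.row_tid;

COMMIT;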
{
"msg_contents": "On 5/17/13 7:26 AM, Rob Emery wrote:\n> I can keep decreasing the size of\n> the window I'm deleting but I feel I must be doing something either\n> fundamentally wrong or over-complicating this enormously.\n\nI've had jobs like this where we ended up making the batch size cover \nonly 4 hours at a time. Once you've looked at the EXPLAIN plans for the \nrow selection criteria and they're reasonable, dropping the period \nthat's deleted per pass is really the only thing you can do. Do some \nDELETEs, then pause to let the disk cache clear; repeat.\n\nThe other useful thing to do here is get very aggressive about settings \nfor shared_buffers, checkpoint_segments, and checkpoint_timeout. I'll \nnormally push for settings like 8GB/256/15 minutes when doing this sort \nof thing. The usual situation with a checkpoint every 5 minutes may not \nbe feasible when you've got this type of work going on in the background.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 May 2013 22:55:28 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deleting Rows From Large Tables"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm experiencing a very slow CTE query (see below).\n\nWhen I split the three aggregationns into separate views, its' decent\nfast. So I think it's due to the planner.\n\nAny ideas how to reformulate the query?\n\nThese are the tables and views involved:\n* Table promotion with start/end date and a region, and table\npromo2mission (each 1 to dozen tupels).\n* View all_errors (more than 20'000 tubles, based on table errors\nwithout tupels from table fix)\n* Table error_type (7 tupels)\n\nHere's the EXPLAIN ANALYZE log: http://explain.depesz.com/s/tbF\n\nYours, Stefan\n\n\nCTE Query:\n\nWITH aggregation1\n AS (SELECT p.id AS promo_id,\n p.startdate,\n p.enddate,\n p.geom AS promogeom,\n pm.error_type,\n pm.mission_extra_coins AS extra_coins\n FROM (promotion p\n join promo2mission pm\n ON (( p.id = pm.promo_id )))\n WHERE ( ( p.startdate <= Now() )\n AND ( p.enddate >= Now() ) )),\n aggregation2\n AS (SELECT e.error_id AS missionid,\n e.schemaid,\n t.TYPE,\n e.osm_id,\n e.osm_type,\n t.description AS title,\n t.view_type,\n t.answer_placeholder,\n t.bug_question AS description,\n t.fix_koin_count,\n t.vote_koin_count,\n e.latitude,\n e.longitude,\n e.geom AS missiongeom,\n e.txt1,\n e.txt2,\n e.txt3,\n e.txt4,\n e.txt5\n FROM all_errors e,\n error_type t\n WHERE ( ( e.error_type_id = t.error_type_id )\n AND ( NOT ( EXISTS (SELECT 1\n FROM fix f\n WHERE ( ( ( ( f.error_id = e.error_id )\n AND ( f.osm_id =\ne.osm_id ) )\n AND ( ( f.schemaid ) :: text =\n ( e.schemaid ) :: text ) )\n AND ( ( f.complete\n AND f.valid )\n OR ( NOT\n f.complete ) ) )) ) ) )),\n aggregation3\n AS (SELECT ag2.missionid AS missionidtemp,\n ag1.promo_id,\n ag1.extra_coins\n FROM (aggregation2 ag2\n join aggregation1 ag1\n ON (( ( ag2.TYPE ) :: text = ( ag1.error_type ) :: text )))\n WHERE public._st_contains(ag1.promogeom, ag2.missiongeom))\nSELECT ag2.missionid AS id,\n ag2.schemaid,\n ag2.TYPE,\n ag2.osm_id,\n ag2.osm_type,\n ag2.title,\n ag2.description,\n ag2.latitude,\n ag2.longitude,\n ag2.view_type,\n ag2.answer_placeholder,\n ag2.fix_koin_count,\n ag2.missiongeom,\n ag2.txt1,\n ag2.txt2,\n ag2.txt3,\n ag2.txt4,\n ag2.txt5,\n ag3.promo_id,\n ag3.extra_coins\nFROM (aggregation2 ag2\n left join aggregation3 ag3\n ON (( ag2.missionid = ag3.missionidtemp )));\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 May 2013 21:50:15 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Hi,\n\nI'm experiencing a very slow CTE query (see below).\n\nWhen I split the three aggregations into three separate views, its' decent\nfast. So I think it's due to the planner.\n\nAny ideas like reformulating the query?\n\nThese are the tables and views involved:\n* Table promotion with start/end date and a region, and table\npromo2mission (each 1 to dozen tupels).\n* View all_errors (more than 20'000 tubles, based on table errors\nwithout tupels from table fix)\n* Table error_type (7 tupels)\n\nHere's the EXPLAIN ANALYZE log: http://explain.depesz.com/s/tbF\n\nYours, Stefan\n\n\nCTE Query:\n\nWITH aggregation1\n AS (SELECT p.id AS promo_id,\n p.startdate,\n p.enddate,\n p.geom AS promogeom,\n pm.error_type,\n pm.mission_extra_coins AS extra_coins\n FROM (promotion p\n join promo2mission pm\n ON (( p.id = pm.promo_id )))\n WHERE ( ( p.startdate <= Now() )\n AND ( p.enddate >= Now() ) )),\n aggregation2\n AS (SELECT e.error_id AS missionid,\n e.schemaid,\n t.TYPE,\n e.osm_id,\n e.osm_type,\n t.description AS title,\n t.view_type,\n t.answer_placeholder,\n t.bug_question AS description,\n t.fix_koin_count,\n t.vote_koin_count,\n e.latitude,\n e.longitude,\n e.geom AS missiongeom,\n e.txt1,\n e.txt2,\n e.txt3,\n e.txt4,\n e.txt5\n FROM all_errors e,\n error_type t\n WHERE ( ( e.error_type_id = t.error_type_id )\n AND ( NOT ( EXISTS (SELECT 1\n FROM fix f\n WHERE ( ( ( ( f.error_id = e.error_id )\n AND ( f.osm_id =\ne.osm_id ) )\n AND ( ( f.schemaid ) :: text =\n ( e.schemaid ) :: text ) )\n AND ( ( f.complete\n AND f.valid )\n OR ( NOT\n f.complete ) ) )) ) ) )),\n aggregation3\n AS (SELECT ag2.missionid AS missionidtemp,\n ag1.promo_id,\n ag1.extra_coins\n FROM (aggregation2 ag2\n join aggregation1 ag1\n ON (( ( ag2.TYPE ) :: text = ( ag1.error_type ) :: text )))\n WHERE public._st_contains(ag1.promogeom, ag2.missiongeom))\nSELECT ag2.missionid AS id,\n ag2.schemaid,\n ag2.TYPE,\n ag2.osm_id,\n ag2.osm_type,\n ag2.title,\n ag2.description,\n ag2.latitude,\n ag2.longitude,\n ag2.view_type,\n ag2.answer_placeholder,\n ag2.fix_koin_count,\n ag2.missiongeom,\n ag2.txt1,\n ag2.txt2,\n ag2.txt3,\n ag2.txt4,\n ag2.txt5,\n ag3.promo_id,\n ag3.extra_coins\nFROM (aggregation2 ag2\n left join aggregation3 ag3\n ON (( ag2.missionid = ag3.missionidtemp )));\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 May 2013 21:54:45 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow CTE Query"
},
{
"msg_contents": "On Sat, May 18, 2013 at 12:54 PM, Stefan Keller <[email protected]> wrote:\n> I'm experiencing a very slow CTE query (see below).\n>\n> When I split the three aggregations into three separate views, its' decent\n> fast. So I think it's due to the planner.\n>\n> Any ideas like reformulating the query?\n\nRewrite it without CTE. Planner will have more freedom in this case.\nAlso I would try to use LEFT JOIN ... IS NULL technique instead of NOT\nEXISTS.\n\n>\n> These are the tables and views involved:\n> * Table promotion with start/end date and a region, and table\n> promo2mission (each 1 to dozen tupels).\n> * View all_errors (more than 20'000 tubles, based on table errors\n> without tupels from table fix)\n> * Table error_type (7 tupels)\n>\n> Here's the EXPLAIN ANALYZE log: http://explain.depesz.com/s/tbF\n>\n> Yours, Stefan\n>\n>\n> CTE Query:\n>\n> WITH aggregation1\n> AS (SELECT p.id AS promo_id,\n> p.startdate,\n> p.enddate,\n> p.geom AS promogeom,\n> pm.error_type,\n> pm.mission_extra_coins AS extra_coins\n> FROM (promotion p\n> join promo2mission pm\n> ON (( p.id = pm.promo_id )))\n> WHERE ( ( p.startdate <= Now() )\n> AND ( p.enddate >= Now() ) )),\n> aggregation2\n> AS (SELECT e.error_id AS missionid,\n> e.schemaid,\n> t.TYPE,\n> e.osm_id,\n> e.osm_type,\n> t.description AS title,\n> t.view_type,\n> t.answer_placeholder,\n> t.bug_question AS description,\n> t.fix_koin_count,\n> t.vote_koin_count,\n> e.latitude,\n> e.longitude,\n> e.geom AS missiongeom,\n> e.txt1,\n> e.txt2,\n> e.txt3,\n> e.txt4,\n> e.txt5\n> FROM all_errors e,\n> error_type t\n> WHERE ( ( e.error_type_id = t.error_type_id )\n> AND ( NOT ( EXISTS (SELECT 1\n> FROM fix f\n> WHERE ( ( ( ( f.error_id = e.error_id )\n> AND ( f.osm_id =\n> e.osm_id ) )\n> AND ( ( f.schemaid ) :: text =\n> ( e.schemaid ) :: text ) )\n> AND ( ( f.complete\n> AND f.valid )\n> OR ( NOT\n> f.complete ) ) )) ) ) )),\n> aggregation3\n> AS (SELECT ag2.missionid AS missionidtemp,\n> ag1.promo_id,\n> ag1.extra_coins\n> FROM (aggregation2 ag2\n> join aggregation1 ag1\n> ON (( ( ag2.TYPE ) :: text = ( ag1.error_type ) :: text )))\n> WHERE public._st_contains(ag1.promogeom, ag2.missiongeom))\n> SELECT ag2.missionid AS id,\n> ag2.schemaid,\n> ag2.TYPE,\n> ag2.osm_id,\n> ag2.osm_type,\n> ag2.title,\n> ag2.description,\n> ag2.latitude,\n> ag2.longitude,\n> ag2.view_type,\n> ag2.answer_placeholder,\n> ag2.fix_koin_count,\n> ag2.missiongeom,\n> ag2.txt1,\n> ag2.txt2,\n> ag2.txt3,\n> ag2.txt4,\n> ag2.txt5,\n> ag3.promo_id,\n> ag3.extra_coins\n> FROM (aggregation2 ag2\n> left join aggregation3 ag3\n> ON (( ag2.missionid = ag3.missionidtemp )));\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n--\nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 May 2013 14:44:34 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow CTE Query"
}
] |
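A hedged sketch of the shape Sergey is suggesting, using the names from Stefan's aggregation2 and only a few of its output columns. WITH queries are an optimization fence in PostgreSQL releases before 12, so inlining the aggregations as plain subqueries or views and writing the NOT EXISTS as a LEFT JOIN anti-join gives the planner more freedom:

SELECT e.error_id AS missionid,
       e.schemaid,
       t.type,
       e.geom AS missiongeom
FROM all_errors e
JOIN error_type t ON t.error_type_id = e.error_type_id
LEFT JOIN fix f
       ON  f.error_id = e.error_id
       AND f.osm_id   = e.osm_id
       AND f.schemaid = e.schemaid
       AND ((f.complete AND f.valid) OR NOT f.complete)
WHERE f.error_id IS NULL;

The IS NULL test assumes fix.error_id is never NULL in real rows; recent planners already turn NOT EXISTS into an anti-join, so the larger win here is usually removing the CTE fence rather than the anti-join rewrite itself.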
[
{
"msg_contents": "Hello,\nAfter executing make install for pg_statsinfo when i start the server i see\nerror\n\nERROR: could not connect to repository\nWARNING: writer discards 1 items\nLOG: pg_statsinfo launcher shutting down\nDEBUG: shmem_exit(0): 0 callbacks to make\nDEBUG: proc_exit(0): 0 callbacks to make\nDEBUG: exit(0)\nDEBUG: shmem_exit(-1): 0 callbacks to make\nDEBUG: proc_exit(-1): 0 callbacks to make\n\n\npostgres.conf changes made are:\n\nshared_preload_libraries = 'pg_statsinfo,pg_stat_statements'\ntrack_counts = on\ntrack_activities = on\nlog_min_messages = debug5\npg_statsinfo.textlog_min_messages.\nlog_timezone = 'utc'\nlog_destination = 'csvlog'\nlogging_collector = on\n\nIn pg_hba.conf changes made are\nlocal all postgres ident\n\n\nAny pointers would be appreciated\n\nregards,\n\nHello,After executing make install for pg_statsinfo when i start the server i see errorERROR: could not connect to repositoryWARNING: writer discards 1 items\nLOG: pg_statsinfo launcher shutting downDEBUG: shmem_exit(0): 0 callbacks to makeDEBUG: proc_exit(0): 0 callbacks to makeDEBUG: exit(0)DEBUG: shmem_exit(-1): 0 callbacks to make\nDEBUG: proc_exit(-1): 0 callbacks to makepostgres.conf changes made are:shared_preload_libraries = 'pg_statsinfo,pg_stat_statements' \ntrack_counts = on track_activities = on log_min_messages = debug5 \npg_statsinfo.textlog_min_messages.log_timezone = 'utc' log_destination = 'csvlog' \nlogging_collector = on In pg_hba.conf changes made are\nlocal all postgres ident\nAny pointers would be appreciated\nregards,",
"msg_date": "Tue, 21 May 2013 17:26:52 +0530",
"msg_from": "Sameer Thakur <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_statsinfo : error could not connect to repository"
},
{
"msg_contents": "Hello\n\n> After executing make install for pg_statsinfo when i start the server i see\n> error\nIf you have any questions or troubles about pg_statsinfo, please send\nit to pgstatsinfo Mailing List.\n(http://pgfoundry.org/mail/?group_id=1000422)\n\n> postgres.conf changes made are:\nAnd you may have to change \"log_filename\" option.\n# It may not relate to this problem, though.\n\n> In pg_hba.conf changes made are\n> local all postgres ident\nCould you try to change the ident -> trust in pg_hba.conf and start\nthe PostgreSQL server with pg_statsinfo ?\n\n# Again, if you would reply, please post to the dedicated Mailing Lists.\n\nBest regards\n--\nKasahara Tatsuhito\n\n2013/5/21 Sameer Thakur <[email protected]>:\n> Hello,\n> After executing make install for pg_statsinfo when i start the server i see\n> error\n>\n> ERROR: could not connect to repository\n> WARNING: writer discards 1 items\n> LOG: pg_statsinfo launcher shutting down\n> DEBUG: shmem_exit(0): 0 callbacks to make\n> DEBUG: proc_exit(0): 0 callbacks to make\n> DEBUG: exit(0)\n> DEBUG: shmem_exit(-1): 0 callbacks to make\n> DEBUG: proc_exit(-1): 0 callbacks to make\n>\n>\n> postgres.conf changes made are:\n>\n> shared_preload_libraries = 'pg_statsinfo,pg_stat_statements'\n> track_counts = on\n> track_activities = on\n> log_min_messages = debug5\n> pg_statsinfo.textlog_min_messages.\n> log_timezone = 'utc'\n> log_destination = 'csvlog'\n> logging_collector = on\n>\n> In pg_hba.conf changes made are\n> local all postgres ident\n>\n>\n> Any pointers would be appreciated\n>\n> regards,\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 01:43:11 +0900",
"msg_from": "Kasahara Tatsuhito <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_statsinfo : error could not connect to repository"
},
{
"msg_contents": "On Tue, May 21, 2013 at 8:56 PM, Sameer Thakur <[email protected]> wrote:\n> Hello,\n> After executing make install for pg_statsinfo when i start the server i see\n> error\n>\n> ERROR: could not connect to repository\n> WARNING: writer discards 1 items\n> LOG: pg_statsinfo launcher shutting down\n> DEBUG: shmem_exit(0): 0 callbacks to make\n> DEBUG: proc_exit(0): 0 callbacks to make\n> DEBUG: exit(0)\n> DEBUG: shmem_exit(-1): 0 callbacks to make\n> DEBUG: proc_exit(-1): 0 callbacks to make\n>\n>\n> postgres.conf changes made are:\n>\n> shared_preload_libraries = 'pg_statsinfo,pg_stat_statements'\n> track_counts = on\n> track_activities = on\n> log_min_messages = debug5\n> pg_statsinfo.textlog_min_messages.\n> log_timezone = 'utc'\n> log_destination = 'csvlog'\n> logging_collector = on\n>\n> In pg_hba.conf changes made are\n> local all postgres ident\n>\n>\n> Any pointers would be appreciated\n\nDid you install your PostgreSQL server with --with-libxml configure\noption if you installed your server from source?\n\n\n--\nAmit Langote\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 10:26:58 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_statsinfo : error could not connect to repository"
}
] |
[
{
"msg_contents": "Hi people, i have a database with 400GB running in a server with 128Gb \nRAM, and 32 cores, and storage over SAN with fiberchannel, the problem \nis when i go to do a backup whit pg_dumpall take a lot of 5 hours, next \ni do a restore and take a lot of 17 hours, that is a normal time for \nthat process in that machine? or i can do something to optimize the \nprocess of backup/restore.\n\nThis is my current configuration\n\nPostgres version 9.2.2\nconnections 1000\nshared buffers 4096MB\nwork_mem = 2048MB\nmaintenance_work_mem = 2048MB\ncheckpoint_segments = 103\n\nthe other params are by default.\n\nThankyou very much\n\n-- \nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nAdm. Servidores y Comunicaciones\nAUDIFARMA S.A.\n\n\n--\nNOTA VERDE:\nNo imprima este correo a menos que sea absolutamente necesario.\nAhorre papel, ayude a salvar un arbol.\n\n--------------------------------------------------------------------\nEste mensaje ha sido analizado por MailScanner\nen busca de virus y otros contenidos peligrosos,\ny se considera que esta limpio.\n\n--------------------------------------------------------------------\nEste texto fue anadido por el servidor de correo de Audifarma S.A.:\n\nLas opiniones contenidas en este mensaje no necesariamente coinciden\ncon las institucionales de Audifarma. La informacion y todos sus\narchivos Anexos, son confidenciales, privilegiados y solo pueden ser\nutilizados por sus destinatarios. Si por error usted recibe este\nmensaje, le ofrecemos disculpas, solicitamos eliminarlo de inmediato,\nnotificarle de su error a la persona que lo envio y abstenerse de\nutilizar su contenido.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 08:18:55 -0500",
"msg_from": "Jeison Bedoya <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance database for backup/restore"
},
{
"msg_contents": "\nOn May 21, 2013, at 5:18 PM, Jeison Bedoya <[email protected]> wrote:\n\n> Hi people, i have a database with 400GB running in a server with 128Gb RAM, and 32 cores, and storage over SAN with fiberchannel, the problem is when i go to do a backup whit pg_dumpall take a lot of 5 hours, next i do a restore and take a lot of 17 hours, that is a normal time for that process in that machine? or i can do something to optimize the process of backup/restore.\n> \n\nI'd recommend you to dump with \n\npg_dump --format=c\n\nIt will compress the output and later you can restore it in parallel with\n\npg_restore -j 32 (for example)\n\nRight now you can not dump in parallel, wait for 9.3 release. Or may be someone will back port it to 9.2 pg_dump.\n\nAlso during restore you can speed up a little more by disabling fsync and synchronous_commit. \n\n> This is my current configuration\n> \n> Postgres version 9.2.2\n> connections 1000\n> shared buffers 4096MB\n> work_mem = 2048MB\n> maintenance_work_mem = 2048MB\n> checkpoint_segments = 103\n> \n> the other params are by default.\n> \n> Thankyou very much\n> \n> -- \n> Atentamente,\n> \n> \n> JEISON BEDOYA DELGADO\n> Adm. Servidores y Comunicaciones\n> AUDIFARMA S.A.\n> \n> \n> --\n> NOTA VERDE:\n> No imprima este correo a menos que sea absolutamente necesario.\n> Ahorre papel, ayude a salvar un arbol.\n> \n> --------------------------------------------------------------------\n> Este mensaje ha sido analizado por MailScanner\n> en busca de virus y otros contenidos peligrosos,\n> y se considera que esta limpio.\n> \n> --------------------------------------------------------------------\n> Este texto fue anadido por el servidor de correo de Audifarma S.A.:\n> \n> Las opiniones contenidas en este mensaje no necesariamente coinciden\n> con las institucionales de Audifarma. La informacion y todos sus\n> archivos Anexos, son confidenciales, privilegiados y solo pueden ser\n> utilizados por sus destinatarios. Si por error usted recibe este\n> mensaje, le ofrecemos disculpas, solicitamos eliminarlo de inmediato,\n> notificarle de su error a la persona que lo envio y abstenerse de\n> utilizar su contenido.\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 17:28:31 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance database for backup/restore"
},
{
"msg_contents": "On 05/21/2013 06:18 AM, Jeison Bedoya wrote:\n> Hi people, i have a database with 400GB running in a server with 128Gb \n> RAM, and 32 cores, and storage over SAN with fiberchannel, the problem \n> is when i go to do a backup whit pg_dumpall take a lot of 5 hours, \n> next i do a restore and take a lot of 17 hours, that is a normal time \n> for that process in that machine? or i can do something to optimize \n> the process of backup/restore.\nIt would help to know what you wish to solve. I.e. setting up a test/dev \nserver, testing disaster-recovery, deploying to a new server, etc. Also, \nare you dumping to a file then restoring from a file or dumping to a \npipe into the restore?\n\nIf you use the custom format in pg_dump *and* are dumping to a file \n*and* restoring via pg_restore, you can set the -j flag to somewhat \nfewer than the number of cores (though at 128 cores I can't say where \nthe sweet spot might be) to allow pg_restore to run things like index \nrecreation in parallel to help your restore speed.\n\nYou can also *temporarily* disable fsync while rebuilding the database - \njust be sure to turn it back on afterward.\n\nCopying the files is not the recommended method for backups but may work \nfor certain cases. One is when you can shut down the database so the \nwhole directory is quiescent while you copy the files. Also, depending \non your SAN features, you *might* be able to do a snapshot of the \nrunning PostgreSQL data directory and use that.\n\n>\n>\n>\n> Postgres version 9.2.2 ...\n...has a nasty security issue. Upgrade. Now.\n\nCheers,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 08:11:14 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance database for backup/restore"
},
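For the synchronous_commit part of that advice, one way to cover every session the restore opens is to set it at the database level; targetdb is a placeholder name here, and fsync itself is server-wide, so it still has to be changed in postgresql.conf and reverted afterwards:

-- sessions opened by pg_restore against targetdb pick this up automatically
ALTER DATABASE targetdb SET synchronous_commit = off;

-- ... run the restore here, e.g. pg_restore with several -j jobs ...

-- put the default back once the restore is done
ALTER DATABASE targetdb RESET synchronous_commit;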
{
"msg_contents": "On Tue, May 21, 2013 at 05:28:31PM +0400, Evgeny Shishkin wrote:\n> \n> On May 21, 2013, at 5:18 PM, Jeison Bedoya <[email protected]> wrote:\n> \n> > Hi people, i have a database with 400GB running in a server with 128Gb RAM, and 32 cores, and storage over SAN with fiberchannel, the problem is when i go to do a backup whit pg_dumpall take a lot of 5 hours, next i do a restore and take a lot of 17 hours, that is a normal time for that process in that machine? or i can do something to optimize the process of backup/restore.\n> > \n> \n> I'd recommend you to dump with \n> \n> pg_dump --format=c\n> \n> It will compress the output and later you can restore it in parallel with\n> \n> pg_restore -j 32 (for example)\n> \n> Right now you can not dump in parallel, wait for 9.3 release. Or may be someone will back port it to 9.2 pg_dump.\n> \n> Also during restore you can speed up a little more by disabling fsync and synchronous_commit. \n> \n\nIf you have the space and I/O capacity, avoiding the compress option will be\nmuch faster. The current compression scheme using zlib type compression is\nvery CPU intensive and limits your dump rate. On a system that we have, a\ndump without compression takes 20m and with compression 2h20m. The parallel\nrestore make a big difference as well.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 10:46:13 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance database for backup/restore"
},
{
"msg_contents": "2013/5/21 Jeison Bedoya <[email protected]>\n\n> Hi people, i have a database with 400GB running in a server with 128Gb\n> RAM, and 32 cores, and storage over SAN with fiberchannel, the problem is\n> when i go to do a backup whit pg_dumpall take a lot of 5 hours, next i do a\n> restore and take a lot of 17 hours, that is a normal time for that process\n> in that machine? or i can do something to optimize the process of\n> backup/restore.\n>\n\nHow many database objects do you have? A few large objects will dump and\nrestore faster than a huge number of smallish objects.\n\nWhere is your bottleneck? \"top\" should show you whether it is CPU or IO.\n\nI can pg_dump about 6GB/minute to /dev/null using all defaults with a small\nnumber of large objects.\n\nCheers,\n\nJeff\n\n2013/5/21 Jeison Bedoya <[email protected]>\nHi people, i have a database with 400GB running in a server with 128Gb RAM, and 32 cores, and storage over SAN with fiberchannel, the problem is when i go to do a backup whit pg_dumpall take a lot of 5 hours, next i do a restore and take a lot of 17 hours, that is a normal time for that process in that machine? or i can do something to optimize the process of backup/restore.\nHow many database objects do you have? A few large objects will dump and restore faster than a huge number of smallish objects.Where is your bottleneck? \"top\" should show you whether it is CPU or IO.\nI can pg_dump about 6GB/minute to /dev/null using all defaults with a small number of large objects.Cheers,Jeff",
"msg_date": "Tue, 21 May 2013 09:11:40 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance database for backup/restore"
},
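Two catalog queries that help answer those questions before the next dump attempt; they only read pg_class, so they are relatively cheap to run on the live system:

-- how many relations the dump has to walk through
SELECT relkind, count(*)
FROM pg_class
GROUP BY relkind
ORDER BY relkind;

-- where the bulk of the 400GB actually lives
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid::regclass)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid::regclass) DESC
LIMIT 20;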
{
"msg_contents": "hi jeff thanks by your answer, when you say \"database objects\" you \ntalking about the tables?, because i have 1782 tables in my database.\n\nUmm, my boottleneck not is on CPU because the top don�t show something \nabout that, the memory is used 30%, and the IO not have problem, because \nthe Fiber channel SAN have capacity of 8GB and tthe i/o transfer to the \ndisk is no Upper to 1 GB.\n\ncan you explainme again how do you do a 6bg/min for pd_dump\n\nthanks\n\nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nAdm. Servidores y Comunicaciones\nAUDIFARMA S.A.\n\nEl 21/05/2013 11:11 a.m., Jeff Janes escribi�:\n> 2013/5/21 Jeison Bedoya <[email protected] \n> <mailto:[email protected]>>\n>\n> Hi people, i have a database with 400GB running in a server with\n> 128Gb RAM, and 32 cores, and storage over SAN with fiberchannel,\n> the problem is when i go to do a backup whit pg_dumpall take a lot\n> of 5 hours, next i do a restore and take a lot of 17 hours, that\n> is a normal time for that process in that machine? or i can do\n> something to optimize the process of backup/restore.\n>\n>\n> How many database objects do you have? A few large objects will dump \n> and restore faster than a huge number of smallish objects.\n>\n> Where is your bottleneck? \"top\" should show you whether it is CPU or IO.\n>\n> I can pg_dump about 6GB/minute to /dev/null using all defaults with a \n> small number of large objects.\n>\n> Cheers,\n>\n> Jeff\n>\n> -- \n> NOTA VERDE:\n> No imprima este correo\n> a menos que sea absolutamente necesario.\n> Ahorre papel, ayude a salvar un arbol.\n>\n> --------------------------------------------------------------------\n> Este mensaje ha sido analizado por *MailScanner* \n> <http://www.mailscanner.info/>\n> en busca de virus y otros contenidos peligrosos,\n> y se considera que esta limpio.\n>\n> --------------------------------------------------------------------\n> Este texto fue anadido por el servidor de correo de Audifarma S.A.:\n>\n> Las opiniones contenidas en este mensaje no necesariamente coinciden\n> con las institucionales de Audifarma. La informacion y todos sus\n> archivos Anexos, son confidenciales, privilegiados y solo pueden ser\n> utilizados por sus destinatarios. Si por error usted recibe este\n> mensaje, le ofrecemos disculpas, solicitamos eliminarlo de inmediato,\n> notificarle de su error a la persona que lo envia y abstenerse de\n> utilizar su contenido. \n\n\n--\nNOTA VERDE:\nNo imprima este correo a menos que sea absolutamente necesario.\nAhorre papel, ayude a salvar un arbol.\n\n--------------------------------------------------------------------\nEste mensaje ha sido analizado por MailScanner\nen busca de virus y otros contenidos peligrosos,\ny se considera que esta limpio.\n\n--------------------------------------------------------------------\nEste texto fue anadido por el servidor de correo de Audifarma S.A.:\n\nLas opiniones contenidas en este mensaje no necesariamente coinciden\ncon las institucionales de Audifarma. La informacion y todos sus\narchivos Anexos, son confidenciales, privilegiados y solo pueden ser\nutilizados por sus destinatarios. 
Si por error usted recibe este\nmensaje, le ofrecemos disculpas, solicitamos eliminarlo de inmediato,\nnotificarle de su error a la persona que lo envio y abstenerse de\nutilizar su contenido.\n\n\n\n\n\n\n\n\nhi jeff thanks by your answer, when you\n say \"database objects\" you talking about the tables?, because i\n have 1782 tables in my database.\n\n Umm, my boottleneck not is on CPU because the top don´t show\n something about that, the memory is used 30%, and the IO not have\n problem, because the Fiber channel SAN have capacity of 8GB and\n tthe i/o transfer to the disk is no Upper to 1 GB.\n\n can you explainme again how do you do a 6bg/min for pd_dump \n\n thanks\nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nAdm. Servidores y Comunicaciones\nAUDIFARMA S.A.\n El 21/05/2013 11:11 a.m., Jeff Janes escribió:\n\n\n2013/5/21 Jeison Bedoya <[email protected]>\n\n\n\n Hi people, i have a database with 400GB running in a\n server with 128Gb RAM, and 32 cores, and storage over SAN\n with fiberchannel, the problem is when i go to do a backup\n whit pg_dumpall take a lot of 5 hours, next i do a restore\n and take a lot of 17 hours, that is a normal time for that\n process in that machine? or i can do something to optimize\n the process of backup/restore.\n\n\n\nHow many database objects do you have? A few\n large objects will dump and restore faster than a huge\n number of smallish objects.\n\n\nWhere is your bottleneck? \"top\" should show\n you whether it is CPU or IO.\n\n\nI can pg_dump about 6GB/minute to /dev/null\n using all defaults with a small number of large objects.\n\n\nCheers,\n\n\nJeff\n\n\n\n\n --\n \nNOTA VERDE:\n \n No imprima este correo\n \n a menos que sea absolutamente necesario.\n \n Ahorre papel, ayude a salvar un arbol.\n \n\n--------------------------------------------------------------------\n \n Este mensaje ha sido analizado por MailScanner\n\n en busca de virus y otros contenidos peligrosos,\n \n y se considera que esta limpio.\n \n\n--------------------------------------------------------------------\n \n Este texto fue anadido por el servidor de correo de Audifarma\n S.A.:\n \n\n Las opiniones contenidas en este mensaje no necesariamente\n coinciden\n \n con las institucionales de Audifarma. La informacion y todos sus\n \n archivos Anexos, son confidenciales, privilegiados y solo pueden\n ser\n \n utilizados por sus destinatarios. Si por error usted recibe este\n \n mensaje, le ofrecemos disculpas, solicitamos eliminarlo de\n inmediato,\n \n notificarle de su error a la persona que lo envia y abstenerse de\n \n utilizar su contenido.\n \n\n--\nNOTA VERDE:\nNo imprima este correo\n a menos que sea absolutamente necesario.\nAhorre papel, ayude a salvar un arbol.\n\n--------------------------------------------------------------------\nEste mensaje ha sido analizado por MailScanner\nen busca de virus y otros contenidos peligrosos,\ny se considera que esta limpio.\n\n--------------------------------------------------------------------\nEste texto fue anadido por el servidor de correo de Audifarma S.A.:\n\nLas opiniones contenidas en este mensaje no necesariamente coinciden\ncon las institucionales de Audifarma. La informacion y todos sus\narchivos Anexos, son confidenciales, privilegiados y solo pueden ser\nutilizados por sus destinatarios. Si por error usted recibe este\nmensaje, le ofrecemos disculpas, solicitamos eliminarlo de inmediato,\nnotificarle de su error a la persona que lo envia y abstenerse de\nutilizar su contenido.",
"msg_date": "Tue, 21 May 2013 15:03:59 -0500",
"msg_from": "Jeison Bedoya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance database for backup/restore"
}
] |
[
{
"msg_contents": "The SARS_ACTS table currently has 37,115,515 rowswe have indexed: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree (sars_run_id)we have pk constraint on the SARS_ACTS_RUN table; sars_acts_run_pkey PRIMARY KEY (id )serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT'; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------Aggregate (cost=4213952.17..4213952.18 rows=1 width=0) -> Hash Join (cost=230573.06..4213943.93 rows=3296 width=0) Hash Cond: (this_.SARS_RUN_ID=tr1_.ID) -> Seq Scan om sars_acts this_ (cost=0.00..3844241.84 rows=37092284 width=8) -> Hash (cost=230565.81..230565.81 rows=580 width=8) -> Seq Scan on sars_acts_run tr1_ (cost=0.00..230565.81 rows=580 width=8) Filter: ((algorithm)::text = 'SMAT'::text)(7 rows)This query executes in approximately 5.3 minutes to complete, very very slow, our users are not happy.I did add an index on SARS_ACTS_RUN.ALGORITHM column but it didn't improve the run time. The planner just changed the \"Filter:\" to an \"Index Scan:\" improving the cost of the Seq Scan on the sars_acts_run table, but the overall run time remained the same. It seems like the bottleneck is in the Seq Scan on the sars_acts table. -> Seq Scan on sars_acts_run tr1_ (cost=0.00..230565.81 rows=580 width=8) Filter: ((algorithm)::text = 'SMAT'::text)Does anyone have suggestions about how to speed it up?\n",
"msg_date": "Tue, 21 May 2013 14:53:45 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow inner join query Unacceptable latency."
},
{
"msg_contents": "On Tue, May 21, 2013 at 4:53 PM, <[email protected]> wrote:\n> The SARS_ACTS table currently has 37,115,515 rows\n>\n> we have indexed: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree\n> (sars_run_id)\n> we have pk constraint on the SARS_ACTS_RUN table; sars_acts_run_pkey PRIMARY\n> KEY (id )\n>\n> serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join\n> SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';\n\ncan you please show us an EXPLAIN ANALYZE of this query (not only\nEXPLAIN). please paste it in a file and attach it so it doesn't get\nreformatted by the mail client.\n\nwhat version of postgres is this?\n\n--\nJaime Casanova www.2ndQuadrant.com\nProfessional PostgreSQL: Soporte 24x7 y capacitación\nPhone: +593 4 5107566 Cell: +593 987171157\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 16:59:56 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow inner join query Unacceptable latency."
},
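Concretely, that request amounts to running something like this from psql and attaching the resulting file as-is (\o just redirects psql's output; the file name is arbitrary):

\o sars_join_plan.txt
EXPLAIN ANALYZE
SELECT count(*) AS y0_
FROM SARS_ACTS this_
INNER JOIN SARS_ACTS_RUN tr1_ ON this_.SARS_RUN_ID = tr1_.ID
WHERE tr1_.ALGORITHM = 'SMAT';
\o

Note that EXPLAIN ANALYZE actually executes the query, so it will take the full five-plus minutes to come back.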
{
"msg_contents": "On Wednesday, May 22, 2013 3:24 AM fburgess wrote:\n\n> The SARS_ACTS table currently has 37,115,515 rows\n\n> we have indexed: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree (sars_run_id)\n> we have pk constraint on the SARS_ACTS_RUN table; sars_acts_run_pkey PRIMARY KEY (id )\n\n> serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=4213952.17..4213952.18 rows=1 width=0)\n> -> Hash Join (cost=230573.06..4213943.93 rows=3296 width=0)\n> Hash Cond: (this_.SARS_RUN_ID=tr1_.ID)\n> -> Seq Scan om sars_acts this_ (cost=0.00..3844241.84 rows=37092284 width=8)\n> -> Hash (cost=230565.81..230565.81 rows=580 width=8)\n> -> Seq Scan on sars_acts_run tr1_ (cost=0.00..230565.81 rows=580 width=8)\n> Filter: ((algorithm)::text = 'SMAT'::text)\n> (7 rows)\n\n\n\n> This query executes in approximately 5.3 minutes to complete, very very slow, our users are not happy.\n\n> I did add an index on SARS_ACTS_RUN.ALGORITHM column but it didn't improve the run time. \n> The planner just changed the \"Filter:\" to an \"Index Scan:\" improving the cost of the Seq Scan \n> on the sars_acts_run table, but the overall run time remained the same. It seems like the bottleneck \n> is in the Seq Scan on the sars_acts table.\n\n> -> Seq Scan on sars_acts_run tr1_ (cost=0.00..230565.81 rows=580 width=8)\n> Filter: ((algorithm)::text = 'SMAT'::text)\n\n> Does anyone have suggestions about how to speed it up?\n\nCould you please once trying Analyzing both tables and then run the query to check which plan it uses:\n\nAnalyze SARS_ACTS;\nAnalyze SARS_ACTS_RUN;\n\n\nWith Regards,\nAmit Kapila.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 11:27:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow inner join query Unacceptable latency."
}
] |
[
{
"msg_contents": "Howdy,\n\nEnvironment:\n\nPostgres 8.4.15\nUbuntu 10.04\n\nSyslog view def:\n\nnms=# \\d syslog\n View \"public.syslog\"\n Column | Type | Modifiers\n----------+-----------------------------+-----------\nip | inet |\nfacility | character varying(10) |\nlevel | character varying(10) |\ndatetime | timestamp without time zone |\nprogram | character varying(25) |\nmsg | text |\nseq | bigint |\nView definition:\nSELECT syslog_master.ip, syslog_master.facility, syslog_master.level, syslog_master.datetime, syslog_master.program, syslog_master.msg, syslog_master.seq\n FROM syslog_master;\nRules:\nsyslog_insert_201304 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-04-01'::date AND new.datetime < '2013-05-01'::date DO INSTEAD INSERT INTO syslog_201304 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201305 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-05-01'::date AND new.datetime < '2013-06-01'::date DO INSTEAD INSERT INTO syslog_201305 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_201306 AS\n ON INSERT TO syslog\n WHERE new.datetime >= '2013-06-01'::date AND new.datetime < '2013-07-01'::date DO INSTEAD INSERT INTO syslog_201306 (ip, facility, level, datetime, program, msg)\n VALUES (new.ip, new.facility, new.level, new.datetime, new.program, new.msg)\nsyslog_insert_null AS\n ON INSERT TO syslog DO INSTEAD NOTHING\n\nDevices table def:\n\nnms=# \\d devices\n\n\n Table \"public.devices\"\n Column | Type | Modifiers\n------------------+-----------------------------+------------------------------------------------------\nid | integer | not null default nextval('devices_id_seq'::regclass)\nhostname | character varying(20) |\nhostpop | character varying(20) |\nhostgroup | character varying(20) |\nrack | character varying(10) |\nasset | character varying(10) |\nip | inet |\nsnmprw | character varying(20) |\nsnmpro | character varying(20) |\nsnmpver | character varying(3) |\nconsole | character varying(20) |\npsu1 | character varying(20) |\npsu2 | character varying(20) |\npsu3 | character varying(20) |\npsu4 | character varying(20) |\nalias1 | character varying(20) |\nalias2 | character varying(20) |\nfailure | character varying(255) |\nmodified | timestamp without time zone | not null default now()\nmodified_by | character varying(20) |\nactive | character(1) | default 't'::bpchar\nrad_secret | character varying(20) |\nrad_atr | character varying(40) |\nsnmpdev | integer |\nnetflow | text |\ncpu | integer |\ntemp | integer |\nfirmware_type_id | bigint | default 1\nIndexes:\n \"id_pkey\" PRIMARY KEY, btree (id)\n \"devices_active_index\" btree (active)\n \"devices_failure\" btree (failure)\n \"devices_hostgroup\" btree (hostgroup)\n \"devices_hostname\" btree (hostname)\n \"devices_hostpop\" btree (hostpop)\n \"devices_ip_index\" btree (ip)\n \"devices_snmprw\" btree (snmprw)\nForeign-key constraints:\n \"devices_firmware_type_id_fkey\" FOREIGN KEY (firmware_type_id) REFERENCES firmware_type(id)\nReferenced by:\n TABLE \"ac_attributes\" CONSTRAINT \"ac_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"acls_matrix\" CONSTRAINT \"acls_matrix_device_id_fkey\" FOREIGN KEY (device_id) REFERENCES devices(id) ON UPDATE CASCADE ON DELETE CASCADE\n TABLE \"ip_local_pool_aggregates\" CONSTRAINT \"ip_local_pool_aggregates_host_fkey\" FOREIGN KEY (host) REFERENCES 
devices(id)\n TABLE \"ipsla_instances\" CONSTRAINT \"ipsla_instances_host_fkey\" FOREIGN KEY (host) REFERENCES devices(id) ON DELETE CASCADE\n TABLE \"lns_attributes\" CONSTRAINT \"lns_attributes_id_fkey\" FOREIGN KEY (id) REFERENCES devices(id) ON DELETE CASCADE\n\nMongroups table def:\n\nnms=# \\d mongroups\n Table \"public.mongroups\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\nhostgroup | character varying(20) |\nlocale | text |\ndepartment | character varying(20) |\nIndexes:\n \"ukey_hostgroup_department\" UNIQUE, btree (hostgroup, department)\n\nThe following SELECT runs for 86 seconds on average:\n\nSELECT syslog.ip,\n syslog.msg,\n syslog.datetime,\n devices.hostname,\n devices.hostpop\nFROM syslog,\n devices\nWHERE syslog.ip IN\n (SELECT ip\n FROM devices,\n mongroups\n WHERE (active = 't'\n OR active = 's')\n AND devices.hostgroup = mongroups.hostgroup\n AND devices.hostname || '.' || devices.hostpop = 'pe1.mel4'\n AND devices.id != '1291')\n AND datetime <= '2013-04-24 00:00:00'\n AND datetime >= '2013-04-21 00:00:00' AND syslog.ip = devices.ip AND ( devices.active = 't'\n OR devices.active = 's' );\n\nIs there anything I can do to get the SELECT to run a little quicker.\n\nThank you,\n\nSamuel Stearns\n",
"msg_date": "Tue, 21 May 2013 23:16:58 +0000",
"msg_from": "Samuel Stearns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice on tuning slow query"
},
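One detail worth noting about the query in the message above: the subquery filters on the expression devices.hostname || '.' || devices.hostpop, which none of the listed single-column indexes can serve, and the outer scan filters syslog rows by ip and datetime. A hedged sketch of indexes a reader might experiment with (index names are invented here, and whether the 8.4 planner actually picks them up would need EXPLAIN ANALYZE to confirm):

CREATE INDEX devices_hostname_hostpop_expr_idx
    ON devices ((hostname || '.' || hostpop));

-- one composite index per syslog child table touched by the date range,
-- e.g. the April 2013 partition named in the insert rules:
CREATE INDEX syslog_201304_ip_datetime_idx
    ON syslog_201304 (ip, datetime);

ANALYZE devices;  -- so the planner gets statistics for the new expression

Running EXPLAIN ANALYZE on the SELECT before and after would show whether the plan changes at all.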
{
"msg_contents": "On Tue, May 21, 2013 at 4:16 PM, Samuel Stearns\n<[email protected]> wrote:\n> Is there anything I can do to get the SELECT to run a little quicker.\n\nPlease carefully follow the instruction first\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions.\n\nI would also suggest to upgrade postgres to the latest version, as it\nhas a lot of performance improvements.\n\n--\nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 16:33:16 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on tuning slow query"
}
] |
[
{
"msg_contents": "Hi, I have a database where one of my tables (Adverts) are requested a LOT. It's a relatively narrow table with 12 columns, but the size is growing pretty rapidly. The table is used i relation to another one called (Car), and in the form of \"cars has many adverts\". I have indexed the foreign key car_id on Adverts.\n\nHowever the performance when doing a \"SELECT .* FROM cars LEFT OUTER JOIN adverts on cars.id = adverts.car_id WHERE cars.brand = 'Audi'\" is too poor. I have identified that it's the Adverts table part that performs very bad, and it's by far the biggest of the two. I would like to optimize the query/index, but I don't know if there at all is any theoretical option of actually getting a performance boost on a join, where the foreign key is already indexed?\n\nOne idea I'm thinking of my self is that I have a column called state on the adverts which can either be 'active' or 'deactivated'. The absolute amount of 'active adverts are relatively constant (currently 15%) where the remaining and growing part is 'deactivated'.\n\nIn reality the adverts that are selected is all 'active'. I'm hence wondering if it theoretically (and in reality of cause) would make my query faster if I did something like: \"SELECT .* FROM cars LEFT OUTER JOIN adverts on cars.id = adverts.car_id WHERE cars.brand = 'Audi' AND adverts.state = 'active'\" with a partial index on \"INDEX adverts ON (car_id) WHERE state = 'active'\"?\n\nRegards Niels Kristian\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 May 2013 16:37:36 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Advice on optimizing select/index"
},
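A concrete form of the partial index Niels describes, written out against the column names in his message (the index name is invented, and this is untested here):

CREATE INDEX index_adverts_on_car_id_active
    ON adverts (car_id)
    WHERE state = 'active';

-- Niels' proposed query shape; note that putting the state filter in WHERE
-- makes the LEFT JOIN behave like an inner join for the rows returned
SELECT cars.*, adverts.*
FROM cars
LEFT OUTER JOIN adverts ON cars.id = adverts.car_id
WHERE cars.brand = 'Audi'
  AND adverts.state = 'active';

Because the query repeats the predicate state = 'active', the planner can prove the partial index covers every adverts row the query can need.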
{
"msg_contents": "On 22.05.2013 16:37, Niels Kristian Schjødt wrote:\n\n> In reality the adverts that are selected is all 'active'. I'm hence\n> wondering if it theoretically (and in reality of cause) would make my\n> query faster if I did something like: \"SELECT .* FROM cars LEFT\n> OUTER JOIN adverts on cars.id = adverts.car_id WHERE cars.brand =\n> 'Audi' AND adverts.state = 'active'\" with a partial index on \"INDEX\n> adverts ON (car_id) WHERE state = 'active'\"?\n\nThat sounds reasonable to do. If you have enough bandwidth on your \nproduction database why not just try it out? Otherwise you could do \nthis on a test database and see how it goes and what plan you get. Btw. \ndid you look at the original plan?\n\nCheers\n\n\trobert\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 26 May 2013 14:34:10 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on optimizing select/index"
},
{
"msg_contents": "On Wed, May 22, 2013 at 7:37 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n>\n> One idea I'm thinking of my self is that I have a column called state on\n> the adverts which can either be 'active' or 'deactivated'. The absolute\n> amount of 'active adverts are relatively constant (currently 15%) where the\n> remaining and growing part is 'deactivated'.\n>\n\nYou might consider deleting the rows from the active table, rather than\njust setting an inactive flag, possibly inserting them into a history\ntable, if you need to preserve the info. You can do that in a single\nstatement using \"WITH foo as (delete from advert where ... returning *)\ninsert into advert_history select * from foo\"\n\n\n>\n> In reality the adverts that are selected is all 'active'. I'm hence\n> wondering if it theoretically (and in reality of cause) would make my query\n> faster if I did something like: \"SELECT .* FROM cars LEFT OUTER JOIN\n> adverts on cars.id = adverts.car_id WHERE cars.brand = 'Audi' AND\n> adverts.state = 'active'\" with a partial index on \"INDEX adverts ON\n> (car_id) WHERE state = 'active'\"?\n>\n\n\nThe left join isn't doing you much good there, as the made-up rows just get\nfiltered out anyway.\n\nThe partial index could help, but not as much as partitioning away the\ninactive records from the table, as well as from the index.\n\nCheers,\n\nJeff\n\nOn Wed, May 22, 2013 at 7:37 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\nOne idea I'm thinking of my self is that I have a column called state on the adverts which can either be 'active' or 'deactivated'. The absolute amount of 'active adverts are relatively constant (currently 15%) where the remaining and growing part is 'deactivated'.\nYou might consider deleting the rows from the active table, rather than just setting an inactive flag, possibly inserting them into a history table, if you need to preserve the info. You can do that in a single statement using \"WITH foo as (delete from advert where ... returning *) insert into advert_history select * from foo\"\n \n\nIn reality the adverts that are selected is all 'active'. I'm hence wondering if it theoretically (and in reality of cause) would make my query faster if I did something like: \"SELECT .* FROM cars LEFT OUTER JOIN adverts on cars.id = adverts.car_id WHERE cars.brand = 'Audi' AND adverts.state = 'active'\" with a partial index on \"INDEX adverts ON (car_id) WHERE state = 'active'\"?\nThe left join isn't doing you much good there, as the made-up rows just get filtered out anyway.The partial index could help, but not as much as partitioning away the inactive records from the table, as well as from the index.\nCheers,Jeff",
"msg_date": "Mon, 3 Jun 2013 09:49:36 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Advice on optimizing select/index"
}
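Jeff's single-statement move could be spelled out roughly as below. The adverts_history table is assumed to exist with the same column list, the WHERE condition is only an example, and data-modifying CTEs require PostgreSQL 9.1 or later:

WITH moved AS (
    DELETE FROM adverts
    WHERE state = 'deactivated'
    RETURNING *
)
INSERT INTO adverts_history
SELECT * FROM moved;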
] |
[
{
"msg_contents": "PostgreSQL 9.1.6 on linux\n\n\n-------- Original Message --------\nSubject: Re: [PERFORM] Very slow inner join query Unacceptable latency.\nFrom: Jaime Casanova <[email protected]>\nDate: Tue, May 21, 2013 2:59 pm\nTo: Freddie Burgess <[email protected]>\nCc: psql performance list <[email protected]>, Postgres\nGeneral <[email protected]>\n\nOn Tue, May 21, 2013 at 4:53 PM, <[email protected]> wrote:\n> The SARS_ACTS table currently has 37,115,515 rows\n>\n> we have indexed: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree\n> (sars_run_id)\n> we have pk constraint on the SARS_ACTS_RUN table; sars_acts_run_pkey PRIMARY\n> KEY (id )\n>\n> serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join\n> SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';\n\ncan you please show us an EXPLAIN ANALYZE of this query (not only\nEXPLAIN). please paste it in a file and attach it so it doesn't get\nreformatted by the mail client.\n\nwhat version of postgres is this?\n\n--\nJaime Casanova www.2ndQuadrant.com\nProfessional PostgreSQL: Soporte 24x7 y capacitación\nPhone: +593 4 5107566 Cell: +593 987171157\n\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general",
"msg_date": "Wed, 22 May 2013 07:41:28 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
},
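Jaime asks for the EXPLAIN ANALYZE output attached as a file so mail clients do not rewrap it. One way to capture it from psql is the \o meta-command (the file path is arbitrary):

\o /tmp/sars_plan.txt
EXPLAIN ANALYZE
SELECT count(*) AS y0_
FROM SARS_ACTS this_
INNER JOIN SARS_ACTS_RUN tr1_ ON this_.SARS_RUN_ID = tr1_.ID
WHERE tr1_.ALGORITHM = 'SMAT';
\o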
{
"msg_contents": "On Wed, May 22, 2013 at 7:41 AM, <[email protected]> wrote:\n\n> PostgreSQL 9.1.6 on linux\n>\n\n\n From the numbers in your attached plan, it seems like it should be doing a\nnested loop from the 580 rows (it thinks) that match in SARS_ACTS_RUN\nagainst the index on sars_run_id to pull out the 3297 rows (again, it\nthink, though it is way of there). I can't see why it would not do that.\nThere were some planner issues in the early 9.2 releases that caused very\nlarge indexes to be punished, but I don't think those were in 9.1\n\nCould you \"set enable_hashjoin to off\" and post the \"explain analyze\" that\nthat gives?\n\n\nCheers,\n\nJeff\n\nOn Wed, May 22, 2013 at 7:41 AM, <[email protected]> wrote:\n\nPostgreSQL 9.1.6 on linuxFrom the numbers in your attached plan, it seems like it should be doing a nested loop from the 580 rows (it thinks) that match in SARS_ACTS_RUN against the index on sars_run_id to pull out the 3297 rows (again, it think, though it is way of there). I can't see why it would not do that. There were some planner issues in the early 9.2 releases that caused very large indexes to be punished, but I don't think those were in 9.1\nCould you \"set enable_hashjoin to off\" and post the \"explain analyze\" that that gives?\nCheers,\nJeff",
"msg_date": "Wed, 22 May 2013 17:17:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow inner join query Unacceptable latency."
},
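Jeff's suggestion as a complete session, using the count(*) query quoted earlier in the thread; the override only affects the current session and is reset afterwards:

SET enable_hashjoin = off;

EXPLAIN ANALYZE
SELECT count(*) AS y0_
FROM SARS_ACTS this_
INNER JOIN SARS_ACTS_RUN tr1_ ON this_.SARS_RUN_ID = tr1_.ID
WHERE tr1_.ALGORITHM = 'SMAT';

RESET enable_hashjoin;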
{
"msg_contents": "Looking at the execution plan makes me wonder what your work_mem is\nset to. Try cranking it up to test and lowering random_page_cost:\n\nset work_mem='500MB';\nset random_page_cost=1.2;\nexplain analyze select ...\n\nand see what you get.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 24 May 2013 00:16:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
}
] |
[
{
"msg_contents": "On Wednesday, May 22, 2013 10:03 PM fburgess wrote:\n\n> I did perform a explain analyze on the query.\n\nExplain analyze doesn't help to collect statistics. You should use Analyze <table_name>.\n\nIdeally optimizer should have slected the best plan, but just to check you can once try with\n\nSET enable_hashjoin=off;\n\nAnd see what is the plan it chooses and does it pick up index scan on larger table?\n\nCould you please output of \\d SARS_ACTS and \\d SARS_ACTS_RUN?\n\n\nWith Regards,\nAmit Kapila.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 11:43:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow inner join query Unacceptable latency."
}
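The statistics refresh Amit refers to is a plain ANALYZE on each table, run before re-checking the plan (VERBOSE is optional):

ANALYZE VERBOSE sars_acts;
ANALYZE VERBOSE sars_acts_run;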
] |
[
{
"msg_contents": "Hello,\n\nI am testing performance of postgresql application using pgbench.\nI am getting spike in results(graphs) as shown in attached graph due to\nthroughput drop at that time.\npgbench itself doing checkpoint on server (where queries are running)\nbefore and after test starts.\npgbench is running on separate client machine.\nactual queries are running on separate server machine.\n\n*Test configurations:*\ntest duration is 5 minutes\nnumbers clients are 120\nscale is 100\nquery mode is prepared\nonly select queries are used.\n\nresult graph: see attachment tps.png\n\n*spike is at 12:09:14*\n\n*My Observatios:*\n*\n*\nIn vmstat, sar , iostat, top logs i found that at the time of spike there\nis more iowait on pgbench-client.\nthere is similar iowait at another timestamp present but there is no spike.\nSo i am not getting why spike occure at *12:09:14 *only*.*\nIf anyone find solution of this problem please reply.\nAlso i am working to get context switches at the time of spike occurred.\n\nPlease reply if any clue.\n\n------------------------------------------------------------------------------\n*pgbench Client machine configuration:*\nHardware and OS specifications for pgbench-client\n\nParameter:Value\nProcessor: INTEL XEON (E5645) 2.40GHz*2 PROCESSOR\nTotal Cores:12. 6 cores per processor\nRAM: 8 GB RAM (4GB*2)\nHDD: 300GB*2 SAS HDD. RAID 1 configured. So only one disk in use at a time.\n2nd disk is used for mirroring.\n\nOperating System:GNU/Linux\nRed Hat release: Red Hat Enterprise Linux Server release 6.3 (Santiago)\n\n*Server machine configuration and environment setup:*\nHardware and OS specifications for server:\n\nParameter: Value\nProcessors: Xeon E5-2650 Processor Kit , Intel® Xeon ® Processor E5-2650 (2\nGHz, 8C/16T, 20 MB) * 2 nos (Part No. N8101-549F)\nRAM: 32GB DDR3-1600 REG Memory Kit , 8x 4GB Registered ECC DIMM,\nDDR3L-1600(PC3L-12800) (Part No. N8102-469F)\nHDD: 450GB 10K Hot Plug 2.5-inch SAS HDD * 8 nos 1 x 450 GB SAS HDD,\n2.5-inch, 6Gb/s, 10,000 rpm (Part No. N8150-322)\n\nOperating System: GNU/Linux\nRed Hat release: Red Hat Enterprise Linux Server release 6.3 (Santiago)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 23 May 2013 17:09:32 +0530",
"msg_from": "\"Sachin D. Bhosale-Kotwal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench: spike in pgbench results(graphs) while testing\n\tpg_hint_plan performance"
},
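The post lists the test parameters but not the exact pgbench command line. A plausible invocation matching them might look like the following; the host name, user, thread count and database name are assumptions based on the logs quoted later in the thread:

# one-time initialization at scale 100 (creates the pgbench_* tables)
pgbench -i -s 100 -h server01 -U postgres_user pg_bench

# 5-minute, 120-client, select-only run in prepared query mode
pgbench -h server01 -U postgres_user -c 120 -j 12 -T 300 -S -M prepared pg_bench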
{
"msg_contents": "On 5/23/13 7:39 AM, Sachin D. Bhosale-Kotwal wrote:\n> So i am not getting why spike occure at *12:09:14 *only*.*\n\nThis could easily be caused by something outside of the test itself. \nBackground processes. A monitoring system kicking in to write some data \nto disk will cause a drop like this.\n\nThree things you could look into to try and track down the issue:\n\n-Did you turn on log_checkpoints to see if one happens to run when the \nrate drops? If so, that could be the explanation. Looks like you setup \nthe test to make this less likely, but checkpoints are tough to \neliminate altogether.\n\n-Does this happen on every test run? Is it at the same time?\n\n-You can run \"top -bc\" to dump snapshots of what the system is doing \nevery second. With some work you can then figure out what was actually \nhappening during the two seconds around when the throughput dropped.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 May 2013 08:21:31 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: spike in pgbench results(graphs) while testing\n\tpg_hint_plan performance"
},
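Greg's "top -bc" idea, spelled out as a capture that can be searched around the dip afterwards; the duration and file name are arbitrary:

# one snapshot per second for 10 minutes, with full command lines
top -b -c -d 1 -n 600 > /tmp/top_during_run.log

# each snapshot header carries a clock time, so the seconds around the
# drop can be pulled out later, e.g.:
grep -A 40 '^top - 12:09:1' /tmp/top_during_run.log | less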
{
"msg_contents": "Thanks for reply.\n\n>This could easily be caused by something outside of the test itself.\nBackground processes. A monitoring system kicking in to write some data to\ndisk will cause a drop like this.\n\n>Three things you could look into to try and track down the issue:\n\n>-Did you turn on log_checkpoints to see if one happens to run when the rate\ndrops?\nYes. I have turn on log_checkpoints.\n\nHere i am sending my postgrsql.conf details:\npostgresql.conf for pgbench-client and server machine.\n\nParameter: Value\n\nConnection setup: Connection Settings\n\nlisten_addresses: *\nmax_connections: 150\nsuperuser_reserved_connections: 10\ntcp_keepalives_idle: 60\ntcp_keepalives_interval: 5\ntcp_keepalives_count: 5\nResource Consumption (Memory)\nshared_buffers: 4 GB\nmaintenance_work_mem: 800 MB\n\nWrite Ahead Log\n\nwal_level: archive \nwal_sync_method: fdatasync\nwal_buffers: 16 MB\nsynchronous_commit: on\n\nCheckpoint\n\ncheckpoint_segments: 300\ncheckpoint_timeout:15 min\narchive_mode: on\narchive_command: cp %p /archive/archive_pg_hint_plan/%f \n\nQuery Planning\n\nrandom_page_cost: 2\neffective_cache_size: 8 GB\ndefault_statistics_target: 10\n\nError Reporting and Logging\n\nlogging_collector: on\nlog_directory: /var/log/pgsql\nlog_filename: postgresql.log\nlog_rotation_age: 1d \nlog_rotation_size: 0\nlog_truncate_on_rotation: off\nlog_min_duration_statement: 20s\nlog_checkpoints: on\nlog_error_verbosity : verbose\nlog_connections: on\nlog_line_prefix: %t %p %c-%l %x %q %u, %d, %r, %a\nlog_lock_waits: on\n\nAutomatic Vacuuming\n\nlog_autovacuum_min_duration:1 min\nautovacuum_max_workers: 4\nautovacuum_freeze_max_age: 2000000000\nautovacuum_vacuum_cost_limit: 400\n\n\n>If so, that could be the explanation. Looks like you setup the test to\nmake this less likely, but checkpoints are tough to eliminate altogether.\n\n>-Does this happen on every test run?\nYes. It happen on every test run.\n\n>Is it at the same time?\nNo. It is not occurring at same time.There is no as such pattern.\n\n>-You can run \"top -bc\" to dump snapshots of what the system is doing every\nsecond. With some work you can then figure out what was actually happening\nduring the two seconds around when the throughput dropped.\n\nI have used vmstat, iostat, sar, pidstat, top etc.\n\nhere i am sending some snaps from above tolls logs.\n\n\n1. 
postgresql log\n From Server:\n2013-05-22 12:08:00 IST 19697 519c65a7.4cf1-1 0 LOG: 00000: checkpoint\nstarting: immediate force wait\n2013-05-22 12:08:00 IST 19697 519c65a7.4cf1-2 0 LOCATION: \nLogCheckpointStart, xlog.c:7638\n2013-05-22 12:08:04 IST 19697 519c65a7.4cf1-3 0 LOG: 00000: checkpoint\ncomplete: wrote 2320 buffers (0.4%); 0 transaction log file(s) added, 0\nremoved, 1 recycled; write=0.045 s, sync=3.606 s, total=4.058 s; sync\nfiles=48, longest=1.425 s, average=0.075 s\n\n2013-05-22 12:08:05 IST 20587 519c67cd.506b-3 0 postgres_user, pg_bench,\n172.26.127.101(33356), [unknown]LOG: 00000: connection authorized:\nuser=postgres_user database=pg_bench\n2013-05-22 12:08:05 IST 20587 519c67cd.506b-4 0 postgres_user, pg_bench,\n172.26.127.101(33356), [unknown]LOCATION: PerformAuthentication,\npostinit.c:230\n2013-05-22 12:13:05 IST 21486 519c68f9.53ee-1 0 [unknown], [unknown], ,\n[unknown]LOG: 00000: connection received: host=172.26.127.101 port=33362\n2013-05-22 12:13:05 IST 21486 519c68f9.53ee-2 0 [unknown], [unknown], ,\n[unknown]LOCATION: BackendInitialize, postmaster.c:3476\n2013-05-22 12:13:05 IST 21486 519c68f9.53ee-3 0 postgres_user, pg_bench,\n172.26.127.101(33362), [unknown]LOG: 00000: connection authorized:\nuser=postgres_user database=pg_bench\n\n2013-05-22 12:13:07 IST 19697 519c65a7.4cf1-5 0 LOG: 00000: checkpoint\nstarting: immediate force wait\n2013-05-22 12:13:07 IST 19697 519c65a7.4cf1-6 0 LOCATION: \nLogCheckpointStart, xlog.c:7638\n2013-05-22 12:13:07 IST 19697 519c65a7.4cf1-7 0 LOG: 00000: checkpoint\ncomplete: wrote 4 buffers (0.0%); 0 transaction log file(s) added, 0\nremoved, 234 recycled; write=0.012 s, sync=0.082 s, total=0.222 s; sync\nfiles=6, longest=0.028 s, average=0.013 s\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n From pgbench-Client:\n\n2013-05-22 12:08:08 IST 3988 519c67d0.f94-3 0 postgres_user, results,\n::1(59216), [unknown]LOG: 00000: connection authorized: user=postgres_user\ndatabase=results\n2013-05-22 12:08:08 IST 3988 519c67d0.f94-4 0 postgres_user, results,\n::1(59216), [unknown]LOCATION: PerformAuthentication, postinit.c:230\n2013-05-22 12:13:08 IST 4810 519c68fc.12ca-1 0 [unknown], [unknown], ,\n[unknown]LOG: 00000: connection received: host=::1 port=59343\n2013-05-22 12:13:08 IST 4810 519c68fc.12ca-2 0 [unknown], [unknown], ,\n[unknown]LOCATION: BackendInitialize, postmaster.c:3476\n\n==================================================================================\n2. 
iostat log\n From Server\n\n2013-05-22 12:09:13.563992\t05/22/2013 12:09:10 PM\n2013-05-22 12:09:13.564105\tDevice: tps Blk_read/s Blk_wrtn/s \nBlk_read Blk_wrtn\n2013-05-22 12:09:13.564187\tsda 3.00 0.00 32.00 \n0 32\n2013-05-22 12:09:13.564234\t\n*2013-05-22 12:09:14.563849\t05/22/2013 12:09:11 PM\n2013-05-22 12:09:15.003184\tDevice: tps Blk_read/s Blk_wrtn/s \nBlk_read Blk_wrtn\n2013-05-22 12:09:15.003253\tsda 0.00 0.00 0.00 \n0 0\n2013-05-22 12:09:15.003277\t*\n2013-05-22 12:09:15.563731\t05/22/2013 12:09:12 PM\n2013-05-22 12:09:15.563836\tDevice: tps Blk_read/s Blk_wrtn/s \nBlk_read Blk_wrtn\n2013-05-22 12:09:15.563894\tsda 16.00 0.00 144.00 \n0 144\n2013-05-22 12:09:15.563923\t\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n From pgbench-Client\n\n2013-05-22 12:09:13.746632\t05/22/2013 12:09:13 PM\n2013-05-22 12:09:13.746709\tDevice: tps Blk_read/s Blk_wrtn/s \nBlk_read Blk_wrtn\n2013-05-22 12:09:13.746753\tsda 0.00 0.00 0.00 \n0 0\n2013-05-22 12:09:13.746774\t\n*2013-05-22 12:09:14.746723\t05/22/2013 12:09:14 PM\n2013-05-22 12:09:15.003297\tDevice: tps Blk_read/s Blk_wrtn/s \nBlk_read Blk_wrtn\n2013-05-22 12:09:15.003382\tsda 104.00 0.00 77576.00 \n0 77576\n2013-05-22 12:09:15.003417\t*\n2013-05-22 12:09:15.747222\t05/22/2013 12:09:15 PM\n2013-05-22 12:09:15.747324\tDevice: tps Blk_read/s Blk_wrtn/s \nBlk_read Blk_wrtn\n2013-05-22 12:09:15.747392\tsda 1031.00 8.00 244832.00 \n8 244832\n2013-05-22 12:09:15.747423\t\n==================================================================================\n3. vmstat log\n From Server\n-------------------------------------------------procs\n-----------memory---------- ---swap-- -----io---- --system-- \n-----cpu------ ---timestamp---\n2013-05-22 12:09:13.601546\t54 0 51008 1640400 193400 28216852 0 0 \n0 16 90753 320892 63 24 13 0 0\t2013-05-22 12:09:10 IST\n*2013-05-22 12:09:14.602209\t84 0 51008 1640384 193400 28216864 0 0 \n0 0 91317 319178 64 25 11 0 0\t2013-05-22 12:09:11 IST*\n2013-05-22 12:09:15.602892\t72 0 51008 1640352 193404 28216876 0 0 \n0 72 60904 203523 39 15 45 0 0\t2013-05-22 12:09:12 IST\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n From pgbench-Client\n-----------------------------------------------procs\n-----------memory---------- ---swap-- -----io---- --system-- \n-----cpu------ ---timestamp---\n2013-05-22 12:09:13.777954\t 6 0 58640 3676280 62260 3838180 0 0 \n0 0 143224 13530 8 12 80 0 0\t2013-05-22 12:09:13 IST\n*2013-05-22 12:09:14.778444\t 0 3 58640 3666952 62260 3843848 0 0 \n0 74044 126646 12754 8 10 79 3 0\t2013-05-22 12:09:14 IST*\n2013-05-22 12:09:15.778965\t 5 0 58640 3663744 62272 3848352 0 0 \n4 87160 105091 8539 6 8 81 5 0\t2013-05-22 12:09:15 IST\n==================================================================================\n4. 
sar log\n From Server\n-------------------------------------------------------------------CPU \n%usr %nice %sys %iowait %steal %irq %soft %guest \n%idle\n2013-05-22 12:09:13.574923\t12:09:10 PM all 63.10 0.00 16.91 \n0.03 0.00 0.00 7.16 0.00 12.80\n*2013-05-22 12:09:14.575210\t12:09:11 PM all 64.09 0.00 \n17.09 0.00 0.00 0.00 7.42 0.00 11.40*\n2013-05-22 12:09:15.574179\t12:09:12 PM all 39.79 0.00 11.03 \n0.00 0.00 0.00 4.38 0.00 44.80\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n From pgbench-Client\n-------------------------------------------------------------------CPU \n%usr %nice %sys %iowait %steal %irq %soft %guest \n%idle\n2013-05-22 12:09:13.788155\t12:09:13 PM all 8.38 0.00 7.91 \n0.00 0.00 0.00 3.70 0.00 80.01\n*2013-05-22 12:09:14.788134\t12:09:14 PM all 7.66 0.00 \n6.98 3.00 0.00 0.00 3.39 0.00 78.97*\n2013-05-22 12:09:15.788362\t12:09:15 PM all 5.88 0.00 5.63 \n4.61 0.00 0.00 2.62 0.00 81.24\n==================================================================================\n5. top command (top -d 1)log\n From Server\n12:09:13\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n3362 root 20 0 0 0 0 S 1.0 0.0 34:05.66 kondemand/25 \n31517 postgres 20 0 15688 1932 952 R 1.0 0.0 0:28.35 top \n1 root 20 0 19348 1036 808 S 0.0 0.0 0:10.57 init \n\n*12:09:14\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n31517 postgres 20 0 15688 1932 952 R 2.0 0.0 0:28.37 top \n 1 root 20 0 19348 1036 808 S 0.0 0.0 0:10.57 init *\n\n12:09:15\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n31517 postgres 20 0 15688 1932 952 R 2.9 0.0 0:28.40 top \n143 root 20 0 0 0 0 S 1.0 0.0 1:04.72 events/12 \n1 root 20 0 19348 1036 808 S 0.0 0.0 0:10.57 init \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n From pgbench-Client\n12:09:13\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n19157 postgres 20 0 121m 4172 1872 R 98.7 0.1 5:53.45 python \n1 root 20 0 19348 580 404 S 0.0 0.0 0:01.41 init \n\n*12:09:14\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n19157 postgres 20 0 121m 4172 1872 R 99.6 0.1 5:54.46 python \n18946 postgres 20 0 15424 1676 940 R 2.0 0.0 0:18.97 top \n1 root 20 0 19348 580 404 S 0.0 0.0 0:01.41 init * \n\n12:09:15\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n19157 postgres 20 0 121m 4172 1872 R 98.7 0.1 5:55.46 python \n2658 root 20 0 4064 264 212 S 1.0 0.0 0:08.64 cpuspeed \n18946 postgres 20 0 15424 1676 940 R 1.0 0.0 0:18.98 top \n1 root 20 0 19348 580 404 S 0.0 0.0 0:01.41 init \n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgbench-spike-in-pgbench-results-graphs-while-testing-pg-hint-plan-performance-tp5756585p5756740.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 May 2013 06:22:22 -0700 (PDT)",
"msg_from": "sachin kotwal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench: spike in pgbench results(graphs) while testing\n\tpg_hint_plan performance"
}
] |
[
{
"msg_contents": "I am fairly new to squeezing performance out of Postgres, but I hope this\nmailing list can help me. I have read the instructions found at\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions and have tried to\nabide by them the best that I can. I am running \"PostgreSQL 9.1.7,\ncompiled by Visual C++ build 1500, 64-bit\" on an x64 Windows 7 Professional\nService Pack 1 machine with 8 GB of RAM. I installed this using the\ndownloadable installer. I am testing this using pgAdminIII but ultimately\nthis will be deployed within a Rails application. Here are the values of\nsome configuration parameters:\n\nshared_buffers = 1GB\ntemp_buffers = 8MB\nwork_mem = 10MB\nmaintenance_work_mem = 256MB\nrandom_page_cost = 1.2\ndefault_statistics_target = 10000\n\nTable schema:\n\nreads-- ~250,000 rows\nCREATE TABLE reads\n(\n id serial NOT NULL,\n device_id integer NOT NULL,\n value bigint NOT NULL,\n read_datetime timestamp without time zone NOT NULL,\n created_at timestamp without time zone NOT NULL,\n updated_at timestamp without time zone NOT NULL,\n CONSTRAINT reads_pkey PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE reads\n OWNER TO postgres;\n\nCREATE INDEX index_reads_on_device_id\n ON reads\n USING btree\n (device_id );\n\nCREATE INDEX index_reads_on_device_id_and_read_datetime\n ON reads\n USING btree\n (device_id , read_datetime );\n\nCREATE INDEX index_reads_on_read_datetime\n ON reads\n USING btree\n (read_datetime );\n\ndevices -- ~25,000 rows\nCREATE TABLE devices\n(\n id serial NOT NULL,\n serial_number character varying(20) NOT NULL,\n created_at timestamp without time zone NOT NULL,\n updated_at timestamp without time zone NOT NULL,\n CONSTRAINT devices_pkey PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE devices\n OWNER TO postgres;\n\nCREATE UNIQUE INDEX index_devices_on_serial_number\n ON devices\n USING btree\n (serial_number COLLATE pg_catalog.\"default\" );\n\npatient_devices -- ~25,000 rows\nCREATE TABLE patient_devices\n(\n id serial NOT NULL,\n patient_id integer NOT NULL,\n device_id integer NOT NULL,\n issuance_datetime timestamp without time zone NOT NULL,\n unassignment_datetime timestamp without time zone,\n issued_value bigint NOT NULL,\n created_at timestamp without time zone NOT NULL,\n updated_at timestamp without time zone NOT NULL,\n CONSTRAINT patient_devices_pkey PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE patient_devices\n OWNER TO postgres;\n\nCREATE INDEX index_patient_devices_on_device_id\n ON patient_devices\n USING btree\n (device_id );\n\nCREATE INDEX index_patient_devices_on_issuance_datetime\n ON patient_devices\n USING btree\n (issuance_datetime );\n\nCREATE INDEX index_patient_devices_on_patient_id\n ON patient_devices\n USING btree\n (patient_id );\n\nCREATE INDEX index_patient_devices_on_unassignment_datetime\n ON patient_devices\n USING btree\n (unassignment_datetime );\n\npatients -- ~1000 rows\nCREATE TABLE patients\n(\n id serial NOT NULL,\n first_name character varying(50) NOT NULL,\n last_name character varying(50) NOT NULL,\n created_at timestamp without time zone NOT NULL,\n updated_at timestamp without time zone NOT NULL,\n CONSTRAINT patients_pkey PRIMARY KEY (id )\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE patients\n OWNER TO postgres;\n\nFinally, this is the query I am running:\n\nSELECT first_name, last_name, serial_number, latest_read, value,\nlifetime_value, lifetime.patient_id\nFROM (\nSELECT DISTINCT patient_id, first_name, last_name, MAX(max_read)\nOVER(PARTITION BY patient_id) AS 
latest_read, SUM(value) OVER(PARTITION BY\npatient_id) AS value, first_value(serial_number) OVER(PARTITION BY\npatient_id ORDER BY max_read DESC) AS serial_number\n FROM (\nSELECT patient_id, first_name, last_name, value - issued_value AS value,\nserial_number, read_datetime, MAX(read_datetime) OVER (PARTITION BY\npatient_devices.id) AS max_read\nFROM reads\nINNER JOIN devices ON devices.id = reads.device_id\nINNER JOIN patient_devices ON patient_devices.device_id = devices.id\nAND read_datetime >= issuance_datetime\nAND read_datetime < COALESCE(unassignment_datetime , 'infinity'::timestamp)\nINNER JOIN patients ON patients.id = patient_devices.patient_id\nWHERE read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'\n) AS first WHERE read_datetime = max_read\n) AS filtered\nINNER JOIN (\nSELECT DISTINCT patient_id, SUM(value) AS lifetime_value\n FROM (\nSELECT patient_id, value - issued_value AS value, read_datetime,\nMAX(read_datetime) OVER (PARTITION BY patient_devices.id) AS max_read\nFROM reads\nINNER JOIN devices ON devices.id = reads.device_id\nINNER JOIN patient_devices ON patient_devices.device_id = devices.id\nAND read_datetime >= issuance_datetime\nAND read_datetime < COALESCE(unassignment_datetime , 'infinity'::timestamp)\n) AS first WHERE read_datetime = max_read GROUP BY patient_id\n) AS lifetime ON filtered.patient_id = lifetime.patient_id\n\nThe EXPLAIN (ANALYZE, BUFFERS) output can be found at the following link\nhttp://explain.depesz.com/s/7Zr. Ultimately what I want to do is to find a\nsum of values for each patient. The scenario is that each patient is\nassigned a device and they get incremental values on their device. Since\nthese values are incremental if a patient never switches devices, the\nreported value should be the last value for a patient. However, if a\npatient switches devices then the reported value should be the sum of the\nlast value for each device that the patient was assigned. This leads to\nthe conditions read_datetime >= issuance_datetime AND read_datetime <\nCOALESCE(unassignment_datetime , 'infinity'::timestamp). In addition I\nmust report the serial number of the last device that the patient was\nassigned (or is currently assigned). The only way I could come up with\ndoing that is first_value(serial_number) OVER(PARTITION BY patient_id ORDER\nBY max_read DESC) AS serial_number. Finally, I must report 2 values, one\nwith respect to a time range and one which is the lifetime value. In order\nto satisfy this requirement, I have to run essentially the same query twice\n(one with the WHERE time clause and one without) and INNER JOIN the\nresults. My questions are\n\n1. Can I make the query as I have constructed it faster by adding indices\nor changing any postgres configuration parameters?\n2. Can I modify the query to return the same results in a faster way?\n3. Can I modify my tables to make this query (which is the crux of my\napplication) run faster?\n\nThanks\n",
"msg_date": "Thu, 23 May 2013 10:19:50 -0700",
"msg_from": "Jonathan Morra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of complicated query"
},
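Not something proposed in the thread, but a common PostgreSQL idiom for "latest read per device assignment" that avoids the window-function-plus-self-filter pattern is DISTINCT ON. A sketch against the schema above, untested against this data:

SELECT DISTINCT ON (pd.id)
       pd.patient_id,
       d.serial_number,
       r.read_datetime,
       r.value - pd.issued_value AS value
FROM patient_devices pd
JOIN devices d ON d.id = pd.device_id
JOIN reads r   ON r.device_id = d.id
              AND r.read_datetime >= pd.issuance_datetime
              AND r.read_datetime <  COALESCE(pd.unassignment_datetime, 'infinity'::timestamp)
ORDER BY pd.id, r.read_datetime DESC;

Each patient_devices row then contributes exactly one output row, which an outer query or CTE can sum per patient.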
{
"msg_contents": "On 05/23/2013 10:19 AM, Jonathan Morra wrote:\n> I am fairly new to squeezing performance out of Postgres, but I hope \n> this mailing list can help me. I have read the instructions found at \n> http://wiki.postgresql.org/wiki/Slow_Query_Questions and have tried to \n> abide by them the best that I can. I am running \"PostgreSQL 9.1.7, \n> compiled by Visual C++ build 1500, 64-bit\" on an x64 Windows 7 \n> Professional Service Pack 1 machine with 8 GB of RAM.\n\nI'm not sure under what constraints you are operating but you will find \nmost people on the list will recommend running live systems on \nLinux/Unix for a variety of reasons.\n\n> CREATE TABLE reads\n> ...\n> ALTER TABLE reads\n> OWNER TO postgres;\n\nTo avoid future grief you should set up a user (see CREATE ROLE...) for \nyour database that is not the cluster superuser (postgres). I assume you \nset up a database (see CREATE DATABASE...) for your app. The base \ndatabases (postgres, template*) should be used for administrative \npurposes only.\n\n>\n> ...\n> Ultimately what I want to do is to find a sum of values for each \n> patient. The scenario is that each patient is assigned a device and \n> they get incremental values on their device. Since these values are \n> incremental if a patient never switches devices, the reported value \n> should be the last value for a patient. However, if a patient \n> switches devices then the reported value should be the sum of the last \n> value for each device that the patient was assigned.\n\nI'm afraid I'm a bit confused about what you are after due to switching \nbetween \"sum\" and \"last\".\n\nIt sounds like a patient is issued a device which takes a number of \nreadings. Do you want the sum of those readings for a given patient \nacross all devices they have been issued, the sum of readings for a \nspecific device, the most recent reading for a specific patient \nregardless of which device was in use for that reading, or the sum of \nthe most recent readings on each device issued to a specific patient?\n\nAre you looking to generate a report across all patients/devices or \nlookup information on a specific patient or device?\n\nCheers,\nSteve\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 10:47:38 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of complicated query"
},
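The role separation Steve recommends might look like this; the role, password and database names are placeholders:

CREATE ROLE app_user LOGIN PASSWORD 'change_me';
CREATE DATABASE app_db OWNER app_user;

-- then, connected to app_db, hand the existing tables over:
ALTER TABLE reads           OWNER TO app_user;
ALTER TABLE devices         OWNER TO app_user;
ALTER TABLE patient_devices OWNER TO app_user;
ALTER TABLE patients        OWNER TO app_user;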
{
"msg_contents": "Ultimately I'm going to deploy this to Heroku on a Linux machine (my tests\nhave so far indicated that Heroku is MUCH slower than my machine), but I\nwanted to get it fast on my local machine first. I agree with your role\npartitioning, however, this is only a dev machine.\n\nFor the sum vs. last, the idea is that each patient is issued a device and\nreads are recorded. The nature of the reads are that they are incremental,\nso if a patient never changes devices there is no need for a sum. However,\npatients will be changing devices, and the patient_device table records\nwhen each patient had a given device. What I want to sum up is the total\nvalue for a patient regardless of how many times they changed devices. In\norder to do this I have to sum up just the values of the last read for each\ndevice a patient was assigned to. This leads to the WHERE clause, WHERE\nread_datetime = max_read, and hence I'm only summing the last read for each\ndevice for each patient. Ultimately I want to report the values listed in\nthe outer select for each patient. I will use these values to run other\nqueries, but those queries are currently very quick (< 50ms) and so I'm not\nworried about them now.\n\n\nOn Thu, May 23, 2013 at 10:47 AM, Steve Crawford <\[email protected]> wrote:\n\n> On 05/23/2013 10:19 AM, Jonathan Morra wrote:\n>\n>> I am fairly new to squeezing performance out of Postgres, but I hope this\n>> mailing list can help me. I have read the instructions found at\n>> http://wiki.postgresql.org/**wiki/Slow_Query_Questions<http://wiki.postgresql.org/wiki/Slow_Query_Questions>and have tried to abide by them the best that I can. I am running\n>> \"PostgreSQL 9.1.7, compiled by Visual C++ build 1500, 64-bit\" on an x64\n>> Windows 7 Professional Service Pack 1 machine with 8 GB of RAM.\n>>\n>\n> I'm not sure under what constraints you are operating but you will find\n> most people on the list will recommend running live systems on Linux/Unix\n> for a variety of reasons.\n>\n> CREATE TABLE reads\n>> ...\n>>\n>> ALTER TABLE reads\n>> OWNER TO postgres;\n>>\n>\n> To avoid future grief you should set up a user (see CREATE ROLE...) for\n> your database that is not the cluster superuser (postgres). I assume you\n> set up a database (see CREATE DATABASE...) for your app. The base databases\n> (postgres, template*) should be used for administrative purposes only.\n>\n>\n>> ...\n>>\n>> Ultimately what I want to do is to find a sum of values for each patient.\n>> The scenario is that each patient is assigned a device and they get\n>> incremental values on their device. Since these values are incremental if\n>> a patient never switches devices, the reported value should be the last\n>> value for a patient. However, if a patient switches devices then the\n>> reported value should be the sum of the last value for each device that the\n>> patient was assigned.\n>>\n>\n> I'm afraid I'm a bit confused about what you are after due to switching\n> between \"sum\" and \"last\".\n>\n> It sounds like a patient is issued a device which takes a number of\n> readings. 
Do you want the sum of those readings for a given patient across\n> all devices they have been issued, the sum of readings for a specific\n> device, the most recent reading for a specific patient regardless of which\n> device was in use for that reading, or the sum of the most recent readings\n> on each device issued to a specific patient?\n>\n> Are you looking to generate a report across all patients/devices or lookup\n> information on a specific patient or device?\n>\n> Cheers,\n> Steve\n>\n>\n>\n",
"msg_date": "Thu, 23 May 2013 10:57:26 -0700",
"msg_from": "Jonathan Morra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of complicated query"
},
{
"msg_contents": ">>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence\nI'm only summing the last read for each device for each patient.\nIs \"reads\" table insert-only? Do you have updates/deletes of the\n\"historical\" rows?\n\n>>3. Can I modify my tables to make this query (which is the crux of my\napplication) run faster?\nCan you have a second \"reads\" table that stores only up to date values?\nThat will eliminate max-over completely, enable efficient usage in other\nqueries, and make your queries much easier to understand by humans and\ncomputers.\n\nPS. read_datetime = max_read is prone to \"what if two measurements have\nsame date\" errors.\nPPS. distinct MAX(max_read) OVER(PARTITION BY patient_id) AS latest_read\nlooks like a complete mess. Why don't you just use group by?\n\n>\nRegards,\nVladimir\n\n>>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence I'm only summing the last read for each device for each patient. Is \"reads\" table insert-only? Do you have updates/deletes of the \"historical\" rows?\n>>3. Can I modify my tables to make this query (which is the crux of my application) run faster?Can you have a second \"reads\" table that stores only up to date values?\n\nThat will eliminate max-over completely, enable efficient usage in other queries, and make your queries much easier to understand by humans and computers.PS. read_datetime = max_read is prone to \"what if two measurements have same date\" errors.\nPPS. distinct MAX(max_read) OVER(PARTITION BY patient_id) AS latest_read looks like a complete mess. Why don't you just use group by?\n\n\nRegards,Vladimir",
"msg_date": "Thu, 23 May 2013 23:23:37 +0400",
"msg_from": "Vladimir Sitnikov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of complicated query"
},
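Vladimir's "second reads table that stores only up-to-date values" could be maintained with a trigger along these lines. Everything here (table name, trigger name, tie-breaking behaviour) is an assumption, and it does not address the device-reassignment/issued_value point Jonathan raises later:

CREATE TABLE latest_reads (
    device_id     integer PRIMARY KEY,
    value         bigint NOT NULL,
    read_datetime timestamp without time zone NOT NULL
);

CREATE OR REPLACE FUNCTION keep_latest_read() RETURNS trigger AS $$
BEGIN
    -- overwrite only if the incoming read is at least as new
    UPDATE latest_reads
       SET value = NEW.value, read_datetime = NEW.read_datetime
     WHERE device_id = NEW.device_id
       AND read_datetime <= NEW.read_datetime;
    IF NOT FOUND THEN
        BEGIN
            INSERT INTO latest_reads (device_id, value, read_datetime)
            VALUES (NEW.device_id, NEW.value, NEW.read_datetime);
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- a concurrent insert or a newer row already won
        END;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER reads_keep_latest
AFTER INSERT ON reads
FOR EACH ROW EXECUTE PROCEDURE keep_latest_read();

With this in place, "current value per device" queries read latest_reads instead of scanning reads for MAX(read_datetime).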
{
"msg_contents": "1. Reads is constantly inserted upon. It should never be updated or\ndeleted.\n2. I suppose I can, but that will make my insertion logic very\ncomplicated. I cannot guarantee the order of any of this data, so I might\nget reads at any time and also get assignments at any time (historical as\nwell). I suppose I could do that, but I'd like to avoid it if at all\npossible.\n3. 2 measurements can have the same date, and that is fine. The problem\narises when the same device produces 2 reads at the same time and that\nisn't possible.\n4. I agree that a lot of this is a mess, however MAX(max_read)\nOVER(PARTITION BY patient_id) AS latest_read seems necessary as using a\ngroup by clause forces me to group by all elements I'm selecting, which I\ndon't want to do.\n\n\nOn Thu, May 23, 2013 at 12:23 PM, Vladimir Sitnikov <\[email protected]> wrote:\n\n> >>This leads to the WHERE clause, WHERE read_datetime = max_read, and\n> hence I'm only summing the last read for each device for each patient.\n> Is \"reads\" table insert-only? Do you have updates/deletes of the\n> \"historical\" rows?\n>\n> >>3. Can I modify my tables to make this query (which is the crux of my\n> application) run faster?\n> Can you have a second \"reads\" table that stores only up to date values?\n> That will eliminate max-over completely, enable efficient usage in other\n> queries, and make your queries much easier to understand by humans and\n> computers.\n>\n> PS. read_datetime = max_read is prone to \"what if two measurements have\n> same date\" errors.\n> PPS. distinct MAX(max_read) OVER(PARTITION BY patient_id) AS latest_read\n> looks like a complete mess. Why don't you just use group by?\n>\n>>\n> Regards,\n> Vladimir\n>\n\n1. Reads is constantly inserted upon. It should never be updated or deleted.2. I suppose I can, but that will make my insertion logic very complicated. I cannot guarantee the order of any of this data, so I might get reads at any time and also get assignments at any time (historical as well). I suppose I could do that, but I'd like to avoid it if at all possible.\n3. 2 measurements can have the same date, and that is fine. The problem arises when the same device produces 2 reads at the same time and that isn't possible.4. I agree that a lot of this is a mess, however MAX(max_read) OVER(PARTITION BY patient_id) AS latest_read seems necessary as using a group by clause forces me to group by all elements I'm selecting, which I don't want to do.\nOn Thu, May 23, 2013 at 12:23 PM, Vladimir Sitnikov <[email protected]> wrote:\n>>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence I'm only summing the last read for each device for each patient. \nIs \"reads\" table insert-only? Do you have updates/deletes of the \"historical\" rows?\n>>3. Can I modify my tables to make this query (which is the crux of my application) run faster?Can you have a second \"reads\" table that stores only up to date values?\n\n\nThat will eliminate max-over completely, enable efficient usage in other queries, and make your queries much easier to understand by humans and computers.PS. read_datetime = max_read is prone to \"what if two measurements have same date\" errors.\nPPS. distinct MAX(max_read) OVER(PARTITION BY patient_id) AS latest_read looks like a complete mess. Why don't you just use group by?\n\n\nRegards,Vladimir",
"msg_date": "Thu, 23 May 2013 12:43:55 -0700",
"msg_from": "Jonathan Morra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of complicated query"
},
{
"msg_contents": "On 05/23/2013 10:57 AM, Jonathan Morra wrote:\n> Ultimately I'm going to deploy this to Heroku on a Linux machine (my \n> tests have so far indicated that Heroku is MUCH slower than my \n> machine), but I wanted to get it fast on my local machine first. I \n> agree with your role partitioning, however, this is only a dev machine.\n>\n> For the sum vs. last, the idea is that each patient is issued a device \n> and reads are recorded. The nature of the reads are that they are \n> incremental, so if a patient never changes devices there is no need \n> for a sum. However, patients will be changing devices, and the \n> patient_device table records when each patient had a given device. \n> What I want to sum up is the total value for a patient regardless of \n> how many times they changed devices\n\nIf the reads are always incremented - that is the read you want is \nalways the largest read - then something along these lines might work \nwell and be more readable (untested code);\n\n-- distill out max value for each device\nwith device_maxreads as (\nselect\n device_id,\n max(value) as max_read\nfrom\n reads\ngroup by\n device_id)\n\n-- then sum into a totals for each patient\npatient_value as (\nselect\n p.patient_id,\n sum(max_read) patient_value\nfrom\n device_maxreads d\n join patient_devices p on p.device_id = d.device_id\ngroup by\n p.patient_id\n)\n\nselect\n ...whatever...\nfrom\n ...your tables.\n join patient_value p on p.patient_id = ...\n;\n\n\nIf the values increment and decrement or patients are issued devices at \noverlapping times (i.e. using two devices at one time) then the query \ngets more complicated but \"with...\" is still a likely usable construct.\n\nCheers,\nSteve\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 May 2013 13:01:48 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of complicated query"
},
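A filled-in version of the CTE sketch above, for reference. It is untested, and the join/column names (reads.device_id, reads.value, patient_devices.patient_id, patients.id) are assumptions taken from the queries quoted elsewhere in this thread; note also that the two CTEs must be chained onto one WITH list with a comma:

    WITH device_maxreads AS (
        SELECT device_id,
               MAX(value) AS max_read
        FROM   reads
        GROUP  BY device_id
    ),
    patient_value AS (
        SELECT p.patient_id,
               SUM(d.max_read) AS patient_value
        FROM   device_maxreads d
               JOIN patient_devices p ON p.device_id = d.device_id
        GROUP  BY p.patient_id
    )
    SELECT pt.first_name, pt.last_name, pv.patient_value
    FROM   patient_value pv
           JOIN patients pt ON pt.id = pv.patient_id;

As written it assumes each device's reads belong to a single patient; the assignment-window filtering from the original query would still need to be folded into the first CTE.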
{
"msg_contents": "I'm not sure I understand your proposed solution. There is also the case\nto consider where the same patient can be assigned the same device multiple\ntimes. In this case, the value may be reset at each assignment (hence the\nline value - issued_value AS value from the original query).\n\n\nOn Thu, May 23, 2013 at 1:01 PM, Steve Crawford <\[email protected]> wrote:\n\n> On 05/23/2013 10:57 AM, Jonathan Morra wrote:\n>\n>> Ultimately I'm going to deploy this to Heroku on a Linux machine (my\n>> tests have so far indicated that Heroku is MUCH slower than my machine),\n>> but I wanted to get it fast on my local machine first. I agree with your\n>> role partitioning, however, this is only a dev machine.\n>>\n>> For the sum vs. last, the idea is that each patient is issued a device\n>> and reads are recorded. The nature of the reads are that they are\n>> incremental, so if a patient never changes devices there is no need for a\n>> sum. However, patients will be changing devices, and the patient_device\n>> table records when each patient had a given device. What I want to sum up\n>> is the total value for a patient regardless of how many times they changed\n>> devices\n>>\n>\n> If the reads are always incremented - that is the read you want is always\n> the largest read - then something along these lines might work well and be\n> more readable (untested code);\n>\n> -- distill out max value for each device\n> with device_maxreads as (\n> select\n> device_id,\n> max(value) as max_read\n> from\n> reads\n> group by\n> device_id)\n>\n> -- then sum into a totals for each patient\n> patient_value as (\n> select\n> p.patient_id,\n> sum(max_read) patient_value\n> from\n> device_maxreads d\n> join patient_devices p on p.device_id = d.device_id\n> group by\n> p.patient_id\n> )\n>\n> select\n> ...whatever...\n> from\n> ...your tables.\n> join patient_value p on p.patient_id = ...\n> ;\n>\n>\n> If the values increment and decrement or patients are issued devices at\n> overlapping times (i.e. using two devices at one time) then the query gets\n> more complicated but \"with...\" is still a likely usable construct.\n>\n> Cheers,\n> Steve\n>\n\nI'm not sure I understand your proposed solution. There is also the case to consider where the same patient can be assigned the same device multiple times. In this case, the value may be reset at each assignment (hence the line value - issued_value AS value from the original query).\nOn Thu, May 23, 2013 at 1:01 PM, Steve Crawford <[email protected]> wrote:\nOn 05/23/2013 10:57 AM, Jonathan Morra wrote:\n\nUltimately I'm going to deploy this to Heroku on a Linux machine (my tests have so far indicated that Heroku is MUCH slower than my machine), but I wanted to get it fast on my local machine first. I agree with your role partitioning, however, this is only a dev machine.\n\nFor the sum vs. last, the idea is that each patient is issued a device and reads are recorded. The nature of the reads are that they are incremental, so if a patient never changes devices there is no need for a sum. However, patients will be changing devices, and the patient_device table records when each patient had a given device. 
What I want to sum up is the total value for a patient regardless of how many times they changed devices\n\n\nIf the reads are always incremented - that is the read you want is always the largest read - then something along these lines might work well and be more readable (untested code);\n\n-- distill out max value for each device\nwith device_maxreads as (\nselect\n device_id,\n max(value) as max_read\nfrom\n reads\ngroup by\n device_id)\n\n-- then sum into a totals for each patient\npatient_value as (\nselect\n p.patient_id,\n sum(max_read) patient_value\nfrom\n device_maxreads d\n join patient_devices p on p.device_id = d.device_id\ngroup by\n p.patient_id\n)\n\nselect\n ...whatever...\nfrom\n ...your tables.\n join patient_value p on p.patient_id = ...\n;\n\n\nIf the values increment and decrement or patients are issued devices at overlapping times (i.e. using two devices at one time) then the query gets more complicated but \"with...\" is still a likely usable construct.\n\nCheers,\nSteve",
"msg_date": "Thu, 23 May 2013 14:57:19 -0700",
"msg_from": "Jonathan Morra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of complicated query"
},
{
"msg_contents": "On 23/05/2013 22:57, Jonathan Morra wrote:\n> I'm not sure I understand your proposed solution. There is also the \n> case to consider where the same patient can be assigned the same \n> device multiple times. In this case, the value may be reset at each \n> assignment (hence the line value - issued_value AS value from the \n> original query).\n>\n\nPerhaps you could use triggers to help somewhat? At least for the \nlifetime part.\n\nFor a given assignment of a device to a patient, only the last value is \nuseful, so you can maintain that easily enough (a bit like a \nmaterialised view but before 9.3 I guess).\n\nBut, that might fix 'lifetime' but not some arbitrary windowed view. I \ncan see why an 'as at' end time is useful, but not why a start time is \nso useful: if a device has readings before the window but not in the \nwindow, is that 'no reading' or should the last reading prior to the \nwindow apply?\n\nIt also seems to me that the solution you have is hard to reason about. \nIts like a Haskell program done in one big inline fold rather than a \nbunch of 'where' clauses, and I find these cause significant brain overload.\n\nPerhaps you could break it out into identifiable chunks that work out \n(both for lifetime if not using triggers, and for your date range \notherwise) the readings that are not superceded (ie the last in the date \nbounds for a device assignment), and then work with those. Consider the \nCTE 'WITH queries' for doing this?\n\nIt seems to me that if you can do this, then the problem might be easier \nto express.\n\nFailing that, I'd be looking at using temporary tables, and forcing a \nseries of reduce steps using them, but then I'm a nasty old Sybase \nhacker at heart. ;-)\n\n\n\n\n\n\n\nOn 23/05/2013 22:57, Jonathan Morra\n wrote:\n\n\n\nI'm not sure I understand your proposed solution.\n There is also the case to consider where the same patient can\n be assigned the same device multiple times. In this case, the\n value may be reset at each assignment (hence the line value -\n issued_value AS value from the original query).\n\n\n\n\n Perhaps you could use triggers to help somewhat? At least for the\n lifetime part.\n\n For a given assignment of a device to a patient, only the last value\n is useful, so you can maintain that easily enough (a bit like a\n materialised view but before 9.3 I guess).\n\n But, that might fix 'lifetime' but not some arbitrary windowed\n view. I can see why an 'as at' end time is useful, but not why a\n start time is so useful: if a device has readings before the window\n but not in the window, is that 'no reading' or should the last\n reading prior to the window apply?\n\n It also seems to me that the solution you have is hard to reason\n about. Its like a Haskell program done in one big inline fold\n rather than a bunch of 'where' clauses, and I find these cause\n significant brain overload.\n\n Perhaps you could break it out into identifiable chunks that work\n out (both for lifetime if not using triggers, and for your date\n range otherwise) the readings that are not superceded (ie the last\n in the date bounds for a device assignment), and then work with\n those. Consider the CTE 'WITH queries' for doing this?\n\n It seems to me that if you can do this, then the problem might be\n easier to express.\n\n Failing that, I'd be looking at using temporary tables, and forcing\n a series of reduce steps using them, but then I'm a nasty old Sybase\n hacker at heart. ;-)",
"msg_date": "Fri, 24 May 2013 00:22:03 +0100",
"msg_from": "james <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of complicated query"
},
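A minimal sketch of the trigger idea above, assuming a hypothetical latest_reads table and the reads(device_id, read_datetime, value) columns used earlier in the thread. It keeps only the newest read per device and tolerates out-of-order inserts, but it is not hardened against two sessions inserting the first read for the same device at the same instant:

    CREATE TABLE latest_reads (
        device_id     integer PRIMARY KEY,
        read_datetime timestamp NOT NULL,
        value         bigint NOT NULL
    );

    CREATE OR REPLACE FUNCTION record_latest_read() RETURNS trigger AS $$
    BEGIN
        -- advance the stored row only if the incoming read is newer
        UPDATE latest_reads
           SET read_datetime = NEW.read_datetime, value = NEW.value
         WHERE device_id = NEW.device_id
           AND read_datetime < NEW.read_datetime;
        IF NOT FOUND THEN
            -- first read for this device, unless an equal-or-newer row already exists
            INSERT INTO latest_reads (device_id, read_datetime, value)
            SELECT NEW.device_id, NEW.read_datetime, NEW.value
            WHERE NOT EXISTS (SELECT 1 FROM latest_reads WHERE device_id = NEW.device_id);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER reads_keep_latest
        AFTER INSERT ON reads
        FOR EACH ROW EXECUTE PROCEDURE record_latest_read();

The lifetime figures could then be computed from latest_reads alone, while arbitrary date windows would still have to go to the full reads table.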
{
"msg_contents": "Sorry for the messy query, I'm very new to writing these complex queries.\n I'll try and make it easier to read by using WITH clauses. However, just\nto clarify, the WITH clauses only increase readability and not performance\nin any way, right?\n\n\nOn Thu, May 23, 2013 at 4:22 PM, james <[email protected]> wrote:\n\n> On 23/05/2013 22:57, Jonathan Morra wrote:\n>\n> I'm not sure I understand your proposed solution. There is also the\n> case to consider where the same patient can be assigned the same device\n> multiple times. In this case, the value may be reset at each assignment\n> (hence the line value - issued_value AS value from the original query).\n>\n>\n> Perhaps you could use triggers to help somewhat? At least for the\n> lifetime part.\n>\n> For a given assignment of a device to a patient, only the last value is\n> useful, so you can maintain that easily enough (a bit like a materialised\n> view but before 9.3 I guess).\n>\n> But, that might fix 'lifetime' but not some arbitrary windowed view. I\n> can see why an 'as at' end time is useful, but not why a start time is so\n> useful: if a device has readings before the window but not in the window,\n> is that 'no reading' or should the last reading prior to the window apply?\n>\n> It also seems to me that the solution you have is hard to reason about.\n> Its like a Haskell program done in one big inline fold rather than a bunch\n> of 'where' clauses, and I find these cause significant brain overload.\n>\n> Perhaps you could break it out into identifiable chunks that work out\n> (both for lifetime if not using triggers, and for your date range\n> otherwise) the readings that are not superceded (ie the last in the date\n> bounds for a device assignment), and then work with those. Consider the\n> CTE 'WITH queries' for doing this?\n>\n> It seems to me that if you can do this, then the problem might be easier\n> to express.\n>\n> Failing that, I'd be looking at using temporary tables, and forcing a\n> series of reduce steps using them, but then I'm a nasty old Sybase hacker\n> at heart. ;-)\n>\n>\n\nSorry for the messy query, I'm very new to writing these complex queries. I'll try and make it easier to read by using WITH clauses. However, just to clarify, the WITH clauses only increase readability and not performance in any way, right?\nOn Thu, May 23, 2013 at 4:22 PM, james <[email protected]> wrote:\n\n\nOn 23/05/2013 22:57, Jonathan Morra\n wrote:\n\n\n\nI'm not sure I understand your proposed solution.\n There is also the case to consider where the same patient can\n be assigned the same device multiple times. In this case, the\n value may be reset at each assignment (hence the line value -\n issued_value AS value from the original query).\n\n\n\n\n Perhaps you could use triggers to help somewhat? At least for the\n lifetime part.\n\n For a given assignment of a device to a patient, only the last value\n is useful, so you can maintain that easily enough (a bit like a\n materialised view but before 9.3 I guess).\n\n But, that might fix 'lifetime' but not some arbitrary windowed\n view. I can see why an 'as at' end time is useful, but not why a\n start time is so useful: if a device has readings before the window\n but not in the window, is that 'no reading' or should the last\n reading prior to the window apply?\n\n It also seems to me that the solution you have is hard to reason\n about. 
Its like a Haskell program done in one big inline fold\n rather than a bunch of 'where' clauses, and I find these cause\n significant brain overload.\n\n Perhaps you could break it out into identifiable chunks that work\n out (both for lifetime if not using triggers, and for your date\n range otherwise) the readings that are not superceded (ie the last\n in the date bounds for a device assignment), and then work with\n those. Consider the CTE 'WITH queries' for doing this?\n\n It seems to me that if you can do this, then the problem might be\n easier to express.\n\n Failing that, I'd be looking at using temporary tables, and forcing\n a series of reduce steps using them, but then I'm a nasty old Sybase\n hacker at heart. ;-)",
"msg_date": "Thu, 23 May 2013 17:21:00 -0700",
"msg_from": "Jonathan Morra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of complicated query"
},
{
"msg_contents": "I have been working on this query, and I was able to modify it and get it's\nrun time cut in half. Here's where it is right now:\n\nSELECT first_name, last_name, serial_number, latest_read, value,\nlifetime_value, lifetime.patient_id\nFROM (\nSELECT DISTINCT patient_id, first_name, last_name, MAX(read_datetime)\nOVER(PARTITION BY patient_id) AS latest_read\n, SUM(value) OVER(PARTITION BY patient_id) AS value,\nfirst_value(serial_number) OVER(PARTITION BY patient_id ORDER BY\nread_datetime DESC) AS serial_number\n FROM (\nSELECT patient_devices.device_id, patient_id, MAX(value - issued_value) AS\nvalue, MAX(read_datetime) AS read_datetime\nFROM read_reads\nINNER JOIN patient_devices ON patient_devices.device_id =\nread_reads.device_id\nAND read_datetime >= issuance_datetime\nAND read_datetime < COALESCE(unassignment_datetime , 'infinity'::timestamp)\nWHERE read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'\nGROUP BY patient_devices.id\n) AS first\nINNER JOIN devices ON devices.id = device_id\nINNER JOIN patients ON patient_id = patients.id\n) AS filtered\nINNER JOIN (\nSELECT patient_id, SUM(value) AS lifetime_value\n FROM (\nSELECT patient_id, MAX(value - issued_value) AS value FROM read_reads\nINNER JOIN patient_devices ON patient_devices.device_id =\nread_reads.device_id\nAND read_datetime >= issuance_datetime\nAND read_datetime < COALESCE(unassignment_datetime , 'infinity'::timestamp)\nGROUP BY patient_devices.id\n) AS first GROUP BY patient_id\n) AS lifetime ON filtered.patient_id = lifetime.patient_id\n\nI think the key to cutting it down was moving some of the joins up a level.\n Even though this is faster, I'd still like to cut it down a bunch more (as\nthis will be run a lot in my application). Any more insight would be\ngreatly appreciated. A summary of explain (analyze, buffers) can be found\nat http://explain.depesz.com/s/qx7f.\n\nThanks\n\n\nOn Thu, May 23, 2013 at 5:21 PM, Jonathan Morra <[email protected]> wrote:\n\n> Sorry for the messy query, I'm very new to writing these complex queries.\n> I'll try and make it easier to read by using WITH clauses. However, just\n> to clarify, the WITH clauses only increase readability and not performance\n> in any way, right?\n>\n>\n> On Thu, May 23, 2013 at 4:22 PM, james <[email protected]>wrote:\n>\n>> On 23/05/2013 22:57, Jonathan Morra wrote:\n>>\n>> I'm not sure I understand your proposed solution. There is also the\n>> case to consider where the same patient can be assigned the same device\n>> multiple times. In this case, the value may be reset at each assignment\n>> (hence the line value - issued_value AS value from the original query).\n>>\n>>\n>> Perhaps you could use triggers to help somewhat? At least for the\n>> lifetime part.\n>>\n>> For a given assignment of a device to a patient, only the last value is\n>> useful, so you can maintain that easily enough (a bit like a materialised\n>> view but before 9.3 I guess).\n>>\n>> But, that might fix 'lifetime' but not some arbitrary windowed view. 
I\n>> can see why an 'as at' end time is useful, but not why a start time is so\n>> useful: if a device has readings before the window but not in the window,\n>> is that 'no reading' or should the last reading prior to the window apply?\n>>\n>> It also seems to me that the solution you have is hard to reason about.\n>> Its like a Haskell program done in one big inline fold rather than a bunch\n>> of 'where' clauses, and I find these cause significant brain overload.\n>>\n>> Perhaps you could break it out into identifiable chunks that work out\n>> (both for lifetime if not using triggers, and for your date range\n>> otherwise) the readings that are not superceded (ie the last in the date\n>> bounds for a device assignment), and then work with those. Consider the\n>> CTE 'WITH queries' for doing this?\n>>\n>> It seems to me that if you can do this, then the problem might be easier\n>> to express.\n>>\n>> Failing that, I'd be looking at using temporary tables, and forcing a\n>> series of reduce steps using them, but then I'm a nasty old Sybase hacker\n>> at heart. ;-)\n>>\n>>\n>\n\nI have been working on this query, and I was able to modify it and get it's run time cut in half. Here's where it is right now:SELECT first_name, last_name, serial_number, latest_read, value, lifetime_value, lifetime.patient_id\nFROM ( SELECT DISTINCT patient_id, first_name, last_name, MAX(read_datetime) OVER(PARTITION BY patient_id) AS latest_read , SUM(value) OVER(PARTITION BY patient_id) AS value, first_value(serial_number) OVER(PARTITION BY patient_id ORDER BY read_datetime DESC) AS serial_number\n FROM ( SELECT patient_devices.device_id, patient_id, MAX(value - issued_value) AS value, MAX(read_datetime) AS read_datetime\n FROM read_reads INNER JOIN patient_devices ON patient_devices.device_id = read_reads.device_id AND read_datetime >= issuance_datetime\n AND read_datetime < COALESCE(unassignment_datetime , 'infinity'::timestamp) WHERE read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'\n\t\tGROUP BY patient_devices.id ) AS first INNER JOIN devices ON devices.id = device_id\n INNER JOIN patients ON patient_id = patients.id) AS filteredINNER JOIN ( SELECT patient_id, SUM(value) AS lifetime_value\n FROM ( SELECT patient_id, MAX(value - issued_value) AS value FROM read_reads\n INNER JOIN patient_devices ON patient_devices.device_id = read_reads.device_id AND read_datetime >= issuance_datetime\n AND read_datetime < COALESCE(unassignment_datetime , 'infinity'::timestamp)\t\tGROUP BY patient_devices.id\n ) AS first GROUP BY patient_id) AS lifetime ON filtered.patient_id = lifetime.patient_idI think the key to cutting it down was moving some of the joins up a level. Even though this is faster, I'd still like to cut it down a bunch more (as this will be run a lot in my application). Any more insight would be greatly appreciated. A summary of explain (analyze, buffers) can be found at http://explain.depesz.com/s/qx7f.\nThanksOn Thu, May 23, 2013 at 5:21 PM, Jonathan Morra <[email protected]> wrote:\nSorry for the messy query, I'm very new to writing these complex queries. I'll try and make it easier to read by using WITH clauses. 
However, just to clarify, the WITH clauses only increase readability and not performance in any way, right?\n\nOn Thu, May 23, 2013 at 4:22 PM, james <[email protected]> wrote:\n\n\nOn 23/05/2013 22:57, Jonathan Morra\n wrote:\n\n\n\nI'm not sure I understand your proposed solution.\n There is also the case to consider where the same patient can\n be assigned the same device multiple times. In this case, the\n value may be reset at each assignment (hence the line value -\n issued_value AS value from the original query).\n\n\n\n\n Perhaps you could use triggers to help somewhat? At least for the\n lifetime part.\n\n For a given assignment of a device to a patient, only the last value\n is useful, so you can maintain that easily enough (a bit like a\n materialised view but before 9.3 I guess).\n\n But, that might fix 'lifetime' but not some arbitrary windowed\n view. I can see why an 'as at' end time is useful, but not why a\n start time is so useful: if a device has readings before the window\n but not in the window, is that 'no reading' or should the last\n reading prior to the window apply?\n\n It also seems to me that the solution you have is hard to reason\n about. Its like a Haskell program done in one big inline fold\n rather than a bunch of 'where' clauses, and I find these cause\n significant brain overload.\n\n Perhaps you could break it out into identifiable chunks that work\n out (both for lifetime if not using triggers, and for your date\n range otherwise) the readings that are not superceded (ie the last\n in the date bounds for a device assignment), and then work with\n those. Consider the CTE 'WITH queries' for doing this?\n\n It seems to me that if you can do this, then the problem might be\n easier to express.\n\n Failing that, I'd be looking at using temporary tables, and forcing\n a series of reduce steps using them, but then I'm a nasty old Sybase\n hacker at heart. ;-)",
"msg_date": "Tue, 28 May 2013 07:43:32 -0700",
"msg_from": "Jonathan Morra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of complicated query"
},
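The reworked query above can also be expressed with CTEs, which tends to read more clearly. This is an untested sketch using the same table and column names as the query above, and it omits the serial_number and lifetime columns for brevity; on 9.1/9.2 each CTE is materialized, so the resulting plan should be compared against the original:

    WITH per_assignment AS (
        SELECT pd.id AS assignment_id,
               pd.patient_id,
               MAX(r.value - pd.issued_value) AS value,
               MAX(r.read_datetime)           AS read_datetime
        FROM   read_reads r
               JOIN patient_devices pd
                 ON pd.device_id = r.device_id
                AND r.read_datetime >= pd.issuance_datetime
                AND r.read_datetime <  COALESCE(pd.unassignment_datetime, 'infinity'::timestamp)
        WHERE  r.read_datetime BETWEEN '2012-01-01 10:30:01' AND '2013-05-18 03:03:42'
        GROUP  BY pd.id
    ),
    per_patient AS (
        SELECT patient_id,
               SUM(value)         AS value,
               MAX(read_datetime) AS latest_read
        FROM   per_assignment
        GROUP  BY patient_id
    )
    SELECT pt.first_name, pt.last_name, pp.latest_read, pp.value
    FROM   per_patient pp
           JOIN patients pt ON pt.id = pp.patient_id;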
{
"msg_contents": "On 05/23/2013 05:21 PM, Jonathan Morra wrote:\n> Sorry for the messy query, I'm very new to writing these complex \n> queries. I'll try and make it easier to read by using WITH clauses. \n> However, just to clarify, the WITH clauses only increase readability \n> and not performance in any way, right?\n\nIt depends. The planner is a tricky beast and sometimes rewriting a \nseeming identical query will result in a much more (or less) efficient \nplan. A classic case was the difference between ....where foo in (select \nbar from...)... vs. where exists (select 1 from bar where...).... In an \nideal world the planner would figure out that both are the same and \noptimize accordingly but there was a point where one was typically more \nefficient then it switched to the other being better for the planner. I \ndon't recall the current state.\n\nCasting can be important - sometimes the planner needs a \"nudge\" to use \nan index on, say, a varchar column being compared to, perhaps, a text \nvalue or column in which case casting to the exact data-type being \nindexed can be a big win.\n\nCheers,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 May 2013 09:04:38 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of complicated query"
}
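To make the planner remarks above concrete, the two shapes being compared are roughly (hypothetical tables foo and bar):

    SELECT * FROM foo WHERE foo.bar_id IN (SELECT bar.id FROM bar WHERE bar.flag);

    SELECT * FROM foo WHERE EXISTS (SELECT 1 FROM bar WHERE bar.id = foo.bar_id AND bar.flag);

And a typical example of the casting nudge, assuming a hypothetical orders(id bigint) table with a btree index on id: comparing against a numeric literal usually cannot use that index, while matching the column's type can:

    EXPLAIN SELECT * FROM orders WHERE id = 500000::numeric;  -- likely seq scan
    EXPLAIN SELECT * FROM orders WHERE id = 500000::bigint;   -- index scan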
] |
[
{
"msg_contents": "serverdb=# set enable_hashjoin=off;SETserverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT'; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------Aggregate (cost=7765563.69..7765563.70 rows=1 width=0) Nested Loop (cost=0.00..7765555.35 rows=3336 width=0) -> Index Scan using idx_sars_acts_run_algorithm on sars_acts_run tr1_ (cost=0.00..44.32 rows=650 width=8) Index Cond: ((algorithm)::text = 'SMAT'::text) -> Index Scan using idx_sars_acts_run_id_end_time on sars_acts this_ (cost=0.00..11891.29 rows=4452 width=8) Index Cond: (SARS_RUN_ID=tr1_.ID)(6 rows)serverdb=# \\timingTIming is on.serverdb=# select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT'; y0_------1481710(1 row)Time: 85069.416 ms < 1.4 minutes <-- not great, but much better!Subsequently, runs in the milliseconds once cached.But what negative impact is disabling hash joins?Sorry, I just executed the explain without the analyze, I'll send out the \"explain analyze\" next reply.thanksFreddie\n\n\n-------- Original Message --------\nSubject: Re: [PERFORM] Very slow inner join query Unacceptable latency.\nFrom: Jeff Janes <[email protected]>\nDate: Wed, May 22, 2013 5:17 pm\nTo: [email protected]\nCc: Jaime Casanova <[email protected]>, psql performance list\n<[email protected]>, Postgres General\n<[email protected]>\n\nOn Wed, May 22, 2013 at 7:41 AM, <[email protected]> wrote: PostgreSQL 9.1.6 on linux>From the numbers in your attached plan, it seems like it should be doing a nested loop from the 580 rows (it thinks) that match in SARS_ACTS_RUN against the index on sars_run_id to pull out the 3297 rows (again, it think, though it is way of there). I can't see why it would not do that. There were some planner issues in the early 9.2 releases that caused very large indexes to be punished, but I don't think those were in 9.1 Could you \"set enable_hashjoin to off\" and post the \"explain analyze\" that that gives? Cheers, Jeff \n\n\n",
"msg_date": "Thu, 23 May 2013 10:21:28 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
},
{
"msg_contents": "On Thu, May 23, 2013 at 12:21 PM, <[email protected]> wrote:\n>\n> But what negative impact is disabling hash joins?\n>\n\ndoing it just for a single query, could be a tool for solving\nparticular problems.\nsetting it in postgresql.conf, therefore affecting all queries, is\nlike using a hammer to change tv channel... it will cause more\nproblems than the one it solves.\n\nwhat you can do is:\n\n1) execute:\n\nSET enable_hashjoin TO OFF;\nSELECT here\nRESET enable_hashjoin TO ON;\n\n2) in a function:\n\nCREATE FUNCTION do_something() RETURNS bigint AS\n$$\n SELECT here\n$$ LANGUAGE sql SET enable_hashjoin TO OFF STABLE;\n\n--\nJaime Casanova www.2ndQuadrant.com\nProfessional PostgreSQL: Soporte 24x7 y capacitación\nPhone: +593 4 5107566 Cell: +593 987171157\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 23 May 2013 18:34:33 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
},
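If the statement already runs inside a transaction, SET LOCAL is another way to keep the override scoped: it reverts automatically at commit or rollback, so nothing else on the connection is affected (same query as above, nothing new assumed):

    BEGIN;
    SET LOCAL enable_hashjoin = off;
    SELECT count(*) AS y0_
    FROM   SARS_ACTS this_
           INNER JOIN SARS_ACTS_RUN tr1_ ON this_.SARS_RUN_ID = tr1_.ID
    WHERE  tr1_.ALGORITHM = 'SMAT';
    COMMIT;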
{
"msg_contents": "\nOn Thursday, May 23, 2013 10:51 PM fburgess wrote:\n> serverdb=# set enable_hashjoin=off;\n> SET\n> serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT';\n\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=7765563.69..7765563.70 rows=1 width=0) \n> Nested Loop (cost=0.00..7765555.35 rows=3336 width=0)\n> -> Index Scan using idx_sars_acts_run_algorithm on sars_acts_run tr1_ (cost=0.00..44.32 rows=650 width=8) \n> Index Cond: ((algorithm)::text = 'SMAT'::text)\n> -> Index Scan using idx_sars_acts_run_id_end_time on sars_acts this_ (cost=0.00..11891.29 rows=4452 width=8) \n> Index Cond: (SARS_RUN_ID=tr1_.ID)\n>(6 rows)\n\n>serverdb=# \\timing\n>TIming is on.\n\n>serverdb=# select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT';\n> y0_\n>------\n>1481710\n>(1 row)\n\n> Time: 85069.416 ms < 1.4 minutes <-- not great, but much better!\n\n> Subsequently, runs in the milliseconds once cached.\n\nIf I see the plan from your other mail as below where Hash join is selected, the cost of Nested Loop is much more, that is the reason why optimizer would have selected \nHash Join. \n\nserverdb=# explain analyze select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=3983424.05..3983424.06 rows=1 width=0) (actual time=1358298.003..1358298.004 rows=1 loops=1)\n -> Hash Join (cost=44.93..3983415.81 rows=3297 width=0) (actual time=2593.768..1358041.205 rows 1481710 loops=1)\n\n\nIt is quite surprising that after optimizer decided the cost of some plan (Hash Join) to be lower but actual execution cost of same is more. \nThere might be some problem with cost calculation model of Hash Join for some cases.\n\nBy the way which version of PostgreSQL you are using?\n\n> But what negative impact is disabling hash joins?\n\nI think using it as a temporary fix might be okay, but keeping such code in your application might be risky for you, because as the data changes in your tables, it could be quite possible that\nin future Hash Join might be the best and cheapest way.\n\nCan you try reproducing it with small data or else can you attach your schema and data for the tables/indexes used in query?\n\nWith Regards,\nAmit Kapila.\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 24 May 2013 10:44:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
}
] |
[
{
"msg_contents": "1.) Server settingmemory: 32960116kB = 32GB2.) Current Postgresql configuration settings of note in my environment.enable_hashjoin=offwork_mem = 16MB #random_page_cost-4.0 <- defaultmaintenance_work_mem=256MBshared_buffers = 8GBserverdb=# explain analyze select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT'; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------Aggregate (cost=5714258.72..5714258.73 rows=1 width=0) (actual time=54402.148..54402.148 rows=1 loops=1) Nested Loop (cost=0.00..5714253.25 rows=2188 width=0) (actual time=5.920..54090.676 rows=1481710 loops=1) -> Index Scan using idx_SARS_ACTS_run_algorithm on SARS_ACTS_run tr1_ (cost=0.00..32.71 rows=442 width=8) (actual time=1.423..205.256 rows=441 loops=1) Index Cond: ((algorithm)::text = 'SMAT'::text) -> Index Scan using idx_SARS_ACTS_run_id_end_time on SARS_ACTS this_ (cost=0.00..12874.40 rows=4296 width=8) (actual time=749..121.125 rows=3360 loops=441) Index Cond: (SARS_RUN_ID=tr1_.ID)Total runtime: 54402.212 ms <- 54 seconds(7 rows)3.) Setting the recommended parametersserverdb=# set work_mem='500MB';SETserverdb=# set random_page_cost=1.2;SETserverdb=# explain analyze select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT'; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------Aggregate (cost=1754246.85..1754246.86 rows=1 width=0) (actual time=1817.644..1817.644 rows=1 loops=1) Nested Loop (cost=0.00..1754241.38 rows=2188 width=0) (actual time=0.135..1627.954 rows=1481710 loops=1) -> Index Scan using idx_SARS_ACTS_run_algorithm on SARS_ACTS_run tr1_ (cost=0.00..22.40 rows=442 width=8) (actual time=0.067..0.561 rows=441 loops=1) Index Cond: ((algorithm)::text = 'SMAT'::text) -> Index Scan using idx_SARS_ACTS_run_id_end_time on SARS_ACTS this_ (cost=0.00..3915.12 rows=4296 width=8) (actual time=0.008..2.972 rows=3360 loops=441) Index Cond: (SARS_RUN_ID=tr1_.ID)Total runtime: 1817.695 ms 1.8 seconds <- very good response time improvement(7 rows)4.) Now toggling the enable_hashjoin, I suspect the plan is cached, so these results may be suspect.serverdb=# set enable_hashjoin=on;SETserverdb=# explain analyze select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT'; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------Aggregate (cost=1754246.85..1754246.86 rows=1 width=0) (actual time=1606.683..1606.683 rows=1 loops=1) Nested Loop (cost=0.00..1754241.38 rows=2188 width=0) (actual time=0.136..1442.463 rows=1481710 loops=1) -> Index Scan using idx_SARS_ACTS_run_algorithm on SARS_ACTS_run tr1_ (cost=0.00..22.40 rows=442 width=8) (actual time=0.068..0.591 rows=441 loops=1) Index Cond: ((algorithm)::text = 'SMAT'::text) -> Index Scan using idx_SARS_ACTS_run_id_end_time on SARS_ACTS this_ (cost=0.00..3915.12 rows=4296 width=8) (actual time=0.007..2.659 rows=3360 loops=441) Index Cond: (SARS_RUN_ID=tr1_.ID)Total runtime: 1606.728 ms 1.6 seconds <- very good response time improvement(7 rows)Questions:Any concerns with setting these conf variables you recommended; work_mem, random_page_cost dbserver wide (in postgresql,conf)? 
Thanks so much!!!\n\n\n-------- Original Message --------\nSubject: Re: [GENERAL] [PERFORM] Very slow inner join query Unacceptable\nlatency.\nFrom: Scott Marlowe <[email protected]>\nDate: Thu, May 23, 2013 11:16 pm\nTo: [email protected]\nCc: Jaime Casanova <[email protected]>, psql performance list\n<[email protected]>, Postgres General\n<[email protected]>\n\nLooking at the execution plan makes me wonder what your work_mem is\nset to. Try cranking it up to test and lowering random_page_cost:\n\nset work_mem='500MB';\nset random_page_cost=1.2;\nexplain analyze select ...\n\nand see what you get.\n\n\n",
"msg_date": "Fri, 24 May 2013 14:44:28 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
},
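If changing postgresql.conf server-wide is a concern, both settings can instead be scoped to the role or database that runs the reporting workload (supported on 9.1); a sketch assuming a hypothetical reporting_user role:

    ALTER ROLE reporting_user SET work_mem = '256MB';
    ALTER ROLE reporting_user SET random_page_cost = 1.2;
    -- or for every connection to this database:
    ALTER DATABASE serverdb SET random_page_cost = 1.2;

New sessions for that role or database pick the values up automatically, while the rest of the cluster keeps the defaults.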
{
"msg_contents": "On Fri, May 24, 2013 at 3:44 PM, <[email protected]> wrote:\n\n> Total runtime: 1606.728 ms 1.6 seconds <- very good response time\n> improvement\n>\n> (7 rows)\n>\n> Questions:\n>\n> Any concerns with setting these conf variables you recommended; work_mem,\n> random_page_cost dbserver wide (in postgresql,conf)?\n>\n> Thanks so much!!!\n\nYes 500MB is pretty high especially if you have a lot of connections.\nTry it with it back down to 16MB and see how it does. Work mem is per\nsort so a setting as high as 500MB can exhaust memory on the machine\nunder heavy load.\n\n--\nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 24 May 2013 16:03:39 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
}
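The per-sort arithmetic behind that warning: with, say, 100 backends each running a query that needs two or three sorts or hashes, a 500MB work_mem allows a worst case on the order of 100 x 3 x 500MB, roughly 150GB, far beyond the 32GB on this server, whereas 16MB caps the same worst case near 5GB. A common compromise is to leave the global value small and raise it only in the session (or role) that runs the heavy report:

    SET work_mem = '256MB';          -- this session only
    EXPLAIN ANALYZE SELECT ... ;     -- the reporting query
    RESET work_mem;                  -- back to the server default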
] |
[
{
"msg_contents": "I am using postgresql 9.2, and we are getting error like -\nSystem.Data.EntityException: The underlying provider failed on Open. --->\nDevart.Data.PostgreSql.PgSqlException: Stream already closed!!!. I have\nturned the autovacuum on but my reporting query is failing. When the manual\nfull autovacuum is performed along with analyse the reporting query works. \nPlease let me know how to deal with this situation.\n\nThank you\nA.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Reporting-query-failing-tp5756840.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 25 May 2013 06:46:24 -0700 (PDT)",
"msg_from": "aup20 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reporting query failing"
},
{
"msg_contents": "Hi!\n\nOn 25.5.2013 15:46, aup20 wrote:\n> I am using postgresql 9.2, and we are getting error like - \n> System.Data.EntityException: The underlying provider failed on Open. \n> ---> Devart.Data.PostgreSql.PgSqlException: Stream already closed!!!.\n> I have turned the autovacuum on but my reporting query is failing.\n> When the manual full autovacuum is performed along with analyse the\n> reporting query works. Please let me know how to deal with this\n> situation.\n\nWe need substantially more info to be able to help you.\n\n1) What does the PostgreSQL log say?\n\n2) What query are you running? Post the SQL query, please.\n\n3) Was this happening since the beginning, or did that start recently?\n\n4) Are you able to reproduce the issues when running the query using\n psql (or other client, i.e. not through dotconnect)?\n\n5) I don't understand how is this related to autovacuum (or vacuum in\n general). Why have you disabled the autovacuum in the first place?\n\n6) I assume all this is on Windows (as you're using dotconnect). Is\n that correct? Please describe the environment a bit (versions etc.)\n It the application running on the same system as the database?\n\nkind regards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 25 May 2013 23:00:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reporting query failing"
}
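For question 5 above, a quick way to see whether autovacuum is actually processing the tables involved is to check the per-table statistics (standard views, nothing schema-specific assumed):

    SELECT relname, n_live_tup, n_dead_tup,
           last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
    FROM   pg_stat_user_tables
    ORDER  BY n_dead_tup DESC
    LIMIT  20;

If last_autovacuum and last_autoanalyze stay NULL while n_dead_tup keeps growing, autovacuum is not keeping up (or is disabled for those tables), which would explain why a manual VACUUM/ANALYZE changes the query's behaviour.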
] |
[
{
"msg_contents": "Postgres 9.1.2 on Ubuntu 12.04\n\nAny reason why a select by primary key would be slower than a select that\nincludes an ORDER BY? I was really hoping using the primary key would give\nme a boost.\n\nI stopped the server and cleared the O/S cache using \"sync; echo 3 >\n/proc/sys/vm/drop_caches\" between the runs.\n\n\n\ntest=# VACUUM ANALYZE test_select;\nVACUUM\n\n(stopped postgres; reset O/S cache; started postgres)\n\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER\nBY key1, key2, key3, id LIMIT 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\nrows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41895.49\nrows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n Index Cond: (key1 >= 500000)\n Total runtime: 12.678 ms\n\n(stopped postgres; reset O/S cache; started postgres)\n\ntest=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1\nwidth=21) (actual time=31.396..31.398 rows=1 loops=1)\n Index Cond: (id = 500000)\n Total runtime: 31.504 ms\n\n\n\nSchema:\n\ntest=# \\d test_select\n Table \"public.test_select\"\n Column | Type | Modifiers\n\n--------+--------------+----------------------------------------------------------\n id | integer | not null default\nnextval('test_select_id_seq'::regclass)\n key1 | integer |\n key2 | integer |\n key3 | integer |\n data | character(4) |\nIndexes:\n \"test_select_pkey\" PRIMARY KEY, btree (id)\n \"my_key\" btree (key1, key2, key3, id)\n\ntest=#\n\n\n\nSample data:\n\ntest=# SELECT * FROM test_select LIMIT 10;\n id | key1 | key2 | key3 | data\n----+--------+--------+--------+------\n 1 | 984966 | 283954 | 772063 | x\n 2 | 817668 | 393533 | 924888 | x\n 3 | 751039 | 798753 | 454309 | x\n 4 | 128505 | 329643 | 280553 | x\n 5 | 105600 | 257225 | 710015 | x\n 6 | 323891 | 615614 | 83206 | x\n 7 | 194054 | 63506 | 353171 | x\n 8 | 212068 | 881225 | 271804 | x\n 9 | 644180 | 26693 | 200738 | x\n 10 | 136586 | 498699 | 554417 | x\n(10 rows)\n\n\n\n\nHere's how I populated the table:\n\nimport psycopg2\n\nconn = psycopg2.connect('dbname=test')\n\ncur = conn.cursor()\n\ndef random_int():\n n = 1000000\n return random.randint(0,n)\n\ndef random_key():\n return random_int(), random_int(), random_int()\n\ndef create_table():\n cur.execute('''\n DROP TABLE IF EXISTS test_select;\n\n CREATE TABLE test_select (\n id SERIAL PRIMARY KEY,\n key1 INTEGER,\n key2 INTEGER,\n key3 INTEGER,\n data char(4)\n );\n ''')\n conn.commit()\n\n n = 1000000\n for i in range(n):\n cur.execute(\"INSERT INTO test_select(key1, key2, key3, data)\nVALUES(%s, %s, %s, 'x')\", random_key())\n conn.commit()\n\n cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n conn.commit()\n\ncreate_table()\n\nPostgres 9.1.2 on Ubuntu 12.04Any reason why a select by primary key would be slower than a select that includes an ORDER BY? I was really hoping using the primary key would give me a boost. 
\nI stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n\ntest=# VACUUM ANALYZE test_select;VACUUM(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1) -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n Index Cond: (key1 >= 500000) Total runtime: 12.678 ms(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1) Index Cond: (id = 500000) Total runtime: 31.504 ms\nSchema:test=# \\d test_select Table \"public.test_select\" Column | Type | Modifiers \n--------+--------------+---------------------------------------------------------- id | integer | not null default nextval('test_select_id_seq'::regclass) key1 | integer | \n key2 | integer | key3 | integer | data | character(4) | Indexes: \"test_select_pkey\" PRIMARY KEY, btree (id) \"my_key\" btree (key1, key2, key3, id)\ntest=# Sample data:test=# SELECT * FROM test_select LIMIT 10; id | key1 | key2 | key3 | data \n----+--------+--------+--------+------ 1 | 984966 | 283954 | 772063 | x 2 | 817668 | 393533 | 924888 | x 3 | 751039 | 798753 | 454309 | x 4 | 128505 | 329643 | 280553 | x \n 5 | 105600 | 257225 | 710015 | x 6 | 323891 | 615614 | 83206 | x 7 | 194054 | 63506 | 353171 | x 8 | 212068 | 881225 | 271804 | x 9 | 644180 | 26693 | 200738 | x \n 10 | 136586 | 498699 | 554417 | x (10 rows)Here's how I populated the table:\nimport psycopg2conn = psycopg2.connect('dbname=test')cur = conn.cursor()def random_int(): n = 1000000\n return random.randint(0,n)def random_key(): return random_int(), random_int(), random_int()def create_table(): cur.execute('''\n DROP TABLE IF EXISTS test_select; CREATE TABLE test_select ( id SERIAL PRIMARY KEY, key1 INTEGER,\n key2 INTEGER, key3 INTEGER, data char(4) ); ''')\n conn.commit() n = 1000000 for i in range(n): cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n conn.commit() cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)') conn.commit()create_table()",
"msg_date": "Mon, 27 May 2013 10:02:26 -0400",
"msg_from": "John Mudd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow SELECT by primary key? Postgres 9.1.2"
},
{
"msg_contents": "\nOn May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n> \n> Any reason why a select by primary key would be slower than a select that includes an ORDER BY? I was really hoping using the primary key would give me a boost. \n> \n\nYou created my_key after data loading, and PK was there all the time.\nIf you REINDEX PK, i bet it will be as fast.\n\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n> \n> \n> \n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n> \n> (stopped postgres; reset O/S cache; started postgres)\n> \n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n> \n> (stopped postgres; reset O/S cache; started postgres)\n> \n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n> \n> \n> \n> Schema:\n> \n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers \n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default nextval('test_select_id_seq'::regclass)\n> key1 | integer | \n> key2 | integer | \n> key3 | integer | \n> data | character(4) | \n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n> \n> test=# \n> \n> \n> \n> Sample data:\n> \n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data \n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x \n> 2 | 817668 | 393533 | 924888 | x \n> 3 | 751039 | 798753 | 454309 | x \n> 4 | 128505 | 329643 | 280553 | x \n> 5 | 105600 | 257225 | 710015 | x \n> 6 | 323891 | 615614 | 83206 | x \n> 7 | 194054 | 63506 | 353171 | x \n> 8 | 212068 | 881225 | 271804 | x \n> 9 | 644180 | 26693 | 200738 | x \n> 10 | 136586 | 498699 | 554417 | x \n> (10 rows)\n> \n> \n> \n> \n> Here's how I populated the table:\n> \n> import psycopg2\n> \n> conn = psycopg2.connect('dbname=test')\n> \n> cur = conn.cursor()\n> \n> def random_int():\n> n = 1000000\n> return random.randint(0,n)\n> \n> def random_key():\n> return random_int(), random_int(), random_int()\n> \n> def create_table():\n> cur.execute('''\n> DROP TABLE IF EXISTS test_select;\n> \n> CREATE TABLE test_select (\n> id SERIAL PRIMARY KEY,\n> key1 INTEGER,\n> key2 INTEGER,\n> key3 INTEGER,\n> data char(4)\n> );\n> ''')\n> conn.commit()\n> \n> n = 1000000\n> for i in range(n):\n> cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> conn.commit()\n> \n> cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> conn.commit()\n> \n> create_table()\n> \n\n\n\n-- \nSent via pgsql-performance 
mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 May 2013 18:21:32 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
{
"msg_contents": "Thanks, that's easy enough to test. Didn't seem to help though.\n\n\ntest=# REINDEX index test_select_pkey;\nREINDEX\ntest=# VACUUM ANALYZE test_select ;\nVACUUM\n\n\n(stopped postgres; reset O/S cache; started postgres)\n\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER\nBY key1, key2, key3, id LIMIT 1;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369\nrows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41981.16\nrows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1)\n Index Cond: (key1 >= 500000)\n Total runtime: 16.444 ms\n\n\n(stopped postgres; reset O/S cache; started postgres)\n\ntest=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1\nwidth=21) (actual time=23.072..23.074 rows=1 loops=1)\n Index Cond: (id = 500000)\n Total runtime: 23.192 ms\n\n\n\n\nOn Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]>wrote:\n\n>\n> On May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n>\n> > Postgres 9.1.2 on Ubuntu 12.04\n> >\n> > Any reason why a select by primary key would be slower than a select\n> that includes an ORDER BY? I was really hoping using the primary key would\n> give me a boost.\n> >\n>\n> You created my_key after data loading, and PK was there all the time.\n> If you REINDEX PK, i bet it will be as fast.\n>\n> > I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n> /proc/sys/vm/drop_caches\" between the runs.\n> >\n> >\n> >\n> > test=# VACUUM ANALYZE test_select;\n> > VACUUM\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> >\n> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n> ORDER BY key1, key2, key3, id LIMIT 1;\n> > QUERY PLAN\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n> rows=1 loops=1)\n> > -> Index Scan using my_key on test_select (cost=0.00..41895.49\n> rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> > Index Cond: (key1 >= 500000)\n> > Total runtime: 12.678 ms\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> >\n> > test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> > QUERY PLAN\n> >\n> ---------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using test_select_pkey on test_select (cost=0.00..8.36\n> rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> > Index Cond: (id = 500000)\n> > Total runtime: 31.504 ms\n> >\n> >\n> >\n> > Schema:\n> >\n> > test=# \\d test_select\n> > Table \"public.test_select\"\n> > Column | Type | Modifiers\n> >\n> --------+--------------+----------------------------------------------------------\n> > id | integer | not null default\n> nextval('test_select_id_seq'::regclass)\n> > key1 | integer |\n> > key2 | integer |\n> > key3 | integer |\n> > data | character(4) |\n> > Indexes:\n> > \"test_select_pkey\" PRIMARY KEY, btree (id)\n> > \"my_key\" btree (key1, key2, 
key3, id)\n> >\n> > test=#\n> >\n> >\n> >\n> > Sample data:\n> >\n> > test=# SELECT * FROM test_select LIMIT 10;\n> > id | key1 | key2 | key3 | data\n> > ----+--------+--------+--------+------\n> > 1 | 984966 | 283954 | 772063 | x\n> > 2 | 817668 | 393533 | 924888 | x\n> > 3 | 751039 | 798753 | 454309 | x\n> > 4 | 128505 | 329643 | 280553 | x\n> > 5 | 105600 | 257225 | 710015 | x\n> > 6 | 323891 | 615614 | 83206 | x\n> > 7 | 194054 | 63506 | 353171 | x\n> > 8 | 212068 | 881225 | 271804 | x\n> > 9 | 644180 | 26693 | 200738 | x\n> > 10 | 136586 | 498699 | 554417 | x\n> > (10 rows)\n> >\n> >\n> >\n> >\n> > Here's how I populated the table:\n> >\n> > import psycopg2\n> >\n> > conn = psycopg2.connect('dbname=test')\n> >\n> > cur = conn.cursor()\n> >\n> > def random_int():\n> > n = 1000000\n> > return random.randint(0,n)\n> >\n> > def random_key():\n> > return random_int(), random_int(), random_int()\n> >\n> > def create_table():\n> > cur.execute('''\n> > DROP TABLE IF EXISTS test_select;\n> >\n> > CREATE TABLE test_select (\n> > id SERIAL PRIMARY KEY,\n> > key1 INTEGER,\n> > key2 INTEGER,\n> > key3 INTEGER,\n> > data char(4)\n> > );\n> > ''')\n> > conn.commit()\n> >\n> > n = 1000000\n> > for i in range(n):\n> > cur.execute(\"INSERT INTO test_select(key1, key2, key3, data)\n> VALUES(%s, %s, %s, 'x')\", random_key())\n> > conn.commit()\n> >\n> > cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3,\n> id)')\n> > conn.commit()\n> >\n> > create_table()\n> >\n>\n>\n\nThanks, that's easy enough to test. Didn't seem to help though.\ntest=# REINDEX index test_select_pkey;REINDEX\ntest=# VACUUM ANALYZE test_select ;VACUUM(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1) Index Cond: (key1 >= 500000) Total runtime: 16.444 ms\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n\n Index Cond: (id = 500000) Total runtime: 23.192 ms\nOn Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n\nOn May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that includes an ORDER BY? 
I was really hoping using the primary key would give me a boost.\n>\n\nYou created my_key after data loading, and PK was there all the time.\nIf you REINDEX PK, i bet it will be as fast.\n\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n>\n>\n>\n> Schema:\n>\n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers\n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default nextval('test_select_id_seq'::regclass)\n> key1 | integer |\n> key2 | integer |\n> key3 | integer |\n> data | character(4) |\n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n>\n> test=#\n>\n>\n>\n> Sample data:\n>\n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data\n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x\n> 2 | 817668 | 393533 | 924888 | x\n> 3 | 751039 | 798753 | 454309 | x\n> 4 | 128505 | 329643 | 280553 | x\n> 5 | 105600 | 257225 | 710015 | x\n> 6 | 323891 | 615614 | 83206 | x\n> 7 | 194054 | 63506 | 353171 | x\n> 8 | 212068 | 881225 | 271804 | x\n> 9 | 644180 | 26693 | 200738 | x\n> 10 | 136586 | 498699 | 554417 | x\n> (10 rows)\n>\n>\n>\n>\n> Here's how I populated the table:\n>\n> import psycopg2\n>\n> conn = psycopg2.connect('dbname=test')\n>\n> cur = conn.cursor()\n>\n> def random_int():\n> n = 1000000\n> return random.randint(0,n)\n>\n> def random_key():\n> return random_int(), random_int(), random_int()\n>\n> def create_table():\n> cur.execute('''\n> DROP TABLE IF EXISTS test_select;\n>\n> CREATE TABLE test_select (\n> id SERIAL PRIMARY KEY,\n> key1 INTEGER,\n> key2 INTEGER,\n> key3 INTEGER,\n> data char(4)\n> );\n> ''')\n> conn.commit()\n>\n> n = 1000000\n> for i in range(n):\n> cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> conn.commit()\n>\n> cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> conn.commit()\n>\n> create_table()\n>",
"msg_date": "Mon, 27 May 2013 10:35:38 -0400",
"msg_from": "John Mudd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
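A cheap cross-check at this point in the thread: compare the on-disk size of the two indexes, since a cold-cache read through a physically larger index can cost extra page reads regardless of the REINDEX. A minimal sketch against the test_select objects named above:

    SELECT pg_size_pretty(pg_relation_size('test_select_pkey')) AS pkey_size,
           pg_size_pretty(pg_relation_size('my_key'))           AS my_key_size;

With one integer column versus four, test_select_pkey would normally be the smaller of the two, so the cold-cache advantage of my_key seen above is not explained by index size alone.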
{
"msg_contents": "On May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\n\n> Thanks, that's easy enough to test. Didn't seem to help though.\n> \n\nOk. And if you CLUSTER tables USING PK?\n\n> \n> test=# REINDEX index test_select_pkey;\n> REINDEX\n> test=# VACUUM ANALYZE test_select ;\n> VACUUM\n> \n> \n> (stopped postgres; reset O/S cache; started postgres)\n> \n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 16.444 ms\n> \n> \n> (stopped postgres; reset O/S cache; started postgres)\n> \n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 23.192 ms\n> \n> \n> \n> \n> On Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n> \n> On May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n> \n> > Postgres 9.1.2 on Ubuntu 12.04\n> >\n> > Any reason why a select by primary key would be slower than a select that includes an ORDER BY? I was really hoping using the primary key would give me a boost.\n> >\n> \n> You created my_key after data loading, and PK was there all the time.\n> If you REINDEX PK, i bet it will be as fast.\n> \n> > I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n> >\n> >\n> >\n> > test=# VACUUM ANALYZE test_select;\n> > VACUUM\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> >\n> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> > QUERY PLAN\n> > --------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> > -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> > Index Cond: (key1 >= 500000)\n> > Total runtime: 12.678 ms\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> >\n> > test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> > Index Cond: (id = 500000)\n> > Total runtime: 31.504 ms\n> >\n> >\n> >\n> > Schema:\n> >\n> > test=# \\d test_select\n> > Table \"public.test_select\"\n> > Column | Type | Modifiers\n> > --------+--------------+----------------------------------------------------------\n> > id | integer | not null default nextval('test_select_id_seq'::regclass)\n> > key1 | integer |\n> > key2 | integer |\n> > key3 | 
integer |\n> > data | character(4) |\n> > Indexes:\n> > \"test_select_pkey\" PRIMARY KEY, btree (id)\n> > \"my_key\" btree (key1, key2, key3, id)\n> >\n> > test=#\n> >\n> >\n> >\n> > Sample data:\n> >\n> > test=# SELECT * FROM test_select LIMIT 10;\n> > id | key1 | key2 | key3 | data\n> > ----+--------+--------+--------+------\n> > 1 | 984966 | 283954 | 772063 | x\n> > 2 | 817668 | 393533 | 924888 | x\n> > 3 | 751039 | 798753 | 454309 | x\n> > 4 | 128505 | 329643 | 280553 | x\n> > 5 | 105600 | 257225 | 710015 | x\n> > 6 | 323891 | 615614 | 83206 | x\n> > 7 | 194054 | 63506 | 353171 | x\n> > 8 | 212068 | 881225 | 271804 | x\n> > 9 | 644180 | 26693 | 200738 | x\n> > 10 | 136586 | 498699 | 554417 | x\n> > (10 rows)\n> >\n> >\n> >\n> >\n> > Here's how I populated the table:\n> >\n> > import psycopg2\n> >\n> > conn = psycopg2.connect('dbname=test')\n> >\n> > cur = conn.cursor()\n> >\n> > def random_int():\n> > n = 1000000\n> > return random.randint(0,n)\n> >\n> > def random_key():\n> > return random_int(), random_int(), random_int()\n> >\n> > def create_table():\n> > cur.execute('''\n> > DROP TABLE IF EXISTS test_select;\n> >\n> > CREATE TABLE test_select (\n> > id SERIAL PRIMARY KEY,\n> > key1 INTEGER,\n> > key2 INTEGER,\n> > key3 INTEGER,\n> > data char(4)\n> > );\n> > ''')\n> > conn.commit()\n> >\n> > n = 1000000\n> > for i in range(n):\n> > cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> > conn.commit()\n> >\n> > cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> > conn.commit()\n> >\n> > create_table()\n> >\n> \n> \n\n\nOn May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:Thanks, that's easy enough to test. Didn't seem to help though.Ok. And if you CLUSTER tables USING PK?\ntest=# REINDEX index test_select_pkey;REINDEX\ntest=# VACUUM ANALYZE test_select ;VACUUM(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1) Index Cond: (key1 >= 500000) Total runtime: 16.444 ms\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n\n Index Cond: (id = 500000) Total runtime: 23.192 ms\nOn Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n\nOn May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that includes an ORDER BY? 
I was really hoping using the primary key would give me a boost.\n>\n\nYou created my_key after data loading, and PK was there all the time.\nIf you REINDEX PK, i bet it will be as fast.\n\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n>\n>\n>\n> Schema:\n>\n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers\n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default nextval('test_select_id_seq'::regclass)\n> key1 | integer |\n> key2 | integer |\n> key3 | integer |\n> data | character(4) |\n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n>\n> test=#\n>\n>\n>\n> Sample data:\n>\n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data\n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x\n> 2 | 817668 | 393533 | 924888 | x\n> 3 | 751039 | 798753 | 454309 | x\n> 4 | 128505 | 329643 | 280553 | x\n> 5 | 105600 | 257225 | 710015 | x\n> 6 | 323891 | 615614 | 83206 | x\n> 7 | 194054 | 63506 | 353171 | x\n> 8 | 212068 | 881225 | 271804 | x\n> 9 | 644180 | 26693 | 200738 | x\n> 10 | 136586 | 498699 | 554417 | x\n> (10 rows)\n>\n>\n>\n>\n> Here's how I populated the table:\n>\n> import psycopg2\n>\n> conn = psycopg2.connect('dbname=test')\n>\n> cur = conn.cursor()\n>\n> def random_int():\n> n = 1000000\n> return random.randint(0,n)\n>\n> def random_key():\n> return random_int(), random_int(), random_int()\n>\n> def create_table():\n> cur.execute('''\n> DROP TABLE IF EXISTS test_select;\n>\n> CREATE TABLE test_select (\n> id SERIAL PRIMARY KEY,\n> key1 INTEGER,\n> key2 INTEGER,\n> key3 INTEGER,\n> data char(4)\n> );\n> ''')\n> conn.commit()\n>\n> n = 1000000\n> for i in range(n):\n> cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> conn.commit()\n>\n> cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> conn.commit()\n>\n> create_table()\n>",
"msg_date": "Mon, 27 May 2013 18:59:36 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
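Whether CLUSTER on the primary key would change the physical layout at all can be estimated first from the planner statistics; a sketch, assuming the earlier VACUUM ANALYZE runs have populated pg_stats:

    SELECT attname, correlation
    FROM pg_stats
    WHERE schemaname = 'public'
      AND tablename  = 'test_select'
      AND attname IN ('id', 'key1');

With the loader shown earlier (serial id, random key values), id should report a correlation near 1.0 and key1 near 0, i.e. the heap is already laid out in primary-key order and CLUSTER ... USING test_select_pkey mostly re-packs what is already there.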
{
"msg_contents": "Thanks again.\n\nWell, I have two problems with using the CLUSTER option. It's only\ntemporary since any updates, depending how much free space is reserved per\npage, requires re-running the CLUSTER. And my primary concern is that it\narbitrarily gives an unfair advantage to the primary key SELECT. Still,\nit's easy to test so here are the results. The primary key still looses\neven with the CLUSTER. Granted it is close but considering this is now an\nunfair comparison it still doesn't make sense to me. How can a search for a\nspecific row that should be fairly straight forward take longer than a\nsearch that includes an ORDER BY clause?\n\n\ntest=# CLUSTER test_select USING test_select_pkey ;\nCLUSTER\ntest=# VACUUM ANALYZE test_select ;\nVACUUM\n\n(stopped postgres; reset O/S cache; started postgres)\n\n\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER\nBY key1, key2, key3, id LIMIT 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=19.430..19.431\nrows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41938.15\nrows=499992 width=21) (actual time=19.428..19.428 rows=1 loops=1)\n Index Cond: (key1 >= 500000)\n Total runtime: 19.526 ms\n\n\n(stopped postgres; reset O/S cache; started postgres)\n\n\ntest=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1\nwidth=21) (actual time=21.070..21.072 rows=1 loops=1)\n Index Cond: (id = 500000)\n Total runtime: 21.178 ms\n\n\n\n\nOn Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin <[email protected]>wrote:\n\n>\n> On May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\n>\n> Thanks, that's easy enough to test. Didn't seem to help though.\n>\n>\n> Ok. 
And if you CLUSTER tables USING PK?\n>\n>\n> test=# REINDEX index test_select_pkey;\n> REINDEX\n> test=# VACUUM ANALYZE test_select ;\n> VACUUM\n>\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n> ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369\n> rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41981.16\n> rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 16.444 ms\n>\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36\n> rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 23.192 ms\n>\n>\n>\n>\n> On Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]>wrote:\n>\n>>\n>> On May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n>>\n>> > Postgres 9.1.2 on Ubuntu 12.04\n>> >\n>> > Any reason why a select by primary key would be slower than a select\n>> that includes an ORDER BY? I was really hoping using the primary key would\n>> give me a boost.\n>> >\n>>\n>> You created my_key after data loading, and PK was there all the time.\n>> If you REINDEX PK, i bet it will be as fast.\n>>\n>> > I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n>> /proc/sys/vm/drop_caches\" between the runs.\n>> >\n>> >\n>> >\n>> > test=# VACUUM ANALYZE test_select;\n>> > VACUUM\n>> >\n>> > (stopped postgres; reset O/S cache; started postgres)\n>> >\n>> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n>> ORDER BY key1, key2, key3, id LIMIT 1;\n>> > QUERY PLAN\n>> >\n>> --------------------------------------------------------------------------------------------------------------------------------------\n>> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n>> rows=1 loops=1)\n>> > -> Index Scan using my_key on test_select (cost=0.00..41895.49\n>> rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n>> > Index Cond: (key1 >= 500000)\n>> > Total runtime: 12.678 ms\n>> >\n>> > (stopped postgres; reset O/S cache; started postgres)\n>> >\n>> > test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n>> > QUERY PLAN\n>> >\n>> ---------------------------------------------------------------------------------------------------------------------------------\n>> > Index Scan using test_select_pkey on test_select (cost=0.00..8.36\n>> rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n>> > Index Cond: (id = 500000)\n>> > Total runtime: 31.504 ms\n>> >\n>> >\n>> >\n>> > Schema:\n>> >\n>> > test=# \\d test_select\n>> > Table \"public.test_select\"\n>> > Column | Type | Modifiers\n>> >\n>> --------+--------------+----------------------------------------------------------\n>> > id | integer | not null default\n>> nextval('test_select_id_seq'::regclass)\n>> > key1 | integer |\n>> > key2 | integer |\n>> > key3 | integer |\n>> > data | character(4) |\n>> > Indexes:\n>> > 
\"test_select_pkey\" PRIMARY KEY, btree (id)\n>> > \"my_key\" btree (key1, key2, key3, id)\n>> >\n>> > test=#\n>> >\n>> >\n>> >\n>> > Sample data:\n>> >\n>> > test=# SELECT * FROM test_select LIMIT 10;\n>> > id | key1 | key2 | key3 | data\n>> > ----+--------+--------+--------+------\n>> > 1 | 984966 | 283954 | 772063 | x\n>> > 2 | 817668 | 393533 | 924888 | x\n>> > 3 | 751039 | 798753 | 454309 | x\n>> > 4 | 128505 | 329643 | 280553 | x\n>> > 5 | 105600 | 257225 | 710015 | x\n>> > 6 | 323891 | 615614 | 83206 | x\n>> > 7 | 194054 | 63506 | 353171 | x\n>> > 8 | 212068 | 881225 | 271804 | x\n>> > 9 | 644180 | 26693 | 200738 | x\n>> > 10 | 136586 | 498699 | 554417 | x\n>> > (10 rows)\n>> >\n>> >\n>> >\n>> >\n>> > Here's how I populated the table:\n>> >\n>> > import psycopg2\n>> >\n>> > conn = psycopg2.connect('dbname=test')\n>> >\n>> > cur = conn.cursor()\n>> >\n>> > def random_int():\n>> > n = 1000000\n>> > return random.randint(0,n)\n>> >\n>> > def random_key():\n>> > return random_int(), random_int(), random_int()\n>> >\n>> > def create_table():\n>> > cur.execute('''\n>> > DROP TABLE IF EXISTS test_select;\n>> >\n>> > CREATE TABLE test_select (\n>> > id SERIAL PRIMARY KEY,\n>> > key1 INTEGER,\n>> > key2 INTEGER,\n>> > key3 INTEGER,\n>> > data char(4)\n>> > );\n>> > ''')\n>> > conn.commit()\n>> >\n>> > n = 1000000\n>> > for i in range(n):\n>> > cur.execute(\"INSERT INTO test_select(key1, key2, key3, data)\n>> VALUES(%s, %s, %s, 'x')\", random_key())\n>> > conn.commit()\n>> >\n>> > cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3,\n>> id)')\n>> > conn.commit()\n>> >\n>> > create_table()\n>> >\n>>\n>>\n>\n>\n\nThanks again.Well, I have two problems with using the CLUSTER option. It's only temporary since any updates, depending how much free space is reserved per page, requires re-running the CLUSTER. And my primary concern is that it arbitrarily gives an unfair advantage to the primary key SELECT. Still, it's easy to test so here are the results. The primary key still looses even with the CLUSTER. Granted it is close but considering this is now an unfair comparison it still doesn't make sense to me. 
How can a search for a specific row that should be fairly straight forward take longer than a search that includes an ORDER BY clause?\ntest=# CLUSTER test_select USING test_select_pkey ;\nCLUSTERtest=# VACUUM ANALYZE test_select ;VACUUM\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=19.430..19.431 rows=1 loops=1) -> Index Scan using my_key on test_select (cost=0.00..41938.15 rows=499992 width=21) (actual time=19.428..19.428 rows=1 loops=1)\n Index Cond: (key1 >= 500000) Total runtime: 19.526 ms(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE id = 500000; QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------- Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=21.070..21.072 rows=1 loops=1)\n Index Cond: (id = 500000) Total runtime: 21.178 msOn Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin <[email protected]> wrote:\nOn May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\nThanks, that's easy enough to test. Didn't seem to help though.\nOk. And if you CLUSTER tables USING PK?\ntest=# REINDEX index test_select_pkey;REINDEX\ntest=# VACUUM ANALYZE test_select ;VACUUM(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1) Index Cond: (key1 >= 500000) Total runtime: 16.444 ms\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n\n\n\n Index Cond: (id = 500000) Total runtime: 23.192 ms\nOn Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n\nOn May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that includes an ORDER BY? 
I was really hoping using the primary key would give me a boost.\n>\n\nYou created my_key after data loading, and PK was there all the time.\nIf you REINDEX PK, i bet it will be as fast.\n\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n>\n>\n>\n> Schema:\n>\n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers\n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default nextval('test_select_id_seq'::regclass)\n> key1 | integer |\n> key2 | integer |\n> key3 | integer |\n> data | character(4) |\n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n>\n> test=#\n>\n>\n>\n> Sample data:\n>\n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data\n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x\n> 2 | 817668 | 393533 | 924888 | x\n> 3 | 751039 | 798753 | 454309 | x\n> 4 | 128505 | 329643 | 280553 | x\n> 5 | 105600 | 257225 | 710015 | x\n> 6 | 323891 | 615614 | 83206 | x\n> 7 | 194054 | 63506 | 353171 | x\n> 8 | 212068 | 881225 | 271804 | x\n> 9 | 644180 | 26693 | 200738 | x\n> 10 | 136586 | 498699 | 554417 | x\n> (10 rows)\n>\n>\n>\n>\n> Here's how I populated the table:\n>\n> import psycopg2\n>\n> conn = psycopg2.connect('dbname=test')\n>\n> cur = conn.cursor()\n>\n> def random_int():\n> n = 1000000\n> return random.randint(0,n)\n>\n> def random_key():\n> return random_int(), random_int(), random_int()\n>\n> def create_table():\n> cur.execute('''\n> DROP TABLE IF EXISTS test_select;\n>\n> CREATE TABLE test_select (\n> id SERIAL PRIMARY KEY,\n> key1 INTEGER,\n> key2 INTEGER,\n> key3 INTEGER,\n> data char(4)\n> );\n> ''')\n> conn.commit()\n>\n> n = 1000000\n> for i in range(n):\n> cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> conn.commit()\n>\n> cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> conn.commit()\n>\n> create_table()\n>",
"msg_date": "Mon, 27 May 2013 18:17:45 -0400",
"msg_from": "John Mudd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
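On the "CLUSTER is only temporary" objection: one common mitigation, sketched here with an arbitrary illustrative value rather than a recommendation, is to leave free space on each heap page so later updates have room to stay on their original pages:

    ALTER TABLE test_select SET (fillfactor = 90);
    CLUSTER test_select USING test_select_pkey;
    ANALYZE test_select;

How long the clustered order survives still depends entirely on the update pattern, and for this read-only benchmark it changes nothing about the underlying question.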
{
"msg_contents": "On 28.05.2013, at 2:17, John Mudd <[email protected]> wrote:\n\n> Thanks again.\n> \n> Well, I have two problems with using the CLUSTER option. It's only temporary since any updates, depending how much free space is reserved per page, requires re-running the CLUSTER. And my primary concern is that it arbitrarily gives an unfair advantage to the primary key SELECT. Still, it's easy to test so here are the results. The primary key still looses even with the CLUSTER. Granted it is close but considering this is now an unfair comparison it still doesn't make sense to me. How can a search for a specific row that should be fairly straight forward take longer than a search that includes an ORDER BY clause?\n> \n\nWell, you do just regular index scan because of LIMIT 1.\n\nAnd now it is just a matter of index size and table organization.\n\nI also don't understand why you consider CLUSTER unfair - the way you populated the table was natural cluster over my_key.\n\nBut it bothers me why my_key is always better. Can you please test it on different values but the same rows? Because now it is two different tuples and you count every io.\n\n> \n> test=# CLUSTER test_select USING test_select_pkey ;\n> CLUSTER\n> test=# VACUUM ANALYZE test_select ;\n> VACUUM\n> \n>> (stopped postgres; reset O/S cache; started postgres)\n> \n> \n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=19.430..19.431 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41938.15 rows=499992 width=21) (actual time=19.428..19.428 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 19.526 ms\n> \n> \n>> (stopped postgres; reset O/S cache; started postgres)\n> \n> \n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=21.070..21.072 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 21.178 ms\n> \n> \n> \n> \n> On Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin <[email protected]> wrote:\n>> \n>> On May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\n>> \n>>> Thanks, that's easy enough to test. Didn't seem to help though.\n>> \n>> Ok. 
And if you CLUSTER tables USING PK?\n>> \n>>> \n>>> test=# REINDEX index test_select_pkey;\n>>> REINDEX\n>>> test=# VACUUM ANALYZE test_select ;\n>>> VACUUM\n>>> \n>>> \n>>> (stopped postgres; reset O/S cache; started postgres)\n>>> \n>>> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n>>> QUERY PLAN \n>>> --------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n>>> -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1)\n>>> Index Cond: (key1 >= 500000)\n>>> Total runtime: 16.444 ms\n>>> \n>>> \n>>> (stopped postgres; reset O/S cache; started postgres)\n>>> \n>>> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n>>> QUERY PLAN\n>>> ---------------------------------------------------------------------------------------------------------------------------------\n>>> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n>>> Index Cond: (id = 500000)\n>>> Total runtime: 23.192 ms\n>>> \n>>> \n>>> \n>>> \n>>> On Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n>>>> \n>>>> On May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n>>>> \n>>>> > Postgres 9.1.2 on Ubuntu 12.04\n>>>> >\n>>>> > Any reason why a select by primary key would be slower than a select that includes an ORDER BY? I was really hoping using the primary key would give me a boost.\n>>>> >\n>>>> \n>>>> You created my_key after data loading, and PK was there all the time.\n>>>> If you REINDEX PK, i bet it will be as fast.\n>>>> \n>>>> > I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n>>>> >\n>>>> >\n>>>> >\n>>>> > test=# VACUUM ANALYZE test_select;\n>>>> > VACUUM\n>>>> >\n>>>> > (stopped postgres; reset O/S cache; started postgres)\n>>>> >\n>>>> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n>>>> > QUERY PLAN\n>>>> > --------------------------------------------------------------------------------------------------------------------------------------\n>>>> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n>>>> > -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n>>>> > Index Cond: (key1 >= 500000)\n>>>> > Total runtime: 12.678 ms\n>>>> >\n>>>> > (stopped postgres; reset O/S cache; started postgres)\n>>>> >\n>>>> > test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n>>>> > QUERY PLAN\n>>>> > ---------------------------------------------------------------------------------------------------------------------------------\n>>>> > Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n>>>> > Index Cond: (id = 500000)\n>>>> > Total runtime: 31.504 ms\n>>>> >\n>>>> >\n>>>> >\n>>>> > Schema:\n>>>> >\n>>>> > test=# \\d test_select\n>>>> > Table \"public.test_select\"\n>>>> > Column | Type | Modifiers\n>>>> > --------+--------------+----------------------------------------------------------\n>>>> > id | integer | not null default nextval('test_select_id_seq'::regclass)\n>>>> > key1 | 
integer |\n>>>> > key2 | integer |\n>>>> > key3 | integer |\n>>>> > data | character(4) |\n>>>> > Indexes:\n>>>> > \"test_select_pkey\" PRIMARY KEY, btree (id)\n>>>> > \"my_key\" btree (key1, key2, key3, id)\n>>>> >\n>>>> > test=#\n>>>> >\n>>>> >\n>>>> >\n>>>> > Sample data:\n>>>> >\n>>>> > test=# SELECT * FROM test_select LIMIT 10;\n>>>> > id | key1 | key2 | key3 | data\n>>>> > ----+--------+--------+--------+------\n>>>> > 1 | 984966 | 283954 | 772063 | x\n>>>> > 2 | 817668 | 393533 | 924888 | x\n>>>> > 3 | 751039 | 798753 | 454309 | x\n>>>> > 4 | 128505 | 329643 | 280553 | x\n>>>> > 5 | 105600 | 257225 | 710015 | x\n>>>> > 6 | 323891 | 615614 | 83206 | x\n>>>> > 7 | 194054 | 63506 | 353171 | x\n>>>> > 8 | 212068 | 881225 | 271804 | x\n>>>> > 9 | 644180 | 26693 | 200738 | x\n>>>> > 10 | 136586 | 498699 | 554417 | x\n>>>> > (10 rows)\n>>>> >\n>>>> >\n>>>> >\n>>>> >\n>>>> > Here's how I populated the table:\n>>>> >\n>>>> > import psycopg2\n>>>> >\n>>>> > conn = psycopg2.connect('dbname=test')\n>>>> >\n>>>> > cur = conn.cursor()\n>>>> >\n>>>> > def random_int():\n>>>> > n = 1000000\n>>>> > return random.randint(0,n)\n>>>> >\n>>>> > def random_key():\n>>>> > return random_int(), random_int(), random_int()\n>>>> >\n>>>> > def create_table():\n>>>> > cur.execute('''\n>>>> > DROP TABLE IF EXISTS test_select;\n>>>> >\n>>>> > CREATE TABLE test_select (\n>>>> > id SERIAL PRIMARY KEY,\n>>>> > key1 INTEGER,\n>>>> > key2 INTEGER,\n>>>> > key3 INTEGER,\n>>>> > data char(4)\n>>>> > );\n>>>> > ''')\n>>>> > conn.commit()\n>>>> >\n>>>> > n = 1000000\n>>>> > for i in range(n):\n>>>> > cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n>>>> > conn.commit()\n>>>> >\n>>>> > cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n>>>> > conn.commit()\n>>>> >\n>>>> > create_table()\n>>>> >\n> \n\nOn 28.05.2013, at 2:17, John Mudd <[email protected]> wrote:Thanks again.Well, I have two problems with using the CLUSTER option. It's only temporary since any updates, depending how much free space is reserved per page, requires re-running the CLUSTER. And my primary concern is that it arbitrarily gives an unfair advantage to the primary key SELECT. Still, it's easy to test so here are the results. The primary key still looses even with the CLUSTER. Granted it is close but considering this is now an unfair comparison it still doesn't make sense to me. How can a search for a specific row that should be fairly straight forward take longer than a search that includes an ORDER BY clause?\nWell, you do just regular index scan because of LIMIT 1.And now it is just a matter of index size and table organization.I also don't understand why you consider CLUSTER unfair - the way you populated the table was natural cluster over my_key.But it bothers me why my_key is always better. Can you please test it on different values but the same rows? 
Because now it is two different tuples and you count every io.test=# CLUSTER test_select USING test_select_pkey ;\nCLUSTERtest=# VACUUM ANALYZE test_select ;VACUUM\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=19.430..19.431 rows=1 loops=1) -> Index Scan using my_key on test_select (cost=0.00..41938.15 rows=499992 width=21) (actual time=19.428..19.428 rows=1 loops=1)\n Index Cond: (key1 >= 500000) Total runtime: 19.526 ms(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE id = 500000; QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------- Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=21.070..21.072 rows=1 loops=1)\n Index Cond: (id = 500000) Total runtime: 21.178 msOn Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin <[email protected]> wrote:\nOn May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\nThanks, that's easy enough to test. Didn't seem to help though.\nOk. And if you CLUSTER tables USING PK?\ntest=# REINDEX index test_select_pkey;REINDEX\ntest=# VACUUM ANALYZE test_select ;VACUUM(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1) Index Cond: (key1 >= 500000) Total runtime: 16.444 ms\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n\n\n\n Index Cond: (id = 500000) Total runtime: 23.192 ms\nOn Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n\nOn May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that includes an ORDER BY? 
I was really hoping using the primary key would give me a boost.\n>\n\nYou created my_key after data loading, and PK was there all the time.\nIf you REINDEX PK, i bet it will be as fast.\n\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n>\n>\n>\n> Schema:\n>\n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers\n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default nextval('test_select_id_seq'::regclass)\n> key1 | integer |\n> key2 | integer |\n> key3 | integer |\n> data | character(4) |\n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n>\n> test=#\n>\n>\n>\n> Sample data:\n>\n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data\n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x\n> 2 | 817668 | 393533 | 924888 | x\n> 3 | 751039 | 798753 | 454309 | x\n> 4 | 128505 | 329643 | 280553 | x\n> 5 | 105600 | 257225 | 710015 | x\n> 6 | 323891 | 615614 | 83206 | x\n> 7 | 194054 | 63506 | 353171 | x\n> 8 | 212068 | 881225 | 271804 | x\n> 9 | 644180 | 26693 | 200738 | x\n> 10 | 136586 | 498699 | 554417 | x\n> (10 rows)\n>\n>\n>\n>\n> Here's how I populated the table:\n>\n> import psycopg2\n>\n> conn = psycopg2.connect('dbname=test')\n>\n> cur = conn.cursor()\n>\n> def random_int():\n> n = 1000000\n> return random.randint(0,n)\n>\n> def random_key():\n> return random_int(), random_int(), random_int()\n>\n> def create_table():\n> cur.execute('''\n> DROP TABLE IF EXISTS test_select;\n>\n> CREATE TABLE test_select (\n> id SERIAL PRIMARY KEY,\n> key1 INTEGER,\n> key2 INTEGER,\n> key3 INTEGER,\n> data char(4)\n> );\n> ''')\n> conn.commit()\n>\n> n = 1000000\n> for i in range(n):\n> cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> conn.commit()\n>\n> cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> conn.commit()\n>\n> create_table()\n>",
"msg_date": "Tue, 28 May 2013 10:48:45 +0400",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
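A concrete way to do what Evgeniy is asking for (same row, different access paths) is to reuse one of the sample rows posted earlier; row id = 1 has key1/key2/key3 = 984966/283954/772063, so, with the usual cold-cache reset before each statement:

    EXPLAIN ANALYZE SELECT * FROM test_select WHERE id = 1;

    EXPLAIN ANALYZE SELECT * FROM test_select
    WHERE key1 = 984966 AND key2 = 283954 AND key3 = 772063;

The first goes through test_select_pkey; the second should go through my_key and, barring an unlikely duplicate of that key triple, return the same single heap tuple, which takes the choice of heap page out of the comparison.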
{
"msg_contents": "John,\n\nAnd can you please include BUFFERS to ANALYZE?\n\nRegards,\nRoman Konoval\n\n\nOn Tue, May 28, 2013 at 9:48 AM, Evgeniy Shishkin <[email protected]>wrote:\n\n>\n>\n>\n>\n> On 28.05.2013, at 2:17, John Mudd <[email protected]> wrote:\n>\n> Thanks again.\n>\n> Well, I have two problems with using the CLUSTER option. It's only\n> temporary since any updates, depending how much free space is reserved per\n> page, requires re-running the CLUSTER. And my primary concern is that it\n> arbitrarily gives an unfair advantage to the primary key SELECT. Still,\n> it's easy to test so here are the results. The primary key still looses\n> even with the CLUSTER. Granted it is close but considering this is now an\n> unfair comparison it still doesn't make sense to me. How can a search for a\n> specific row that should be fairly straight forward take longer than a\n> search that includes an ORDER BY clause?\n>\n>\n> Well, you do just regular index scan because of LIMIT 1.\n>\n> And now it is just a matter of index size and table organization.\n>\n> I also don't understand why you consider CLUSTER unfair - the way you\n> populated the table was natural cluster over my_key.\n>\n> But it bothers me why my_key is always better. Can you please test it on\n> different values but the same rows? Because now it is two different tuples\n> and you count every io.\n>\n>\n> test=# CLUSTER test_select USING test_select_pkey ;\n> CLUSTER\n> test=# VACUUM ANALYZE test_select ;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n> ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=19.430..19.431\n> rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41938.15\n> rows=499992 width=21) (actual time=19.428..19.428 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 19.526 ms\n>\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1\n> width=21) (actual time=21.070..21.072 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 21.178 ms\n>\n>\n>\n>\n> On Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin <[email protected]>wrote:\n>\n>>\n>> On May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\n>>\n>> Thanks, that's easy enough to test. Didn't seem to help though.\n>>\n>>\n>> Ok. 
And if you CLUSTER tables USING PK?\n>>\n>>\n>> test=# REINDEX index test_select_pkey;\n>> REINDEX\n>> test=# VACUUM ANALYZE test_select ;\n>> VACUUM\n>>\n>>\n>> (stopped postgres; reset O/S cache; started postgres)\n>>\n>> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n>> ORDER BY key1, key2, key3, id LIMIT 1;\n>> QUERY PLAN\n>>\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369\n>> rows=1 loops=1)\n>> -> Index Scan using my_key on test_select (cost=0.00..41981.16\n>> rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1)\n>> Index Cond: (key1 >= 500000)\n>> Total runtime: 16.444 ms\n>>\n>>\n>> (stopped postgres; reset O/S cache; started postgres)\n>>\n>> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n>> QUERY PLAN\n>>\n>> ---------------------------------------------------------------------------------------------------------------------------------\n>> Index Scan using test_select_pkey on test_select (cost=0.00..8.36\n>> rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n>> Index Cond: (id = 500000)\n>> Total runtime: 23.192 ms\n>>\n>>\n>>\n>>\n>> On Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]>wrote:\n>>\n>>>\n>>> On May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n>>>\n>>> > Postgres 9.1.2 on Ubuntu 12.04\n>>> >\n>>> > Any reason why a select by primary key would be slower than a select\n>>> that includes an ORDER BY? I was really hoping using the primary key would\n>>> give me a boost.\n>>> >\n>>>\n>>> You created my_key after data loading, and PK was there all the time.\n>>> If you REINDEX PK, i bet it will be as fast.\n>>>\n>>> > I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n>>> /proc/sys/vm/drop_caches\" between the runs.\n>>> >\n>>> >\n>>> >\n>>> > test=# VACUUM ANALYZE test_select;\n>>> > VACUUM\n>>> >\n>>> > (stopped postgres; reset O/S cache; started postgres)\n>>> >\n>>> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n>>> ORDER BY key1, key2, key3, id LIMIT 1;\n>>> > QUERY\n>>> PLAN\n>>> >\n>>> --------------------------------------------------------------------------------------------------------------------------------------\n>>> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n>>> rows=1 loops=1)\n>>> > -> Index Scan using my_key on test_select (cost=0.00..41895.49\n>>> rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n>>> > Index Cond: (key1 >= 500000)\n>>> > Total runtime: 12.678 ms\n>>> >\n>>> > (stopped postgres; reset O/S cache; started postgres)\n>>> >\n>>> > test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n>>> > QUERY PLAN\n>>> >\n>>> ---------------------------------------------------------------------------------------------------------------------------------\n>>> > Index Scan using test_select_pkey on test_select (cost=0.00..8.36\n>>> rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n>>> > Index Cond: (id = 500000)\n>>> > Total runtime: 31.504 ms\n>>> >\n>>> >\n>>> >\n>>> > Schema:\n>>> >\n>>> > test=# \\d test_select\n>>> > Table \"public.test_select\"\n>>> > Column | Type | Modifiers\n>>> >\n>>> --------+--------------+----------------------------------------------------------\n>>> > id | integer | not null default\n>>> nextval('test_select_id_seq'::regclass)\n>>> > key1 | 
integer |\n>>> > key2 | integer |\n>>> > key3 | integer |\n>>> > data | character(4) |\n>>> > Indexes:\n>>> > \"test_select_pkey\" PRIMARY KEY, btree (id)\n>>> > \"my_key\" btree (key1, key2, key3, id)\n>>> >\n>>> > test=#\n>>> >\n>>> >\n>>> >\n>>> > Sample data:\n>>> >\n>>> > test=# SELECT * FROM test_select LIMIT 10;\n>>> > id | key1 | key2 | key3 | data\n>>> > ----+--------+--------+--------+------\n>>> > 1 | 984966 | 283954 | 772063 | x\n>>> > 2 | 817668 | 393533 | 924888 | x\n>>> > 3 | 751039 | 798753 | 454309 | x\n>>> > 4 | 128505 | 329643 | 280553 | x\n>>> > 5 | 105600 | 257225 | 710015 | x\n>>> > 6 | 323891 | 615614 | 83206 | x\n>>> > 7 | 194054 | 63506 | 353171 | x\n>>> > 8 | 212068 | 881225 | 271804 | x\n>>> > 9 | 644180 | 26693 | 200738 | x\n>>> > 10 | 136586 | 498699 | 554417 | x\n>>> > (10 rows)\n>>> >\n>>> >\n>>> >\n>>> >\n>>> > Here's how I populated the table:\n>>> >\n>>> > import psycopg2\n>>> >\n>>> > conn = psycopg2.connect('dbname=test')\n>>> >\n>>> > cur = conn.cursor()\n>>> >\n>>> > def random_int():\n>>> > n = 1000000\n>>> > return random.randint(0,n)\n>>> >\n>>> > def random_key():\n>>> > return random_int(), random_int(), random_int()\n>>> >\n>>> > def create_table():\n>>> > cur.execute('''\n>>> > DROP TABLE IF EXISTS test_select;\n>>> >\n>>> > CREATE TABLE test_select (\n>>> > id SERIAL PRIMARY KEY,\n>>> > key1 INTEGER,\n>>> > key2 INTEGER,\n>>> > key3 INTEGER,\n>>> > data char(4)\n>>> > );\n>>> > ''')\n>>> > conn.commit()\n>>> >\n>>> > n = 1000000\n>>> > for i in range(n):\n>>> > cur.execute(\"INSERT INTO test_select(key1, key2, key3, data)\n>>> VALUES(%s, %s, %s, 'x')\", random_key())\n>>> > conn.commit()\n>>> >\n>>> > cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3,\n>>> id)')\n>>> > conn.commit()\n>>> >\n>>> > create_table()\n>>> >\n>>>\n>>>\n>>\n>>\n>\n\nJohn,And can you please include BUFFERS to ANALYZE?\nRegards,\nRoman KonovalOn Tue, May 28, 2013 at 9:48 AM, Evgeniy Shishkin <[email protected]> wrote:\nOn 28.05.2013, at 2:17, John Mudd <[email protected]> wrote:\nThanks again.\n\nWell, I have two problems with using the CLUSTER option. It's only temporary since any updates, depending how much free space is reserved per page, requires re-running the CLUSTER. And my primary concern is that it arbitrarily gives an unfair advantage to the primary key SELECT. Still, it's easy to test so here are the results. The primary key still looses even with the CLUSTER. Granted it is close but considering this is now an unfair comparison it still doesn't make sense to me. How can a search for a specific row that should be fairly straight forward take longer than a search that includes an ORDER BY clause?\nWell, you do just regular index scan because of LIMIT 1.And now it is just a matter of index size and table organization.\nI also don't understand why you consider CLUSTER unfair - the way you populated the table was natural cluster over my_key.But it bothers me why my_key is always better. Can you please test it on different values but the same rows? 
Because now it is two different tuples and you count every io.\ntest=# CLUSTER test_select USING test_select_pkey ;\nCLUSTERtest=# VACUUM ANALYZE test_select ;VACUUM\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=19.430..19.431 rows=1 loops=1) -> Index Scan using my_key on test_select (cost=0.00..41938.15 rows=499992 width=21) (actual time=19.428..19.428 rows=1 loops=1)\n Index Cond: (key1 >= 500000) Total runtime: 19.526 ms(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE id = 500000; QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------- Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=21.070..21.072 rows=1 loops=1)\n Index Cond: (id = 500000) Total runtime: 21.178 msOn Mon, May 27, 2013 at 10:59 AM, Evgeny Shishkin <[email protected]> wrote:\nOn May 27, 2013, at 6:35 PM, John Mudd <[email protected]> wrote:\nThanks, that's easy enough to test. Didn't seem to help though.\nOk. And if you CLUSTER tables USING PK?\ntest=# REINDEX index test_select_pkey;REINDEX\ntest=# VACUUM ANALYZE test_select ;VACUUM(stopped postgres; reset O/S cache; started postgres)\ntest=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1; QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=0.00..0.08 rows=1 width=21) (actual time=16.368..16.369 rows=1 loops=1)\n -> Index Scan using my_key on test_select (cost=0.00..41981.16 rows=501333 width=21) (actual time=16.366..16.366 rows=1 loops=1) Index Cond: (key1 >= 500000) Total runtime: 16.444 ms\n(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=23.072..23.074 rows=1 loops=1)\n\n\n\n\n\n Index Cond: (id = 500000) Total runtime: 23.192 ms\nOn Mon, May 27, 2013 at 10:21 AM, Evgeny Shishkin <[email protected]> wrote:\n\nOn May 27, 2013, at 6:02 PM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that includes an ORDER BY? 
I was really hoping using the primary key would give me a boost.\n>\n\nYou created my_key after data loading, and PK was there all the time.\nIf you REINDEX PK, i bet it will be as fast.\n\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n>\n>\n>\n> Schema:\n>\n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers\n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default nextval('test_select_id_seq'::regclass)\n> key1 | integer |\n> key2 | integer |\n> key3 | integer |\n> data | character(4) |\n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n>\n> test=#\n>\n>\n>\n> Sample data:\n>\n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data\n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x\n> 2 | 817668 | 393533 | 924888 | x\n> 3 | 751039 | 798753 | 454309 | x\n> 4 | 128505 | 329643 | 280553 | x\n> 5 | 105600 | 257225 | 710015 | x\n> 6 | 323891 | 615614 | 83206 | x\n> 7 | 194054 | 63506 | 353171 | x\n> 8 | 212068 | 881225 | 271804 | x\n> 9 | 644180 | 26693 | 200738 | x\n> 10 | 136586 | 498699 | 554417 | x\n> (10 rows)\n>\n>\n>\n>\n> Here's how I populated the table:\n>\n> import psycopg2\n>\n> conn = psycopg2.connect('dbname=test')\n>\n> cur = conn.cursor()\n>\n> def random_int():\n> n = 1000000\n> return random.randint(0,n)\n>\n> def random_key():\n> return random_int(), random_int(), random_int()\n>\n> def create_table():\n> cur.execute('''\n> DROP TABLE IF EXISTS test_select;\n>\n> CREATE TABLE test_select (\n> id SERIAL PRIMARY KEY,\n> key1 INTEGER,\n> key2 INTEGER,\n> key3 INTEGER,\n> data char(4)\n> );\n> ''')\n> conn.commit()\n>\n> n = 1000000\n> for i in range(n):\n> cur.execute(\"INSERT INTO test_select(key1, key2, key3, data) VALUES(%s, %s, %s, 'x')\", random_key())\n> conn.commit()\n>\n> cur.execute('CREATE INDEX my_key ON test_select(key1, key2, key3, id)')\n> conn.commit()\n>\n> create_table()\n>",
"msg_date": "Tue, 28 May 2013 11:10:24 +0300",
"msg_from": "Roman Konoval <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
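For reference, BUFFERS is just an extra option to EXPLAIN (available in 9.1), applied here to the same two statements from the thread:

    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_select WHERE id = 500000;

    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test_select
    WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;

The added "Buffers: shared hit=... read=..." lines show how many pages each plan touched and how many actually came from disk, which is what the cold-cache timings are really measuring.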
{
"msg_contents": "On Mon, May 27, 2013 at 11:02 AM, John Mudd <[email protected]> wrote:\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that\n> includes an ORDER BY? I was really hoping using the primary key would give\n> me a boost.\n>\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n> /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n> ORDER BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n> rows=1 loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49\n> rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1\n> width=21) (actual time=31.396..31.398 rows=1 loops=1)\n> Index Cond: (id = 500000)\n> Total runtime: 31.504 ms\n>\n>\n>\n> Schema:\n>\n> test=# \\d test_select\n> Table \"public.test_select\"\n> Column | Type | Modifiers\n>\n>\n> --------+--------------+----------------------------------------------------------\n> id | integer | not null default\n> nextval('test_select_id_seq'::regclass)\n> key1 | integer |\n> key2 | integer |\n> key3 | integer |\n> data | character(4) |\n> Indexes:\n> \"test_select_pkey\" PRIMARY KEY, btree (id)\n> \"my_key\" btree (key1, key2, key3, id)\n>\n> test=#\n>\n>\n>\n> Sample data:\n>\n> test=# SELECT * FROM test_select LIMIT 10;\n> id | key1 | key2 | key3 | data\n> ----+--------+--------+--------+------\n> 1 | 984966 | 283954 | 772063 | x\n> 2 | 817668 | 393533 | 924888 | x\n> 3 | 751039 | 798753 | 454309 | x\n> 4 | 128505 | 329643 | 280553 | x\n> 5 | 105600 | 257225 | 710015 | x\n> 6 | 323891 | 615614 | 83206 | x\n> 7 | 194054 | 63506 | 353171 | x\n> 8 | 212068 | 881225 | 271804 | x\n> 9 | 644180 | 26693 | 200738 | x\n> 10 | 136586 | 498699 | 554417 | x\n> (10 rows)\n>\n>\n>\nFor me looks like \"my_key\" index should be better than the PK in this case.\nFor some reasons:\n\n1. You are using a ORDER BY that has the same fields (and at the same\norder) from your index, so PG only needs to navigate the index.\n2. You are using LIMIT 1, which means PG only needs to fetch the first\nelement which key1>=50000 (and stop the search right after it).\n\nIn the case of your PK, PG will need to navigate through the index and\nreturn only one value also, but in this case the number of entries it needs\nto look at is bigger, because \"id\" has more distinct values than \"key1\".\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, May 27, 2013 at 11:02 AM, John Mudd <[email protected]> wrote:\nPostgres 9.1.2 on Ubuntu 12.04Any reason why a select by primary key would be slower than a select that includes an ORDER BY? 
I was really hoping using the primary key would give me a boost. \nI stopped the server and cleared the O/S cache using \"sync; echo 3 > /proc/sys/vm/drop_caches\" between the runs.\n\ntest=# VACUUM ANALYZE test_select;VACUUM(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER BY key1, key2, key3, id LIMIT 1;\n QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1 loops=1) -> Index Scan using my_key on test_select (cost=0.00..41895.49 rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n Index Cond: (key1 >= 500000) Total runtime: 12.678 ms(stopped postgres; reset O/S cache; started postgres)test=# explain analyze SELECT * FROM test_select WHERE id = 500000;\n QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_select_pkey on test_select (cost=0.00..8.36 rows=1 width=21) (actual time=31.396..31.398 rows=1 loops=1) Index Cond: (id = 500000) Total runtime: 31.504 ms\nSchema:test=# \\d test_select Table \"public.test_select\" Column | Type | Modifiers \n--------+--------------+---------------------------------------------------------- id | integer | not null default nextval('test_select_id_seq'::regclass) key1 | integer | \n key2 | integer | key3 | integer | data | character(4) | Indexes: \"test_select_pkey\" PRIMARY KEY, btree (id) \"my_key\" btree (key1, key2, key3, id)\ntest=# Sample data:test=# SELECT * FROM test_select LIMIT 10; id | key1 | key2 | key3 | data \n----+--------+--------+--------+------ 1 | 984966 | 283954 | 772063 | x 2 | 817668 | 393533 | 924888 | x 3 | 751039 | 798753 | 454309 | x 4 | 128505 | 329643 | 280553 | x \n 5 | 105600 | 257225 | 710015 | x 6 | 323891 | 615614 | 83206 | x 7 | 194054 | 63506 | 353171 | x 8 | 212068 | 881225 | 271804 | x 9 | 644180 | 26693 | 200738 | x \n 10 | 136586 | 498699 | 554417 | x (10 rows)For me looks like \"my_key\" index should be better than the PK in this case. For some reasons:\n1. You are using a ORDER BY that has the same fields (and at the same order) from your index, so PG only needs to navigate the index.\n\n2. You are using LIMIT 1, which means PG only needs to fetch the first element which key1>=50000 (and stop the search right after it).In the case of your PK, PG will need to navigate through the index and return only one value also, but in this case the number of entries it needs to look at is bigger, because \"id\" has more distinct values than \"key1\".\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Tue, 28 May 2013 09:39:45 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
{
"msg_contents": "On Mon, May 27, 2013 at 9:02 AM, John Mudd <[email protected]> wrote:\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that\n> includes an ORDER BY? I was really hoping using the primary key would give\n> me a boost.\n>\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n> /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER\n> BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1\n> loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49\n> rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n\n\nwhy are you flushing postgres/os cache? when you do that, you are\nmeasuring raw read time from disks. Typical disk seek time is\nmeasured in milliseconds so the timings are completely appropriate\nonce you remove caching effects. Hard drives (at least, the spinning\nkind) are slow and one of the major challenges of database and\nhardware engineering is working around their limitations. Fortunately\nit looks like faster storage will soon be commonplace for reasonable\nprices.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 10:13:41 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
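Merlin's caching point can be checked directly from psql: the BUFFERS option of EXPLAIN (available in the 9.x releases discussed here) reports whether a plan's reads came from shared buffers or had to go to the OS cache/disk. A minimal sketch against the test table from this thread:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM test_select WHERE id = 500000;
-- Buffers: shared hit=N means the pages were already in shared buffers;
-- shared read=N means they came from the OS cache or disk, which is where
-- the extra tens of milliseconds of seek time go after a cache flush.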
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Merlin Moncure\n> Sent: Thursday, May 30, 2013 11:14 AM\n> To: John Mudd\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Slow SELECT by primary key? Postgres 9.1.2\n> \n> On Mon, May 27, 2013 at 9:02 AM, John Mudd <[email protected]> wrote:\n> > Postgres 9.1.2 on Ubuntu 12.04\n> >\n> > Any reason why a select by primary key would be slower than a select\n> > that includes an ORDER BY? I was really hoping using the primary key\n> > would give me a boost.\n> >\n> > I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n> > /proc/sys/vm/drop_caches\" between the runs.\n> >\n> >\n> >\n> > test=# VACUUM ANALYZE test_select;\n> > VACUUM\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> >\n> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n> > ORDER BY key1, key2, key3, id LIMIT 1;\n> > QUERY\n> > PLAN\n> > ---------------------------------------------------------------------\n> -\n> > ----------------------------------------------------------------\n> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n> > rows=1\n> > loops=1)\n> > -> Index Scan using my_key on test_select (cost=0.00..41895.49\n> > rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> > Index Cond: (key1 >= 500000)\n> > Total runtime: 12.678 ms\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> \n> \n> why are you flushing postgres/os cache? when you do that, you are\n> measuring raw read time from disks. Typical disk seek time is measured\n> in milliseconds so the timings are completely appropriate once you\n> remove caching effects. Hard drives (at least, the spinning\n> kind) are slow and one of the major challenges of database and hardware\n> engineering is working around their limitations. Fortunately it looks\n> like faster storage will soon be commonplace for reasonable prices.\n> \n> merlin\n> \n\nTrue.\nBut, on the hand (back to original question), \nexecution plans that John got before and after suggested change in configuration parameters are exactly the same, though timing is different but only due to buffer cache issue.\n\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 15:22:26 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
{
"msg_contents": "I flushed the caches in an attempt to get meaningful results. I've seen\ncomplaints to previous posts that don't include clearing the caches.\n\nI agree this tends to be artificial in another direction. I will strive to\ncome up with a more realistic test environment next time. Maybe performing\nmany random reads initially to fill the caches with random blocks. That\nmight allow for minimal assistance from the cache and be more realistic.\n\n\n\nOn Thu, May 30, 2013 at 11:13 AM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, May 27, 2013 at 9:02 AM, John Mudd <[email protected]> wrote:\n> > Postgres 9.1.2 on Ubuntu 12.04\n> >\n> > Any reason why a select by primary key would be slower than a select that\n> > includes an ORDER BY? I was really hoping using the primary key would\n> give\n> > me a boost.\n> >\n> > I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n> > /proc/sys/vm/drop_caches\" between the runs.\n> >\n> >\n> >\n> > test=# VACUUM ANALYZE test_select;\n> > VACUUM\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n> >\n> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n> ORDER\n> > BY key1, key2, key3, id LIMIT 1;\n> > QUERY PLAN\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n> rows=1\n> > loops=1)\n> > -> Index Scan using my_key on test_select (cost=0.00..41895.49\n> > rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> > Index Cond: (key1 >= 500000)\n> > Total runtime: 12.678 ms\n> >\n> > (stopped postgres; reset O/S cache; started postgres)\n>\n>\n> why are you flushing postgres/os cache? when you do that, you are\n> measuring raw read time from disks. Typical disk seek time is\n> measured in milliseconds so the timings are completely appropriate\n> once you remove caching effects. Hard drives (at least, the spinning\n> kind) are slow and one of the major challenges of database and\n> hardware engineering is working around their limitations. Fortunately\n> it looks like faster storage will soon be commonplace for reasonable\n> prices.\n>\n> merlin\n>\n\nI flushed the caches in an attempt to get meaningful results. I've seen complaints to previous posts that don't include clearing the caches.I agree this tends to be artificial in another direction. I will strive to come up with a more realistic test environment next time. Maybe performing many random reads initially to fill the caches with random blocks. That might allow for minimal assistance from the cache and be more realistic.\nOn Thu, May 30, 2013 at 11:13 AM, Merlin Moncure <[email protected]> wrote:\nOn Mon, May 27, 2013 at 9:02 AM, John Mudd <[email protected]> wrote:\n\n\n> Postgres 9.1.2 on Ubuntu 12.04\n>\n> Any reason why a select by primary key would be slower than a select that\n> includes an ORDER BY? 
I was really hoping using the primary key would give\n> me a boost.\n>\n> I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n> /proc/sys/vm/drop_caches\" between the runs.\n>\n>\n>\n> test=# VACUUM ANALYZE test_select;\n> VACUUM\n>\n> (stopped postgres; reset O/S cache; started postgres)\n>\n> test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000 ORDER\n> BY key1, key2, key3, id LIMIT 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600 rows=1\n> loops=1)\n> -> Index Scan using my_key on test_select (cost=0.00..41895.49\n> rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n> Index Cond: (key1 >= 500000)\n> Total runtime: 12.678 ms\n>\n> (stopped postgres; reset O/S cache; started postgres)\n\n\nwhy are you flushing postgres/os cache? when you do that, you are\nmeasuring raw read time from disks. Typical disk seek time is\nmeasured in milliseconds so the timings are completely appropriate\nonce you remove caching effects. Hard drives (at least, the spinning\nkind) are slow and one of the major challenges of database and\nhardware engineering is working around their limitations. Fortunately\nit looks like faster storage will soon be commonplace for reasonable\nprices.\n\nmerlin",
"msg_date": "Thu, 30 May 2013 11:23:51 -0400",
"msg_from": "John Mudd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
},
{
"msg_contents": "On Thu, May 30, 2013 at 10:22 AM, Igor Neyman <[email protected]> wrote:\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-performance-\n>> [email protected]] On Behalf Of Merlin Moncure\n>> Sent: Thursday, May 30, 2013 11:14 AM\n>> To: John Mudd\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] Slow SELECT by primary key? Postgres 9.1.2\n>>\n>> On Mon, May 27, 2013 at 9:02 AM, John Mudd <[email protected]> wrote:\n>> > Postgres 9.1.2 on Ubuntu 12.04\n>> >\n>> > Any reason why a select by primary key would be slower than a select\n>> > that includes an ORDER BY? I was really hoping using the primary key\n>> > would give me a boost.\n>> >\n>> > I stopped the server and cleared the O/S cache using \"sync; echo 3 >\n>> > /proc/sys/vm/drop_caches\" between the runs.\n>> >\n>> >\n>> >\n>> > test=# VACUUM ANALYZE test_select;\n>> > VACUUM\n>> >\n>> > (stopped postgres; reset O/S cache; started postgres)\n>> >\n>> > test=# explain analyze SELECT * FROM test_select WHERE key1 >= 500000\n>> > ORDER BY key1, key2, key3, id LIMIT 1;\n>> > QUERY\n>> > PLAN\n>> > ---------------------------------------------------------------------\n>> -\n>> > ----------------------------------------------------------------\n>> > Limit (cost=0.00..0.08 rows=1 width=21) (actual time=12.599..12.600\n>> > rows=1\n>> > loops=1)\n>> > -> Index Scan using my_key on test_select (cost=0.00..41895.49\n>> > rows=498724 width=21) (actual time=12.597..12.597 rows=1 loops=1)\n>> > Index Cond: (key1 >= 500000)\n>> > Total runtime: 12.678 ms\n>> >\n>> > (stopped postgres; reset O/S cache; started postgres)\n>>\n>>\n>> why are you flushing postgres/os cache? when you do that, you are\n>> measuring raw read time from disks. Typical disk seek time is measured\n>> in milliseconds so the timings are completely appropriate once you\n>> remove caching effects. Hard drives (at least, the spinning\n>> kind) are slow and one of the major challenges of database and hardware\n>> engineering is working around their limitations. Fortunately it looks\n>> like faster storage will soon be commonplace for reasonable prices.\n>>\n>> merlin\n>>\n>\n> True.\n> But, on the hand (back to original question),\n> execution plans that John got before and after suggested change in configuration parameters are exactly the same, though timing is different but only due to buffer cache issue.\n\nRight. Well, I think Matheus's answer is the right one. But my\npoint was that what's going here is we are measuring number of raw\nuncached seeks to satisfy query on index A vs B. Pure luck in terms\nof how the index data is organized could throw it off one way or the\nother. But the test methodology is bogus because the root index pages\nwill stay hot so the more compact pkey will likely be slightly faster\nin real world usage. (but, I prefer the composite key style of design\nespecially for range searching).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 10:59:07 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT by primary key? Postgres 9.1.2"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n\nRegards Niels Kristian\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 14:24:14 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best practice when reindexing in production"
},
{
"msg_contents": "On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Hi,\n>\n> I have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n\nIf you have the diskspaec, it's generally a good idea to do a CREATE\nINDEX CONCURRENTLY, and then rename the new one into place (typically\nin a transaction). (If your app, documentation or dba doesn't mind the\nindex changing names, you don't need to rename of course, you can just\ndrop the old one).\n\n\n--\n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 08:26:18 -0400",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
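A minimal sketch of the swap Magnus describes, using hypothetical names (my_idx on my_table); note that CREATE INDEX CONCURRENTLY itself cannot run inside a transaction block, so only the drop/rename pair is wrapped:

-- Build the replacement without blocking writes for the duration of the build.
CREATE INDEX CONCURRENTLY my_idx_new ON my_table (some_column);

-- Swap it in; these two statements only need a brief exclusive lock.
BEGIN;
DROP INDEX my_idx;
ALTER INDEX my_idx_new RENAME TO my_idx;
COMMIT;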
{
"msg_contents": "On Wed, May 29, 2013 at 2:26 PM, Magnus Hagander <[email protected]>wrote:\n\n> On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n> > Hi,\n> >\n> > I have a database with quite some data (millions of rows), that is\n> heavily updated all the time. Once a day I would like to reindex my\n> database (and maybe re cluster it - don't know if that's worth it yet?). I\n> need the database to be usable while doing this (both read and write). I\n> see that there is no way to REINDEX CONCURRENTLY - So what approach would\n> you suggest that I take on this?\n>\n> If you have the diskspaec, it's generally a good idea to do a CREATE\n> INDEX CONCURRENTLY, and then rename the new one into place (typically\n> in a transaction). (If your app, documentation or dba doesn't mind the\n> index changing names, you don't need to rename of course, you can just\n> drop the old one).\n>\n\nIf you wish to recluster it online you can also look into pg_repack -\nhttps://github.com/reorg/pg_repack Great tool allows you to repack and\nreindex your database without going offline.\n\nOn Wed, May 29, 2013 at 2:26 PM, Magnus Hagander <[email protected]> wrote:\nOn Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt\n\n\n\n<[email protected]> wrote:\n> Hi,\n>\n> I have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n\nIf you have the diskspaec, it's generally a good idea to do a CREATE\nINDEX CONCURRENTLY, and then rename the new one into place (typically\nin a transaction). (If your app, documentation or dba doesn't mind the\nindex changing names, you don't need to rename of course, you can just\ndrop the old one).If you wish to recluster it online you can also look into pg_repack - https://github.com/reorg/pg_repack Great tool allows you to repack and reindex your database without going offline.",
"msg_date": "Wed, 29 May 2013 14:30:45 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "I looked at pg_repack - however - is it \"safe\" for production? \nIt seems very intrusive and black-box-like to me...\n\n\nDen 29/05/2013 kl. 14.30 skrev Armand du Plessis <[email protected]>:\n\n> \n> On Wed, May 29, 2013 at 2:26 PM, Magnus Hagander <[email protected]> wrote:\n> On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n> > Hi,\n> >\n> > I have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n> \n> If you have the diskspaec, it's generally a good idea to do a CREATE\n> INDEX CONCURRENTLY, and then rename the new one into place (typically\n> in a transaction). (If your app, documentation or dba doesn't mind the\n> index changing names, you don't need to rename of course, you can just\n> drop the old one).\n> \n> If you wish to recluster it online you can also look into pg_repack - https://github.com/reorg/pg_repack Great tool allows you to repack and reindex your database without going offline. \n> \n\n\nI looked at pg_repack - however - is it \"safe\" for production? It seems very intrusive and black-box-like to me...Den 29/05/2013 kl. 14.30 skrev Armand du Plessis <[email protected]>:On Wed, May 29, 2013 at 2:26 PM, Magnus Hagander <[email protected]> wrote:\nOn Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt\n\n\n\n<[email protected]> wrote:\n> Hi,\n>\n> I have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n\nIf you have the diskspaec, it's generally a good idea to do a CREATE\nINDEX CONCURRENTLY, and then rename the new one into place (typically\nin a transaction). (If your app, documentation or dba doesn't mind the\nindex changing names, you don't need to rename of course, you can just\ndrop the old one).If you wish to recluster it online you can also look into pg_repack - https://github.com/reorg/pg_repack Great tool allows you to repack and reindex your database without going offline.",
"msg_date": "Wed, 29 May 2013 14:38:38 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "Thanks\n\nCan you think of a way to select all the indexes programmatically from a table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't have to hardcode every index name + create statement ?\n\n\n\nDen 29/05/2013 kl. 14.26 skrev Magnus Hagander <[email protected]>:\n\n> On Wed, May 29, 2013 at 8:24 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> Hi,\n>> \n>> I have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n> \n> If you have the diskspaec, it's generally a good idea to do a CREATE\n> INDEX CONCURRENTLY, and then rename the new one into place (typically\n> in a transaction). (If your app, documentation or dba doesn't mind the\n> index changing names, you don't need to rename of course, you can just\n> drop the old one).\n> \n> \n> --\n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 14:41:55 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "On Wed, May 29, 2013 at 8:41 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Thanks\n>\n> Can you think of a way to select all the indexes programmatically from a table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't have to hardcode every index name + create statement ?\n\nYou can use something like SELECT pg_get_indexdef(indexrelid) FROM\npg_index. You will need to filter it not to include system indexes,\ntoast, etc, and then insert the CONCURRENCY part, but it should give\nyou a good startingpoint.\n\n\n--\n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 09:08:40 -0400",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
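One possible shape for the catalog query Magnus sketches, filtering out system schemas; this is only a starting point, since constraint-backed indexes still need separate handling (see the follow-ups below):

SELECT pg_get_indexdef(i.indexrelid) AS create_stmt
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'pg_toast', 'information_schema')
  AND NOT i.indisprimary;  -- primary-key indexes need their constraint swapped too
-- The output still needs CREATE INDEX rewritten to CREATE INDEX CONCURRENTLY
-- and a temporary index name substituted, as in the regexp_replace() version
-- later in this thread.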
{
"msg_contents": "On Wed, May 29, 2013 at 9:41 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> Thanks\n>\n> Can you think of a way to select all the indexes programmatically from a\n> table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't\n> have to hardcode every index name + create statement ?\n>\n>\n>\nYou could do something like this (which considers you use simple names for\nyour indexes, where simple ~ [a-z_][a-z0-9_]*):\n\nSELECT\nregexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)',\n'CREATE\\1 INDEX CONCURRENTLY tmp_\\2 ON \\3;') || E'\\n'\n|| E'BEGIN;\\n'\n|| 'DROP INDEX ' || i.indexname || E';\\n'\n|| 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname ||\nE';\\n'\n|| E'COMMIT;\\n'\nFROM pg_indexes i\nWHERE schemaname !~ '^(pg_|information_schema$)';\n\nAlthough this one is *really simple* and *error phrone*, because it does\nnot consider at least two things: index that are constraints and index that\nhas FK depending on it. For the first case, you only need to change the\nconstraint to use the index and the DROP command. As for the second case,\nyou would need to remove the FKs, drop the old one and recreate the FK\n(inside a transaction, of course), but this could be really slow, a reindex\nfor this case would be simpler and perhaps faster.\n\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Wed, May 29, 2013 at 9:41 AM, Niels Kristian Schjødt <[email protected]> wrote:\nThanks\n\nCan you think of a way to select all the indexes programmatically from a table and run CREATE INDEX CONCURRENTLY for each of them, so that I don't have to hardcode every index name + create statement ?\n\nYou could do something like this (which considers you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):\nSELECT regexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\\1 INDEX CONCURRENTLY tmp_\\2 ON \\3;') || E'\\n'\n|| E'BEGIN;\\n'|| 'DROP INDEX ' || i.indexname || E';\\n'|| 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\\n'\n|| E'COMMIT;\\n'FROM pg_indexes iWHERE schemaname !~ '^(pg_|information_schema$)';\nAlthough this one is *really simple* and *error phrone*, because it does not consider at least two things: index that are constraints and index that has FK depending on it. For the first case, you only need to change the constraint to use the index and the DROP command. As for the second case, you would need to remove the FKs, drop the old one and recreate the FK (inside a transaction, of course), but this could be really slow, a reindex for this case would be simpler and perhaps faster.\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Wed, 29 May 2013 10:12:18 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "\n\nYou could do something like this (which considers you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):\n\nSELECT \nregexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\\1 INDEX CONCURRENTLY tmp_\\2 ON \\3;') || E'\\n'\n|| E'BEGIN;\\n'\n|| 'DROP INDEX ' || i.indexname || E';\\n'\n|| 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\\n'\n|| E'COMMIT;\\n'\nFROM pg_indexes i\nWHERE schemaname !~ '^(pg_|information_schema$)';\n\nAlthough this one is *really simple* and *error phrone*, because it does not consider at least two things: index that are constraints and index that has FK depending on it. For the first case, you only need to change the constraint to use the index and the DROP command. As for the second case, you would need to remove the FKs, drop the old one and recreate the FK (inside a transaction, of course), but this could be really slow, a reindex for this case would be simpler and perhaps faster.\n\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\n\nI must be missing something here.\nBut, how is that FK depends on the index?\nI understand FK lookup works much faster with the index supporting FK than without it, but you could have FK without index (on the \"child\" table).\nSo, what gives?\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 13:55:49 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "On Wed, May 29, 2013 at 10:55 AM, Igor Neyman <[email protected]>wrote:\n\n>\n>\n> You could do something like this (which considers you use simple names for\n> your indexes, where simple ~ [a-z_][a-z0-9_]*):\n>\n> SELECT\n> regexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)',\n> 'CREATE\\1 INDEX CONCURRENTLY tmp_\\2 ON \\3;') || E'\\n'\n> || E'BEGIN;\\n'\n> || 'DROP INDEX ' || i.indexname || E';\\n'\n> || 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname ||\n> E';\\n'\n> || E'COMMIT;\\n'\n> FROM pg_indexes i\n> WHERE schemaname !~ '^(pg_|information_schema$)';\n>\n> Although this one is *really simple* and *error phrone*, because it does\n> not consider at least two things: index that are constraints and index that\n> has FK depending on it. For the first case, you only need to change the\n> constraint to use the index and the DROP command. As for the second case,\n> you would need to remove the FKs, drop the old one and recreate the FK\n> (inside a transaction, of course), but this could be really slow, a reindex\n> for this case would be simpler and perhaps faster.\n>\n> =================\n>\n> I must be missing something here.\n> But, how is that FK depends on the index?\n> I understand FK lookup works much faster with the index supporting FK than\n> without it, but you could have FK without index (on the \"child\" table).\n> So, what gives?\n>\n>\nAFAIK, when you create a FK, PostgreSQL associate it with an UNIQUE INDEX\non the target table. It creates an entry on pg_depends (I don't know if\nsomewhere else), and when you try to drop the index, even if there is an\nidentical one that PGs could use, it will throw an error.\n\nYou can easily check this:\n\npostgres=# CREATE TABLE parent(id int);\nCREATE TABLE\npostgres=# CREATE UNIQUE INDEX parent_idx1 ON parent (id);\nCREATE INDEX\npostgres=# CREATE TABLE child(idparent int REFERENCES parent (id));\nCREATE TABLE\npostgres=# CREATE UNIQUE INDEX parent_idx2 ON parent (id);\nCREATE INDEX\npostgres=# DROP INDEX parent_idx1;\nERROR: cannot drop index parent_idx1 because other objects depend on it\nDETAIL: constraint child_idparent_fkey on table child depends on index\nparent_idx1\nHINT: Use DROP ... CASCADE to drop the dependent objects too.\n\nBTW, I do think PostgreSQL could verify if there is another candidate to\nthis FK. Is it in TODO list? Should it be?\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Wed, May 29, 2013 at 10:55 AM, Igor Neyman <[email protected]> wrote:\n\n\nYou could do something like this (which considers you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):\n\nSELECT \nregexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\\1 INDEX CONCURRENTLY tmp_\\2 ON \\3;') || E'\\n'\n|| E'BEGIN;\\n'\n|| 'DROP INDEX ' || i.indexname || E';\\n'\n|| 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\\n'\n|| E'COMMIT;\\n'\nFROM pg_indexes i\nWHERE schemaname !~ '^(pg_|information_schema$)';\n\nAlthough this one is *really simple* and *error phrone*, because it does not consider at least two things: index that are constraints and index that has FK depending on it. For the first case, you only need to change the constraint to use the index and the DROP command. 
As for the second case, you would need to remove the FKs, drop the old one and recreate the FK (inside a transaction, of course), but this could be really slow, a reindex for this case would be simpler and perhaps faster.\n=================\n\nI must be missing something here.\nBut, how is that FK depends on the index?\nI understand FK lookup works much faster with the index supporting FK than without it, but you could have FK without index (on the \"child\" table).\nSo, what gives?\nAFAIK, when you create a FK, PostgreSQL associate it with an UNIQUE INDEX on the target table. It creates an entry on pg_depends (I don't know if somewhere else), and when you try to drop the index, even if there is an identical one that PGs could use, it will throw an error.\nYou can easily check this:postgres=# CREATE TABLE parent(id int);CREATE TABLE\npostgres=# CREATE UNIQUE INDEX parent_idx1 ON parent (id);CREATE INDEXpostgres=# CREATE TABLE child(idparent int REFERENCES parent (id));\nCREATE TABLEpostgres=# CREATE UNIQUE INDEX parent_idx2 ON parent (id);CREATE INDEX\npostgres=# DROP INDEX parent_idx1;ERROR: cannot drop index parent_idx1 because other objects depend on itDETAIL: constraint child_idparent_fkey on table child depends on index parent_idx1\nHINT: Use DROP ... CASCADE to drop the dependent objects too.BTW, I do think PostgreSQL could verify if there is another candidate to this FK. Is it in TODO list? Should it be?\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Wed, 29 May 2013 11:19:02 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
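Before scripting a swap it can help to list which constraints pin a given index. A hedged sketch against pg_depend, using the example names from the demonstration above (parent_idx1, child_idparent_fkey):

SELECT con.conname AS dependent_constraint,
       d.refobjid::regclass AS on_index
FROM pg_depend d
JOIN pg_constraint con ON con.oid = d.objid
WHERE d.classid = 'pg_constraint'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.refobjid = 'parent_idx1'::regclass;
-- Any rows returned (e.g. child_idparent_fkey) will block a plain
-- DROP INDEX on parent_idx1, as shown above.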
{
"msg_contents": "\n\nFrom: Matheus de Oliveira [mailto:[email protected]] \nSent: Wednesday, May 29, 2013 10:19 AM\nTo: Igor Neyman\nCc: Niels Kristian Schjødt; Magnus Hagander; [email protected] list\nSubject: Re: [PERFORM] Best practice when reindexing in production\n\n\n\nOn Wed, May 29, 2013 at 10:55 AM, Igor Neyman <[email protected]> wrote:\n\n\nYou could do something like this (which considers you use simple names for your indexes, where simple ~ [a-z_][a-z0-9_]*):\n\nSELECT \nregexp_replace(i.indexdef, '^CREATE( UNIQUE)? INDEX (.*) ON (.*)', 'CREATE\\1 INDEX CONCURRENTLY tmp_\\2 ON \\3;') || E'\\n'\n|| E'BEGIN;\\n'\n|| 'DROP INDEX ' || i.indexname || E';\\n'\n|| 'ALTER INDEX tmp_' || i.indexname || ' RENAME TO ' || i.indexname || E';\\n'\n|| E'COMMIT;\\n'\nFROM pg_indexes i\nWHERE schemaname !~ '^(pg_|information_schema$)';\n\nAlthough this one is *really simple* and *error phrone*, because it does not consider at least two things: index that are constraints and index that has FK depending on it. For the first case, you only need to change the constraint to use the index and the DROP command. As for the second case, you would need to remove the FKs, drop the old one and recreate the FK (inside a transaction, of course), but this could be really slow, a reindex for this case would be simpler and perhaps faster.\n\n=================\nI must be missing something here.\nBut, how is that FK depends on the index?\nI understand FK lookup works much faster with the index supporting FK than without it, but you could have FK without index (on the \"child\" table).\nSo, what gives?\n\nAFAIK, when you create a FK, PostgreSQL associate it with an UNIQUE INDEX on the target table. It creates an entry on pg_depends (I don't know if somewhere else), and when you try to drop the index, even if there is an identical one that PGs could use, it will throw an error.\n\nYou can easily check this:\n\npostgres=# CREATE TABLE parent(id int);\nCREATE TABLE\npostgres=# CREATE UNIQUE INDEX parent_idx1 ON parent (id);\nCREATE INDEX\npostgres=# CREATE TABLE child(idparent int REFERENCES parent (id));\nCREATE TABLE\npostgres=# CREATE UNIQUE INDEX parent_idx2 ON parent (id);\nCREATE INDEX\npostgres=# DROP INDEX parent_idx1;\nERROR: cannot drop index parent_idx1 because other objects depend on it\nDETAIL: constraint child_idparent_fkey on table child depends on index parent_idx1\nHINT: Use DROP ... CASCADE to drop the dependent objects too.\n\nBTW, I do think PostgreSQL could verify if there is another candidate to this FK. Is it in TODO list? Should it be?\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\n\nSo, it's about index on parent table that's used for unique (or PK) constraint and referenced by FK on child table.\n From your previous email I thought that index on child table supporting FK (which is mostly created for performance purposes) cannot be dropped without disabling FK. My bad.\n\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 16:35:43 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "On 29/05/13 14:24, Niels Kristian Schj�dt wrote:On 29/05/13 14:24, Niels \nKristian Schj�dt wrote:\n> Hi,\n>\n> I have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\n\nHi.\n\nSince you still dont know wether it is worth it or not, I would strongly \nsuggest that you test this out before. Simply just creating an index \nnext to the old one with the same options (but different name) and \ncompare sizes would be simple.\n\nSecond, if the new index is significantly smaller than the old on, I \nsuggest that you try to crank up the autovacuum daemon instead of \nblindly dropping and creating indexes, this will help to mitigate the \nbloat you're seeing accumulating in above test.\n\nCranking up autovacuum is going to have significan less impact on the \nconcurrent queries while doing it and can help to maintain the database \nin a shape where regular re-indexings shouldnt be nessesary. Autovacuum \nhas build in logic to sleep inbetween operations in order to reduce the \nIO-load of you system for the benefit of concurrent users. The approach \nof duplicate indices will pull all the resources it can get and \nconcurrent users may suffer while you do it..\n\nJesper\n\n-- \nJesper\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 19:12:55 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "On Wed, 2013-05-29 at 19:12 +0200, Jesper Krogh wrote:\n\n> Second, if the new index is significantly smaller than the old on, I \n> suggest that you try to crank up the autovacuum daemon instead of \n> blindly dropping and creating indexes, this will help to mitigate the \n> bloat you're seeing accumulating in above test.\n\nIn my experience vacuum/autovacuum just don't reclaim any space from the\nindexes, which accumulate bloat indefinitely. I've tried to work around\nthat in so many ways: the show-stopper has been the impossibility to\ndrop FK indexes in a concurrent way, coupled with VALIDATE CONSTRAINT\nnot doing what advertised and taking an exclusive lock.\n\nMy solution has been to become pg_repack maintainer. YMMV. Just don't\nexpect vacuum to reduce the indexes size: it doesn't.\n\n\n-- \nDaniele\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 18:25:21 +0100",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "On Wednesday, May 29, 2013 06:25:21 PM Daniele Varrazzo wrote:\n> My solution has been to become pg_repack maintainer. YMMV. Just don't\n> expect vacuum to reduce the indexes size: it doesn't.\n\nIt's not supposed to. It is supposed to keep them from indefinitely growing, \nthough, which it does reasonably well at.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 10:47:04 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
{
"msg_contents": "On Wed, May 29, 2013 at 6:47 PM, Alan Hodgson <[email protected]> wrote:\n> On Wednesday, May 29, 2013 06:25:21 PM Daniele Varrazzo wrote:\n>> My solution has been to become pg_repack maintainer. YMMV. Just don't\n>> expect vacuum to reduce the indexes size: it doesn't.\n>\n> It's not supposed to. It is supposed to keep them from indefinitely growing,\n> though, which it does reasonably well at.\n\nMy experience is different. I've repeated this test often. This is PG 9.1:\n\npiro=# create table test (id serial primary key);\nNOTICE: CREATE TABLE will create implicit sequence \"test_id_seq\" for\nserial column \"test.id\"\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"test_pkey\" for table \"test\"\nCREATE TABLE\npiro=# insert into test (id) select generate_series(1,10000000);\nINSERT 0 10000000\n\nThe table size is:\n\npiro=# select pg_size_pretty(pg_relation_size('test'::regclass));\n pg_size_pretty\n----------------\n 306 MB\n(1 row)\n\n...and the index size is:\n\npiro=# select pg_size_pretty(pg_relation_size('test_pkey'::regclass));\n pg_size_pretty\n----------------\n 171 MB\n(1 row)\n\npiro=# delete from test where id <= 9900000;\nDELETE 9900000\n\npiro=# select pg_size_pretty(pg_relation_size('test'::regclass)),\npg_size_pretty(pg_relation_size('test_pkey'::regclass));\n pg_size_pretty | pg_size_pretty\n----------------+----------------\n 306 MB | 171 MB\n(1 row)\n\nMy statement is that vacuum doesn't reclaim any space. Maybe sometimes\nin the tables, but never in the index, in my experience.\n\npiro=# vacuum test;\nVACUUM\npiro=# select pg_size_pretty(pg_relation_size('test'::regclass)),\npg_size_pretty(pg_relation_size('test_pkey'::regclass));\n pg_size_pretty | pg_size_pretty\n----------------+----------------\n 306 MB | 171 MB\n(1 row)\n\nVacuum full is a different story, but doesn't work online.\n\npiro=# vacuum full test;\nVACUUM\npiro=# select pg_size_pretty(pg_relation_size('test'::regclass)),\npg_size_pretty(pg_relation_size('test_pkey'::regclass));\n pg_size_pretty | pg_size_pretty\n----------------+----------------\n 3144 kB | 1768 kB\n\n\nIn our live system we have a small table of active records in a\ntransient state. No record stages there for a long time. The size of\nthe table stays reasonable (but not really stable) but not the\nindexes. One of them (friendly labeled \"the index of death\") is 5-6\ncolumns wide and, given enough time, regularly grows into the\ngigabytes for a table in the order of the ~100k records, only tamed by\na pg_repack treatment (previously by a create concurrently and drop).\n\n\n-- Daniele\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 20:22:13 +0100",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
},
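For keeping an eye on the kind of growth Daniele describes, a simple size report (a generic sketch, not tied to any particular schema) makes it easier to decide when a rebuild or a pg_repack run is actually warranted:

SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS index_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'
  AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 20;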
{
"msg_contents": "On Wednesday, May 29, 2013, Niels Kristian Schjødt wrote:\n\n> Hi,\n>\n> I have a database with quite some data (millions of rows), that is heavily\n> updated all the time. Once a day I would like to reindex my database (and\n> maybe re cluster it - don't know if that's worth it yet?). I need the\n> database to be usable while doing this (both read and write). I see that\n> there is no way to REINDEX CONCURRENTLY - So what approach would you\n> suggest that I take on this?\n>\n\nI think the \"best practice\" is...not to do it in the first place.\n\nThere are some cases where it probably makes sense to reindex on a regular\nschedule. But unless you can specifically identify why you have one of\nthose cases, then you probably don't have one.\n\nCheers,\n\nJeff\n\nOn Wednesday, May 29, 2013, Niels Kristian Schjødt wrote:Hi,\n\nI have a database with quite some data (millions of rows), that is heavily updated all the time. Once a day I would like to reindex my database (and maybe re cluster it - don't know if that's worth it yet?). I need the database to be usable while doing this (both read and write). I see that there is no way to REINDEX CONCURRENTLY - So what approach would you suggest that I take on this?\nI think the \"best practice\" is...not to do it in the first place.There are some cases where it probably makes sense to reindex on a regular schedule. But unless you can specifically identify why you have one of those cases, then you probably don't have one. \nCheers,Jeff",
"msg_date": "Fri, 31 May 2013 19:38:19 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice when reindexing in production"
}
] |
[
{
"msg_contents": "We re-tested these settings a few times after our initial test and realized that the execution time I posted was shewed, because the execution plan was cached after the initial run. Subsequent executions ran in a little over a second.\nThere ended up being no significant saving by setting these parameters. Un-cached the query ran in about 55 seconds. \n \n\n-------- Original Message --------Subject: Re: [GENERAL] [PERFORM] Very slow inner join query Unacceptablelatency.From: Scott Marlowe <[email protected]>Date: Fri, May 24, 2013 3:03 pmTo: [email protected]: Jaime Casanova <[email protected]>, psql performance list<[email protected]>, Postgres General<[email protected]>On Fri, May 24, 2013 at 3:44 PM, <[email protected]> wrote:> Total runtime: 1606.728 ms 1.6 seconds <- very good response time> improvement>> (7 rows)>> Questions:>> Any concerns with setting these conf variables you recommended; work_mem,> random_page_cost dbserver wide (in postgresql,conf)?>> Thanks so much!!!Yes 500MB is pretty high especially if you have a lot of connections.Try it with it back down to 16MB and see how it does. Work mem is persort so a setting as high as 500MB can exhaust memory on the machineunder heavy load.--To understand recursion, one must first understand recursion.\n",
"msg_date": "Wed, 29 May 2013 07:44:19 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Very slow inner join query Unacceptable latency."
}
] |
[
{
"msg_contents": "Folks,\n\nI'm seeing what may be a major performance bug in BIND in 9.2.4.\n\nWe have a client who has an application which uses\nTomcat+Hibernate+JDBC. They are in the process of upgrading this\napplication from 8.4.17 to 9.2.4. As part of this, they have been doing\nperformance testing, and 9.2 is coming out MUCH worse than 8.4. The\nproblem appears to be bind/plan time.\n\nTheir application does not use prepared queries usefully, doing\nparse,bind,execute on every query cycle.\n\nHere's timings overall for 29 test cycles (cycle 1 has been omitted).\nAs you can see, parse+execute times are pretty much constant, as are\napplication think times, but bind times vary quite a lot. In 8.4, the\n29 cycles are constantly 4.5min to 5.75min long. In 9.2, which is the\nchart below, they are all over the place.\n\nDefinitions:\ncycle: test cycle #, arbitrary. Each cycle does the same amount of\n\"work\" measured in rows of data.\nnon_bind_time: time spent in PARSE and EXECUTE.\nbind_time: time spent in BIND\napp_time: time spent outside Postgres\nall times are in minutes:\n\n cycle | non_bind_time | bind_time | app_time | total_time\n-------+---------------+-----------+----------+------------\n 2 | 0.79 | 0.62 | 3.19 | 4.60\n 3 | 0.77 | 0.87 | 3.13 | 4.77\n 4 | 0.76 | 1.10 | 3.16 | 5.02\n 5 | 0.76 | 1.26 | 3.08 | 5.10\n 6 | 0.72 | 1.40 | 3.08 | 5.20\n 7 | 0.72 | 1.51 | 3.05 | 5.28\n 8 | 0.70 | 1.60 | 3.07 | 5.37\n 9 | 0.73 | 1.72 | 3.05 | 5.50\n 10 | 0.71 | 1.84 | 3.05 | 5.60\n 11 | 0.70 | 1.96 | 3.07 | 5.73\n 12 | 0.74 | 2.11 | 3.08 | 5.93\n 13 | 0.74 | 3.58 | 3.08 | 7.40\n 14 | 0.73 | 2.41 | 3.08 | 6.22\n 15 | 0.75 | 4.15 | 3.08 | 7.98\n 16 | 0.74 | 2.69 | 3.09 | 6.52\n 17 | 0.76 | 4.68 | 3.09 | 8.53\n 18 | 0.74 | 2.99 | 3.09 | 6.82\n 19 | 0.77 | 5.24 | 3.11 | 9.12\n 20 | 0.75 | 3.29 | 3.08 | 7.12\n 21 | 0.78 | 5.90 | 3.14 | 9.82\n 22 | 0.78 | 3.57 | 3.12 | 7.47\n 23 | 0.76 | 6.17 | 3.10 | 10.03\n 24 | 0.77 | 6.61 | 3.10 | 10.48\n 25 | 0.77 | 3.97 | 3.11 | 7.85\n 26 | 0.77 | 5.24 | 3.12 | 9.13\n 27 | 0.76 | 7.15 | 3.12 | 11.03\n 28 | 0.76 | 4.37 | 3.10 | 8.23\n 29 | 0.78 | 4.48 | 3.12 | 8.38\n 30 | 0.76 | 7.73 | 3.11 | 11.60\n\nI pulled out some of the queries with the greatest variance in bind\ntime. Unexpectedly, they are not particularly complex. Here's the\nanonymized plan for a query which in the logs took 80ms to bind:\n\nhttp://explain.depesz.com/s/YSj\n\nNested Loop (cost=8.280..26.740 rows=1 width=289)\n -> Nested Loop (cost=8.280..18.450 rows=1 width=248)\n -> Hash Join (cost=8.280..10.170 rows=1 width=140)\n Hash Cond: (foxtrot2kilo_oscar.quebec_seven =\nkilo_juliet1kilo_oscar.sierra_quebec)\n -> Seq Scan on foxtrot november (cost=0.000..1.640\nrows=64 width=25)\n -> Hash (cost=8.270..8.270 rows=1 width=115)\n -> Index Scan using quebec_six on victor_india\nsierra_oscar (cost=0.000..8.270 rows=1 width=115)\n Index Cond: (quebec_seven = 10079::bigint)\n -> Index Scan using alpha on seven_tango lima\n(cost=0.000..8.270 rows=1 width=108)\n Index Cond: ((xray = 10079::bigint) AND (golf =\n10002::bigint))\n -> Index Scan using six on india victor_romeo (cost=0.000..8.280\nrows=1 width=41)\n Index Cond: (quebec_seven = seven_victor0kilo_oscar.delta)\n\nAs you can see, it's not particularly complex; it only joins 4 tables,\nand it has 2 parameters. 
This database does have some horrible ugly\nqueries with up to 500 parameters, but inexplicably those don't take a\nparticularly long time to bind.\n\nNote that I have not been able to reproduce this long bind time\ninteractively, but it's 100% reproducable in the test.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 17:14:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 05/29/2013 05:14 PM, Josh Berkus wrote:\n> Here's timings overall for 29 test cycles (cycle 1 has been omitted).\n> As you can see, parse+execute times are pretty much constant, as are\n> application think times, but bind times vary quite a lot. In 8.4, the\n> 29 cycles are constantly 4.5min to 5.75min long. In 9.2, which is the\n> chart below, they are all over the place.\n\nTo be clear, the TOTAL times for 8.4 are 4.5 to 5.75 minutes long. Bind\ntimes in 8.4 are a more-or-less constant 0.75 minutes.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 May 2013 17:59:09 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Thursday, May 30, 2013 5:45 AM Josh Berkus wrote:\n> Folks,\n> \n> I'm seeing what may be a major performance bug in BIND in 9.2.4.\n> \n> We have a client who has an application which uses\n> Tomcat+Hibernate+JDBC. They are in the process of upgrading this\n> application from 8.4.17 to 9.2.4. As part of this, they have been\n> doing\n> performance testing, and 9.2 is coming out MUCH worse than 8.4. The\n> problem appears to be bind/plan time.\n> \n> Their application does not use prepared queries usefully, doing\n> parse,bind,execute on every query cycle.\n> \n> Here's timings overall for 29 test cycles (cycle 1 has been omitted).\n> As you can see, parse+execute times are pretty much constant, as are\n> application think times, but bind times vary quite a lot. In 8.4, the\n> 29 cycles are constantly 4.5min to 5.75min long. In 9.2, which is the\n> chart below, they are all over the place.\n\nI think it might be because of choosing custom plan option due to which it might be generating new plan during exec_bind_message().\nexec_bind_message()->GetCachedPlan()->choose_custom_plan(). If it chooses custom plan, then it will regenerate the plan which can cause extra cost\nobserved in test.\nThough there is calculation that it should not choose custom plan always, but still I guess the variation observed in the test can be due to this reason.\n\nTo test if this is the cause, we might hack the code such that it always chooses generic plan, so that it doesn't need to generate plan again.\n\nWith Regards,\nAmit Kapila.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 19:25:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Amit,\n\n> I think it might be because of choosing custom plan option due to which it might be generating new plan during exec_bind_message().\n> exec_bind_message()->GetCachedPlan()->choose_custom_plan(). If it chooses custom plan, then it will regenerate the plan which can cause extra cost\n> observed in test.\n> Though there is calculation that it should not choose custom plan always, but still I guess the variation observed in the test can be due to this reason.\n\nThis is why I'm asking them to run tests on 9.1. If 9.1 doesn't exhibit\nthis behavior, then customplan is liable to be at fault.\n\nHOWEVER, that doesn't explain why creating a plan for a query during\napplication operation would take 80ms, but only 1.2ms when I do it\ninteractively.\n\nFYI, per questions from IRC: the times for each \"cycle\" in my data are\ncumulative minutes. Each cycle runs around 500,000 queries, so that's\nthe aggregate across all queries.\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 11:06:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\n> This is why I'm asking them to run tests on 9.1. If 9.1 doesn't exhibit\n> this behavior, then customplan is liable to be at fault.\n\n9.1 shows the same performance as 9.2. So it's not the custom plan thing.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 13:18:44 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Thursday, May 30, 2013 11:36 PM Josh Berkus wrote:\n> Amit,\n> \n> > I think it might be because of choosing custom plan option due to\n> which it might be generating new plan during exec_bind_message().\n> > exec_bind_message()->GetCachedPlan()->choose_custom_plan(). If it\n> chooses custom plan, then it will regenerate the plan which can cause\n> extra cost\n> > observed in test.\n> > Though there is calculation that it should not choose custom plan\n> always, but still I guess the variation observed in the test can be due\n> to this reason.\n> \n> This is why I'm asking them to run tests on 9.1. If 9.1 doesn't\n> exhibit\n> this behavior, then customplan is liable to be at fault.\n> \n> HOWEVER, that doesn't explain why creating a plan for a query during\n> application operation would take 80ms, but only 1.2ms when I do it\n> interactively.\n\nWhen you say interactively, does it mean that you are using psql to test the same?\n\n> FYI, per questions from IRC: the times for each \"cycle\" in my data are\n> cumulative minutes. Each cycle runs around 500,000 queries, so that's\n> the aggregate across all queries.\n\nToday I tried to see the changes between 8.4 and 9.1 for bind path in server. Following is summary of whatever I could see the differences\n\n1. 4 new parameters are added to ParamListInfo, for which palloc is done in exec_bind_message\n2. changed function for converting client to server encoding, but it seems for bind path, it will still follow same path as for 8.4\n2. small change in RevalidateCachedPlan() for new hook added in ParamListInfo\n3. standard_ExecutorStart(), changes to setup After Statement Trigger context\n4. InitPlan has some changes for FOR UPDATE/FOR SELECT statements and junk filter case (update/delete statements)\n\n From the changes, it doesn't seem that any of such changes can cause the problem you have seen.\n\nDo you think it can be due to\na. JDBC - communication, encoding or some other changes\nb. can we assume that plans generated for all statements are same, if not it might have some cost for query plan initialization (InitPlan) but again it should not be that big cost.\n\nHow do measure individual bind time cost (is the cost of only server side or it includes client bind or ..)?\n\nWith Regards,\nAmit Kapila.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 31 May 2013 13:09:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\n> When you say interactively, does it mean that you are using psql to test the same?\n\nYes. Of course, on the psql command line, there's no separate BIND\nstep, just PREPARE and EXECUTE.\n\n> From the changes, it doesn't seem that any of such changes can cause the problem you have seen.\n\nNo, but clearly in this case something is broken.\n\n> Do you think it can be due to\n> a. JDBC - communication, encoding or some other changes\n> b. can we assume that plans generated for all statements are same, if not it might have some cost for query plan initialization (InitPlan) but again it should not be that big cost.\n\nI don't think we can assume that, no.\n\n> How do measure individual bind time cost (is the cost of only server side or it includes client bind or ..)?\n\n From the PostgreSQL activity log. BIND gets logged separately.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 31 May 2013 11:54:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Amit, All:\n\nSo we just retested this on 9.3b2. The performance is the same as 9.1\nand 9.2; that is, progressively worse as the test cycles go on, and\nunacceptably slow compared to 8.4.\n\nSome issue introduced in 9.1 is causing BINDs to get progressively\nslower as the PARSEs BINDs get run repeatedly. Per earlier on this\nthread, that can bloat to 200X time required for a BIND, and it's\ndefinitely PostgreSQL-side.\n\nI'm trying to produce a test case which doesn't involve the user's\napplication. However, hints on other things to analyze would be keen.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Aug 2013 10:58:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n> Amit, All:\n>\n> So we just retested this on 9.3b2. The performance is the same as 9.1\n> and 9.2; that is, progressively worse as the test cycles go on, and\n> unacceptably slow compared to 8.4.\n>\n> Some issue introduced in 9.1 is causing BINDs to get progressively\n> slower as the PARSEs BINDs get run repeatedly. Per earlier on this\n> thread, that can bloat to 200X time required for a BIND, and it's\n> definitely PostgreSQL-side.\n>\n> I'm trying to produce a test case which doesn't involve the user's\n> application. However, hints on other things to analyze would be keen.\n\nDoes it seem to be all CPU time (it is hard to imagine what else it\nwould be, but...)\n\nCould you use oprofile or perf or gprof to get a profile of the\nbackend during a run? That should quickly narrow it down to which C\nfunction has the problem.\n\nDid you test 9.0 as well?\n\nIf the connection is dropped and re-established between \"cycles\" does\nthe problem still show up?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 12:20:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\nOn 08/01/2013 03:20 PM, Jeff Janes wrote:\n> On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n>> Amit, All:\n>>\n>> So we just retested this on 9.3b2. The performance is the same as 9.1\n>> and 9.2; that is, progressively worse as the test cycles go on, and\n>> unacceptably slow compared to 8.4.\n>>\n>> Some issue introduced in 9.1 is causing BINDs to get progressively\n>> slower as the PARSEs BINDs get run repeatedly. Per earlier on this\n>> thread, that can bloat to 200X time required for a BIND, and it's\n>> definitely PostgreSQL-side.\n>>\n>> I'm trying to produce a test case which doesn't involve the user's\n>> application. However, hints on other things to analyze would be keen.\n> Does it seem to be all CPU time (it is hard to imagine what else it\n> would be, but...)\n>\n> Could you use oprofile or perf or gprof to get a profile of the\n> backend during a run? That should quickly narrow it down to which C\n> function has the problem.\n>\n> Did you test 9.0 as well?\n\n\nThis has been tested back to 9.0. What we have found is that the problem \ndisappears if the database has come in via dump/restore, but is present \nif it is the result of pg_upgrade. There are some long-running \ntransactions also running alongside this - we are currently planning a \ntest where those are not present. We're also looking at constructing a \nself-contained test case.\n\nHere is some perf output from the bad case:\n\n + 14.67% postgres [.] heap_hot_search_buffer\n + 11.45% postgres [.] LWLockAcquire\n + 8.39% postgres [.] LWLockRelease\n + 6.60% postgres [.] _bt_checkkeys\n + 6.39% postgres [.] PinBuffer\n + 5.96% postgres [.] hash_search_with_hash_value\n + 5.43% postgres [.] hash_any\n + 5.14% postgres [.] UnpinBuffer\n + 3.43% postgres [.] ReadBuffer_common\n + 2.34% postgres [.] index_fetch_heap\n + 2.04% postgres [.] heap_page_prune_opt\n + 2.00% libc-2.15.so [.] 0x8041b\n + 1.94% postgres [.] _bt_next\n + 1.83% postgres [.] btgettuple\n + 1.76% postgres [.] index_getnext_tid\n + 1.70% postgres [.] LockBuffer\n + 1.54% postgres [.] ReadBufferExtended\n + 1.25% postgres [.] FunctionCall2Coll\n + 1.14% postgres [.] HeapTupleSatisfiesNow\n + 1.09% postgres [.] ReleaseAndReadBuffer\n + 0.94% postgres [.] ResourceOwnerForgetBuffer\n + 0.81% postgres [.] _bt_saveitem\n + 0.80% postgres [.] _bt_readpage\n + 0.79% [kernel.kallsyms] [k] 0xffffffff81170861\n + 0.64% postgres [.] CheckForSerializableConflictOut\n + 0.60% postgres [.] ResourceOwnerEnlargeBuffers\n + 0.59% postgres [.] BufTableLookup\n\nand here is the good case:\n\n + 9.54% libc-2.15.so [.] 0x15eb1f\n + 7.31% [kernel.kallsyms] [k] 0xffffffff8117924b\n + 5.65% postgres [.] AllocSetAlloc\n + 3.57% postgres [.] SearchCatCache\n + 2.67% postgres [.] hash_search_with_hash_value\n + 1.69% postgres [.] base_yyparse\n + 1.49% libc-2.15.so [.] vfprintf\n + 1.34% postgres [.] MemoryContextAllocZeroAligned\n + 1.34% postgres [.] XLogInsert\n + 1.24% postgres [.] copyObject\n + 1.10% postgres [.] palloc\n + 1.09% postgres [.] _bt_compare\n + 1.04% postgres [.] core_yylex\n + 0.96% postgres [.] _bt_checkkeys\n + 0.95% postgres [.] expression_tree_walker\n + 0.88% postgres [.] ScanKeywordLookup\n + 0.87% postgres [.] pg_encoding_mbcliplen\n + 0.86% postgres [.] LWLockAcquire\n + 0.72% postgres [.] nocachegetattr\n + 0.67% postgres [.] FunctionCall2Coll\n + 0.63% postgres [.] fmgr_info_cxt_security\n + 0.62% postgres [.] hash_any\n + 0.62% postgres [.] ExecInitExpr\n + 0.58% postgres [.] 
hash_uint32\n + 0.55% postgres [.] PostgresMain\n + 0.55% postgres [.] LWLockRelease\n + 0.54% postgres [.] lappend\n + 0.52% postgres [.] slot_deform_tuple\n + 0.50% postgres [.] PinBuffer\n + 0.48% postgres [.] AllocSetFree\n + 0.46% postgres [.] check_stack_depth\n + 0.44% postgres [.] DirectFunctionCall1Coll\n + 0.43% postgres [.] ExecScanHashBucket\n + 0.36% postgres [.] deconstruct_array\n + 0.36% postgres [.] CatalogCacheComputeHashValue\n + 0.35% postgres [.] pfree\n + 0.33% libc-2.15.so [.] _IO_default_xsputn\n + 0.32% libc-2.15.so [.] malloc\n + 0.32% postgres [.] TupleDescInitEntry\n + 0.30% postgres [.] new_tail_cell.isra.2\n + 0.30% libm-2.15.so [.] 0x5898\n + 0.30% postgres [.] LockAcquireExtended\n + 0.30% postgres [.] _bt_first\n + 0.29% postgres [.] add_paths_to_joinrel\n + 0.28% postgres [.] MemoryContextCreate\n + 0.28% postgres [.] appendBinaryStringInfo\n + 0.28% postgres [.] MemoryContextStrdup\n + 0.27% postgres [.] heap_hot_search_buffer\n + 0.27% postgres [.] GetSnapshotData\n + 0.26% postgres [.] hash_search\n + 0.26% postgres [.] heap_getsysattr\n + 0.26% [vdso] [.] 0x7fff681ff70c\n + 0.25% postgres [.] compare_scalars\n + 0.25% postgres [.] pg_verify_mbstr_len\n\n\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 09 Sep 2013 20:38:09 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Mon, Sep 9, 2013 at 08:38:09PM -0400, Andrew Dunstan wrote:\n> \n> On 08/01/2013 03:20 PM, Jeff Janes wrote:\n> >On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n> >>Amit, All:\n> >>\n> >>So we just retested this on 9.3b2. The performance is the same as 9.1\n> >>and 9.2; that is, progressively worse as the test cycles go on, and\n> >>unacceptably slow compared to 8.4.\n> >>\n> >>Some issue introduced in 9.1 is causing BINDs to get progressively\n> >>slower as the PARSEs BINDs get run repeatedly. Per earlier on this\n> >>thread, that can bloat to 200X time required for a BIND, and it's\n> >>definitely PostgreSQL-side.\n> >>\n> >>I'm trying to produce a test case which doesn't involve the user's\n> >>application. However, hints on other things to analyze would be keen.\n> >Does it seem to be all CPU time (it is hard to imagine what else it\n> >would be, but...)\n> >\n> >Could you use oprofile or perf or gprof to get a profile of the\n> >backend during a run? That should quickly narrow it down to which C\n> >function has the problem.\n> >\n> >Did you test 9.0 as well?\n> \n> \n> This has been tested back to 9.0. What we have found is that the\n> problem disappears if the database has come in via dump/restore, but\n> is present if it is the result of pg_upgrade. There are some\n> long-running transactions also running alongside this - we are\n> currently planning a test where those are not present. We're also\n> looking at constructing a self-contained test case.\n> \n> Here is some perf output from the bad case:\n> \n> + 14.67% postgres [.] heap_hot_search_buffer\n\nOK, certainly looks like a HOT chain issue. I think there are two\npossibilities:\n\n1) the heap or index file is different from a dump/restore vs.\n pg_upgrade\n2) some other files is missing or changed between the two\n\nMy guess is that the dump/restore removes all the HOT chains as it just\ndumps the most recent value of the chain. Could it be HOT chain\noverhead that you are seeing, rather than a pg_upgrade issue/bug?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Sep 2013 21:04:20 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Hi,\n\nOn 2013-09-09 20:38:09 -0400, Andrew Dunstan wrote:\n> \n> On 08/01/2013 03:20 PM, Jeff Janes wrote:\n> >On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n> >>Amit, All:\n> >>\n> >>So we just retested this on 9.3b2. The performance is the same as 9.1\n> >>and 9.2; that is, progressively worse as the test cycles go on, and\n> >>unacceptably slow compared to 8.4.\n> >>\n> >>Some issue introduced in 9.1 is causing BINDs to get progressively\n> >>slower as the PARSEs BINDs get run repeatedly. Per earlier on this\n> >>thread, that can bloat to 200X time required for a BIND, and it's\n> >>definitely PostgreSQL-side.\n> >>\n> >>I'm trying to produce a test case which doesn't involve the user's\n> >>application. However, hints on other things to analyze would be keen.\n> >Does it seem to be all CPU time (it is hard to imagine what else it\n> >would be, but...)\n> >\n> >Could you use oprofile or perf or gprof to get a profile of the\n> >backend during a run? That should quickly narrow it down to which C\n> >function has the problem.\n> >\n> >Did you test 9.0 as well?\n> \n> \n> This has been tested back to 9.0. What we have found is that the problem\n> disappears if the database has come in via dump/restore, but is present if\n> it is the result of pg_upgrade. There are some long-running transactions\n> also running alongside this - we are currently planning a test where those\n> are not present. We're also looking at constructing a self-contained test\n> case.\n> \n> Here is some perf output from the bad case:\n> \n> + 14.67% postgres [.] heap_hot_search_buffer\n> + 11.45% postgres [.] LWLockAcquire\n> + 8.39% postgres [.] LWLockRelease\n> + 6.60% postgres [.] _bt_checkkeys\n> + 6.39% postgres [.] PinBuffer\n> + 5.96% postgres [.] hash_search_with_hash_value\n> + 5.43% postgres [.] hash_any\n> + 5.14% postgres [.] UnpinBuffer\n> + 3.43% postgres [.] ReadBuffer_common\n> + 2.34% postgres [.] index_fetch_heap\n> + 2.04% postgres [.] heap_page_prune_opt\n\nA backtrace for this would be useful. Alternatively you could recompile\npostgres using -fno-omit-frame-pointer in CFLAGS and use perf record -g.\n\nAny chance you have older prepared xacts, older sessions or something\nlike that around? I'd expect heap_prune* to be present in workloads that\nspend significant time in heap_hot_search_buffer...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Sep 2013 14:20:36 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 2013-09-09 20:38:09 -0400, Andrew Dunstan wrote:\n> \n> On 08/01/2013 03:20 PM, Jeff Janes wrote:\n> >On Thu, Aug 1, 2013 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n> >>Amit, All:\n> >>\n> >>So we just retested this on 9.3b2. The performance is the same as 9.1\n> >>and 9.2; that is, progressively worse as the test cycles go on, and\n> >>unacceptably slow compared to 8.4.\n> >>\n> >>Some issue introduced in 9.1 is causing BINDs to get progressively\n> >>slower as the PARSEs BINDs get run repeatedly. Per earlier on this\n> >>thread, that can bloat to 200X time required for a BIND, and it's\n> >>definitely PostgreSQL-side.\n> >>\n> >>I'm trying to produce a test case which doesn't involve the user's\n> >>application. However, hints on other things to analyze would be keen.\n> >Does it seem to be all CPU time (it is hard to imagine what else it\n> >would be, but...)\n> >\n> >Could you use oprofile or perf or gprof to get a profile of the\n> >backend during a run? That should quickly narrow it down to which C\n> >function has the problem.\n> >\n> >Did you test 9.0 as well?\n> \n> \n> This has been tested back to 9.0. What we have found is that the problem\n> disappears if the database has come in via dump/restore, but is present if\n> it is the result of pg_upgrade. There are some long-running transactions\n> also running alongside this - we are currently planning a test where those\n> are not present. We're also looking at constructing a self-contained test\n> case.\n> \n> Here is some perf output from the bad case:\n> \n> + 14.67% postgres [.] heap_hot_search_buffer\n> + 11.45% postgres [.] LWLockAcquire\n> + 8.39% postgres [.] LWLockRelease\n> + 6.60% postgres [.] _bt_checkkeys\n> + 6.39% postgres [.] PinBuffer\n> + 5.96% postgres [.] hash_search_with_hash_value\n> + 5.43% postgres [.] hash_any\n> + 5.14% postgres [.] UnpinBuffer\n> + 3.43% postgres [.] ReadBuffer_common\n> + 2.34% postgres [.] index_fetch_heap\n> + 2.04% postgres [.] heap_page_prune_opt\n> + 2.00% libc-2.15.so [.] 0x8041b\n> + 1.94% postgres [.] _bt_next\n> + 1.83% postgres [.] btgettuple\n> + 1.76% postgres [.] index_getnext_tid\n> + 1.70% postgres [.] LockBuffer\n> + 1.54% postgres [.] ReadBufferExtended\n> + 1.25% postgres [.] FunctionCall2Coll\n> + 1.14% postgres [.] HeapTupleSatisfiesNow\n> + 1.09% postgres [.] ReleaseAndReadBuffer\n> + 0.94% postgres [.] ResourceOwnerForgetBuffer\n> + 0.81% postgres [.] _bt_saveitem\n> + 0.80% postgres [.] _bt_readpage\n> + 0.79% [kernel.kallsyms] [k] 0xffffffff81170861\n> + 0.64% postgres [.] CheckForSerializableConflictOut\n> + 0.60% postgres [.] ResourceOwnerEnlargeBuffers\n> + 0.59% postgres [.] BufTableLookup\n\nAfter a second look at this, I very tentatively guess that you'll see\nget_actual_variable_range() as the entry point here. Which would explain\nwhy you're seing this during PARSE.\n\nBut there still is the question why we never actually seem to prune...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Sep 2013 14:32:07 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\nOn 09/10/2013 08:20 AM, Andres Freund wrote:\n\n> A backtrace for this would be useful. Alternatively you could recompile\n> postgres using -fno-omit-frame-pointer in CFLAGS and use perf record -g.\n\nIt's using a custom build, so this should be doable.\n\n>\n> Any chance you have older prepared xacts, older sessions or something\n> like that around? I'd expect heap_prune* to be present in workloads that\n> spend significant time in heap_hot_search_buffer...\n\n\nNot sure about prepared transactions. There are certainly probably old \nprepared statements around, and long running transactions alongside this \none.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Sep 2013 08:45:33 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 2013-09-10 08:45:33 -0400, Andrew Dunstan wrote:\n> \n> On 09/10/2013 08:20 AM, Andres Freund wrote:\n> \n> >A backtrace for this would be useful. Alternatively you could recompile\n> >postgres using -fno-omit-frame-pointer in CFLAGS and use perf record -g.\n> \n> It's using a custom build, so this should be doable.\n\nGreat.\n\n> >Any chance you have older prepared xacts, older sessions or something\n> >like that around? I'd expect heap_prune* to be present in workloads that\n> >spend significant time in heap_hot_search_buffer...\n> \n> \n> Not sure about prepared transactions. There are certainly probably old\n> prepared statements around, and long running transactions alongside this\n> one.\n\nOk, long running transactions will do the trick. I quicky checked and\ndoing an index lookup for min/max histogram lookups was added *after*\n8.4 which would explain why you're not seing the issue there\n(c.f. 40608e7f949fb7e4025c0ddd5be01939adc79eec).\n\nIt getting slower and slower during a testrun would be explained by the\nadditional tuple versions amassing which cannot be marked dead because\nof older transactions around. I guess those are also part of the test?\n\nIf I interpret things correctly you're using serializable? I guess there\nis no chance to use repeatable read instead?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Sep 2013 15:21:33 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 2013-09-10 15:21:33 +0200, Andres Freund wrote:\n> If I interpret things correctly you're using serializable? I guess there\n> is no chance to use repeatable read instead?\n\nErr, that wouldn't help much. Read committed. That lets PGXACT->xmin advance\nthese days and thus might help to reduce the impact of the longrunning\ntransactions.\nOtherwise you will have to shorten those...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Sep 2013 15:23:17 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\nOn 09/10/2013 09:23 AM, Andres Freund wrote:\n> On 2013-09-10 15:21:33 +0200, Andres Freund wrote:\n>> If I interpret things correctly you're using serializable? I guess there\n>> is no chance to use repeatable read instead?\n> Err, that wouldn't help much. Read committed. That lets PGXACT->xmin advance\n> these days and thus might help to reduce the impact of the longrunning\n> transactions.\n> Otherwise you will have to shorten those...\n>\n\n\nYeah, we're looking at eliminating them.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Sep 2013 09:41:48 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "All,\n\nWe've confirmed that this issue is caused by having long-running idle\ntransactions on the server. When we disabled their queueing system\n(which prodiced hour-long idle txns), the progressive slowness went away.\n\nWhy that should affect 9.X far more strongly than 8.4, I'm not sure\nabout. Does that mean that 8.4 was unsafe, or that this is something\nwhich *could* be fixed in later versions?\n\nI'm also confused as to why this would affect BIND time rather than\nEXECUTE time.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Sep 2013 11:35:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\nOn 09/11/2013 02:35 PM, Josh Berkus wrote:\n> All,\n>\n> We've confirmed that this issue is caused by having long-running idle\n> transactions on the server. When we disabled their queueing system\n> (which prodiced hour-long idle txns), the progressive slowness went away.\n>\n> Why that should affect 9.X far more strongly than 8.4, I'm not sure\n> about. Does that mean that 8.4 was unsafe, or that this is something\n> which *could* be fixed in later versions?\n>\n> I'm also confused as to why this would affect BIND time rather than\n> EXECUTE time.\n>\n\n\nOne thing that this made me wonder is why we don't have \ntransaction_timeout, or maybe transaction_idle_timeout.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Sep 2013 15:06:23 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 2013-09-11 15:06:23 -0400, Andrew Dunstan wrote:\n> \n> On 09/11/2013 02:35 PM, Josh Berkus wrote:\n> >All,\n> >\n> >We've confirmed that this issue is caused by having long-running idle\n> >transactions on the server. When we disabled their queueing system\n> >(which prodiced hour-long idle txns), the progressive slowness went away.\n> >\n> >Why that should affect 9.X far more strongly than 8.4, I'm not sure\n> >about. Does that mean that 8.4 was unsafe, or that this is something\n> >which *could* be fixed in later versions?\n> >\n> >I'm also confused as to why this would affect BIND time rather than\n> >EXECUTE time.\n> >\n> \n> \n> One thing that this made me wonder is why we don't have transaction_timeout,\n> or maybe transaction_idle_timeout.\n\nBecause it's harder than it sounds, at least if you want to support\nidle-in-transactions. Note that we do not support pg_cancel_backend()\nfor those yet...\n\nAlso, I think it might lead to papering over actual issues with\napplications leaving transactions open. I don't really see a valid\nreason for an application needing cancelling of long idle transactions.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Sep 2013 23:10:08 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 2013-09-11 11:35:45 -0700, Josh Berkus wrote:\n> All,\n> \n> We've confirmed that this issue is caused by having long-running idle\n> transactions on the server. When we disabled their queueing system\n> (which prodiced hour-long idle txns), the progressive slowness went away.\n> \n> Why that should affect 9.X far more strongly than 8.4, I'm not sure\n> about. Does that mean that 8.4 was unsafe, or that this is something\n> which *could* be fixed in later versions?\n\nThe explanation is in\nhttp://archives.postgresql.org/message-id/20130910132133.GJ1024477%40alap2.anarazel.de\n\nThe referenced commit introduced a planner feature. Funnily you seem to\nhave been the trigger for it's introduction ;)\n\n> I'm also confused as to why this would affect BIND time rather than\n> EXECUTE time.\n\nBecause we're doing the histogram checks during planning and not during\nexecution.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Sep 2013 23:12:53 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "\n> The explanation is in\n> http://archives.postgresql.org/message-id/20130910132133.GJ1024477%40alap2.anarazel.de\n> \n> The referenced commit introduced a planner feature. Funnily you seem to\n> have been the trigger for it's introduction ;)\n\nOh, crap, the \"off the end of the index\" optimization?\n\nIt's the story of our lives: we can't optimize anything without\ndeoptimizing something else. Dammit.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Sep 2013 19:03:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Wed, Sep 11, 2013 at 2:10 PM, Andres Freund <[email protected]>wrote:\n\n> On 2013-09-11 15:06:23 -0400, Andrew Dunstan wrote:\n> >\n> > One thing that this made me wonder is why we don't have\n> transaction_timeout,\n> > or maybe transaction_idle_timeout.\n>\n> Because it's harder than it sounds, at least if you want to support\n> idle-in-transactions. Note that we do not support pg_cancel_backend()\n> for those yet...\n>\n\nSo we are left with pg_terminate_backend in a cron job. That mostly seems\nto work, because usually apps that abandon connections in the\nidle-in-transaction state will never check back on them anyway, but cancel\nwould be nicer.\n\n\n>\n> Also, I think it might lead to papering over actual issues with\n> applications leaving transactions open. I don't really see a valid\n> reason for an application needing cancelling of long idle transactions.\n>\n\nSome of us make a living, at least partially, by papering over issues with\n3rd party applications that we have no control over!\n\nCheers,\n\nJeff\n\nOn Wed, Sep 11, 2013 at 2:10 PM, Andres Freund <[email protected]> wrote:\nOn 2013-09-11 15:06:23 -0400, Andrew Dunstan wrote:>\n> One thing that this made me wonder is why we don't have transaction_timeout,\n> or maybe transaction_idle_timeout.\n\nBecause it's harder than it sounds, at least if you want to support\nidle-in-transactions. Note that we do not support pg_cancel_backend()\nfor those yet...So we are left with pg_terminate_backend in a cron job. That mostly seems to work, because usually apps that abandon connections in the idle-in-transaction state will never check back on them anyway, but cancel would be nicer.\n \n\nAlso, I think it might lead to papering over actual issues with\napplications leaving transactions open. I don't really see a valid\nreason for an application needing cancelling of long idle transactions.Some of us make a living, at least partially, by papering over issues with 3rd party applications that we have no control over!\nCheers,Jeff",
"msg_date": "Tue, 24 Sep 2013 23:38:03 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> The explanation is in\n>> http://archives.postgresql.org/message-id/20130910132133.GJ1024477%40alap2.anarazel.de\n>> \n>> The referenced commit introduced a planner feature. Funnily you seem to\n>> have been the trigger for it's introduction ;)\n\n> Oh, crap, the \"off the end of the index\" optimization?\n\n> It's the story of our lives: we can't optimize anything without\n> deoptimizing something else. Dammit.\n\nI wonder if we could ameliorate this problem by making\nget_actual_variable_range() use SnapshotDirty rather than either\nSnapshotNow (as it does in released versions) or the active snapshot (as\nit does in HEAD). We discussed that idea in the SnapshotNow removal\nthread, see eg\nhttp://www.postgresql.org/message-id/CA+TgmoZ_q2KMkxZAoRxRHB7k1tOmjVjQgYt2JuA7=U7QZoLxNw@mail.gmail.com\nIn that thread I claimed that a current MVCC snapshot was the most\nappropriate thing, which it probably is; but the argument for it isn't so\nstrong that I think we should be willing to spend unbounded effort to get\nthat version of the column min/max rather than some other approximation.\nThe best objection to it that I can see is Robert's security concern about\nleakage of uncommitted values --- but I don't think that holds a huge\namount of water either. We already try to limit the visibility of the\nregular elements of the histogram, why are these not-yet-committed values\nsignificantly more of an issue?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Nov 2013 14:46:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> I wonder if we could ameliorate this problem by making\n> get_actual_variable_range() use SnapshotDirty\n\n> In that thread I claimed that a current MVCC snapshot was the\n> most appropriate thing, which it probably is;\n\nIf it reads from the end of the index, won't it actually be reading\nstarting with the value we will obtain using SnapshotDirty, and\nsince the transaction is not yet committed, won't we be making the\ntrip to the heap for each index tuple we read from that point? Why\ndoesn't that make SnapshotDirty more appropriate for purposes of\ncost estimation?\n\n> but the argument for it isn't so strong that I think we should be\n> willing to spend unbounded effort to get that version of the\n> column min/max rather than some other approximation.\n\nSince the whole point is to try to get the fastest runtime which\nproduces correct results, I agree even if the other value is in\nsome way more \"correct\".\n\n> The best objection to it that I can see is Robert's security\n> concern about leakage of uncommitted values --- but I don't think\n> that holds a huge amount of water either.\nIt doesn't look like Robert did either, if you read the whole\nmessage. In fact, he also questioned why index tuples which would\nneed to be read if we process from that end of the index don't\nmatter for purposes of cost estimation.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 9 Nov 2013 08:30:45 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Kevin Grittner <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> I wonder if we could ameliorate this problem by making\n>> get_actual_variable_range() use SnapshotDirty\n\n>> In that thread I claimed that a current MVCC snapshot was the\n>> most appropriate thing, which it probably is;\n\n> If it reads from the end of the index, won't it actually be reading\n> starting with the value we will obtain using SnapshotDirty, and\n> since the transaction is not yet committed, won't we be making the\n> trip to the heap for each index tuple we read from that point?� Why\n> doesn't that make SnapshotDirty more appropriate for purposes of\n> cost estimation?\n\nThis code isn't used (directly) for cost estimation. It's used for\nselectivity estimation, that is counting how many valid rows the query\nwill fetch when it executes. We can't know that with certainty, of\ncourse, so the argument boils down to how good an approximation we're\ngetting and how much it costs to get it.\n\nThe original coding here was SnapshotNow, which could be defended on the\ngrounds that it was the newest available data and thus the best possible\napproximation if the query gets executed with a snapshot taken later.\nThat's still what's being used in the field, and of course it's got the\nproblem we're on about that uncommitted rows cause us to move on to the\nnext index entry (and thrash the ProcArrayLock for each uncommitted row\nwe hit :-(). In HEAD we replaced that with GetActiveSnapshot, which is\na better approximation if you assume the query will be executed with this\nsame snapshot, and otherwise slightly worse (because older). But it's\njust the same as far as the cost imposed by uncommitted rows goes.\n\nMy proposal to use SnapshotDirty instead is based on the thought that\nit will give the same answers as SnapshotNow for definitely-live or\ndefinitely-dead rows, while not imposing such large costs for in-doubt\nrows: it will accept the first in-doubt row, and thus prevent us from\nmoving on to the next index entry. The reported problems seem to involve\nhundreds or thousands of in-doubt rows at the extreme of the index,\nso the cost will go down proportionally to whatever that count is.\nAs far as the accuracy of the approximation is concerned, it's a win\nif you imagine that pending inserts will have committed by the time the\nquery's execution snapshot is taken. Otherwise not so much.\n\nThe other thing we might consider doing is using SnapshotAny, which\nwould amount to just taking the extremal index entry at face value.\nThis would be cheap all right (in fact, we might later be able to optimize\nit to not even visit the heap). However, I don't like the implications\nfor the accuracy of the approximation. It seems quite likely that an\nerroneous transaction could insert a wildly out-of-range index entry\nand then roll back --- or even if it committed, somebody might soon come\nalong and clean up the bogus entry in a separate transaction. If we use\nSnapshotAny then we'd believe that bogus entry until it got vacuumed, not\njust till it got deleted. This is a fairly scary proposition, because\nthe wackier that extremal value is, the more impact it will have on the\nselectivity estimates.\n\nIf it's demonstrated that SnapshotDirty doesn't reduce the estimation\ncosts enough to solve the performance problem the complainants are facing,\nI'd be willing to consider using SnapshotAny here. 
But I don't want to\ntake that step until it's proven that the more conservative approach\ndoesn't solve the performance problem.\n\nThere's an abbreviated version of this argument in the comments in\nmy proposed patch at\nhttp://www.postgresql.org/message-id/[email protected]\nWhat I'm hoping will happen next is that the complainants will hot-patch\nthat and see if it fixes their problems. We can't really determine\nwhat to do without that information.\n\n> It doesn't look like Robert did either, if you read the whole\n> message.� In fact, he also questioned why index tuples which would\n> need to be read if we process from that end of the index don't\n> matter for purposes of cost estimation.\n\nThe key point here is that we are not looking to estimate the cost of\ngetting the extremal value. This code gets called if we are interested\nin a probe value falling within the endmost histogram bin; that's not\nnecessarily, or even probably, the extremal value. We're just trying to\nfind out if the bin endpoint has moved a lot since the last ANALYZE.\n\nAnother way of explaining this is that what Robert was really suggesting\nis that we ought to account for the cost of fetching rows that are then\ndetermined not to be visible to the query. That's true, but it would be\na conceptual error to factor that in by supposing that they're visible.\nThe planner doesn't, at present, account for this cost explicitly, but I\nbelieve it's at least somewhat handled by the fact that we charge for page\naccesses on the basis of selectivity fraction times physical table size.\nA table containing a lot of uncommitted/dead tuples will get charged more\nI/O than a less bloated table. Of course, this approach can't account for\nspecial cases like \"there's a whole lot of uncommitted tuples right at the\nend of the index range, and not so many elsewhere\". That's below the\ngranularity of what we can sensibly model at the moment, I'm afraid.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Nov 2013 20:46:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Tom,\n\n> There's an abbreviated version of this argument in the comments in\n> my proposed patch at\n> http://www.postgresql.org/message-id/[email protected]\n> What I'm hoping will happen next is that the complainants will hot-patch\n> that and see if it fixes their problems. We can't really determine\n> what to do without that information.\n\nUnfortunately, the original reporter of this issue will not be available\nfor testing for 2-3 weeks, and I haven't been able to devise a synthetic\ntest which clearly shows the issue.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Nov 2013 11:34:11 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> What I'm hoping will happen next is that the complainants will hot-patch\n>> that and see if it fixes their problems. We can't really determine\n>> what to do without that information.\n\n> Unfortunately, the original reporter of this issue will not be available\n> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n> test which clearly shows the issue.\n\nWell, we had a synthetic test from the other complainant. What's at\nstake now is whether this is a good enough fix for real-world cases.\nI'm willing to wait ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Nov 2013 15:14:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Tom,\n>> There's an abbreviated version of this argument in the comments in\n>> my proposed patch at\n>> http://www.postgresql.org/message-id/[email protected]\n>> What I'm hoping will happen next is that the complainants will hot-patch\n>> that and see if it fixes their problems. We can't really determine\n>> what to do without that information.\n\n> Unfortunately, the original reporter of this issue will not be available\n> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n> test which clearly shows the issue.\n\nPing? I've been waiting on committing that patch pending some real-world\ntesting. It'd be nice to resolve this question before we ship 9.3.3,\nwhich I'm supposing will be sometime in January ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 31 Dec 2013 12:55:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 12/31/2013 09:55 AM, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> Tom,\n>>> There's an abbreviated version of this argument in the comments in\n>>> my proposed patch at\n>>> http://www.postgresql.org/message-id/[email protected]\n>>> What I'm hoping will happen next is that the complainants will hot-patch\n>>> that and see if it fixes their problems. We can't really determine\n>>> what to do without that information.\n> \n>> Unfortunately, the original reporter of this issue will not be available\n>> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n>> test which clearly shows the issue.\n> \n> Ping? I've been waiting on committing that patch pending some real-world\n> testing. It'd be nice to resolve this question before we ship 9.3.3,\n> which I'm supposing will be sometime in January ...\n\nDid this patch every make it in? Or did it hang waiting for verification?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 10:48:24 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:\n\n> On 12/31/2013 09:55 AM, Tom Lane wrote:\n> > Josh Berkus <[email protected]> writes:\n> >> Tom,\n> >>> There's an abbreviated version of this argument in the comments in\n> >>> my proposed patch at\n> >>> http://www.postgresql.org/message-id/[email protected]\n> >>> What I'm hoping will happen next is that the complainants will\n> hot-patch\n> >>> that and see if it fixes their problems. We can't really determine\n> >>> what to do without that information.\n> >\n> >> Unfortunately, the original reporter of this issue will not be available\n> >> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n> >> test which clearly shows the issue.\n> >\n> > Ping? I've been waiting on committing that patch pending some real-world\n> > testing. It'd be nice to resolve this question before we ship 9.3.3,\n> > which I'm supposing will be sometime in January ...\n>\n> Did this patch every make it in? Or did it hang waiting for verification?\n>\n\nIt made it in:\n\ncommit 4162a55c77cbb54acb4ac442ef3565b813b9d07a\nAuthor: Tom Lane <[email protected]>\nDate: Tue Feb 25 16:04:09 2014 -0500\n\nCheers,\n\nJeff\n\nOn Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:On 12/31/2013 09:55 AM, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> Tom,\n>>> There's an abbreviated version of this argument in the comments in\n>>> my proposed patch at\n>>> http://www.postgresql.org/message-id/[email protected]\n>>> What I'm hoping will happen next is that the complainants will hot-patch\n>>> that and see if it fixes their problems. We can't really determine\n>>> what to do without that information.\n>\n>> Unfortunately, the original reporter of this issue will not be available\n>> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n>> test which clearly shows the issue.\n>\n> Ping? I've been waiting on committing that patch pending some real-world\n> testing. It'd be nice to resolve this question before we ship 9.3.3,\n> which I'm supposing will be sometime in January ...\n\nDid this patch every make it in? Or did it hang waiting for verification?It made it in:commit 4162a55c77cbb54acb4ac442ef3565b813b9d07aAuthor: Tom Lane <[email protected]>Date: Tue Feb 25 16:04:09 2014 -0500Cheers,Jeff",
"msg_date": "Mon, 10 Nov 2014 10:59:36 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 11/10/2014 10:59 AM, Jeff Janes wrote:\n> On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:\n> \n>> On 12/31/2013 09:55 AM, Tom Lane wrote:\n>>> Josh Berkus <[email protected]> writes:\n>>>> Tom,\n>>>>> There's an abbreviated version of this argument in the comments in\n>>>>> my proposed patch at\n>>>>> http://www.postgresql.org/message-id/[email protected]\n>>>>> What I'm hoping will happen next is that the complainants will\n>> hot-patch\n>>>>> that and see if it fixes their problems. We can't really determine\n>>>>> what to do without that information.\n>>>\n>>>> Unfortunately, the original reporter of this issue will not be available\n>>>> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n>>>> test which clearly shows the issue.\n>>>\n>>> Ping? I've been waiting on committing that patch pending some real-world\n>>> testing. It'd be nice to resolve this question before we ship 9.3.3,\n>>> which I'm supposing will be sometime in January ...\n>>\n>> Did this patch every make it in? Or did it hang waiting for verification?\n>>\n> \n> It made it in:\n> \n> commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a\n> Author: Tom Lane <[email protected]>\n> Date: Tue Feb 25 16:04:09 2014 -0500\n\nThanks, then the problem I'm seeing now is something else.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 11:04:33 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 2014-11-10 10:48:24 -0800, Josh Berkus wrote:\n> On 12/31/2013 09:55 AM, Tom Lane wrote:\n> > Josh Berkus <[email protected]> writes:\n> >> Tom,\n> >>> There's an abbreviated version of this argument in the comments in\n> >>> my proposed patch at\n> >>> http://www.postgresql.org/message-id/[email protected]\n> >>> What I'm hoping will happen next is that the complainants will hot-patch\n> >>> that and see if it fixes their problems. We can't really determine\n> >>> what to do without that information.\n> > \n> >> Unfortunately, the original reporter of this issue will not be available\n> >> for testing for 2-3 weeks, and I haven't been able to devise a synthetic\n> >> test which clearly shows the issue.\n> > \n> > Ping? I've been waiting on committing that patch pending some real-world\n> > testing. It'd be nice to resolve this question before we ship 9.3.3,\n> > which I'm supposing will be sometime in January ...\n> \n> Did this patch every make it in? Or did it hang waiting for\n> verification?\n\nsrc/tools/git_changelog is your friend.\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL9_4_BR [fccebe421] 2014-02-25 16:04:06 -0500\nBranch: REL9_3_STABLE Release: REL9_3_4 [4162a55c7] 2014-02-25 16:04:09 -0500\nBranch: REL9_2_STABLE Release: REL9_2_8 [00283cae1] 2014-02-25 16:04:12 -0500\nBranch: REL9_1_STABLE Release: REL9_1_13 [3e2db4c80] 2014-02-25 16:04:16 -0500\nBranch: REL9_0_STABLE Release: REL9_0_17 [1e0fb6a2c] 2014-02-25 16:04:20 -0500\n\n Use SnapshotDirty rather than an active snapshot to probe index endpoints.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 20:08:36 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> On 11/10/2014 10:59 AM, Jeff Janes wrote:\n>> On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:\n>>> Did this patch every make it in? Or did it hang waiting for verification?\n\n>> It made it in:\n>> commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a\n>> Author: Tom Lane <[email protected]>\n>> Date: Tue Feb 25 16:04:09 2014 -0500\n\n> Thanks, then the problem I'm seeing now is something else.\n\nNotice that only went in this past Feb., so you need to check you're\ndealing with a fairly recent minor release before you dismiss it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 14:11:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On 11/10/2014 11:11 AM, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> On 11/10/2014 10:59 AM, Jeff Janes wrote:\n>>> On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:\n>>>> Did this patch every make it in? Or did it hang waiting for verification?\n> \n>>> It made it in:\n>>> commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a\n>>> Author: Tom Lane <[email protected]>\n>>> Date: Tue Feb 25 16:04:09 2014 -0500\n> \n>> Thanks, then the problem I'm seeing now is something else.\n> \n> Notice that only went in this past Feb., so you need to check you're\n> dealing with a fairly recent minor release before you dismiss it.\n\nIt's 9.3.5\n\nThe new issue will get its own thread.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 11:34:00 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
{
"msg_contents": "On Mon, Nov 10, 2014 at 11:04 AM, Josh Berkus <[email protected]> wrote:\n\n> On 11/10/2014 10:59 AM, Jeff Janes wrote:\n> > On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:\n> >\n>\n> >> Did this patch every make it in? Or did it hang waiting for\n> verification?\n> >>\n> >\n> > It made it in:\n> >\n> > commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a\n> > Author: Tom Lane <[email protected]>\n> > Date: Tue Feb 25 16:04:09 2014 -0500\n>\n> Thanks, then the problem I'm seeing now is something else.\n>\n\nThe related problem where the \"end\" rows are actually needed (e.g. ORDER\nBY...LIMIT) has not been fixed.\n\nMy idea to fix that was to check if the row's creation-transaction was in\nthe MVCC snapshot (which just uses local memory) before checking if that\ncreation-transaction had committed (which uses shared memory). But I\ndidn't really have the confidence to push that given the fragility of that\npart of the code and my lack of experience with it. See \"In progress\nINSERT wrecks plans on table\" thread.\n\nSimon also had some patches to still do the shared memory look up but make\nthem faster by caching where in the list it would be likely to find the\nmatch, based on where it found the last match.\n\n\n\nCheers,\n\nJeff\n\nOn Mon, Nov 10, 2014 at 11:04 AM, Josh Berkus <[email protected]> wrote:On 11/10/2014 10:59 AM, Jeff Janes wrote:\n> On Mon, Nov 10, 2014 at 10:48 AM, Josh Berkus <[email protected]> wrote:\n>\n>> Did this patch every make it in? Or did it hang waiting for verification?\n>>\n>\n> It made it in:\n>\n> commit 4162a55c77cbb54acb4ac442ef3565b813b9d07a\n> Author: Tom Lane <[email protected]>\n> Date: Tue Feb 25 16:04:09 2014 -0500\n\nThanks, then the problem I'm seeing now is something else.The related problem where the \"end\" rows are actually needed (e.g. ORDER BY...LIMIT) has not been fixed. My idea to fix that was to check if the row's creation-transaction was in the MVCC snapshot (which just uses local memory) before checking if that creation-transaction had committed (which uses shared memory). But I didn't really have the confidence to push that given the fragility of that part of the code and my lack of experience with it. See \"In progress INSERT wrecks plans on table\" thread.Simon also had some patches to still do the shared memory look up but make them faster by caching where in the list it would be likely to find the match, based on where it found the last match.Cheers,Jeff",
"msg_date": "Mon, 10 Nov 2014 12:13:09 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
},
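A rough psql sketch of the "end rows are actually needed" case Jeff describes above, assuming a hypothetical table t whose indexed bigint column id currently tops out around 1,000,000 (table, column and values are invented for illustration; this only demonstrates the symptom, not the proposed fix):

    -- session 1: leave a large INSERT at the top of the indexed range uncommitted
    BEGIN;
    INSERT INTO t (id) SELECT g FROM generate_series(1000001, 2000000) g;
    -- ...no COMMIT yet...

    -- session 2: a query that has to read the "end" of the index
    \timing on
    SELECT id FROM t ORDER BY id DESC LIMIT 1;
    -- every not-yet-visible index entry at the end forces a visibility check
    -- against shared state, so this tends to slow down as the open INSERT grows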
{
"msg_contents": "On 11/10/2014 12:13 PM, Jeff Janes wrote:\n\n> The related problem where the \"end\" rows are actually needed (e.g. ORDER\n> BY...LIMIT) has not been fixed.\n> \n> My idea to fix that was to check if the row's creation-transaction was in\n> the MVCC snapshot (which just uses local memory) before checking if that\n> creation-transaction had committed (which uses shared memory). But I\n> didn't really have the confidence to push that given the fragility of that\n> part of the code and my lack of experience with it. See \"In progress\n> INSERT wrecks plans on table\" thread.\n\nOh! I thought this issue had been fixed by Tom's patch as well. It\ncould very well describe what I'm seeing (in the other thread), since\nsome of the waiting queries are INSERTs, and other queries do selects\nagainst the same tables concurrently.\n\nAlthough ... given that I'm seeing preposterously long BIND times (like\n50 seconds), I don't think that's explained just by bad plans.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 15:51:12 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance bug in prepared statement binding in 9.2?"
}
] |
[
{
"msg_contents": "Hi all,\n\nIn our server Check pointer process is consuming 8 GB of memory, what could\nbe the possible reason? Can any one please help.\n\nRegards,\nItishree\n\nHi all, In our server Check pointer process is consuming 8 GB of memory, what could be the possible reason? Can any one please help.Regards, Itishree",
"msg_date": "Thu, 30 May 2013 17:39:12 +0530",
"msg_from": "itishree sukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Check Pointer"
},
{
"msg_contents": "On 5/30/13 8:09 AM, itishree sukla wrote:\n> In our server Check pointer process is consuming 8 GB of memory, what\n> could be the possible reason? Can any one please help.\n\nThat process will eventually access all of shared_buffers, which shows \nas a shared memory block for that process. That's what you're seeing \nthere. It doesn't actually use any significant amount of memory on its own.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 08:20:07 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check Pointer"
},
{
"msg_contents": "Hi ,\n\nCould you share the command, what you have used to confirm that, the\ncheckpoint process is consuming 8GB. And also, please share the addition\ninformation like PostgreSQL version and the OS details.\n\nI am suspecting that, your shared_buffers value is 8GB, and the \"top\"\ncommand is showing the used memory as 8GB.\n\nThanks.\n\nDinesh\n\n-- \n*Dinesh Kumar*\nSoftware Engineer\n\nPh: +918087463317\nSkype ID: dinesh.kumar432\nwww.enterprisedb.co\n<http://www.enterprisedb.com/>m<http://www.enterprisedb.com/>\n*\nFollow us on Twitter*\n@EnterpriseDB\n\nVisit EnterpriseDB for tutorials, webinars,\nwhitepapers<http://www.enterprisedb.com/resources-community> and\nmore <http://www.enterprisedb.com/resources-community>\n\n\nOn Thu, May 30, 2013 at 5:39 PM, itishree sukla <[email protected]>wrote:\n\n> Hi all,\n>\n> In our server Check pointer process is consuming 8 GB of memory, what\n> could be the possible reason? Can any one please help.\n>\n> Regards,\n> Itishree\n>\n\nHi ,\n\nCould you share the command, what you have used to confirm that, the checkpoint process is consuming 8GB. And also, please share the addition information like PostgreSQL version and the OS details.\nI am suspecting that, your shared_buffers value is 8GB, and the \"top\" command is showing the used memory as 8GB.\nThanks.\nDinesh-- Dinesh Kumar\nSoftware Engineer\nPh: +918087463317Skype ID: dinesh.kumar432www.enterprisedb.com\nFollow us on Twitter@EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers and more\n\nOn Thu, May 30, 2013 at 5:39 PM, itishree sukla <[email protected]> wrote:\nHi all, In our server Check pointer process is consuming 8 GB of memory, what could be the possible reason? Can any one please help.Regards, Itishree",
"msg_date": "Thu, 30 May 2013 17:50:24 +0530",
"msg_from": "Dinesh Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check Pointer"
},
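To confirm the suspicion above from the server itself, the configured buffer cache size can be read directly; a setting of 8GB here would account for the figure that top attributes to the checkpointer:

    SHOW shared_buffers;
    -- or, with the unit spelled out:
    SELECT name, setting, unit FROM pg_settings WHERE name = 'shared_buffers';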
{
"msg_contents": "On 30.05.2013 15:09, itishree sukla wrote:\n> In our server Check pointer process is consuming 8 GB of memory, what could\n> be the possible reason? Can any one please help.\n\nAre you sure you're measuring the memory correctly? The RES field in top \noutput, for example, includes shared memory, ie. the whole buffer cache. \nShared memory isn't really \"consumed\" by the checkpointer process, but \nshared by all postgres processes.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 May 2013 15:26:39 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check Pointer"
},
{
"msg_contents": "Thanks for the quick response. Below is the out put of Top Commnd.\n\n3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34\n/usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c\nconfig_file=/etc/postgre\n 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres:\nlogger\nprocess\n\n 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres:\ncheckpointer\nprocess\n 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres:\nwriter\nprocess\n\n 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal\nwriter\nprocess\n 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres:\nstats collector\nprocess\n1\n\nPostgresql =9.2.3\n\n\n\nOn Thu, May 30, 2013 at 5:56 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 30.05.2013 15:09, itishree sukla wrote:\n>\n>> In our server Check pointer process is consuming 8 GB of memory, what\n>> could\n>> be the possible reason? Can any one please help.\n>>\n>\n> Are you sure you're measuring the memory correctly? The RES field in top\n> output, for example, includes shared memory, ie. the whole buffer cache.\n> Shared memory isn't really \"consumed\" by the checkpointer process, but\n> shared by all postgres processes.\n>\n> - Heikki\n>\n\nThanks for the quick response. Below is the out put of Top Commnd.3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34 /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgre\r\n 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres: logger process 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres: checkpointer process \r\n 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres: writer process 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal writer process \r\n 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres: stats collector process 1Postgresql =9.2.3\nOn Thu, May 30, 2013 at 5:56 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 30.05.2013 15:09, itishree sukla wrote:\n\r\nIn our server Check pointer process is consuming 8 GB of memory, what could\r\nbe the possible reason? Can any one please help.\n\n\r\nAre you sure you're measuring the memory correctly? The RES field in top output, for example, includes shared memory, ie. the whole buffer cache. Shared memory isn't really \"consumed\" by the checkpointer process, but shared by all postgres processes.\n\r\n- Heikki",
"msg_date": "Thu, 30 May 2013 18:15:04 +0530",
"msg_from": "itishree sukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Check Pointer"
},
{
"msg_contents": "Can any one give more input, you can see my top out put, in %MEM its taking\n24.1.\n\nRegards,\nItishree\n\n\nOn Thu, May 30, 2013 at 6:15 PM, itishree sukla <[email protected]>wrote:\n\n> Thanks for the quick response. Below is the out put of Top Commnd.\n>\n> 3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34\n> /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c\n> config_file=/etc/postgre\n> 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres:\n> logger\n> process\n>\n> 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres:\n> checkpointer\n> process\n> 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres:\n> writer\n> process\n>\n> 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal\n> writer\n> process\n> 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres:\n> stats collector\n> process\n> 1\n>\n> Postgresql =9.2.3\n>\n>\n>\n> On Thu, May 30, 2013 at 5:56 PM, Heikki Linnakangas <\n> [email protected]> wrote:\n>\n>> On 30.05.2013 15:09, itishree sukla wrote:\n>>\n>>> In our server Check pointer process is consuming 8 GB of memory, what\n>>> could\n>>> be the possible reason? Can any one please help.\n>>>\n>>\n>> Are you sure you're measuring the memory correctly? The RES field in top\n>> output, for example, includes shared memory, ie. the whole buffer cache.\n>> Shared memory isn't really \"consumed\" by the checkpointer process, but\n>> shared by all postgres processes.\n>>\n>> - Heikki\n>>\n>\n>\n\nCan any one give more input, you can see my top out put, in %MEM its taking 24.1.Regards, ItishreeOn Thu, May 30, 2013 at 6:15 PM, itishree sukla <[email protected]> wrote:\nThanks for the quick response. Below is the out put of Top Commnd.3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34 /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgre\n\n 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres: logger process 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres: checkpointer process \n\n 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres: writer process 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal writer process \n\n 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres: stats collector process 1Postgresql =9.2.3\n\nOn Thu, May 30, 2013 at 5:56 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 30.05.2013 15:09, itishree sukla wrote:\n\nIn our server Check pointer process is consuming 8 GB of memory, what could\nbe the possible reason? Can any one please help.\n\n\nAre you sure you're measuring the memory correctly? The RES field in top output, for example, includes shared memory, ie. the whole buffer cache. Shared memory isn't really \"consumed\" by the checkpointer process, but shared by all postgres processes.\n\n- Heikki",
"msg_date": "Thu, 6 Jun 2013 10:14:14 +0530",
"msg_from": "itishree sukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Check Pointer"
},
{
"msg_contents": "itishree sukla <[email protected]> wrote:\n> itishree sukla <[email protected]> wrote:\n>> Heikki Linnakangas <[email protected]> wrote:\n>>> itishree sukla wrote:\n>>>\n>>>> In our server Check pointer process is consuming 8 GB of\n>>>> memory, what could be the possible reason? Can any one please\n>>>> help.\n>>>\n>>> Are you sure you're measuring the memory correctly? The RES\n>>> field in top output, for example, includes shared memory, ie.\n>>> the whole buffer cache. Shared memory isn't really \"consumed\"\n>>> by the checkpointer process, but shared by all postgres processes.\n>>\n>> Thanks for the quick response. Below is the out put of Top\n>> Commnd.\n>>\n>> 3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34 3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34 /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgre\n>> 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres: logger process\n>> 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres: checkpointer process\n>> 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres: writer process\n>> 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal writer process\n>> 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres: stats collector process\n\n> Can any one give more input, you can see my top out put, in %MEM\n> its taking 24.1.\n\nIt's not. It's referencing all of your shared_buffers.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 6 Jun 2013 05:53:18 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check Pointer"
},
{
"msg_contents": "\n\n--On 30. Mai 2013 18:15:04 +0530 itishree sukla <[email protected]> \nwrote:\n\n> Thanks for the quick response. Below is the out put of Top Commnd.\n>\n> 3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34\n> /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c\n> config_file=/etc/postgre\n> 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37\n> postgres: logger\n> process \n> \n > \n \n> 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59\n> postgres: checkpointer\n> process \n> \n\nOn Linux i often find the pmap utility a far better tool to get an idea \nwhat a process\nactually consumes of memory. The output can be large sometimes, but it's \nmore \"fine grained\".\n\n\n-- \nThanks\n\n\tBernd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 07 Jun 2013 10:19:58 +0200",
"msg_from": "Bernd Helmle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check Pointer"
},
{
"msg_contents": "On Thu, Jun 6, 2013 at 1:44 AM, itishree sukla <[email protected]>wrote:\n\n> Can any one give more input, you can see my top out put, in %MEM its\n> taking 24.1.\n>\n>\n> On Thu, May 30, 2013 at 6:15 PM, itishree sukla <[email protected]>wrote:\n>\n>> Thanks for the quick response. Below is the out put of Top Commnd.\n>>\n>> 3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34\n>> /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c\n>> config_file=/etc/postgre\n>> 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres:\n>> logger\n>> process\n>>\n>> 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres:\n>> checkpointer\n>> process\n>> 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres:\n>> writer\n>> process\n>>\n>> 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres:\n>> wal writer\n>> process\n>> 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres:\n>> stats collector\n>> process\n>> 1\n>>\n>> Postgresql =9.2.3\n>>\n>>\n>>\n>> On Thu, May 30, 2013 at 5:56 PM, Heikki Linnakangas <\n>> [email protected]> wrote:\n>>\n>>> On 30.05.2013 15:09, itishree sukla wrote:\n>>>\n>>>> In our server Check pointer process is consuming 8 GB of memory, what\n>>>> could\n>>>> be the possible reason? Can any one please help.\n>>>>\n>>>\n>>> Are you sure you're measuring the memory correctly? The RES field in top\n>>> output, for example, includes shared memory, ie. the whole buffer cache.\n>>> Shared memory isn't really \"consumed\" by the checkpointer process, but\n>>> shared by all postgres processes.\n>>>\n>>> - Heikki\n>>>\n>>\n>>\n>\nAs said before, the memory you may be not the real memory consumed by\ncheckpointer process, but it includes the shared memory (which is,\nbasically, used by all postgres' processes).\nDepesz wrote a nice topic on his blog about this subject [1], read it and\ntry the commands to see the real memory usage by checkpointer (when I say\n\"real\", I mean \"private\").\n\n[1] http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Thu, Jun 6, 2013 at 1:44 AM, itishree sukla <[email protected]> wrote:\nCan any one give more input, you can see my top out put, in %MEM its taking 24.1.\nOn Thu, May 30, 2013 at 6:15 PM, itishree sukla <[email protected]> wrote:\nThanks for the quick response. Below is the out put of Top Commnd.\n3971 postgres 20 0 8048m 303m 301m S 0 0.9 0:04.34 /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgre\n\n 3972 postgres 20 0 66828 1820 708 S 0 0.0 1:36.37 postgres: logger process 3974 postgres 20 0 8054m 7.6g 7.6g S 0 24.1 0:56.59 postgres: checkpointer process \n\n\n\n 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres: writer process 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal writer process \n\n\n\n 3977 postgres 20 0 70932 3352 716 S 0 0.0 0:05.19 postgres: stats collector process 1Postgresql =9.2.3\n\nOn Thu, May 30, 2013 at 5:56 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 30.05.2013 15:09, itishree sukla wrote:\n\nIn our server Check pointer process is consuming 8 GB of memory, what could\nbe the possible reason? Can any one please help.\n\n\nAre you sure you're measuring the memory correctly? The RES field in top output, for example, includes shared memory, ie. the whole buffer cache. 
Shared memory isn't really \"consumed\" by the checkpointer process, but shared by all postgres processes.\n\n- Heikki\n\n\nAs said before, the memory you may be not the real memory consumed by checkpointer process, but it includes the shared memory (which is, basically, used by all postgres' processes).\n\nDepesz wrote a nice topic on his blog about this subject [1], read it and try the commands to see the real memory usage by checkpointer (when I say \"real\", I mean \"private\").\n[1] http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 7 Jun 2013 10:53:14 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check Pointer"
}
] |
[
{
"msg_contents": "Two questions Please1.) Is there any way to clear the cache so that we can ensure that when we run \"explain analyze\" on a query and make some minor adjustments to that query and re-execute, the plan is not cached. Since the cached plan returns runtimes that are much lower than the initial execution, so we don't know for certain the tweaks we made improved the performance of the query, without having to bounce the database?2.) I am noticing that when I look at pg_stat_activities: autovacuum is re-processing some old Partition tables way back in 2007, which are static and are essentially read-only partitions. the line item in pg_stat reads as follows: autovacuum:VACUUM public.digi_sas_y2007m07 (to prevent wraparound). Is there a way to have autovacuum skip these static type partition tables, and only process partitions that have had; Inserts, updates, or deletes attributed to them? thanks. \n",
"msg_date": "Fri, 31 May 2013 09:32:54 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Evaluating query performance with caching in PostgreSQL 9.1.6"
},
{
"msg_contents": "On Fri, May 31, 2013 at 7:32 PM, <[email protected]> wrote:\n> 1.) Is there any way to clear the cache so that we can ensure that when we\n> run \"explain analyze\" on a query and make some minor adjustments to that\n> query and re-execute, the plan is not cached.\n\nPostgreSQL doesn't cache query plans if you do a normal \"SELECT\" or\n\"EXPLAIN ANALYZE SELECT\" query. Plans are cached only if you use\nprepared queries:\n1. Embedded queries within PL/pgSQL procedures\n2. Explicit PREPARE/EXECUTE commands\n3. PQprepare in the libpq library (or other client library)\n\nIf you don't use these, then you are experiencing something else and\nnot \"plan cache\".\n\nMaybe you're referring to disk cache. The only way to clear\nPostgreSQL's cache (shared buffers) is to restart it, but there is\nanother level of caching done by the operating system.\n\nOn Linux you can drop the OS cache using:\necho 1 > /proc/sys/vm/drop_caches\n\n> 2.) I am noticing that when I look at pg_stat_activities: autovacuum is\n> re-processing some old Partition tables way back in 2007, which are static\n> and are essentially read-only partitions. the line item in pg_stat reads as\n> follows: autovacuum:VACUUM public.digi_sas_y2007m07 (to prevent wraparound).\n> Is there a way to have autovacuum skip these static type partition tables,\n\nNo. This is a necessary and critical operation. PostgreSQL stores row\nvisibility information based on 32-bit transaction IDs (xids). This\nvalue is small enough that it can wrap around, so very old tables need\nto be \"frozen\". Details here:\nhttp://www.postgresql.org/docs/9.1/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n\nIf this is a problem for you then you may want to schedule manual\nVACUUM FREEZE on old tables during low usage periods.\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 31 May 2013 19:57:19 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Evaluating query performance with caching in PostgreSQL\n\t9.1.6"
}
] |
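For the anti-wraparound half of the answer above, two illustrative statements (the partition name is taken from the question; running the freeze during a quiet period is the point of the suggestion):

    -- tables with the oldest relfrozenxid, i.e. next in line for wraparound vacuums
    SELECT c.oid::regclass AS table_name,
           age(c.relfrozenxid) AS xid_age
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 10;

    -- freeze an old, static partition explicitly so autovacuum has no reason
    -- to revisit it for a very long time
    VACUUM FREEZE VERBOSE public.digi_sas_y2007m07;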
[
{
"msg_contents": "I have a table called contacts. It has a BIGINT owner_id which references a\nrecord in the user table. It also has a BIGINT user_id which may be null.\nAdditionally it has a BOOLEAN blocked column to indicate if a contact is\nblocked. The final detail is that multiple contacts for an owner may\nreference the same user.\n\nI have a query to get all the user_ids of a non-blocked contact that is a\nmutual contact of the user. The important part of the table looks like this:\n\nCREATE TABLE contacts\n(\n id BIGINT PRIMARY KEY NOT NULL, // generated\n blocked BOOL,\n owner_id BIGINT NOT NULL,\n user_id BIGINT,\n FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL\n);\nCREATE INDEX idx_contact_owner ON contacts ( owner_id );\nCREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE\nuser_id IS NOT NULL AND NOT blocked;\n\nThe query looks like this:\n\nexplain analyze verbose\nselect c.user_id\nfrom contact_entity c\nwhere c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT\nc.blocked and (exists (\n select 1\n from contact_entity c1\n where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL\nand c1.user_id=24))\ngroup by c.user_id;\n\nThis will get all the users for user 24 that are mutual unblocked contacts\nbut exclude the user 24.\n\nI have run this through explain several times and I'm out of ideas on the\nindex. I note that I can also right the query like this:\n\nexplain analyze verbose\nselect distinct c.user_id\nfrom contact_entity c left outer join contact_entity c1 on c1.owner_id =\nc.user_id and c1.user_id = c.owner_id\nwhere NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id <>\n24\nAND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\ngroup by c.user_id;\n\nI don't notice a big difference in the query plans. I also notice no\ndifference if I replace the GROUP BY with DISTINCT.\n\nMy question is, can this be tightened further in a way I haven't been\ncreative enough to try? Does it matter if I use the EXISTS versus the OUTER\nJOIN or the GROUP BY versus the DISTINCT.\n\nIs there a better index and I just have not been clever enough to come up\nwith it yet? I've tried a bunch.\n\nThanks in advance!!\n\nRobert\n\nI have a table called contacts. It has a BIGINT owner_id which references a record in the user table. It also has a BIGINT user_id which may be null. Additionally it has a BOOLEAN blocked column to indicate if a contact is blocked. The final detail is that multiple contacts for an owner may reference the same user.\nI have a query to get all the user_ids of a non-blocked contact that is a mutual contact of the user. 
The important part of the table looks like this:CREATE TABLE contacts\n( id BIGINT PRIMARY KEY NOT NULL, // generated blocked BOOL, owner_id BIGINT NOT NULL, user_id BIGINT, FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL);CREATE INDEX idx_contact_owner ON contacts ( owner_id );CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE user_id IS NOT NULL AND NOT blocked;\nThe query looks like this:explain analyze verbose select c.user_id from contact_entity c where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT c.blocked and (exists (\n select 1 from contact_entity c1 where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL and c1.user_id=24)) group by c.user_id;\nThis will get all the users for user 24 that are mutual unblocked contacts but exclude the user 24.I have run this through explain several times and I'm out of ideas on the index. I note that I can also right the query like this:\nexplain analyze verbose select distinct c.user_id from contact_entity c left outer join contact_entity c1 on c1.owner_id = c.user_id and c1.user_id = c.owner_id\nwhere NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id <> 24AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULLgroup by c.user_id;I don't notice a big difference in the query plans. I also notice no difference if I replace the GROUP BY with DISTINCT. \nMy question is, can this be tightened further in a way I haven't been creative enough to try? Does it matter if I use the EXISTS versus the OUTER JOIN or the GROUP BY versus the DISTINCT.\nIs there a better index and I just have not been clever enough to come up with it yet? I've tried a bunch.Thanks in advance!!\nRobert",
"msg_date": "Sun, 2 Jun 2013 12:39:02 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL performance"
},
{
"msg_contents": "On 2 June 2013 21:39, Robert DiFalco <[email protected]> wrote:\n\n> I have a table called contacts. It has a BIGINT owner_id which references\n> a record in the user table. It also has a BIGINT user_id which may be null.\n> Additionally it has a BOOLEAN blocked column to indicate if a contact is\n> blocked. The final detail is that multiple contacts for an owner may\n> reference the same user.\n>\n> I have a query to get all the user_ids of a non-blocked contact that is a\n> mutual contact of the user. The important part of the table looks like this:\n>\n> CREATE TABLE contacts\n> (\n> id BIGINT PRIMARY KEY NOT NULL, // generated\n> blocked BOOL,\n> owner_id BIGINT NOT NULL,\n> user_id BIGINT,\n> FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n> FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL\n> );\n> CREATE INDEX idx_contact_owner ON contacts ( owner_id );\n> CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE\n> user_id IS NOT NULL AND NOT blocked;\n>\n> The query looks like this:\n>\n> explain analyze verbose\n> select c.user_id\n> from contact_entity c\n> where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT\n> c.blocked and (exists (\n> select 1\n> from contact_entity c1\n> where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT\n> NULL and c1.user_id=24))\n> group by c.user_id;\n>\n> This will get all the users for user 24 that are mutual unblocked contacts\n> but exclude the user 24.\n>\n> I have run this through explain several times and I'm out of ideas on the\n> index. I note that I can also right the query like this:\n>\n> explain analyze verbose\n> select distinct c.user_id\n> from contact_entity c left outer join contact_entity c1 on c1.owner_id =\n> c.user_id and c1.user_id = c.owner_id\n> where NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id\n> <> 24\n> AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\n> group by c.user_id;\n>\n> I don't notice a big difference in the query plans. I also notice no\n> difference if I replace the GROUP BY with DISTINCT.\n>\n> My question is, can this be tightened further in a way I haven't been\n> creative enough to try? Does it matter if I use the EXISTS versus the OUTER\n> JOIN or the GROUP BY versus the DISTINCT.\n>\n> Is there a better index and I just have not been clever enough to come up\n> with it yet? I've tried a bunch.\n>\n> Thanks in advance!!\n>\n> Robert\n>\n\n\nHi Robert,\ncould you show us the plans?\n\nthanks,\nSzymon\n\nOn 2 June 2013 21:39, Robert DiFalco <[email protected]> wrote:\nI have a table called contacts. It has a BIGINT owner_id which references a record in the user table. It also has a BIGINT user_id which may be null. Additionally it has a BOOLEAN blocked column to indicate if a contact is blocked. The final detail is that multiple contacts for an owner may reference the same user.\nI have a query to get all the user_ids of a non-blocked contact that is a mutual contact of the user. 
The important part of the table looks like this:CREATE TABLE contacts\n( id BIGINT PRIMARY KEY NOT NULL, // generated blocked BOOL, owner_id BIGINT NOT NULL, user_id BIGINT, FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL);CREATE INDEX idx_contact_owner ON contacts ( owner_id );CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE user_id IS NOT NULL AND NOT blocked;\nThe query looks like this:explain analyze verbose select c.user_id from contact_entity c where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT c.blocked and (exists (\n select 1 from contact_entity c1 where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL and c1.user_id=24)) group by c.user_id;\nThis will get all the users for user 24 that are mutual unblocked contacts but exclude the user 24.I have run this through explain several times and I'm out of ideas on the index. I note that I can also right the query like this:\nexplain analyze verbose select distinct c.user_id from contact_entity c left outer join contact_entity c1 on c1.owner_id = c.user_id and c1.user_id = c.owner_id\nwhere NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id <> 24AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULLgroup by c.user_id;I don't notice a big difference in the query plans. I also notice no difference if I replace the GROUP BY with DISTINCT. \nMy question is, can this be tightened further in a way I haven't been creative enough to try? Does it matter if I use the EXISTS versus the OUTER JOIN or the GROUP BY versus the DISTINCT.\nIs there a better index and I just have not been clever enough to come up with it yet? I've tried a bunch.Thanks in advance!!\n\nRobertHi Robert,could you show us the plans?thanks,\nSzymon",
"msg_date": "Sun, 2 Jun 2013 21:42:12 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL performance"
},
{
"msg_contents": "Absolutely:\n\nexplain analyze verbose\nselect c.user_id\nfrom contact_entity c left outer join contact_entity c1 on c1.owner_id =\nc.user_id and c1.user_id = c.owner_id\nwhere NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id !=\n24\nAND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\ngroup by c.user_id;\n\nQUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.170..0.301 rows=8\nloops=1)\n Output: c.user_id\n -> Merge Join (cost=0.00..9.00 rows=1 width=8) (actual\ntime=0.166..0.270 rows=17 loops=1)\n Output: c.user_id\n Merge Cond: (c.user_id = c1.owner_id)\n -> Index Scan using idx_contact_mutual on public.contact_entity c\n (cost=0.00..5.10 rows=2 width=16) (actual time=0.146..0.164 rows=11\nloops=1)\n Output: c.id, c.blocked, c.first_name, c.last_name,\nc.owner_id, c.user_id\n Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n Filter: (c.user_id <> 24)\n Rows Removed by Filter: 1\n -> Index Scan using idx_contact_mutual on public.contact_entity\nc1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.012..0.049 rows=18\nloops=1)\n Output: c1.id, c1.blocked, c1.first_name, c1.last_name,\nc1.owner_id, c1.user_id\n Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n Total runtime: 0.388 ms\n(14 rows)\n\nexplain analyze verbose\nselect c.user_id\nfrom contact_entity c\nwhere c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT\nc.blocked and (exists(\n select 1\n from contact_entity c1\n where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL\nand c1.user_id=c.owner_id))\ngroup by c.user_id;\n\nQUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.048..0.159 rows=8\nloops=1)\n Output: c.user_id\n -> Merge Semi Join (cost=0.00..9.00 rows=1 width=8) (actual\ntime=0.044..0.137 rows=9 loops=1)\n Output: c.user_id\n Merge Cond: (c.user_id = c1.owner_id)\n -> Index Scan using idx_contact_mutual on public.contact_entity c\n (cost=0.00..5.10 rows=2 width=16) (actual time=0.024..0.042 rows=11\nloops=1)\n Output: c.id, c.blocked, c.first_name, c.last_name,\nc.owner_id, c.user_id\n Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n Filter: (c.user_id <> 24)\n Rows Removed by Filter: 1\n -> Index Scan using idx_contact_mutual on public.contact_entity\nc1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.011..0.047 rows=16\nloops=1)\n Output: c1.id, c1.blocked, c1.first_name, c1.last_name,\nc1.owner_id, c1.user_id\n Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n Total runtime: 0.224 ms\n(14 rows)\n\nThe only difference I see between the EXISTS and LEFT OUTER JOIN is the\nMerge Join versus the Merge Semi Join. Then again, there may be a third\noption for this query besides those two that will be much better. But those\nare the only two reasonable variations I can think of.\n\nThe GROUP BY versus the DISTINCT on c.user_id makes no impact at all on the\nplan. They are exactly the same.\n\n\nOn Sun, Jun 2, 2013 at 12:42 PM, Szymon Guz <[email protected]> wrote:\n\n> On 2 June 2013 21:39, Robert DiFalco <[email protected]> wrote:\n>\n>> I have a table called contacts. It has a BIGINT owner_id which references\n>> a record in the user table. 
It also has a BIGINT user_id which may be null.\n>> Additionally it has a BOOLEAN blocked column to indicate if a contact is\n>> blocked. The final detail is that multiple contacts for an owner may\n>> reference the same user.\n>>\n>> I have a query to get all the user_ids of a non-blocked contact that is a\n>> mutual contact of the user. The important part of the table looks like this:\n>>\n>> CREATE TABLE contacts\n>> (\n>> id BIGINT PRIMARY KEY NOT NULL, // generated\n>> blocked BOOL,\n>> owner_id BIGINT NOT NULL,\n>> user_id BIGINT,\n>> FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE\n>> CASCADE,\n>> FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL\n>> );\n>> CREATE INDEX idx_contact_owner ON contacts ( owner_id );\n>> CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE\n>> user_id IS NOT NULL AND NOT blocked;\n>>\n>> The query looks like this:\n>>\n>> explain analyze verbose\n>> select c.user_id\n>> from contact_entity c\n>> where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT\n>> c.blocked and (exists (\n>> select 1\n>> from contact_entity c1\n>> where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT\n>> NULL and c1.user_id=24))\n>> group by c.user_id;\n>>\n>> This will get all the users for user 24 that are mutual unblocked\n>> contacts but exclude the user 24.\n>>\n>> I have run this through explain several times and I'm out of ideas on the\n>> index. I note that I can also right the query like this:\n>>\n>> explain analyze verbose\n>> select distinct c.user_id\n>> from contact_entity c left outer join contact_entity c1 on c1.owner_id =\n>> c.user_id and c1.user_id = c.owner_id\n>> where NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id\n>> <> 24\n>> AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\n>> group by c.user_id;\n>>\n>> I don't notice a big difference in the query plans. I also notice no\n>> difference if I replace the GROUP BY with DISTINCT.\n>>\n>> My question is, can this be tightened further in a way I haven't been\n>> creative enough to try? Does it matter if I use the EXISTS versus the OUTER\n>> JOIN or the GROUP BY versus the DISTINCT.\n>>\n>> Is there a better index and I just have not been clever enough to come up\n>> with it yet? 
I've tried a bunch.\n>>\n>> Thanks in advance!!\n>>\n>> Robert\n>>\n>\n>\n> Hi Robert,\n> could you show us the plans?\n>\n> thanks,\n> Szymon\n>\n\nAbsolutely:explain analyze verboseselect c.user_idfrom contact_entity c left outer join contact_entity c1 on c1.owner_id = c.user_id and c1.user_id = c.owner_id\nwhere NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id != 24AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULLgroup by c.user_id; QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------- Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.170..0.301 rows=8 loops=1)\n Output: c.user_id -> Merge Join (cost=0.00..9.00 rows=1 width=8) (actual time=0.166..0.270 rows=17 loops=1) Output: c.user_id Merge Cond: (c.user_id = c1.owner_id)\n -> Index Scan using idx_contact_mutual on public.contact_entity c (cost=0.00..5.10 rows=2 width=16) (actual time=0.146..0.164 rows=11 loops=1) Output: c.id, c.blocked, c.first_name, c.last_name, c.owner_id, c.user_id\n Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL)) Filter: (c.user_id <> 24) Rows Removed by Filter: 1 -> Index Scan using idx_contact_mutual on public.contact_entity c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.012..0.049 rows=18 loops=1)\n Output: c1.id, c1.blocked, c1.first_name, c1.last_name, c1.owner_id, c1.user_id Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n Total runtime: 0.388 ms(14 rows)explain analyze verbose select c.user_id from contact_entity c where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT c.blocked and (exists(\n select 1 from contact_entity c1 where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL and c1.user_id=c.owner_id)) group by c.user_id; QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------- Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.048..0.159 rows=8 loops=1)\n Output: c.user_id -> Merge Semi Join (cost=0.00..9.00 rows=1 width=8) (actual time=0.044..0.137 rows=9 loops=1) Output: c.user_id Merge Cond: (c.user_id = c1.owner_id)\n -> Index Scan using idx_contact_mutual on public.contact_entity c (cost=0.00..5.10 rows=2 width=16) (actual time=0.024..0.042 rows=11 loops=1) Output: c.id, c.blocked, c.first_name, c.last_name, c.owner_id, c.user_id\n Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL)) Filter: (c.user_id <> 24) Rows Removed by Filter: 1 -> Index Scan using idx_contact_mutual on public.contact_entity c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.011..0.047 rows=16 loops=1)\n Output: c1.id, c1.blocked, c1.first_name, c1.last_name, c1.owner_id, c1.user_id Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n Total runtime: 0.224 ms(14 rows)The only difference I see between the EXISTS and LEFT OUTER JOIN is the Merge Join versus the Merge Semi Join. Then again, there may be a third option for this query besides those two that will be much better. But those are the only two reasonable variations I can think of.\nThe GROUP BY versus the DISTINCT on c.user_id makes no impact at all on the plan. They are exactly the same.On Sun, Jun 2, 2013 at 12:42 PM, Szymon Guz <[email protected]> wrote:\nOn 2 June 2013 21:39, Robert DiFalco <[email protected]> wrote:\n\nI have a table called contacts. It has a BIGINT owner_id which references a record in the user table. 
It also has a BIGINT user_id which may be null. Additionally it has a BOOLEAN blocked column to indicate if a contact is blocked. The final detail is that multiple contacts for an owner may reference the same user.\nI have a query to get all the user_ids of a non-blocked contact that is a mutual contact of the user. The important part of the table looks like this:CREATE TABLE contacts\n( id BIGINT PRIMARY KEY NOT NULL, // generated blocked BOOL, owner_id BIGINT NOT NULL, user_id BIGINT, FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL);CREATE INDEX idx_contact_owner ON contacts ( owner_id );CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE user_id IS NOT NULL AND NOT blocked;\nThe query looks like this:explain analyze verbose select c.user_id from contact_entity c where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT c.blocked and (exists (\n select 1 from contact_entity c1 where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL and c1.user_id=24)) group by c.user_id;\nThis will get all the users for user 24 that are mutual unblocked contacts but exclude the user 24.I have run this through explain several times and I'm out of ideas on the index. I note that I can also right the query like this:\nexplain analyze verbose select distinct c.user_id from contact_entity c left outer join contact_entity c1 on c1.owner_id = c.user_id and c1.user_id = c.owner_id\nwhere NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id <> 24AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULLgroup by c.user_id;I don't notice a big difference in the query plans. I also notice no difference if I replace the GROUP BY with DISTINCT. \nMy question is, can this be tightened further in a way I haven't been creative enough to try? Does it matter if I use the EXISTS versus the OUTER JOIN or the GROUP BY versus the DISTINCT.\nIs there a better index and I just have not been clever enough to come up with it yet? I've tried a bunch.Thanks in advance!!\n\nRobertHi Robert,could you show us the plans?thanks,\nSzymon",
"msg_date": "Sun, 2 Jun 2013 13:19:16 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL performance"
},
{
"msg_contents": "Robert DiFalco <[email protected]> wrote:\n\n> CREATE TABLE contacts\n> (\n> id BIGINT PRIMARY KEY NOT NULL, // generated\n> \n> blocked BOOL,\n> owner_id BIGINT NOT NULL,\n> user_id BIGINT,\n> FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n> \n> FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL\n> );\n> CREATE INDEX idx_contact_owner ON contacts ( owner_id );\n> CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE user_id IS NOT NULL AND NOT blocked;\n\nWell, the first thing I note is that \"blocked\" can be NULL. You\nexclude rows from the result where it IS NULL in either row. That\nmay be what you really want, but it seems worth mentioning. If you\ndon't need to support missing values there, you might want to add a\nNOT NULL constraint. If it should be NULL when user_id is, but not\notherwise, you might want a row-level constraint. You might shave\na tiny amount off the runtime by getting rid of the redundant tests\nfor NOT NULL on user_id; it cannot compare as either TRUE on either\n= or <> if either (or both) values are NULL.\n\n> explain analyze verbose\n> select c.user_id\n> from contact_entity c left outer join contact_entity c1 on c1.owner_id = c.user_id and c1.user_id = c.owner_id\n> where NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id != 24\n> AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\n> group by c.user_id;\n\n> Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.170..0.301 rows=8 loops=1)\n> Output: c.user_id\n> -> Merge Join (cost=0.00..9.00 rows=1 width=8) (actual time=0.166..0.270 rows=17 loops=1)\n> Output: c.user_id\n> Merge Cond: (c.user_id = c1.owner_id)\n> -> Index Scan using idx_contact_mutual on public.contact_entity c (cost=0.00..5.10 rows=2 width=16) (actual time=0.146..0.164 rows=11 loops=1)\n> Output: c.id, c.blocked, c.first_name, c.last_name, c.owner_id, c.user_id\n> Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n> Filter: (c.user_id <> 24)\n> Rows Removed by Filter: 1\n> -> Index Scan using idx_contact_mutual on public.contact_entity c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.012..0.049 rows=18 loops=1)\n> Output: c1.id, c1.blocked, c1.first_name, c1.last_name, c1.owner_id, c1.user_id\n> Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n> Total runtime: 0.388 ms\n\n> explain analyze verbose \n> select c.user_id \n> from contact_entity c \n> where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT c.blocked and (exists(\n> select 1 \n> from contact_entity c1 \n> where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL and c1.user_id=c.owner_id)) \n> group by c.user_id;\n\n> Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.048..0.159 rows=8 loops=1)\n> Output: c.user_id\n> -> Merge Semi Join (cost=0.00..9.00 rows=1 width=8) (actual time=0.044..0.137 rows=9 loops=1)\n> Output: c.user_id\n> Merge Cond: (c.user_id = c1.owner_id)\n> -> Index Scan using idx_contact_mutual on public.contact_entity c (cost=0.00..5.10 rows=2 width=16) (actual time=0.024..0.042 rows=11 loops=1)\n> Output: c.id, c.blocked, c.first_name, c.last_name, c.owner_id, c.user_id\n> Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n> Filter: (c.user_id <> 24)\n> Rows Removed by Filter: 1\n> -> Index Scan using idx_contact_mutual on public.contact_entity c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.011..0.047 rows=16 loops=1)\n> Output: c1.id, c1.blocked, c1.first_name, c1.last_name, c1.owner_id, 
c1.user_id\n> Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n> Total runtime: 0.224 ms\n\nSo, it looks like you can get about 3000 to 4000 of these per\nsecond on a single connection -- at least in terms of server-side\nprocessing. Were you expecting more than that?\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 3 Jun 2013 07:26:18 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL performance"
},
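A sketch of the constraints Kevin suggests, written against the table the plans above actually use (contact_entity); the backfill step and the constraint name are illustrative:

    -- make blocked non-nullable (any existing NULLs must be backfilled first)
    UPDATE contact_entity SET blocked = false WHERE blocked IS NULL;
    ALTER TABLE contact_entity
        ALTER COLUMN blocked SET DEFAULT false,
        ALTER COLUMN blocked SET NOT NULL;

    -- or, if blocked is only allowed to be NULL when user_id is, a row-level check instead:
    ALTER TABLE contact_entity
        ADD CONSTRAINT contact_blocked_requires_user
        CHECK (blocked IS NOT NULL OR user_id IS NULL);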
{
"msg_contents": "Thanks Kevin, the blocked should not be NULLABLE. I will fix that. This is\nwith a pretty tiny dataset. I'm a little paranoid that with a large one I\nwill have issues.\n\nBelieve it or not the query became faster when I put the tests for user_id\nIS NOT NULL in there (and added an index for that) then without the tests\nand index.\n\nIt kinda makes me wonder if (from a performance perspective) I should\nchange the schema to pull user_id out of contacts and created a related\ntable with {contacts.id, user_id} where user_id is never null.\n\n\n\n\nOn Mon, Jun 3, 2013 at 7:26 AM, Kevin Grittner <[email protected]> wrote:\n\n> Robert DiFalco <[email protected]> wrote:\n>\n> > CREATE TABLE contacts\n> > (\n> > id BIGINT PRIMARY KEY NOT NULL, // generated\n> >\n> > blocked BOOL,\n> > owner_id BIGINT NOT NULL,\n> > user_id BIGINT,\n> > FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE\n> CASCADE,\n> >\n> > FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET\n> NULL\n> > );\n> > CREATE INDEX idx_contact_owner ON contacts ( owner_id );\n> > CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE\n> user_id IS NOT NULL AND NOT blocked;\n>\n> Well, the first thing I note is that \"blocked\" can be NULL. You\n> exclude rows from the result where it IS NULL in either row. That\n> may be what you really want, but it seems worth mentioning. If you\n> don't need to support missing values there, you might want to add a\n> NOT NULL constraint. If it should be NULL when user_id is, but not\n> otherwise, you might want a row-level constraint. You might shave\n> a tiny amount off the runtime by getting rid of the redundant tests\n> for NOT NULL on user_id; it cannot compare as either TRUE on either\n> = or <> if either (or both) values are NULL.\n>\n> > explain analyze verbose\n> > select c.user_id\n> > from contact_entity c left outer join contact_entity c1 on c1.owner_id =\n> c.user_id and c1.user_id = c.owner_id\n> > where NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id\n> != 24\n> > AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\n> > group by c.user_id;\n>\n> > Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.170..0.301 rows=8\n> loops=1)\n> > Output: c.user_id\n> > -> Merge Join (cost=0.00..9.00 rows=1 width=8) (actual\n> time=0.166..0.270 rows=17 loops=1)\n> > Output: c.user_id\n> > Merge Cond: (c.user_id = c1.owner_id)\n> > -> Index Scan using idx_contact_mutual on public.contact_entity\n> c (cost=0.00..5.10 rows=2 width=16) (actual time=0.146..0.164 rows=11\n> loops=1)\n> > Output: c.id, c.blocked, c.first_name, c.last_name,\n> c.owner_id, c.user_id\n> > Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n> > Filter: (c.user_id <> 24)\n> > Rows Removed by Filter: 1\n> > -> Index Scan using idx_contact_mutual on public.contact_entity\n> c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.012..0.049 rows=18\n> loops=1)\n> > Output: c1.id, c1.blocked, c1.first_name, c1.last_name,\n> c1.owner_id, c1.user_id\n> > Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id =\n> 24))\n> > Total runtime: 0.388 ms\n>\n> > explain analyze verbose\n> > select c.user_id\n> > from contact_entity c\n> > where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT\n> c.blocked and (exists(\n> > select 1\n> > from contact_entity c1\n> > where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT\n> NULL and c1.user_id=c.owner_id))\n> > group by c.user_id;\n>\n> > Group (cost=0.00..9.00 rows=1 width=8) 
(actual time=0.048..0.159 rows=8\n> loops=1)\n> > Output: c.user_id\n> > -> Merge Semi Join (cost=0.00..9.00 rows=1 width=8) (actual\n> time=0.044..0.137 rows=9 loops=1)\n> > Output: c.user_id\n> > Merge Cond: (c.user_id = c1.owner_id)\n> > -> Index Scan using idx_contact_mutual on public.contact_entity\n> c (cost=0.00..5.10 rows=2 width=16) (actual time=0.024..0.042 rows=11\n> loops=1)\n> > Output: c.id, c.blocked, c.first_name, c.last_name,\n> c.owner_id, c.user_id\n> > Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n> > Filter: (c.user_id <> 24)\n> > Rows Removed by Filter: 1\n> > -> Index Scan using idx_contact_mutual on public.contact_entity\n> c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.011..0.047 rows=16\n> loops=1)\n> > Output: c1.id, c1.blocked, c1.first_name, c1.last_name,\n> c1.owner_id, c1.user_id\n> > Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id =\n> 24))\n> > Total runtime: 0.224 ms\n>\n> So, it looks like you can get about 3000 to 4000 of these per\n> second on a single connection -- at least in terms of server-side\n> processing. Were you expecting more than that?\n>\n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks Kevin, the blocked should not be NULLABLE. I will fix that. This is with a pretty tiny dataset. I'm a little paranoid that with a large one I will have issues.Believe it or not the query became faster when I put the tests for user_id IS NOT NULL in there (and added an index for that) then without the tests and index.\nIt kinda makes me wonder if (from a performance perspective) I should change the schema to pull user_id out of contacts and created a related table with {contacts.id, user_id} where user_id is never null.\nOn Mon, Jun 3, 2013 at 7:26 AM, Kevin Grittner <[email protected]> wrote:\nRobert DiFalco <[email protected]> wrote:\n\n> CREATE TABLE contacts\n> (\n> id BIGINT PRIMARY KEY NOT NULL, // generated\n>\n> blocked BOOL,\n> owner_id BIGINT NOT NULL,\n> user_id BIGINT,\n> FOREIGN KEY ( owner_id ) REFERENCES app_users ( id ) ON DELETE CASCADE,\n>\n> FOREIGN KEY ( user_id ) REFERENCES app_users ( id ) ON DELETE SET NULL\n> );\n> CREATE INDEX idx_contact_owner ON contacts ( owner_id );\n> CREATE INDEX idx_contact_mutual ON contacts ( owner_id, user_id ) WHERE user_id IS NOT NULL AND NOT blocked;\n\nWell, the first thing I note is that \"blocked\" can be NULL. You\nexclude rows from the result where it IS NULL in either row. That\nmay be what you really want, but it seems worth mentioning. If you\ndon't need to support missing values there, you might want to add a\nNOT NULL constraint. If it should be NULL when user_id is, but not\notherwise, you might want a row-level constraint. 
You might shave\na tiny amount off the runtime by getting rid of the redundant tests\nfor NOT NULL on user_id; it cannot compare as either TRUE on either\n= or <> if either (or both) values are NULL.\n\n> explain analyze verbose\n> select c.user_id\n> from contact_entity c left outer join contact_entity c1 on c1.owner_id = c.user_id and c1.user_id = c.owner_id\n> where NOT c.blocked AND NOT c1.blocked AND c.owner_id = 24 AND c.user_id != 24\n> AND c.user_id IS NOT NULL AND c1.user_id IS NOT NULL\n> group by c.user_id;\n\n> Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.170..0.301 rows=8 loops=1)\n> Output: c.user_id\n> -> Merge Join (cost=0.00..9.00 rows=1 width=8) (actual time=0.166..0.270 rows=17 loops=1)\n> Output: c.user_id\n> Merge Cond: (c.user_id = c1.owner_id)\n> -> Index Scan using idx_contact_mutual on public.contact_entity c (cost=0.00..5.10 rows=2 width=16) (actual time=0.146..0.164 rows=11 loops=1)\n> Output: c.id, c.blocked, c.first_name, c.last_name, c.owner_id, c.user_id\n> Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n> Filter: (c.user_id <> 24)\n> Rows Removed by Filter: 1\n> -> Index Scan using idx_contact_mutual on public.contact_entity c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.012..0.049 rows=18 loops=1)\n> Output: c1.id, c1.blocked, c1.first_name, c1.last_name, c1.owner_id, c1.user_id\n> Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n> Total runtime: 0.388 ms\n\n> explain analyze verbose\n> select c.user_id\n> from contact_entity c\n> where c.owner_id=24 and c.user_id<>24 and c.user_id IS NOT NULL and NOT c.blocked and (exists(\n> select 1\n> from contact_entity c1\n> where NOT c1.blocked and c1.owner_id=c.user_id and c1.user_id IS NOT NULL and c1.user_id=c.owner_id))\n> group by c.user_id;\n\n> Group (cost=0.00..9.00 rows=1 width=8) (actual time=0.048..0.159 rows=8 loops=1)\n> Output: c.user_id\n> -> Merge Semi Join (cost=0.00..9.00 rows=1 width=8) (actual time=0.044..0.137 rows=9 loops=1)\n> Output: c.user_id\n> Merge Cond: (c.user_id = c1.owner_id)\n> -> Index Scan using idx_contact_mutual on public.contact_entity c (cost=0.00..5.10 rows=2 width=16) (actual time=0.024..0.042 rows=11 loops=1)\n> Output: c.id, c.blocked, c.first_name, c.last_name, c.owner_id, c.user_id\n> Index Cond: ((c.owner_id = 24) AND (c.user_id IS NOT NULL))\n> Filter: (c.user_id <> 24)\n> Rows Removed by Filter: 1\n> -> Index Scan using idx_contact_mutual on public.contact_entity c1 (cost=0.00..6.45 rows=1 width=16) (actual time=0.011..0.047 rows=16 loops=1)\n> Output: c1.id, c1.blocked, c1.first_name, c1.last_name, c1.owner_id, c1.user_id\n> Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))\n> Total runtime: 0.224 ms\n\nSo, it looks like you can get about 3000 to 4000 of these per\nsecond on a single connection -- at least in terms of server-side\nprocessing. Were you expecting more than that?\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 3 Jun 2013 07:46:34 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL performance"
}
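A minimal sketch of the schema split Robert floats above, assuming the table and column names seen in the thread (contact_entity, app_users); the link table, its columns, and the index name are hypothetical, and the NOT blocked filter would still live on contact_entity:

-- Hypothetical link table: one row per contact that is tied to a registered user,
-- so user_id can be declared NOT NULL and the partial IS NOT NULL index goes away.
CREATE TABLE contact_user_link (
    contact_id BIGINT PRIMARY KEY REFERENCES contact_entity (id) ON DELETE CASCADE,
    owner_id   BIGINT NOT NULL REFERENCES app_users (id) ON DELETE CASCADE,
    user_id    BIGINT NOT NULL REFERENCES app_users (id) ON DELETE CASCADE
);
CREATE INDEX idx_contact_user_link ON contact_user_link (owner_id, user_id);

-- Mutual contacts of user 24, with no NULL tests left in the query
-- (blocked would still be checked by joining back to contact_entity).
SELECT c.user_id
FROM contact_user_link c
WHERE c.owner_id = 24
  AND c.user_id <> 24
  AND EXISTS (SELECT 1
              FROM contact_user_link c1
              WHERE c1.owner_id = c.user_id
                AND c1.user_id = c.owner_id)
GROUP BY c.user_id;

The point of the split is that the planner never has to reason about NULL user_id values, at the cost of an extra join whenever contact attributes such as blocked are needed.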
] |
[
{
"msg_contents": "Hello,\n\nI'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that\ni'm trying to execute is faster on PgAdmin app.\n\nSELECT title, ts_rank_cd(vector, query) AS rank FROM links,\nto_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\nDESC;\n\nI'm not sure, what can i do to increase the speed of execution from php:\n\n$start_time = microtime(true);\n$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links,\nto_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\nDESC LIMIT 10;\";\n$result = pg_query($connection, $query);\n$end_time = microtime(true);\n\npersistant connections are enabled in php.ini but i calculate only\nexecution time from start to end.\n\nThanks, Emrah.\n\n-- \nBest regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\nHello,I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that i'm trying to execute is faster on PgAdmin app.SELECT title, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC;\nI'm not sure, what can i do to increase the speed of execution from php:$start_time = microtime(true);$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC LIMIT 10;\";\n$result = pg_query($connection, $query);$end_time = microtime(true);persistant connections are enabled in php.ini but i calculate only execution time from start to end.\nThanks, Emrah.-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Wed, 5 Jun 2013 12:18:09 +0200",
"msg_from": "Emrah Mehmedov <[email protected]>",
"msg_from_op": true,
"msg_subject": "PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "Is php connecting through tcp whilst pgadmin is using unix domain socket?\n Probably the query time is the same, but returning the result over tcp\nwill be slower.\n\n\nOn 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\n\n> Hello,\n>\n> I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query\n> that i'm trying to execute is faster on PgAdmin app.\n>\n> SELECT title, ts_rank_cd(vector, query) AS rank FROM links,\n> to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\n> DESC;\n>\n> I'm not sure, what can i do to increase the speed of execution from php:\n>\n> $start_time = microtime(true);\n> $query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links,\n> to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\n> DESC LIMIT 10;\";\n> $result = pg_query($connection, $query);\n> $end_time = microtime(true);\n>\n> persistant connections are enabled in php.ini but i calculate only\n> execution time from start to end.\n>\n> Thanks, Emrah.\n>\n> --\n> Best regards, Emrah Mehmedov\n> Software Developer @ X3M Labs\n> http://www.extreme-labs.com\n>\n\nIs php connecting through tcp whilst pgadmin is using unix domain socket? Probably the query time is the same, but returning the result over tcp will be slower.\nOn 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\nHello,I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that i'm trying to execute is faster on PgAdmin app.SELECT title, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC;\nI'm not sure, what can i do to increase the speed of execution from php:$start_time = microtime(true);$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC LIMIT 10;\";\n$result = pg_query($connection, $query);$end_time = microtime(true);persistant connections are enabled in php.ini but i calculate only execution time from start to end.\nThanks, Emrah.-- Best regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Wed, 5 Jun 2013 12:01:39 +0100",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "Can we modify php connection?\n\n\nOn Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]> wrote:\n\n> Is php connecting through tcp whilst pgadmin is using unix domain socket?\n> Probably the query time is the same, but returning the result over tcp\n> will be slower.\n>\n>\n> On 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query\n>> that i'm trying to execute is faster on PgAdmin app.\n>>\n>> SELECT title, ts_rank_cd(vector, query) AS rank FROM links,\n>> to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\n>> DESC;\n>>\n>> I'm not sure, what can i do to increase the speed of execution from php:\n>>\n>> $start_time = microtime(true);\n>> $query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM\n>> links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY\n>> rank DESC LIMIT 10;\";\n>> $result = pg_query($connection, $query);\n>> $end_time = microtime(true);\n>>\n>> persistant connections are enabled in php.ini but i calculate only\n>> execution time from start to end.\n>>\n>> Thanks, Emrah.\n>>\n>> --\n>> Best regards, Emrah Mehmedov\n>> Software Developer @ X3M Labs\n>> http://www.extreme-labs.com\n>>\n>\n>\n\n\n-- \nBest regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\nCan we modify php connection?On Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]> wrote:\nIs php connecting through tcp whilst pgadmin is using unix domain socket? Probably the query time is the same, but returning the result over tcp will be slower.\n\nOn 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\nHello,I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that i'm trying to execute is faster on PgAdmin app.SELECT title, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC;\nI'm not sure, what can i do to increase the speed of execution from php:$start_time = microtime(true);$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC LIMIT 10;\";\n$result = pg_query($connection, $query);$end_time = microtime(true);persistant connections are enabled in php.ini but i calculate only execution time from start to end.\nThanks, Emrah.-- Best regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Wed, 5 Jun 2013 13:02:31 +0200",
"msg_from": "Emrah Mehmedov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "Most probably. If your existing connection string specifies something like\n\"host=localhost port=5432 ...\" just remove the host and port parameters and\nphp will by default try to connect with unix domain socket.\n\n\nOn 5 June 2013 12:02, Emrah Mehmedov <[email protected]> wrote:\n\n> Can we modify php connection?\n>\n>\n> On Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]>wrote:\n>\n>> Is php connecting through tcp whilst pgadmin is using unix domain socket?\n>> Probably the query time is the same, but returning the result over tcp\n>> will be slower.\n>>\n>>\n>> On 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\n>>\n>>> Hello,\n>>>\n>>> I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query\n>>> that i'm trying to execute is faster on PgAdmin app.\n>>>\n>>> SELECT title, ts_rank_cd(vector, query) AS rank FROM links,\n>>> to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\n>>> DESC;\n>>>\n>>> I'm not sure, what can i do to increase the speed of execution from php:\n>>>\n>>> $start_time = microtime(true);\n>>> $query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM\n>>> links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY\n>>> rank DESC LIMIT 10;\";\n>>> $result = pg_query($connection, $query);\n>>> $end_time = microtime(true);\n>>>\n>>> persistant connections are enabled in php.ini but i calculate only\n>>> execution time from start to end.\n>>>\n>>> Thanks, Emrah.\n>>>\n>>> --\n>>> Best regards, Emrah Mehmedov\n>>> Software Developer @ X3M Labs\n>>> http://www.extreme-labs.com\n>>>\n>>\n>>\n>\n>\n> --\n> Best regards, Emrah Mehmedov\n> Software Developer @ X3M Labs\n> http://www.extreme-labs.com\n>\n\nMost probably. If your existing connection string specifies something like \"host=localhost port=5432 ...\" just remove the host and port parameters and php will by default try to connect with unix domain socket.\nOn 5 June 2013 12:02, Emrah Mehmedov <[email protected]> wrote:\nCan we modify php connection?\nOn Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]> wrote:\nIs php connecting through tcp whilst pgadmin is using unix domain socket? Probably the query time is the same, but returning the result over tcp will be slower.\n\nOn 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\nHello,I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that i'm trying to execute is faster on PgAdmin app.SELECT title, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC;\nI'm not sure, what can i do to increase the speed of execution from php:$start_time = microtime(true);$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC LIMIT 10;\";\n$result = pg_query($connection, $query);$end_time = microtime(true);persistant connections are enabled in php.ini but i calculate only execution time from start to end.\nThanks, Emrah.-- Best regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Wed, 5 Jun 2013 12:11:49 +0100",
"msg_from": "Bob Jolliffe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "I notice something in CMD, first time query is executing same time like\nfrom php all the time, but on the rest of the time that i will execute the\nquery is faster from cmd, php is keeping the same execution time.\ni also change the connection string (i remove host and port) and nothing\nchanged.\n\n\nOn Wed, Jun 5, 2013 at 1:11 PM, Bob Jolliffe <[email protected]> wrote:\n\n> Most probably. If your existing connection string specifies something\n> like \"host=localhost port=5432 ...\" just remove the host and port\n> parameters and php will by default try to connect with unix domain socket.\n>\n>\n> On 5 June 2013 12:02, Emrah Mehmedov <[email protected]> wrote:\n>\n>> Can we modify php connection?\n>>\n>>\n>> On Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]>wrote:\n>>\n>>> Is php connecting through tcp whilst pgadmin is using unix domain\n>>> socket? Probably the query time is the same, but returning the result over\n>>> tcp will be slower.\n>>>\n>>>\n>>> On 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\n>>>\n>>>> Hello,\n>>>>\n>>>> I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query\n>>>> that i'm trying to execute is faster on PgAdmin app.\n>>>>\n>>>> SELECT title, ts_rank_cd(vector, query) AS rank FROM links,\n>>>> to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\n>>>> DESC;\n>>>>\n>>>> I'm not sure, what can i do to increase the speed of execution from php:\n>>>>\n>>>> $start_time = microtime(true);\n>>>> $query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM\n>>>> links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY\n>>>> rank DESC LIMIT 10;\";\n>>>> $result = pg_query($connection, $query);\n>>>> $end_time = microtime(true);\n>>>>\n>>>> persistant connections are enabled in php.ini but i calculate only\n>>>> execution time from start to end.\n>>>>\n>>>> Thanks, Emrah.\n>>>>\n>>>> --\n>>>> Best regards, Emrah Mehmedov\n>>>> Software Developer @ X3M Labs\n>>>> http://www.extreme-labs.com\n>>>>\n>>>\n>>>\n>>\n>>\n>> --\n>> Best regards, Emrah Mehmedov\n>> Software Developer @ X3M Labs\n>> http://www.extreme-labs.com\n>>\n>\n>\n\n\n-- \nBest regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\nI notice something in CMD, first time query is executing same time like from php all the time, but on the rest of the time that i will execute the query is faster from cmd, php is keeping the same execution time.\ni also change the connection string (i remove host and port) and nothing changed.On Wed, Jun 5, 2013 at 1:11 PM, Bob Jolliffe <[email protected]> wrote:\nMost probably. If your existing connection string specifies something like \"host=localhost port=5432 ...\" just remove the host and port parameters and php will by default try to connect with unix domain socket.\n\nOn 5 June 2013 12:02, Emrah Mehmedov <[email protected]> wrote:\nCan we modify php connection?\nOn Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]> wrote:\nIs php connecting through tcp whilst pgadmin is using unix domain socket? 
Probably the query time is the same, but returning the result over tcp will be slower.\n\nOn 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\nHello,I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that i'm trying to execute is faster on PgAdmin app.SELECT title, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC;\nI'm not sure, what can i do to increase the speed of execution from php:$start_time = microtime(true);$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC LIMIT 10;\";\n$result = pg_query($connection, $query);$end_time = microtime(true);persistant connections are enabled in php.ini but i calculate only execution time from start to end.\nThanks, Emrah.-- Best regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Wed, 5 Jun 2013 13:15:43 +0200",
"msg_from": "Emrah Mehmedov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "Just rule out something weird;\n\nCan you determine if you are getting the same query plan in both cases? Use\n\"explain analyze\" from the command line and turn on auto.explain via\npostgresql.conf to log what the plan is for the php case.\n\n\nTom Kincaid\nEnterpriseDB\nwww.enterprisedb.com\n\n\n\nOn Wed, Jun 5, 2013 at 7:15 AM, Emrah Mehmedov\n<[email protected]>wrote:\n\n> I notice something in CMD, first time query is executing same time like\n> from php all the time, but on the rest of the time that i will execute the\n> query is faster from cmd, php is keeping the same execution time.\n> i also change the connection string (i remove host and port) and nothing\n> changed.\n>\n>\n> On Wed, Jun 5, 2013 at 1:11 PM, Bob Jolliffe <[email protected]>wrote:\n>\n>> Most probably. If your existing connection string specifies something\n>> like \"host=localhost port=5432 ...\" just remove the host and port\n>> parameters and php will by default try to connect with unix domain socket.\n>>\n>>\n>> On 5 June 2013 12:02, Emrah Mehmedov <[email protected]> wrote:\n>>\n>>> Can we modify php connection?\n>>>\n>>>\n>>> On Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]>wrote:\n>>>\n>>>> Is php connecting through tcp whilst pgadmin is using unix domain\n>>>> socket? Probably the query time is the same, but returning the result over\n>>>> tcp will be slower.\n>>>>\n>>>>\n>>>> On 5 June 2013 11:18, Emrah Mehmedov <[email protected]>wrote:\n>>>>\n>>>>> Hello,\n>>>>>\n>>>>> I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query\n>>>>> that i'm trying to execute is faster on PgAdmin app.\n>>>>>\n>>>>> SELECT title, ts_rank_cd(vector, query) AS rank FROM links,\n>>>>> to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank\n>>>>> DESC;\n>>>>>\n>>>>> I'm not sure, what can i do to increase the speed of execution from\n>>>>> php:\n>>>>>\n>>>>> $start_time = microtime(true);\n>>>>> $query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM\n>>>>> links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY\n>>>>> rank DESC LIMIT 10;\";\n>>>>> $result = pg_query($connection, $query);\n>>>>> $end_time = microtime(true);\n>>>>>\n>>>>> persistant connections are enabled in php.ini but i calculate only\n>>>>> execution time from start to end.\n>>>>>\n>>>>> Thanks, Emrah.\n>>>>>\n>>>>> --\n>>>>> Best regards, Emrah Mehmedov\n>>>>> Software Developer @ X3M Labs\n>>>>> http://www.extreme-labs.com\n>>>>>\n>>>>\n>>>>\n>>>\n>>>\n>>> --\n>>> Best regards, Emrah Mehmedov\n>>> Software Developer @ X3M Labs\n>>> http://www.extreme-labs.com\n>>>\n>>\n>>\n>\n>\n> --\n> Best regards, Emrah Mehmedov\n> Software Developer @ X3M Labs\n> http://www.extreme-labs.com\n>\n\n\n\n-- \nThomas John\n\nJust rule out something weird;Can you determine if you are getting the same query plan in both cases? Use \"explain analyze\" from the command line and turn on auto.explain via postgresql.conf to log what the plan is for the php case.\nTom KincaidEnterpriseDBwww.enterprisedb.com\nOn Wed, Jun 5, 2013 at 7:15 AM, Emrah Mehmedov <[email protected]> wrote:\nI notice something in CMD, first time query is executing same time like from php all the time, but on the rest of the time that i will execute the query is faster from cmd, php is keeping the same execution time.\n\ni also change the connection string (i remove host and port) and nothing changed.On Wed, Jun 5, 2013 at 1:11 PM, Bob Jolliffe <[email protected]> wrote:\nMost probably. 
If your existing connection string specifies something like \"host=localhost port=5432 ...\" just remove the host and port parameters and php will by default try to connect with unix domain socket.\n\nOn 5 June 2013 12:02, Emrah Mehmedov <[email protected]> wrote:\nCan we modify php connection?\nOn Wed, Jun 5, 2013 at 1:01 PM, Bob Jolliffe <[email protected]> wrote:\nIs php connecting through tcp whilst pgadmin is using unix domain socket? Probably the query time is the same, but returning the result over tcp will be slower.\n\nOn 5 June 2013 11:18, Emrah Mehmedov <[email protected]> wrote:\nHello,I'm using php5.4.12 with extension=php_pgsql.dll enabled but the query that i'm trying to execute is faster on PgAdmin app.SELECT title, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC;\nI'm not sure, what can i do to increase the speed of execution from php:$start_time = microtime(true);$query = \"SELECT title, url, ts_rank_cd(vector, query) AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@ vector ORDER BY rank DESC LIMIT 10;\";\n$result = pg_query($connection, $query);$end_time = microtime(true);persistant connections are enabled in php.ini but i calculate only execution time from start to end.\nThanks, Emrah.-- Best regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\n-- Thomas John",
"msg_date": "Sun, 23 Jun 2013 13:35:15 -0400",
"msg_from": "Tom Kincaid <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "On Wed, Jun 5, 2013 at 1:15 PM, Emrah Mehmedov\n<[email protected]> wrote:\n> [ull text search]\n> I notice something in CMD, first time query is executing same time like from\n> php all the time, but on the rest of the time that i will execute the query\n> is faster from cmd, php is keeping the same execution time.\n> i also change the connection string (i remove host and port) and nothing\n> changed.\n\nThe first query using a text search config loads the dictionaries, so\nit is slower, and that's why the following queries are faster. I think\nyour PHP persistent connections don't work too well.\n\nRegards\nMarcin Mańk\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 23 Jun 2013 21:57:53 +0200",
"msg_from": "=?UTF-8?B?TWFyY2luIE1hxYRr?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
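A minimal sketch of the workaround Marcin's explanation suggests for persistent PHP connections: pay the dictionary-load cost once per backend with a throwaway query, so real searches on that connection never hit it (the 'english' configuration is the one used in the thread):

-- Run once right after (re)connecting, e.g. immediately after pg_pconnect() returns:
-- the first to_tsquery('english', ...) call loads the dictionaries into this backend,
-- and subsequent full-text queries on the same connection skip that cost.
SELECT to_tsquery('english', 'warmup');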
{
"msg_contents": "Marcin: This heppens everytime when i try to query different keyword in FTS\nfor example:\n\nfirsttime: query: \"Hello & World\" 15sec~\nsecondtime: query: \"Hello & World\" 2-3sec\n\nthen new query\n\nfirsttime: query: \"We & are & good\" 10sec~\nsecondtime: query: \"We & are & good\" 2-3sec\n\neven if i'm going from CMD queries are faster but on first time always it's\nslower then rest of the times.\n\n\n\nOn Sun, Jun 23, 2013 at 9:57 PM, Marcin Mańk <[email protected]> wrote:\n\n> On Wed, Jun 5, 2013 at 1:15 PM, Emrah Mehmedov\n> <[email protected]> wrote:\n> > [ull text search]\n> > I notice something in CMD, first time query is executing same time like\n> from\n> > php all the time, but on the rest of the time that i will execute the\n> query\n> > is faster from cmd, php is keeping the same execution time.\n> > i also change the connection string (i remove host and port) and nothing\n> > changed.\n>\n> The first query using a text search config loads the dictionaries, so\n> it is slower, and that's why the following queries are faster. I think\n> your PHP persistent connections don't work too well.\n>\n> Regards\n> Marcin Mańk\n>\n\n\n\n-- \nBest regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\nMarcin: This heppens everytime when i try to query different keyword in FTS for example:firsttime: query: \"Hello & World\" 15sec~secondtime: query: \"Hello & World\" 2-3sec\nthen new query firsttime: query: \"We & are & good\" 10sec~secondtime: query: \"We & are & good\" 2-3sec\neven if i'm going from CMD queries are faster but on first time always it's slower then rest of the times.\nOn Sun, Jun 23, 2013 at 9:57 PM, Marcin Mańk <[email protected]> wrote:\nOn Wed, Jun 5, 2013 at 1:15 PM, Emrah Mehmedov\n<[email protected]> wrote:\n> [ull text search]\n> I notice something in CMD, first time query is executing same time like from\n> php all the time, but on the rest of the time that i will execute the query\n> is faster from cmd, php is keeping the same execution time.\n> i also change the connection string (i remove host and port) and nothing\n> changed.\n\nThe first query using a text search config loads the dictionaries, so\nit is slower, and that's why the following queries are faster. I think\nyour PHP persistent connections don't work too well.\n\nRegards\nMarcin Mańk\n-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Mon, 24 Jun 2013 11:55:04 +0200",
"msg_from": "Emrah Mehmedov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "On Mon, Jun 24, 2013 at 11:55 AM, Emrah Mehmedov\n<[email protected]> wrote:\n> Marcin: This heppens everytime when i try to query different keyword in FTS\n> for example:\n>\n> firsttime: query: \"Hello & World\" 15sec~\n> secondtime: query: \"Hello & World\" 2-3sec\n>\n> then new query\n>\n> firsttime: query: \"We & are & good\" 10sec~\n> secondtime: query: \"We & are & good\" 2-3sec\n>\nNow it looks like Postgres is fetching data from disk on first query\nrun, the second time it is from cache, so faster. Try:\n\nEXPLAIN(ANALYZE, BUFFERS) SELECT title, url, ts_rank_cd(vector, query)\nAS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@\nvector ORDER BY rank DESC LIMIT 10;\n\nwith varying queries, and post the results. This will show how many\nblocks are are read from shared buffers, and how many are read from\nthe OS(either from OS disk cache, or the actual disk).\n\nRegards\nMarcin Mańk\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Jun 2013 12:04:57 +0200",
"msg_from": "=?UTF-8?B?TWFyY2luIE1hxYRr?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
},
{
"msg_contents": "On Mon, Jun 24, 2013 at 12:04 PM, Marcin Mańk <[email protected]> wrote:\n\n> On Mon, Jun 24, 2013 at 11:55 AM, Emrah Mehmedov\n> <[email protected]> wrote:\n> > Marcin: This heppens everytime when i try to query different keyword in\n> FTS\n> > for example:\n> >\n> > firsttime: query: \"Hello & World\" 15sec~\n> > secondtime: query: \"Hello & World\" 2-3sec\n> >\n> > then new query\n> >\n> > firsttime: query: \"We & are & good\" 10sec~\n> > secondtime: query: \"We & are & good\" 2-3sec\n> >\n> Now it looks like Postgres is fetching data from disk on first query\n> run, the second time it is from cache, so faster. Try:\n>\n> EXPLAIN(ANALYZE, BUFFERS) SELECT title, url, ts_rank_cd(vector, query)\n> AS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@\n> vector ORDER BY rank DESC LIMIT 10;\n>\n> with varying queries, and post the results. This will show how many\n> blocks are are read from shared buffers, and how many are read from\n> the OS(either from OS disk cache, or the actual disk).\n>\n> Regards\n> Marcin Mańk\n>\n\nHi Marcin Mańk,\n\ni run the query with analyze and explain and the time is pretty same as i\ncalculate in php code, solution is to improve query or FTS dictionaries.\n\nThank you.\n-- \nBest regards, Emrah Mehmedov\nSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com\n\nOn Mon, Jun 24, 2013 at 12:04 PM, Marcin Mańk <[email protected]> wrote:\nOn Mon, Jun 24, 2013 at 11:55 AM, Emrah Mehmedov\n\n<[email protected]> wrote:\n> Marcin: This heppens everytime when i try to query different keyword in FTS\n> for example:\n>\n> firsttime: query: \"Hello & World\" 15sec~\n> secondtime: query: \"Hello & World\" 2-3sec\n>\n> then new query\n>\n> firsttime: query: \"We & are & good\" 10sec~\n> secondtime: query: \"We & are & good\" 2-3sec\n>\nNow it looks like Postgres is fetching data from disk on first query\nrun, the second time it is from cache, so faster. Try:\n\nEXPLAIN(ANALYZE, BUFFERS) SELECT title, url, ts_rank_cd(vector, query)\nAS rank FROM links, to_tsquery('english', 'risk') query WHERE query @@\nvector ORDER BY rank DESC LIMIT 10;\n\nwith varying queries, and post the results. This will show how many\nblocks are are read from shared buffers, and how many are read from\nthe OS(either from OS disk cache, or the actual disk).\n\nRegards\nMarcin Mańk\nHi Marcin Mańk,i run the query with analyze and explain and the time is pretty same as i calculate in php code, solution is to improve query or FTS dictionaries.\nThank you.-- Best regards, Emrah MehmedovSoftware Developer @ X3M Labs\nhttp://www.extreme-labs.com",
"msg_date": "Tue, 25 Jun 2013 12:15:00 +0200",
"msg_from": "Emrah Mehmedov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PHP Postgres query slower then PgAdmin"
}
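On the "improve query" side of the conclusion above, a common step (not discussed in the thread, and assuming links.vector is a stored tsvector column that is not yet indexed) is a GIN index so the @@ match does not have to scan the whole links table:

-- Hypothetical: index the tsvector column used by the search.
CREATE INDEX idx_links_vector ON links USING gin (vector);

-- Then re-check with the same query; for selective terms a Bitmap Index Scan
-- on idx_links_vector should replace the sequential scan on links.
EXPLAIN (ANALYZE, BUFFERS)
SELECT title, url, ts_rank_cd(vector, query) AS rank
FROM links, to_tsquery('english', 'risk') query
WHERE query @@ vector
ORDER BY rank DESC
LIMIT 10;

If the column is an expression rather than a stored tsvector, the index would need to be built on that expression instead.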
] |
[
{
"msg_contents": "Hello,\n\n\nWe have a strange issue related to a prepared statement.\n\n\nWe have two equals queries where the sole difference is in the limit.\n- The first is hard coded with limit 500.\n- The second is prepared with limit $1 ($1 is bound to 500).\n\n\nPostgreSQL give us two different plans with a huge execution time for the\nprepared query:\n\n\n-----------------------------------------------------------------------------------------------------------------------\n2- Static Query\n-----------------------------------------------------------------------------------------------------------------------\nexplain analyze\nselect *\nfrom dm2_lignecommandevente lignecomma0_\ninner join dm2_lignedocumentcommercialvente lignecomma0_1_ on\nlignecomma0_.id=lignecomma0_1_.id\ninner join dm1_lignedocumentcommercial lignecomma0_2_ on\nlignecomma0_.id=lignecomma0_2_.id\nwhere (lignecomma0_.id not like 'DefaultRecord_%') and\n(lignecomma0_2_.dateFinValidite is null)\norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc\nlimit 500\n\n\n-------------------\nStatic query plan\n-------------------\nLimit (cost=0.00..12165.11 rows=500 width=909) (actual time=73.477..90.256\nrows=500 loops=1)\n -> Nested Loop (cost=0.00..11241165.90 rows=462025 width=909) (actual\ntime=73.475..90.164 rows=500 loops=1)\n -> Nested Loop (cost=0.00..9086881.29 rows=462025 width=852)\n(actual time=4.105..11.749 rows=500 loops=1)\n -> Index Scan Backward using\nx_dm1_lignedocumentcommercial_14 on dm1_lignedocumentcommercial\nlignecomma0_2_ (cost=0.00..2744783.31 rows=1652194 width=541) (actual\ntime=0.017..1.374 rows=1944 loops=1)\n Filter: (datefinvalidite IS NULL)\n -> Index Scan using dm2_lignecommandevente_pkey on\ndm2_lignecommandevente lignecomma0_ (cost=0.00..3.83 rows=1 width=311)\n(actual time=0.004..0.004 rows=0 loops=1944)\n Index Cond: ((lignecomma0_.id)::text =\n(lignecomma0_2_.id)::text)\n Filter: ((lignecomma0_.id)::text !~~\n'DefaultRecord_%'::text)\n -> Index Scan using dm2_lignedocumentcommercialvente_pkey on\ndm2_lignedocumentcommercialvente lignecomma0_1_ (cost=0.00..4.40 rows=1\nwidth=57) (actual time=0.005..0.005 rows=1 loops=500)\n Index Cond: ((lignecomma0_1_.id)::text =\n(lignecomma0_.id)::text)\nTotal runtime: 90.572 ms\n\n\n-----------------------------------------------------------------------------------------------------------------------\n2- Prepared Query\n------------------------------------------------------------\n-----------------------------------------------------------\nPREPARE query(int) AS\nselect *\n from dm2_lignecommandevente lignecomma0_\ninner join dm2_lignedocumentcommercialvente lignecomma0_1_ on\nlignecomma0_.id=lignecomma0_1_.id\ninner join dm1_lignedocumentcommercial lignecomma0_2_ on\nlignecomma0_.id=lignecomma0_2_.id\nwhere (lignecomma0_.id not like 'DefaultRecord_%')\n and (lignecomma0_2_.dateFinValidite is null)\norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc\nlimit $1;\n\nexplain analyze\nexecute query(500);\n\n\n-------------------\nPrepared query plan\n-------------------\nLimit (cost=879927.25..880042.76 rows=46202 width=909) (actual\ntime=69609.593..69609.642 rows=500 loops=1)\n -> Sort (cost=879927.25..881082.32 rows=462025 width=909) (actual\ntime=69609.588..69609.610 rows=500 loops=1)\n Sort Key: (coalescedate(lignecomma0_2_.datecreationsysteme))\n Sort Method: top-N heapsort Memory: 498kB\n -> Hash Join (cost=164702.90..651691.22 rows=462025 width=909)\n(actual time=7786.467..68148.530 rows=470294 loops=1)\n Hash Cond: 
((lignecomma0_2_.id)::text =\n(lignecomma0_.id)::text)\n -> Seq Scan on dm1_lignedocumentcommercial lignecomma0_2_\n (cost=0.00..102742.36 rows=1652194 width=541) (actual\ntime=0.009..50840.692 rows=1650554 loops=1)\n Filter: (datefinvalidite IS NULL)\n -> Hash (cost=136181.67..136181.67 rows=472579 width=368)\n(actual time=7681.787..7681.787 rows=472625 loops=1)\n -> Hash Join (cost=40690.06..136181.67 rows=472579\nwidth=368) (actual time=986.580..7090.877 rows=472625 loops=1)\n Hash Cond: ((lignecomma0_1_.id)::text =\n(lignecomma0_.id)::text)\n -> Seq Scan on dm2_lignedocumentcommercialvente\nlignecomma0_1_ (cost=0.00..29881.18 rows=1431818 width=57) (actual\ntime=14.401..2288.869 rows=1431818 loops=1)\n -> Hash (cost=15398.83..15398.83 rows=472579\nwidth=311) (actual time=967.209..967.209 rows=472625 loops=1)\n -> Seq Scan on dm2_lignecommandevente\nlignecomma0_ (cost=0.00..15398.83 rows=472579 width=311) (actual\ntime=18.154..662.185 rows=472625 loops=1)\n Filter: ((id)::text !~~\n'DefaultRecord_%'::text)\nTotal runtime: 69612.191 ms\n-----------------------------------------------------------------------------------------------------------------------\n\n\nWe saw that both folowing queries give the same plan :\n\n - Static query with limit 500 removed\n\nexplain analyze\n\nselect *\n\nfrom dm2_lignecommandevente lignecomma0_\n\ninner join dm2_lignedocumentcommercialvente lignecomma0_1_ on\nlignecomma0_.id=lignecomma0_1_.id\n\ninner join dm1_lignedocumentcommercial lignecomma0_2_ on\nlignecomma0_.id=lignecomma0_2_.id\n\nwhere (lignecomma0_.id not like 'DefaultRecord_%') and\n(lignecomma0_2_.dateFinValidite is null)\n\norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc\n\n\n\n - The bad prepared query\n\nPREPARE query(int) AS\n\nselect *\n\n from dm2_lignecommandevente lignecomma0_\n\ninner join dm2_lignedocumentcommercialvente lignecomma0_1_ on\nlignecomma0_.id=lignecomma0_1_.id\n\ninner join dm1_lignedocumentcommercial lignecomma0_2_ on\nlignecomma0_.id=lignecomma0_2_.id\n\nwhere (lignecomma0_.id not like 'DefaultRecord_%')\n\n and (lignecomma0_2_.dateFinValidite is null)\n\norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc\n\nlimit $1;\n\n\nexplain analyze\n\nexecute query(500);\n\n\n\n\nWe met the same behaviour with both :\n- PostgreSQL 8.4.8 on Windows 2008 (Prod)\n- PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n\n\nI hope someone has any idea.\n\n\n*Ghislain ROUVIGNAC*\n\nHello,\nWe have a strange issue related to a prepared statement.\nWe have two equals queries where the sole difference is in the limit.\n- The first is hard coded with limit 500.- The second is prepared with limit $1 ($1 is bound to 500).\n\nPostgreSQL give us two different plans with a huge execution time for the prepared query:\n-----------------------------------------------------------------------------------------------------------------------\n2- Static Query-----------------------------------------------------------------------------------------------------------------------\nexplain analyzeselect *\nfrom dm2_lignecommandevente lignecomma0_ inner join dm2_lignedocumentcommercialvente lignecomma0_1_ on lignecomma0_.id=lignecomma0_1_.id \n inner join dm1_lignedocumentcommercial lignecomma0_2_ on lignecomma0_.id=lignecomma0_2_.id \nwhere (lignecomma0_.id not like 'DefaultRecord_%') and (lignecomma0_2_.dateFinValidite is null) \norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc limit 500\n\n-------------------Static query plan\n-------------------Limit 
(cost=0.00..12165.11 rows=500 width=909) (actual time=73.477..90.256 rows=500 loops=1)\n -> Nested Loop (cost=0.00..11241165.90 rows=462025 width=909) (actual time=73.475..90.164 rows=500 loops=1)\n -> Nested Loop (cost=0.00..9086881.29 rows=462025 width=852) (actual time=4.105..11.749 rows=500 loops=1)\n -> Index Scan Backward using x_dm1_lignedocumentcommercial_14 on dm1_lignedocumentcommercial lignecomma0_2_ (cost=0.00..2744783.31 rows=1652194 width=541) (actual time=0.017..1.374 rows=1944 loops=1)\n Filter: (datefinvalidite IS NULL) -> Index Scan using dm2_lignecommandevente_pkey on dm2_lignecommandevente lignecomma0_ (cost=0.00..3.83 rows=1 width=311) (actual time=0.004..0.004 rows=0 loops=1944)\n Index Cond: ((lignecomma0_.id)::text = (lignecomma0_2_.id)::text)\n Filter: ((lignecomma0_.id)::text !~~ 'DefaultRecord_%'::text) -> Index Scan using dm2_lignedocumentcommercialvente_pkey on dm2_lignedocumentcommercialvente lignecomma0_1_ (cost=0.00..4.40 rows=1 width=57) (actual time=0.005..0.005 rows=1 loops=500)\n Index Cond: ((lignecomma0_1_.id)::text = (lignecomma0_.id)::text)\nTotal runtime: 90.572 ms\n-----------------------------------------------------------------------------------------------------------------------\n2- Prepared Query-----------------------------------------------------------------------------------------------------------------------\nPREPARE query(int) AS select *\n from dm2_lignecommandevente lignecomma0_ inner join dm2_lignedocumentcommercialvente lignecomma0_1_ on lignecomma0_.id=lignecomma0_1_.id \n inner join dm1_lignedocumentcommercial lignecomma0_2_ on lignecomma0_.id=lignecomma0_2_.id \nwhere (lignecomma0_.id not like 'DefaultRecord_%') and (lignecomma0_2_.dateFinValidite is null)\norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc limit $1;\nexplain analyze\nexecute query(500);\n-------------------\nPrepared query plan-------------------\nLimit (cost=879927.25..880042.76 rows=46202 width=909) (actual time=69609.593..69609.642 rows=500 loops=1)\n -> Sort (cost=879927.25..881082.32 rows=462025 width=909) (actual time=69609.588..69609.610 rows=500 loops=1)\n Sort Key: (coalescedate(lignecomma0_2_.datecreationsysteme)) Sort Method: top-N heapsort Memory: 498kB\n -> Hash Join (cost=164702.90..651691.22 rows=462025 width=909) (actual time=7786.467..68148.530 rows=470294 loops=1)\n Hash Cond: ((lignecomma0_2_.id)::text = (lignecomma0_.id)::text)\n -> Seq Scan on dm1_lignedocumentcommercial lignecomma0_2_ (cost=0.00..102742.36 rows=1652194 width=541) (actual time=0.009..50840.692 rows=1650554 loops=1)\n Filter: (datefinvalidite IS NULL) -> Hash (cost=136181.67..136181.67 rows=472579 width=368) (actual time=7681.787..7681.787 rows=472625 loops=1)\n -> Hash Join (cost=40690.06..136181.67 rows=472579 width=368) (actual time=986.580..7090.877 rows=472625 loops=1)\n Hash Cond: ((lignecomma0_1_.id)::text = (lignecomma0_.id)::text)\n -> Seq Scan on dm2_lignedocumentcommercialvente lignecomma0_1_ (cost=0.00..29881.18 rows=1431818 width=57) (actual time=14.401..2288.869 rows=1431818 loops=1)\n -> Hash (cost=15398.83..15398.83 rows=472579 width=311) (actual time=967.209..967.209 rows=472625 loops=1)\n -> Seq Scan on dm2_lignecommandevente lignecomma0_ (cost=0.00..15398.83 rows=472579 width=311) (actual time=18.154..662.185 rows=472625 loops=1)\n Filter: ((id)::text !~~ 'DefaultRecord_%'::text)\nTotal runtime: 69612.191 ms-----------------------------------------------------------------------------------------------------------------------\n\nWe saw that both 
folowing queries give the same plan :Static query with limit 500 removed\nexplain analyze\nselect *\nfrom dm2_lignecommandevente lignecomma0_ \n inner join dm2_lignedocumentcommercialvente lignecomma0_1_ on lignecomma0_.id=lignecomma0_1_.id \n inner join dm1_lignedocumentcommercial lignecomma0_2_ on lignecomma0_.id=lignecomma0_2_.id \nwhere (lignecomma0_.id not like 'DefaultRecord_%') and (lignecomma0_2_.dateFinValidite is null) \norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc \nThe bad prepared query\nPREPARE query(int) AS \nselect *\n from dm2_lignecommandevente lignecomma0_ \n inner join dm2_lignedocumentcommercialvente lignecomma0_1_ on lignecomma0_.id=lignecomma0_1_.id \n inner join dm1_lignedocumentcommercial lignecomma0_2_ on lignecomma0_.id=lignecomma0_2_.id \nwhere (lignecomma0_.id not like 'DefaultRecord_%')\n and (lignecomma0_2_.dateFinValidite is null)\norder by coalescedate(lignecomma0_2_.dateCreationSysteme) desc \nlimit $1;\n\nexplain analyze\nexecute query(500);\n\nWe met the same behaviour with both :\n- PostgreSQL 8.4.8 on Windows 2008 (Prod)- PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n\nI hope someone has any idea.\nGhislain ROUVIGNAC",
"msg_date": "Thu, 6 Jun 2013 10:25:31 +0200",
"msg_from": "Ghislain ROUVIGNAC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Not same plan between static and prepared query"
},
{
"msg_contents": "\nOn Thursday, June 06, 2013 1:56 PM Ghislain ROUVIGNAC wrote:\n> Hello,\n\n\n> We have a strange issue related to a prepared statement.\n\n\n> We have two equals queries where the sole difference is in the limit.\n> - The first is hard coded with limit 500.\n> - The second is prepared with limit $1 ($1 is bound to 500).\n\n\n> PostgreSQL give us two different plans with a huge execution time for the\nprepared query:\n\nIt can generate different plan for prepared query, because optimizer uses\ndefault selectivity in case of bound parameters (in your case limit $1).\n\n\n> We met the same behaviour with both :\n> - PostgreSQL 8.4.8 on Windows 2008 (Prod)\n> - PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n\n From PostgreSQL 9.2, it generates plan for prepared query during execution\n(Execute command) as well.\nSo I think you will not face this problem in PostgreSQL 9.2 and above.\n\nWith Regards,\nAmit Kapila.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 6 Jun 2013 16:10:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not same plan between static and prepared query"
},
{
"msg_contents": "Amit,\nIt's very strength for me to hear that PostgreSQL generate execution plan for prepared statements during execution, I always was thinking that the purpose of the prepared statement is to eliminate such behavior. Can it lead to some performance degradation in case of heavy \"update batch\", that can run for millions of different values? Is it some way to give some kind of query hint that will eliminate execution path recalculations during heavy updates and instruct regarding correct execution plan?\n\nSincerely yours,\n\n\nYuri Levinsky, DBA\nCelltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\nMobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Amit Kapila\nSent: Thursday, June 06, 2013 1:41 PM\nTo: 'Ghislain ROUVIGNAC'; [email protected]\nSubject: Re: [PERFORM] Not same plan between static and prepared query\n\n\nOn Thursday, June 06, 2013 1:56 PM Ghislain ROUVIGNAC wrote:\n> Hello,\n\n\n> We have a strange issue related to a prepared statement.\n\n\n> We have two equals queries where the sole difference is in the limit.\n> - The first is hard coded with limit 500.\n> - The second is prepared with limit $1 ($1 is bound to 500).\n\n\n> PostgreSQL give us two different plans with a huge execution time for \n> the\nprepared query:\n\nIt can generate different plan for prepared query, because optimizer uses default selectivity in case of bound parameters (in your case limit $1).\n\n\n> We met the same behaviour with both :\n> - PostgreSQL 8.4.8 on Windows 2008 (Prod)\n> - PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n\n From PostgreSQL 9.2, it generates plan for prepared query during execution (Execute command) as well.\nSo I think you will not face this problem in PostgreSQL 9.2 and above.\n\nWith Regards,\nAmit Kapila.\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nThis mail was received via Mail-SeCure System.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 9 Jun 2013 18:14:33 +0300",
"msg_from": "\"Yuri Levinsky\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not same plan between static and prepared query"
},
{
"msg_contents": "Yuri Levinsky wrote\n>> We have two equals queries where the sole difference is in the limit.\n>> - The first is hard coded with limit 500.\n>> - The second is prepared with limit $1 ($1 is bound to 500).\n> \n> \n>> PostgreSQL give us two different plans with a huge execution time for \n>> the\n> prepared query:\n> \n> It can generate different plan for prepared query, because optimizer uses\n> default selectivity in case of bound parameters (in your case limit $1).\n> \n> \n>> We met the same behaviour with both :\n>> - PostgreSQL 8.4.8 on Windows 2008 (Prod)\n>> - PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n\nSo the planner knows it needs a limit in both cases yet for the second\nsituation it has no idea what the limit value will be. For a sufficiently\nlarge value of LIMIT it will conclude that a sequential scan will be optimal\nand so that is what the plan uses. However, knowing the limit is only going\nto be 500 it is able to conclude that an index scan will work better.\n\n\n> From PostgreSQL 9.2, it generates plan for prepared query during execution\n> (Execute command) as well.\n> So I think you will not face this problem in PostgreSQL 9.2 and above.\n\nSee:\n\nhttp://www.postgresql.org/docs/9.2/interactive/release-9-2.html\n\nSection E.5.3.1.3 (First Bullet)\n\nSomeone more knowledgeable than myself will need to comment on how the\nperformance impact was overcome but my guess is that update statements\nlikely avoid this behavior if the where clauses are equality conditions\nsince indexes (if available) are going to be the most efficient plan\nregardless of the specific values. Its when, in cases like this, the\nplanner knows the specific value of LIMIT will matter greatly that it is\ngoing to need to use a run-time plan. Whether during the PREPARE phase the\nplanner tags the resultant plan with some kind of \"allow runtime plan\" flag\nI do not know though so maybe the first few executions will always use\nrun-time plans and only after N executes does the cached plan come into\neffect.\n\nIts probably worth a search and read of the mailing list but I cannot do so\nat this moment.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Not-same-plan-between-static-and-prepared-query-tp5758115p5758516.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 9 Jun 2013 08:49:48 -0700 (PDT)",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not same plan between static and prepared query"
},
{
"msg_contents": "On Sunday, June 09, 2013 8:45 PM Yuri Levinsky wrote:\n> Amit,\n> It's very strength for me to hear that PostgreSQL generate execution\n> plan for prepared statements during execution, I always was thinking\n> that the purpose of the prepared statement is to eliminate such\n> behavior. \n\nIt doesn't always choose to generate a new plan, rather it is a calculative\ndecision.\nAs far as I understand, it generates custom plan (based on bound parameters)\nfor 5 times and then generates generic plan (not based on bound parameters),\nafter that it compares that if the cost of generic plan is less than 10%\nmore expensive than average custom plan, then it will choose generic plan.\n\n> Can it lead to some performance degradation in case of heavy\n> \"update batch\", that can run for millions of different values? \n\nIdeally it should not degrade performance.\nWhat kind of update you have and does the values used for execute can vary\nplan too much every time?\n\n> Is it\n> some way to give some kind of query hint that will eliminate execution\n> path recalculations during heavy updates and instruct regarding correct\n> execution plan?\n\nCurrently there doesn't exist any way to give any hint.\n \n> Sincerely yours,\n> \n> \n> Yuri Levinsky, DBA\n> Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\n> Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222\n> \n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Amit Kapila\n> Sent: Thursday, June 06, 2013 1:41 PM\n> To: 'Ghislain ROUVIGNAC'; [email protected]\n> Subject: Re: [PERFORM] Not same plan between static and prepared query\n> \n> \n> On Thursday, June 06, 2013 1:56 PM Ghislain ROUVIGNAC wrote:\n> > Hello,\n> \n> \n> > We have a strange issue related to a prepared statement.\n> \n> \n> > We have two equals queries where the sole difference is in the limit.\n> > - The first is hard coded with limit 500.\n> > - The second is prepared with limit $1 ($1 is bound to 500).\n> \n> \n> > PostgreSQL give us two different plans with a huge execution time for\n> > the\n> prepared query:\n> \n> It can generate different plan for prepared query, because optimizer\n> uses default selectivity in case of bound parameters (in your case\n> limit $1).\n> \n> \n> > We met the same behaviour with both :\n> > - PostgreSQL 8.4.8 on Windows 2008 (Prod)\n> > - PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n> \n> From PostgreSQL 9.2, it generates plan for prepared query during\n> execution (Execute command) as well.\n> So I think you will not face this problem in PostgreSQL 9.2 and above.\n> \n> With Regards,\n> Amit Kapila.\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> This mail was received via Mail-SeCure System.\n> \n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Jun 2013 14:32:20 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not same plan between static and prepared query"
},
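A minimal sketch of how the behaviour Amit describes can be observed on 9.2 and later, using a simplified form of the thread's query (the five-execution threshold is an implementation detail and may vary):

-- EXPLAIN EXECUTE shows the plan chosen for this particular execution.
-- Early executions are planned with the real parameter value (custom plans);
-- after several runs the planner may switch to a parameter-independent generic
-- plan if its estimated cost is close to the average custom-plan cost.
PREPARE q(int) AS
  SELECT *
  FROM dm1_lignedocumentcommercial
  WHERE datefinvalidite IS NULL
  ORDER BY coalescedate(datecreationsysteme) DESC
  LIMIT $1;

EXPLAIN ANALYZE EXECUTE q(500);  -- repeat a few times and compare the plans
DEALLOCATE q;

DEALLOCATE (or closing the session) drops the prepared statement so the experiment can be repeated from scratch.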
{
"msg_contents": "Hello Amit,\n\n\nThank you for your help.\n\n\nYou are right, it work fine with PostgreSQL 9.2.\n\n\n*Ghislain ROUVIGNAC*\n\n\n\n2013/6/6 Amit Kapila <[email protected]>\n\n>\n> On Thursday, June 06, 2013 1:56 PM Ghislain ROUVIGNAC wrote:\n> > Hello,\n>\n>\n> > We have a strange issue related to a prepared statement.\n>\n>\n> > We have two equals queries where the sole difference is in the limit.\n> > - The first is hard coded with limit 500.\n> > - The second is prepared with limit $1 ($1 is bound to 500).\n>\n>\n> > PostgreSQL give us two different plans with a huge execution time for the\n> prepared query:\n>\n> It can generate different plan for prepared query, because optimizer uses\n> default selectivity in case of bound parameters (in your case limit $1).\n>\n>\n> > We met the same behaviour with both :\n> > - PostgreSQL 8.4.8 on Windows 2008 (Prod)\n> > - PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n>\n> From PostgreSQL 9.2, it generates plan for prepared query during execution\n> (Execute command) as well.\n> So I think you will not face this problem in PostgreSQL 9.2 and above.\n>\n> With Regards,\n> Amit Kapila.\n>\n>\n\nHello Amit,Thank you for your help.You are right, it work fine with PostgreSQL 9.2.\nGhislain ROUVIGNAC\n2013/6/6 Amit Kapila <[email protected]>\n\nOn Thursday, June 06, 2013 1:56 PM Ghislain ROUVIGNAC wrote:\n> Hello,\n\n\n> We have a strange issue related to a prepared statement.\n\n\n> We have two equals queries where the sole difference is in the limit.\n> - The first is hard coded with limit 500.\n> - The second is prepared with limit $1 ($1 is bound to 500).\n\n\n> PostgreSQL give us two different plans with a huge execution time for the\nprepared query:\n\nIt can generate different plan for prepared query, because optimizer uses\ndefault selectivity in case of bound parameters (in your case limit $1).\n\n\n> We met the same behaviour with both :\n> - PostgreSQL 8.4.8 on Windows 2008 (Prod)\n> - PostgreSQL 8.4.8 and 8.4.17 on Windows 7 (Dev)\n\n>From PostgreSQL 9.2, it generates plan for prepared query during execution\n(Execute command) as well.\nSo I think you will not face this problem in PostgreSQL 9.2 and above.\n\nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 10 Jun 2013 15:58:55 +0200",
"msg_from": "Ghislain ROUVIGNAC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Not same plan between static and prepared query"
}
] |
[
{
"msg_contents": "Hi, My pg_xlog dir has been growing rapidly the last 4 days, and my disk is now almost full (1000Gb) even though the database is only 50Gb. I have a streaming replication server running, and in the log of the slave it says:\n\ncp: cannot stat `/var/lib/postgresql/9.2/wals/0000000200000E1B000000A9': No such file or directory\ncp: cannot stat `/var/lib/postgresql/9.2/wals/0000000200000E1B000000A9': No such file or directory\n2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server: could not connect to server: No route to host\n\t\tIs the server running on host \"192.168.0.4\" and accepting\n\t\tTCP/IP connections on port 5432?\n\nAll the time. \n\nI have tried to restart the server, but that didn't help. I checked the master, and the file /var/lib/postgresql/9.2/wals/0000000200000E1B000000A9 does not exist! I'm pretty lost here, can someone help me solve this and get my master server cleaned up. What is causing this, and what do I need to do?\n\nKind regards\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Jun 2013 13:29:46 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "URGENT issue: pg-xlog growing on master!"
},
{
"msg_contents": "> Hi, My pg_xlog dir has been growing rapidly the last 4 days, and my disk\n> is now almost full (1000Gb) even though the database is only 50Gb. I have a\n> streaming replication server running, and in the log of the slave it says:\n>\n> cp: cannot stat `/var/lib/postgresql/9.2/wals/0000000200000E1B000000A9':\n> No such file or directory\n> cp: cannot stat `/var/lib/postgresql/9.2/wals/0000000200000E1B000000A9':\n> No such file or directory\n> 2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server:\n> could not connect to server: No route to host\n> Is the server running on host \"192.168.0.4\" and accepting\n> TCP/IP connections on port 5432?\n>\n> All the time.\n>\n> I have tried to restart the server, but that didn't help. I checked the\n> master, and the file /var/lib/postgresql/9.2/wals/0000000200000E1B000000A9\n> does not exist! I'm pretty lost here, can someone help me solve this and\n> get my master server cleaned up. What is causing this, and what do I need\n> to do?\n>\n>\nIIRC, this kind of situation we may expect, when the archive command was\nfailed at master side. Could you verify, how many files\n\"000000xxxxxxx.ready\" reside under the master's pg_xlog/archive_status\ndirectory. And also, verify the master server's recent pg_log file, for\nfinding the root cause of the master server down issue.\n\n\nDinesh\n\n-- \n*Dinesh Kumar*\nSoftware Engineer\n\nPh: +918087463317\nSkype ID: dinesh.kumar432\nwww.enterprisedb.co\n<http://www.enterprisedb.com/>m<http://www.enterprisedb.com/>\n*\nFollow us on Twitter*\n@EnterpriseDB\n\nVisit EnterpriseDB for tutorials, webinars,\nwhitepapers<http://www.enterprisedb.com/resources-community> and\nmore <http://www.enterprisedb.com/resources-community>\n\n\n\nHi, My pg_xlog dir has been growing rapidly the last 4 days, and my disk is now almost full (1000Gb) even though the database is only 50Gb. I have a streaming replication server running, and in the log of the slave it says:\n\ncp: cannot stat `/var/lib/postgresql/9.2/wals/0000000200000E1B000000A9': No such file or directory\ncp: cannot stat `/var/lib/postgresql/9.2/wals/0000000200000E1B000000A9': No such file or directory\n2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server: could not connect to server: No route to host\n Is the server running on host \"192.168.0.4\" and accepting\n TCP/IP connections on port 5432?\n\nAll the time.\n\nI have tried to restart the server, but that didn't help. I checked the master, and the file /var/lib/postgresql/9.2/wals/0000000200000E1B000000A9 does not exist! I'm pretty lost here, can someone help me solve this and get my master server cleaned up. What is causing this, and what do I need to do?\nIIRC, this kind of situation we may expect, when the archive command was failed at master side. Could you verify, how many files \"000000xxxxxxx.ready\" reside under the master's pg_xlog/archive_status directory. And also, verify the master server's recent pg_log file, for finding the root cause of the master server down issue.\n\nDinesh-- Dinesh KumarSoftware Engineer\nPh: +918087463317Skype ID: dinesh.kumar432\nwww.enterprisedb.comFollow us on Twitter\n\n@EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers and more",
"msg_date": "Mon, 10 Jun 2013 17:17:38 +0530",
"msg_from": "Dinesh Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
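To follow Dinesh's suggestion concretely: each WAL segment waiting to be archived leaves a .ready marker behind, so counting those markers shows the size of the backlog, and the server log records why archive_command keeps failing. Paths are assumptions based on the Debian-style layout in this thread; the log location is a guess at the Debian/Ubuntu default.

    # segments still waiting for a successful archive_command
    ls /var/lib/postgresql/9.2/main/pg_xlog/archive_status/*.ready 2>/dev/null | wc -l

    # recent archiver errors
    tail -n 100 /var/log/postgresql/postgresql-9.2-main.log | grep -i archive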
{
"msg_contents": "On Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n>\n> 2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server:\n> could not connect to server: No route to host\n> Is the server running on host \"192.168.0.4\" and accepting\n> TCP/IP connections on port 5432?\n>\n\nDid anything get changed on the standby or master around the time this\nmessage started occurring?\nOn the master, what do the following show?\nshow port;\nshow listen_addresses;\n\nThe master's IP is still 192.168.0.4?\n\nHave you tried connecting to the master using something like:\npsql -h 192.168.0.4 -p 5432 -U postgres -d postgres\n\nDoes that throw a useful error or warning?\n\nOn Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\n2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server: could not connect to server: No route to host\n Is the server running on host \"192.168.0.4\" and accepting\n TCP/IP connections on port 5432?Did anything get changed on the standby or master around the time this message started occurring?On the master, what do the following show?\nshow port;show listen_addresses;The master's IP is still 192.168.0.4?Have you tried connecting to the master using something like:psql -h 192.168.0.4 -p 5432 -U postgres -d postgres\n Does that throw a useful error or warning?",
"msg_date": "Mon, 10 Jun 2013 07:36:03 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
{
"msg_contents": "Den 10/06/2013 kl. 16.36 skrev bricklen <[email protected]>:\n\n> On Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt <[email protected]> wrote:\n> \n> 2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server: could not connect to server: No route to host\n> Is the server running on host \"192.168.0.4\" and accepting\n> TCP/IP connections on port 5432?\n> \n> Did anything get changed on the standby or master around the time this message started occurring?\n> On the master, what do the following show?\n> show port;\n> show listen_addresses;\n> \n> The master's IP is still 192.168.0.4?\n> \n> Have you tried connecting to the master using something like:\n> psql -h 192.168.0.4 -p 5432 -U postgres -d postgres\n> \n> Does that throw a useful error or warning?\n> \n\nIt turned out that the switch port that the server was connected to was faulty, and hence no successful connection between master and slave was established. This resolved in pg_xlog building up very fast, because our system performs a lot of changes on the data we store. \n\nI ended up running pg_archivecleanup on the master to get some space freed urgently. Then I got the switch changed with a new one. Now I'm trying to the streaming replication setup from scratch again, but with no luck.\n\nI can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:\n\nNOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (120 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (240 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (480 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (960 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (1920 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\n\nWhen looking at ps aux on the master, I see the following:\n\npostgres 30930 0.0 0.0 98412 1632 ? 
Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\n\nThe file mentioned is the one that it was about to archive, when the standby server failed. Somehow it must still be trying to \"catch up\" from that file which of cause isn't there any more, since I had to remove those in order to get more space on the HDD. Instead of trying to catch up from the last succeeded file, I want it to start over from scratch with the replication - I just don't know how.\n\n\n\n\nDen 10/06/2013 kl. 16.36 skrev bricklen <[email protected]>:On Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\n2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server: could not connect to server: No route to host\n Is the server running on host \"192.168.0.4\" and accepting\n TCP/IP connections on port 5432?Did anything get changed on the standby or master around the time this message started occurring?On the master, what do the following show?\nshow port;show listen_addresses;The master's IP is still 192.168.0.4?Have you tried connecting to the master using something like:psql -h 192.168.0.4 -p 5432 -U postgres -d postgres\n Does that throw a useful error or warning?\nIt turned out that the switch port that the server was connected to was faulty, and hence no successful connection between master and slave was established. This resolved in pg_xlog building up very fast, because our system performs a lot of changes on the data we store. I ended up running pg_archivecleanup on the master to get some space freed urgently. Then I got the switch changed with a new one. Now I'm trying to the streaming replication setup from scratch again, but with no luck.I can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archivedWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (120 seconds elapsed)HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (240 seconds elapsed)HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (480 seconds elapsed)HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (960 seconds elapsed)HINT: Check that your archive_command is executing properly. 
pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (1920 seconds elapsed)HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.When looking at ps aux on the master, I see the following:postgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9The file mentioned is the one that it was about to archive, when the standby server failed. Somehow it must still be trying to \"catch up\" from that file which of cause isn't there any more, since I had to remove those in order to get more space on the HDD. Instead of trying to catch up from the last succeeded file, I want it to start over from scratch with the replication - I just don't know how.",
"msg_date": "Mon, 10 Jun 2013 17:35:41 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
{
"msg_contents": "On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> I can't seem to figure out which steps I need to do, to get the standby\n> server wiped and get it started as a streaming replication again from\n> scratch. I tried to follow the steps, from step 6, in here\n> http://wiki.postgresql.org/wiki/Streaming_Replication but the process\n> seems to fail when I reach the point where I try to do a psql -c \"SELECT\n> pg_stop_backup()\". It just says:\n>\n\n\n\nIf you use pg_basebackup you don't need to manually put the master into\nbackup mode.\nBe aware that if you are generating a lot of WAL segments and your\nfilesystem backup is large (and takes a while to ship to the slave), you\nwill need to set \"wal_keep_segments\" quite high on the master to prevent\nthe segments from disappearing during the setup of the slave -- or at least\nthat's the case when you use \"--xlog-method=stream\".\n\nOn Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\nI can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:\nIf you use pg_basebackup you don't need to manually put the master into backup mode.Be aware that if you are generating a lot of WAL segments and your filesystem backup is large (and takes a while to ship to the slave), you will need to set \"wal_keep_segments\" quite high on the master to prevent the segments from disappearing during the setup of the slave -- or at least that's the case when you use \"--xlog-method=stream\".",
"msg_date": "Mon, 10 Jun 2013 08:51:14 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
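A rough way to size wal_keep_segments, as a sketch only, is to measure how much WAL the master generates per minute; pg_xlog_location_diff() is available from 9.2 onward, and the 60-second window here is arbitrary.

    A=$(psql -At -c "SELECT pg_current_xlog_location()")
    sleep 60
    B=$(psql -At -c "SELECT pg_current_xlog_location()")
    # WAL generated in that minute, expressed as 16 MB segments
    psql -At -c "SELECT pg_xlog_location_diff('$B', '$A') / (16 * 1024 * 1024.0)"

Multiply that rate by the time the base backup is expected to take (plus a safety margin) to get a wal_keep_segments value that should keep pg_basebackup from losing segments mid-run.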
{
"msg_contents": "Den 10/06/2013 kl. 17.51 skrev bricklen <[email protected]>:\n\n> \n> On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\n> I can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:\n> \n> \n> \n> If you use pg_basebackup you don't need to manually put the master into backup mode.\n> Be aware that if you are generating a lot of WAL segments and your filesystem backup is large (and takes a while to ship to the slave), you will need to set \"wal_keep_segments\" quite high on the master to prevent the segments from disappearing during the setup of the slave -- or at least that's the case when you use \"--xlog-method=stream\".\n> \n\nOkay thanks,\nI did the base backup, and I ran the rsync command and it succeeded. However then I try to do pg_stop_backup() it just \"hangs\" and I have a feeling, that it's rather because of some information mismatch than actual loading time, since nothing is transferred to the slave and I keep on seeing that \"postgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\" in the process overview, and I know that exactly that file was the one it has been trying to sync ever since the connection dropped. I saw something in here http://postgresql.1045698.n5.nabble.com/safe-to-clear-pg-xlog-archive-status-directory-td5738029.html, about wiping the pg_xlog/archive_status directly in order to \"reset\" the sync between the servers before running the pg_backup_start(), but I'm unsure if it's right, and when I would do it…\n\n\nDen 10/06/2013 kl. 17.51 skrev bricklen <[email protected]>:On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\nI can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:\nIf you use pg_basebackup you don't need to manually put the master into backup mode.Be aware that if you are generating a lot of WAL segments and your filesystem backup is large (and takes a while to ship to the slave), you will need to set \"wal_keep_segments\" quite high on the master to prevent the segments from disappearing during the setup of the slave -- or at least that's the case when you use \"--xlog-method=stream\".\n\nOkay thanks,I did the base backup, and I ran the rsync command and it succeeded. However then I try to do pg_stop_backup() it just \"hangs\" and I have a feeling, that it's rather because of some information mismatch than actual loading time, since nothing is transferred to the slave and I keep on seeing that \"postgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\" in the process overview, and I know that exactly that file was the one it has been trying to sync ever since the connection dropped. 
I saw something in here http://postgresql.1045698.n5.nabble.com/safe-to-clear-pg-xlog-archive-status-directory-td5738029.html, about wiping the pg_xlog/archive_status directly in order to \"reset\" the sync between the servers before running the pg_backup_start(), but I'm unsure if it's right, and when I would do it…",
"msg_date": "Mon, 10 Jun 2013 18:03:14 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
{
"msg_contents": "On Mon, Jun 10, 2013 at 8:51 AM, bricklen <[email protected]> wrote:\n\n>\n> On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <\n> [email protected]> wrote:\n>\n>> I can't seem to figure out which steps I need to do, to get the standby\n>> server wiped and get it started as a streaming replication again from\n>> scratch. I tried to follow the steps, from step 6, in here\n>> http://wiki.postgresql.org/wiki/Streaming_Replication but the process\n>> seems to fail when I reach the point where I try to do a psql -c \"SELECT\n>> pg_stop_backup()\". It just says:\n>>\n>\n>\n> If you use pg_basebackup you don't need to manually put the master into\n> backup mode.\n> Be aware that if you are generating a lot of WAL segments and your\n> filesystem backup is large (and takes a while to ship to the slave), you\n> will need to set \"wal_keep_segments\" quite high on the master to prevent\n> the segments from disappearing during the setup of the slave -- or at least\n> that's the case when you use \"--xlog-method=stream\".\n>\n>\n\nFor what its worth, I took some notes when I set up Streaming Replication\nthe other day and the process worked for me. There might have been some\ntweaks here and there that I negelected to write down, but the gist of the\nsteps are below.\n\nIf anyone has any corrections, please chime in!\n\n\n##On the hot standby, create the staging directory to hold the master's log\nfiles\nmkdir /pgdata/WAL_Archive\nchown postgres:postgres /pgdata/WAL_Archive\n\n\n# master, $PGDATA/postgresql.conf\nwal_level = hot_standby\narchive_mode = on\n## /pgdata/WAL_Archive is a staging directory on the slave, outside of\n$PGDATA\narchive_command = 'rsync -W -a %p postgres@SLAVE_IP_HERE\n:/pgdata/WAL_Archive/'\nmax_wal_senders = 3\nwal_keep_segments = 10000 # if you have the room, to help the\npg_basebackup\n # not fail due to the WAL segment getting\nremoved from the master.\n\n\n## Modify the master $PGDATA/pg_hba.conf and enable the replication lines\nfor the IPs of the slaves.\n## Issue \"pg_ctl reload\" on the master after the changes have been made.\n# TYPE DATABASE USER ADDRESS METHOD\nhostssl replication replication SLAVE_IP_HERE/32 md5\n\n\n\n## On the hot standby, $PGDATA/postgresql.conf\nhot_standby = on #off # \"on\" allows queries during recovery\nmax_standby_archive_delay = 15min # max delay before canceling queries, set\nto hours if backups will be taken from here\nmax_standby_streaming_delay = 15min # max delay before canceling queries\nhot_standby_feedback = on #off\n\n\n\n## On the master, create the replication role, which will be replicated to\nthe slave via pg_basebackup\npsql -d postgres -c \"CREATE USER replication WITH replication ENCRYPTED\nPASSWORD 'CHANGEME' LOGIN\"\n\n\n## Restart the master, to pick up the changes to postgresql.conf\n\n\n## On the slave, from $HOME, issue the pg_basebackup command to start\nsetting up the hot standby from the master\n## --host=IP_OF_MASTER -> The master's IP\n## --pgdata=$PGDATA -> The slave's $PGDATA directory\n## -- xlog-method=stream -> Opens a second connection to the master to\nstream the WAL segments rather than pulling them all at the end\n## --password will prompt for the replication role's password\n\n## Without compression, \"stream\" gets the changes via the same method as\nStreaming Replication\ntime pg_basebackup --pgdata=$PGDATA --host=IP_OF_MASTER --port=5432\n--username=replication --password --xlog-method=stream --format=plain\n--progress --verbose\n\n-- Alternate version with compression\n#time 
pg_basebackup --pgdata=$PGDATA --host=IP_OF_MASTER --port=5432\n--username=replication --password --xlog --gzip --format=tar --progress\n--verbose\n\n\n\n\n##On the standby, create $PGDATA/recovery.conf:\nstandby_mode = on\n\n## To promote the slave to a live database, issue \"touch /tmp/promote_db\"\ntrigger_file = '/tmp/promote_db'\n\n## Host can be the master's IP or hostname\nprimary_conninfo = 'host=IP_OF_MASTER port=5432 user=replication\npassword=CHANGEME'\n\n## Log the standby WAL segments applied to a standby.log file\n## TODO: Add the standby.log to a log rotator\nrestore_command = 'cp /pgdata/WAL_Archive/%f \"%p\"\n2>>/pgdata/9.2/data/pg_log/standby.log'\n\n## XXX: If there are multiple slaves, do not use pg_archivecleanup (WAL\nsegments could be removed before being applied to other slaves)\narchive_cleanup_command = '/usr/pgsql-9.2/bin/pg_archivecleanup\n/pgdata/WAL_Archive %r'\n\n## On hot standby clusters, set to 'latest' to switch to the newest\ntimeline in the archive\nrecovery_target_timeline = 'latest'\n\nOn Mon, Jun 10, 2013 at 8:51 AM, bricklen <[email protected]> wrote:\nOn Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\nI can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:\nIf you use pg_basebackup you don't need to manually put the master into backup mode.Be aware that if you are generating a lot of WAL segments and your filesystem backup is large (and takes a while to ship to the slave), you will need to set \"wal_keep_segments\" quite high on the master to prevent the segments from disappearing during the setup of the slave -- or at least that's the case when you use \"--xlog-method=stream\".\nFor what its worth, I took some notes when I set up Streaming Replication the other day and the process worked for me. 
There might have been some tweaks here and there that I negelected to write down, but the gist of the steps are below.\nIf anyone has any corrections, please chime in!##On the hot standby, create the staging directory to hold the master's log filesmkdir /pgdata/WAL_Archivechown postgres:postgres /pgdata/WAL_Archive\n# master, $PGDATA/postgresql.confwal_level = hot_standbyarchive_mode = on## /pgdata/WAL_Archive is a staging directory on the slave, outside of $PGDATAarchive_command = 'rsync -W -a %p postgres@SLAVE_IP_HERE:/pgdata/WAL_Archive/'\nmax_wal_senders = 3wal_keep_segments = 10000 # if you have the room, to help the pg_basebackup # not fail due to the WAL segment getting removed from the master.## Modify the master $PGDATA/pg_hba.conf and enable the replication lines for the IPs of the slaves.\n## Issue \"pg_ctl reload\" on the master after the changes have been made.# TYPE DATABASE USER ADDRESS METHODhostssl replication replication SLAVE_IP_HERE/32 md5\n## On the hot standby, $PGDATA/postgresql.confhot_standby = on #off # \"on\" allows queries during recoverymax_standby_archive_delay = 15min # max delay before canceling queries, set to hours if backups will be taken from here\nmax_standby_streaming_delay = 15min # max delay before canceling querieshot_standby_feedback = on #off ## On the master, create the replication role, which will be replicated to the slave via pg_basebackup\npsql -d postgres -c \"CREATE USER replication WITH replication ENCRYPTED PASSWORD 'CHANGEME' LOGIN\"## Restart the master, to pick up the changes to postgresql.conf\n## On the slave, from $HOME, issue the pg_basebackup command to start setting up the hot standby from the master## --host=IP_OF_MASTER -> The master's IP## --pgdata=$PGDATA -> The slave's $PGDATA directory\n## -- xlog-method=stream -> Opens a second connection to the master to stream the WAL segments rather than pulling them all at the end## --password will prompt for the replication role's password## Without compression, \"stream\" gets the changes via the same method as Streaming Replication\ntime pg_basebackup --pgdata=$PGDATA --host=IP_OF_MASTER --port=5432 --username=replication --password --xlog-method=stream --format=plain --progress --verbose-- Alternate version with compression#time pg_basebackup --pgdata=$PGDATA --host=IP_OF_MASTER --port=5432 --username=replication --password --xlog --gzip --format=tar --progress --verbose\n##On the standby, create $PGDATA/recovery.conf:standby_mode = on## To promote the slave to a live database, issue \"touch /tmp/promote_db\"trigger_file = '/tmp/promote_db'\n## Host can be the master's IP or hostnameprimary_conninfo = 'host=IP_OF_MASTER port=5432 user=replication password=CHANGEME'## Log the standby WAL segments applied to a standby.log file## TODO: Add the standby.log to a log rotator\nrestore_command = 'cp /pgdata/WAL_Archive/%f \"%p\" 2>>/pgdata/9.2/data/pg_log/standby.log'## XXX: If there are multiple slaves, do not use pg_archivecleanup (WAL segments could be removed before being applied to other slaves)\narchive_cleanup_command = '/usr/pgsql-9.2/bin/pg_archivecleanup /pgdata/WAL_Archive %r'## On hot standby clusters, set to 'latest' to switch to the newest timeline in the archiverecovery_target_timeline = 'latest'",
"msg_date": "Mon, 10 Jun 2013 09:03:22 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
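Once a standby built along the lines of the notes above is running, streaming can be verified from both ends. These views and functions exist under these names in 9.2 (several were renamed in later releases).

    -- on the master: one row per connected standby, state should be 'streaming'
    SELECT application_name, state, sent_location, replay_location FROM pg_stat_replication;

    -- on the standby: how far WAL has been received and replayed
    SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();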
{
"msg_contents": "On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n>\n> WARNING: pg_stop_backup still waiting for all required WAL segments to be\n> archived (1920 seconds elapsed)\n> HINT: Check that your archive_command is executing properly.\n> pg_stop_backup can be canceled safely, but the database backup will not be\n> usable without all the WAL segments.\n>\n> When looking at ps aux on the master, I see the following:\n>\n> postgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres:\n> archiver process failed on 0000000200000E1B000000A9\n>\n\n> The file mentioned is the one that it was about to archive, when the\n> standby server failed. Somehow it must still be trying to \"catch up\" from\n> that file which of cause isn't there any more, since I had to remove those\n> in order to get more space on the HDD.\n>\n\nSo the archive_command is failing because it is trying to archive a file\nthat no longer exists.\n\nOne way around this is to remove the .ready files from\nthe pg_xlog/archive_status directory, which correspond to the WAL files you\nmanually removed.\n\nAnother way would be to temporarily replace the archive_command with one\nthat will report success even when the archiving fails, until the archiver\ngets paste this stretch. In fact you could just replace the command with\n'true', so it reports success without even doing anything.\n\nCheers,\n\nJeff\n\nOn Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (1920 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWhen looking at ps aux on the master, I see the following:\n\npostgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\nThe file mentioned is the one that it was about to archive, when the standby server failed. Somehow it must still be trying to \"catch up\" from that file which of cause isn't there any more, since I had to remove those in order to get more space on the HDD. \nSo the archive_command is failing because it is trying to archive a file that no longer exists.One way around this is to remove the .ready files from the pg_xlog/archive_status directory, which correspond to the WAL files you manually removed. \nAnother way would be to temporarily replace the archive_command with one that will report success even when the archiving fails, until the archiver gets paste this stretch. In fact you could just replace the command with 'true', so it reports success without even doing anything.\nCheers,Jeff",
"msg_date": "Mon, 10 Jun 2013 10:53:06 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
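If you take the first route Jeff describes (removing the .ready markers that point at WAL files already deleted by hand), it is safer to list the orphaned markers before touching anything. A minimal sketch, run as the postgres user with $PGDATA pointing at the master's data directory:

    cd "$PGDATA/pg_xlog/archive_status"
    # print .ready markers whose WAL segment no longer exists in pg_xlog
    for f in *.ready; do
        [ -e "$f" ] || continue                  # no .ready files at all
        [ -e "../${f%.ready}" ] || echo "orphaned: $f"
    done

Only the markers reported as orphaned correspond to segments that can no longer be archived anyway; everything else should be left for the archiver.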
{
"msg_contents": "On Mon, Jun 10, 2013 at 12:35 PM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n>\n> Den 10/06/2013 kl. 16.36 skrev bricklen <[email protected]>:\n>\n> On Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt <\n> [email protected]> wrote:\n>\n>>\n>> 2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server:\n>> could not connect to server: No route to host\n>> Is the server running on host \"192.168.0.4\" and accepting\n>> TCP/IP connections on port 5432?\n>>\n>\n> Did anything get changed on the standby or master around the time this\n> message started occurring?\n> On the master, what do the following show?\n> show port;\n> show listen_addresses;\n>\n> The master's IP is still 192.168.0.4?\n>\n> Have you tried connecting to the master using something like:\n> psql -h 192.168.0.4 -p 5432 -U postgres -d postgres\n>\n> Does that throw a useful error or warning?\n>\n>\n>\n> It turned out that the switch port that the server was connected to was\n> faulty, and hence no successful connection between master and slave was\n> established. This resolved in pg_xlog building up very fast, because our\n> system performs a lot of changes on the data we store.\n>\n> I ended up running pg_archivecleanup on the master to get some space freed\n> urgently. Then I got the switch changed with a new one. Now I'm trying to\n> the streaming replication setup from scratch again, but with no luck.\n>\n> I can't seem to figure out which steps I need to do, to get the standby\n> server wiped and get it started as a streaming replication again from\n> scratch. I tried to follow the steps, from step 6, in here\n> http://wiki.postgresql.org/wiki/Streaming_Replication but the process\n> seems to fail when I reach the point where I try to do a psql -c \"SELECT\n> pg_stop_backup()\". It just says:\n>\n> NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to\n> be archived\n> WARNING: pg_stop_backup still waiting for all required WAL segments to be\n> archived (60 seconds elapsed)\n> HINT: Check that your archive_command is executing properly.\n> pg_stop_backup can be canceled safely, but the database backup will not be\n> usable without all the WAL segments.\n> (...)\n>\n> When looking at ps aux on the master, I see the following:\n>\n> postgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres:\n> archiver process failed on 0000000200000E1B000000A9\n>\n> The file mentioned is the one that it was about to archive, when the\n> standby server failed. Somehow it must still be trying to \"catch up\" from\n> that file which of cause isn't there any more, since I had to remove those\n> in order to get more space on the HDD. Instead of trying to catch up from\n> the last succeeded file, I want it to start over from scratch with the\n> replication - I just don't know how.\n>\n>\nThat is because you manually removed some xlog, and you shouldn't ever do\nthat. To \"cancel\" the archiving, the better way (IMHO) is to set\narchive_command to a dummy command, like:\n\n archive_command = '/bin/true'\n\nAnd reload PostgreSQL:\n\n psql -c \"SELECT pg_reload_conf()\"\n\nWith that, PostgreSQL will stop archiving, and so you'll **be with no\nbackup at all**. With some archives removed, you can use your old\narchive_command again and reload the server.\n\nBTW, check why the archive_command is not working properly (look at PG's\nlog files). Is it because of no space left on disk? 
If so, removing some\nmay work.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Jun 10, 2013 at 12:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\nDen 10/06/2013 kl. 16.36 skrev bricklen <[email protected]>:\nOn Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\n2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server: could not connect to server: No route to host\n Is the server running on host \"192.168.0.4\" and accepting\n TCP/IP connections on port 5432?Did anything get changed on the standby or master around the time this message started occurring?On the master, what do the following show?\nshow port;show listen_addresses;The master's IP is still 192.168.0.4?Have you tried connecting to the master using something like:psql -h 192.168.0.4 -p 5432 -U postgres -d postgres\n\n\n Does that throw a useful error or warning?\nIt turned out that the switch port that the server was connected to was faulty, and hence no successful connection between master and slave was established. This resolved in pg_xlog building up very fast, because our system performs a lot of changes on the data we store. \nI ended up running pg_archivecleanup on the master to get some space freed urgently. Then I got the switch changed with a new one. Now I'm trying to the streaming replication setup from scratch again, but with no luck.\nI can't seem to figure out which steps I need to do, to get the standby server wiped and get it started as a streaming replication again from scratch. I tried to follow the steps, from step 6, in here http://wiki.postgresql.org/wiki/Streaming_Replication but the process seems to fail when I reach the point where I try to do a psql -c \"SELECT pg_stop_backup()\". It just says:\nNOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\n(...)When looking at ps aux on the master, I see the following:\npostgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\n\nThe file mentioned is the one that it was about to archive, when the standby server failed. Somehow it must still be trying to \"catch up\" from that file which of cause isn't there any more, since I had to remove those in order to get more space on the HDD. Instead of trying to catch up from the last succeeded file, I want it to start over from scratch with the replication - I just don't know how.\nThat is because you manually removed some xlog, and you shouldn't ever do that. To \"cancel\" the archiving, the better way (IMHO) is to set archive_command to a dummy command, like:\n archive_command = '/bin/true'And reload PostgreSQL: psql -c \"SELECT pg_reload_conf()\"\nWith that, PostgreSQL will stop archiving, and so you'll **be with no backup at all**. With some archives removed, you can use your old archive_command again and reload the server.\nBTW, check why the archive_command is not working properly (look at PG's log files). Is it because of no space left on disk? If so, removing some may work.Regards,-- \n\nMatheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 10 Jun 2013 14:59:18 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
{
"msg_contents": "Okay, cool\n\nYou mean that I should do the following right?:\n\n1. Stop slave server\n2. set archive_command = 'true' in postgresql.conf on the master server\n3. restart master server\n4. run psql -c \"SELECT pg_start_backup('label', true)\" on master\n5. run rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\" on master server\n6. run psql -c \"SELECT pg_stop_backup();\" on master server\n7. change archive_command back on master\n8. restart master\n9. start slave\n\nJust to confirm the approach :-)\n\n\n\nDen 10/06/2013 kl. 19.53 skrev Jeff Janes <[email protected]>:\n\n> On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\n> \n> WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (1920 seconds elapsed)\n> HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\n> \n> When looking at ps aux on the master, I see the following:\n> \n> postgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\n> \n> The file mentioned is the one that it was about to archive, when the standby server failed. Somehow it must still be trying to \"catch up\" from that file which of cause isn't there any more, since I had to remove those in order to get more space on the HDD.\n> \n> So the archive_command is failing because it is trying to archive a file that no longer exists.\n> \n> One way around this is to remove the .ready files from the pg_xlog/archive_status directory, which correspond to the WAL files you manually removed. \n> \n> Another way would be to temporarily replace the archive_command with one that will report success even when the archiving fails, until the archiver gets paste this stretch. In fact you could just replace the command with 'true', so it reports success without even doing anything.\n> \n> Cheers,\n> \n> Jeff\n\n\nOkay, coolYou mean that I should do the following right?:1. Stop slave server2. set archive_command = 'true' in postgresql.conf on the master server3. restart master server4. run psql -c \"SELECT pg_start_backup('label', true)\" on master5. run rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\" on master server6. run psql -c \"SELECT pg_stop_backup();\" on master server7. change archive_command back on master8. restart master9. start slaveJust to confirm the approach :-)Den 10/06/2013 kl. 19.53 skrev Jeff Janes <[email protected]>:On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (1920 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\nWhen looking at ps aux on the master, I see the following:\n\npostgres 30930 0.0 0.0 98412 1632 ? Ss 15:59 0:02 postgres: archiver process failed on 0000000200000E1B000000A9\nThe file mentioned is the one that it was about to archive, when the standby server failed. Somehow it must still be trying to \"catch up\" from that file which of cause isn't there any more, since I had to remove those in order to get more space on the HDD. 
\nSo the archive_command is failing because it is trying to archive a file that no longer exists.One way around this is to remove the .ready files from the pg_xlog/archive_status directory, which correspond to the WAL files you manually removed. \nAnother way would be to temporarily replace the archive_command with one that will report success even when the archiving fails, until the archiver gets paste this stretch. In fact you could just replace the command with 'true', so it reports success without even doing anything.\nCheers,Jeff",
"msg_date": "Mon, 10 Jun 2013 20:02:40 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
{
"msg_contents": "On Mon, Jun 10, 2013 at 11:02 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> Okay, cool\n>\n> You mean that I should do the following right?:\n>\n> 1. Stop slave server\n>\n\n\nAt this point, you don't have a slave server. Not a usable one, anyway.\n If you used to have a hot-standby server, it is now simply a historical\nreporting server. If you have no need/use for such a reporting server,\nthen yes you should stop it, to avoid confusion.\n\n\n\n> 2. set archive_command = 'true' in postgresql.conf on the master server\n> 3. restart master server\n>\n\nYou can simply do a reload rather than a full restart.\n\n\n> 4. run psql -c \"SELECT pg_start_backup('label', true)\" on master\n>\n\nNo, you shouldn't do that yet without first having correctly functioning\narchiving back in place. After setting archive_command=true and reloading\nthe server, you have to wait a while for the \"bad\" WAL files to get\npseudo-archived and cleared from the system. Once that has happened, you\ncan then return archive_command to its previous setting, and again\nreload/restart the server. Only at that point should you begin taking the\nnew backup. In other words, steps 7 and 8 have to be moved up to before\nstep 4.\n\n\n> 5. run rsync -av --exclude postmaster.pid --exclude pg_xlog\n> /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\" on\n> master server\n> 6. run psql -c \"SELECT pg_stop_backup();\" on master server\n> 7. change archive_command back on master\n> 8. restart master\n> 9. start slave\n>\n> Just to confirm the approach :-)\n>\n\n\nCheers,\n\nJeff\n\nOn Mon, Jun 10, 2013 at 11:02 AM, Niels Kristian Schjødt <[email protected]> wrote:\nOkay, coolYou mean that I should do the following right?:\n1. Stop slave serverAt this point, you don't have a slave server. Not a usable one, anyway. If you used to have a hot-standby server, it is now simply a historical reporting server. If you have no need/use for such a reporting server, then yes you should stop it, to avoid confusion.\n 2. set archive_command = 'true' in postgresql.conf on the master server\n3. restart master serverYou can simply do a reload rather than a full restart. \n4. run psql -c \"SELECT pg_start_backup('label', true)\" on master\nNo, you shouldn't do that yet without first having correctly functioning archiving back in place. After setting archive_command=true and reloading the server, you have to wait a while for the \"bad\" WAL files to get pseudo-archived and cleared from the system. Once that has happened, you can then return archive_command to its previous setting, and again reload/restart the server. Only at that point should you begin taking the new backup. In other words, steps 7 and 8 have to be moved up to before step 4.\n 5. run rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\" on master server\n6. run psql -c \"SELECT pg_stop_backup();\" on master server7. change archive_command back on master8. restart master9. start slaveJust to confirm the approach :-)\nCheers,Jeff",
"msg_date": "Mon, 10 Jun 2013 11:24:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
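Putting Jeff's corrections together, the sequence discussed in this exchange becomes roughly the following sketch (commands and paths are the ones quoted earlier in the thread, not an official procedure):

    # 1. on the master, set archive_command = '/bin/true' in postgresql.conf, then reload:
    psql -c "SELECT pg_reload_conf();"
    # 2. wait until pg_xlog/archive_status contains no *.ready files
    # 3. restore the real archive_command and reload again
    # 4. only now begin the base backup:
    psql -c "SELECT pg_start_backup('base', true);"
    # 5. rsync the data directory to the standby (excluding postmaster.pid and pg_xlog)
    # 6. finish the backup:
    psql -c "SELECT pg_stop_backup();"
    # 7. start the standby with its recovery.conf pointing at the master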
{
"msg_contents": "Thanks,\n\n> No, you shouldn't do that yet without first having correctly functioning archiving back in place. After setting archive_command=true and reloading the server, you have to wait a while for the \"bad\" WAL files to get pseudo-archived and cleared from the system.\n\nHow do I know when this is done?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Jun 2013 20:31:18 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
},
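The question above ("how do I know when this is done?") can be answered from the filesystem and the process list: the pseudo-archiving has caught up when no .ready markers remain and the archiver no longer shows a failed segment. Paths are again the Debian-style ones from this thread.

    # should drop to 0 once the backlog has drained
    ls /var/lib/postgresql/9.2/main/pg_xlog/archive_status/*.ready 2>/dev/null | wc -l

    # the "archiver process failed on ..." line should disappear
    ps aux | grep [a]rchiver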
{
"msg_contents": "Solved it - thanks!\n\nDen 10/06/2013 kl. 20.24 skrev Jeff Janes <[email protected]>:\n\n> On Mon, Jun 10, 2013 at 11:02 AM, Niels Kristian Schjødt <[email protected]> wrote:\n> Okay, cool\n> \n> You mean that I should do the following right?:\n> \n> 1. Stop slave server\n> \n> \n> At this point, you don't have a slave server. Not a usable one, anyway. If you used to have a hot-standby server, it is now simply a historical reporting server. If you have no need/use for such a reporting server, then yes you should stop it, to avoid confusion.\n> \n> \n> 2. set archive_command = 'true' in postgresql.conf on the master server\n> 3. restart master server\n> \n> You can simply do a reload rather than a full restart.\n> \n> 4. run psql -c \"SELECT pg_start_backup('label', true)\" on master\n> \n> No, you shouldn't do that yet without first having correctly functioning archiving back in place. After setting archive_command=true and reloading the server, you have to wait a while for the \"bad\" WAL files to get pseudo-archived and cleared from the system. Once that has happened, you can then return archive_command to its previous setting, and again reload/restart the server. Only at that point should you begin taking the new backup. In other words, steps 7 and 8 have to be moved up to before step 4.\n> \n> 5. run rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\" on master server\n> 6. run psql -c \"SELECT pg_stop_backup();\" on master server\n> 7. change archive_command back on master\n> 8. restart master\n> 9. start slave\n> \n> Just to confirm the approach :-)\n> \n> \n> Cheers,\n> \n> Jeff\n> \n> \n\n\nSolved it - thanks!Den 10/06/2013 kl. 20.24 skrev Jeff Janes <[email protected]>:On Mon, Jun 10, 2013 at 11:02 AM, Niels Kristian Schjødt <[email protected]> wrote:\nOkay, coolYou mean that I should do the following right?:\n1. Stop slave serverAt this point, you don't have a slave server. Not a usable one, anyway. If you used to have a hot-standby server, it is now simply a historical reporting server. If you have no need/use for such a reporting server, then yes you should stop it, to avoid confusion.\n 2. set archive_command = 'true' in postgresql.conf on the master server\n3. restart master serverYou can simply do a reload rather than a full restart. \n4. run psql -c \"SELECT pg_start_backup('label', true)\" on master\nNo, you shouldn't do that yet without first having correctly functioning archiving back in place. After setting archive_command=true and reloading the server, you have to wait a while for the \"bad\" WAL files to get pseudo-archived and cleared from the system. Once that has happened, you can then return archive_command to its previous setting, and again reload/restart the server. Only at that point should you begin taking the new backup. In other words, steps 7 and 8 have to be moved up to before step 4.\n 5. run rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\" on master server\n6. run psql -c \"SELECT pg_stop_backup();\" on master server7. change archive_command back on master8. restart master9. start slaveJust to confirm the approach :-)\nCheers,Jeff",
"msg_date": "Mon, 10 Jun 2013 21:12:25 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: URGENT issue: pg-xlog growing on master!"
}
] |
[
{
"msg_contents": "Hello all you guys,\n\nSince saturday I'm get stucked in a very strange situation: from time to\ntime (sometimes with intervals less than 10 minutes), the server get\n\"stucked\"/\"hang\" (I dont know how to call it) and every connections on\npostgres (dont matter if it's SELECT, UPDATE, DELETE, INSERT, startup,\nauthentication...) seems like get \"paused\"; after some seconds (say ~10 or\n~15 sec, sometimes less) everything \"goes OK\".\n\nSo, my first trial was to check disks. Running \"iostat\" apparently showed\nthat disks was OK. It's a Raid10, 4 600GB SAS, IBM Storage DS3512, over FC.\nIBM DS Storage Manager says that disks is OK.\n\nThen, memory. Apparently no swap being used:\n[###@### data]# free -m\n total used free shared buffers cached\nMem: 145182 130977 14204 0 43 121407\n-/+ buffers/cache: 9526 135655\nSwap: 6143 65 6078\n\nNo error on /var/log/messages.\n\nFollowing, is some strace of one processes, and some others, maybe, useful\ninfos. Every processes I've straced bring the same scenario: seems it get\nstucked on semop.\n\nThere's no modification in server since last monday, that I changed\npg_hba.conf to login in LDAP. The LDAP Server apparently is OK, and tcpdump\ndoesnt show any slow on response, neither big activity on this port.\n\nAny help appreciate,\n\n[###@### ~]# strace -ttp 5209\nProcess 5209 attached - interrupt to quit\n09:01:54.122445 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.368785 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.368902 semop(2523148, {{11, 1, 0}}, 1) = 0\n09:01:55.368978 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.369861 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.370648 semop(3047452, {{6, 1, 0}}, 1) = 0\n09:01:55.370694 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.370762 semop(2785300, {{12, 1, 0}}, 1) = 0\n09:01:55.370805 access(\"base/2048098929\", F_OK) = 0\n09:01:55.370953 open(\"base/2048098929/PG_VERSION\", O_RDONLY) = 5\n\n[###@### data]# ipcs -l\n\n- Shared Memory Limits -\nmax number of segments = 4096\nmax seg size (kbytes) = 83886080\nmax total shared memory (kbytes) = 17179869184\nmin seg size (bytes) = 1\n\n------ Semaphore Limits --------\nmax number of arrays = 128\nmax semaphores per array = 250\nmax semaphores system wide = 32000\nmax ops per semop call = 32\nsemaphore max value = 32767\n\n------ Messages: Limits --------\nmax queues system wide = 32768\nmax size of message (bytes) = 65536\ndefault max size of queue (bytes) = 65536\n\n[###@### data]# ipcs -u\n----- Semaphore Status -------\nused arrays: 34\nallocated semaphores: 546\n\n[###@### data]# uname -a\nLinux ### 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012\nx86_64 x86_64 x86_64 GNU/Linux\n\npostgres=# select version();\n version\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n20120305 (Red Hat 4.4.6-4), 64-bit\n(1 registro)\n\n[###@### data]# cat /etc/redhat-release\nCentOS release 6.3 (Final)\n\nHello all you guys,Since saturday I'm get stucked in a very strange situation: from time to time (sometimes with intervals less than 10 minutes), the server get \"stucked\"/\"hang\" (I dont know how to call it) and every connections on postgres (dont matter if it's SELECT, UPDATE, DELETE, INSERT, startup, authentication...) seems like get \"paused\"; after some seconds (say ~10 or ~15 sec, sometimes less) everything \"goes OK\".\nSo, my first trial was to check disks. 
Running \"iostat\" apparently showed that disks was OK. It's a Raid10, 4 600GB SAS, IBM Storage DS3512, over FC. IBM DS Storage Manager says that disks is OK.\nThen, memory. Apparently no swap being used:[###@### data]# free -m total used free shared buffers cachedMem: 145182 130977 14204 0 43 121407\n-/+ buffers/cache: 9526 135655Swap: 6143 65 6078No error on /var/log/messages.Following, is some strace of one processes, and some others, maybe, useful infos. Every processes I've straced bring the same scenario: seems it get stucked on semop.\nThere's no modification in server since last monday, that I changed pg_hba.conf to login in LDAP. The LDAP Server apparently is OK, and tcpdump doesnt show any slow on response, neither big activity on this port.\nAny help appreciate,[###@### ~]# strace -ttp 5209Process 5209 attached - interrupt to quit09:01:54.122445 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.368785 semop(2293765, {{15, -1, 0}}, 1) = 009:01:55.368902 semop(2523148, {{11, 1, 0}}, 1) = 009:01:55.368978 semop(2293765, {{15, -1, 0}}, 1) = 009:01:55.369861 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.370648 semop(3047452, {{6, 1, 0}}, 1) = 009:01:55.370694 semop(2293765, {{15, -1, 0}}, 1) = 009:01:55.370762 semop(2785300, {{12, 1, 0}}, 1) = 009:01:55.370805 access(\"base/2048098929\", F_OK) = 0\n09:01:55.370953 open(\"base/2048098929/PG_VERSION\", O_RDONLY) = 5[###@### data]# ipcs -l- Shared Memory Limits -max number of segments = 4096\nmax seg size (kbytes) = 83886080max total shared memory (kbytes) = 17179869184min seg size (bytes) = 1------ Semaphore Limits --------max number of arrays = 128\nmax semaphores per array = 250max semaphores system wide = 32000max ops per semop call = 32semaphore max value = 32767------ Messages: Limits --------\nmax queues system wide = 32768max size of message (bytes) = 65536default max size of queue (bytes) = 65536[###@### data]# ipcs -u----- Semaphore Status -------\nused arrays: 34allocated semaphores: 546[###@### data]# uname -aLinux ### 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux\npostgres=# select version(); version--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit(1 registro)[###@### data]# cat /etc/redhat-release\nCentOS release 6.3 (Final)",
"msg_date": "Tue, 11 Jun 2013 09:48:45 -0300",
"msg_from": "Rafael Domiciano <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.2.2 - semop hanging"
},
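When every backend stalls at the same time like this, a snapshot of pg_stat_activity taken during one of the pauses usually narrows things down. Note that on 9.2 the waiting column only reflects heavyweight locks, so LWLock or semaphore waits will not show up there; an empty result therefore does not rule out lock contention.

    SELECT pid, state, waiting, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY query_start;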
{
"msg_contents": "On 06/11/2013 05:48 AM, Rafael Domiciano wrote:\n> Hello all you guys,\n> \n> Since saturday I'm get stucked in a very strange situation: from time to\n> time (sometimes with intervals less than 10 minutes), the server get\n> \"stucked\"/\"hang\" (I dont know how to call it) and every connections on\n> postgres (dont matter if it's SELECT, UPDATE, DELETE, INSERT, startup,\n> authentication...) seems like get \"paused\"; after some seconds (say ~10 or\n> ~15 sec, sometimes less) everything \"goes OK\".\n\nDid you ever get any idea what was going on here? I'm seeing the same\nbehavior at another side, on 9.1.3.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Jun 2013 12:07:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.2 - semop hanging"
},
{
"msg_contents": "On Tue, Jun 11, 2013 at 9:48 PM, Rafael Domiciano\n<[email protected]> wrote:\n> postgres=# select version();\n> version\n> --------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n> 20120305 (Red Hat 4.4.6-4), 64-bit\n> (1 registro)\nThis is not directly related to your post, but... You should update\nasap your server to 9.2.4, as it contains important security fixes.\n--\nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Jun 2013 07:45:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.2 - semop hanging"
},
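For completeness: a minor-version update (9.2.2 to 9.2.4) is a package swap plus a restart, with no dump/reload needed. The package and service names below assume the PGDG yum packages on CentOS 6, which is only a guess about this particular installation.

    yum update postgresql92 postgresql92-server postgresql92-libs postgresql92-contrib
    service postgresql-9.2 restart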
{
"msg_contents": "Hello guys,\n\nI've been trying to \"hunting down\" my problem and reached the following:\n\n1) Emre Hasegeli has suggested to reduce my shared buffers, but it's\nalready low:\n total server memory: 141 GB\n shared_buffers: 16 GB\n\nMaybe it's too low? I've been thinking to increase to 32 GB.\n\nmax_connections = 500 and ~400 connections average\n\n2) Being \"hanging\" on \"semop\" I tried the following, as suggested on some\n\"tuning page\" over web.\n\necho \"250 32000 100 128\" > /proc/sys/kernel/sem\n\n3) I think my problem could be something related to \"LwLocks\", as I did\nsome googling and found some related problems and slides. There is some way\nI can confirm this?\n\n4) Rebooting the server didn't make any difference.\n\nAppreciate any help,\n\nRafael\n\n\nOn Tue, Jun 11, 2013 at 9:48 AM, Rafael Domiciano <\[email protected]> wrote:\n\n> Hello all you guys,\n>\n> Since saturday I'm get stucked in a very strange situation: from time to\n> time (sometimes with intervals less than 10 minutes), the server get\n> \"stucked\"/\"hang\" (I dont know how to call it) and every connections on\n> postgres (dont matter if it's SELECT, UPDATE, DELETE, INSERT, startup,\n> authentication...) seems like get \"paused\"; after some seconds (say ~10 or\n> ~15 sec, sometimes less) everything \"goes OK\".\n>\n> So, my first trial was to check disks. Running \"iostat\" apparently showed\n> that disks was OK. It's a Raid10, 4 600GB SAS, IBM Storage DS3512, over FC.\n> IBM DS Storage Manager says that disks is OK.\n>\n> Then, memory. Apparently no swap being used:\n> [###@### data]# free -m\n> total used free shared buffers cached\n> Mem: 145182 130977 14204 0 43 121407\n> -/+ buffers/cache: 9526 135655\n> Swap: 6143 65 6078\n>\n> No error on /var/log/messages.\n>\n> Following, is some strace of one processes, and some others, maybe, useful\n> infos. Every processes I've straced bring the same scenario: seems it get\n> stucked on semop.\n>\n> There's no modification in server since last monday, that I changed\n> pg_hba.conf to login in LDAP. 
The LDAP Server apparently is OK, and tcpdump\n> doesnt show any slow on response, neither big activity on this port.\n>\n> Any help appreciate,\n>\n> [###@### ~]# strace -ttp 5209\n> Process 5209 attached - interrupt to quit\n> 09:01:54.122445 semop(2293765, {{15, -1, 0}}, 1) = 0\n> 09:01:55.368785 semop(2293765, {{15, -1, 0}}, 1) = 0\n> 09:01:55.368902 semop(2523148, {{11, 1, 0}}, 1) = 0\n> 09:01:55.368978 semop(2293765, {{15, -1, 0}}, 1) = 0\n> 09:01:55.369861 semop(2293765, {{15, -1, 0}}, 1) = 0\n> 09:01:55.370648 semop(3047452, {{6, 1, 0}}, 1) = 0\n> 09:01:55.370694 semop(2293765, {{15, -1, 0}}, 1) = 0\n> 09:01:55.370762 semop(2785300, {{12, 1, 0}}, 1) = 0\n> 09:01:55.370805 access(\"base/2048098929\", F_OK) = 0\n> 09:01:55.370953 open(\"base/2048098929/PG_VERSION\", O_RDONLY) = 5\n>\n> [###@### data]# ipcs -l\n>\n> - Shared Memory Limits -\n> max number of segments = 4096\n> max seg size (kbytes) = 83886080\n> max total shared memory (kbytes) = 17179869184\n> min seg size (bytes) = 1\n>\n> ------ Semaphore Limits --------\n> max number of arrays = 128\n> max semaphores per array = 250\n> max semaphores system wide = 32000\n> max ops per semop call = 32\n> semaphore max value = 32767\n>\n> ------ Messages: Limits --------\n> max queues system wide = 32768\n> max size of message (bytes) = 65536\n> default max size of queue (bytes) = 65536\n>\n> [###@### data]# ipcs -u\n> ----- Semaphore Status -------\n> used arrays: 34\n> allocated semaphores: 546\n>\n> [###@### data]# uname -a\n> Linux ### 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012\n> x86_64 x86_64 x86_64 GNU/Linux\n>\n> postgres=# select version();\n> version\n>\n> --------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n> 20120305 (Red Hat 4.4.6-4), 64-bit\n> (1 registro)\n>\n> [###@### data]# cat /etc/redhat-release\n> CentOS release 6.3 (Final)\n>\n\nHello guys,I've been trying to \"hunting down\" my problem and reached the following:1) Emre Hasegeli has suggested to reduce my shared buffers, but it's already low:\n total server memory: 141 GB shared_buffers: 16 GBMaybe it's too low? I've been thinking to increase to 32 GB.max_connections = 500 and ~400 connections average\n2) Being \"hanging\" on \"semop\" I tried the following, as suggested on some \"tuning page\" over web.echo \"250 32000 100 128\" > /proc/sys/kernel/sem\n3) I think my problem could be something related to \"LwLocks\", as I did some googling and found some related problems and slides. There is some way I can confirm this?\n4) Rebooting the server didn't make any difference.Appreciate any help,RafaelOn Tue, Jun 11, 2013 at 9:48 AM, Rafael Domiciano <[email protected]> wrote:\nHello all you guys,\nSince saturday I'm get stucked in a very strange situation: from time to time (sometimes with intervals less than 10 minutes), the server get \"stucked\"/\"hang\" (I dont know how to call it) and every connections on postgres (dont matter if it's SELECT, UPDATE, DELETE, INSERT, startup, authentication...) seems like get \"paused\"; after some seconds (say ~10 or ~15 sec, sometimes less) everything \"goes OK\".\nSo, my first trial was to check disks. Running \"iostat\" apparently showed that disks was OK. It's a Raid10, 4 600GB SAS, IBM Storage DS3512, over FC. IBM DS Storage Manager says that disks is OK.\nThen, memory. 
Apparently no swap being used:[###@### data]# free -m total used free shared buffers cachedMem: 145182 130977 14204 0 43 121407\n-/+ buffers/cache: 9526 135655Swap: 6143 65 6078No error on /var/log/messages.Following, is some strace of one processes, and some others, maybe, useful infos. Every processes I've straced bring the same scenario: seems it get stucked on semop.\nThere's no modification in server since last monday, that I changed pg_hba.conf to login in LDAP. The LDAP Server apparently is OK, and tcpdump doesnt show any slow on response, neither big activity on this port.\nAny help appreciate,[###@### ~]# strace -ttp 5209Process 5209 attached - interrupt to quit09:01:54.122445 semop(2293765, {{15, -1, 0}}, 1) = 0\n\n09:01:55.368785 semop(2293765, {{15, -1, 0}}, 1) = 009:01:55.368902 semop(2523148, {{11, 1, 0}}, 1) = 009:01:55.368978 semop(2293765, {{15, -1, 0}}, 1) = 009:01:55.369861 semop(2293765, {{15, -1, 0}}, 1) = 0\n09:01:55.370648 semop(3047452, {{6, 1, 0}}, 1) = 009:01:55.370694 semop(2293765, {{15, -1, 0}}, 1) = 009:01:55.370762 semop(2785300, {{12, 1, 0}}, 1) = 009:01:55.370805 access(\"base/2048098929\", F_OK) = 0\n09:01:55.370953 open(\"base/2048098929/PG_VERSION\", O_RDONLY) = 5[###@### data]# ipcs -l\n- Shared Memory Limits -max number of segments = 4096\nmax seg size (kbytes) = 83886080max total shared memory (kbytes) = 17179869184min seg size (bytes) = 1\n------ Semaphore Limits --------max number of arrays = 128\nmax semaphores per array = 250max semaphores system wide = 32000max ops per semop call = 32semaphore max value = 32767------ Messages: Limits --------\n\nmax queues system wide = 32768max size of message (bytes) = 65536default max size of queue (bytes) = 65536[###@### data]# ipcs -u----- Semaphore Status -------\nused arrays: 34allocated semaphores: 546[###@### data]# uname -aLinux ### 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux\npostgres=# select version(); version--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit(1 registro)[###@### data]# cat /etc/redhat-release\n\nCentOS release 6.3 (Final)",
"msg_date": "Mon, 1 Jul 2013 15:06:34 -0300",
"msg_from": "Rafael Domiciano <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.2.2 - semop hanging"
}
] |
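An aside on the diagnosis above: 9.2 has no wait-event instrumentation, but a quick check taken while the server is "paused" can at least separate heavyweight-lock waits from lower-level contention. A minimal sketch against the standard 9.2 catalogs (nothing here is specific to Rafael's setup):

-- pg_stat_activity.waiting is true only for heavyweight (table/row) lock
-- waits; if every backend is stalled yet waiting is false across the board,
-- LWLock/spinlock contention or I/O is the more likely suspect.
SELECT waiting, state, count(*)
  FROM pg_stat_activity
 GROUP BY waiting, state
 ORDER BY count(*) DESC;

-- Heavyweight locks that are actually blocked, if any:
SELECT locktype, mode, count(*)
  FROM pg_locks
 WHERE NOT granted
 GROUP BY locktype, mode;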
[
{
"msg_contents": "Hi All\n\nOne of my query treating performance issue on my production server.\nOnce i run query on my parent table with specific condition(hard coded\nvalue) its uses only proper child table and its index on explain plan ,\nbut once i am using table conditions (instead of hard coded value), query\nplanner is going all the child tables, Can i know where i am worng\n\nPostgresql version 9.2.2\n\nPlease find details below\n==========================\n\nXXX_db=> select id from xxx where d_id = '5';\n id\n-------\n 5\n 45\n(2 rows)\n\n\nXXX_db=> explain analyze SELECT * FROM xxx_parent_table WHERE id in\n(5,45) and ( sts = 1 or status is null ) order by creation_time limit 40 ;\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------\n-\n\n Limit (cost=12.21..12.21 rows=3 width=251) (actual time=6.585..6.585\nrows=0 loops=1)\n -> Sort (cost=12.21..12.21 rows=3 width=251) (actual time=6.582..6.582\nrows=0 loops=1)\n Sort Key: public.xxx_parent_tables.creation_time\n Sort Method: quicksort Memory: 25kB\n -> Result (cost=0.00..12.18 rows=3 width=251) (actual\ntime=6.571..6.571 rows=0 loops=1)\n -> Append (cost=0.00..12.18 rows=3 width=251) (actual\ntime=6.569..6.569 rows=0 loops=1)\n -> Seq Scan on xxx_parent_tables (cost=0.00..0.00\nrows=1 width=324) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((id = ANY ('{5,45}'::bigint[])) AND\n((status = 1) OR (status IS NULL)))\n -> Bitmap Heap Scan on\nxxx_parent_tables_table_details_ xxx_parent_tables (cost=4.52..6.53 rows=1\nwidth=105) (actual ti\nme=0.063..0.063 rows=0 loops=1)\n Recheck Cond: ((status = 1) OR (status IS NULL))\n Filter: (id = ANY ('{5,45}'::bigint[]))\n -> BitmapOr (cost=4.52..4.52 rows=1 width=0)\n(actual time=0.059..0.059 rows=0 loops=1)\n -> Bitmap Index Scan on\nxxx_parent_tables_table_details__status_idx (cost=0.00..2.26 rows=1\nwidth=0)\n (actual time=0.038..0.038 rows=0 loops=1)\n Index Cond: (status = 1)\n -> Bitmap Index Scan on\nxxx_parent_tables_table_details__status_idx (cost=0.00..2.26 rows=1\nwidth=0)\n (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (status IS NULL)\n -> Bitmap Heap Scan on\nxxx_parent_tables_table_details_det xxx_parent_tables (cost=2.52..5.65\nrows=1 width=324) (actual ti\nme=6.502..6.502 rows=0 loops=1)\n Recheck Cond: (id = ANY ('{5,45}'::bigint[]))\n Filter: ((status = 1) OR (status IS NULL))\n -> Bitmap Index Scan on\nxxx_parent_tables_table_details_id_idx (cost=0.00..2.52 rows=2 width=0)\n(actua\nl time=6.499..6.499 rows=0 loops=1)\n Index Cond: (id = ANY ('{5,45}'::bigint[]))\n Total runtime: 6.823 ms\n(22 rows)\n\n\nXXX_db => explain analyze SELECT * FROM xxx_parent_tables WHERE cp_id\nin (select id from xxx where d_id = '5') and ( status = 1 or status is null\n) order by creation_time limit 40 ;\n\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------\n Limit (cost=3.66..6067.89 rows=40 width=105) (actual\ntime=70479.596..70479.596 rows=0 loops=1)\n -> Nested Loop Semi Join (cost=3.66..4587291.92 rows=30258 width=105)\n(actual time=70479.593..70479.593 rows=0 loops=1)\n Join Filter: (public.xxx_parent_tables.cp_id = cp_info.cp_id)\n Rows Removed by Join Filter: 1416520\n -> Merge Append (cost=3.66..4565956.68 rows=711059 width=105)\n(actual time=67225.964..69635.016 
rows=708260 loops=1)\n Sort Key: public.xxx_parent_tables.creation_time\n -> Sort (cost=0.01..0.02 rows=1 width=324) (actual\ntime=0.018..0.018 rows=0 loops=1)\n Sort Key: public.xxx_parent_tables.creation_time\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on xxx_parent_tables (cost=0.00..0.00\nrows=1 width=324) (actual time=0.011..0.011 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_automobiles_carwale_creation_time_idx on\nxxx_parent_tables_automobiles_carwale xxx_parent_tables (co\nst=0.00..649960.44 rows=17 width=105) (actual time=10219.559..10219.559\nrows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 3102241\n -> Index Scan using\nxxx_parent_tables_automobiles_sulekha_creation_time_idx on\nxxx_parent_tables_automobiles_sulekha xxx_parent_tables (co\nst=0.00..1124998.57 rows=1 width=105) (actual time=17817.577..17817.577\nrows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 4016234\n -> Index Scan using\nxxx_parent_tables_automobiles_verse_creation_time_idx on\nxxx_parent_tables_automobiles_verse xxx_parent_tables (cost=0\n.00..24068.88 rows=1 width=103) (actual time=675.291..675.291 rows=0\nloops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 420616\n -> Index Scan using\nxxx_parent_tables_automobiles_yolist_creation_time_idx on\nxxx_parent_tables_automobiles_yolist xxx_parent_tables (cost\n=0.00..25.05 rows=2 width=324) (actual time=0.016..0.016 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_deals_bagittoday_creation_time_idx on\nxxx_parent_tables_deals_bagittoday xxx_parent_tables (cost=0.0\n0..23882.78 rows=1 width=105) (actual time=234.672..234.672 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 84988\n -> Index Scan using\nxxx_parent_tables_deals_bindaasbargain_creation_time_idx on\nxxx_parent_tables_deals_bindaasbargain xxx_parent_tables (\ncost=0.00..25.05 rows=2 width=324) (actual time=0.016..0.016 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_deals_buzzr_creation_time_idx on\nxxx_parent_tables_deals_buzzr xxx_parent_tables (cost=0.00..11435.4\n1 rows=1 width=105) (actual time=109.466..109.466 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 33750\n -> Index Scan using\nxxx_parent_tables_deals_dealdrums_creation_time_idx on\nxxx_parent_tables_deals_dealdrums xxx_parent_tables (cost=0.00.\n.51.61 rows=1 width=105) (actual time=0.917..0.917 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 941\n -> Index Scan using\nxxx_parent_tables_deals_dealsandyou_creation_time_idx on\nxxx_parent_tables_deals_dealsandyou xxx_parent_tables (cost=0\n.00..25.05 rows=2 width=324) (actual time=0.012..0.012 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_deals_foodiebay_creation_time_idx on\nxxx_parent_tables_deals_foodiebay xxx_parent_tables (cost=0.00.\n.25.05 rows=2 width=324) (actual time=0.024..0.024 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_deals_futurebazaar_creation_time_idx on\nxxx_parent_tables_deals_futurebazaar xxx_parent_tables (cost\n=0.00..30.37 rows=1 width=109) (actual time=0.348..0.348 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n\n -> Index Scan 
using\nxxx_parent_tables_jobs_jobsa1_creation_time_idx on\nxxx_parent_tables_jobs_jobsa1 xxx_parent_tables (cost=0.00..25.05 r\nows=2 width=324) (actual time=0.020..0.020 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_jobs_jobsinnigeria_creation_time_idx on\nxxx_parent_tables_jobs_jobsinnigeria xxx_parent_tables (cost\n=0.00..25.05 rows=2 width=324) (actual time=0.013..0.013 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_jobs_khojle_creation_time_idx on\nxxx_parent_tables_jobs_khojle xxx_parent_tables (cost=0.00..25.05 r\nows=2 width=324) (actual time=0.013..0.013 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_jobs_midday_creation_time_idx on\nxxx_parent_tables_jobs_midday xxx_parent_tables (cost=0.00..25.05 r\nows=2 width=324) (actual time=0.011..0.011 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_jobs_monsterindia_creation_time_idx on\nxxx_parent_tables_jobs_monsterindia xxx_parent_tables (cost=0\n.00..31569.68 rows=81849 width=105) (actual time=279.393..544.467\nrows=78622 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 155151\n -> Index Scan using\nxxx_parent_tables_jobs_mprc_creation_time_idx on\nxxx_parent_tables_jobs_mprc xxx_parent_tables (cost=0.00..25.05 rows=\n2 width=324) (actual time=0.016..0.016 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_jobs_myjobsintanzania_creation_time_idx on\nxxx_parent_tables_jobs_myjobsintanzania xxx_parent_tables\n (cost=0.00..25.05 rows=2 width=324) (actual time=0.012..0.012 rows=0\nloops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_mobiles_verse_creation_time_idx on\nxxx_parent_tables_mobiles_verse xxx_parent_tables (cost=0.00..25.\n05 rows=2 width=324) (actual time=0.015..0.015 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using\nxxx_parent_tables_mobileseeker_quikr_creation_time_idx on\nxxx_parent_tables_mobileseeker_quikr xxx_parent_tables (cost\n=0.00..13.30 rows=1 width=105) (actual time=0.111..0.111 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL))\n Rows Removed by Filter: 61\n\n Filter: ((status = 1) OR (status IS NULL))\n -> Materialize (cost=0.00..3.47 rows=2 width=8) (actual\ntime=0.000..0.000 rows=2 loops=708260)\n -> Seq Scan on cp_info (cost=0.00..3.46 rows=2 width=8)\n(actual time=0.028..0.060 rows=2 loops=1)\n Filter: (domain_id = 5::bigint)\n Rows Removed by Filter: 115\n Total runtime: 70481.560 ms\n(xxx rows)\n\nHi AllOne of my query treating performance issue on my production server.Once i run query on my parent table with specific condition(hard coded value) its uses only proper child table and its index on explain plan , but once i am using table conditions (instead of hard coded value), query planner is going all the child tables, Can i know where i am worng \nPostgresql version 9.2.2 Please find details below ==========================XXX_db=> select id from xxx where d_id = '5';\n id ------- 5 45(2 rows)XXX_db=> explain analyze SELECT * FROM xxx_parent_table WHERE id in (5,45) and ( sts = 1 or status is null ) order by creation_time limit 40 ;\n QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------\n- Limit (cost=12.21..12.21 rows=3 width=251) 
(actual time=6.585..6.585 rows=0 loops=1) -> Sort (cost=12.21..12.21 rows=3 width=251) (actual time=6.582..6.582 rows=0 loops=1)\n Sort Key: public.xxx_parent_tables.creation_time Sort Method: quicksort Memory: 25kB -> Result (cost=0.00..12.18 rows=3 width=251) (actual time=6.571..6.571 rows=0 loops=1)\n -> Append (cost=0.00..12.18 rows=3 width=251) (actual time=6.569..6.569 rows=0 loops=1) -> Seq Scan on xxx_parent_tables (cost=0.00..0.00 rows=1 width=324) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((id = ANY ('{5,45}'::bigint[])) AND ((status = 1) OR (status IS NULL))) -> Bitmap Heap Scan on xxx_parent_tables_table_details_ xxx_parent_tables (cost=4.52..6.53 rows=1 width=105) (actual ti\nme=0.063..0.063 rows=0 loops=1) Recheck Cond: ((status = 1) OR (status IS NULL)) Filter: (id = ANY ('{5,45}'::bigint[])) -> BitmapOr (cost=4.52..4.52 rows=1 width=0) (actual time=0.059..0.059 rows=0 loops=1)\n -> Bitmap Index Scan on xxx_parent_tables_table_details__status_idx (cost=0.00..2.26 rows=1 width=0) (actual time=0.038..0.038 rows=0 loops=1) Index Cond: (status = 1)\n -> Bitmap Index Scan on xxx_parent_tables_table_details__status_idx (cost=0.00..2.26 rows=1 width=0) (actual time=0.019..0.019 rows=0 loops=1) Index Cond: (status IS NULL)\n -> Bitmap Heap Scan on xxx_parent_tables_table_details_det xxx_parent_tables (cost=2.52..5.65 rows=1 width=324) (actual time=6.502..6.502 rows=0 loops=1) Recheck Cond: (id = ANY ('{5,45}'::bigint[]))\n Filter: ((status = 1) OR (status IS NULL)) -> Bitmap Index Scan on xxx_parent_tables_table_details_id_idx (cost=0.00..2.52 rows=2 width=0) (actua\nl time=6.499..6.499 rows=0 loops=1) Index Cond: (id = ANY ('{5,45}'::bigint[])) Total runtime: 6.823 ms(22 rows)\nXXX_db => explain analyze SELECT * FROM xxx_parent_tables WHERE cp_id in (select id from xxx where d_id = '5') and ( status = 1 or status is null ) order by creation_time limit 40 ; QUERY PLAN \n --------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------ Limit (cost=3.66..6067.89 rows=40 width=105) (actual time=70479.596..70479.596 rows=0 loops=1) -> Nested Loop Semi Join (cost=3.66..4587291.92 rows=30258 width=105) (actual time=70479.593..70479.593 rows=0 loops=1)\n Join Filter: (public.xxx_parent_tables.cp_id = cp_info.cp_id) Rows Removed by Join Filter: 1416520 -> Merge Append (cost=3.66..4565956.68 rows=711059 width=105) (actual time=67225.964..69635.016 rows=708260 loops=1)\n Sort Key: public.xxx_parent_tables.creation_time -> Sort (cost=0.01..0.02 rows=1 width=324) (actual time=0.018..0.018 rows=0 loops=1) Sort Key: public.xxx_parent_tables.creation_time\n Sort Method: quicksort Memory: 25kB -> Seq Scan on xxx_parent_tables (cost=0.00..0.00 rows=1 width=324) (actual time=0.011..0.011 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL))\n -> Index Scan using xxx_parent_tables_automobiles_carwale_creation_time_idx on xxx_parent_tables_automobiles_carwale xxx_parent_tables (cost=0.00..649960.44 rows=17 width=105) (actual time=10219.559..10219.559 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 3102241 -> Index Scan using xxx_parent_tables_automobiles_sulekha_creation_time_idx on xxx_parent_tables_automobiles_sulekha xxx_parent_tables (co\nst=0.00..1124998.57 rows=1 width=105) (actual time=17817.577..17817.577 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) Rows Removed by 
Filter: 4016234\n -> Index Scan using xxx_parent_tables_automobiles_verse_creation_time_idx on xxx_parent_tables_automobiles_verse xxx_parent_tables (cost=0.00..24068.88 rows=1 width=103) (actual time=675.291..675.291 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 420616 -> Index Scan using xxx_parent_tables_automobiles_yolist_creation_time_idx on xxx_parent_tables_automobiles_yolist xxx_parent_tables (cost\n=0.00..25.05 rows=2 width=324) (actual time=0.016..0.016 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_deals_bagittoday_creation_time_idx on xxx_parent_tables_deals_bagittoday xxx_parent_tables (cost=0.0\n0..23882.78 rows=1 width=105) (actual time=234.672..234.672 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 84988\n -> Index Scan using xxx_parent_tables_deals_bindaasbargain_creation_time_idx on xxx_parent_tables_deals_bindaasbargain xxx_parent_tables (cost=0.00..25.05 rows=2 width=324) (actual time=0.016..0.016 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_deals_buzzr_creation_time_idx on xxx_parent_tables_deals_buzzr xxx_parent_tables (cost=0.00..11435.4\n1 rows=1 width=105) (actual time=109.466..109.466 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 33750 -> Index Scan using xxx_parent_tables_deals_dealdrums_creation_time_idx on xxx_parent_tables_deals_dealdrums xxx_parent_tables (cost=0.00.\n.51.61 rows=1 width=105) (actual time=0.917..0.917 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 941 -> Index Scan using xxx_parent_tables_deals_dealsandyou_creation_time_idx on xxx_parent_tables_deals_dealsandyou xxx_parent_tables (cost=0\n.00..25.05 rows=2 width=324) (actual time=0.012..0.012 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_deals_foodiebay_creation_time_idx on xxx_parent_tables_deals_foodiebay xxx_parent_tables (cost=0.00.\n.25.05 rows=2 width=324) (actual time=0.024..0.024 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_deals_futurebazaar_creation_time_idx on xxx_parent_tables_deals_futurebazaar xxx_parent_tables (cost\n=0.00..30.37 rows=1 width=109) (actual time=0.348..0.348 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_jobs_jobsa1_creation_time_idx on xxx_parent_tables_jobs_jobsa1 xxx_parent_tables (cost=0.00..25.05 r\nows=2 width=324) (actual time=0.020..0.020 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_jobs_jobsinnigeria_creation_time_idx on xxx_parent_tables_jobs_jobsinnigeria xxx_parent_tables (cost\n=0.00..25.05 rows=2 width=324) (actual time=0.013..0.013 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_jobs_khojle_creation_time_idx on xxx_parent_tables_jobs_khojle xxx_parent_tables (cost=0.00..25.05 r\nows=2 width=324) (actual time=0.013..0.013 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_jobs_midday_creation_time_idx on xxx_parent_tables_jobs_midday xxx_parent_tables (cost=0.00..25.05 r\nows=2 width=324) (actual time=0.011..0.011 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_jobs_monsterindia_creation_time_idx on xxx_parent_tables_jobs_monsterindia xxx_parent_tables 
(cost=0\n.00..31569.68 rows=81849 width=105) (actual time=279.393..544.467 rows=78622 loops=1) Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 155151\n -> Index Scan using xxx_parent_tables_jobs_mprc_creation_time_idx on xxx_parent_tables_jobs_mprc xxx_parent_tables (cost=0.00..25.05 rows=2 width=324) (actual time=0.016..0.016 rows=0 loops=1)\n Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_jobs_myjobsintanzania_creation_time_idx on xxx_parent_tables_jobs_myjobsintanzania xxx_parent_tables \n (cost=0.00..25.05 rows=2 width=324) (actual time=0.012..0.012 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_mobiles_verse_creation_time_idx on xxx_parent_tables_mobiles_verse xxx_parent_tables (cost=0.00..25.\n05 rows=2 width=324) (actual time=0.015..0.015 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) -> Index Scan using xxx_parent_tables_mobileseeker_quikr_creation_time_idx on xxx_parent_tables_mobileseeker_quikr xxx_parent_tables (cost\n=0.00..13.30 rows=1 width=105) (actual time=0.111..0.111 rows=0 loops=1) Filter: ((status = 1) OR (status IS NULL)) Rows Removed by Filter: 61 \n Filter: ((status = 1) OR (status IS NULL)) -> Materialize (cost=0.00..3.47 rows=2 width=8) (actual time=0.000..0.000 rows=2 loops=708260) -> Seq Scan on cp_info (cost=0.00..3.46 rows=2 width=8) (actual time=0.028..0.060 rows=2 loops=1)\n Filter: (domain_id = 5::bigint) Rows Removed by Filter: 115 Total runtime: 70481.560 ms(xxx rows)",
"msg_date": "Thu, 13 Jun 2013 13:19:42 +0530",
"msg_from": "K P Manoj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance"
},
{
"msg_contents": "On Thu, Jun 13, 2013 at 12:49 AM, K P Manoj <[email protected]> wrote:\n> One of my query treating performance issue on my production server.\n> Once i run query on my parent table with specific condition(hard coded\n> value) its uses only proper child table and its index on explain plan ,\n> but once i am using table conditions (instead of hard coded value), query\n> planner is going all the child tables, Can i know where i am worng\n\n From the docs:\n\n\"Constraint exclusion only works when the query's WHERE clause\ncontains constants (or externally supplied parameters). For example, a\ncomparison against a non-immutable function such as CURRENT_TIMESTAMP\ncannot be optimized, since the planner cannot know which partition the\nfunction value might fall into at run time.\"\n\nhttp://www.postgresql.org/docs/9.2/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS\n\n--\nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Jun 2013 02:03:43 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
}
] |
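To make the constraint-exclusion caveat above concrete, one workaround is to resolve the ids first and then run the partitioned-table query with them inlined as a constant array, so child tables can be pruned at plan time. This is only a sketch; the table and column names (xxx, xxx_parent_tables, cp_id, d_id, status, creation_time) follow the obfuscated ones in the post, and the function name is made up:

CREATE OR REPLACE FUNCTION latest_for_domain(p_d_id bigint)
RETURNS SETOF xxx_parent_tables AS $$
DECLARE
    v_ids bigint[];
BEGIN
    -- Step 1: resolve the ids outside the partitioned query.
    SELECT array_agg(id) INTO v_ids FROM xxx WHERE d_id = p_d_id;

    -- Step 2: inline them as a literal array so the planner sees constants
    -- and constraint exclusion can skip irrelevant child tables.
    RETURN QUERY EXECUTE format(
        'SELECT * FROM xxx_parent_tables
          WHERE cp_id = ANY (%L::bigint[])
            AND (status = 1 OR status IS NULL)
          ORDER BY creation_time LIMIT 40', v_ids);
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT * FROM latest_for_domain(5);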
[
{
"msg_contents": "Hello,\nReading code documentation of pg_stat_statements it says\n\n* As of Postgres 9.2, this module normalizes query entries. Normalization\n * is a process whereby similar queries, typically differing only in their\n * constants (though the exact rules are somewhat more subtle than that) are\n * recognized as equivalent, and are tracked as a single entry. This is\n * particularly useful for non-prepared queries.\n\nConsider query\nSELECT * FROM pgbench_branches LEFT JOIN pgbench_tellers ON\npgbench_tellers.bid= pgbench_branches.bid WHERE pgbench_branches.bID*=*5\n\nDoes this mean that all queries with just the constant changing are\nnormalized\n\npgbench_branches.bID*=*10*,*pgbench_branches.bID*=*15\n\nOr are queries where conditions changed included as well?\n\npgbench_branches.bID* <*10*,*pgbench_branches.bID*>*15\n\nregards\n\nSameer\n\n\n*\n*\n\nHello,Reading code documentation of pg_stat_statements it says * As of Postgres 9.2, this module normalizes query entries. Normalization * is a process whereby similar queries, typically differing only in their\n * constants (though the exact rules are somewhat more subtle than that) are * recognized as equivalent, and are tracked as a single entry. This is * particularly useful for non-prepared queries.\nConsider querySELECT * FROM pgbench_branches LEFT JOIN pgbench_tellers\nON pgbench_tellers.bid= pgbench_branches.bid WHERE pgbench_branches.bID=5Does this mean that all queries with just the constant changing are normalized pgbench_branches.bID=10,pgbench_branches.bID=15\nOr are queries where conditions changed included as well?pgbench_branches.bID <10,pgbench_branches.bID>15regards\nSameer",
"msg_date": "Mon, 17 Jun 2013 12:28:15 +0530",
"msg_from": "Sameer Thakur <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements query normalization"
},
{
"msg_contents": "On Sun, Jun 16, 2013 at 11:58 PM, Sameer Thakur <[email protected]> wrote:\n> Consider query\n> SELECT * FROM pgbench_branches LEFT JOIN pgbench_tellers ON\n> pgbench_tellers.bid= pgbench_branches.bid WHERE pgbench_branches.bID=5\n>\n> Does this mean that all queries with just the constant changing are\n> normalized\n>\n> pgbench_branches.bID=10,pgbench_branches.bID=15\n>\n> Or are queries where conditions changed included as well?\n\nWhy don't you play around with it and see for yourself? In general,\nqueries differing only in the values of constants are considered\nequivalent by the fingerprinting. pg_stat_statements usefully ignores\ndifferences in whitespace and equivalent syntaxes, by virtue of the\nfact that ultimately the post-parse analysis tree is fingerprinted.\nYou might say that pg_stat_statements leverages the normalization\ncapabilities of the core system by working off this later\nrepresentation (essentially, the internal representation that the\nrewriter stage processes).\n\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 17 Jun 2013 00:04:23 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements query normalization"
},
{
"msg_contents": ">Why don't you play around with it and see for yourself?\nI did that. Populated a sample table and then queried on it multiple times\nfor each condition (=,>,<) with different constant values. Then queried\npg_stat_statements view. Saw three different records corresponding to each\ncondition query text (=?,>?,<?).\nThank you\nSameer\n\n>Why don't you play around with it and see for yourself? I did that. Populated a sample table and then queried on it multiple times for each condition (=,>,<) with different constant values. Then queried pg_stat_statements view. Saw three different records corresponding to each condition query text (=?,>?,<?). \nThank youSameer",
"msg_date": "Wed, 19 Jun 2013 16:29:38 +0530",
"msg_from": "Sameer Thakur <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements query normalization"
}
] |
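For reference, the experiment described above can be reproduced directly in psql (this assumes the pg_stat_statements extension is installed and the pgbench tables exist); on 9.2 the replaced constants appear as ? in the stored query text:

-- Variants that differ only in the constant collapse into one entry;
-- a different operator gets its own entry.
SELECT count(*) FROM pgbench_branches WHERE bid = 5;
SELECT count(*) FROM pgbench_branches WHERE bid = 10;
SELECT count(*) FROM pgbench_branches WHERE bid < 10;

SELECT calls, query
  FROM pg_stat_statements
 WHERE query LIKE '%pgbench_branches%'
 ORDER BY calls DESC;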
[
{
"msg_contents": "Hello,\nI understand that when the pg_stat_statements.save=true the\nstatement statistics are saved at global/pg_stat_statements.stat. This file\nis read on next startup and then deleted.\nIf there is a crash i understand that pg_stat_statements.stat file is not\ncreated even if pg_stat_statements.save=true.\n If the crash happened before pg_stat_statements.stat file is deleted\nthen, on recovery, is that pg_stat_statements.stat file deleted?\nOn crash recovery, are statement statistics reset ,to same values as would\nbe the case on normal startup in the case pg_stat_statements.save=false?\nThank you\nSameer\n\nHello,I understand that when the pg_stat_statements.save=true the statement statistics are saved at global/pg_stat_statements.stat. This file is read on next startup and then deleted.\nIf there is a crash i understand that pg_stat_statements.stat file is not created even if pg_stat_statements.save=true.\n If the crash happened before pg_stat_statements.stat file is deleted then, on recovery, is that pg_stat_statements.stat file deleted?\nOn crash recovery, are statement statistics reset ,to same values as would be the case on normal startup in the case pg_stat_statements.save=false?\nThank youSameer",
"msg_date": "Wed, 19 Jun 2013 18:02:45 +0530",
"msg_from": "Sameer Thakur <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements behavior in crash recovery"
},
{
"msg_contents": "On Wed, Jun 19, 2013 at 5:32 AM, Sameer Thakur <[email protected]> wrote:\n> If there is a crash i understand that pg_stat_statements.stat file is not\n> created even if pg_stat_statements.save=true.\n> If the crash happened before pg_stat_statements.stat file is deleted then,\n> on recovery, is that pg_stat_statements.stat file deleted?\n> On crash recovery, are statement statistics reset ,to same values as would\n> be the case on normal startup in the case pg_stat_statements.save=false?\n\nThe pg_stat_statements statistics file is just deleted when the server\nstarts, and statistics are serialized to disk when there's a clean\nshutdown. pg_stat_statements is similar to the statistics collector\nhere.\n\nWhy are you posting this to the -performance list?\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Jun 2013 07:24:45 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements behavior in crash recovery"
},
{
"msg_contents": ">Why are you posting this to the -performance list?\nSorry, maybe -general was the correct place. I thought that\npg_stat_statements was a performance diagnostics tool, so -performance was\nthe correct forum\nThank you\nSameer\n\n>Why are you posting this to the -performance list?Sorry, maybe -general was the correct place. I thought that pg_stat_statements was a performance diagnostics tool, so -performance was the correct forum \nThank youSameer",
"msg_date": "Thu, 20 Jun 2013 09:34:13 +0530",
"msg_from": "Sameer Thakur <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements behavior in crash recovery"
}
] |
[
{
"msg_contents": "All;\n\nI'm working with a client running PostgreSQL on a Fusion-IO drive.\n\nThey have a PostgreSQL setup guide from Fusion recommending the \nfollowing settings:\neffective_io_concurrency=0\nbgwriter_lru_maxpages=0\nrandom_page_cost=0.1\nsequential_page_cost=0.1
\n\nThese seem odd to me, effectively turning the background writer off,\nplus setting both random_page_cost and sequential_page_cost to the same \n(very low) value...\n\nThoughts?\n\n\nThanks in advance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Jun 2013 13:56:28 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL settings for running on an SSD drive"
},
{
"msg_contents": "On 06/20/2013 02:56 PM, CS DBA wrote:\n\n> They have a PostgreSQL setup guide from Fusion recommending the\n> following settings:\n> effective_io_concurrency=0\n> bgwriter_lru_maxpages=0\n> random_page_cost=0.1\n> sequential_page_cost=0.1
\n\nWell, since FusionIO drives have a limited write cycle (5PB?), I can \nsomewhat see why they would recommend turning off the background writer. \nWe were a bit more conservative in our settings, though:\n\nseq_page_cost = 1.0 # Default\nrandom_page_cost = 1.0 # Reduce to match seq_page_cost\n\nYep. That's it. Just the one setting. FusionIO drives are fast, but \nthey're not infinitely fast. My tests (and others) show they're about \n1/2 the speed of memory, regarding IOPS. And while they can serve very \naggressive sequential reads, they're not orders of magnitude faster than \nspindles in anything but IOPS.\n\nKnowing that, we reduced random page fetches to be the same speed as \nsequential page fetches. This has served our heavy OLTP system (and its \nFusionIO underpinnings) very well so far.\n\nBut like I said, these are pretty conservative. I'd start at 1 and \nreduce in 0.2 increments and run tests to see if there's a beneficial \nchange.\n\nIf it helps, here's our system stats, some only relevant during \nfinancial hours:\n\n* A billion queries per day\n* Sustained 500+ write queries per second\n* Average 7000-ish transactions per second.\n* Average 35,000-ish queries per second.\n* pg_xlog and pgdata on same FusionIO device\n* 2 years, 3 months in operation\n* 1.29PB written\n* 1.75PB read\n\nThe load on our system right now is 3.7 on a 24 CPU box while serving \n4100 TPS after active trading hours. The FusionIO drive is basically the \nonly reason we can do all of that without a lot of excessive contortions.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Jun 2013 15:13:10 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL settings for running on an SSD drive"
},
{
"msg_contents": "Folks,\n\nFirst, cc'ing Greg Smith to see if he can address this with the Fusion\nfolks so that they stop giving out a bad guide.\n\nOn 06/20/2013 01:13 PM, Shaun Thomas wrote:\n> On 06/20/2013 02:56 PM, CS DBA wrote:\n> \n>> They have a PostgreSQL setup guide from Fusion recommending the\n>> following settings:\n>> effective_io_concurrency=0\n>> bgwriter_lru_maxpages=0\n>> random_page_cost=0.1\n>> sequential_page_cost=0.1
\n> \n> Well, since FusionIO drives have a limited write cycle (5PB?), I can\n> somewhat see why they would recommend turning off the background writer.\n> We were a bit more conservative in our settings, though:\n> \n> seq_page_cost = 1.0 # Default\n> random_page_cost = 1.0 # Reduce to match seq_page_cost\n> \n> Yep. That's it. Just the one setting. FusionIO drives are fast, but\n> they're not infinitely fast. My tests (and others) show they're about\n> 1/2 the speed of memory, regarding IOPS. And while they can serve very\n> aggressive sequential reads, they're not orders of magnitude faster than\n> spindles in anything but IOPS.\n> \n> Knowing that, we reduced random page fetches to be the same speed as\n> sequential page fetches. This has served our heavy OLTP system (and its\n> FusionIO underpinnings) very well so far.\n\nDid you compare setting RPC to 1.0 vs. setting it to 1.1, or something\nelse just slightly higher than SPC?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Jun 2013 13:32:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL settings for running on an SSD drive"
},
{
"msg_contents": "On 06/20/2013 03:32 PM, Josh Berkus wrote:\n\n> Did you compare setting RPC to 1.0 vs. setting it to 1.1, or something\n> else just slightly higher than SPC?\n\nYes, actually. My favored setting when we were on 8.3 was 1.5. But \nsomething with the planner changed pretty drastically when we went to \n9.1, and we were getting some really bad query plans unless we \n*strongly* suggested RPC was cheap. I was afraid I'd have to go lower, \nbut 1 seemed to do the trick.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Jun 2013 16:23:06 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL settings for running on an SSD drive"
},
{
"msg_contents": "\nOn 06/20/2013 05:23 PM, Shaun Thomas wrote:\n> On 06/20/2013 03:32 PM, Josh Berkus wrote:\n>\n>> Did you compare setting RPC to 1.0 vs. setting it to 1.1, or something\n>> else just slightly higher than SPC?\n>\n> Yes, actually. My favored setting when we were on 8.3 was 1.5. But \n> something with the planner changed pretty drastically when we went to \n> 9.1, and we were getting some really bad query plans unless we \n> *strongly* suggested RPC was cheap. I was afraid I'd have to go lower, \n> but 1 seemed to do the trick.\n>\n\nThat would be perverse, surely, but on Fusion-IO RPC = SPC seems to make \nsense unless you assume that cache misses will be higher for random \nreads than for sequential reads.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Jun 2013 17:32:34 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL settings for running on an SSD drive"
},
{
"msg_contents": "On 6/20/13 4:32 PM, Josh Berkus wrote:\n> First, cc'ing Greg Smith to see if he can address this with the Fusion\n> folks so that they stop giving out a bad guide.\n\nI'm working on a completely replacement of that guide, one that actually \ngives out a full set of advice. Right now I'm between their product \ncycles, I'm expecting new hardware again here soon.\n\nThe main thing that no one has done a good guide to is how to reduce SSD \nflash cell wear in a PostgreSQL database. If there's anyone out there \nwho is burning through enough FusionIO cells at your site where a) you \ncare about wear, and b) you can spare a drive for testing at your site, \nplease contact me off list if you'd like to talk about that. I have a \nproject aiming at increased lifetimes that's specific to FusionIO \nhardware, and it could use more testers. This involves a modified \nPostgreSQL though. It's not for the squeamish or for a production \nsystem. If I can't crash the kernel on the test server and that's fine, \nmove along, this will not help you for a while yet.\n\n >>> They have a PostgreSQL setup guide from Fusion recommending the\n >>> following settings:\n >>> effective_io_concurrency=0\n >>> bgwriter_lru_maxpages=0\n\nI finally tracked down where this all came from. As a general advisory \non what their current guide addresses, validation of its settings \nincluded things like testing with pgbench. It's a little known property \nof the built-in pgbench test that the best TPS *throughput* numbers come \nfrom turning the background writer off. That's why their guide suggests \ndoing that. The problem with that approach is that the background \nwriter is intended to improve *latency*, so measuring its impact on \nthroughput isn't the right approach.\n\nEnabling or disabling the background writer doesn't have a large impact \non flash wear. That wasn't why turning it off was recommended. It came \nout of the throughput improvement.\n\nSimilarly, effective_io_concurrency is untested by pgbench, its queries \naren't complicated enough. I would consider both of these settings \nworse than the defaults, and far from optimal for their hardware.\n\n>>> random_page_cost=0.1\n>>> sequential_page_cost=0.1
\n\nAs Shaun already commented on a bit--I agree with almost everything he \nsuggested--the situation with random vs. sequential I/O on this hardware \nis not as simple as it's made out to be sometimes. Random I/O certainly \nis not the same speed as sequential even on their hardware. Sequential \nI/O is not always 10X as fast as traditional drives. Using values \ncloser to 1.0 as he suggested is a lot more sensible to me.\n\nI also don't think random_page_cost = seq_page_cost is the best setting, \neven though it did work out for Shaun. The hardware is fast enough that \nyou can make a lot of mistakes without really paying for them though, \nand any query tuning like that is going to be workload dependent. It's \nimpossible to make a single recommendation here.\n\n>> FusionIO drives are fast, but\n>> they're not infinitely fast. My tests (and others) show they're about\n>> 1/2 the speed of memory, regarding IOPS.\n\nThis part needs a disclaimer on it. Memory speed varies significantly \nbetween servers. The range just on new servers right now goes from \n5GB/s to 40GB/s. There are a good number of single and dual socket \nIntel systems where \"1/2 the speed of memory\" is about right. There are \nsystems where the ratio will be closer to 1:1 or 1:4 though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Jul 2013 22:04:33 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL settings for running on an SSD drive"
},
{
"msg_contents": "On 07/17/2013 09:04 PM, Greg Smith wrote:\n\n> I'm working on a completely replacement of that guide, one that actually\n> gives out a full set of advice. Right now I'm between their product\n> cycles, I'm expecting new hardware again here soon.\n\nMe too. It's interesting that they seem to be focusing more on using the \ncards as a caching layer instead of a directly readable device. I still \nneed to test that use case.\n\n> This involves a modified PostgreSQL though. It's not for the\n> squeamish or for a production system.\n\nI'd volunteer, but we're actually using EDB. Unless you can convince EDB \nto supply similar binaries as you have, I can't get equivalent tests. :(\n\n> I also don't think random_page_cost = seq_page_cost is the best setting,\n> even though it did work out for Shaun. The hardware is fast enough that\n> you can make a lot of mistakes without really paying for them though,\n> and any query tuning like that is going to be workload dependent. It's\n> impossible to make a single recommendation here.\n\nVery, very true. I actually prefer using different values, and before \n9.1, we had random at 1.5, and sequential at 1.0. Some of our query \nplans were being adversely affected, and didn't go back to normal until \nI reduced random cost to 1.0. I can't explain why that would happen, but \nit's not too surprising given that we jumped from 8.2 to 9.1.\n\nSince we're mainly focused on stability right now, getting the old \nperformance back was the main concern. I haven't revisited the setting \nsince that initial upgrade and correction, so it's hard to know what the \n\"right\" setting really is. Like you said, there is a lot of room for \ntuning based on system usage.\n\n> There are a good number of single and dual socket Intel systems where\n> \"1/2 the speed of memory\" is about right. There are systems where\n> the ratio will be closer to 1:1 or 1:4 though.\n\nDoh, yeah. It also depends on the FusionIO generation and tier you're \nworking with. Some of their newer/bigger cards with more controller \nchips can (purportedly) push upwards of 6GB/s, which is a tad faster \nthan the 800MB/s (measured) of our ancient gen-1 cards.\n\nToo many variables. -_-\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jul 2013 15:42:08 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL settings for running on an SSD drive"
}
] |
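A small sketch of the trial-and-adjust approach suggested in this thread: override the cost settings per session, re-run EXPLAIN on representative queries, and only commit values to postgresql.conf once the plans look right. The query below is just a placeholder; substitute one from the real workload:

SET seq_page_cost = 1.0;        -- default
SET random_page_cost = 1.0;     -- Shaun's value; try steps of 0.1-0.2 up or down

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM pg_class;  -- placeholder query

RESET random_page_cost;
RESET seq_page_cost;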
[
{
"msg_contents": "I'm trying to optimize a query on a partitioned table. The schema looks\nlike this:\n\nCREATE TABLE observations(\n ts timestamptz NOT NULL DEFAULT now(),\n type text NOT NULL,\n subject uuid NOT NULL,\n details json NOT NULL\n);\n\nThe table is partitioned by ts (right now I have ~300 1h partitions, which\nI know is pushing it; I'm looking at daily instead, though for what it's\nworth, an unpartitioned table doesn't seem to perform much better here).\nThe query is:\n\nSELECT\n DISTINCT ON (type) ts, type, details\nFROM\n observations\nWHERE\n subject = '...'\nORDER BY\n type, ts DESC;\n\nThe cardinality of \"type\" is fairly low (~3 right now, probably less than\ntwo dozen in the foreseeable future). Most types are likely to have an\nentry with a very recent timestamp (most likely in the latest partition),\nbut I can't depend on that.\n\nI've tried a number of different index combinations of ts, type, and\nsubject (both composite and individual indexes), but nothing seems to run\nespecially quickly. The table has a fresh ANALYZE. I'm running 9.2.4. I've\nposted [1] an EXPLAIN ANALYZE for the version with an index on (subject,\ntype, ts). Any thoughts?\n\n[1]: http://explain.depesz.com/s/mnI\n\nI'm trying to optimize a query on a partitioned table. The schema looks like this:CREATE TABLE observations( ts timestamptz NOT NULL DEFAULT now(),\n type text NOT NULL, subject uuid NOT NULL, details json NOT NULL);The table is partitioned by ts (right now I have ~300 1h partitions, which I know is pushing it; I'm looking at daily instead, though for what it's worth, an unpartitioned table doesn't seem to perform much better here). The query is:\nSELECT DISTINCT ON (type) ts, type, detailsFROM observationsWHERE subject = '...'ORDER BY type, ts DESC;\nThe cardinality of \"type\" is fairly low (~3 right now, probably less than two dozen in the foreseeable future). Most types are likely to have an entry with a very recent timestamp (most likely in the latest partition), but I can't depend on that.\nI've tried a number of different index combinations of ts, type, and subject (both composite and individual indexes), but nothing seems to run especially quickly. The table has a fresh ANALYZE. I'm running 9.2.4. I've posted [1] an EXPLAIN ANALYZE for the version with an index on (subject, type, ts). Any thoughts?\n[1]: http://explain.depesz.com/s/mnI",
"msg_date": "Thu, 20 Jun 2013 18:24:46 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query tuning: partitioning, DISTINCT ON, and indexing"
},
{
"msg_contents": "On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda <[email protected]>wrote:\n\n> SELECT\n> DISTINCT ON (type) ts, type, details\n> FROM\n> observations\n> WHERE\n> subject = '...'\n> ORDER BY\n> type, ts DESC;\n>\n\nFirst thing: What is your \"work_mem\" set to, and how much RAM is in the\nmachine? If you look at the plan, you'll immediately notice the \"external\nmerge Disk\" line where it spills to disk on the sort. Try setting your\nwork_mem to 120MB or so (depending on how much RAM you have, # concurrent\nsessions, complexity of queries etc)\n\nOn Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda <[email protected]> wrote:\nSELECT DISTINCT ON (type) ts, type, detailsFROM observations\nWHERE subject = '...'ORDER BY type, ts DESC;First thing: What is your \"work_mem\" set to, and how much RAM is in the machine? If you look at the plan, you'll immediately notice the \"external merge Disk\" line where it spills to disk on the sort. Try setting your work_mem to 120MB or so (depending on how much RAM you have, # concurrent sessions, complexity of queries etc)",
"msg_date": "Thu, 20 Jun 2013 21:13:11 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning: partitioning, DISTINCT ON, and indexing"
},
{
"msg_contents": "On Thu, Jun 20, 2013 at 9:13 PM, bricklen <[email protected]> wrote:\n\n>\n> On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda <[email protected]>wrote:\n>\n>> SELECT\n>> DISTINCT ON (type) ts, type, details\n>> FROM\n>> observations\n>> WHERE\n>> subject = '...'\n>> ORDER BY\n>> type, ts DESC;\n>>\n>\n> First thing: What is your \"work_mem\" set to, and how much RAM is in the\n> machine? If you look at the plan, you'll immediately notice the \"external\n> merge Disk\" line where it spills to disk on the sort. Try setting your\n> work_mem to 120MB or so (depending on how much RAM you have, # concurrent\n> sessions, complexity of queries etc)\n>\n\nGood call, thanks, although the in-mem quicksort is not much faster:\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=471248.30..489392.67 rows=3 width=47) (actual\ntime=32002.133..32817.474 rows=3 loops=1)\n Buffers: shared read=30264\n -> Sort (cost=471248.30..480320.48 rows=3628873 width=47) (actual\ntime=32002.128..32455.950 rows=3628803 loops=1)\n Sort Key: public.observations.type, public.observations.ts\n Sort Method: quicksort Memory: 381805kB\n Buffers: shared read=30264\n -> Result (cost=0.00..75862.81 rows=3628873 width=47) (actual\ntime=0.026..1323.317 rows=3628803 loops=1)\n Buffers: shared read=30264\n -> Append (cost=0.00..75862.81 rows=3628873 width=47)\n(actual time=0.026..978.477 rows=3628803 loops=1)\n Buffers: shared read=30264\n...\n\nthe machine is not nailed down, but I think I'd need to find a way to\ndrastically improve the plan to keep this in Postgres. The alternative is\nprobably caching the results somewhere else: for any given subject, I only\nneed the latest observation of each type 99.9+% of the time.\n\nOn Thu, Jun 20, 2013 at 9:13 PM, bricklen <[email protected]> wrote:\n\nOn Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda <[email protected]> wrote:\nSELECT DISTINCT ON (type) ts, type, details\nFROM observations\nWHERE subject = '...'ORDER BY type, ts DESC;First thing: What is your \"work_mem\" set to, and how much RAM is in the machine? If you look at the plan, you'll immediately notice the \"external merge Disk\" line where it spills to disk on the sort. Try setting your work_mem to 120MB or so (depending on how much RAM you have, # concurrent sessions, complexity of queries etc)\n\nGood call, thanks, although the in-mem quicksort is not much faster:\n QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=471248.30..489392.67 rows=3 width=47) (actual time=32002.133..32817.474 rows=3 loops=1) Buffers: shared read=30264 -> Sort (cost=471248.30..480320.48 rows=3628873 width=47) (actual time=32002.128..32455.950 rows=3628803 loops=1)\n Sort Key: public.observations.type, public.observations.ts Sort Method: quicksort Memory: 381805kB Buffers: shared read=30264\n -> Result (cost=0.00..75862.81 rows=3628873 width=47) (actual time=0.026..1323.317 rows=3628803 loops=1) Buffers: shared read=30264\n -> Append (cost=0.00..75862.81 rows=3628873 width=47) (actual time=0.026..978.477 rows=3628803 loops=1) Buffers: shared read=30264\n...the machine is not nailed down, but I think I'd need to find a way to drastically improve the plan to keep this in Postgres. 
The alternative is probably caching the results somewhere else: for any given subject, I only need the latest observation of each type 99.9+% of the time.",
"msg_date": "Thu, 20 Jun 2013 22:14:40 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning: partitioning, DISTINCT ON, and indexing"
},
{
"msg_contents": "On Thu, Jun 20, 2013 at 10:14 PM, Maciek Sakrejda <[email protected]>wrote:\n\n> On Thu, Jun 20, 2013 at 9:13 PM, bricklen <[email protected]> wrote:\n>\n>>\n>> On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda <[email protected]>wrote:\n>>\n>>> SELECT\n>>> DISTINCT ON (type) ts, type, details\n>>> FROM\n>>> observations\n>>> WHERE\n>>> subject = '...'\n>>> ORDER BY\n>>> type, ts DESC;\n>>>\n>>\n>> First thing: What is your \"work_mem\" set to, and how much RAM is in the\n>> machine? If you look at the plan, you'll immediately notice the \"external\n>> merge Disk\" line where it spills to disk on the sort. Try setting your\n>> work_mem to 120MB or so (depending on how much RAM you have, # concurrent\n>> sessions, complexity of queries etc)\n>>\n>\n> Good call, thanks, although the in-mem quicksort is not much faster:\n>\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=471248.30..489392.67 rows=3 width=47) (actual\n> time=32002.133..32817.474 rows=3 loops=1)\n> Buffers: shared read=30264\n> -> Sort (cost=471248.30..480320.48 rows=3628873 width=47) (actual\n> time=32002.128..32455.950 rows=3628803 loops=1)\n> Sort Key: public.observations.type, public.observations.ts\n> Sort Method: quicksort Memory: 381805kB\n> Buffers: shared read=30264\n> -> Result (cost=0.00..75862.81 rows=3628873 width=47) (actual\n> time=0.026..1323.317 rows=3628803 loops=1)\n> Buffers: shared read=30264\n> -> Append (cost=0.00..75862.81 rows=3628873 width=47)\n> (actual time=0.026..978.477 rows=3628803 loops=1)\n> Buffers: shared read=30264\n> ...\n>\n> the machine is not nailed down, but I think I'd need to find a way to\n> drastically improve the plan to keep this in Postgres. The alternative is\n> probably caching the results somewhere else: for any given subject, I only\n> need the latest observation of each type 99.9+% of the time.\n>\n\n\n Here are some pages that might help for what details to provide:\nhttps://wiki.postgresql.org/wiki/Server_Configuration\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\nDid you try an index on (type, ts desc) ? I don't have much else to add at\nthis point, but maybe after posting some more server and table (parent and\nchild) details someone will have an answer for you.\n\nOn Thu, Jun 20, 2013 at 10:14 PM, Maciek Sakrejda <[email protected]> wrote:\nOn Thu, Jun 20, 2013 at 9:13 PM, bricklen <[email protected]> wrote:\n\n\nOn Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda <[email protected]> wrote:\nSELECT DISTINCT ON (type) ts, type, details\nFROM observations\nWHERE subject = '...'ORDER BY type, ts DESC;First thing: What is your \"work_mem\" set to, and how much RAM is in the machine? If you look at the plan, you'll immediately notice the \"external merge Disk\" line where it spills to disk on the sort. 
Try setting your work_mem to 120MB or so (depending on how much RAM you have, # concurrent sessions, complexity of queries etc)\n\nGood call, thanks, although the in-mem quicksort is not much faster:\n\n QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=471248.30..489392.67 rows=3 width=47) (actual time=32002.133..32817.474 rows=3 loops=1) Buffers: shared read=30264 -> Sort (cost=471248.30..480320.48 rows=3628873 width=47) (actual time=32002.128..32455.950 rows=3628803 loops=1)\n Sort Key: public.observations.type, public.observations.ts Sort Method: quicksort Memory: 381805kB Buffers: shared read=30264\n -> Result (cost=0.00..75862.81 rows=3628873 width=47) (actual time=0.026..1323.317 rows=3628803 loops=1) Buffers: shared read=30264\n -> Append (cost=0.00..75862.81 rows=3628873 width=47) (actual time=0.026..978.477 rows=3628803 loops=1) Buffers: shared read=30264\n...the machine is not nailed down, but I think I'd need to find a way to drastically improve the plan to keep this in Postgres. The alternative is probably caching the results somewhere else: for any given subject, I only need the latest observation of each type 99.9+% of the time.\n Here are some pages that might help for what details to provide: https://wiki.postgresql.org/wiki/Server_Configuration\nhttps://wiki.postgresql.org/wiki/Slow_Query_QuestionsDid you try an index on (type, ts desc) ? I don't have much else to add at this point, but maybe after posting some more server and table (parent and child) details someone will have an answer for you.",
"msg_date": "Fri, 21 Jun 2013 09:08:38 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning: partitioning, DISTINCT ON, and indexing"
},
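For reference, a minimal sketch of the index being suggested here, using the column names from the query upthread (observations, type, ts). Depending on the WHERE clause it may make sense to lead with subject instead, and on a partitioned table the same index normally has to be created on each child as well (the child name below is hypothetical):

CREATE INDEX observations_type_ts_idx
    ON observations (type, ts DESC);

-- repeated per partition, e.g. (hypothetical child table name):
-- CREATE INDEX observations_2013_06_type_ts_idx
--     ON observations_2013_06 (type, ts DESC);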
{
"msg_contents": "On Fri, Jun 21, 2013 at 9:08 AM, bricklen <[email protected]> wrote:\n\n> Did you try an index on (type, ts desc) ? I don't have much else to add at\n> this point, but maybe after posting some more server and table (parent and\n> child) details someone will have an answer for you.\n>\n\nNo, this is exactly what I was missing. I had forgotten the default index\norder is useless for a descending lookup like this: I made the change and\nthe performance is 3000x better (the plan's using the index now). Thanks\nfor all your help.\n\nOn Fri, Jun 21, 2013 at 9:08 AM, bricklen <[email protected]> wrote:\nDid you try an index on (type, ts desc) ? I don't have much else to add at this point, but maybe after posting some more server and table (parent and child) details someone will have an answer for you.\nNo, this is exactly what I was missing. I had forgotten the default index order is useless for a descending lookup like this: I made the change and the performance is 3000x better (the plan's using the index now). Thanks for all your help.",
"msg_date": "Fri, 21 Jun 2013 10:08:50 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning: partitioning, DISTINCT ON, and indexing"
}
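An alternative sketch for the latest-observation-per-type pattern, for readers on PostgreSQL 9.3 or later (LATERAL was not available in the releases current when this thread was written). The VALUES list of types is purely hypothetical, and '...' is the same placeholder used in the original query:

SELECT o.ts, o.type, o.details
FROM (VALUES ('type_a'), ('type_b'), ('type_c')) AS t(type)
CROSS JOIN LATERAL (
    SELECT ts, type, details
    FROM observations
    WHERE subject = '...'
      AND type = t.type
    ORDER BY ts DESC
    LIMIT 1
) AS o;

With an index leading on (subject, type, ts DESC), each branch becomes a single descending index probe rather than a sort over every matching row.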
] |
[
{
"msg_contents": "Folks,\n\nI'm getting something really odd in 9.2.4, where the planner estimates\nthat the selectivity of a column equal to itself is always exactly 0.5%\n(i.e. 0.005X). I can't figure out where this constant is coming from,\nor why it's being applied.\n\nTest case:\n\ncreate table esttest (\n id int not null primary key,\n state1 int not null default 0,\n state2 int not null default 0,\n state3 int not null default 0\n);\n\ninsert into esttest (id, state1, state2, state3)\nselect i,\n (random()*3)::int,\n (random())::int,\n (random()*100)::int\nfrom generate_series (1, 20000)\n as gs(i);\n\nvacuum analyze esttest;\n\nexplain analyze\nselect * from esttest\nwhere state1 = state1;\n\nexplain analyze\nselect * from esttest\nwhere state2 = state2;\n\nexplain analyze\nselect * from esttest\nwhere state3 = state3;\n\nResults of test case:\n\nbadestimate=# explain analyze\nbadestimate-# select * from esttest\nbadestimate-# where state1 = state1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Seq Scan on esttest (cost=0.00..359.00 rows=100 width=16) (actual\ntime=0.009..4.145 rows=20000 loops=1)\n Filter: (state1 = state1)\n Total runtime: 5.572 ms\n(3 rows)\n\nbadestimate=#\nbadestimate=# explain analyze\nbadestimate-# select * from esttest\nbadestimate-# where state2 = state2;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Seq Scan on esttest (cost=0.00..359.00 rows=100 width=16) (actual\ntime=0.006..4.166 rows=20000 loops=1)\n Filter: (state2 = state2)\n Total runtime: 5.595 ms\n(3 rows)\n\nbadestimate=#\nbadestimate=# explain analyze\nbadestimate-# select * from esttest\nbadestimate-# where state3 = state3;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Seq Scan on esttest (cost=0.00..359.00 rows=100 width=16) (actual\ntime=0.005..4.298 rows=20000 loops=1)\n Filter: (state3 = state3)\n Total runtime: 5.716 ms\n(3 rows)\n\n\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 21 Jun 2013 12:52:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird, bad 0.5% selectivity estimate for a column equal to itself"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> I'm getting something really odd in 9.2.4, where the planner estimates\n> that the selectivity of a column equal to itself is always exactly 0.5%\n> (i.e. 0.005X). I can't figure out where this constant is coming from,\n> or why it's being applied.\n\nSee DEFAULT_EQ_SEL. But why exactly do you care? Surely it's a stupid\nquery and you should fix it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 21 Jun 2013 17:32:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird, bad 0.5% selectivity estimate for a column equal to itself"
},
{
"msg_contents": "On 06/21/2013 02:32 PM, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> I'm getting something really odd in 9.2.4, where the planner estimates\n>> that the selectivity of a column equal to itself is always exactly 0.5%\n>> (i.e. 0.005X). I can't figure out where this constant is coming from,\n>> or why it's being applied.\n> \n> See DEFAULT_EQ_SEL. \n\nWhy is it using that? We have statistics on the column. What reason\nwould it have for using a default estimate?\n\n> But why exactly do you care? Surely it's a stupid\n> query and you should fix it.\n\n(a) that test case is a substantial simplication of a much more complex\nquery, one which exhibits actual execution time issues because of this\nselectivity bug.\n\n(b) that query is also auto-generated by external software, so \"just fix\nit\" isn't as easy as it sounds.\n\n(c) PostgreSQL ought to perform well even on the stupid queries.\n\nObviously, we're going to code around this for the existing software,\nbut this is an example of a planner bug which should be on the fix list.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 21 Jun 2013 15:21:28 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird, bad 0.5% selectivity estimate for a column equal to itself"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> On 06/21/2013 02:32 PM, Tom Lane wrote:\n>> See DEFAULT_EQ_SEL. \n\n> Why is it using that? We have statistics on the column. What reason\n> would it have for using a default estimate?\n\nThe stats are generally consulted for \"Var Op Constant\" scenarios.\nIt doesn't know what to do with \"Var Op Var\" cases that aren't joins.\nAs long as we lack cross-column-correlation stats I doubt it'd be very\nhelpful to try to derive a stats-based number for such cases. Of\ncourse, \"X = X\" is a special case, but ...\n\n>> But why exactly do you care? Surely it's a stupid\n>> query and you should fix it.\n\n> (b) that query is also auto-generated by external software, so \"just fix\n> it\" isn't as easy as it sounds.\n\nPersonally, I'll bet lunch that that external software is outright\nbroken, ie it probably thinks \"X = X\" is constant true and they found\nthey could save two lines of code and a few machine cycles by emitting\nthat rather than not emitting anything. Of course, the amount of\nparsing/planning time wasted in dealing with the useless-and-incorrect\nclause exceeds what was saved by multiple orders of magnitude, but hey\nit was easy.\n\nIt wouldn't take too much new code to get the planner to replace \"X = X\"\nwith \"X IS NOT NULL\", but I think we're probably fixing the wrong piece\nof software if we do.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 22 Jun 2013 16:24:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird, bad 0.5% selectivity estimate for a column equal to itself"
},
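A minimal illustration with the esttest table from the test case upthread: because state1 is declared NOT NULL, the predicate below is equivalent to "state1 = state1", and it is something the planner already estimates accurately.

EXPLAIN ANALYZE
SELECT *
FROM esttest
WHERE state1 IS NOT NULL;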
{
"msg_contents": "\n> Personally, I'll bet lunch that that external software is outright\n> broken, ie it probably thinks \"X = X\" is constant true and they found\n> they could save two lines of code and a few machine cycles by emitting\n> that rather than not emitting anything. Of course, the amount of\n> parsing/planning time wasted in dealing with the useless-and-incorrect\n> clause exceeds what was saved by multiple orders of magnitude, but hey\n> it was easy.\n\nWell, it was more in the form of:\n\ntab1.x = COALESCE(tab2.y,tab1.x)\n\n... which some programmer 8 years ago though would be a cute shorthand for:\n\ntab.x = tab2.y OR tab2.y IS NULL\n\nStill stupid, sure, but when you're dealing with partly-third-party\nlegacy software which was ported from MSSQL (which has issues with \"IS\nNULL\"), that's what you get.\n\n> It wouldn't take too much new code to get the planner to replace \"X = X\"\n> with \"X IS NOT NULL\", but I think we're probably fixing the wrong piece\n> of software if we do.\n\nWell, I'd be more satisfied with having a solution for:\n\nWHERE tab1.x = tab1.y\n\n... in general, even if it didn't have correlation stats. Like, what's\npreventing us from using the same selectivity logic we would on a join\nfor that? It wouldn't be accurate for highly correlated columns (or for\ncolX = colx) but it would be a damsight better than defaultsel. Heck,\neven multiplying the the two ndistincts together would be an improvement ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 16:10:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird, bad 0.5% selectivity estimate for a column equal to itself"
},
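For comparison, one way of spelling that intent so the planner can use the column statistics; tab1, tab2, x and y are the stand-in names from this message, and the join condition is hypothetical since the real query is not shown:

SELECT count(*)
FROM tab1
JOIN tab2 ON tab2.id = tab1.id      -- hypothetical join key
WHERE tab2.y IS NULL OR tab1.x = tab2.y;

Written this way the estimate comes out as roughly the null fraction of tab2.y plus an ordinary equality selectivity, instead of DEFAULT_EQ_SEL.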
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> Personally, I'll bet lunch that that external software is outright\n>> broken, ie it probably thinks \"X = X\" is constant true and they found\n>> they could save two lines of code and a few machine cycles by emitting\n>> that rather than not emitting anything.\n\n> Well, it was more in the form of:\n> tab1.x = COALESCE(tab2.y,tab1.x)\n\nHm. I'm not following how you get from there to complaining about not\nbeing smart about X = X, because that surely ain't the same.\n\n> Well, I'd be more satisfied with having a solution for:\n> WHERE tab1.x = tab1.y\n> ... in general, even if it didn't have correlation stats. Like, what's\n> preventing us from using the same selectivity logic we would on a join\n> for that?\n\nIt's a totally different case. In the join case you expect that each\nelement of one table will be compared with each element of the other.\nIn the single-table case, that's exactly *not* what will happen, and\nI don't see how you get to anything very useful without knowing\nsomething about the value pairs that actually occur. As a concrete\nexample, applying the join selectivity logic would certainly give a\ncompletely wrong answer for X = X, unless there were only one value\noccurring in the column.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 21:41:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird, bad 0.5% selectivity estimate for a column equal to itself"
},
{
"msg_contents": "On 06/25/2013 06:41 PM, Tom Lane wrote:\n>> Well, it was more in the form of:\n>> tab1.x = COALESCE(tab2.y,tab1.x)\n> \n> Hm. I'm not following how you get from there to complaining about not\n> being smart about X = X, because that surely ain't the same.\n\nActually, it was dominated by defaultsel, since tab2.y had a nullfrac of\n70%. It took us a couple days of reducing the bad query plan to figure\nout where the bad estimate was coming from. The real estimate should\nhave been 0.7 + ( est. tab2.y = tab1.x ), but instead we were getting\n0.005 + ( est. tab2.y = tab1.x ), which was throwing the whole query\nplan way off ... with an execution time difference of 900X.\n\n> It's a totally different case. In the join case you expect that each\n> element of one table will be compared with each element of the other.\n> In the single-table case, that's exactly *not* what will happen, and\n> I don't see how you get to anything very useful without knowing\n> something about the value pairs that actually occur. \n\nSure you can. If you make the assumption that there is 0 correlation,\nthen you can simply estimate the comparison as between two random\ncolumns. In the simplest approach, you would multiply the two\nndistincts, so that a column with 3 values would match a column with 10\nvalues 0.033 of the time.\n\nNow for a real estimator, we'd of course want to use the MCVs and the\nhistogram to calculate a better estimation; obviously our 3X10 table is\ngoing to match 0% of the time if col1 is [1,2,3] and col2 contains\nvalues from 1000 to 1100. The MCVs would be particularly valuable here;\nif the same MCV appears in both columns, we can multiply the probabilities.\n\nTo me, this seems just like estimating on a foreign table match, only\nsimpler. Of course, a coefficient of corellation would make it even\nmore accurate, but even without one we can arrive at much better\nestimates than defaultsel.\n\n> As a concrete\n> example, applying the join selectivity logic would certainly give a\n> completely wrong answer for X = X, unless there were only one value\n> occurring in the column.\n\nYeah, I think we'll eventually need to special-case that one. In the\nparticular case I ran across, though, using column match estimation\nwould have still yielded a better result than defaultsel.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 16:04:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Weird, bad 0.5% selectivity estimate for a column equal to itself"
}
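The per-column inputs such an estimator would have to work with can be inspected directly; for the esttest example from the start of the thread:

SELECT attname, null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename = 'esttest';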
] |
[
{
"msg_contents": "hello postgresql experts --\n\ni have a strange row estimate problem, which unfortunately i have trouble reproducing for anything but very large tables which makes boiling down to a simple example hard. i'm using version 9.1.1, all tables have had analyze run on them.\n\nhere's the example : i have a large table (billions of rows) with a five column primary key and a sixth value column. for confidentiality purposes i can't reproduce the exact schema here, but it is essentially\n\ncreate table bigtable (\n id1 integer not null,\n id2 date not null,\n id3 integer not null,\n id4 time not null,\n id5 integer not null,\n value real not null,\n primary key (id1, id2, id3, id4, id5)\n);\n\nfor various reasons there is only one id1 in the table right now, though in the future there will be more; also the primary key was added after table creation with alter table, though that shouldn't matter.\n\ni need to select out a somewhat arbitrary collection of rows out of bigtable. to do so i generate a temporary table \n\ncreate table jointable (\n id1 integer not null,\n id2 date not null,\n id3 integer not null,\n id4 time not null,\n id5 integer not null\n);\n\nand then perform a join against this table.\n\nif jointable doesn't have many rows, the planner picks a nested loop over jointable and a primary key lookup on bigtable. in the following, for expository purposes, jointable has 10 rows. we can see the planner knows this.\n\nexplain select * from bigtable join jointable using (id1, id2, id3, id4, id5);\n\n Nested Loop (cost=0.00..6321.03 rows=145 width=28)\n -> Seq Scan on jointable (cost=0.00..1.10 rows=10 width=24)\n -> Index Scan using bigtable_pkey on bigtable (cost=0.00..631.97 rows=1 width=28)\n Index Cond: ((id1 = jointable.id1) AND (id2 = jointable.id2) AND (id3 = jointable.id3) AND (id4 = jointable.id4) AND (vid = foo.vid))\n(4 rows)\n\nas you increase the number of rows in jointable, the planner switches to a sort + merge. 
in this case jointable has roughly 2 million rows.\n\n Merge Join (cost=727807979.29..765482193.16 rows=18212633 width=28)\n Merge Cond: ((bigtable.id1 = jointabe.id1) AND (bigtable.id2 = jointable.id2) AND (bigtable.id3 = jointable.id3) AND (bigtable.id4 = bigtable.id4) AND (bigtable.id5 = bigtable.id5))\n -> Sort (cost=727511795.16..735430711.00 rows=3167566336 width=28)\n Sort Key: bigtable.id3, bigtable.id1, bigtable.id2, bigtable.id4, bigtable.id5\n -> Seq Scan on bigtable (cost=0.00..76085300.36 rows=3167566336 width=28)\n -> Materialize (cost=295064.70..305399.26 rows=2066911 width=24)\n -> Sort (cost=295064.70..300231.98 rows=2066911 width=24)\n Sort Key: jointable.id3, jointable.id1, jointable.id2, jointable.id4, jointable.id5\n -> Seq Scan on jointable (cost=0.00..35867.11 rows=2066911 width=24)\n(9 rows)\n\nthe choice of sort + merge is really bad here, given the size of bigtable (3 billion rows and counting.)\n\nsome questions :\n\n1 - first off, why isn't the sort happening on the primary key, so that bigtable does not have to be sorted?\n\n2 - more importantly, since we are joining on the primary key, shouldn't the row estimate coming out of the join be limited by the number of rows in jointable?\n\nfor example, it is strange to see that if i put in a non-limiting limit statement (something bigger than the number of rows in jointable) it switches back to a nested loop + index scan :\n\nexplain select * from bigtable join jointable using (id1, id2, id3, id4, id5) limit 2500000;\n\n Limit (cost=0.00..178452647.11 rows=2500000 width=28)\n -> Nested Loop (cost=0.00..1306127545.35 rows=18297957 width=28)\n -> Seq Scan on jointable (cost=0.00..35867.11 rows=2066911 width=24)\n -> Index Scan using bigtable_pkey on bigtable (cost=0.00..631.88 rows=1 width=28)\n Index Cond: ((id1 = jointable.id1) AND (id2 = jointable.id2) AND (id3 = jointable.id3) AND (id4 = jointable.id4) AND (id5 = jointable.id5))\n(5 rows)\n\nam i not understanding the query planner, or is this a known issue in the query planner, or have i stumbled onto something amiss? unfortunately any synthetic examples i was able to make (up to 10 million rows) did not exhibit this behavior, which makes it hard to test.\n\nbest regards, ben\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Jun 2013 15:18:11 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "incorrect row estimates for primary key join"
},
{
"msg_contents": "On Mon, Jun 24, 2013 at 3:18 PM, Ben <[email protected]> wrote:\n\n>\n> create table jointable (\n> id1 integer not null,\n> id2 date not null,\n> id3 integer not null,\n> id4 time not null,\n> id5 integer not null\n> );\n>\n> and then perform a join against this table.\n>\n\nIs it safe to assume you ANALYZEd the jointable after creating it? (I\nassume so, just checking)\n\n\n\n> as you increase the number of rows in jointable, the planner switches to a\n> sort + merge. in this case jointable has roughly 2 million rows.\n>\n\nCan you post the output of:\n\nSELECT version();\nSELECT name, current_setting(name), source\nFROM pg_settings\nWHERE source NOT IN ('default', 'override');\n\nOn Mon, Jun 24, 2013 at 3:18 PM, Ben <[email protected]> wrote:\n\ncreate table jointable (\n id1 integer not null,\n id2 date not null,\n id3 integer not null,\n id4 time not null,\n id5 integer not null\n);\n\nand then perform a join against this table.Is it safe to assume you ANALYZEd the jointable after creating it? (I assume so, just checking) \n\n\nas you increase the number of rows in jointable, the planner switches to a sort + merge. in this case jointable has roughly 2 million rows.Can you post the output of:\nSELECT version();SELECT name, current_setting(name), sourceFROM pg_settingsWHERE source NOT IN ('default', 'override');",
"msg_date": "Mon, 24 Jun 2013 16:23:26 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
{
"msg_contents": "\nhello --\n\nOn Jun 24, 2013, at 4:23 PM, bricklen wrote:\n\n> Is it safe to assume you ANALYZEd the jointable after creating it? (I assume so, just checking)\n\nyes, jointable was analyzed. both tables were further analyzed after any changes.\n\n> Can you post the output of:\n> \n> SELECT version();\n> SELECT name, current_setting(name), source\n> FROM pg_settings\n> WHERE source NOT IN ('default', 'override');\n\n version\n---------------------------------------------------------------------------------------\n PostgreSQL 9.1.1 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 4.6.2, 64-bit\n(1 row)\n\n name | current_setting | source\n------------------------------+--------------------+----------------------\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_segments | 16 | configuration file\n DateStyle | ISO, MDY | configuration file\n default_statistics_target | 50 | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 5632MB | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_destination | stderr | configuration file\n log_line_prefix | %t %d %u | configuration file\n log_timezone | US/Pacific | environment variable\n logging_collector | on | configuration file\n maintenance_work_mem | 480MB | configuration file\n max_connections | 300 | configuration file\n max_stack_depth | 2MB | environment variable\n max_wal_senders | 3 | configuration file\n search_path | public | user\n shared_buffers | 1920MB | configuration file\n TimeZone | US/Pacific | environment variable\n wal_buffers | 8MB | configuration file\n wal_keep_segments | 128 | configuration file\n wal_level | hot_standby | configuration file\n work_mem | 48MB | configuration file\n(26 rows)\n\nhope this helps!\n\nthanks, ben\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Jun 2013 16:48:47 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
{
"msg_contents": "Ben <[email protected]> wrote:\n\n> PostgreSQL 9.1.1 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 4.6.2, 64-bit\n\nConsider applying the latest bug fixes for 9.1 -- which would leave\nyou showing 9.1.9.\n\nhttp://www.postgresql.org/support/versioning/\n\n> default_statistics_target | 50 | configuration file\n\nWhy did you change this from the default of 100?\n\n> effective_cache_size | 5632MB | configuration file\n\nHow much RAM is on this machine? What else is running on it? \n(Normally people set this to 50% to 75% of total RAM. Lower values\ndiscourage index usage in queries like your example.)\n\nDo you get a different plan if you set cpu_tuple_cost = 0.03? How\nabout 0.05? You can set this just for a single connection and run\nexplain on the query to do a quick check.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 06:20:29 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
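The quick check Kevin describes needs no configuration-file change; the setting lasts only for the current session, and 0.03 is just the first value suggested above:

SET cpu_tuple_cost = 0.03;
EXPLAIN
SELECT *
FROM bigtable
JOIN jointable USING (id1, id2, id3, id4, id5);
RESET cpu_tuple_cost;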
{
"msg_contents": "On 06/25/2013 00:18, Ben wrote:\n> hello postgresql experts --\n>\n> i have a strange row estimate problem, which unfortunately i have trouble reproducing for anything but very large tables which makes boiling down to a simple example hard. i'm using version 9.1.1, all tables have had analyze run on them.\n>\n> here's the example : i have a large table (billions of rows) with a five column primary key and a sixth value column. for confidentiality purposes i can't reproduce the exact schema here, but it is essentially\n>\n> create table bigtable (\n> id1 integer not null,\n> id2 date not null,\n> id3 integer not null,\n> id4 time not null,\n> id5 integer not null,\n> value real not null,\n> primary key (id1, id2, id3, id4, id5)\n> );\n>\n> for various reasons there is only one id1 in the table right now, though in the future there will be more; also the primary key was added after table creation with alter table, though that shouldn't matter.\n>\n> i need to select out a somewhat arbitrary collection of rows out of bigtable. to do so i generate a temporary table\n>\n> create table jointable (\n> id1 integer not null,\n> id2 date not null,\n> id3 integer not null,\n> id4 time not null,\n> id5 integer not null\n> );\n>\n> and then perform a join against this table.\n>\n> if jointable doesn't have many rows, the planner picks a nested loop over jointable and a primary key lookup on bigtable. in the following, for expository purposes, jointable has 10 rows. we can see the planner knows this.\n>\n> explain select * from bigtable join jointable using (id1, id2, id3, id4, id5);\n>\n> Nested Loop (cost=0.00..6321.03 rows=145 width=28)\n> -> Seq Scan on jointable (cost=0.00..1.10 rows=10 width=24)\n> -> Index Scan using bigtable_pkey on bigtable (cost=0.00..631.97 rows=1 width=28)\n> Index Cond: ((id1 = jointable.id1) AND (id2 = jointable.id2) AND (id3 = jointable.id3) AND (id4 = jointable.id4) AND (vid = foo.vid))\n> (4 rows)\n>\n> as you increase the number of rows in jointable, the planner switches to a sort + merge. 
in this case jointable has roughly 2 million rows.\n>\n> Merge Join (cost=727807979.29..765482193.16 rows=18212633 width=28)\n> Merge Cond: ((bigtable.id1 = jointabe.id1) AND (bigtable.id2 = jointable.id2) AND (bigtable.id3 = jointable.id3) AND (bigtable.id4 = bigtable.id4) AND (bigtable.id5 = bigtable.id5))\n> -> Sort (cost=727511795.16..735430711.00 rows=3167566336 width=28)\n> Sort Key: bigtable.id3, bigtable.id1, bigtable.id2, bigtable.id4, bigtable.id5\n> -> Seq Scan on bigtable (cost=0.00..76085300.36 rows=3167566336 width=28)\n> -> Materialize (cost=295064.70..305399.26 rows=2066911 width=24)\n> -> Sort (cost=295064.70..300231.98 rows=2066911 width=24)\n> Sort Key: jointable.id3, jointable.id1, jointable.id2, jointable.id4, jointable.id5\n> -> Seq Scan on jointable (cost=0.00..35867.11 rows=2066911 width=24)\n> (9 rows)\n\ncan you show us the explain analyze version ?\n\n> the choice of sort + merge is really bad here, given the size of bigtable (3 billion rows and counting.)\n>\n> some questions :\n>\n> 1 - first off, why isn't the sort happening on the primary key, so that bigtable does not have to be sorted?\n>\n> 2 - more importantly, since we are joining on the primary key, shouldn't the row estimate coming out of the join be limited by the number of rows in jointable?\n>\n> for example, it is strange to see that if i put in a non-limiting limit statement (something bigger than the number of rows in jointable) it switches back to a nested loop + index scan :\n>\n> explain select * from bigtable join jointable using (id1, id2, id3, id4, id5) limit 2500000;\n>\n> Limit (cost=0.00..178452647.11 rows=2500000 width=28)\n> -> Nested Loop (cost=0.00..1306127545.35 rows=18297957 width=28)\n> -> Seq Scan on jointable (cost=0.00..35867.11 rows=2066911 width=24)\n> -> Index Scan using bigtable_pkey on bigtable (cost=0.00..631.88 rows=1 width=28)\n> Index Cond: ((id1 = jointable.id1) AND (id2 = jointable.id2) AND (id3 = jointable.id3) AND (id4 = jointable.id4) AND (id5 = jointable.id5))\n> (5 rows)\n>\n> am i not understanding the query planner, or is this a known issue in the query planner, or have i stumbled onto something amiss? unfortunately any synthetic examples i was able to make (up to 10 million rows) did not exhibit this behavior, which makes it hard to test.\n>\n> best regards, ben\n>\n>\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 16:43:56 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
{
"msg_contents": "hello --\n\nthanks kevin for the tuning advice, i will answer your questions below and try different tuning configurations and report back. but first allow me take a step back and ask a couple simple questions :\n\nit seems to me that an equality join between two relations (call them A and B) using columns in relation B with a unique constraint should yield row estimates which are at most equal to the row estimates for relation A. my questions are\n\n1 - is this correct?\n\n2 - does the postgresql planner implement this when generating row estimates?\n\nit seems like if the answers to 1 and 2 are yes, then the row estimates for my join should always come back less or equal to the estimates for jointable, regardless of what the query plan is. indeed this is what i find experimentally for smaller examples. what is perplexing to me is why this is not true for this large table. (the fact that the table size is greater than 2^31 is probably a red herring but hasn't escaped my attention.) while i do have a performance issue (i'd like for it to select the index scan) which might be solved by better configuration, that at the moment is a secondary question -- right now i'm interested in why the row estimates are off.\n\nmoving on to your remarks :\n\nOn Jun 25, 2013, at 6:20 AM, Kevin Grittner wrote:\n\n> Ben <[email protected]> wrote:\n> \n>> PostgreSQL 9.1.1 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 4.6.2, 64-bit\n> \n> Consider applying the latest bug fixes for 9.1 -- which would leave\n> you showing 9.1.9.\n\ni will bring it up with our ops people. do you have any particular fixes in mind, or is this a (very sensible) blanket suggestion?\n\n>> default_statistics_target | 50 | configuration file\n> \n> Why did you change this from the default of 100?\n\nsorry, i do not know. it is possible this was copied from the configuration of a different server, which is serving some very very large tables with gist indexes, where the statistics do not help the selectivity estimations much if at all (as far as i can tell gist indexes often use hard-coded selectivity estimates as opposed to using the statistics.) in that case it is an oversight and i will correct it. but i believe the statistics for the tables in question are close enough, and certainly do not explain the off row estimates in the query plan.\n\n>> effective_cache_size | 5632MB | configuration file\n> \n> How much RAM is on this machine? What else is running on it? \n> (Normally people set this to 50% to 75% of total RAM. Lower values\n> discourage index usage in queries like your example.)\n\n24GB. i can up it to 12 or 16GB and report back.\n\n> Do you get a different plan if you set cpu_tuple_cost = 0.03? How\n> about 0.05? You can set this just for a single connection and run\n> explain on the query to do a quick check.\n\nsetting cpu_tuple_cost to 0.03 or 0.05 has no effect on the choice of plan or the row estimates for the un-limited query or the limited query.\n\nbest regards, ben\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 11:27:38 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
{
"msg_contents": "Ben <[email protected]> wrote:\n\n> it seems to me that an equality join between two relations (call them A and B)\n> using columns in relation B with a unique constraint should yield row estimates\n> which are at most equal to the row estimates for relation A. my questions are\n>\n> 1 - is this correct?\n>\n> 2 - does the postgresql planner implement this when generating row estimates?\n\nThat seems intuitive, but some of the estimates need to be made\nbefore all such information is available. Maybe we can do\nsomething about that some day....\n\n> while i do have a performance > issue (i'd like for it to select\n> the index scan) which might be solved by better configuration,\n> that at the moment is a secondary question -- right now i'm\n> interested in why the row estimates are off.\n\nMaybe someone else will jump in here with more details than I can\nprovide (at least without hours digging in the source code).\n\n> On Jun 25, 2013, at 6:20 AM, Kevin Grittner wrote:\n>> Ben <[email protected]> wrote:\n>>\n>>> PostgreSQL 9.1.1 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux)\n>>> 4.6.2, 64-bit\n>>\n>> Consider applying the latest bug fixes for 9.1 -- which would leave\n>> you showing 9.1.9.\n>\n> i will bring it up with our ops people. do you have any particular fixes in\n> mind, or is this a (very sensible) blanket suggestion?\n\nI do recommend staying up-to-date in general (subject to roll-out\nprocedures), We try very hard not to change any behavior that\nisn't a clear bug from one minor release to the next, precisely so\nthat people can apply these critical bug fixes with confidence that\nthings won't break. There is a fix for a pretty significant\nsecurity vulnerability you are currently missing, which would be my\ntop concern; but it wouldn't be surprising if there were a planner\nbug in 9.1.1 which is fixed in 9.1.9.\n\n>> Do you get a different plan if you set cpu_tuple_cost = 0.03? How\n>> about 0.05? You can set this just for a single connection and run\n>> explain on the query to do a quick check.\n>\n> setting cpu_tuple_cost to 0.03 or 0.05 has no effect on the choice of plan or\n> the row estimates for the un-limited query or the limited query.\n\nThat wouldn't affect row estimates, but it would tend to encourage\nindex usage, because it would increase the estimated cost of\nreading each row.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 15:13:15 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
{
"msg_contents": "Kevin Grittner <[email protected]> writes:\n> Ben <[email protected]> wrote:\n>> it seems to me that an equality join between two relations (call them A and B)\n>> using columns in relation B with a unique constraint should yield row estimates\n>> which are at most equal to the row estimates for relation A.� my questions are\n>> \n>> 1 - is this correct?\n>> \n>> 2 - does the postgresql planner implement this when generating row estimates?\n\n> That seems intuitive, but some of the estimates need to be made\n> before all such information is available.� Maybe we can do\n> something about that some day....\n> Maybe someone else will jump in here with more details than I can\n> provide (at least without hours digging in the source code).\n\nIt does not attempt to match up query WHERE clauses with indexes during\nselectivity estimation, so the existence of a multi-column unique\nconstraint wouldn't help it improve the estimate.\n\nIn the case at hand, I doubt that a better result rowcount estimate\nwould have changed the planner's opinion of how to do the join. The OP\nseems to be imagining that 2 million index probes into a large table\nwould be cheap, but that's hardly true. It's quite likely that the\nmergejoin actually is the best way to do the query. If it isn't really\nbest on his hardware, I would think that indicates a need for some\ntuning of the cost parameters. Another thing that might be helpful for\nworking with such large tables is increasing work_mem, to make hashes\nand sorts run faster.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 19:36:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
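A sketch of the kind of per-session experiment Tom suggests; the values are illustrative only and would need to be sized to the actual hardware, and random_page_cost is just one of the cost parameters that could be tried:

SET work_mem = '512MB';          -- bigger in-memory hashes and sorts, session-local
SET random_page_cost = 2.0;      -- assumes the data is largely cached or on fast storage
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM bigtable
JOIN jointable USING (id1, id2, id3, id4, id5);
RESET work_mem;
RESET random_page_cost;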
{
"msg_contents": "\nOn Jun 25, 2013, at 4:36 PM, Tom Lane wrote:\n\n>> That seems intuitive, but some of the estimates need to be made\n>> before all such information is available. Maybe we can do\n>> something about that some day....\n>> Maybe someone else will jump in here with more details than I can\n>> provide (at least without hours digging in the source code).\n> \n> It does not attempt to match up query WHERE clauses with indexes during\n> selectivity estimation, so the existence of a multi-column unique\n> constraint wouldn't help it improve the estimate.\n\nthanks tom, that answered my question.\n\n> In the case at hand, I doubt that a better result rowcount estimate\n> would have changed the planner's opinion of how to do the join. The OP\n> seems to be imagining that 2 million index probes into a large table\n> would be cheap, but that's hardly true. It's quite likely that the\n> mergejoin actually is the best way to do the query. If it isn't really\n> best on his hardware, I would think that indicates a need for some\n> tuning of the cost parameters. Another thing that might be helpful for\n> working with such large tables is increasing work_mem, to make hashes\n> and sorts run faster.\n\ni apologize if i seemed like i was presuming to know what the best query plan is. i fully understand that the query planner sometimes makes unintuitive decisions which turn out to be for the best, having experienced it first hand many times. since i've nudged my company to use postgresql (instead of mysql/sqlite), we've been very happy with it. also, having tried my hand (and failing) at making good gist selectivity estimators, i think i've got a not-completely-ignorant 10,000 ft view of the trade-offs it tries to make, when sequential scans are better than repeated index lookups, et cetera. i'm writing because i found this example, which shows yet another thing i don't understand about the query planner, and i am trying to learn better about it.\n\nyou've already answered my main question (whether or not unique constraints are used to help row estimation.) there's a couple more issues which i don't quite understand :\n\n1) when i give a hint to the query planner to not expect more than number-of-rows-in-jointable (via a limit), switches to a nested loop + index scan, but with the same row estimates. i'll show the plan i had in the first email :\n\nLimit (cost=0.00..178452647.11 rows=2500000 width=28)\n -> Nested Loop (cost=0.00..1306127545.35 rows=18297957 width=28)\n -> Seq Scan on jointable (cost=0.00..35867.11 rows=2066911 width=24)\n -> Index Scan using bigtable_pkey on bigtable (cost=0.00..631.88 rows=1 width=28)\n Index Cond: ((id1 = jointable.id1) AND (id2 = jointable.id2) AND (id3 = jointable.id3) AND (id4 = jointable.id4) AND (id5 = jointable.id5))\n(5 rows)\n\nbefore, i was misreading this as saying the planner was going to execute the nested loop fully (rows=18 million), and then limit the results. i am now reading it as saying that the inner nested loop will be short-circuited after it generates enough rows. if this is true, it seems to imply that, in query plan with deeply nested inner nested loops, one should read the inner loop row estimates with a grain of salt, as there might be limits (potentially many levels outwards) which can short-circuit them. am i wrong about this?\n\n2) when doing the sort+merge join, it choses to sort bigtable rather than use an index scan. 
i've tried to give hints by requesting the results come in primary key order, but it keeps sorting by a permutation of the primary key and then resorting the join results at the end. so obviously the random seek cost dominates the sequential read + sort (which i find surprising, but again i am happy to be surprised.) that seems fine for a query which is going to touch the whole table. but i can't seem to come up with a query which would ever favor using an index scan. for example this :\n\nexplain select * from bigtable order by (id1, id2, id3, id4, id5) limit 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------\n Limit (cost=91923132.04..91923132.04 rows=1 width=28)\n -> Sort (cost=91923132.04..99842047.88 rows=3167566336 width=28)\n Sort Key: (ROW(id1, id2, id3, id4, id5))\n -> Seq Scan on bigtable (cost=0.00..76085300.36 rows=3167566336 width=28)\n(4 rows)\n\n(apologies bigtable has grown since i've first started this thread.) shouldn't an index scan definitely be fastest here? you don't need to touch the whole table or index. maybe there something i have misconfigured here?\n\nbest regards, ben\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 17:29:01 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
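One detail worth noting in that last example: ORDER BY (id1, id2, id3, id4, id5) sorts by a single ROW(...) expression, which the planner does not match to the five-column primary-key index. Listing the columns without the surrounding parentheses should let the planner walk bigtable_pkey and stop after one row:

EXPLAIN
SELECT *
FROM bigtable
ORDER BY id1, id2, id3, id4, id5
LIMIT 1;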
{
"msg_contents": "On Wed, Jun 26, 2013 at 2:29 AM, Ben <[email protected]> wrote:\n\n> shouldn't an index scan definitely be fastest here? you don't need to touch the whole table or index. maybe there something i have misconfigured here?\n>\n\nHow about you try increasing work_mem ? I think a hash join may be the\nbest plan here, and it won't get chosen with low work_mem .\n\nRegards\nMarcin Mańk\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 02:22:18 +0200",
"msg_from": "=?UTF-8?B?TWFyY2luIE1hxYRr?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: incorrect row estimates for primary key join"
},
{
"msg_contents": "\nOn Jun 26, 2013, at 5:22 PM, Marcin Mańk wrote:\n\n> On Wed, Jun 26, 2013 at 2:29 AM, Ben <[email protected]> wrote:\n> \n>> shouldn't an index scan definitely be fastest here? you don't need to touch the whole table or index. maybe there something i have misconfigured here?\n>> \n> \n> How about you try increasing work_mem ? I think a hash join may be the\n> best plan here, and it won't get chosen with low work_mem .\n\ni will increase work_mem and experiment for the other queries, but the query which i was asking about in this particular question was looking up the single smallest key in the primary key index, which seems like it shouldn't need to touch more than one key, since it can just get the first one from an in-order index traversal. of course with my earlier bigtable/jointable join question increasing work_mem makes a lot of sense.\n\nbest regards, ben\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 20:48:16 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: incorrect row estimates for primary key join"
}
] |
[
{
"msg_contents": "Hello,\n\nIf a table takes 100 MB while on disk, approximately how much space will it take in RAM/database buffer?\nContext - We are designing a database that will hold a couple of GBs of data. We wanted to figure out how much shared_buffers we should provide to ensure that most of the time, all the data will be in memory. This is mostly master data (transactions will go to Casandra), and will be read from, rarely written to. We do need data integrity, transaction management, failover etc - hence PostgreSQL.\n\nRegards,\nJayadevan\n\n\n\nDISCLAIMER: \"The information in this e-mail and any attachment is intended only for the person to whom it is addressed and may contain confidential and/or privileged material. If you have received this e-mail in error, kindly contact the sender and destroy all copies of the original communication. IBS makes no warranty, express or implied, nor guarantees the accuracy, adequacy or completeness of the information contained in this email or any attachment and is not liable for any errors, defects, omissions, viruses or for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n\n\n\nHello,\n \nIf a table takes 100 MB while on disk, approximately how much space will it take in RAM/database buffer?\nContext – We are designing a database that will hold a couple of GBs of data. We wanted to figure out how much shared_buffers we should provide to ensure that most of the time, all the data will be in memory. This is mostly master data\n (transactions will go to Casandra), and will be read from, rarely written to. We do need data integrity, transaction management, failover etc – hence PostgreSQL.\n \nRegards,\nJayadevan\n\n \n\n\n\nDISCLAIMER: \"The information in this e-mail and any attachment is intended only for the person to whom it is addressed and may contain confidential and/or privileged material. If you have received this e-mail in\n error, kindly contact the sender and destroy all copies of the original communication. IBS makes no warranty, express or implied, nor guarantees the accuracy, adequacy or completeness of the information contained in this email or any attachment and is not\n liable for any errors, defects, omissions, viruses or for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Tue, 25 Jun 2013 04:57:08 +0000",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "on disk and in memory"
},
{
"msg_contents": "Jayadevan M wrote:\r\n> If a table takes 100 MB while on disk, approximately how much space will it take in RAM/database\r\n> buffer?\r\n\r\n100 MB. A block in memory has the same layout as a block on disk.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Jun 2013 07:33:57 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: on disk and in memory"
}
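Two rough ways to check this on a running system; pg_buffercache is a contrib extension that has to be installed, 'mytable' is a placeholder name, and 8192 assumes the default 8kB block size:

SELECT pg_size_pretty(pg_table_size('mytable'));   -- size on disk

CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname = 'mytable'
  AND b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database());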
] |
[
{
"msg_contents": "Hi,\n\npostgres does a seqscan, even though there is an index present and it\nshould be much more efficient to use it.\nI tried to synthetically reproduce it, but it won't make the same choice\nwhen i do.\nI can reproduce it with a simplified set of the data itself though.\n\nhere's the query, and the analyzed plan:\nselect count(*)\nfrom d2\njoin g2 on g2.gid=d2.gid\nwhere g2.k=1942\n\nAggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\ntime=481.526..481.526 rows=1 loops=1)\n -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\ntime=317.403..481.513 rows=*17* loops=1)\n Hash Cond: (d2.gid = g2.gid)\n -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n(actual time=0.013..231.707 rows=*3107454* loops=1)\n -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual\ntime=0.207..0.207 rows=121 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n -> Index Scan using g_blok on g2 (cost=0.00..1290.24\nrows=494 width=8) (actual time=0.102..0.156 rows=*121* loops=1)\n Index Cond: (k = 1942)\nTotal runtime: 481.600 ms\n\nHere's the DDL:\ncreate table g2 (gid bigint primary key, k integer);\ncreate table d2 (id bigint primary key, gid bigint);\n--insert into g2 (...)\n--insert into d2 (...)\ncreate index g_blok on g2(blok);\ncreate index d_gid on d2(gid);\nalter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\nanalyze d2;\nanalyze g2;\n\n\nAny advice?\n\nCheers,\n\nWilly-Bas Loos\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nHi,postgres does a seqscan, even though there is an index present and it should be much more efficient to use it.I tried to synthetically reproduce it, but it won't make the same choice when i do.\nI can reproduce it with a simplified set of the data itself though.here's the query, and the analyzed plan:select count(*) from d2join g2 on g2.gid=d2.gidwhere g2.k=1942Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual time=481.526..481.526 rows=1 loops=1)\n -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual time=317.403..481.513 rows=17 loops=1) Hash Cond: (d2.gid = g2.gid) -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8) (actual time=0.013..231.707 rows=3107454 loops=1)\n -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual time=0.207..0.207 rows=121 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 5kB -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n Index Cond: (k = 1942)Total runtime: 481.600 msHere's the DDL:create table g2 (gid bigint primary key, k integer);create table d2 (id bigint primary key, gid bigint);--insert into g2 (...)\n--insert into d2 (...)create index g_blok on g2(blok);create index d_gid on d2(gid);alter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);analyze d2;analyze g2;Any advice?\nCheers,Willy-Bas Loos-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 17:45:15 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "Hi,\npostgres does a seqscan, even though there is an index present and it should be much more efficient to use it.\nI tried to synthetically reproduce it, but it won't make the same choice when i do.\nI can reproduce it with a simplified set of the data itself though.\nhere's the query, and the analyzed plan:\nselect count(*) \nfrom d2\njoin g2 on g2.gid=d2.gid\nwhere g2.k=1942\n\nAggregate (cost=60836.71..60836.72 rows=1 width=0) (actual time=481.526..481.526 rows=1 loops=1)\n -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual time=317.403..481.513 rows=17 loops=1)\n Hash Cond: (d2.gid = g2.gid)\n -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8) (actual time=0.013..231.707 rows=3107454 loops=1)\n -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual time=0.207..0.207 rows=121 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n Index Cond: (k = 1942)\nTotal runtime: 481.600 ms\nHere's the DDL:\ncreate table g2 (gid bigint primary key, k integer);\ncreate table d2 (id bigint primary key, gid bigint);\n--insert into g2 (...)\n--insert into d2 (...)\ncreate index g_blok on g2(blok);\ncreate index d_gid on d2(gid);\nalter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\nanalyze d2;\nanalyze g2;\n\nAny advice?\n\nCheers,\nWilly-Bas Loos\n-- \n\nSo, did you try to set:\n\nenable_seqscan = off\n\nand see if different execution plan is more efficient?\n\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 17:35:33 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
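A minimal sketch of the session-level test Igor suggests, using the d2/g2 names from the test case above; the setting only affects the current session and is easy to undo afterwards:

-- discourage sequential scans for this session only
SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT count(*)
FROM d2
JOIN g2 ON g2.gid = d2.gid
WHERE g2.k = 1942;

-- restore the default planner behaviour
RESET enable_seqscan;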
{
"msg_contents": "nope\n$ grep ^[^#] /etc/postgresql/9.1/main/postgresql.conf|grep -e ^[^[:space:]]\ndata_directory = '/var/lib/postgresql/9.1/main' # use data in\nanother directory\nhba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based\nauthentication file\nident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident\nconfiguration file\nexternal_pid_file = '/var/run/postgresql/9.1-main.pid' # write an\nextra PID file\nport = 5432 # (change requires restart)\nmax_connections = 100 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires\nrestart)\nssl = true # (change requires restart)\nshared_buffers = 2GB # min 128kB\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nsynchronous_commit = off # synchronization level; on, off, or local\ncheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\nlog_line_prefix = '%t ' # special values:\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n\n\n\nOn Wed, Jun 26, 2013 at 7:35 PM, Igor Neyman <[email protected]> wrote:\n\n> Hi,\n> postgres does a seqscan, even though there is an index present and it\n> should be much more efficient to use it.\n> I tried to synthetically reproduce it, but it won't make the same choice\n> when i do.\n> I can reproduce it with a simplified set of the data itself though.\n> here's the query, and the analyzed plan:\n> select count(*)\n> from d2\n> join g2 on g2.gid=d2.gid\n> where g2.k=1942\n>\n> Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\n> time=481.526..481.526 rows=1 loops=1)\n> -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\n> time=317.403..481.513 rows=17 loops=1)\n> Hash Cond: (d2.gid = g2.gid)\n> -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n> (actual time=0.013..231.707 rows=3107454 loops=1)\n> -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual\n> time=0.207..0.207 rows=121 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n> -> Index Scan using g_blok on g2 (cost=0.00..1290.24\n> rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n> Index Cond: (k = 1942)\n> Total runtime: 481.600 ms\n> Here's the DDL:\n> create table g2 (gid bigint primary key, k integer);\n> create table d2 (id bigint primary key, gid bigint);\n> --insert into g2 (...)\n> --insert into d2 (...)\n> create index g_blok on g2(blok);\n> create index d_gid on d2(gid);\n> alter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\n> analyze d2;\n> analyze g2;\n>\n> Any advice?\n>\n> Cheers,\n> Willy-Bas Loos\n> --\n>\n> So, did you try to set:\n>\n> enable_seqscan = off\n>\n> and see if different execution plan is more efficient?\n>\n> Igor Neyman\n>\n\n\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nnope$ grep ^[^#] /etc/postgresql/9.1/main/postgresql.conf|grep -e ^[^[:space:]]data_directory = '/var/lib/postgresql/9.1/main' # use data in another directoryhba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based authentication file\nident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident configuration fileexternal_pid_file = '/var/run/postgresql/9.1-main.pid' # write an extra PID fileport = 5432 # (change requires restart)\nmax_connections = 100 # (change requires 
restart)unix_socket_directory = '/var/run/postgresql' # (change requires restart)ssl = true # (change requires restart)shared_buffers = 2GB # min 128kB\nwork_mem = 100MB # min 64kBmaintenance_work_mem = 256MB # min 1MBsynchronous_commit = off # synchronization level; on, off, or localcheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\nlog_line_prefix = '%t ' # special values:datestyle = 'iso, mdy'lc_messages = 'en_US.UTF-8' # locale for system error messagelc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formattinglc_time = 'en_US.UTF-8' # locale for time formattingdefault_text_search_config = 'pg_catalog.english'\nOn Wed, Jun 26, 2013 at 7:35 PM, Igor Neyman <[email protected]> wrote:\nHi,\npostgres does a seqscan, even though there is an index present and it should be much more efficient to use it.\nI tried to synthetically reproduce it, but it won't make the same choice when i do.\nI can reproduce it with a simplified set of the data itself though.\nhere's the query, and the analyzed plan:\nselect count(*)\nfrom d2\njoin g2 on g2.gid=d2.gid\nwhere g2.k=1942\n\nAggregate (cost=60836.71..60836.72 rows=1 width=0) (actual time=481.526..481.526 rows=1 loops=1)\n -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual time=317.403..481.513 rows=17 loops=1)\n Hash Cond: (d2.gid = g2.gid)\n -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8) (actual time=0.013..231.707 rows=3107454 loops=1)\n -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual time=0.207..0.207 rows=121 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n Index Cond: (k = 1942)\nTotal runtime: 481.600 ms\nHere's the DDL:\ncreate table g2 (gid bigint primary key, k integer);\ncreate table d2 (id bigint primary key, gid bigint);\n--insert into g2 (...)\n--insert into d2 (...)\ncreate index g_blok on g2(blok);\ncreate index d_gid on d2(gid);\nalter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\nanalyze d2;\nanalyze g2;\n\nAny advice?\n\nCheers,\nWilly-Bas Loos\n--\n\nSo, did you try to set:\n\nenable_seqscan = off\n\nand see if different execution plan is more efficient?\n\nIgor Neyman\n-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 21:03:41 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "On Wed, Jun 26, 2013 at 9:45 AM, Willy-Bas Loos <[email protected]> wrote:\n> Hi,\n>\n> postgres does a seqscan, even though there is an index present and it should\n> be much more efficient to use it.\n> I tried to synthetically reproduce it, but it won't make the same choice\n> when i do.\n> I can reproduce it with a simplified set of the data itself though.\n>\n> here's the query, and the analyzed plan:\n> select count(*)\n> from d2\n> join g2 on g2.gid=d2.gid\n> where g2.k=1942\n>\n> Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\n> time=481.526..481.526 rows=1 loops=1)\n> -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\n> time=317.403..481.513 rows=17 loops=1)\n> Hash Cond: (d2.gid = g2.gid)\n> -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n> (actual time=0.013..231.707 rows=3107454 loops=1)\n\nBut this plan isn't retrieving just a few rows from d2, it's\nretreiving 3.1 Million rows.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 13:07:36 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "\n\nFrom: Willy-Bas Loos [mailto:[email protected]] \nSent: Wednesday, June 26, 2013 3:04 PM\nTo: Igor Neyman\nCc: [email protected]\nSubject: Re: [PERFORM] seqscan for 100 out of 3M rows, index present\n\nnope\n$ grep ^[^#] /etc/postgresql/9.1/main/postgresql.conf|grep -e ^[^[:space:]]\ndata_directory = '/var/lib/postgresql/9.1/main' # use data in another directory\nhba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based authentication file\nident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident configuration file\nexternal_pid_file = '/var/run/postgresql/9.1-main.pid' # write an extra PID file\nport = 5432 # (change requires restart)\nmax_connections = 100 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires restart)\nssl = true # (change requires restart)\nshared_buffers = 2GB # min 128kB\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nsynchronous_commit = off # synchronization level; on, off, or local\ncheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\nlog_line_prefix = '%t ' # special values:\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n\n--\n\nYou could change this setting on session level, and prove yourself or query optimizer right (or wrong :)\n\nIgor Neyman\n\n...\n...\nAggregate (cost=60836.71..60836.72 rows=1 width=0) (actual time=481.526..481.526 rows=1 loops=1)\n -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual time=317.403..481.513 rows=17 loops=1)\n Hash Cond: (d2.gid = g2.gid)\n -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8) (actual time=0.013..231.707 rows=3107454 loops=1)\n -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual time=0.207..0.207 rows=121 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n Index Cond: (k = 1942)\nTotal runtime: 481.600 ms\nHere's the DDL:\ncreate table g2 (gid bigint primary key, k integer);\ncreate table d2 (id bigint primary key, gid bigint);\n--insert into g2 (...)\n--insert into d2 (...)\ncreate index g_blok on g2(blok);\ncreate index d_gid on d2(gid);\nalter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\nanalyze d2;\nanalyze g2;\n\nAny advice?\n\nCheers,\nWilly-Bas Loos\n--\nSo, did you try to set:\n\nenable_seqscan = off\n\nand see if different execution plan is more efficient?\n\nIgor Neyman\n\n\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 19:08:06 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "plan with enable_seqscan off:\n\nAggregate (cost=253892.48..253892.49 rows=1 width=0) (actual\ntime=208.681..208.681 rows=1 loops=1)\n -> Nested Loop (cost=5.87..253889.49 rows=1198 width=0) (actual\ntime=69.403..208.647 rows=17 loops=1)\n -> Index Scan using geo_blok_idx on geo g (cost=0.00..1314.43\nrows=500 width=8) (actual time=45.776..46.147 rows=121 loops=1)\n Index Cond: (blok = 1942)\n -> Bitmap Heap Scan on bmp_data d (cost=5.87..502.91 rows=179\nwidth=8) (actual time=1.340..1.341 rows=0 loops=121)\n Recheck Cond: (geo_id = g.geo_id)\n -> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82\nrows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121)\n Index Cond: (geo_id = g.geo_id)\nTotal runtime: 208.850 ms\n\n\n\n\nOn Wed, Jun 26, 2013 at 9:08 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n> From: Willy-Bas Loos [mailto:[email protected]]\n> Sent: Wednesday, June 26, 2013 3:04 PM\n> To: Igor Neyman\n> Cc: [email protected]\n> Subject: Re: [PERFORM] seqscan for 100 out of 3M rows, index present\n>\n> nope\n> $ grep ^[^#] /etc/postgresql/9.1/main/postgresql.conf|grep -e ^[^[:space:]]\n> data_directory = '/var/lib/postgresql/9.1/main' # use data in\n> another directory\n> hba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based\n> authentication file\n> ident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident\n> configuration file\n> external_pid_file = '/var/run/postgresql/9.1-main.pid' # write an\n> extra PID file\n> port = 5432 # (change requires restart)\n> max_connections = 100 # (change requires restart)\n> unix_socket_directory = '/var/run/postgresql' # (change requires\n> restart)\n> ssl = true # (change requires restart)\n> shared_buffers = 2GB # min 128kB\n> work_mem = 100MB # min 64kB\n> maintenance_work_mem = 256MB # min 1MB\n> synchronous_commit = off # synchronization level; on, off, or local\n> checkpoint_segments = 10 # in logfile segments, min 1, 16MB each\n> log_line_prefix = '%t ' # special values:\n> datestyle = 'iso, mdy'\n> lc_messages = 'en_US.UTF-8' # locale for system error message\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n> default_text_search_config = 'pg_catalog.english'\n>\n> --\n>\n> You could change this setting on session level, and prove yourself or\n> query optimizer right (or wrong :)\n>\n> Igor Neyman\n>\n> ...\n> ...\n> Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\n> time=481.526..481.526 rows=1 loops=1)\n> -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\n> time=317.403..481.513 rows=17 loops=1)\n> Hash Cond: (d2.gid = g2.gid)\n> -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n> (actual time=0.013..231.707 rows=3107454 loops=1)\n> -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual\n> time=0.207..0.207 rows=121 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n> -> Index Scan using g_blok on g2 (cost=0.00..1290.24\n> rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n> Index Cond: (k = 1942)\n> Total runtime: 481.600 ms\n> Here's the DDL:\n> create table g2 (gid bigint primary key, k integer);\n> create table d2 (id bigint primary key, gid bigint);\n> --insert into g2 (...)\n> --insert into d2 (...)\n> create index g_blok on g2(blok);\n> create index d_gid on d2(gid);\n> alter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\n> analyze d2;\n> analyze g2;\n>\n> Any advice?\n>\n> 
Cheers,\n> Willy-Bas Loos\n> --\n> So, did you try to set:\n>\n> enable_seqscan = off\n>\n> and see if different execution plan is more efficient?\n>\n> Igor Neyman\n>\n>\n>\n> --\n> \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n>\n\n\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nplan with enable_seqscan off:Aggregate (cost=253892.48..253892.49 rows=1 width=0) (actual time=208.681..208.681 rows=1 loops=1) -> Nested Loop (cost=5.87..253889.49 rows=1198 width=0) (actual time=69.403..208.647 rows=17 loops=1)\n -> Index Scan using geo_blok_idx on geo g (cost=0.00..1314.43 rows=500 width=8) (actual time=45.776..46.147 rows=121 loops=1) Index Cond: (blok = 1942) -> Bitmap Heap Scan on bmp_data d (cost=5.87..502.91 rows=179 width=8) (actual time=1.340..1.341 rows=0 loops=121)\n Recheck Cond: (geo_id = g.geo_id) -> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82 rows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121) Index Cond: (geo_id = g.geo_id)\nTotal runtime: 208.850 msOn Wed, Jun 26, 2013 at 9:08 PM, Igor Neyman <[email protected]> wrote:\n\n\nFrom: Willy-Bas Loos [mailto:[email protected]]\nSent: Wednesday, June 26, 2013 3:04 PM\nTo: Igor Neyman\nCc: [email protected]\nSubject: Re: [PERFORM] seqscan for 100 out of 3M rows, index present\n\nnope\n$ grep ^[^#] /etc/postgresql/9.1/main/postgresql.conf|grep -e ^[^[:space:]]\ndata_directory = '/var/lib/postgresql/9.1/main' # use data in another directory\nhba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based authentication file\nident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident configuration file\nexternal_pid_file = '/var/run/postgresql/9.1-main.pid' # write an extra PID file\nport = 5432 # (change requires restart)\nmax_connections = 100 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires restart)\nssl = true # (change requires restart)\nshared_buffers = 2GB # min 128kB\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nsynchronous_commit = off # synchronization level; on, off, or local\ncheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\nlog_line_prefix = '%t ' # special values:\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n\n--\n\nYou could change this setting on session level, and prove yourself or query optimizer right (or wrong :)\n\nIgor Neyman\n\n...\n...\nAggregate (cost=60836.71..60836.72 rows=1 width=0) (actual time=481.526..481.526 rows=1 loops=1)\n -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual time=317.403..481.513 rows=17 loops=1)\n Hash Cond: (d2.gid = g2.gid)\n -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8) (actual time=0.013..231.707 rows=3107454 loops=1)\n -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual time=0.207..0.207 rows=121 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n Index Cond: (k = 1942)\nTotal runtime: 481.600 ms\nHere's the DDL:\ncreate table g2 (gid bigint primary key, k integer);\ncreate table d2 (id bigint primary key, gid bigint);\n--insert into g2 (...)\n--insert into d2 (...)\ncreate index g_blok on 
g2(blok);\ncreate index d_gid on d2(gid);\nalter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);\nanalyze d2;\nanalyze g2;\n\nAny advice?\n\nCheers,\nWilly-Bas Loos\n--\nSo, did you try to set:\n\nenable_seqscan = off\n\nand see if different execution plan is more efficient?\n\nIgor Neyman\n\n\n\n--\n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 21:18:51 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "\n\nFrom: Willy-Bas Loos [mailto:[email protected]] \nSent: Wednesday, June 26, 2013 3:19 PM\nTo: Igor Neyman\nCc: [email protected]\nSubject: Re: [PERFORM] seqscan for 100 out of 3M rows, index present\n\nplan with enable_seqscan off:\n\nAggregate (cost=253892.48..253892.49 rows=1 width=0) (actual time=208.681..208.681 rows=1 loops=1)\n -> Nested Loop (cost=5.87..253889.49 rows=1198 width=0) (actual time=69.403..208.647 rows=17 loops=1)\n -> Index Scan using geo_blok_idx on geo g (cost=0.00..1314.43 rows=500 width=8) (actual time=45.776..46.147 rows=121 loops=1)\n Index Cond: (blok = 1942)\n -> Bitmap Heap Scan on bmp_data d (cost=5.87..502.91 rows=179 width=8) (actual time=1.340..1.341 rows=0 loops=121)\n Recheck Cond: (geo_id = g.geo_id)\n -> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82 rows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121)\n Index Cond: (geo_id = g.geo_id)\nTotal runtime: 208.850 ms\n\n\nOn Wed, Jun 26, 2013 at 9:08 PM, Igor Neyman <[email protected]> wrote:\n\n\nFrom: Willy-Bas Loos [mailto:[email protected]]\nSent: Wednesday, June 26, 2013 3:04 PM\nTo: Igor Neyman\nCc: [email protected]\nSubject: Re: [PERFORM] seqscan for 100 out of 3M rows, index present\n\nnope\n$ grep ^[^#] /etc/postgresql/9.1/main/postgresql.conf|grep -e ^[^[:space:]]\ndata_directory = '/var/lib/postgresql/9.1/main' # use data in another directory\nhba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based authentication file\nident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident configuration file\nexternal_pid_file = '/var/run/postgresql/9.1-main.pid' # write an extra PID file\nport = 5432 # (change requires restart)\nmax_connections = 100 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires restart)\nssl = true # (change requires restart)\nshared_buffers = 2GB # min 128kB\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nsynchronous_commit = off # synchronization level; on, off, or local\ncheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\nlog_line_prefix = '%t ' # special values:\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n--\n\nHow much RAM you have on this machine?\nWhat else is this machine is being used for (besides being db server)?\nAnd, what is your setting for effective_cache_size? It looks like you didn't change it from default (128MB).\nYou need to adjust effective_cache_size so somewhat between 60%-75% of RAM, if the database is the main process running on this machine.\n\nAgain, effective_cache_size could be set on session level, so you could try it before changing GUC in postgresql.conf.\nWhen trying it, don't forget to change enable_seqscan back to \"on\" (if it's still \"off\").\n\nIgor Neyman\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 19:30:17 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
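A sketch of trying the effective_cache_size advice at session level, as Igor describes, before touching postgresql.conf; the 12GB value is only an example, the idea being roughly 60%-75% of the machine's RAM:

-- session-level only; no reload or restart needed
SET effective_cache_size = '12GB';
-- undo the earlier diagnostic setting, as Igor notes
RESET enable_seqscan;

EXPLAIN ANALYZE
SELECT count(*)
FROM d2
JOIN g2 ON g2.gid = d2.gid
WHERE g2.k = 1942;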
{
"msg_contents": "On Wed, Jun 26, 2013 at 9:30 PM, Igor Neyman <[email protected]> wrote:\n\n>\n> How much RAM you have on this machine?\n>\n16 GB\n\n> What else is this machine is being used for (besides being db server)?\n>\nIt's my laptop by now, but i was working on a server before that. The\nlaptop gives me some liberties to play around.\nI could reproduce it well on my laptop, so i thought it would do.\n\n\n> And, what is your setting for effective_cache_size? It looks like you\n> didn't change it from default (128MB).\n> You need to adjust effective_cache_size so somewhat between 60%-75% of\n> RAM, if the database is the main process running on this machine.\n>\ncorrect, it was 128GB, changed it to 12GB, to no avail.\n\n>\n>\n>\n\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nOn Wed, Jun 26, 2013 at 9:30 PM, Igor Neyman <[email protected]> wrote:\n\nHow much RAM you have on this machine?16 GB \nWhat else is this machine is being used for (besides being db server)?It's my laptop by now, but i was working on a server before that. The laptop gives me some liberties to play around.\nI could reproduce it well on my laptop, so i thought it would do. \nAnd, what is your setting for effective_cache_size? It looks like you didn't change it from default (128MB).\nYou need to adjust effective_cache_size so somewhat between 60%-75% of RAM, if the database is the main process running on this machine.correct, it was 128GB, changed it to 12GB, to no avail. \n\n-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 22:12:02 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "On Wed, Jun 26, 2013 at 12:07 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Jun 26, 2013 at 9:45 AM, Willy-Bas Loos <[email protected]>\n> wrote:\n> >\n> > Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\n> > time=481.526..481.526 rows=1 loops=1)\n> > -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\n> > time=317.403..481.513 rows=17 loops=1)\n> > Hash Cond: (d2.gid = g2.gid)\n> > -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n> > (actual time=0.013..231.707 rows=3107454 loops=1)\n>\n> But this plan isn't retrieving just a few rows from d2, it's\n> retreiving 3.1 Million rows.\n>\n\nBut I think that that is the point. Why is it retrieving 3.1 million, when\nit only needs 17?\n\nCheers,\n\nJeff\n\nOn Wed, Jun 26, 2013 at 12:07 PM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Jun 26, 2013 at 9:45 AM, Willy-Bas Loos <[email protected]> wrote:\n>\n> Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\n> time=481.526..481.526 rows=1 loops=1)\n> -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\n> time=317.403..481.513 rows=17 loops=1)\n> Hash Cond: (d2.gid = g2.gid)\n> -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n> (actual time=0.013..231.707 rows=3107454 loops=1)\n\nBut this plan isn't retrieving just a few rows from d2, it's\nretreiving 3.1 Million rows.But I think that that is the point. Why is it retrieving 3.1 million, when it only needs 17? Cheers,\nJeff",
"msg_date": "Wed, 26 Jun 2013 13:31:29 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "On Wed, Jun 26, 2013 at 10:31 PM, Jeff Janes <[email protected]> wrote:\n\n>\n> Why is it retrieving 3.1 million, when it only needs 17?\n>\n>\n> that's because of the sequential scan, it reads all the data.\n\ncheers,\n\nwilly-bas\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nOn Wed, Jun 26, 2013 at 10:31 PM, Jeff Janes <[email protected]> wrote:\nWhy is it retrieving 3.1 million, when it only needs 17?\n that's because of the sequential scan, it reads all the data.cheers,willy-bas\n-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 22:36:10 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "2013/6/26 Willy-Bas Loos <[email protected]>\n\n> postgres does a seqscan, even though there is an index present and it\n> should be much more efficient to use it.\n> I tried to synthetically reproduce it, but it won't make the same choice\n> when i do.\n> I can reproduce it with a simplified set of the data itself though.\n>\n> here's the query, and the analyzed plan:\n> select count(*)\n> from d2\n> join g2 on g2.gid=d2.gid\n> where g2.k=1942\n\n\n1) Could you show the output of the following queries, please?\nselect relname,relpages,reltuples::numeric\n from pg_class where oid in ('d2'::regclass, 'g2'::regclass);\nselect attrelid::regclass, attname,\n CASE WHEN attstattarget<0 THEN\ncurrent_setting('default_statistics_target')::int4 ELSE attstattarget END\n from pg_attribute\n where attrelid in ('d2'::regclass, 'g2'::regclass) and attname='gid';\n\n2) Will it help running the following?:\nALTER TABLE d2 ALTER gid SET STATISTICS 500;\nVACUUM ANALYZE d2;\nEXPLAIN (ANALYZE, BUFFERS) ...\nSET enable_seqscan TO 'off';\nEXPLAIN (ANALYZE, BUFFERS) ...\n\n\n-- \nVictor Y. Yegorov\n\n2013/6/26 Willy-Bas Loos <[email protected]>\npostgres does a seqscan, even though there is an index present and it should be much more efficient to use it.I tried to synthetically reproduce it, but it won't make the same choice when i do.\nI can reproduce it with a simplified set of the data itself though.here's the query, and the analyzed plan:select count(*) from d2join g2 on g2.gid=d2.gidwhere g2.k=1942\n1) Could you show the output of the following queries, please?select relname,relpages,reltuples::numeric from pg_class where oid in ('d2'::regclass, 'g2'::regclass);\nselect attrelid::regclass, attname, CASE WHEN attstattarget<0 THEN current_setting('default_statistics_target')::int4 ELSE attstattarget END\n from pg_attribute where attrelid in ('d2'::regclass, 'g2'::regclass) and attname='gid';2) Will it help running the following?:\nALTER TABLE d2 ALTER gid SET STATISTICS 500;VACUUM ANALYZE d2;EXPLAIN (ANALYZE, BUFFERS) ...SET enable_seqscan TO 'off';\nEXPLAIN (ANALYZE, BUFFERS) ...-- Victor Y. Yegorov",
"msg_date": "Wed, 26 Jun 2013 23:46:27 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "On Wed, Jun 26, 2013 at 10:36:10PM +0200, Willy-Bas Loos wrote:\n> On Wed, Jun 26, 2013 at 10:31 PM, Jeff Janes <[email protected]> wrote:\n> \n> >\n> > Why is it retrieving 3.1 million, when it only needs 17?\n> >\n> >\n> > that's because of the sequential scan, it reads all the data.\n> \n> cheers,\n> \n> willy-bas\n\nWell, the two plans timings were pretty close together. Maybe your\ncost model is off. Try adjusting the various cost parameters to\nfavor random I/O more.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 15:48:04 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
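One way to try Ken's cost-parameter suggestion, again per session; the value below is illustrative rather than a recommendation (random_page_cost defaults to 4.0 and seq_page_cost to 1.0, so narrowing the gap makes index scans look relatively cheaper to the planner):

-- treat random I/O as closer in cost to sequential I/O
SET random_page_cost = 1.5;

EXPLAIN ANALYZE
SELECT count(*)
FROM d2
JOIN g2 ON g2.gid = d2.gid
WHERE g2.k = 1942;

RESET random_page_cost;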
{
"msg_contents": "On Wed, Jun 26, 2013 at 12:18 PM, Willy-Bas Loos <[email protected]> wrote:\n> plan with enable_seqscan off:\n>\n> Aggregate (cost=253892.48..253892.49 rows=1 width=0) (actual\n> time=208.681..208.681 rows=1 loops=1)\n> -> Nested Loop (cost=5.87..253889.49 rows=1198 width=0) (actual\n> time=69.403..208.647 rows=17 loops=1)\n> -> Index Scan using geo_blok_idx on geo g (cost=0.00..1314.43\n> rows=500 width=8) (actual time=45.776..46.147 rows=121 loops=1)\n> Index Cond: (blok = 1942)\n> -> Bitmap Heap Scan on bmp_data d (cost=5.87..502.91 rows=179\n> width=8) (actual time=1.340..1.341 rows=0 loops=121)\n> Recheck Cond: (geo_id = g.geo_id)\n> -> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82\n> rows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121)\n> Index Cond: (geo_id = g.geo_id)\n> Total runtime: 208.850 ms\n>\n> On Wed, Jun 26, 2013 at 9:08 PM, Igor Neyman <[email protected]> wrote:\n>> Aggregate (cost=60836.71..60836.72 rows=1 width=0) (actual\n>> time=481.526..481.526 rows=1 loops=1)\n>> -> Hash Join (cost=1296.42..60833.75 rows=1184 width=0) (actual\n>> time=317.403..481.513 rows=17 loops=1)\n>> Hash Cond: (d2.gid = g2.gid)\n>> -> Seq Scan on d2 (cost=0.00..47872.54 rows=3107454 width=8)\n>> (actual time=0.013..231.707 rows=3107454 loops=1)\n>> -> Hash (cost=1290.24..1290.24 rows=494 width=8) (actual\n>> time=0.207..0.207 rows=121 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 5kB\n>> -> Index Scan using g_blok on g2 (cost=0.00..1290.24\n>> rows=494 width=8) (actual time=0.102..0.156 rows=121 loops=1)\n>> Index Cond: (k = 1942)\n>> Total runtime: 481.600 ms\n\nThese are plans of two different queries. Please show the second one\n(where d2, g2, etc are) with secscans off.\n\n--\nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jun 2013 13:55:13 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "On Wed, Jun 26, 2013 at 12:18 PM, Willy-Bas Loos <[email protected]> wrote:\n\n> plan with enable_seqscan off:\n>\n> Aggregate (cost=253892.48..253892.49 rows=1 width=0) (actual\n> time=208.681..208.681 rows=1 loops=1)\n>\n\n\nThe estimated cost of this is ~4x times greater than the estimated cost for\nthe sequential scan. It should be easy to tweak things to get those to\nreverse, but will doing so mess up other queries that are currently OK?\n\n\n\n> -> Nested Loop (cost=5.87..253889.49 rows=1198 width=0) (actual\n> time=69.403..208.647 rows=17 loops=1)\n>\n\nThe estimated number of rows is off by 70 fold. Most of this is probably\ndue to cross-column correlations, which you probably can't do much about.\n\n\n> -> Index Scan using geo_blok_idx on geo g (cost=0.00..1314.43\n> rows=500 width=8) (actual time=45.776..46.147 rows=121 loops=1)\n> Index Cond: (blok = 1942)\n>\n\nIt thinks it will find 500 rows (a suspiciously round number?) but actually\nfinds 121. That is off by a factor of 4. Why does it not produce a better\nestimate on such a simple histogram-based estimation? Was the table\nanalyzed recently? Have you tried increasing default_statistics_target?\n If you choose values of blok other than 1942, what are the results like?\n This estimate feeds into the inner loop estimates multiplicatively, so\nthis is a powerful factor in driving the choice.\n\n\n\n> -> Bitmap Heap Scan on bmp_data d (cost=5.87..502.91 rows=179\n> width=8) (actual time=1.340..1.341 rows=0 loops=121)\n> Recheck Cond: (geo_id = g.geo_id)\n> -> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82\n> rows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121)\n> Index Cond: (geo_id = g.geo_id)\n> Total runtime: 208.850 ms\n>\n>\nSo it is only twice as fast as the sequential scan anyway. Were you\nexpecting even more faster? Unless it is the dominant query in your\ndatabase, I would usually not consider a factor of 2 improvement to be\nworth worrying about, as it is too likely you will make something else\nworse in the process.\n\nCheers,\n\nJeff\n\nOn Wed, Jun 26, 2013 at 12:18 PM, Willy-Bas Loos <[email protected]> wrote:\nplan with enable_seqscan off:Aggregate (cost=253892.48..253892.49 rows=1 width=0) (actual time=208.681..208.681 rows=1 loops=1)\nThe estimated cost of this is ~4x times greater than the estimated cost for the sequential scan. It should be easy to tweak things to get those to reverse, but will doing so mess up other queries that are currently OK?\n -> Nested Loop (cost=5.87..253889.49 rows=1198 width=0) (actual time=69.403..208.647 rows=17 loops=1)\nThe estimated number of rows is off by 70 fold. Most of this is probably due to cross-column correlations, which you probably can't do much about. \n\n -> Index Scan using geo_blok_idx on geo g (cost=0.00..1314.43 rows=500 width=8) (actual time=45.776..46.147 rows=121 loops=1) Index Cond: (blok = 1942)\nIt thinks it will find 500 rows (a suspiciously round number?) but actually finds 121. That is off by a factor of 4. Why does it not produce a better estimate on such a simple histogram-based estimation? Was the table analyzed recently? Have you tried increasing default_statistics_target? If you choose values of blok other than 1942, what are the results like? 
This estimate feeds into the inner loop estimates multiplicatively, so this is a powerful factor in driving the choice.\n -> Bitmap Heap Scan on bmp_data d (cost=5.87..502.91 rows=179 width=8) (actual time=1.340..1.341 rows=0 loops=121)\n\n Recheck Cond: (geo_id = g.geo_id) -> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82 rows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121) Index Cond: (geo_id = g.geo_id)\n\nTotal runtime: 208.850 msSo it is only twice as fast as the sequential scan anyway. Were you expecting even more faster? Unless it is the dominant query in your database, I would usually not consider a factor of 2 improvement to be worth worrying about, as it is too likely you will make something else worse in the process.\nCheers,Jeff",
"msg_date": "Wed, 26 Jun 2013 13:59:43 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
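A quick look at the statistics behind the rows=500 estimate Jeff questions; the table and column names (geo, blok) are taken from the enable_seqscan=off plan earlier in the thread:

-- the most-common-values list and histogram the planner used for blok = 1942
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs, histogram_bounds
  FROM pg_stats
 WHERE tablename = 'geo'
   AND attname = 'blok';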
{
"msg_contents": "On Wed, Jun 26, 2013 at 11:20 PM, Willy-Bas Loos <[email protected]> wrote:\n\n> On Wed, Jun 26, 2013 at 10:55 PM, Sergey Konoplev <[email protected]>wrote:\n>\n>>\n>>\n>> These are plans of two different queries. Please show the second one\n>> (where d2, g2, etc are) with secscans off.\n>>\n>>\n> yes, you're right sry for the confusion.\n> here's the plan with enable_seqscan=off for the same quer as the OP. (same\n> deal though)\n>\n> Aggregate (cost=59704.95..59704.96 rows=1 width=0) (actual\n> time=41.612..41.613 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..59701.99 rows=1184 width=0) (actual time=\n> 40.451..41.591 rows=17 loops=1)\n> -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494\n> width=8) (actual time=40.209..40.472 rows=121 loops=1)\n>\n> Index Cond: (k = 1942)\n> -> Index Scan using d_gid on d2 (cost=0.00..117.62 rows=50\n> width=8) (actual time=0.008..0.008 rows=0 loops=121)\n> Index Cond: (gid = g2.gid)\n> Total runtime: 41.746 ms\n>\n> Cheers,\n>\n> WBL\n>\n> forgot the list\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nOn Wed, Jun 26, 2013 at 11:20 PM, Willy-Bas Loos <[email protected]> wrote:\nOn Wed, Jun 26, 2013 at 10:55 PM, Sergey Konoplev <[email protected]> wrote:\n\n\nThese are plans of two different queries. Please show the second one\n(where d2, g2, etc are) with secscans off.\nyes, you're right sry for the confusion.here's the plan with enable_seqscan=off for the same quer as the OP. (same deal though)Aggregate (cost=59704.95..59704.96 rows=1 width=0) (actual time=41.612..41.613 rows=1 loops=1)\n\n -> Nested Loop (cost=0.00..59701.99 rows=1184 width=0) (actual time=40.451..41.591 rows=17 loops=1) \n -> Index Scan using g_blok on g2 (cost=0.00..1290.24 rows=494 \nwidth=8) (actual time=40.209..40.472 rows=121 loops=1) Index Cond: (k = 1942) -> Index Scan using d_gid on d2 (cost=0.00..117.62 rows=50 width=8) (actual time=0.008..0.008 rows=0 loops=121)\n\n Index Cond: (gid = g2.gid)Total runtime: 41.746 msCheers,WBLforgot the list\n-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 23:25:15 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
},
{
"msg_contents": "1)\n attrelid | attname | attstattarget\n----------+---------+---------------\n g2 | gid | 100\n d2 | gid | 100\n(2 rows)\n\nsetting statistics too 500 works!\nI already tried overruling pg_statistic.stadistinct, but that didn't work.\nthank you all for your help!!\n\nCheers,\n\nWilly-Bas\n\n\nOn Wed, Jun 26, 2013 at 10:46 PM, Victor Yegorov <[email protected]> wrote:\n\n> 2013/6/26 Willy-Bas Loos <[email protected]>\n>\n>> postgres does a seqscan, even though there is an index present and it\n>> should be much more efficient to use it.\n>> I tried to synthetically reproduce it, but it won't make the same choice\n>> when i do.\n>> I can reproduce it with a simplified set of the data itself though.\n>>\n>> here's the query, and the analyzed plan:\n>> select count(*)\n>> from d2\n>> join g2 on g2.gid=d2.gid\n>> where g2.k=1942\n>\n>\n> 1) Could you show the output of the following queries, please?\n> select relname,relpages,reltuples::numeric\n> from pg_class where oid in ('d2'::regclass, 'g2'::regclass);\n> select attrelid::regclass, attname,\n> CASE WHEN attstattarget<0 THEN\n> current_setting('default_statistics_target')::int4 ELSE attstattarget END\n> from pg_attribute\n> where attrelid in ('d2'::regclass, 'g2'::regclass) and attname='gid';\n>\n> 2) Will it help running the following?:\n> ALTER TABLE d2 ALTER gid SET STATISTICS 500;\n> VACUUM ANALYZE d2;\n> EXPLAIN (ANALYZE, BUFFERS) ...\n> SET enable_seqscan TO 'off';\n> EXPLAIN (ANALYZE, BUFFERS) ...\n>\n>\n> --\n> Victor Y. Yegorov\n>\n\n\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\n1) attrelid | attname | attstattarget ----------+---------+--------------- g2 | gid | 100 d2 | gid | 100(2 rows)\nsetting statistics too 500 works!I already tried overruling pg_statistic.stadistinct, but that didn't work.thank you all for your help!!Cheers,Willy-Bas\nOn Wed, Jun 26, 2013 at 10:46 PM, Victor Yegorov <[email protected]> wrote:\n2013/6/26 Willy-Bas Loos <[email protected]>\n\npostgres does a seqscan, even though there is an index present and it should be much more efficient to use it.I tried to synthetically reproduce it, but it won't make the same choice when i do.\nI can reproduce it with a simplified set of the data itself though.here's the query, and the analyzed plan:select count(*) from d2join g2 on g2.gid=d2.gidwhere g2.k=1942\n1) Could you show the output of the following queries, please?select relname,relpages,reltuples::numeric from pg_class where oid in ('d2'::regclass, 'g2'::regclass);\nselect attrelid::regclass, attname, CASE WHEN attstattarget<0 THEN current_setting('default_statistics_target')::int4 ELSE attstattarget END\n\n from pg_attribute where attrelid in ('d2'::regclass, 'g2'::regclass) and attname='gid';2) Will it help running the following?:\nALTER TABLE d2 ALTER gid SET STATISTICS 500;VACUUM ANALYZE d2;EXPLAIN (ANALYZE, BUFFERS) ...SET enable_seqscan TO 'off';\nEXPLAIN (ANALYZE, BUFFERS) ...-- Victor Y. Yegorov\n\n-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 26 Jun 2013 23:35:39 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seqscan for 100 out of 3M rows, index present"
}
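For reference, the change that resolved the thread, with a check of the per-column estimate it produces; a plain ANALYZE is enough here to rebuild the statistics, the VACUUM part of Victor's suggestion is not required for that:

ALTER TABLE d2 ALTER COLUMN gid SET STATISTICS 500;
ANALYZE d2;

-- the n_distinct estimate the planner now works with for d2.gid
SELECT attname, n_distinct
  FROM pg_stats
 WHERE tablename = 'd2'
   AND attname = 'gid';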
] |
[
{
"msg_contents": "Hey guys,\n\nI suspect I'll get an answer equivalent to \"the planner treats that like \na variable,\" but I really hope not because it renders partitions \nessentially useless to us. This is as recent as 9.1.9 and constraint \nexclusion is enabled.\n\nWhat I have is this test case:\n\nCREATE TABLE part_test (\n fake INT,\n part_col TIMESTAMP WITHOUT TIME ZONE\n);\n\nCREATE TABLE part_test_1 (\n CHECK (part_col >= '2013-05-01' AND\n part_col < '2013-06-01')\n) INHERITS (part_test);\n\nCREATE TABLE part_test_2 (\n CHECK (part_col >= '2013-04-01' AND\n part_col < '2013-05-01')\n) INHERITS (part_test);\n\nAnd this query performs a sequence scan across all partitions:\n\nEXPLAIN ANALYZE\nSELECT * FROM part_test\n WHERE part_col > CURRENT_DATE;\n\nThe CURRENT_DATE value is clearly more recent than any of the \npartitions, yet it checks them anyway. The only way to get it to \nproperly constrain partitions is to use a static value:\n\nEXPLAIN ANALYZE\nSELECT * FROM part_test\n WHERE part_col > '2013-06-27';\n\nBut developers never do this. Nor should they. I feel like an idiot even \nasking this, because it seems so wrong, and I can't seem to come up with \na workaround other than, \"Ok devs, hard code dates into all of your \nqueries from now on.\"\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 11:15:43 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitions not Working as Expected"
},
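For anyone reproducing the test case from the first message, the contrast is easy to see side by side; constraint exclusion must not be disabled for any pruning at all, and the literal date is only an example:

SHOW constraint_exclusion;   -- 'partition' (the default) or 'on' is required

-- prunes: the comparison value is a constant the planner can test
-- against each partition's CHECK constraint
EXPLAIN SELECT * FROM part_test WHERE part_col > DATE '2013-06-27';

-- scans every child: CURRENT_DATE is only STABLE, so it is not folded
-- to a constant while the constraint-exclusion proof runs
EXPLAIN SELECT * FROM part_test WHERE part_col > CURRENT_DATE;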
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Shaun Thomas\n> Sent: Thursday, June 27, 2013 12:16 PM\n> To: [email protected]\n> Subject: [PERFORM] Partitions not Working as Expected\n> \n> Hey guys,\n> \n> I suspect I'll get an answer equivalent to \"the planner treats that like a\n> variable,\" but I really hope not because it renders partitions essentially\n> useless to us. This is as recent as 9.1.9 and constraint exclusion is enabled.\n> \n> What I have is this test case:\n> \n> CREATE TABLE part_test (\n> fake INT,\n> part_col TIMESTAMP WITHOUT TIME ZONE\n> );\n> \n> CREATE TABLE part_test_1 (\n> CHECK (part_col >= '2013-05-01' AND\n> part_col < '2013-06-01')\n> ) INHERITS (part_test);\n> \n> CREATE TABLE part_test_2 (\n> CHECK (part_col >= '2013-04-01' AND\n> part_col < '2013-05-01')\n> ) INHERITS (part_test);\n> \n> And this query performs a sequence scan across all partitions:\n> \n> EXPLAIN ANALYZE\n> SELECT * FROM part_test\n> WHERE part_col > CURRENT_DATE;\n> \n> The CURRENT_DATE value is clearly more recent than any of the partitions,\n> yet it checks them anyway. The only way to get it to properly constrain\n> partitions is to use a static value:\n> \n> EXPLAIN ANALYZE\n> SELECT * FROM part_test\n> WHERE part_col > '2013-06-27';\n> \n> But developers never do this. Nor should they. I feel like an idiot even asking\n> this, because it seems so wrong, and I can't seem to come up with a\n> workaround other than, \"Ok devs, hard code dates into all of your queries\n> from now on.\"\n> \n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n> \n\nDoesn't have to be hardcoded.\nIf executed as dynamic sql, it will be re-planned properly, e.g.:\n\nlQueryString := 'SELECT MAX(cycle_date_time) AS MaxDT\n FROM gp_cycle_' || partition_extension::varchar ||\n ' WHERE cell_id = ' || i_n_Cell_id::varchar ||\n ' AND part_type_id = ' || i_n_PartType_id::varchar ||\n ' AND cycle_date_time <= TIMESTAMP ' || quote_literal(cast(i_t_EndDate AS VARCHAR));\n IF (lQueryString IS NOT NULL) THEN\n EXECUTE lQueryString INTO lEndDate;\n\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 17:08:43 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "On 06/27/2013 12:08 PM, Igor Neyman wrote:\n\n> Doesn't have to be hardcoded.\n> If executed as dynamic sql, it will be re-planned properly, e.g.:\n\nWell yeah. That's not really the point, though. Aside from existing \ncode, hard-coding is generally frowned upon. Our devs have been using \nCURRENT_DATE and its ilk for over six years now.\n\nSo now I get to tell our devs to refactor six years of JAVA code and \nfind any place they use CURRENT_DATE, and replace it with an ORM \nvariable for the current date instead.\n\nAt this point I wonder why CURRENT_DATE even exists, if using it is \napparently detrimental to query execution.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 12:17:42 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "On Thu, Jun 27, 2013 at 10:17 AM, Shaun Thomas <[email protected]>wrote:\n\n>\n> Well yeah. That's not really the point, though. Aside from existing code,\n> hard-coding is generally frowned upon. Our devs have been using\n> CURRENT_DATE and its ilk for over six years now.\n>\n\nWould it help to put the current_date call in a wrapper function and coerce\nit as IMMUTABLE? A quick test shows that constraint exclusion seems to kick\nin, but I can't speak intelligently about whether that is wise or not.\n\nOn Thu, Jun 27, 2013 at 10:17 AM, Shaun Thomas <[email protected]> wrote:\n\nWell yeah. That's not really the point, though. Aside from existing code, hard-coding is generally frowned upon. Our devs have been using CURRENT_DATE and its ilk for over six years now.\nWould it help to put the current_date call in a wrapper function and coerce it as IMMUTABLE? A quick test shows that constraint exclusion seems to kick in, but I can't speak intelligently about whether that is wise or not.",
"msg_date": "Thu, 27 Jun 2013 10:34:30 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
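A sketch of the wrapper bricklen describes; the function name here is made up, and marking it IMMUTABLE is deliberately inaccurate (the value changes every day), so any plan cached across a date change can silently keep using a stale constant:

-- hypothetical wrapper; IMMUTABLE is an overstatement the planner will trust
CREATE OR REPLACE FUNCTION today_immutable()
RETURNS date
LANGUAGE sql
IMMUTABLE
AS $$ SELECT current_date $$;

-- the call is evaluated to a constant at plan time, so exclusion can apply
EXPLAIN SELECT * FROM part_test WHERE part_col > today_immutable();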
{
"msg_contents": "On Thu, Jun 27, 2013 at 10:34 AM, bricklen <[email protected]> wrote:\n\n> On Thu, Jun 27, 2013 at 10:17 AM, Shaun Thomas <[email protected]>wrote:\n>\n>>\n>> Well yeah. That's not really the point, though. Aside from existing code,\n>> hard-coding is generally frowned upon. Our devs have been using\n>> CURRENT_DATE and its ilk for over six years now.\n>>\n>\n> Would it help to put the current_date call in a wrapper function and\n> coerce it as IMMUTABLE? A quick test shows that constraint exclusion seems\n> to kick in, but I can't speak intelligently about whether that is wise or\n> not.\n>\n>\nOr what about something like DATE_TRUNC(\"DAY\", now())? Or would that run\ninto the same optimization/planner problems as CURRENT_DATE?\n\nOn Thu, Jun 27, 2013 at 10:34 AM, bricklen <[email protected]> wrote:\nOn Thu, Jun 27, 2013 at 10:17 AM, Shaun Thomas <[email protected]> wrote:\n\nWell yeah. That's not really the point, though. Aside from existing code, hard-coding is generally frowned upon. Our devs have been using CURRENT_DATE and its ilk for over six years now.\nWould it help to put the current_date call in a wrapper function and coerce it as IMMUTABLE? A quick test shows that constraint exclusion seems to kick in, but I can't speak intelligently about whether that is wise or not.\n\nOr what about something like DATE_TRUNC(\"DAY\", now())? Or would that run into the same optimization/planner problems as CURRENT_DATE?",
"msg_date": "Thu, 27 Jun 2013 10:42:13 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "On 06/27/2013 12:42 PM, Dave Johansen wrote:\n\n> Or what about something like DATE_TRUNC(\"DAY\", now())? Or would that run\n> into the same optimization/planner problems as CURRENT_DATE?\n\nSame issue. This seems to work, though I'm not entirely sure of the \nimplications:\n\nUPDATE pg_proc\n SET provolatile = 'i'\n WHERE proname = 'date_in';\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 13:16:00 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 06/27/2013 12:42 PM, Dave Johansen wrote:\n>> Or what about something like DATE_TRUNC(\"DAY\", now())? Or would that run\n>> into the same optimization/planner problems as CURRENT_DATE?\n\n> Same issue. This seems to work, though I'm not entirely sure of the \n> implications:\n\n> UPDATE pg_proc\n> SET provolatile = 'i'\n> WHERE proname = 'date_in';\n\nThat will break things: CURRENT_DATE will then be equivalent to just\nwriting today's date as a literal.\n\nIt's conceivable that it wouldn't break any scenario that you personally\ncare about, if you never use CURRENT_DATE in any view, rule, column\ndefault expression, or cached plan; but it seems mighty risky from here.\n\n\nI don't see any very good solution to your problem within the current\napproach to partitioning, which is basically theorem-proving. That\nproof engine has no concept of time passing, let alone the sort of\ndetailed knowledge of the semantics of this particular function that\nwould allow it to conclude \"if CURRENT_DATE > '2013-06-20' is true now,\nit will always be so in the future as well\".\n\nI think most hackers agree that the way forward on partitioning involves\nbuilding hard-wired logic that selects the correct partition(s) at\nrun-time, so that it wouldn't particularly matter where we got the\ncomparison value from or whether it was a constant. So I'm not feeling\nmotivated to try to hack some solution for this case into the theorem\nprover.\n\nUnfortunately, it's likely to be awhile before that next-generation\npartitioning code shows up. But major extensions to the proof engine\nwouldn't be a weekend project, either...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 14:42:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "We have also run into this with our production databases. We worked around the issue by adding an index to each child table so that it scans all the child index's instead of the child table's. For us this made a large performance improvement.\n\nCREATE INDEX part_test_1_idx ON part_test_1\n USING btree (part_col);\n\nCREATE INDEX part_test_2_idx ON part_test_2\n USING btree (part_col);\n\nLloyd Albin\nStatistical Center for HIV/AIDS Research and Prevention (SCHARP)\nVaccine and Infectious Disease Division (VIDD)\nFred Hutchinson Cancer Research Center (FHCRC)\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Shaun Thomas\nSent: Thursday, June 27, 2013 11:16 AM\nTo: Dave Johansen\nCc: bricklen; [email protected]\nSubject: Re: [PERFORM] Partitions not Working as Expected\n\nOn 06/27/2013 12:42 PM, Dave Johansen wrote:\n\n> Or what about something like DATE_TRUNC(\"DAY\", now())? Or would that \n> run into the same optimization/planner problems as CURRENT_DATE?\n\nSame issue. This seems to work, though I'm not entirely sure of the\nimplications:\n\nUPDATE pg_proc\n SET provolatile = 'i'\n WHERE proname = 'date_in';\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 18:45:09 +0000",
"msg_from": "\"Albin, Lloyd P\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "On 06/27/2013 01:42 PM, Tom Lane wrote:\n\n> That will break things: CURRENT_DATE will then be equivalent to just\n> writing today's date as a literal.\n\nInteresting. I tested it by creating a view and a table with a default, \nand it always seems to get translated to:\n\n('now'::text)::date\n\nBut I'll take your explanation at face value, since that doesn't imply \nwhat the output would be. What's interesting is that EnterpriseDB has \ntheir own pg_catalog.current_date function that gets called by the \nCURRENT_DATE keyword. So unlike in vanilla PG, I could mark just the \ncurrent_date function as immutable without affecting a lot of other \ninternals.\n\nOn EDB, this actually works:\n\nUPDATE pg_proc\n SET provolatile = 'i'\n WHERE proname = 'current_date';\n\nThen the plan gets pared down as desired. But again, if the date were to \nroll over, I'm not sure what would happen. I wish I could test that \nwithout fiddling with machine times.\n\n> I don't see any very good solution to your problem within the current\n> approach to partitioning, which is basically theorem-proving. That\n> proof engine has no concept of time passing, let alone the sort of\n> detailed knowledge of the semantics of this particular function that\n> would allow it to conclude \"if CURRENT_DATE > '2013-06-20' is true now,\n> it will always be so in the future as well\".\n\nI get it. From the context of two months ago, CURRENT_DATE > \n'2013-06-20' would return a different answer than it would today, which \nisn't really good for proofs.\n\nThe only way for it to work as \"expected\" would be to add a first pass \nto resolve any immediate variables, which would effectively throw away \nplan caches. I'd actually be OK with that.\n\n> I think most hackers agree that the way forward on partitioning\n> involves building hard-wired logic that selects the correct\n> partition(s) at run-time, so that it wouldn't particularly matter\n> where we got the comparison value from or whether it was a constant.\n\nFair enough. I'll stop telling devs to use current_date instead of ORM \ninjections, then. Hopefully we can track down and tweak the affected \nqueries on the tables we're partitioning without too much work and QA.\n\nThanks, Tom!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 14:14:34 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions not Working as Expected"
},
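For reference, the usual way around this on the application or PL side is to turn today's date into a literal before the query is planned, so the proof engine sees a constant. A minimal plpgsql sketch, assuming a parent table part_test partitioned on part_col as in Lloyd's example (the names are borrowed from there; the function itself is hypothetical):

CREATE OR REPLACE FUNCTION recent_part_rows()
RETURNS SETOF part_test AS $$
BEGIN
    -- format() with %L quotes the already-evaluated date as a literal,
    -- so constraint exclusion can prune partitions for this one plan.
    RETURN QUERY EXECUTE format(
        'SELECT * FROM part_test WHERE part_col >= %L',
        CURRENT_DATE - 10);
END;
$$ LANGUAGE plpgsql;

The cost is that the statement is re-planned on every call, which is exactly what keeps the pruning correct after midnight rolls over.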
{
"msg_contents": "On 06/27/2013 01:45 PM, Albin, Lloyd P wrote:\n\n> We have also run into this with our production databases. We worked\n> around the issue by adding an index to each child table so that it\n> scans all the child index's instead of the child table's. For us\n> this made a large performance improvement.\n\nHaha. Yeah, that's assumed. I'd never use a partition set without the \nconstraint column in at least one index. The proof of concept was just \nto illustrate that the planner doesn't even get that far in ignoring \n\"empty\" partitions. Sure, scanning the inapplicable child tables has a \nlow cost, but it's not zero. With about a dozen of them, query times \nincrease from 0.130ms to 0.280ms for my test case. Not a lot in the long \nrun, but in a OLTP system, it can be fairly noticeable.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 14:17:40 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "\nOn 06/27/2013 03:14 PM, Shaun Thomas wrote:\n> On 06/27/2013 01:42 PM, Tom Lane wrote:\n>\n>> That will break things: CURRENT_DATE will then be equivalent to just\n>> writing today's date as a literal.\n>\n> Interesting. I tested it by creating a view and a table with a \n> default, and it always seems to get translated to:\n>\n> ('now'::text)::date\n>\n> But I'll take your explanation at face value, since that doesn't imply \n> what the output would be. What's interesting is that EnterpriseDB has \n> their own pg_catalog.current_date function that gets called by the \n> CURRENT_DATE keyword. So unlike in vanilla PG, I could mark just the \n> current_date function as immutable without affecting a lot of other \n> internals.\n>\n> On EDB, this actually works:\n>\n> UPDATE pg_proc\n> SET provolatile = 'i'\n> WHERE proname = 'current_date';\n\n\nBut that's a lie, surely. If it breaks you have nobody to blame but \nyourself. There's a reason EDB haven't marked their function immutable - \nit's not.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 15:49:26 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "On 06/27/2013 02:49 PM, Andrew Dunstan wrote:\n\n> But that's a lie, surely. If it breaks you have nobody to blame but\n> yourself. There's a reason EDB haven't marked their function\n> immutable - it's not.\n\nWell, yeah. That's why I'm testing it in a dev system. :)\n\nNone of this will probably pan out, but I need to see the limits of how \nbadly I can abuse the database.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 14:55:44 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "On 2013-06-27 14:42:26 -0400, Tom Lane wrote:\n> Shaun Thomas <[email protected]> writes:\n> > On 06/27/2013 12:42 PM, Dave Johansen wrote:\n> >> Or what about something like DATE_TRUNC(\"DAY\", now())? Or would that run\n> >> into the same optimization/planner problems as CURRENT_DATE?\n> \n> > Same issue. This seems to work, though I'm not entirely sure of the \n> > implications:\n> \n> > UPDATE pg_proc\n> > SET provolatile = 'i'\n> > WHERE proname = 'date_in';\n> \n> That will break things: CURRENT_DATE will then be equivalent to just\n> writing today's date as a literal.\n> \n> It's conceivable that it wouldn't break any scenario that you personally\n> care about, if you never use CURRENT_DATE in any view, rule, column\n> default expression, or cached plan; but it seems mighty risky from here.\n\n> I don't see any very good solution to your problem within the current\n> approach to partitioning, which is basically theorem-proving. That\n> proof engine has no concept of time passing, let alone the sort of\n> detailed knowledge of the semantics of this particular function that\n> would allow it to conclude \"if CURRENT_DATE > '2013-06-20' is true now,\n> it will always be so in the future as well\".\n\nCouldn't we at least significantly improve on the status quo by\ndetecting we're currently planning a query that's only going to be\nexecuted once (because it's directly executed or because were planning a\nonetime plan for specific parameters) and inline stable functions before\ndoing the theorem proving?\n\nMaybe I am missing something here?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 22:43:17 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "\n> At this point I wonder why CURRENT_DATE even exists, if using it is\n> apparently detrimental to query execution.\n\nIt's good for inserts. ;-)\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 14:14:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
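A small hypothetical example of that insert-side use, where CURRENT_DATE is evaluated per row at insert time rather than frozen into a cached plan:

CREATE TABLE events (
    id      serial PRIMARY KEY,
    ev_date date NOT NULL DEFAULT CURRENT_DATE   -- evaluated at INSERT time
);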
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Couldn't we at least significantly improve on the status quo by\n> detecting we're currently planning a query that's only going to be\n> executed once (because it's directly executed or because were planning a\n> onetime plan for specific parameters) and inline stable functions before\n> doing the theorem proving?\n\nI think Haas went down that rabbit hole before you. The current\ndefinition of stable functions is not strong enough to guarantee that a\nplan-time evaluation would give the same result as a run-time\nevaluation, not even in one-shot-plan cases. The obvious reason why not\nis that the planner isn't using the same snapshot that the executor will\nuse (which is not that easy to change, see his failed patch from a year\nor so back). But even if we rejiggered things enough so the query did\nuse the same snapshot that'd been used for planning, I'm not very\nconvinced that such an assumption would be valid. The assumptions for\nstable functions are pretty weak really.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 17:17:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 06/27/2013 01:42 PM, Tom Lane wrote:\n>> That will break things: CURRENT_DATE will then be equivalent to just\n>> writing today's date as a literal.\n\n> Interesting. I tested it by creating a view and a table with a default, \n> and it always seems to get translated to:\n> ('now'::text)::date\n\nYeah, that is what the parser does with it. The way to read that is\n\"a constant of type text, containing the string 'now', to which is\napplied a run-time coercion to type date\". The run-time coercion is\nequivalent to (and implemented by) calling text_out then date_in.\nIf date_in is marked immutable, then the planner will correctly conclude\nthat it can fold the whole thing to a date constant on sight. Now you\nhave a plan with a hard-wired value for the current date, which will\nbegin to give wrong answers after midnight passes. If your usage\npattern is such that no query plan survives across a day boundary,\nyou might not notice ... but it's still wrong.\n\n> ... What's interesting is that EnterpriseDB has \n> their own pg_catalog.current_date function that gets called by the \n> CURRENT_DATE keyword.\n\nYeah, we really ought to do likewise in the community code. But that\ndoesn't affect the fundamental semantic issue here, which is that you\ncan't mark the expression immutable without creating incorrect cached\nplans.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jun 2013 17:44:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions not Working as Expected"
}
] |
[
{
"msg_contents": "Hi, \nI am experiencing a similar issue as the one mentioned in this post http://stackoverflow.com/questions/3100072/postgresql-slow-on-a-large-table-with-arrays-and-lots-of-updates/3100232#3100232\nHowever the post is written for a 8.3 installation, so I'm wondering if the fillfactor problem is still roughly the same in 9.2, and hence would have a similar effect when adjusted?\n\nRegards Niels Kristian \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Jul 2013 13:27:43 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fillfactor in postgresql 9.2"
},
{
"msg_contents": "Niels Kristian Schjødt wrote:\r\n> I am experiencing a similar issue as the one mentioned in this post\r\n> http://stackoverflow.com/questions/3100072/postgresql-slow-on-a-large-table-with-arrays-and-lots-of-\r\n> updates/3100232#3100232\r\n> However the post is written for a 8.3 installation, so I'm wondering if the fillfactor problem is\r\n> still roughly the same in 9.2, and hence would have a similar effect when adjusted?\r\n\r\nYes, lowering the fillfactor for a table will still\r\nincrease the chances of HOT updates, improving performance\r\nand reducing the need for maintenance.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Jul 2013 11:45:03 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fillfactor in postgresql 9.2"
},
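For example, a minimal sketch of reserving roughly 15% free space per page (the table name is made up). Note that only newly written pages honour the new fillfactor, so existing data has to be rewritten (VACUUM FULL, CLUSTER or a dump/reload) before it benefits:

ALTER TABLE mytable SET (fillfactor = 85);
VACUUM FULL mytable;   -- optional one-off rewrite so existing pages get the free space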
{
"msg_contents": "On Tue, Jul 2, 2013 at 5:15 PM, Albe Laurenz <[email protected]>wrote:\n\n> Niels Kristian Schjødt wrote:\n> > I am experiencing a similar issue as the one mentioned in this post\n> >\n> http://stackoverflow.com/questions/3100072/postgresql-slow-on-a-large-table-with-arrays-and-lots-of-\n> > updates/3100232#3100232\n> > However the post is written for a 8.3 installation, so I'm wondering if\n> the fillfactor problem is\n> > still roughly the same in 9.2, and hence would have a similar effect\n> when adjusted?\n>\n> Yes, lowering the fillfactor for a table will still\n> increase the chances of HOT updates, improving performance\n> and reducing the need for maintenance.\n>\n\nOur experience while testing HOT was a bit different (and I think that's\nwhy the default fillfactor was left unchanged). Even if you start at 100,\nthe system quickly stabilizes after first update on every page. Even though\nthe first update is a non-HOT update, the subsequent update on the same\npage will be a HOT update assuming there are no long running transactions\nand HOT gets a chance to clean up the dead space left by previous update.\n\nHaving said that, it may not be a bad idea to start with a small free space\nin each page, may be just enough to hold one more row (plus a few more\nbytes for the line pointers etc).\n\nAlso, you need to be careful about updating many rows on the same page in a\nsingle transaction or having an open long running transaction. They can\neasily stop HOT's ability to aggressively clean up dead space and stop the\nbloat.\n\nThanks,\nPavan\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\nOn Tue, Jul 2, 2013 at 5:15 PM, Albe Laurenz <[email protected]> wrote:\nNiels Kristian Schjødt wrote:\n> I am experiencing a similar issue as the one mentioned in this post\n> http://stackoverflow.com/questions/3100072/postgresql-slow-on-a-large-table-with-arrays-and-lots-of-\n\n\n> updates/3100232#3100232\n> However the post is written for a 8.3 installation, so I'm wondering if the fillfactor problem is\n> still roughly the same in 9.2, and hence would have a similar effect when adjusted?\n\nYes, lowering the fillfactor for a table will still\nincrease the chances of HOT updates, improving performance\nand reducing the need for maintenance.Our experience while testing HOT was a bit different (and I think that's why the default fillfactor was left unchanged). Even if you start at 100, the system quickly stabilizes after first update on every page. Even though the first update is a non-HOT update, the subsequent update on the same page will be a HOT update assuming there are no long running transactions and HOT gets a chance to clean up the dead space left by previous update.\nHaving said that, it may not be a bad idea to start with a small free space in each page, may be just enough to hold one more row (plus a few more bytes for the line pointers etc).\nAlso, you need to be careful about updating many rows on the same page in a single transaction or having an open long running transaction. They can easily stop HOT's ability to aggressively clean up dead space and stop the bloat.\nThanks,Pavan-- Pavan Deolaseehttp://www.linkedin.com/in/pavandeolasee",
"msg_date": "Tue, 2 Jul 2013 17:39:55 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fillfactor in postgresql 9.2"
}
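One way to watch whether updates are in fact being done as HOT updates is the statistics view (hypothetical table name):

SELECT relname, n_tup_upd, n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
WHERE relname = 'mytable';

A low hot_pct on a heavily updated table suggests the pages have no room left, or that long-running transactions are blocking the cleanup Pavan describes.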
] |
[
{
"msg_contents": "Hi all\n\njust wanna share a link I've stumbled across today.\n\nhttps://lwn.net/Articles/557220/\n\nquoting the relevant part of the announcement:\n\n\"This is the culmination of several months of effort, to determine the results\nof using different tuning options in the Linux kernel, with different filesystems\nrunning on flash-based block devices.\"\n\nnot specifically tailored to db use, but after a quick glance I can say\nit seems to contain a lot of useful info.\n\n\nAndrea\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jul 2013 09:01:29 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": true,
"msg_subject": "[OT] A flash filesystem tuning guide"
}
] |
[
{
"msg_contents": "I have made some changes in my postgresql.conf, well, I made two changes \nin this file. the first time, my queries had improved on their execution \ntime considerably but in the second change, I seem my queries have not \nimproved on the contrary they have come back to be slow or at best, they \nhave not changed in its previous improvement.\n\nThese are my changes:\n\n+ shared_buffers = 4GB.\n+ bgwriter_lru_maxpages = 250.\n+ synchronous_commit = off.\n+ effective_io_concurrency = 3.\n+ checkpoint_segments = 64.\n+ checkpoint_timeout = 45min\n+ logging_collector = on\n+ log_min_duration_statement = 500\n+ log_temp_files = 0.\n\nmy max connections are 150\n\nPlease, what would be my error?\n\nThank you for the tips,\n\nDavid Carpio\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 03 Jul 2013 10:10:06 -0500",
"msg_from": "David Carpio <[email protected]>",
"msg_from_op": true,
"msg_subject": "My changes in the postgresql.conf does not work"
},
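Before tuning further, it is worth confirming which of those values are actually live: shared_buffers only changes after a full restart, while most of the other settings listed take effect on a reload. A quick check, for instance:

SELECT pg_reload_conf();   -- a restart is needed for shared_buffers

SELECT name, setting, source, context
FROM pg_settings
WHERE name IN ('shared_buffers', 'synchronous_commit',
               'checkpoint_segments', 'checkpoint_timeout');

Settings whose context is 'postmaster' only change on restart.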
{
"msg_contents": "David Carpio wrote\n> I have made some changes in my postgresql.conf, well, I made two changes \n> in this file. the first time, my queries had improved on their execution \n> time considerably but in the second change, I seem my queries have not \n> improved on the contrary they have come back to be slow or at best, they \n> have not changed in its previous improvement.\n> \n> These are my changes:\n> \n> + shared_buffers = 4GB.\n> + bgwriter_lru_maxpages = 250.\n> + synchronous_commit = off.\n> + effective_io_concurrency = 3.\n> + checkpoint_segments = 64.\n> + checkpoint_timeout = 45min\n> + logging_collector = on\n> + log_min_duration_statement = 500\n> + log_temp_files = 0.\n> \n> my max connections are 150\n> \n> Please, what would be my error?\n> \n> Thank you for the tips,\n> \n> David Carpio\n\nIt might increase the likelihood of a meaningful response if you include:\n\n1) The default for each of the parameters\n2) The value you used foe each parameter in the situation where performance\nimproved\n\nVery few people memorized the first and it would be interesting to have the\nsecond for reference.\n\nAlso,\n\nPerformance questions are hardware/software specific. What are you running\nand how are you testing?\n\nYou should also do some reading about performance question posting and\nperformance in general:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nOne question is whether you really want to make the default\n\"synchronous_commit\" setting to be \"off\"? Can you even measure the actual\ndifference in your specific use-case that turning this off makes or are you\njust throwing stuff against the wall that say \"this can improve performance\"\nand hoping things work out for the best? Since it can be turned on/off on a\nper-transaction basis you should generally try to only have it off in areas\nthat are meaningful and were you've acknowledged the corresponding\nadditional risk specific to that area.\n\nDavid J.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/My-changes-in-the-postgresql-conf-does-not-work-tp5762369p5762418.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Jul 2013 10:40:48 -0700 (PDT)",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: My changes in the postgresql.conf does not work"
}
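As a follow-up to that last point, a minimal example of relaxing synchronous_commit only where the risk has been accepted, instead of globally in postgresql.conf (the table is hypothetical):

BEGIN;
SET LOCAL synchronous_commit TO off;   -- applies to this transaction only
INSERT INTO app_log (message) VALUES ('non-critical entry');
COMMIT;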
] |
[
{
"msg_contents": "Hey,\n\nWe have a search method that depending on search params will join 3-5 tables, craft the joins and where section. Only problem is, this is done in rather horrible java code. So using pgtap for tests is not feasible.\nI want to move the database complexity back to database, almost writing the query construction in the plpgsql or python as stores procedure, any suggestions ?\n\nUnfortunately PostgreSQL won't eliminate unnecessary joins from a view, so I can't just create one view and simple code adding where's, order by, etc.\n\nNo, I don't want to use orm.\n\nThanks. \n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 Jul 2013 14:57:06 +0100",
"msg_from": "Greg Jaskiewicz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dynamic queries in stored procedure"
},
{
"msg_contents": "\nOn 07/05/2013 09:57 AM, Greg Jaskiewicz wrote:\n> Hey,\n>\n> We have a search method that depending on search params will join 3-5 tables, craft the joins and where section. Only problem is, this is done in rather horrible java code. So using pgtap for tests is not feasible.\n> I want to move the database complexity back to database, almost writing the query construction in the plpgsql or python as stores procedure, any suggestions ?\n>\n> Unfortunately PostgreSQL won't eliminate unnecessary joins from a view, so I can't just create one view and simple code adding where's, order by, etc.\n>\n> No, I don't want to use orm.\n>\n\nIt's a matter of taste. Pretty much every PL has facilities for \nconstructing and running dynamic sql. PLPgsql ,PLPerl, PLV8 ...\n\ncheers\n\nandrew\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 05 Jul 2013 10:26:42 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic queries in stored procedure"
},
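As an illustration, a minimal plpgsql sketch of the kind of dynamic construction being discussed; the table and column names are invented, and a real version would add the remaining joins and ORDER BY handling that the Java code currently does:

CREATE OR REPLACE FUNCTION search_items(p_name text DEFAULT NULL,
                                        p_category integer DEFAULT NULL)
RETURNS SETOF items AS $$
DECLARE
    q     text   := 'SELECT i.* FROM items i';
    conds text[] := '{}';
BEGIN
    IF p_category IS NOT NULL THEN
        q := q || ' JOIN categories c ON c.id = i.category_id';
        conds := conds || format('c.id = %L', p_category);
    END IF;
    IF p_name IS NOT NULL THEN
        conds := conds || format('i.name ILIKE %L', '%' || p_name || '%');
    END IF;
    IF array_length(conds, 1) > 0 THEN
        q := q || ' WHERE ' || array_to_string(conds, ' AND ');
    END IF;
    RETURN QUERY EXECUTE q;   -- planned per call, so only the needed joins appear
END;
$$ LANGUAGE plpgsql;

Because the statement is assembled as text, quoting every user-supplied value with format()'s %L (or quote_literal) is what keeps it safe from SQL injection, and a function like this is straightforward to cover with pgTAP.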
{
"msg_contents": "2013/7/5 Greg Jaskiewicz <[email protected]>\n\n> Hey,\n>\n> We have a search method that depending on search params will join 3-5\n> tables, craft the joins and where section. Only problem is, this is done in\n> rather horrible java code. So using pgtap for tests is not feasible.\n> I want to move the database complexity back to database, almost writing\n> the query construction in the plpgsql or python as stores procedure, any\n> suggestions ?\n>\n> Unfortunately PostgreSQL won't eliminate unnecessary joins from a view, so\n> I can't just create one view and simple code adding where's, order by, etc.\n>\n> No, I don't want to use orm.\n>\n> Thanks.\n>\n>\nIf returning type of function is always the same - you can achieve that\nwith any pl language in postgres...\n\nbefore 9.2 we have used plv8 (to return text as formated JSON) - because of\nwe haven't known expected number of columns and type for each column in\nmoment we created function....\n\n From 9.2 you can use any procedural language and return JSON datatype...\n\n\nCheers,\n\nMisa\n\n2013/7/5 Greg Jaskiewicz <[email protected]>\nHey,\n\nWe have a search method that depending on search params will join 3-5 tables, craft the joins and where section. Only problem is, this is done in rather horrible java code. So using pgtap for tests is not feasible.\nI want to move the database complexity back to database, almost writing the query construction in the plpgsql or python as stores procedure, any suggestions ?\n\nUnfortunately PostgreSQL won't eliminate unnecessary joins from a view, so I can't just create one view and simple code adding where's, order by, etc.\n\nNo, I don't want to use orm.\n\nThanks.If returning type of function is always the same - you can achieve that with any pl language in postgres...\nbefore 9.2 we have used plv8 (to return text as formated JSON) - because of we haven't known expected number of columns and type for each column in moment we created function....\nFrom 9.2 you can use any procedural language and return JSON datatype...\nCheers,Misa",
"msg_date": "Fri, 5 Jul 2013 16:46:31 +0200",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic queries in stored procedure"
},
{
"msg_contents": "I do this all the time; In fact, I've written a dynamic aggregate engine \nthat uses a sudo bind variable technique & dynamic joins with dependency \ninjection because the table names and query logic are not known at run \ntime - all in plpgsql.\n\nsb\nOn 7/5/2013 9:26 AM, Andrew Dunstan wrote:\n>\n> On 07/05/2013 09:57 AM, Greg Jaskiewicz wrote:\n>> Hey,\n>>\n>> We have a search method that depending on search params will join 3-5 \n>> tables, craft the joins and where section. Only problem is, this is \n>> done in rather horrible java code. So using pgtap for tests is not \n>> feasible.\n>> I want to move the database complexity back to database, almost \n>> writing the query construction in the plpgsql or python as stores \n>> procedure, any suggestions ?\n>>\n>> Unfortunately PostgreSQL won't eliminate unnecessary joins from a \n>> view, so I can't just create one view and simple code adding where's, \n>> order by, etc.\n>>\n>> No, I don't want to use orm.\n>>\n>\n> It's a matter of taste. Pretty much every PL has facilities for \n> constructing and running dynamic sql. PLPgsql ,PLPerl, PLV8 ...\n>\n> cheers\n>\n> andrew\n>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 05 Jul 2013 09:50:47 -0500",
"msg_from": "Scott Barney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dynamic queries in stored procedure"
}
] |
[
{
"msg_contents": "I imported a large area of OpenStreetMap's planet.osm file into a\npostgresql database. The database contains a table called nodes. Each node\nhas a geometry column called geom and a hstore column called tags. I need\nto extract nodes along a line that have certain keys in the tags column. To\ndo that I use the following query:\n\nSELECT id, tags FROM nodes WHERE ST_DWithin(nodes.geom,\nST_MakeLine('{$geom1}', '{$geom2}'), 0.001) AND tags ? '{$type}';\n\n$geom1 and $geom2 are geometries for start and end points of my line.\nThe $type variable contains the key I want to search for. Now, it can have\none of the following values: 'historic' or 'tourist'.\n\nThe query given above works but it is too slow. I guess searching for a key\nin tags column takes too much time. I read about GIN and GIST indexes and I\ngenerated a GIN index using the following query:\n\nCREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);\n\nAfter creating the index I searched again for nodes using the same first\nquery but there is no change in performance.\n\nHow can I properly use GIN and GIST to index tags column so I can faster\nsearch for nodes that have a certain key in tags column?\n\nThank you,\n\nRadu-Stefan\n\nI imported a large area of OpenStreetMap's planet.osm file into a postgresql database. The database contains a table called nodes. Each node has a geometry column called geom and a hstore column called tags. I need to extract nodes along a line that have certain keys in the tags column. To do that I use the following query:\n\nSELECT id, tags \nFROM nodes \nWHERE ST_DWithin(nodes.geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001) \nAND tags ? '{$type}';\n$geom1 and $geom2 are geometries for start and end points of my line.\nThe $type variable contains the key I want to search for. Now, it can have one of the following values: 'historic' or 'tourist'.\nThe query given above works but it is too slow. I guess searching for a key in tags column takes too much time. I read about GIN and GIST indexes and I generated a GIN index using the following query:\nCREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);\nAfter creating the index I searched again for nodes using the same first query but there is no change in performance.\nHow can I properly use GIN and GIST to index tags column so I can faster search for nodes that have a certain key in tags column?\nThank you,\nRadu-Stefan",
"msg_date": "Sun, 7 Jul 2013 10:28:11 +0300",
"msg_from": "Radu-Stefan Zugravu <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to properly index hstore tags column to faster search for keys"
},
{
"msg_contents": "On 07/07/13 08:28, Radu-Stefan Zugravu wrote:\n> Each node has a geometry column called geom and a hstore column\n> called tags. I need to extract nodes along a line that have certain\n> keys in the tags column. To do that I use the following query:\n\n> SELECT id, tags\n> FROM nodes\n> WHERE ST_DWithin(nodes.geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001)\n> AND tags ? '{$type}';\n\n> CREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);\n>\n> After creating the index I searched again for nodes using the same first\n> query but there is no change in performance.\n>\n> How can I properly use GIN and GIST to index tags column so I can faster\n> search for nodes that have a certain key in tags column?\n\nYour index definition looks OK. Try showing the output of EXPLAIN \nANALYSE for your query - that way we'll see if the index is being used. \nYou can always paste explain output to: http://explain.depesz.com/ if \nit's too long for the email.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 08:44:41 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
{
"msg_contents": "Hi,\nThank you for your answer.\nMy EXPLAIN ANALYZE output can be found here: http://explain.depesz.com/s/Wbo\n.\nAlso, there is a discution on this subject on dba.stackexchange.com:\nhttp://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys\n\n\nOn Mon, Jul 8, 2013 at 10:44 AM, Richard Huxton <[email protected]> wrote:\n\n> On 07/07/13 08:28, Radu-Stefan Zugravu wrote:\n>\n>> Each node has a geometry column called geom and a hstore column\n>> called tags. I need to extract nodes along a line that have certain\n>> keys in the tags column. To do that I use the following query:\n>>\n>\n> SELECT id, tags\n>> FROM nodes\n>> WHERE ST_DWithin(nodes.geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001)\n>> AND tags ? '{$type}';\n>>\n>\n> CREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);\n>>\n>> After creating the index I searched again for nodes using the same first\n>> query but there is no change in performance.\n>>\n>> How can I properly use GIN and GIST to index tags column so I can faster\n>> search for nodes that have a certain key in tags column?\n>>\n>\n> Your index definition looks OK. Try showing the output of EXPLAIN ANALYSE\n> for your query - that way we'll see if the index is being used. You can\n> always paste explain output to: http://explain.depesz.com/ if it's too\n> long for the email.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nRadu-Stefan Zugravu\n0755 950 145\n0760 903 464\[email protected]\[email protected]\n\nHi,Thank you for your answer.My EXPLAIN ANALYZE output can be found here: http://explain.depesz.com/s/Wbo.Also, there is a discution on this subject on dba.stackexchange.com: http://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys\nOn Mon, Jul 8, 2013 at 10:44 AM, Richard Huxton <[email protected]> wrote:\nOn 07/07/13 08:28, Radu-Stefan Zugravu wrote:\n\nEach node has a geometry column called geom and a hstore column\ncalled tags. I need to extract nodes along a line that have certain\nkeys in the tags column. To do that I use the following query:\n\n\n\nSELECT id, tags\nFROM nodes\nWHERE ST_DWithin(nodes.geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001)\nAND tags ? '{$type}';\n\n\n\nCREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);\n\nAfter creating the index I searched again for nodes using the same first\nquery but there is no change in performance.\n\nHow can I properly use GIN and GIST to index tags column so I can faster\nsearch for nodes that have a certain key in tags column?\n\n\nYour index definition looks OK. Try showing the output of EXPLAIN ANALYSE for your query - that way we'll see if the index is being used. You can always paste explain output to: http://explain.depesz.com/ if it's too long for the email.\n\n-- \n Richard Huxton\n Archonet Ltd\n-- Radu-Stefan Zugravu0755 950 1450760 903 464\[email protected]@yahoo.com",
"msg_date": "Mon, 8 Jul 2013 11:31:02 +0300",
"msg_from": "Radu-Stefan Zugravu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
{
"msg_contents": "On 08/07/13 09:31, Radu-Stefan Zugravu wrote:\n> Hi,\n> Thank you for your answer.\n> My EXPLAIN ANALYZE output can be found here:\n> http://explain.depesz.com/s/Wbo.\n\nThanks\n\n> Also, there is a discution on this subject on dba.stackexchange.com\n> <http://dba.stackexchange.com>:\n> http://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys\n\nThanks - also useful to know.\n\nI can't see anything wrong with your query. Reading it from the bottom \nupwards:\n1. Index used for \"historic\" search - builds a bitmap of blocks\n2. Index used for geometry search - builds a bitmap of blocks\n3. See where the bitmaps overlap (BitmapAnd)\n4. Grab those disk blocks and find the rows (Bitmap Heap Scan)\n\nThe whole thing takes under 20ms - what sort of time were you hoping for?\n\nThe bulk of it (15ms) is taken up locating the \"historic\" rows. There \nare 36351 of those, but presumably most of them are far away on the map.\n\nCould you post the explain without the index? I'm curious as to how slow \nit is just testing the tags after doing the geometry search.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 09:53:28 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
{
"msg_contents": "Any improvement is welcomed. The overall performance of the application is\nnot very good. It takes about 200 seconds to compute a path for not so far\nstar and end points. I want to improve this query as much as I can.\nHow exactly should I post the explain without the index? Do I have to drop\nall created indexes for the tags column? It takes some time to create them\nback.\n\n\nOn Mon, Jul 8, 2013 at 11:53 AM, Richard Huxton <[email protected]> wrote:\n\n> On 08/07/13 09:31, Radu-Stefan Zugravu wrote:\n>\n>> Hi,\n>> Thank you for your answer.\n>> My EXPLAIN ANALYZE output can be found here:\n>> http://explain.depesz.com/s/**Wbo <http://explain.depesz.com/s/Wbo>.\n>>\n>\n> Thanks\n>\n> Also, there is a discution on this subject on dba.stackexchange.com\n>> <http://dba.stackexchange.com>**:\n>> http://dba.stackexchange.com/**questions/45820/how-to-**\n>> properly-index-hstore-tags-**column-to-faster-search-for-**keys<http://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys>\n>>\n>\n> Thanks - also useful to know.\n>\n> I can't see anything wrong with your query. Reading it from the bottom\n> upwards:\n> 1. Index used for \"historic\" search - builds a bitmap of blocks\n> 2. Index used for geometry search - builds a bitmap of blocks\n> 3. See where the bitmaps overlap (BitmapAnd)\n> 4. Grab those disk blocks and find the rows (Bitmap Heap Scan)\n>\n> The whole thing takes under 20ms - what sort of time were you hoping for?\n>\n> The bulk of it (15ms) is taken up locating the \"historic\" rows. There are\n> 36351 of those, but presumably most of them are far away on the map.\n>\n> Could you post the explain without the index? I'm curious as to how slow\n> it is just testing the tags after doing the geometry search.\n>\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nRadu-Stefan Zugravu\n0755 950 145\n0760 903 464\[email protected]\[email protected]\n\nAny improvement is welcomed. The overall performance of the application is not very good. It takes about 200 seconds to compute a path for not so far star and end points. I want to improve this query as much as I can.\nHow exactly should I post the explain without the index? Do I have to drop all created indexes for the tags column? It takes some time to create them back.On Mon, Jul 8, 2013 at 11:53 AM, Richard Huxton <[email protected]> wrote:\nOn 08/07/13 09:31, Radu-Stefan Zugravu wrote:\n\nHi,\nThank you for your answer.\nMy EXPLAIN ANALYZE output can be found here:\nhttp://explain.depesz.com/s/Wbo.\n\n\nThanks\n\n\nAlso, there is a discution on this subject on dba.stackexchange.com\n<http://dba.stackexchange.com>:\nhttp://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys\n\n\nThanks - also useful to know.\n\nI can't see anything wrong with your query. Reading it from the bottom upwards:\n1. Index used for \"historic\" search - builds a bitmap of blocks\n2. Index used for geometry search - builds a bitmap of blocks\n3. See where the bitmaps overlap (BitmapAnd)\n4. Grab those disk blocks and find the rows (Bitmap Heap Scan)\n\nThe whole thing takes under 20ms - what sort of time were you hoping for?\n\nThe bulk of it (15ms) is taken up locating the \"historic\" rows. There are 36351 of those, but presumably most of them are far away on the map.\n\nCould you post the explain without the index? 
I'm curious as to how slow it is just testing the tags after doing the geometry search.\n\n-- \n Richard Huxton\n Archonet Ltd\n-- Radu-Stefan Zugravu0755 950 1450760 903 464\[email protected]@yahoo.com",
"msg_date": "Mon, 8 Jul 2013 12:20:13 +0300",
"msg_from": "Radu-Stefan Zugravu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
{
"msg_contents": "On 08/07/13 10:20, Radu-Stefan Zugravu wrote:\n> Any improvement is welcomed. The overall performance of the application\n> is not very good. It takes about 200 seconds to compute a path for not\n> so far star and end points.\n\nSo you have to call this query 1000 times with different start and end \npoints?\n\n > I want to improve this query as much as I can.\n\nThere's only two ways I can see to get this much below 20ms. This will \nonly work if you want a very restricted range of tags.\n\nDrop the tag index and create multiple geometry indexes instead:\n\nCREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'tourist';\nCREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'history';\netc.\n\nThis will only work if you have a literal WHERE clause that checks the \ntag. It should be fast though.\n\n\nThe second way would be to delete all the nodes that aren't tagged \ntourist or history. That assumes you are never interested in them of course.\n\n> How exactly should I post the explain without the index? Do I have to\n> drop all created indexes for the tags column? It takes some time to\n> create them back.\n\nNot important - I was just curious.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 11:27:34 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
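Filling in the placeholder above, the partial indexes might look like this, assuming a PostGIS GiST index on the same geom column the ST_DWithin test uses:

CREATE INDEX nodes_geo_tourist_idx  ON nodes USING GIST (geom) WHERE tags ? 'tourist';
CREATE INDEX nodes_geo_historic_idx ON nodes USING GIST (geom) WHERE tags ? 'historic';

As noted, the query must then contain the literal tags ? 'tourist' (or 'historic') predicate for the planner to choose the matching partial index.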
{
"msg_contents": "I do call the query for each neighbour node to find which one is better in\nbuilding my path.\nI think I will try the first way you mentioned. I also found some\nreferences using BTREE indexes:\n\nCREATE INDEX nodes_tags_btree_historic_idx on nodes USING BTREE ((tags ?\n'historic'));\nCREATE INDEX nodes_tags_btree_tourist_idx on nodes USING BTREE ((tags ?\n'tourist));\n\nDo you think this could make a difference?\n\n\nOn Mon, Jul 8, 2013 at 1:27 PM, Richard Huxton <[email protected]> wrote:\n\n> On 08/07/13 10:20, Radu-Stefan Zugravu wrote:\n>\n>> Any improvement is welcomed. The overall performance of the application\n>> is not very good. It takes about 200 seconds to compute a path for not\n>> so far star and end points.\n>>\n>\n> So you have to call this query 1000 times with different start and end\n> points?\n>\n>\n> > I want to improve this query as much as I can.\n>\n> There's only two ways I can see to get this much below 20ms. This will\n> only work if you want a very restricted range of tags.\n>\n> Drop the tag index and create multiple geometry indexes instead:\n>\n> CREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'tourist';\n> CREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'history';\n> etc.\n>\n> This will only work if you have a literal WHERE clause that checks the\n> tag. It should be fast though.\n>\n>\n> The second way would be to delete all the nodes that aren't tagged tourist\n> or history. That assumes you are never interested in them of course.\n>\n>\n> How exactly should I post the explain without the index? Do I have to\n>> drop all created indexes for the tags column? It takes some time to\n>> create them back.\n>>\n>\n> Not important - I was just curious.\n>\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nRadu-Stefan Zugravu\n0755 950 145\n0760 903 464\[email protected]\[email protected]\n\nI do call the query for each neighbour node to find which one is better in building my path.I think I will try the first way you mentioned. I also found some references using BTREE indexes:\nCREATE INDEX nodes_tags_btree_historic_idx on nodes USING BTREE ((tags ? 'historic'));CREATE INDEX nodes_tags_btree_tourist_idx on nodes USING BTREE ((tags ? 'tourist));Do you think this could make a difference?\nOn Mon, Jul 8, 2013 at 1:27 PM, Richard Huxton <[email protected]> wrote:\nOn 08/07/13 10:20, Radu-Stefan Zugravu wrote:\n\nAny improvement is welcomed. The overall performance of the application\nis not very good. It takes about 200 seconds to compute a path for not\nso far star and end points.\n\n\nSo you have to call this query 1000 times with different start and end points?\n\n> I want to improve this query as much as I can.\n\nThere's only two ways I can see to get this much below 20ms. This will only work if you want a very restricted range of tags.\n\nDrop the tag index and create multiple geometry indexes instead:\n\nCREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'tourist';\nCREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'history';\netc.\n\nThis will only work if you have a literal WHERE clause that checks the tag. It should be fast though.\n\n\nThe second way would be to delete all the nodes that aren't tagged tourist or history. That assumes you are never interested in them of course.\n\n\nHow exactly should I post the explain without the index? Do I have to\ndrop all created indexes for the tags column? 
It takes some time to\ncreate them back.\n\n\nNot important - I was just curious.\n\n-- \n Richard Huxton\n Archonet Ltd\n-- Radu-Stefan Zugravu0755 950 1450760 903 464\[email protected]@yahoo.com",
"msg_date": "Mon, 8 Jul 2013 14:01:02 +0300",
"msg_from": "Radu-Stefan Zugravu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
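For the record, the expression-index variant mentioned above needs a closing quote on 'tourist', i.e. something like:

CREATE INDEX nodes_tags_btree_historic_idx ON nodes USING BTREE ((tags ? 'historic'));
CREATE INDEX nodes_tags_btree_tourist_idx  ON nodes USING BTREE ((tags ? 'tourist'));

These can only help when the query repeats the exact expression tags ? 'historic' (or 'tourist'), much like the partial-index approach suggested earlier.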
{
"msg_contents": "Hi Stefan\n1 - If you have a fixed data that does not change a lot, like I assume is your fixed 'map' try implementing in your app the hashtrie method. This looks as better approach as your query is quite fast. Usually I am starting to query my queries (or the query planner) when they start to take more the 2 seconds. The fact that you continuously call it for your next node it might not be the best approach.\n2 - As mentioned by Richard, try either to delete the nodes that does not belong to \"historic\" / \"tourist\" or simply split the table in 2. One that have only them and the rest to the other table. Assuming this will not change a lot the other already implemented queries in your app (because you'll have to make a 1-to-1 now) it might save your day.\nDanny\n\n\n________________________________\n From: Radu-Stefan Zugravu <[email protected]>\nTo: Richard Huxton <[email protected]> \nCc: [email protected] \nSent: Monday, July 8, 2013 2:01 PM\nSubject: Re: [PERFORM] How to properly index hstore tags column to faster search for keys\n \n\n\nI do call the query for each neighbour node to find which one is better in building my path.I think I will try the first way you mentioned. I also found some references using BTREE indexes:\n\nCREATE INDEX nodes_tags_btree_historic_idx on nodes USING BTREE ((tags ? 'historic'));\nCREATE INDEX nodes_tags_btree_tourist_idx on nodes USING BTREE ((tags ? 'tourist));\n\n\nDo you think this could make a difference?\n\n\n\nOn Mon, Jul 8, 2013 at 1:27 PM, Richard Huxton <[email protected]> wrote:\n\nOn 08/07/13 10:20, Radu-Stefan Zugravu wrote:\n>\n>Any improvement is welcomed. The overall performance of the application\n>>is not very good. It takes about 200 seconds to compute a path for not\n>>so far star and end points.\n>>\n>\nSo you have to call this query 1000 times with different start and end points?\n>\n>\n>> I want to improve this query as much as I can.\n>\n>\nThere's only two ways I can see to get this much below 20ms. This will only work if you want a very restricted range of tags.\n>\n>Drop the tag index and create multiple geometry indexes instead:\n>\n>CREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'tourist';\n>CREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'history';\n>etc.\n>\n>This will only work if you have a literal WHERE clause that checks the tag. It should be fast though.\n>\n>\n>The second way would be to delete all the nodes that aren't tagged tourist or history. That assumes you are never interested in them of course.\n>\n>\n>\n>How exactly should I post the explain without the index? Do I have to\n>>drop all created indexes for the tags column? It takes some time to\n>>create them back.\n>>\n>\nNot important - I was just curious.\n>\n>\n>-- \n> Richard Huxton\n> Archonet Ltd\n>\n\n\n-- \n\nRadu-Stefan Zugravu0755 950 145\n0760 903 464\[email protected]\[email protected] \nHi Stefan1 - If you have a fixed data that does not change a lot, like I assume is your fixed 'map' try implementing in your app the hashtrie method. This looks as better approach as your query is quite fast. Usually I am starting to query my queries (or the query planner) when they start to take more the 2 seconds. The fact that you continuously call it for your next node it might not be the best approach.2 - As mentioned by Richard, try either to delete the nodes that does not belong to \"historic\" / \"tourist\" or simply split the table in 2. One that have\n only them and the rest to the other table. 
Assuming this will not change a lot the other already implemented queries in your app (because you'll have to make a 1-to-1 now) it might save your day.Danny From: Radu-Stefan Zugravu <[email protected]> To: Richard Huxton <[email protected]> Cc: [email protected] Sent: Monday, July 8, 2013 2:01 PM Subject: Re: [PERFORM] How to properly index hstore tags column to faster\n search for keys I do call the query for each neighbour node to find which one is better in building my path.I think I will try the first way you mentioned. I also found some references using BTREE indexes:\nCREATE INDEX nodes_tags_btree_historic_idx on nodes USING BTREE ((tags ? 'historic'));CREATE INDEX nodes_tags_btree_tourist_idx on nodes USING BTREE ((tags ? 'tourist));Do you think this could make a difference?\nOn Mon, Jul 8, 2013 at 1:27 PM, Richard Huxton <[email protected]> wrote:\nOn 08/07/13 10:20, Radu-Stefan Zugravu wrote:\n\nAny improvement is welcomed. The overall performance of the application\nis not very good. It takes about 200 seconds to compute a path for not\nso far star and end points.\n\n\nSo you have to call this query 1000 times with different start and end points?\n\n> I want to improve this query as much as I can.\n\nThere's only two ways I can see to get this much below 20ms. This will only work if you want a very restricted range of tags.\n\nDrop the tag index and create multiple geometry indexes instead:\n\nCREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'tourist';\nCREATE INDEX node_geo_tourist_idx <index details> WHERE tags ? 'history';\netc.\n\nThis will only work if you have a literal WHERE clause that checks the tag. It should be fast though.\n\n\nThe second way would be to delete all the nodes that aren't tagged tourist or history. That assumes you are never interested in them of course.\n\n\nHow exactly should I post the explain without the index? Do I have to\ndrop all created indexes for the tags column? It takes some time to\ncreate them back.\n\n\nNot important - I was just curious.\n\n-- \n Richard Huxton\n Archonet Ltd\n-- Radu-Stefan Zugravu0755 950 1450760 903 464\[email protected]@yahoo.com",
"msg_date": "Mon, 8 Jul 2013 05:16:19 -0700 (PDT)",
"msg_from": "idc danny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
{
"msg_contents": "Dear Radu-Stefan,\nIt seems to me that you trying hard to solve a problem by SQL that probably can't be solved. Take a look please on Apache HBase. You can access HBase from PostgreSQL as well by utilizing Java or Python for example.\n\nSincerely yours,\n\n[Description: Celltick logo_highres]\nYuri Levinsky, DBA\nCelltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel\nMobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Radu-Stefan Zugravu\nSent: Monday, July 08, 2013 12:20 PM\nTo: Richard Huxton\nCc: [email protected]\nSubject: Re: [PERFORM] How to properly index hstore tags column to faster search for keys\n\nAny improvement is welcomed. The overall performance of the application is not very good. It takes about 200 seconds to compute a path for not so far star and end points. I want to improve this query as much as I can.\nHow exactly should I post the explain without the index? Do I have to drop all created indexes for the tags column? It takes some time to create them back.\n\nOn Mon, Jul 8, 2013 at 11:53 AM, Richard Huxton <[email protected]<mailto:[email protected]>> wrote:\nOn 08/07/13 09:31, Radu-Stefan Zugravu wrote:\nHi,\nThank you for your answer.\nMy EXPLAIN ANALYZE output can be found here:\nhttp://explain.depesz.com/s/Wbo.\n\nThanks\nAlso, there is a discution on this subject on dba.stackexchange.com<http://dba.stackexchange.com>\n<http://dba.stackexchange.com>:\nhttp://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys\n\nThanks - also useful to know.\n\nI can't see anything wrong with your query. Reading it from the bottom upwards:\n1. Index used for \"historic\" search - builds a bitmap of blocks\n2. Index used for geometry search - builds a bitmap of blocks\n3. See where the bitmaps overlap (BitmapAnd)\n4. Grab those disk blocks and find the rows (Bitmap Heap Scan)\n\nThe whole thing takes under 20ms - what sort of time were you hoping for?\n\nThe bulk of it (15ms) is taken up locating the \"historic\" rows. There are 36351 of those, but presumably most of them are far away on the map.\n\nCould you post the explain without the index? I'm curious as to how slow it is just testing the tags after doing the geometry search.\n\n\n--\n Richard Huxton\n Archonet Ltd\n\n\n\n--\nRadu-Stefan Zugravu\n0755 950 145\n0760 903 464\[email protected]<mailto:[email protected]>\[email protected]<mailto:[email protected]>\n\nThis mail was received via Mail-SeCure System.",
"msg_date": "Mon, 8 Jul 2013 15:34:49 +0000",
"msg_from": "Yuri Levinsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
},
{
"msg_contents": "Hi Yuri and Radu-Stefan\n\nI would'nt give too fast on PostgreSQL!\nWhen looking at your query plan I wonder if one could reformulate the query\nto compute the ST_DWithin first (assuming you have an index on the node\ngeometries!) before it filters the tags.\nTo investigate that you could formulate a CTE query [1] which computes the\nST_DWithin first.\n\nYours, Stefan\n\n[1] http://www.postgresql.org/docs/9.2/static/queries-with.html\n\n\n2013/7/8 Yuri Levinsky <[email protected]>\n\n> Dear Radu-Stefan,****\n>\n> It seems to me that you trying hard to solve a problem by SQL that\n> probably can't be solved. Take a look please on Apache HBase. You can\n> access HBase from PostgreSQL as well by utilizing Java or Python for\n> example. ****\n>\n> ** **\n>\n> *Sincerely yours*,****\n>\n> ** **\n>\n> [image: Description: Celltick logo_highres]****\n>\n> Yuri Levinsky, DBA****\n>\n> Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel****\n>\n> Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222****\n>\n> ** **\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Radu-Stefan Zugravu\n> *Sent:* Monday, July 08, 2013 12:20 PM\n> *To:* Richard Huxton\n> *Cc:* [email protected]\n>\n> *Subject:* Re: [PERFORM] How to properly index hstore tags column to\n> faster search for keys****\n>\n> ** **\n>\n> Any improvement is welcomed. The overall performance of the application is\n> not very good. It takes about 200 seconds to compute a path for not so far\n> star and end points. I want to improve this query as much as I can.****\n>\n> How exactly should I post the explain without the index? Do I have to drop\n> all created indexes for the tags column? It takes some time to create them\n> back.****\n>\n> ** **\n>\n> On Mon, Jul 8, 2013 at 11:53 AM, Richard Huxton <[email protected]> wrote:*\n> ***\n>\n> On 08/07/13 09:31, Radu-Stefan Zugravu wrote:****\n>\n> Hi,\n> Thank you for your answer.\n> My EXPLAIN ANALYZE output can be found here:\n> http://explain.depesz.com/s/Wbo.****\n>\n> ** **\n>\n> Thanks****\n>\n> Also, there is a discution on this subject on dba.stackexchange.com****\n>\n> <http://dba.stackexchange.com>:\n>\n> http://dba.stackexchange.com/questions/45820/how-to-properly-index-hstore-tags-column-to-faster-search-for-keys\n> ****\n>\n>\n> Thanks - also useful to know.\n>\n> I can't see anything wrong with your query. Reading it from the bottom\n> upwards:\n> 1. Index used for \"historic\" search - builds a bitmap of blocks\n> 2. Index used for geometry search - builds a bitmap of blocks\n> 3. See where the bitmaps overlap (BitmapAnd)\n> 4. Grab those disk blocks and find the rows (Bitmap Heap Scan)\n>\n> The whole thing takes under 20ms - what sort of time were you hoping for?\n>\n> The bulk of it (15ms) is taken up locating the \"historic\" rows. There are\n> 36351 of those, but presumably most of them are far away on the map.\n>\n> Could you post the explain without the index? I'm curious as to how slow\n> it is just testing the tags after doing the geometry search.****\n>\n>\n>\n> --\n> Richard Huxton\n> Archonet Ltd****\n>\n>\n>\n> ****\n>\n> ** **\n>\n> -- ****\n>\n> Radu-Stefan Zugravu****\n>\n> 0755 950 145\n> 0760 903 464\n> [email protected]\n> [email protected] ****\n>\n>\n> This mail was received via Mail-SeCure System.****\n>",
"msg_date": "Fri, 19 Jul 2013 12:03:17 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to properly index hstore tags column to faster search for\n keys"
}
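A sketch of the reformulation Stefan suggests, reusing the placeholders from the original query; on 9.2 a CTE acts as an optimization fence, so the distance filter is evaluated before the tag check:

WITH nearby AS (
    SELECT id, tags
    FROM nodes
    WHERE ST_DWithin(geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001)
)
SELECT id, tags
FROM nearby
WHERE tags ? 'historic';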
] |
[
{
"msg_contents": "Hi, i have a postgresql 9.2.2, but i don�t use autovaccum but i want to \nbegin to use it. some recommendation about the optimal configuration? \nor some link to explain it.\n\nThanks\n\n-- \nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nAdm. Servidores y Comunicaciones\nAUDIFARMA S.A.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 11:14:52 -0500",
"msg_from": "Jeison Bedoya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance autovaccum"
},
{
"msg_contents": "On Tue, Jul 9, 2013 at 1:14 AM, Jeison Bedoya <[email protected]> wrote:\n> Hi, i have a postgresql 9.2.2,\nYou should update to 9.2.4. There are major security fixes in this subrelease.\n\n> but i don´t use autovaccum but i want to\n> begin to use it. some recommendation about the optimal configuration? or\n> some link to explain it.\nPerhaps that?\nhttp://www.postgresql.org/docs/9.2/static/routine-vacuuming.html#AUTOVACUUM\n\nAtentamente,\n--\nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Jul 2013 07:21:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance autovaccum"
},
{
"msg_contents": "On 07/08/2013 09:14 AM, Jeison Bedoya wrote:\n> Hi, i have a postgresql 9.2.2, but i don´t use autovaccum but i want to\n> begin to use it. some recommendation about the optimal configuration?\n> or some link to explain it.\n\nInitial configuration:\n\nautovacuum = on\n\nThere, you're done. You only do something else if the default\nconfiguraiton is proven not to work for you.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 09 Jul 2013 15:14:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance autovaccum"
},
{
"msg_contents": "\nOn 07/09/2013 03:14 PM, Josh Berkus wrote:\n>\n> On 07/08/2013 09:14 AM, Jeison Bedoya wrote:\n>> Hi, i have a postgresql 9.2.2, but i don´t use autovaccum but i want to\n>> begin to use it. some recommendation about the optimal configuration?\n>> or some link to explain it.\n>\n> Initial configuration:\n>\n> autovacuum = on\n>\n> There, you're done. You only do something else if the default\n> configuraiton is proven not to work for you.\n>\n\nWell, and a restart of PostgreSQL. It should also be noted that \nautovacuum by default is on. You can check to see if it is currently \nrunning for you by issuing the following command from psql:\n\nshow autovacuum;\n\nOther than that JoshB is correct. The default settings for autovacuum \nwork for the 95% of users out there.\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc\nFor my dreams of your image that blossoms\n a rose in the deeps of my heart. - W.B. Yeats\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 09 Jul 2013 17:41:43 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance autovaccum"
},
{
"msg_contents": "In our use case, the default autovacuum settings did not work, I guess we are in the 5% group of users. The default settings were too aggressive when it ran against some of our larger tables (example: 100M rows by 250 columns) in our front end OLTP database causing severe performance degradation. We had to throttle it back (and haven't had problems since).\r\n\r\nYou should definitely run autovacuum. If you are only able to experiment with it in production, I recommend taking it slow at first, and gradually making it more aggressive after you have a good handle on the impact (and observe it running on the larger critical tables in your data inventory without impact). You can start with the defaults, they aren't too bad. In our case - a backend for a high performance, highly available website with a set of webservers that are very sensitive to even slight query time changes, the default settings simply consumed too much overhead. I think in the end all we had to change was the autovacuum_vacuum_cost_delay to make enough difference to keep our site up and running. You should review the tuning options though.\r\n\r\nThe problem is if you kill the autovacuum process because you suspect it is causing issues during a crisis, the autovacuumer will just start it back up again a few minutes later. If you disable it permanently your query performance will likely slowly degrade. You need to find somewhere in between.\r\n\r\nOh - one last note: If you have a partitioned table (partitioned by date), autovacuum will not run against the older partitions (because they are no longer changing). If you've had autovacuum off for a while, you may need to go back and manually vacuum analyze the older partitions to clean them up after you get autovacuum running. (ditto for other old tables that are no longer changing)\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Joshua D. Drake\r\nSent: Tuesday, July 09, 2013 8:42 PM\r\nTo: Josh Berkus\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Performance autovaccum\r\n\r\n\r\nOn 07/09/2013 03:14 PM, Josh Berkus wrote:\r\n>\r\n> On 07/08/2013 09:14 AM, Jeison Bedoya wrote:\r\n>> Hi, i have a postgresql 9.2.2, but i don´t use autovaccum but i want \r\n>> to begin to use it. some recommendation about the optimal configuration?\r\n>> or some link to explain it.\r\n>\r\n> Initial configuration:\r\n>\r\n> autovacuum = on\r\n>\r\n> There, you're done. You only do something else if the default \r\n> configuraiton is proven not to work for you.\r\n>\r\n\r\nWell, and a restart of PostgreSQL. It should also be noted that autovacuum by default is on. You can check to see if it is currently running for you by issuing the following command from psql:\r\n\r\nshow autovacuum;\r\n\r\nOther than that JoshB is correct. The default settings for autovacuum work for the 95% of users out there.\r\n\r\nJD\r\n\r\n\r\n\r\n--\r\nCommand Prompt, Inc. - http://www.commandprompt.com/ 509-416-6579 PostgreSQL Support, Training, Professional Services and Development High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc For my dreams of your image that blossoms\r\n a rose in the deeps of my heart. - W.B. 
Yeats\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jul 2013 13:12:16 +0000",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance autovaccum"
}
] |
[
{
"msg_contents": "I am migrating a Postgres 8.4 installation on a dedicated server to Postgres 9.2 running on a Virtual Machine. A sample query that run in 10 minutes on the 8.4 installation take 40 minutes on the 9.2 installation.\n\nCurrent Server, Postgres 8.4\n\n* 6-core, 3GHz AMD system\n\n* 12GB of RAM\n\n* 4 SATA drive RAID-1 storage\n\n* Mandriva OS\n\n* SQL encoding and 'C' collation\n\nVirtual Machine, Postgres 9.2 ( two different systems)\n\n* 4-core, 3Ghz Intel system\n\n* 12GB or RAM\n\n* SAS storage on one, and 4-SATA drive RAID-10 system on second\n\n* CentOS 6.3 OS\n\n* UTF-8 encoding, and I have tried both 'C' and en_US collation\n\nThe first VM is at a local Data Center and the second in on a dedicated server in my office. Both give similar results.\nThe data, indexes and constraints have all been successfully migrated to the new system.\nI have tuned the VM systems using pgtune with no significant before and after difference.\nThe 'explain' output for the query is very different between the two systems.\n\nIt seems like I am missing some simple step for there to be such a huge performance difference.\n\nAny suggestions on what else to text/check would be very much appreciated.\n\nTom\n\n\n\n\n\n\n\n\n\n\nI am migrating a Postgres 8.4 installation on a dedicated server to Postgres 9.2 running on a Virtual Machine. A sample query that run in 10 minutes on the 8.4 installation take 40 minutes on the 9.2 installation.\n\n \nCurrent Server, Postgres 8.4\n· \n6-core, 3GHz AMD system\n· \n12GB of RAM\n· \n4 SATA drive RAID-1 storage\n· \nMandriva OS\n· \nSQL encoding and ‘C’ collation\n \nVirtual Machine, Postgres 9.2 ( two different systems)\n· \n4-core, 3Ghz Intel system\n· \n12GB or RAM\n· \nSAS storage on one, and 4-SATA drive RAID-10 system on second\n· \nCentOS 6.3 OS\n· \nUTF-8 encoding, and I have tried both ‘C’ and en_US collation\n \nThe first VM is at a local Data Center and the second in on a dedicated server in my office. Both give similar results.\n\nThe data, indexes and constraints have all been successfully migrated to the new system.\nI have tuned the VM systems using pgtune with no significant before and after difference.\n\nThe ‘explain’ output for the query is very different between the two systems.\n\n \nIt seems like I am missing some simple step for there to be such a huge performance difference.\n\n \nAny suggestions on what else to text/check would be very much appreciated.\n \nTom",
"msg_date": "Mon, 8 Jul 2013 16:21:31 +0000",
"msg_from": "Tom Harkaway <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.4 to 9.2 migration performance"
},
{
"msg_contents": "On Mon, Jul 8, 2013 at 9:21 AM, Tom Harkaway <[email protected]> wrote:\n\n> The ‘explain’ output for the query is very different between the two\n> systems.\n>\n\nYou ran ANALYZE after loading the data? Can you post the query and EXPLAIN\nANALYZE output?\nAlso, some tips on getting answers with (potentially) less ping pong\ndiscussion can be found here:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOn Mon, Jul 8, 2013 at 9:21 AM, Tom Harkaway <[email protected]> wrote:\n\nThe ‘explain’ output for the query is very different between the two systems.\n\nYou ran ANALYZE after loading the data? Can you post the query and EXPLAIN ANALYZE output?Also, some tips on getting answers with (potentially) less ping pong discussion can be found here: https://wiki.postgresql.org/wiki/Slow_Query_Questions",
"msg_date": "Mon, 8 Jul 2013 09:31:39 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4 to 9.2 migration performance"
}
] |
[
{
"msg_contents": "Hi, i want to know why in my database the process stay in BID, PARSE, \nautentication, startup by a couple minuts, generating slow in the \nprocess, perhaps tunning parameters? or configuration of operating \nsystem (Linux RHEL 6).\n\nThanks by your help\n\n-- \nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nAdm. Servidores y Comunicaciones\nAUDIFARMA S.A.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 11:22:12 -0500",
"msg_from": "Jeison Bedoya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Process in state BIND, authentication, PARSE"
},
{
"msg_contents": "\nOn 07/08/2013 12:22 PM, Jeison Bedoya wrote:\n> Hi, i want to know why in my database the process stay in BID, PARSE, \n> autentication, startup by a couple minuts, generating slow in the \n> process, perhaps tunning parameters? or configuration of operating \n> system (Linux RHEL 6).\n>\n>\n\n\nYou haven't given us nearly enough information about your setup. We'd \nneed to see your configuration settings and have some details of the \nmachine and where connections are coming from to diagnose it further.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 12:40:03 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process in state BIND, authentication, PARSE"
},
{
"msg_contents": "Hi, yeah i am sorry, i run the postgresql in a machine with this \nconfiguration\n\nRam: 128GB\ncpu: 32 cores\nDisk: 400GB over SAN\n\nThe database run an application web over glassfish, and have 2.000 users\n\nmy database configuracion is this:\n\nmax_connections = 900\nshared_buffers = 4096MB\ntemp_buffers = 128MB\nwork_mem = 1024MB\nmaintenance_work_mem = 1024MB\nwal_buffers = 256\ncheckpoint_segments = 103\neffective_cache_size = 4096MB\n\nthanks\n\nAtentamente,\n\n\nJEISON BEDOYA DELGADO\nAdm. Servidores y Comunicaciones\nAUDIFARMA S.A.\n\nEl 08/07/2013 11:40 a.m., Andrew Dunstan escribi�:\n>\n> On 07/08/2013 12:22 PM, Jeison Bedoya wrote:\n>> Hi, i want to know why in my database the process stay in BID, PARSE, \n>> autentication, startup by a couple minuts, generating slow in the \n>> process, perhaps tunning parameters? or configuration of operating \n>> system (Linux RHEL 6).\n>>\n>>\n>\n>\n> You haven't given us nearly enough information about your setup. We'd \n> need to see your configuration settings and have some details of the \n> machine and where connections are coming from to diagnose it further.\n>\n> cheers\n>\n> andrew\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Jul 2013 12:01:45 -0500",
"msg_from": "Jeison Bedoya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Process in state BIND, authentication, PARSE"
},
{
"msg_contents": "On Tue, Jul 9, 2013 at 2:01 AM, Jeison Bedoya <[email protected]> wrote:\n> max_connections = 900\n> work_mem = 1024MB\n> maintenance_work_mem = 1024MB\nAren't work_mem and maintenance_work_mem too high? You need to keep in\nmind that those are per-operation settings, so for example if you have\n100 clients performing queries, this could grow up to 100G. In your\ncase you even have a maximum of 900 connections... Do you perform\nheavy sort operations with your application that could explain such an\namount of memory needed?\n--\nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Jul 2013 07:35:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process in state BIND, authentication, PARSE"
},
{
"msg_contents": "On Mon, Jul 8, 2013 at 5:35 PM, Michael Paquier\n<[email protected]> wrote:\n> On Tue, Jul 9, 2013 at 2:01 AM, Jeison Bedoya <[email protected]> wrote:\n>> max_connections = 900\n>> work_mem = 1024MB\n>> maintenance_work_mem = 1024MB\n> Aren't work_mem and maintenance_work_mem too high? You need to keep in\n> mind that those are per-operation settings, so for example if you have\n> 100 clients performing queries, this could grow up to 100G. In your\n> case you even have a maximum of 900 connections... Do you perform\n> heavy sort operations with your application that could explain such an\n> amount of memory needed?\n\nit's not at all unreasonable for maintenance_work_mem on a 128gb box.\nagree on work_mem though. If it was me, i'd set it to around 64mb and\nthen locally set it for particular queries that need a lot of memory.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Jul 2013 08:23:39 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process in state BIND, authentication, PARSE"
},
{
"msg_contents": "Jeison Bedoya <[email protected]> wrote:\n\n> Ram: 128GB\n\n> max_connections = 900\n\n> temp_buffers = 128MB\n\nBesides the concerns already expressed about work_mem, temp_buffers\ncould be a big problem. If a connection uses temp tables it\nacquires up to 128MB, *and holds on it reserved for caching temp\ntables for that connection for as long as the connection lasts*. \nSo, for 900 connections, that could be 112.5 GB. I would expect to\nsee performance decrease and eventually completely tank as more\nconnections reserved memory for this purpose.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Jul 2013 06:37:33 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process in state BIND, authentication, PARSE"
}
] |
[
{
"msg_contents": "Hi Guys,\n\nI'm new to Postgresql, we have a Greenplum cluster and need to create many indexes on the database. So my question is:\n\nIs there any performance tips for creating index on Postgres?\nhow to monitor the progress the creation process?\n\nThanks and best regards,\nSuya Huang\n\n\n\n\n\n\n\n\n\nHi Guys,\n \nI’m new to Postgresql, we have a Greenplum cluster and need to create many indexes on the database. So my question is:\n \nIs there any performance tips for creating index on Postgres?\nhow to monitor the progress the creation process?\n \nThanks and best regards,\nSuya Huang",
"msg_date": "Thu, 11 Jul 2013 02:03:48 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to speed up the index creation in GP?"
},
{
"msg_contents": "Hi Suya,\n\nI think you should start with it\nhttp://www.postgresql.org/docs/9.2/static/indexes.html.\n\nOn Wed, Jul 10, 2013 at 7:03 PM, Huang, Suya <[email protected]> wrote:\n> Hi Guys,\n>\n>\n>\n> I’m new to Postgresql, we have a Greenplum cluster and need to create many\n> indexes on the database. So my question is:\n>\n>\n>\n> Is there any performance tips for creating index on Postgres?\n>\n> how to monitor the progress the creation process?\n>\n>\n>\n> Thanks and best regards,\n>\n> Suya Huang\n\n\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 14 Jul 2013 21:33:57 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to speed up the index creation in GP?"
}
] |
[
{
"msg_contents": "Hi All,\n\n(basic info)\nPostgreSQL 9.2.4\n64 bit Linux host\n4GB shared_buffers with 14GB system memory, dedicated database VM\n10MB work_mem\n\n\nI have a query that takes over 6 minutes to complete, and it's due \nmainly to the two sorting operations being done on this query. The data \nit is returning itself is quite large, and with the two sorting \noperations it does (one because of the union, one because of the order \nby), it ends up using about *2.1GB* of temporary file space (and there \nis no way I can increase work_mem to hold this).\n\n--[Query]----------------------------\nSELECT\n t.id,\n t.mycolumn1,\n (SELECT table3.otherid FROM table3 WHERE table3.id = t.typeid),\n (SELECT table3.otherid FROM table3 WHERE table3.id = t2.third_id),\n t.mycolumn2 AS mycolumn2\n\nFROM table1 t\n\nLEFT OUTER JOIN table2 t2 ON t2.real_id = t.id\n\nWHERE t.external_id IN ('6544', '2234', '2', '4536')\n\nUNION\n\nSELECT\n t.id,\n t.mycolumn1,\n (SELECT table3.otherid FROM table3 WHERE table3.id = t.typeid),\n (SELECT table3.otherid FROM table3 WHERE table3.id = t2.third_id),\n t.mycolumn2 AS mycolumn2\n\nFROM table1 t\n\nLEFT OUTER JOIN table2 t2 ON t2.real_id = t.backup_id\n\nWHERE t.external_id IN ('6544', '2234', '2', '4536')\n\nORDER BY t.mycolumn2, t.id;\n\n\n--[Explain Analyze (sorry for the anonymizing)]----------------------------\nSort (cost=133824460.450..133843965.150 rows=7801882 width=56) (actual \ntime=849405.656..894724.453 rows=9955729 loops=1)\n Sort Key: romeo.three, romeo.quebec_seven\n*Sort Method: external merge Disk: 942856kB*\n -> Unique (cost=132585723.530..132702751.760 rows=7801882 width=56) \n(actual time=535267.196..668982.694 rows=9955729 loops=1)\n -> Sort (cost=132585723.530..132605228.240 rows=7801882 \nwidth=56) (actual time=535267.194..649304.631 rows=10792011 loops=1)\n Sort Key: romeo.quebec_seven, romeo.golf, ((delta_four \n3)), ((delta_four 4)), romeo.three\n*Sort Method: external merge Disk: 1008216kB*\n -> Append (cost=0.000..131464014.850 rows=7801882 \nwidth=56) (actual time=0.798..291477.595 rows=10792011 loops=1)\n -> Merge Left Join (cost=0.000..46340412.140 \nrows=2748445 width=56) (actual time=0.797..70748.213 rows=3690431 loops=1)\n Merge Cond: (romeo.quebec_seven = \nalpha_bravo.six_kilo)\n -> Index Scan using juliet on zulu romeo \n(cost=0.000..163561.710 rows=2748445 width=52) (actual \ntime=0.019..11056.883 rows=3653472 loops=1)\n Filter: (delta_uniform = ANY \n('two'::integer[]))\n -> Index Only Scan using sierra on \nquebec_juliet alpha_bravo (cost=0.000..86001.860 rows=2191314 width=8) \n(actual time=0.047..3996.543 rows=2191314 loops=1)\n Heap Fetches: 2191314\n SubPlan\n -> Index Scan using oscar on six_delta \n(cost=0.000..8.380 rows=1 width=23) (actual time=0.009..0.009 rows=1 \nloops=3690431)\n Index Cond: (quebec_seven = romeo.lima)\n SubPlan\n -> Index Scan using oscar on six_delta \n(cost=0.000..8.380 rows=1 width=23) (actual time=0.001..0.001 rows=0 \nloops=3690431)\n Index Cond: (quebec_seven = \nalpha_bravo.delta_november)\n -> Merge Right Join (cost=0.000..85045583.890 \nrows=5053437 width=56) (actual time=0.843..213450.477 rows=7101580 loops=1)\n Merge Cond: (alpha_bravo.six_kilo = \nromeo.india)\n -> Index Only Scan using sierra on \nquebec_juliet alpha_bravo (cost=0.000..86001.860 rows=2191314 width=8) \n(actual time=0.666..6165.870 rows=2191314 loops=1)\n Heap Fetches: 2191314\n -> Materialize (cost=0.000..193106.580 \nrows=2748445 width=56) (actual time=0.134..25852.353 rows=7101580 loops=1)\n -> Index Scan 
using alpha_seven on \nzulu romeo (cost=0.000..186235.470 rows=2748445 width=56) (actual \ntime=0.108..18439.857 rows=3653472 loops=1)\n Filter: (delta_uniform = ANY \n('two'::integer[]))\n SubPlan\n -> Index Scan using oscar on six_delta \n(cost=0.000..8.380 rows=1 width=23) (actual time=0.009..0.010 rows=1 \nloops=7101580)\n Index Cond: (quebec_seven = romeo.lima)\n SubPlan\n -> Index Scan using oscar on six_delta \n(cost=0.000..8.380 rows=1 width=23) (actual time=0.007..0.008 rows=1 \nloops=7101580)\n Index Cond: (quebec_seven = \nalpha_bravo.delta_november)\n--[end]----------------------------\n\n*My attempts:*\n1. I tried to get an index set up on table1 that orders t.mycolumn2 and \nt.id so that the sorting operation might be skipped, however when table1 \nis accessed it is using an index that's on t.id due to the left join to \ntable2, then filters out the \"WHERE t.external_id IN ('6544', '2234', \n'2', '4536')\", so at this point the data I've accessed came from \nsomething other than my 'order by' and must be reordered at the end.\n\nNOTE: The where clause (WHERE t.external_id IN ('6544', '2234', '2', \n'4536')) alone doesn't use an index at all, but results in a sequential \nscan (result set is about 60% of the total rows I believe).\n\nI've been unable to get the query planner to pick an index on \n(t.mycolumn2, t.id).\n\n2. I reworked the entire query to eliminate the subqueries living all in \njoins. Overall the query LOOKS more efficient, but takes about the same \namount of time as the main one because most of the execution time is \ndone in the two sorting operations.\n\n3. I'm trying to eliminate the union, however I have two problems.\nA) I can't figure out how to have an 'or' clause in a single join that \nwould fetch all the correct rows. If I just do:\nLEFT OUTER JOIN table2 t2 ON (t2.real_id = t.id OR t2.real_id = \nt.backup_id), I end up with many less rows than the original query. B.\n\nI believe the issue with this is a row could have one of three \npossibilities:\n* part of the first query but not the second -> results in 1 row after \nthe union\n* part of the second query but not the first -> results in 1 row after \nthe union\n* part of the first query and the second -> results in 2 rows after the \nunion (see 'B)' for why)\n\nB) the third and fourth column in the SELECT will need to be different \ndepending on what column the row is joined on in the LEFT OUTER JOIN to \ntable2, so I may need some expensive case when logic to filter what is \nput there based on whether that row came from the first join clause, or \nthe second.\n\n\nAny thoughts or things I'm looking over? Any help would be greatly \nappreciated. My first goal is to get rid of the sort by the UNION, if \npossible. The second would be to eliminate the last sort by the ORDER \nBY, but I'm not sure if it will be easily doable.\n\nThanks,\n- Brian F\n\n\n\n\n\n\n\n Hi All, \n\n (basic info)\n PostgreSQL 9.2.4\n 64 bit Linux host\n 4GB shared_buffers with 14GB system memory, dedicated database VM\n 10MB work_mem\n\n\n I have a query that takes over 6 minutes to complete, and it's due\n mainly to the two sorting operations being done on this query. 
The\n data it is returning itself is quite large, and with the two sorting\n operations it does (one because of the union, one because of the\n order by), it ends up using about 2.1GB of temporary file\n space (and there is no way I can increase work_mem to hold this).\n\n--[Query]----------------------------\n SELECT\n t.id,\n t.mycolumn1,\n (SELECT table3.otherid FROM table3 WHERE table3.id =\n t.typeid), \n (SELECT table3.otherid FROM table3 WHERE table3.id =\n t2.third_id), \n t.mycolumn2 AS mycolumn2\n\n FROM table1 t\n\n LEFT OUTER JOIN table2 t2 ON t2.real_id = t.id\n\n WHERE t.external_id IN ('6544', '2234', '2', '4536')\n\n UNION\n\n SELECT\n t.id,\n t.mycolumn1,\n (SELECT table3.otherid FROM table3 WHERE table3.id =\n t.typeid),\n (SELECT table3.otherid FROM table3 WHERE table3.id =\n t2.third_id),\n t.mycolumn2 AS mycolumn2\n\n FROM table1 t\n\n LEFT OUTER JOIN table2 t2 ON t2.real_id = t.backup_id\n\n WHERE t.external_id IN ('6544', '2234', '2', '4536')\n\n ORDER BY t.mycolumn2, t.id;\n\n\n --[Explain Analyze (sorry for the\n anonymizing)]----------------------------\n Sort (cost=133824460.450..133843965.150 rows=7801882 width=56)\n (actual time=849405.656..894724.453 rows=9955729 loops=1)\n Sort Key: romeo.three, romeo.quebec_seven\n Sort Method: external merge Disk: 942856kB\n -> Unique (cost=132585723.530..132702751.760 rows=7801882\n width=56) (actual time=535267.196..668982.694 rows=9955729\n loops=1)\n -> Sort (cost=132585723.530..132605228.240\n rows=7801882 width=56) (actual time=535267.194..649304.631\n rows=10792011 loops=1)\n Sort Key: romeo.quebec_seven, romeo.golf,\n ((delta_four 3)), ((delta_four 4)), romeo.three\n Sort Method: external merge Disk: 1008216kB\n -> Append (cost=0.000..131464014.850\n rows=7801882 width=56) (actual time=0.798..291477.595\n rows=10792011 loops=1)\n -> Merge Left Join \n (cost=0.000..46340412.140 rows=2748445 width=56) (actual\n time=0.797..70748.213 rows=3690431 loops=1)\n Merge Cond: (romeo.quebec_seven =\n alpha_bravo.six_kilo)\n -> Index Scan using juliet on zulu\n romeo (cost=0.000..163561.710 rows=2748445 width=52) (actual\n time=0.019..11056.883 rows=3653472 loops=1)\n Filter: (delta_uniform = ANY\n ('two'::integer[]))\n -> Index Only Scan using sierra on\n quebec_juliet alpha_bravo (cost=0.000..86001.860 rows=2191314\n width=8) (actual time=0.047..3996.543 rows=2191314 loops=1)\n Heap Fetches: 2191314\n SubPlan\n -> Index Scan using oscar on\n six_delta (cost=0.000..8.380 rows=1 width=23) (actual\n time=0.009..0.009 rows=1 loops=3690431)\n Index Cond: (quebec_seven =\n romeo.lima)\n SubPlan\n -> Index Scan using oscar on\n six_delta (cost=0.000..8.380 rows=1 width=23) (actual\n time=0.001..0.001 rows=0 loops=3690431)\n Index Cond: (quebec_seven =\n alpha_bravo.delta_november)\n -> Merge Right Join \n (cost=0.000..85045583.890 rows=5053437 width=56) (actual\n time=0.843..213450.477 rows=7101580 loops=1)\n Merge Cond: (alpha_bravo.six_kilo =\n romeo.india)\n -> Index Only Scan using sierra on\n quebec_juliet alpha_bravo (cost=0.000..86001.860 rows=2191314\n width=8) (actual time=0.666..6165.870 rows=2191314 loops=1)\n Heap Fetches: 2191314\n -> Materialize \n (cost=0.000..193106.580 rows=2748445 width=56) (actual\n time=0.134..25852.353 rows=7101580 loops=1)\n -> Index Scan using\n alpha_seven on zulu romeo (cost=0.000..186235.470 rows=2748445\n width=56) (actual time=0.108..18439.857 rows=3653472 loops=1)\n Filter: (delta_uniform =\n ANY ('two'::integer[]))\n SubPlan\n -> Index Scan using oscar on\n six_delta (cost=0.000..8.380 
rows=1 width=23) (actual\n time=0.009..0.010 rows=1 loops=7101580)\n Index Cond: (quebec_seven =\n romeo.lima)\n SubPlan\n -> Index Scan using oscar on\n six_delta (cost=0.000..8.380 rows=1 width=23) (actual\n time=0.007..0.008 rows=1 loops=7101580)\n Index Cond: (quebec_seven =\n alpha_bravo.delta_november)\n --[end]----------------------------\n\n\n\nMy attempts:\n 1. I tried to get an index set up on table1 that orders t.mycolumn2\n and t.id so that the sorting operation might be skipped, however\n when table1 is accessed it is using an index that's on t.id due to\n the left join to table2, then filters out the \"WHERE t.external_id\n IN ('6544', '2234', '2', '4536')\", so at this point the data I've\n accessed came from something other than my 'order by' and must be\n reordered at the end. \n\n NOTE: The where clause (WHERE t.external_id IN ('6544', '2234', '2',\n '4536')) alone doesn't use an index at all, but results in a\n sequential scan (result set is about 60% of the total rows I\n believe).\n\n I've been unable to get the query planner to pick an index on\n (t.mycolumn2, t.id).\n\n 2. I reworked the entire query to eliminate the subqueries living\n all in joins. Overall the query LOOKS more efficient, but takes\n about the same amount of time as the main one because most of the\n execution time is done in the two sorting operations.\n\n 3. I'm trying to eliminate the union, however I have two problems. \n A) I can't figure out how to have an 'or' clause in a single join\n that would fetch all the correct rows. If I just do:\n LEFT OUTER JOIN table2 t2 ON (t2.real_id = t.id OR t2.real_id =\n t.backup_id), I end up with many less rows than the original query.\n B.\n\n I believe the issue with this is a row could have one of three\n possibilities:\n * part of the first query but not the second -> results in 1 row\n after the union\n * part of the second query but not the first -> results in 1 row\n after the union\n * part of the first query and the second -> results in 2 rows\n after the union (see 'B)' for why)\n\n B) the third and fourth column in the SELECT will need to be\n different depending on what column the row is joined on in the LEFT\n OUTER JOIN to table2, so I may need some expensive case when logic\n to filter what is put there based on whether that row came from the\n first join clause, or the second.\n\n\n Any thoughts or things I'm looking over? Any help would be greatly\n appreciated. My first goal is to get rid of the sort by the UNION,\n if possible. The second would be to eliminate the last sort by the\n ORDER BY, but I'm not sure if it will be easily doable.\n\n Thanks,\n - Brian F",
"msg_date": "Thu, 11 Jul 2013 12:27:48 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trying to eliminate union and sort"
},
{
"msg_contents": "Brian,\n\n> 3. I'm trying to eliminate the union, however I have two problems.\n> A) I can't figure out how to have an 'or' clause in a single join that\n> would fetch all the correct rows. If I just do:\n> LEFT OUTER JOIN table2 t2 ON (t2.real_id = t.id OR t2.real_id =\n> t.backup_id), I end up with many less rows than the original query. B.\n> \n> I believe the issue with this is a row could have one of three\n> possibilities:\n> * part of the first query but not the second -> results in 1 row after\n> the union\n> * part of the second query but not the first -> results in 1 row after\n> the union\n> * part of the first query and the second -> results in 2 rows after the\n> union (see 'B)' for why)\n> \n> B) the third and fourth column in the SELECT will need to be different\n> depending on what column the row is joined on in the LEFT OUTER JOIN to\n> table2, so I may need some expensive case when logic to filter what is\n> put there based on whether that row came from the first join clause, or\n> the second.\n\nNo, it doesn't:\n\nSELECT t.id,\n\tt.mycolumn1,\n\ttable3.otherid as otherid1,\n\ttable3a.otherid as otherid2,\n\tt.mycolumn2\nFROM t\n\tLEFT OUTER JOIN table2\n\t ON ( t.id = t2.real_id OR t.backup_id = t2.real_id )\n\tLEFT OUTER JOIN table3\n\t ON ( t.typeid = table3.id )\n LEFT OUTER JOIN table3 as table3a\n ON ( table2.third_id = table3.id )\nWHERE t.external_id IN ( ... )\nORDER BY t.mycolumn2, t.id\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Jul 2013 17:46:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to eliminate union and sort"
},
{
"msg_contents": "On 07/11/2013 06:46 PM, Josh Berkus wrote:\n> Brian,\n>\n>> 3. I'm trying to eliminate the union, however I have two problems.\n>> A) I can't figure out how to have an 'or' clause in a single join that\n>> would fetch all the correct rows. If I just do:\n>> LEFT OUTER JOIN table2 t2 ON (t2.real_id = t.id OR t2.real_id =\n>> t.backup_id), I end up with many less rows than the original query. B.\n>>\n>> I believe the issue with this is a row could have one of three\n>> possibilities:\n>> * part of the first query but not the second -> results in 1 row after\n>> the union\n>> * part of the second query but not the first -> results in 1 row after\n>> the union\n>> * part of the first query and the second -> results in 2 rows after the\n>> union (see 'B)' for why)\n>>\n>> B) the third and fourth column in the SELECT will need to be different\n>> depending on what column the row is joined on in the LEFT OUTER JOIN to\n>> table2, so I may need some expensive case when logic to filter what is\n>> put there based on whether that row came from the first join clause, or\n>> the second.\n> No, it doesn't:\n>\n> SELECT t.id,\n> \tt.mycolumn1,\n> \ttable3.otherid as otherid1,\n> \ttable3a.otherid as otherid2,\n> \tt.mycolumn2\n> FROM t\n> \tLEFT OUTER JOIN table2\n> \t ON ( t.id = t2.real_id OR t.backup_id = t2.real_id )\n> \tLEFT OUTER JOIN table3\n> \t ON ( t.typeid = table3.id )\n> LEFT OUTER JOIN table3 as table3a\n> ON ( table2.third_id = table3.id )\n> WHERE t.external_id IN ( ... )\n> ORDER BY t.mycolumn2, t.id\nI tried this originally, however my resulting rowcount is different.\n\nThe original query returns 9,955,729 rows\nThis above one returns 7,213,906\n\nAs for the counts on the tables:\ntable1 3,653,472\ntable2 2,191,314\ntable3 25,676,589\n\nI think it's safe to assume right now that any resulting joins are not \none-to-one\n\n- Brian F\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jul 2013 16:27:36 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to eliminate union and sort"
},
{
"msg_contents": "\n> As for the counts on the tables:\n> table1 3,653,472\n> table2 2,191,314\n> table3 25,676,589\n> \n> I think it's safe to assume right now that any resulting joins are not\n> one-to-one\n\nHmmm? How is doing a subselect in the SELECT clause even working, then?\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Jul 2013 15:43:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to eliminate union and sort"
},
{
"msg_contents": "On 07/12/2013 04:43 PM, Josh Berkus wrote:\n>> As for the counts on the tables:\n>> table1 3,653,472\n>> table2 2,191,314\n>> table3 25,676,589\n>>\n>> I think it's safe to assume right now that any resulting joins are not\n>> one-to-one\n> Hmmm? How is doing a subselect in the SELECT clause even working, then?\n>\nOh my, this is sad. the query in all returns 9,955,729 rows, so the sub \nqueries are run on each of these resulting rows, however in this entire \nresult set, subquery 1 returns 16 distinct rows, subquery 2 returns 63 \ndifferent rows, but those sub queries are run over 10 million times to \nreturn these few distinct rows. So it's running many times, but \nreturning the same small set of data over and over again.\n\n- Brian F\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Jul 2013 10:12:20 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trying to eliminate union and sort"
},
{
"msg_contents": "Hello,\n\nyou may want to try starting with some CTE that first retrieve required subsets.\nadding ordering within those CTE might also improve the timing of following sort/join operations.\n(sorry for the top posting)\n\nSomething like:\n\nWITH T1 AS (\n SELECT id, typeid, backup_id, mycolumn1, mycolumn2\n FROM table1 WHERE t.external_id IN ('6544', '2234', '2', '4536')\n ORDER BY mycolumn2, id ?\n ),\n \nTYPES AS (SELECT DISTINCT typeid FROM T1),\n\nT3_OTHERS AS ( SELECT id, otherid FROM table3 JOIN TYPES ON table3.id = TYPES.typeid\n -- Order BY id ? \n ), \n\nSELECT\n T1.id,\n T1.mycolumn1,\n T3_OTHERS.otherid,\n T3_2.otherid,\n T1.mycolumn2 AS mycolumn2\n\nFROM T1 \nLEFT OUTER JOIN T3_OTHERS ON T1.typeid = T3_OTHERS.id\nLEFT OUTER JOIN table2 t2 ON (t2.real_id = T1.backup_id OR t2.real_id = t.id\nLEFT OUTER JOIN table3 T3_2 ON t2.third_id = T3_2.id\nORDER BY T1.mycolumn2,T1.id\n\nregards,\nMarc Mamin\n\n________________________________________\nVon: [email protected] [[email protected]]" im Auftrag von "Brian Fehrle [[email protected]]\nGesendet: Montag, 15. Juli 2013 18:12\nAn: [email protected]\nBetreff: Re: [PERFORM] Trying to eliminate union and sort\n\nOn 07/12/2013 04:43 PM, Josh Berkus wrote:\n>> As for the counts on the tables:\n>> table1 3,653,472\n>> table2 2,191,314\n>> table3 25,676,589\n>>\n>> I think it's safe to assume right now that any resulting joins are not\n>> one-to-one\n> Hmmm? How is doing a subselect in the SELECT clause even working, then?\n>\nOh my, this is sad. the query in all returns 9,955,729 rows, so the sub\nqueries are run on each of these resulting rows, however in this entire\nresult set, subquery 1 returns 16 distinct rows, subquery 2 returns 63\ndifferent rows, but those sub queries are run over 10 million times to\nreturn these few distinct rows. So it's running many times, but\nreturning the same small set of data over and over again.\n\n- Brian F\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Jul 2013 18:26:35 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to eliminate union and sort"
},
{
"msg_contents": "I'd try to check why discounts are different. Join with 'or' should work.\nBuild (one query) except all (another query) and check some rows from\nresult.\n 13 лип. 2013 01:28, \"Brian Fehrle\" <[email protected]> напис.\n\n> On 07/11/2013 06:46 PM, Josh Berkus wrote:\n>\n>> Brian,\n>>\n>> 3. I'm trying to eliminate the union, however I have two problems.\n>>> A) I can't figure out how to have an 'or' clause in a single join that\n>>> would fetch all the correct rows. If I just do:\n>>> LEFT OUTER JOIN table2 t2 ON (t2.real_id = t.id OR t2.real_id =\n>>> t.backup_id), I end up with many less rows than the original query. B.\n>>>\n>>> I believe the issue with this is a row could have one of three\n>>> possibilities:\n>>> * part of the first query but not the second -> results in 1 row after\n>>> the union\n>>> * part of the second query but not the first -> results in 1 row after\n>>> the union\n>>> * part of the first query and the second -> results in 2 rows after the\n>>> union (see 'B)' for why)\n>>>\n>>> B) the third and fourth column in the SELECT will need to be different\n>>> depending on what column the row is joined on in the LEFT OUTER JOIN to\n>>> table2, so I may need some expensive case when logic to filter what is\n>>> put there based on whether that row came from the first join clause, or\n>>> the second.\n>>>\n>> No, it doesn't:\n>>\n>> SELECT t.id,\n>> t.mycolumn1,\n>> table3.otherid as otherid1,\n>> table3a.otherid as otherid2,\n>> t.mycolumn2\n>> FROM t\n>> LEFT OUTER JOIN table2\n>> ON ( t.id = t2.real_id OR t.backup_id = t2.real_id )\n>> LEFT OUTER JOIN table3\n>> ON ( t.typeid = table3.id )\n>> LEFT OUTER JOIN table3 as table3a\n>> ON ( table2.third_id = table3.id )\n>> WHERE t.external_id IN ( ... )\n>> ORDER BY t.mycolumn2, t.id\n>>\n> I tried this originally, however my resulting rowcount is different.\n>\n> The original query returns 9,955,729 rows\n> This above one returns 7,213,906\n>\n> As for the counts on the tables:\n> table1 3,653,472\n> table2 2,191,314\n> table3 25,676,589\n>\n> I think it's safe to assume right now that any resulting joins are not\n> one-to-one\n>\n> - Brian F\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nI'd try to check why discounts are different. Join with 'or' should work. Build (one query) except all (another query) and check some rows from result.\n\n13 лип. 2013 01:28, \"Brian Fehrle\" <[email protected]> напис.\nOn 07/11/2013 06:46 PM, Josh Berkus wrote:\n\nBrian,\n\n\n3. I'm trying to eliminate the union, however I have two problems.\nA) I can't figure out how to have an 'or' clause in a single join that\nwould fetch all the correct rows. If I just do:\nLEFT OUTER JOIN table2 t2 ON (t2.real_id = t.id OR t2.real_id =\nt.backup_id), I end up with many less rows than the original query. 
B.\n\nI believe the issue with this is a row could have one of three\npossibilities:\n* part of the first query but not the second -> results in 1 row after\nthe union\n* part of the second query but not the first -> results in 1 row after\nthe union\n* part of the first query and the second -> results in 2 rows after the\nunion (see 'B)' for why)\n\nB) the third and fourth column in the SELECT will need to be different\ndepending on what column the row is joined on in the LEFT OUTER JOIN to\ntable2, so I may need some expensive case when logic to filter what is\nput there based on whether that row came from the first join clause, or\nthe second.\n\nNo, it doesn't:\n\nSELECT t.id,\n t.mycolumn1,\n table3.otherid as otherid1,\n table3a.otherid as otherid2,\n t.mycolumn2\nFROM t\n LEFT OUTER JOIN table2\n ON ( t.id = t2.real_id OR t.backup_id = t2.real_id )\n LEFT OUTER JOIN table3\n ON ( t.typeid = table3.id )\n LEFT OUTER JOIN table3 as table3a\n ON ( table2.third_id = table3.id )\nWHERE t.external_id IN ( ... )\nORDER BY t.mycolumn2, t.id\n\nI tried this originally, however my resulting rowcount is different.\n\nThe original query returns 9,955,729 rows\nThis above one returns 7,213,906\n\nAs for the counts on the tables:\ntable1 3,653,472\ntable2 2,191,314\ntable3 25,676,589\n\nI think it's safe to assume right now that any resulting joins are not one-to-one\n\n- Brian F\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 17 Jul 2013 11:24:40 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trying to eliminate union and sort"
}
] |
[
{
"msg_contents": "Hi!\n\n\nI’ve just learned about the water crisis and thought you would be interested to check out this story:\n\nhttps://waterforward.charitywater.org/et/FWIshxIN\n\n\nLet me know what you think!\n\n\n\nThanks,\nMarcos\n\n\n\n--\nSent via WaterForward, an initiative of charity: water\nWaterForward, 387 Tehama Street, San Francisco, CA 94103, USA.\nClick here to unsubscribe: https://waterforward.charitywater.org/opt_out?et=FWIshxIN\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n\n\nI’ve just learned about the water crisis and thought you would be interested to check out this story:\nhttps://waterforward.charitywater.org\nLet me know what you think!\n\n\nThanks,Marcos\n\n\n\n\n\n\n\n\n\n\n\n\nSent via WaterForward, an initiative of charity: water\n\n\nUnsubscribe\n\n\n\n\nWaterForward, 387 Tehama Street, San Francisco, CA 94103, USA.",
"msg_date": "Mon, 15 Jul 2013 10:00:06 -0700",
"msg_from": "Marcos Luis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thought you'd find this interesting"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm in the process of implementing a table for storing some raw data in the format of a hash containing one level of keys and values. The hash can be quite big (up to 50 keys pointing at values varying from one to several hundred characters)\n\nNow, I'm in doubt whether to use JSON or Hstore for this task. Here is the facts:\n\n- I'm not going to search a lot (if any) in the data stored in the column, i'm only going to load it out.\n- The data is going to be heavily updated (not only inserted). Keys and values are going to be added/overwritten quite some times.\n- My database's biggest current issue is updates, so i don't want that to be a bottle neck.\n- I'm on postgresql 9.2 \n\nSo, question is: Which will be better performance wise, especially for updates? Does the same issues with updates on the MVCC structure apply to updates in Hstore? What is taking up most space on the HDD?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Jul 2013 17:05:07 +0200",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hstore VS. JSON"
},
{
"msg_contents": "\nOn 07/16/2013 11:05 AM, Niels Kristian Schjødt wrote:\n> Hi,\n>\n> I'm in the process of implementing a table for storing some raw data in the format of a hash containing one level of keys and values. The hash can be quite big (up to 50 keys pointing at values varying from one to several hundred characters)\n>\n> Now, I'm in doubt whether to use JSON or Hstore for this task. Here is the facts:\n>\n> - I'm not going to search a lot (if any) in the data stored in the column, i'm only going to load it out.\n> - The data is going to be heavily updated (not only inserted). Keys and values are going to be added/overwritten quite some times.\n\n\nIn both cases, each hstore/json is a single datum, and updating it means \nwriting out the whole datum - in fact the whole row containing the datum.\n\n> - My database's biggest current issue is updates, so i don't want that to be a bottle neck.\n> - I'm on postgresql 9.2\n>\n> So, question is: Which will be better performance wise, especially for updates? Does the same issues with updates on the MVCC structure apply to updates in Hstore? What is taking up most space on the HDD?\n>\n\nMVCC applies to all updates on all kinds of data. Hstore and JSON are \nnot different in this respect.\n\nYou should test the storage effects with your data. On 9.2 for your data \nhstore might be a better bet, since in 9.2 hstore has more operators \navailable natively.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Jul 2013 11:33:28 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hstore VS. JSON"
},
{
"msg_contents": "On Tue, Jul 16, 2013 at 10:33 AM, Andrew Dunstan <[email protected]> wrote:\n>\n> On 07/16/2013 11:05 AM, Niels Kristian Schjødt wrote:\n>>\n>> Hi,\n>>\n>> I'm in the process of implementing a table for storing some raw data in\n>> the format of a hash containing one level of keys and values. The hash can\n>> be quite big (up to 50 keys pointing at values varying from one to several\n>> hundred characters)\n>>\n>> Now, I'm in doubt whether to use JSON or Hstore for this task. Here is the\n>> facts:\n>>\n>> - I'm not going to search a lot (if any) in the data stored in the column,\n>> i'm only going to load it out.\n>> - The data is going to be heavily updated (not only inserted). Keys and\n>> values are going to be added/overwritten quite some times.\n>\n>\n>\n> In both cases, each hstore/json is a single datum, and updating it means\n> writing out the whole datum - in fact the whole row containing the datum.\n>\n>\n>> - My database's biggest current issue is updates, so i don't want that to\n>> be a bottle neck.\n>> - I'm on postgresql 9.2\n>>\n>> So, question is: Which will be better performance wise, especially for\n>> updates? Does the same issues with updates on the MVCC structure apply to\n>> updates in Hstore? What is taking up most space on the HDD?\n>>\n>\n> MVCC applies to all updates on all kinds of data. Hstore and JSON are not\n> different in this respect.\n>\n> You should test the storage effects with your data. On 9.2 for your data\n> hstore might be a better bet, since in 9.2 hstore has more operators\n> available natively.\n\nyeah.\n\nhstore pros:\n*) GIST/GIN access\n*) good searching operators, in particular @>\n\njson pros:\n*) nested, supports fancier structures\n*) limited type support (json numerics, etc)\n\nI don't know which is more compact when storing similar data. My\nguess is they are pretty close. Since json is something of a standard\nfor data serialization though I expect it will largely displace hstore\nover time.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Jul 2013 10:47:18 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hstore VS. JSON"
}
] |
[
{
"msg_contents": "Hi all (Hopefully this is the correct mailing list for this).\n\nI'm working on performance tuning a host of queries on PostgreSQL 9.2 \nfrom an application, each query having its own issues and fixes, however \nfrom what I understand this application runs the exact same queries on \nthe exact same data in half the time on oracle and SQL server.\n\nAre there any known differences between the database systems in terms of \nquery planners or general operations (sorting, unions) that are notable \ndifferent between the systems that would make postgres slow down when \nexecuting the exact same queries?\n\nIt's worth noting that the queries are not that good, they have issues \nwith bad sub-selects, Cartesian products, and what looks like bad query \ndesign in general, so the blame isn't completely with the database being \nslow, but I wonder what makes oracle preform better when given \nnot-so-great queries?\n\nI know this is rather general and high level, but any tips or experience \nanyone has would be appreciated.\n\n\nThanks,\n- Brian F\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Jul 2013 10:51:42 -0600",
"msg_from": "Brian Fehrle <[email protected]>",
"msg_from_op": true,
"msg_subject": "General key issues when comparing performance between PostgreSQL\n and oracle"
},
{
"msg_contents": "On Tue, Jul 16, 2013 at 9:51 AM, Brian Fehrle <[email protected]>wrote:\n\n>\n> Are there any known differences between the database systems in terms of\n> query planners or general operations (sorting, unions) that are notable\n> different between the systems that would make postgres slow down when\n> executing the exact same queries?\n>\n>\nyes.\n\nBut unless you provide more detail, it's impossible to say what those\ndifferences might be. In all probability, postgresql can be configured to\nprovide comparable performance, but without details, who can really say?\n\nOn Tue, Jul 16, 2013 at 9:51 AM, Brian Fehrle <[email protected]> wrote:\n\nAre there any known differences between the database systems in terms of query planners or general operations (sorting, unions) that are notable different between the systems that would make postgres slow down when executing the exact same queries?\nyes.But unless you provide more detail, it's impossible to say what those differences might be. In all probability, postgresql can be configured to provide comparable performance, but without details, who can really say?",
"msg_date": "Tue, 16 Jul 2013 20:01:19 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General key issues when comparing performance between\n PostgreSQL and oracle"
},
{
"msg_contents": "On Tue, Jul 16, 2013 at 10:51 AM, Brian Fehrle\n<[email protected]> wrote:\n> Hi all (Hopefully this is the correct mailing list for this).\n>\n> I'm working on performance tuning a host of queries on PostgreSQL 9.2 from\n> an application, each query having its own issues and fixes, however from\n> what I understand this application runs the exact same queries on the exact\n> same data in half the time on oracle and SQL server.\n>\n> Are there any known differences between the database systems in terms of\n> query planners or general operations (sorting, unions) that are notable\n> different between the systems that would make postgres slow down when\n> executing the exact same queries?\n>\n> It's worth noting that the queries are not that good, they have issues with\n> bad sub-selects, Cartesian products, and what looks like bad query design in\n> general, so the blame isn't completely with the database being slow, but I\n> wonder what makes oracle preform better when given not-so-great queries?\n>\n> I know this is rather general and high level, but any tips or experience\n> anyone has would be appreciated.\n\nHere's the thing. The Postgres team is small, and they have to choose\nwisely what to work on. So, do they work on making everything else\nbetter, faster and more reliable, or do they dedicate precious man\nhours to making bad queries run well?\n\nFact is that if the query can be made better, then you get to do that.\nAfter optimizing the query, if you find a corner case where\npostgresql's query planner is making a bad decision or where some new\ndb method would make it faster then it's time to appeal to the\ncommunity to see what they can do. If you don't have the time to fix\nbad queries, then you might want to stick to Oracle or MSSQL server.\nOR spend the money you'd spend on those on a postgres hacker to see\nwhat they can do for you.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Jul 2013 22:14:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General key issues when comparing performance between\n PostgreSQL and oracle"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Tue, Jul 16, 2013 at 10:51 AM, Brian Fehrle\n> <[email protected]> wrote:\n>> I'm working on performance tuning a host of queries on PostgreSQL 9.2 from\n>> an application, each query having its own issues and fixes, however from\n>> what I understand this application runs the exact same queries on the exact\n>> same data in half the time on oracle and SQL server.\n>> \n>> It's worth noting that the queries are not that good, they have issues with\n>> bad sub-selects, Cartesian products, and what looks like bad query design in\n>> general, so the blame isn't completely with the database being slow, but I\n>> wonder what makes oracle preform better when given not-so-great queries?\n\n> Fact is that if the query can be made better, then you get to do that.\n\nAnother point worth making here: we often find that ugly-looking queries\ngot that way by being hand-optimized to exploit implementation\npeculiarities of whichever DBMS they were being run on before. This is\na particularly nasty form of vendor lock-in, especially if you're\ndealing with legacy code whose current custodians don't remember having\ndone such hacking on the queries.\n\nI'm not necessarily claiming that your particular cases meet that\ndescription, since you're saying that the queries perform well on both\nOracle and SQL Server. But without details it's hard to judge. It\nseems most likely from here that your first problem is a badly tuned\nPostgres installation.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Jul 2013 00:51:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General key issues when comparing performance between PostgreSQL\n and oracle"
}
] |
[
{
"msg_contents": "In the asynchronous commit documentation, it says:\n\n*The commands supporting two-phase commit, such as PREPARE TRANSACTION, are\nalso always synchronous\n*\n\nDoes this mean that all queries that are part of a distributed transaction\nare synchronous?\n\nIn our databases we have extremely high disk I/O, I'm wondering if\ndistributed transactions may be the reason behind it.\n\nThanks\n\nIn the asynchronous commit documentation, it says:\nThe commands supporting two-phase commit, such as PREPARE TRANSACTION, are also always synchronous\nDoes this mean that all queries that are part of a distributed transaction are synchronous?\nIn our databases we have extremely high disk I/O, I'm wondering if distributed transactions may be the reason behind it.\nThanks",
"msg_date": "Wed, 17 Jul 2013 10:18:34 +0300",
"msg_from": "Xenofon Papadopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Distributed transactions and asynchronous commit"
},
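For context, a minimal sketch of the two-phase commit sequence the quoted documentation is talking about; the transaction identifier 'tx1' and the accounts table are illustrative only (not taken from this thread), and PREPARE TRANSACTION additionally requires max_prepared_transactions > 0:

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- illustrative work
    PREPARE TRANSACTION 'tx1';    -- phase 1: always flushed to WAL synchronously

    COMMIT PREPARED 'tx1';        -- phase 2: also synchronous, even with synchronous_commit = off
    -- or: ROLLBACK PREPARED 'tx1';

Both PREPARE TRANSACTION and COMMIT PREPARED ignore the synchronous_commit setting, which is why the asynchronous-commit documentation singles them out.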
{
"msg_contents": ">\n> In the asynchronous commit documentation, it says:\n>\n> *The commands supporting two-phase commit, such as PREPARE TRANSACTION,\n> are also always synchronous\n> *\n>\n> Does this mean that all queries that are part of a distributed transaction\n> are synchronous?\n>\n\nYep\n\n\n> In our databases we have extremely high disk I/O, I'm wondering if\n> distributed transactions may be the reason behind it.\n>\n\ncould be, but you have to send us more info about your setup, your\nconfiguration , especially your io settings, output of vmstats would also\nbe helpful\n\n\n> Thanks\n>\n>\nVasilis Ventirozos\n\nIn the asynchronous commit documentation, it says:\nThe commands supporting two-phase commit, such as PREPARE TRANSACTION, are also always synchronous\nDoes this mean that all queries that are part of a distributed transaction are synchronous?\nYep \nIn our databases we have extremely high disk I/O, I'm wondering if distributed transactions may be the reason behind it.\n could be, but you have to send us more info about your setup, your configuration , especially your io settings, output of vmstats would also be helpful \nThanks\n\nVasilis Ventirozos",
"msg_date": "Wed, 17 Jul 2013 10:28:50 +0100",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
},
{
"msg_contents": "Hi,\n\nIl 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n> In the asynchronous commit documentation, it says:\n>\n> /The commands supporting two-phase commit, such as PREPARE \n> TRANSACTION, are also always synchronous\n> /\n>\n> Does this mean that all queries that are part of a distributed \n> transaction are synchronous?\n>\n> In our databases we have extremely high disk I/O, I'm wondering if \n> distributed transactions may be the reason behind it.\n\nDistributed transactions are base on two-phase-commit (2PC) algorithms \nfor ensuring correct transaction completion, so are synchronous.\nHowever, I think this is not the main reason behind your extremely high \ndisk I/O. You should check if your system is properly tuned to get the \nbest performances.\nFirst of all, you could take a look on your PostgreSQL configurations, \nand check if shared_memory is set properly taking into account your RAM \navailability. The conservative PostgreSQL default value is 24 MB, \nforcing system to exploit many disk I/O resources.\nAside from this, you could take a look if autovacuum is often triggered \n(generating a large amount of I/O) in case of large use of \nupdates/inserts in your database.\n\nRegards,\n\nGiuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n\n\n\n\nHi,\n\n Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n\n\n\nIn\n the asynchronous commit documentation, it says:\n\n\nThe\n commands supporting two-phase commit, such as PREPARE\n TRANSACTION,\n are also always synchronous\n\n\n\nDoes\n this mean that all queries that are part of a distributed\n transaction are synchronous?\n\n\nIn\n our databases we have extremely high disk I/O, I'm\n wondering if distributed transactions may be the reason\n behind it.\n\n\n\n Distributed transactions are base on two-phase-commit (2PC)\n algorithms for ensuring correct transaction completion, so are\n synchronous. \n However, I think this is not the main reason behind your extremely\n high disk I/O. You should check if your system is properly tuned to\n get the best performances. \n First of all, you could take a look on your PostgreSQL\n configurations, and check if shared_memory is set properly taking\n into account your RAM availability. The conservative PostgreSQL\n default value is 24 MB, forcing system to exploit many disk I/O\n resources.\n Aside from this, you could take a look if autovacuum is often\n triggered (generating a large amount of I/O) in case of large use of\n updates/inserts in your database.\n\n Regards,\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Wed, 17 Jul 2013 12:09:17 +0200",
"msg_from": "Giuseppe Broccolo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
},
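To make the two checks above concrete, a short psql sketch using only standard settings and the stock pg_stat_user_tables statistics view; no application-specific names are assumed:

    SHOW shared_buffers;
    SHOW work_mem;

    -- which tables autovacuum is actually hitting, and how often
    SELECT relname, n_dead_tup, last_autovacuum, autovacuum_count
    FROM pg_stat_user_tables
    ORDER BY autovacuum_count DESC
    LIMIT 10;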
{
"msg_contents": "Thank you for your replies so far.\nThe DB in question is Postgres+ 9.2 running inside a VM with the following\nspecs:\n\n16 CPUs (dedicated to the VM)\n60G RAM\nRAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs\nfor each.\n\nWe have 3 kind of queries:\n\n- The vast majority of the queries are small SELECT/INSERT/UPDATEs which\nare part of distributed transactions\n- A few small ones, which are mostly SELECTs\n- A few bulk loads, where we add 100k - 1M of rows in tables\n\nOur settings are:\n\nshared_buffers: 8G\nwork_mem: 12M\ncheckpoint_segments: 64\n\nAutovacuum is somewhat aggressive, as our data changes quite often and\nwithout it the planner was completely off.\nRight now we use:\n\n autovacuum_analyze_scale_factor: 0.1\n autovacuum_analyze_threshold: 50\n autovacuum_freeze_max_age: 200000000\n autovacuum_max_workers: 12\n autovacuum_naptime: 10s\n autovacuum_vacuum_cost_delay: 20ms\n autovacuum_vacuum_cost_limit: -1\n autovacuum_vacuum_scale_factor: 0.2\n autovacuum_vacuum_threshold: 50\n\nAt high-peak hour, the disk utilization for the pgdata mountpoint is:\n\n*00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n await svctm %util*\n13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28\n95.09 0.11 86.11\n13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20\n59.94 0.15 82.30\n13:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95\n 125.38 0.33 90.73\n13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12\n88.57 0.20 68.12\n14:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48\n44.09 0.19 100.05\n14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52\n65.64 0.21 104.55\n14:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81\n 134.32 0.20 104.92\n14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53\n76.13 0.15 65.25\n14:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32\n54.26 0.20 97.51\n14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41\n16.13 0.24 76.17\n\nand for pgarchives:\n\n*00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n await svctm %util*\n13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05\n 165.94 0.02 4.32\n13:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17\n41.11 0.08 12.02\n13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75\n21.40 0.08 6.99\n13:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40\n23.76 0.01 1.69\n14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75\n26.95 0.01 1.61\n14:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86\n78.97 0.08 14.46\n14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17\n68.42 0.15 24.71\n14:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56\n78.89 0.08 13.61\n14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38\n24.76 0.01 1.40\n14:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91\n23.59 0.02 2.93\n\n\n\n\nOn Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <\[email protected]> wrote:\n\n> Hi,\n>\n> Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n>\n> In the asynchronous commit documentation, it says:\n>\n> *The commands supporting two-phase commit, such as PREPARE TRANSACTION,\n> are also always synchronous\n> *\n>\n> Does this mean that all queries that are part of a distributed\n> transaction are synchronous?\n>\n> In our databases we have extremely high disk I/O, I'm wondering if\n> distributed transactions may be the reason behind it.\n>\n>\n> Distributed transactions are base on two-phase-commit (2PC) algorithms for\n> ensuring correct transaction completion, so are synchronous.\n> However, I think this is not the main reason behind 
your extremely high\n> disk I/O. You should check if your system is properly tuned to get the best\n> performances.\n> First of all, you could take a look on your PostgreSQL configurations, and\n> check if shared_memory is set properly taking into account your RAM\n> availability. The conservative PostgreSQL default value is 24 MB, forcing\n> system to exploit many disk I/O resources.\n> Aside from this, you could take a look if autovacuum is often triggered\n> (generating a large amount of I/O) in case of large use of updates/inserts\n> in your database.\n>\n> Regards,\n>\n> Giuseppe.\n>\n> --\n> Giuseppe Broccolo - 2ndQuadrant Italy\n> PostgreSQL Training, Services and [email protected] | www.2ndQuadrant.it\n>\n>\n\nThank you for your replies so far.The DB in question is Postgres+ 9.2 running inside a VM with the following specs:16 CPUs (dedicated to the VM)60G RAMRAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs for each.\nWe have 3 kind of queries:- The vast majority of the queries are small SELECT/INSERT/UPDATEs which are part of distributed transactions- A few small ones, which are mostly SELECTs\n- A few bulk loads, where we add 100k - 1M of rows in tablesOur settings are:shared_buffers: 8Gwork_mem: 12Mcheckpoint_segments: 64\nAutovacuum is somewhat aggressive, as our data changes quite often and without it the planner was completely off.Right now we use: autovacuum_analyze_scale_factor: 0.1 \n autovacuum_analyze_threshold: 50 autovacuum_freeze_max_age: 200000000 autovacuum_max_workers: 12 autovacuum_naptime: 10s autovacuum_vacuum_cost_delay: 20ms \n autovacuum_vacuum_cost_limit: -1 autovacuum_vacuum_scale_factor: 0.2 autovacuum_vacuum_threshold: 50 At high-peak hour, the disk utilization for the pgdata mountpoint is:\n00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28 95.09 0.11 86.11\n13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20 59.94 0.15 82.3013:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95 125.38 0.33 90.73\n13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12 88.57 0.20 68.1214:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48 44.09 0.19 100.05\n14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52 65.64 0.21 104.5514:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81 134.32 0.20 104.92\n14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53 76.13 0.15 65.2514:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32 54.26 0.20 97.51\n14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41 16.13 0.24 76.17and for pgarchives:00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util\n13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05 165.94 0.02 4.3213:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17 41.11 0.08 12.02\n13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75 21.40 0.08 6.9913:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40 23.76 0.01 1.69\n14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75 26.95 0.01 1.6114:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86 78.97 0.08 14.46\n14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17 68.42 0.15 24.7114:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56 78.89 0.08 13.61\n14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38 24.76 0.01 1.4014:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91 23.59 0.02 2.93\nOn Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <[email protected]> wrote:\n\n\nHi,\n\n Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n\n\n\nIn\n the 
asynchronous commit documentation, it says:\n\n\nThe\n commands supporting two-phase commit, such as PREPARE\n TRANSACTION,\n are also always synchronous\n\n\n\nDoes\n this mean that all queries that are part of a distributed\n transaction are synchronous?\n\n\nIn\n our databases we have extremely high disk I/O, I'm\n wondering if distributed transactions may be the reason\n behind it.\n\n\n\n Distributed transactions are base on two-phase-commit (2PC)\n algorithms for ensuring correct transaction completion, so are\n synchronous. \n However, I think this is not the main reason behind your extremely\n high disk I/O. You should check if your system is properly tuned to\n get the best performances. \n First of all, you could take a look on your PostgreSQL\n configurations, and check if shared_memory is set properly taking\n into account your RAM availability. The conservative PostgreSQL\n default value is 24 MB, forcing system to exploit many disk I/O\n resources.\n Aside from this, you could take a look if autovacuum is often\n triggered (generating a large amount of I/O) in case of large use of\n updates/inserts in your database.\n\n Regards,\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Wed, 17 Jul 2013 13:52:19 +0300",
"msg_from": "Xenofon Papadopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
},
{
"msg_contents": "Il 17/07/2013 12:52, Xenofon Papadopoulos ha scritto:\n> Thank you for your replies so far.\n> The DB in question is Postgres+ 9.2 running inside a VM with the \n> following specs:\n>\n> 16 CPUs (dedicated to the VM)\n> 60G RAM\n> RAID-10 storage on a SAN for pgdata and pgarchieves, using different \n> LUNs for each.\n>\n> We have 3 kind of queries:\n>\n> - The vast majority of the queries are small SELECT/INSERT/UPDATEs \n> which are part of distributed transactions\n> - A few small ones, which are mostly SELECTs\n> - A few bulk loads, where we add 100k - 1M of rows in tables\n>\n> Our settings are:\n>\n> shared_buffers: 8G\n> work_mem: 12M\n> checkpoint_segments: 64\n>\n\nshared_buffers could be set up to 20-30% of the available RAM: in your \ncase, 16GB could be a reasonable value.\n\n> Autovacuum is somewhat aggressive, as our data changes quite often and \n> without it the planner was completely off.\n> Right now we use:\n>\n> autovacuum_analyze_scale_factor: 0.1\n> autovacuum_analyze_threshold: 50\n> autovacuum_freeze_max_age: 200000000\n> autovacuum_max_workers: 12\n> autovacuum_naptime: 10s\n> autovacuum_vacuum_cost_delay: 20ms\n> autovacuum_vacuum_cost_limit: -1\n> autovacuum_vacuum_scale_factor: 0.2\n> autovacuum_vacuum_threshold: 50\n\nThis means that auto vacuum will be triggered after around 50 updates \naech time, if your database is doing a lot of updates/inserts (as I \nunderstood) an unnecessary amount of vacuum statements can be reached, \nwhich will generate a lot of IO. If the inserts/updates are small, this \nvalue could be decreased.\n\nGiuseppe.\n\n>\n> At high-peak hour, the disk utilization for the pgdata mountpoint is:\n>\n> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz \n> avgqu-sz await svctm %util*\n> 13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28 \n> 95.09 0.11 86.11\n> 13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20 \n> 59.94 0.15 82.30\n> 13:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95 \n> 125.38 0.33 90.73\n> 13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12 \n> 88.57 0.20 68.12\n> 14:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48 \n> 44.09 0.19 100.05\n> 14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52 \n> 65.64 0.21 104.55\n> 14:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81 \n> 134.32 0.20 104.92\n> 14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53 \n> 76.13 0.15 65.25\n> 14:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32 \n> 54.26 0.20 97.51\n> 14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41 \n> 16.13 0.24 76.17\n>\n> and for pgarchives:\n>\n> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz \n> avgqu-sz await svctm %util*\n> 13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05 \n> 165.94 0.02 4.32\n> 13:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17 \n> 41.11 0.08 12.02\n> 13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75 \n> 21.40 0.08 6.99\n> 13:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40 \n> 23.76 0.01 1.69\n> 14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75 \n> 26.95 0.01 1.61\n> 14:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86 \n> 78.97 0.08 14.46\n> 14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17 \n> 68.42 0.15 24.71\n> 14:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56 \n> 78.89 0.08 13.61\n> 14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38 \n> 24.76 0.01 1.40\n> 14:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91 \n> 23.59 0.02 2.93\n>\n>\n>\n>\n> On Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo \n> <[email 
protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n>> In the asynchronous commit documentation, it says:\n>>\n>> /The commands supporting two-phase commit, such as PREPARE\n>> TRANSACTION, are also always synchronous\n>> /\n>>\n>> Does this mean that all queries that are part of a distributed\n>> transaction are synchronous?\n>>\n>> In our databases we have extremely high disk I/O, I'm wondering\n>> if distributed transactions may be the reason behind it.\n>\n> Distributed transactions are base on two-phase-commit (2PC)\n> algorithms for ensuring correct transaction completion, so are\n> synchronous.\n> However, I think this is not the main reason behind your extremely\n> high disk I/O. You should check if your system is properly tuned\n> to get the best performances.\n> First of all, you could take a look on your PostgreSQL\n> configurations, and check if shared_memory is set properly taking\n> into account your RAM availability. The conservative PostgreSQL\n> default value is 24 MB, forcing system to exploit many disk I/O\n> resources.\n> Aside from this, you could take a look if autovacuum is often\n> triggered (generating a large amount of I/O) in case of large use\n> of updates/inserts in your database.\n>\n> Regards,\n>\n> Giuseppe.\n>\n> -- \n> Giuseppe Broccolo - 2ndQuadrant Italy\n> PostgreSQL Training, Services and Support\n> [email protected] <mailto:[email protected]> |www.2ndQuadrant.it <http://www.2ndQuadrant.it>\n>\n>\n\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n\n\n\n\nIl 17/07/2013 12:52, Xenofon\n Papadopoulos ha scritto:\n\n\nThank you for your replies so far.\n The DB in question is Postgres+ 9.2 running inside a VM\n with the following specs:\n\n\n16 CPUs (dedicated to the VM)\n60G RAM\nRAID-10 storage on a SAN for pgdata and pgarchieves, using\n different LUNs for each.\n\n\nWe have 3 kind of queries:\n\n\n- The vast majority of the queries are small\n SELECT/INSERT/UPDATEs which are part of distributed\n transactions\n- A few small ones, which are mostly SELECTs\n- A few bulk loads, where we add 100k - 1M of rows in\n tables\n\n\nOur settings are:\n\n\nshared_buffers: 8G\nwork_mem: 12M\ncheckpoint_segments: 64\n\n\n\n\n\n\n shared_buffers could be set up to 20-30% of the available RAM: in\n your case, 16GB could be a reasonable value.\n\n\n\nAutovacuum is somewhat aggressive, as our data changes\n quite often and without it the planner was completely off.\nRight now we use:\n\n\n\n autovacuum_analyze_scale_factor: 0.1 \n autovacuum_analyze_threshold: 50 \n\n\n\n\n\n\n\n autovacuum_freeze_max_age: 200000000 \n\n\n\n\n\n\n autovacuum_max_workers: 12 \n autovacuum_naptime: 10s \n autovacuum_vacuum_cost_delay: 20ms \n autovacuum_vacuum_cost_limit: -1 \n autovacuum_vacuum_scale_factor: 0.2 \n autovacuum_vacuum_threshold: 50 \n\n\n\n\n This means that auto vacuum will be triggered after around 50\n updates aech time, if your database is doing a lot of\n updates/inserts (as I understood) an unnecessary amount of vacuum\n statements can be reached, which will generate a lot of IO. 
If the\n inserts/updates are small, this value could be decreased.\n\n Giuseppe.\n\n\n\n\n \n\n\n\nAt high-peak hour, the disk utilization for the pgdata\n mountpoint is:\n\n\n\n00:00:01 DEV tps rd_sec/s wr_sec/s\n avgrq-sz avgqu-sz await svctm %util\n\n\n13:20:01 dev253-2 7711.62 24166.97 56657.95 \n 10.48 735.28 95.09 0.11 86.11\n13:30:01 dev253-2 5340.88 19465.30 39133.32 \n 10.97 319.20 59.94 0.15 82.30\n13:40:01 dev253-2 2791.02 13061.76 19330.40 \n 11.61 349.95 125.38 0.33 90.73\n13:50:01 dev253-2 3478.69 10503.84 25505.27 \n 10.35 308.12 88.57 0.20 68.12\n14:00:01 dev253-2 5269.12 33613.43 35830.13 \n 13.18 232.48 44.09 0.19 100.05\n14:10:01 dev253-2 4910.24 21767.22 33970.96 \n 11.35 322.52 65.64 0.21 104.55\n14:20:02 dev253-2 5358.95 40772.03 33682.46 \n 13.89 721.81 134.32 0.20 104.92\n14:30:01 dev253-2 4420.51 17256.16 33315.27 \n 11.44 336.53 76.13 0.15 65.25\n14:40:02 dev253-2 4884.13 28439.26 31604.76 \n 12.29 265.32 54.26 0.20 97.51\n14:50:01 dev253-2 3124.91 8077.46 22511.59 \n 9.79 50.41 16.13 0.24 76.17\n\n\n\nand for pgarchives:\n\n\n\n00:00:01 DEV tps rd_sec/s wr_sec/s\n avgrq-sz avgqu-sz await svctm %util\n\n\n13:20:01 dev253-3 2802.25 0.69 22417.32 \n 8.00 465.05 165.94 0.02 4.32\n13:30:01 dev253-3 1559.87 11159.45 12120.99 \n 14.92 64.17 41.11 0.08 12.02\n13:40:01 dev253-3 922.62 8066.62 7129.15 \n 16.47 19.75 21.40 0.08 6.99\n13:50:01 dev253-3 1194.81 895.34 9524.53 \n 8.72 28.40 23.76 0.01 1.69\n14:00:01 dev253-3 1919.12 0.46 15352.49 \n 8.00 51.75 26.95 0.01 1.61\n14:10:01 dev253-3 1770.59 9286.61 13873.79 \n 13.08 139.86 78.97 0.08 14.46\n14:20:02 dev253-3 1595.04 11810.63 12389.08 \n 15.17 109.17 68.42 0.15 24.71\n14:30:01 dev253-3 1793.71 12173.88 13957.79 \n 14.57 141.56 78.89 0.08 13.61\n14:40:02 dev253-3 1751.62 0.43 14012.53 \n 8.00 43.38 24.76 0.01 1.40\n14:50:01 dev253-3 1351.72 3225.19 10707.29 \n 10.31 31.91 23.59 0.02 2.93\n\n\n\n\n\n\n\n\nOn Wed, Jul 17, 2013 at 1:09 PM,\n Giuseppe Broccolo <[email protected]>\n wrote:\n\n\nHi,\n\n Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n\n\n\n\n\nIn\n\n the asynchronous commit documentation, it\n says:\n\n\nThe\n\n commands supporting two-phase commit, such as PREPARE\n\n TRANSACTION,\n are also always synchronous\n\n\n\nDoes\n\n this mean that all queries that are part of\n a distributed transaction are synchronous?\n\n\nIn\n\n our databases we have extremely high disk\n I/O, I'm wondering if distributed\n transactions may be the reason behind it.\n\n\n\n\n\n Distributed transactions are base on two-phase-commit\n (2PC) algorithms for ensuring correct transaction\n completion, so are synchronous. \n However, I think this is not the main reason behind your\n extremely high disk I/O. You should check if your system\n is properly tuned to get the best performances. \n First of all, you could take a look on your PostgreSQL\n configurations, and check if shared_memory is set properly\n taking into account your RAM availability. The\n conservative PostgreSQL default value is 24 MB, forcing\n system to exploit many disk I/O resources.\n Aside from this, you could take a look if autovacuum is\n often triggered (generating a large amount of I/O) in case\n of large use of updates/inserts in your database.\n\n Regards,\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\n\n\n\n\n\n\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Wed, 17 Jul 2013 13:16:06 +0200",
"msg_from": "Giuseppe Broccolo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
},
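For reference, with the settings quoted above the per-table condition autovacuum actually uses (per the PostgreSQL documentation) is roughly:

    vacuum threshold = autovacuum_vacuum_threshold
                     + autovacuum_vacuum_scale_factor * reltuples
                     = 50 + 0.2 * reltuples

    -- e.g. a 1,000,000-row table needs about 200,050 dead tuples before
    -- autovacuum fires; the flat 50 only dominates on very small tables.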
{
"msg_contents": "On Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos <[email protected]>wrote:\n\n> Thank you for your replies so far.\n> The DB in question is Postgres+ 9.2 running inside a VM with the following\n> specs:\n>\n> 16 CPUs (dedicated to the VM)\n> 60G RAM\n> RAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs\n> for each.\n>\n> We have 3 kind of queries:\n>\n> - The vast majority of the queries are small SELECT/INSERT/UPDATEs which\n> are part of distributed transactions\n> - A few small ones, which are mostly SELECTs\n> - A few bulk loads, where we add 100k - 1M of rows in tables\n>\n> Our settings are:\n>\n> shared_buffers: 8G\n> work_mem: 12M\n> checkpoint_segments: 64\n>\n> Autovacuum is somewhat aggressive, as our data changes quite often and\n> without it the planner was completely off.\n> Right now we use:\n>\n> autovacuum_analyze_scale_factor: 0.1\n> autovacuum_analyze_threshold: 50\n> autovacuum_freeze_max_age: 200000000\n> autovacuum_max_workers: 12\n> autovacuum_naptime: 10s\n> autovacuum_vacuum_cost_delay: 20ms\n> autovacuum_vacuum_cost_limit: -1\n> autovacuum_vacuum_scale_factor: 0.2\n> autovacuum_vacuum_threshold: 50\n>\n\nsettings look ok, except vacuum and analyze threshold that is in my opinion\ntoo agressive (500 would make more sense) and workers at 6 you haven't\nmentioned wal_buffers and effective_io_concurrency settings but i dont\nthink that it would make much of a difference\n\n>\n>\n> At high-peak hour, the disk utilization for the pgdata mountpoint is:\n>\n> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n> await svctm %util*\n> 13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28\n> 95.09 0.11 86.11\n> 13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20\n> 59.94 0.15 82.30\n> 13:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95\n> 125.38 0.33 90.73\n> 13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12\n> 88.57 0.20 68.12\n> 14:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48\n> 44.09 0.19 100.05\n> 14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52\n> 65.64 0.21 104.55\n> 14:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81\n> 134.32 0.20 104.92\n> 14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53\n> 76.13 0.15 65.25\n> 14:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32\n> 54.26 0.20 97.51\n> 14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41\n> 16.13 0.24 76.17\n>\n\nassuming that sector = 512 bytes, it means that your san makes 20mb/sec\nread which if its not totally random-reads is quite low,\ni would start from there, make tests to see if everything works ok,\n(bonnie++, dd , etc) and if you are getting the numbers you are supposed to\n\n\n> and for pgarchives:\n>\n> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n> await svctm %util*\n> 13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05\n> 165.94 0.02 4.32\n> 13:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17\n> 41.11 0.08 12.02\n> 13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75\n> 21.40 0.08 6.99\n> 13:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40\n> 23.76 0.01 1.69\n> 14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75\n> 26.95 0.01 1.61\n> 14:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86\n> 78.97 0.08 14.46\n> 14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17\n> 68.42 0.15 24.71\n> 14:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56\n> 78.89 0.08 13.61\n> 14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38\n> 24.76 0.01 1.40\n> 14:50:01 dev253-3 1351.72 3225.19 10707.29 
10.31 31.91\n> 23.59 0.02 2.93\n>\n>\n>\n>\n> On Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>> Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n>>\n>> In the asynchronous commit documentation, it says:\n>>\n>> *The commands supporting two-phase commit, such as PREPARE TRANSACTION,\n>> are also always synchronous\n>> *\n>>\n>> Does this mean that all queries that are part of a distributed\n>> transaction are synchronous?\n>>\n>> In our databases we have extremely high disk I/O, I'm wondering if\n>> distributed transactions may be the reason behind it.\n>>\n>>\n>> Distributed transactions are base on two-phase-commit (2PC) algorithms\n>> for ensuring correct transaction completion, so are synchronous.\n>> However, I think this is not the main reason behind your extremely high\n>> disk I/O. You should check if your system is properly tuned to get the best\n>> performances.\n>> First of all, you could take a look on your PostgreSQL configurations,\n>> and check if shared_memory is set properly taking into account your RAM\n>> availability. The conservative PostgreSQL default value is 24 MB, forcing\n>> system to exploit many disk I/O resources.\n>> Aside from this, you could take a look if autovacuum is often triggered\n>> (generating a large amount of I/O) in case of large use of updates/inserts\n>> in your database.\n>>\n>> Regards,\n>>\n>> Giuseppe.\n>>\n>> --\n>> Giuseppe Broccolo - 2ndQuadrant Italy\n>> PostgreSQL Training, Services and [email protected] | www.2ndQuadrant.it\n>>\n>>\n>\n\nOn Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos <[email protected]> wrote:\nThank you for your replies so far.The DB in question is Postgres+ 9.2 running inside a VM with the following specs:\n16 CPUs (dedicated to the VM)60G RAMRAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs for each.\nWe have 3 kind of queries:- The vast majority of the queries are small SELECT/INSERT/UPDATEs which are part of distributed transactions- A few small ones, which are mostly SELECTs\n- A few bulk loads, where we add 100k - 1M of rows in tablesOur settings are:shared_buffers: 8Gwork_mem: 12Mcheckpoint_segments: 64\nAutovacuum is somewhat aggressive, as our data changes quite often and without it the planner was completely off.Right now we use: autovacuum_analyze_scale_factor: 0.1 \n autovacuum_analyze_threshold: 50 autovacuum_freeze_max_age: 200000000 autovacuum_max_workers: 12 autovacuum_naptime: 10s autovacuum_vacuum_cost_delay: 20ms \n autovacuum_vacuum_cost_limit: -1 autovacuum_vacuum_scale_factor: 0.2 autovacuum_vacuum_threshold: 50 settings look ok, except vacuum and analyze threshold that is in my opinion too agressive (500 would make more sense) and workers at 6 you haven't mentioned wal_buffers and effective_io_concurrency settings but i dont think that it would make much of a difference\n At high-peak hour, the disk utilization for the pgdata mountpoint is:\n00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28 95.09 0.11 86.11\n13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20 59.94 0.15 82.3013:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95 125.38 0.33 90.73\n13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12 88.57 0.20 68.1214:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48 44.09 0.19 100.05\n14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52 65.64 0.21 104.5514:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81 
134.32 0.20 104.92\n14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53 76.13 0.15 65.2514:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32 54.26 0.20 97.51\n14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41 16.13 0.24 76.17assuming that sector = 512 bytes, it means that your san makes 20mb/sec read which if its not totally random-reads is quite low,\ni would start from there, make tests to see if everything works ok, (bonnie++, dd , etc) and if you are getting the numbers you are supposed to \nand for pgarchives:00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util\n13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05 165.94 0.02 4.3213:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17 41.11 0.08 12.02\n13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75 21.40 0.08 6.9913:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40 23.76 0.01 1.69\n14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75 26.95 0.01 1.6114:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86 78.97 0.08 14.46\n14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17 68.42 0.15 24.7114:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56 78.89 0.08 13.61\n14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38 24.76 0.01 1.4014:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91 23.59 0.02 2.93\nOn Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <[email protected]> wrote:\n\n\nHi,\n\n Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n\n\n\nIn\n the asynchronous commit documentation, it says:\n\n\nThe\n commands supporting two-phase commit, such as PREPARE\n TRANSACTION,\n are also always synchronous\n\n\n\nDoes\n this mean that all queries that are part of a distributed\n transaction are synchronous?\n\n\nIn\n our databases we have extremely high disk I/O, I'm\n wondering if distributed transactions may be the reason\n behind it.\n\n\n\n Distributed transactions are base on two-phase-commit (2PC)\n algorithms for ensuring correct transaction completion, so are\n synchronous. \n However, I think this is not the main reason behind your extremely\n high disk I/O. You should check if your system is properly tuned to\n get the best performances. \n First of all, you could take a look on your PostgreSQL\n configurations, and check if shared_memory is set properly taking\n into account your RAM availability. The conservative PostgreSQL\n default value is 24 MB, forcing system to exploit many disk I/O\n resources.\n Aside from this, you could take a look if autovacuum is often\n triggered (generating a large amount of I/O) in case of large use of\n updates/inserts in your database.\n\n Regards,\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Wed, 17 Jul 2013 12:21:06 +0100",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
},
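A back-of-the-envelope conversion of the sar figures quoted above, assuming sar's standard 512-byte sectors:

    -- reads : 40772.03 rd_sec/s * 512 bytes  ≈ 20.9 MB/s  (the 14:20 peak)
    -- writes: 56657.95 wr_sec/s * 512 bytes  ≈ 29.0 MB/s  (the 13:20 peak)

which is where the "20 MB/sec read" estimate comes from.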
{
"msg_contents": "On Wed, Jul 17, 2013 at 12:21 PM, Vasilis Ventirozos <[email protected]\n> wrote:\n\n>\n>\n>\n> On Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos <[email protected]>wrote:\n>\n>> Thank you for your replies so far.\n>> The DB in question is Postgres+ 9.2 running inside a VM with the\n>> following specs:\n>>\n>> 16 CPUs (dedicated to the VM)\n>> 60G RAM\n>> RAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs\n>> for each.\n>>\n>> We have 3 kind of queries:\n>>\n>> - The vast majority of the queries are small SELECT/INSERT/UPDATEs which\n>> are part of distributed transactions\n>> - A few small ones, which are mostly SELECTs\n>> - A few bulk loads, where we add 100k - 1M of rows in tables\n>>\n>> Our settings are:\n>>\n>> shared_buffers: 8G\n>> work_mem: 12M\n>> checkpoint_segments: 64\n>>\n>> Autovacuum is somewhat aggressive, as our data changes quite often and\n>> without it the planner was completely off.\n>> Right now we use:\n>>\n>> autovacuum_analyze_scale_factor: 0.1\n>> autovacuum_analyze_threshold: 50\n>> autovacuum_freeze_max_age: 200000000\n>> autovacuum_max_workers: 12\n>> autovacuum_naptime: 10s\n>> autovacuum_vacuum_cost_delay: 20ms\n>> autovacuum_vacuum_cost_limit: -1\n>> autovacuum_vacuum_scale_factor: 0.2\n>> autovacuum_vacuum_threshold: 50\n>>\n>\n> settings look ok, except vacuum and analyze threshold that is in my\n> opinion too agressive (500 would make more sense) and workers at 6 you\n> haven't mentioned wal_buffers and effective_io_concurrency settings but i\n> dont think that it would make much of a difference\n>\n>>\n>>\n>> At high-peak hour, the disk utilization for the pgdata mountpoint is:\n>>\n>> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n>> await svctm %util*\n>> 13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28\n>> 95.09 0.11 86.11\n>> 13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20\n>> 59.94 0.15 82.30\n>> 13:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95\n>> 125.38 0.33 90.73\n>> 13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12\n>> 88.57 0.20 68.12\n>> 14:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48\n>> 44.09 0.19 100.05\n>> 14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52\n>> 65.64 0.21 104.55\n>> 14:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81\n>> 134.32 0.20 104.92\n>> 14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53\n>> 76.13 0.15 65.25\n>> 14:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32\n>> 54.26 0.20 97.51\n>> 14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41\n>> 16.13 0.24 76.17\n>>\n>\n> assuming that sector = 512 bytes, it means that your san makes 20mb/sec\n> read which if its not totally random-reads is quite low,\n> i would start from there, make tests to see if everything works ok,\n> (bonnie++, dd , etc) and if you are getting the numbers you are supposed to\n>\n\ni would also check for index / table bloat, here's a script that it would\ndo that for you\nhttp://labs.omniti.com/labs/pgtreats/browser/trunk/tools/pg_bloat_report.pl\n\n\n> and for pgarchives:\n>>\n>> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n>> await svctm %util*\n>> 13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05\n>> 165.94 0.02 4.32\n>> 13:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17\n>> 41.11 0.08 12.02\n>> 13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75\n>> 21.40 0.08 6.99\n>> 13:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40\n>> 23.76 0.01 1.69\n>> 14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75\n>> 26.95 0.01 
1.61\n>> 14:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86\n>> 78.97 0.08 14.46\n>> 14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17\n>> 68.42 0.15 24.71\n>> 14:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56\n>> 78.89 0.08 13.61\n>> 14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38\n>> 24.76 0.01 1.40\n>> 14:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91\n>> 23.59 0.02 2.93\n>>\n>>\n>>\n>>\n>> On Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <\n>> [email protected]> wrote:\n>>\n>>> Hi,\n>>>\n>>> Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n>>>\n>>> In the asynchronous commit documentation, it says:\n>>>\n>>> *The commands supporting two-phase commit, such as PREPARE TRANSACTION,\n>>> are also always synchronous\n>>> *\n>>>\n>>> Does this mean that all queries that are part of a distributed\n>>> transaction are synchronous?\n>>>\n>>> In our databases we have extremely high disk I/O, I'm wondering if\n>>> distributed transactions may be the reason behind it.\n>>>\n>>>\n>>> Distributed transactions are base on two-phase-commit (2PC) algorithms\n>>> for ensuring correct transaction completion, so are synchronous.\n>>> However, I think this is not the main reason behind your extremely high\n>>> disk I/O. You should check if your system is properly tuned to get the best\n>>> performances.\n>>> First of all, you could take a look on your PostgreSQL configurations,\n>>> and check if shared_memory is set properly taking into account your RAM\n>>> availability. The conservative PostgreSQL default value is 24 MB, forcing\n>>> system to exploit many disk I/O resources.\n>>> Aside from this, you could take a look if autovacuum is often triggered\n>>> (generating a large amount of I/O) in case of large use of updates/inserts\n>>> in your database.\n>>>\n>>> Regards,\n>>>\n>>> Giuseppe.\n>>>\n>>> --\n>>> Giuseppe Broccolo - 2ndQuadrant Italy\n>>> PostgreSQL Training, Services and [email protected] | www.2ndQuadrant.it\n>>>\n>>>\n>>\n>\n\nOn Wed, Jul 17, 2013 at 12:21 PM, Vasilis Ventirozos <[email protected]> wrote:\nOn Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos <[email protected]> wrote:\nThank you for your replies so far.The DB in question is Postgres+ 9.2 running inside a VM with the following specs:\n16 CPUs (dedicated to the VM)60G RAMRAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs for each.\nWe have 3 kind of queries:- The vast majority of the queries are small SELECT/INSERT/UPDATEs which are part of distributed transactions- A few small ones, which are mostly SELECTs\n- A few bulk loads, where we add 100k - 1M of rows in tablesOur settings are:shared_buffers: 8Gwork_mem: 12Mcheckpoint_segments: 64\nAutovacuum is somewhat aggressive, as our data changes quite often and without it the planner was completely off.Right now we use: autovacuum_analyze_scale_factor: 0.1 \n autovacuum_analyze_threshold: 50 autovacuum_freeze_max_age: 200000000 autovacuum_max_workers: 12 autovacuum_naptime: 10s autovacuum_vacuum_cost_delay: 20ms \n autovacuum_vacuum_cost_limit: -1 autovacuum_vacuum_scale_factor: 0.2 autovacuum_vacuum_threshold: 50 settings look ok, except vacuum and analyze threshold that is in my opinion too agressive (500 would make more sense) and workers at 6 you haven't mentioned wal_buffers and effective_io_concurrency settings but i dont think that it would make much of a difference\n At high-peak hour, the disk utilization for the pgdata mountpoint is:\n00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm 
%util13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28 95.09 0.11 86.11\n13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20 59.94 0.15 82.3013:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95 125.38 0.33 90.73\n13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12 88.57 0.20 68.1214:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48 44.09 0.19 100.05\n14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52 65.64 0.21 104.5514:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81 134.32 0.20 104.92\n14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53 76.13 0.15 65.2514:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32 54.26 0.20 97.51\n14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41 16.13 0.24 76.17assuming that sector = 512 bytes, it means that your san makes 20mb/sec read which if its not totally random-reads is quite low,\ni would start from there, make tests to see if everything works ok, (bonnie++, dd , etc) and if you are getting the numbers you are supposed toi would also check for index / table bloat, here's a script that it would do that for you\nhttp://labs.omniti.com/labs/pgtreats/browser/trunk/tools/pg_bloat_report.pl \n\nand for pgarchives:00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util\n13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05 165.94 0.02 4.3213:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17 41.11 0.08 12.02\n13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75 21.40 0.08 6.9913:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40 23.76 0.01 1.69\n14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75 26.95 0.01 1.6114:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86 78.97 0.08 14.46\n14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17 68.42 0.15 24.7114:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56 78.89 0.08 13.61\n14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38 24.76 0.01 1.4014:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91 23.59 0.02 2.93\nOn Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <[email protected]> wrote:\n\n\nHi,\n\n Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n\n\n\nIn\n the asynchronous commit documentation, it says:\n\n\nThe\n commands supporting two-phase commit, such as PREPARE\n TRANSACTION,\n are also always synchronous\n\n\n\nDoes\n this mean that all queries that are part of a distributed\n transaction are synchronous?\n\n\nIn\n our databases we have extremely high disk I/O, I'm\n wondering if distributed transactions may be the reason\n behind it.\n\n\n\n Distributed transactions are base on two-phase-commit (2PC)\n algorithms for ensuring correct transaction completion, so are\n synchronous. \n However, I think this is not the main reason behind your extremely\n high disk I/O. You should check if your system is properly tuned to\n get the best performances. \n First of all, you could take a look on your PostgreSQL\n configurations, and check if shared_memory is set properly taking\n into account your RAM availability. The conservative PostgreSQL\n default value is 24 MB, forcing system to exploit many disk I/O\n resources.\n Aside from this, you could take a look if autovacuum is often\n triggered (generating a large amount of I/O) in case of large use of\n updates/inserts in your database.\n\n Regards,\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Wed, 17 Jul 2013 12:26:51 +0100",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
},
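If running the full report script is inconvenient, a rough first pass using only pg_stat_user_tables (a sketch; not a substitute for the script linked above):

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS pct_dead
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;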
{
"msg_contents": "wal_buffers: 32M\neffective_io_concurrency: 4\n\nThere is no bloat.\nNote that we are using Postgres inside a VM, there is a VMFS layer on top\nof the LUNs which might affect the performance. That said, we're still\nwondering if this much I/O is normal and if we can somehow reduce it.\nEnabling async commits in a DB without distributed transactions resulted to\na huge decrease in I/O, here there was almost no effect.\n\n\n\nOn Wed, Jul 17, 2013 at 2:21 PM, Vasilis Ventirozos\n<[email protected]>wrote:\n\n>\n>\n>\n> On Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos <[email protected]>wrote:\n>\n>> Thank you for your replies so far.\n>> The DB in question is Postgres+ 9.2 running inside a VM with the\n>> following specs:\n>>\n>> 16 CPUs (dedicated to the VM)\n>> 60G RAM\n>> RAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs\n>> for each.\n>>\n>> We have 3 kind of queries:\n>>\n>> - The vast majority of the queries are small SELECT/INSERT/UPDATEs which\n>> are part of distributed transactions\n>> - A few small ones, which are mostly SELECTs\n>> - A few bulk loads, where we add 100k - 1M of rows in tables\n>>\n>> Our settings are:\n>>\n>> shared_buffers: 8G\n>> work_mem: 12M\n>> checkpoint_segments: 64\n>>\n>> Autovacuum is somewhat aggressive, as our data changes quite often and\n>> without it the planner was completely off.\n>> Right now we use:\n>>\n>> autovacuum_analyze_scale_factor: 0.1\n>> autovacuum_analyze_threshold: 50\n>> autovacuum_freeze_max_age: 200000000\n>> autovacuum_max_workers: 12\n>> autovacuum_naptime: 10s\n>> autovacuum_vacuum_cost_delay: 20ms\n>> autovacuum_vacuum_cost_limit: -1\n>> autovacuum_vacuum_scale_factor: 0.2\n>> autovacuum_vacuum_threshold: 50\n>>\n>\n> settings look ok, except vacuum and analyze threshold that is in my\n> opinion too agressive (500 would make more sense) and workers at 6 you\n> haven't mentioned wal_buffers and effective_io_concurrency settings but i\n> dont think that it would make much of a difference\n>\n>>\n>>\n>> At high-peak hour, the disk utilization for the pgdata mountpoint is:\n>>\n>> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n>> await svctm %util*\n>> 13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28\n>> 95.09 0.11 86.11\n>> 13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20\n>> 59.94 0.15 82.30\n>> 13:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95\n>> 125.38 0.33 90.73\n>> 13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12\n>> 88.57 0.20 68.12\n>> 14:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48\n>> 44.09 0.19 100.05\n>> 14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52\n>> 65.64 0.21 104.55\n>> 14:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81\n>> 134.32 0.20 104.92\n>> 14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53\n>> 76.13 0.15 65.25\n>> 14:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32\n>> 54.26 0.20 97.51\n>> 14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41\n>> 16.13 0.24 76.17\n>>\n>\n> assuming that sector = 512 bytes, it means that your san makes 20mb/sec\n> read which if its not totally random-reads is quite low,\n> i would start from there, make tests to see if everything works ok,\n> (bonnie++, dd , etc) and if you are getting the numbers you are supposed to\n>\n>\n>> and for pgarchives:\n>>\n>> *00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz\n>> await svctm %util*\n>> 13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05\n>> 165.94 0.02 4.32\n>> 13:30:01 dev253-3 1559.87 11159.45 12120.99 
14.92 64.17\n>> 41.11 0.08 12.02\n>> 13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75\n>> 21.40 0.08 6.99\n>> 13:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40\n>> 23.76 0.01 1.69\n>> 14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75\n>> 26.95 0.01 1.61\n>> 14:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86\n>> 78.97 0.08 14.46\n>> 14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17\n>> 68.42 0.15 24.71\n>> 14:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56\n>> 78.89 0.08 13.61\n>> 14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38\n>> 24.76 0.01 1.40\n>> 14:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91\n>> 23.59 0.02 2.93\n>>\n>>\n>>\n>>\n>> On Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo <\n>> [email protected]> wrote:\n>>\n>>> Hi,\n>>>\n>>> Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n>>>\n>>> In the asynchronous commit documentation, it says:\n>>>\n>>> *The commands supporting two-phase commit, such as PREPARE TRANSACTION,\n>>> are also always synchronous\n>>> *\n>>>\n>>> Does this mean that all queries that are part of a distributed\n>>> transaction are synchronous?\n>>>\n>>> In our databases we have extremely high disk I/O, I'm wondering if\n>>> distributed transactions may be the reason behind it.\n>>>\n>>>\n>>> Distributed transactions are base on two-phase-commit (2PC) algorithms\n>>> for ensuring correct transaction completion, so are synchronous.\n>>> However, I think this is not the main reason behind your extremely high\n>>> disk I/O. You should check if your system is properly tuned to get the best\n>>> performances.\n>>> First of all, you could take a look on your PostgreSQL configurations,\n>>> and check if shared_memory is set properly taking into account your RAM\n>>> availability. The conservative PostgreSQL default value is 24 MB, forcing\n>>> system to exploit many disk I/O resources.\n>>> Aside from this, you could take a look if autovacuum is often triggered\n>>> (generating a large amount of I/O) in case of large use of updates/inserts\n>>> in your database.\n>>>\n>>> Regards,\n>>>\n>>> Giuseppe.\n>>>\n>>> --\n>>> Giuseppe Broccolo - 2ndQuadrant Italy\n>>> PostgreSQL Training, Services and [email protected] | www.2ndQuadrant.it\n>>>\n>>>\n>>\n>\n\nwal_buffers: 32Meffective_io_concurrency: 4There is no bloat.Note that we are using Postgres inside a VM, there is a VMFS layer on top of the LUNs which might affect the performance. That said, we're still wondering if this much I/O is normal and if we can somehow reduce it. 
Enabling async commits in a DB without distributed transactions resulted to a huge decrease in I/O, here there was almost no effect.\nOn Wed, Jul 17, 2013 at 2:21 PM, Vasilis Ventirozos <[email protected]> wrote:\nOn Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos <[email protected]> wrote:\nThank you for your replies so far.The DB in question is Postgres+ 9.2 running inside a VM with the following specs:\n16 CPUs (dedicated to the VM)60G RAMRAID-10 storage on a SAN for pgdata and pgarchieves, using different LUNs for each.\nWe have 3 kind of queries:- The vast majority of the queries are small SELECT/INSERT/UPDATEs which are part of distributed transactions- A few small ones, which are mostly SELECTs\n- A few bulk loads, where we add 100k - 1M of rows in tablesOur settings are:shared_buffers: 8Gwork_mem: 12Mcheckpoint_segments: 64\nAutovacuum is somewhat aggressive, as our data changes quite often and without it the planner was completely off.Right now we use: autovacuum_analyze_scale_factor: 0.1 \n autovacuum_analyze_threshold: 50 autovacuum_freeze_max_age: 200000000 autovacuum_max_workers: 12 autovacuum_naptime: 10s autovacuum_vacuum_cost_delay: 20ms \n autovacuum_vacuum_cost_limit: -1 autovacuum_vacuum_scale_factor: 0.2 autovacuum_vacuum_threshold: 50 settings look ok, except vacuum and analyze threshold that is in my opinion too agressive (500 would make more sense) and workers at 6 you haven't mentioned wal_buffers and effective_io_concurrency settings but i dont think that it would make much of a difference\n At high-peak hour, the disk utilization for the pgdata mountpoint is:\n00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util13:20:01 dev253-2 7711.62 24166.97 56657.95 10.48 735.28 95.09 0.11 86.11\n13:30:01 dev253-2 5340.88 19465.30 39133.32 10.97 319.20 59.94 0.15 82.3013:40:01 dev253-2 2791.02 13061.76 19330.40 11.61 349.95 125.38 0.33 90.73\n13:50:01 dev253-2 3478.69 10503.84 25505.27 10.35 308.12 88.57 0.20 68.1214:00:01 dev253-2 5269.12 33613.43 35830.13 13.18 232.48 44.09 0.19 100.05\n14:10:01 dev253-2 4910.24 21767.22 33970.96 11.35 322.52 65.64 0.21 104.5514:20:02 dev253-2 5358.95 40772.03 33682.46 13.89 721.81 134.32 0.20 104.92\n14:30:01 dev253-2 4420.51 17256.16 33315.27 11.44 336.53 76.13 0.15 65.2514:40:02 dev253-2 4884.13 28439.26 31604.76 12.29 265.32 54.26 0.20 97.51\n14:50:01 dev253-2 3124.91 8077.46 22511.59 9.79 50.41 16.13 0.24 76.17assuming that sector = 512 bytes, it means that your san makes 20mb/sec read which if its not totally random-reads is quite low,\ni would start from there, make tests to see if everything works ok, (bonnie++, dd , etc) and if you are getting the numbers you are supposed to \nand for pgarchives:00:00:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util\n13:20:01 dev253-3 2802.25 0.69 22417.32 8.00 465.05 165.94 0.02 4.3213:30:01 dev253-3 1559.87 11159.45 12120.99 14.92 64.17 41.11 0.08 12.02\n13:40:01 dev253-3 922.62 8066.62 7129.15 16.47 19.75 21.40 0.08 6.9913:50:01 dev253-3 1194.81 895.34 9524.53 8.72 28.40 23.76 0.01 1.69\n14:00:01 dev253-3 1919.12 0.46 15352.49 8.00 51.75 26.95 0.01 1.6114:10:01 dev253-3 1770.59 9286.61 13873.79 13.08 139.86 78.97 0.08 14.46\n14:20:02 dev253-3 1595.04 11810.63 12389.08 15.17 109.17 68.42 0.15 24.7114:30:01 dev253-3 1793.71 12173.88 13957.79 14.57 141.56 78.89 0.08 13.61\n14:40:02 dev253-3 1751.62 0.43 14012.53 8.00 43.38 24.76 0.01 1.4014:50:01 dev253-3 1351.72 3225.19 10707.29 10.31 31.91 23.59 0.02 2.93\nOn Wed, Jul 17, 2013 at 1:09 PM, Giuseppe Broccolo 
<[email protected]> wrote:\n\n\nHi,\n\n Il 17/07/2013 09:18, Xenofon Papadopoulos ha scritto:\n\n\n\nIn\n the asynchronous commit documentation, it says:\n\n\nThe\n commands supporting two-phase commit, such as PREPARE\n TRANSACTION,\n are also always synchronous\n\n\n\nDoes\n this mean that all queries that are part of a distributed\n transaction are synchronous?\n\n\nIn\n our databases we have extremely high disk I/O, I'm\n wondering if distributed transactions may be the reason\n behind it.\n\n\n\n Distributed transactions are base on two-phase-commit (2PC)\n algorithms for ensuring correct transaction completion, so are\n synchronous. \n However, I think this is not the main reason behind your extremely\n high disk I/O. You should check if your system is properly tuned to\n get the best performances. \n First of all, you could take a look on your PostgreSQL\n configurations, and check if shared_memory is set properly taking\n into account your RAM availability. The conservative PostgreSQL\n default value is 24 MB, forcing system to exploit many disk I/O\n resources.\n Aside from this, you could take a look if autovacuum is often\n triggered (generating a large amount of I/O) in case of large use of\n updates/inserts in your database.\n\n Regards,\n\n Giuseppe.\n\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Wed, 17 Jul 2013 15:18:18 +0300",
"msg_from": "Xenofon Papadopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Distributed transactions and asynchronous commit"
}
] |
[
{
"msg_contents": "I have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other uses an index scan. If I try to run the Seq Scan version without the where clause restricting the value of uniqueid, it uses all of the memory on my computer and never completes.\n\nHow can I get the Seq Scan version to use an index scan?\n\nExplain results - good version:\n\"GroupAggregate (cost=0.00..173.78 rows=1 width=15)\"\n\" -> Index Scan using pubcoop_ext_idx1 on pubcoop_ext (cost=0.00..173.77 rows=1 width=15)\"\n\" Index Cond: (uniqueid < '000000009'::bpchar)\"\n\nExplain results - problem version:\n\"HashAggregate (cost=13540397.84..13540398.51 rows=67 width=18)\"\n\" -> Seq Scan on pubcoop_ext (cost=0.00..13360259.50 rows=36027667 width=18)\"\n\" Filter: (uniqueid < '000000009'::bpchar)\"\n\n\nThanks,\nEllen\n\n\n\n\n\n\n\n\n\nI have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other uses an index scan. If I try to run the Seq Scan version without\n the where clause restricting the value of uniqueid, it uses all of the memory on my computer and never completes.\n\n \nHow can I get the Seq Scan version to use an index scan?\n \nExplain results – good version:\n\"GroupAggregate (cost=0.00..173.78 rows=1 width=15)\"\n\" -> Index Scan using pubcoop_ext_idx1 on pubcoop_ext (cost=0.00..173.77 rows=1 width=15)\"\n\" Index Cond: (uniqueid < '000000009'::bpchar)\"\n \nExplain results – problem version:\n\"HashAggregate (cost=13540397.84..13540398.51 rows=67 width=18)\"\n\" -> Seq Scan on pubcoop_ext (cost=0.00..13360259.50 rows=36027667 width=18)\"\n\" Filter: (uniqueid < '000000009'::bpchar)\"\n \n \nThanks,\nEllen",
"msg_date": "Wed, 17 Jul 2013 19:50:06 +0000",
"msg_from": "Ellen Rothman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seq Scan vs Index on Identical Tables in Two Different Databases"
},
{
"msg_contents": "On Wed, Jul 17, 2013 at 12:50 PM, Ellen Rothman\n<[email protected]>wrote:\n\n> I have the same table definition in two different databases on the same\n> computer. When I explain a simple query in both of them, one database uses\n> a sequence scan and the other uses an index scan. If I try to run the Seq\n> Scan version without the where clause restricting the value of uniqueid, it\n> uses all of the memory on my computer and never completes. ****\n>\n> ** **\n>\n> How can I get the Seq Scan version to use an index scan?\n>\n\nDid you run \"ANALYZE your-table-name\" before trying the sequential scan\nquery?\n\nOn Wed, Jul 17, 2013 at 12:50 PM, Ellen Rothman <[email protected]> wrote:\n\n\n\nI have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other uses an index scan. If I try to run the Seq Scan version without\n the where clause restricting the value of uniqueid, it uses all of the memory on my computer and never completes.\n\n \nHow can I get the Seq Scan version to use an index scan?Did you run \"ANALYZE your-table-name\" before trying the sequential scan query?",
"msg_date": "Wed, 17 Jul 2013 13:11:42 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan vs Index on Identical Tables in Two Different Databases"
},
{
"msg_contents": "Ellen Rothman wrote\n> I have the same table definition in two different databases on the same\n> computer. \n\nYou really should prove this to us by running schema commands on the table\nand providing results.\n\nAlso, version information has not been provided and you do not state whether\nthe databases are the same as well as tables. And, do those tables have\nidentical data or just structure?\n\n\n> When I explain a simple query in both of them, one database uses a\n> sequence scan and the other uses an index scan.\n\nCorrupt index maybe? Or back to the first point maybe there isn't one.\n\n\n> If I try to run the Seq Scan version without the where clause restricting\n> the value of uniqueid, it uses all of the memory on my computer and never\n> completes.\n\nHow are you running this and how are you defining \"never completes\"?\n\nCan you run this but with a limit clause so your client (and the database)\ndoes not try to display 3 millions rows of data?\n\n\n> How can I get the Seq Scan version to use an index scan?\n\nRe-Index (or drop/create even)\n\nAlso, you should always try to provide actual queries and not just explains. \nSince you are getting \"Aggregate\" nodes you obviously aren't running a\nsimple \"SELECT * FROM publcoop_ext [WHERE ...]\".\n\nIdeally you can also provide a self-contained test case. though your\nscenario seems simple enough that either:\n\n1) You didn't run analyze\n2) Your table and/or index is corrupt\n3) You do not actually have an index on the table even though you claim they\nare the same\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Seq-Scan-vs-Index-on-Identical-Tables-in-Two-Different-Databases-tp5764125p5764143.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Jul 2013 14:14:45 -0700 (PDT)",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan vs Index on Identical Tables in Two Different Databases"
},
{
"msg_contents": "On Wed, Jul 17, 2013 at 07:50:06PM +0000, Ellen Rothman wrote:\n- I have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other uses an index scan. If I try to run the Seq Scan version without the where clause restricting the value of uniqueid, it uses all of the memory on my computer and never completes.\n- \n- How can I get the Seq Scan version to use an index scan?\n- \n- Explain results - good version:\n- \"GroupAggregate (cost=0.00..173.78 rows=1 width=15)\"\n- \" -> Index Scan using pubcoop_ext_idx1 on pubcoop_ext (cost=0.00..173.77 rows=1 width=15)\"\n- \" Index Cond: (uniqueid < '000000009'::bpchar)\"\n- \n- Explain results - problem version:\n- \"HashAggregate (cost=13540397.84..13540398.51 rows=67 width=18)\"\n- \" -> Seq Scan on pubcoop_ext (cost=0.00..13360259.50 rows=36027667 width=18)\"\n- \" Filter: (uniqueid < '000000009'::bpchar)\"\n\n(Assuming that your postgresql.conf is the same across both systems and that\nyou've run vanilla analyze against each table... )\n\nI ran into a similar problem before and it revolved around the somewhat random nature of\na vaccum analyze. To solve the problem i increased the statistics_target for the table\non the box that was performing poorly and ran analyze.\n\nI believe that worked because basically the default_statistics_taget of 100 wasn't\ncatching enough info about that record range to make an index appealing to the optimizer\non the new box where the old box it was.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Jul 2013 14:40:00 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan vs Index on Identical Tables in Two Different Databases"
},
{
"msg_contents": "I guess not. I usually vacuum with the analyze option box checked; I must have missed that this cycle.\r\n\r\nIt looks much better now.\r\n\r\nThanks!\r\n\r\n\r\n\r\nFrom: bricklen [mailto:[email protected]]\r\nSent: Wednesday, July 17, 2013 4:12 PM\r\nTo: Ellen Rothman\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Seq Scan vs Index on Identical Tables in Two Different Databases\r\n\r\n\r\nOn Wed, Jul 17, 2013 at 12:50 PM, Ellen Rothman <[email protected]<mailto:[email protected]>> wrote:\r\nI have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other uses an index scan. If I try to run the Seq Scan version without the where clause restricting the value of uniqueid, it uses all of the memory on my computer and never completes.\r\n\r\nHow can I get the Seq Scan version to use an index scan?\r\n\r\nDid you run \"ANALYZE your-table-name\" before trying the sequential scan query?\r\n________________________________\r\nNo virus found in this message.\r\nChecked by AVG - www.avg.com<http://www.avg.com>\r\nVersion: 2013.0.3349 / Virus Database: 3204/6483 - Release Date: 07/11/13\r\n\n\n\n\n\n\n\n\n\nI guess not. I usually vacuum with the analyze option box checked; I must have missed that this cycle.\n \nIt looks much better now.\r\n\n \nThanks!\n \n \n \nFrom: bricklen [mailto:[email protected]]\r\n\nSent: Wednesday, July 17, 2013 4:12 PM\nTo: Ellen Rothman\nCc: [email protected]\nSubject: Re: [PERFORM] Seq Scan vs Index on Identical Tables in Two Different Databases\n \n\n\n \n\nOn Wed, Jul 17, 2013 at 12:50 PM, Ellen Rothman <[email protected]> wrote:\n\n\nI have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other\r\n uses an index scan. If I try to run the Seq Scan version without the where clause restricting the value of uniqueid, it uses all of the memory on my computer and never completes.\r\n\n \nHow can I get the Seq Scan version to use an index scan?\n\n\n\n \n\n\nDid you run \"ANALYZE your-table-name\" before trying the sequential scan query?\n\n\n\n\n\nNo virus found in this message.\r\nChecked by AVG - www.avg.com\r\nVersion: 2013.0.3349 / Virus Database: 3204/6483 - Release Date: 07/11/13",
"msg_date": "Wed, 17 Jul 2013 21:54:03 +0000",
"msg_from": "Ellen Rothman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seq Scan vs Index on Identical Tables in Two Different Databases"
}
] |
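A hedged sketch of the maintenance steps discussed in the thread above: refreshing planner statistics and, if estimates on the uniqueid column remain poor, raising that column's statistics target. The table and column names follow the poster's plans; the target value of 1000 is only an example.

ANALYZE pubcoop_ext;                                    -- refresh planner statistics

ALTER TABLE pubcoop_ext ALTER COLUMN uniqueid SET STATISTICS 1000;
ANALYZE pubcoop_ext;                                    -- re-sample with the larger target

SELECT relname, last_analyze, last_autoanalyze          -- confirm when statistics were last gathered
FROM pg_stat_user_tables
WHERE relname = 'pubcoop_ext';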
[
{
"msg_contents": "Greg, All:\n\nSo, I've been running some stats over some customer workloads who are\nhaving issues with checkpoint spikes, and I'm finding that the bgwriter\nis almost entirely ineffective for them:\n\npct_checkpoints_req | 33.0\navg_frequency_min | 2.30\navg_write_time_s | 112.17\navg_sync_time_s | 1.33\nmb_written | 2387369.6\nmb_written_per_min | 61.01\nmb_per_checkpoint | 82.27\npct_checkpoint_buffers | 58.6\npct_bgwriter_buffers | 0.3\npct_backend_buffers | 41.1\nbgwriter_halt_freq | 0.06\nbgwriter_halt_potential | 70.11\nbuffer_allocation_ratio | 1.466\n\n(query for the above is below)\n\nThe key metric I'm looking at is that the bgwriter only took care of\n0.3% of buffers. Yet average write throughput is around 1mb/s, and the\nbgwriter is capable of flushing 4mb/s, if it's waking up every 200ms.\n\nOf course, our first conclusion is that the writes are very bursty and\nthe bgwriter is frequently hitting lru_maxpages. In fact, that seems to\nbe the case, per bgwriter_halt_potential above (this is a measurement of\nthe % of the time the bgwriter halted vs if all buffer writes were done\nin one continuous session where it couldn't keep up). And from the raw\npg_stat_bgwriter:\n\nmaxwritten_clean | 6950\n\nSo we're halting because we hit lru_maxpages a *lot*, which is keeping\nthe bgwriter from keeping up. What gives?\n\nWell, digging into the docs, one thing I noticed was this note:\n\n\"It then sleeps for bgwriter_delay milliseconds, and repeats. When there\nare no dirty buffers in the buffer pool, though, it goes into a longer\nsleep regardless of bgwriter_delay.\"\n\ncombined with:\n\n\"The number of dirty buffers written in each round is based on the\nnumber of new buffers that have been needed by server processes during\nrecent rounds\"\n\n... so Greg built in the bgwriter autotuner with ramp-up/down behavior,\nwhere it sleeps longer and writes less if it hasn't been busy lately.\nBut given the stats I'm looking at, I'm wondering if that isn't too much\nof a double-whammy for people with bursty workloads.\n\nThat is, if you have several seconds of inactivity followed by a big\nwrite, then the bgwriter will wake up slowly (which, btw, is not\nmanually tunable), and then write very little when it does wake up, at\nleast in the first round.\n\nOf course, I may be misinterpreting the data in front of me ... I'm\ncurrently running a week-long test of raising lru_maxpages and\ndecreasing bgwriter_delay to see how it affects things ... 
but I wanted\nto discuss it on-list.\n\nBgwriter stats query follows:\n\nwith bgstats as (\n select checkpoints_timed,\n checkpoints_req,\n checkpoints_timed + checkpoints_req as checkpoints,\n checkpoint_sync_time,\n checkpoint_write_time,\n buffers_checkpoint,\n buffers_clean,\n maxwritten_clean,\n buffers_backend,\n buffers_backend_fsync,\n buffers_alloc,\n buffers_checkpoint + buffers_clean + buffers_backend as\ntotal_buffers,\n round(extract('epoch' from now() - stats_reset)/60)::numeric as\nmin_since_reset,\n lru.setting::numeric as bgwriter_maxpages,\n delay.setting::numeric as bgwriter_delay\n from pg_stat_bgwriter\n cross join pg_settings as lru\n cross join pg_settings as delay\n where lru.name = 'bgwriter_lru_maxpages'\n and delay.name = 'bgwriter_delay'\n)\nselect\n round(checkpoints_req*100/checkpoints,1) as pct_checkpoints_req,\n round(min_since_reset/checkpoints,2) as avg_frequency_min,\n round(checkpoint_write_time::numeric/(checkpoints*1000),2) as\navg_write_time_s,\n round(checkpoint_sync_time::numeric/(checkpoints*1000),2) as\navg_sync_time_s,\n round(total_buffers/128::numeric,1) as mb_written,\n round(total_buffers/(128 * min_since_reset),2) as mb_written_per_min,\n round(buffers_checkpoint/(checkpoints*128::numeric),2) as\nmb_per_checkpoint,\n round(buffers_checkpoint*100/total_buffers::numeric,1) as\npct_checkpoint_buffers,\n round(buffers_clean*100/total_buffers::numeric,1) as\npct_bgwriter_buffers,\n round(buffers_backend*100/total_buffers::numeric,1) as\npct_backend_buffers,\n\nround(maxwritten_clean*100::numeric/(min_since_reset*60000/bgwriter_delay),2)\nas bgwriter_halt_freq,\n\nround(maxwritten_clean*100::numeric/(buffers_clean/bgwriter_maxpages),2)\nas bgwriter_halt_potential,\n round(buffers_alloc::numeric/total_buffers,3) as buffer_allocation_ratio\nfrom bgstats;\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Jul 2013 15:46:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "bgwriter autotuning might be unnecessarily penalizing bursty\n workloads"
},
{
"msg_contents": "On 7/17/13 6:46 PM, Josh Berkus wrote:\n> The key metric I'm looking at is that the bgwriter only took care of\n> 0.3% of buffers.\n\nThere look to be a good number of buffers on this server that are only \nbeing written at checkpoint time. The background writer will only deal \nwith buffers when their usage count is low. Fast servers can cycle over \nshared_buffers such that as soon as their usage counts get low, they're \nimmediately reallocated by a hungry backend. You might try to quantify \nhow many buffers the BGW can possibly do something with using \npg_buffercache.\n\n> So we're halting because we hit lru_maxpages a *lot*, which is keeping\n> the bgwriter from keeping up. What gives?\n\n2007's defaults can be a bummer in 2013. I don't hesitate to bump that \nup to 500 on a server with decent hardware.\n\nIf it weren't for power savings concerns, I would set bgwriter_delay to \n10ms by default. With that change lru_maxpages should drop to 10 to \nhave the same behavior as the existing default.\n\n> \"It then sleeps for bgwriter_delay milliseconds, and repeats. When there\n> are no dirty buffers in the buffer pool, though, it goes into a longer\n> sleep regardless of bgwriter_delay.\"\n\nThat wording is from some power savings code added after I poked at \nthings. Look at src/backend/postmaster/bgwriter.c where it mentions \n\"electricity\" to see more about it. The longer sleeps are supposed to \nbe interrupted when backends do work, to respond to bursts. Maybe it \ncan get confused, I haven't looked at it that carefully.\n\nThe latching mechanism shouldn't need to be tunable if it works \ncorrectly, which is why there's no exposed knobs for it.\n\nSince I saw an idea I'm going to steal from your pg_stat_bgwriter query, \nI'll trade you one with a completely different spin on the data to \nharvest from:\n\nSELECT\n block_size::numeric * buffers_alloc / (1024 * 1024 * seconds) AS \nalloc_mbps,\n block_size::numeric * buffers_checkpoint / (1024 * 1024 * seconds) AS \ncheckpoint_mbps,\n block_size::numeric * buffers_clean / (1024 * 1024 * seconds) AS \nclean_mbps,\n block_size::numeric * buffers_backend/ (1024 * 1024 * seconds) AS \nbackend_mbps,\n block_size::numeric * (buffers_checkpoint + buffers_clean + \nbuffers_backend) / (1024 * 1024 * seconds) AS write_mbps\nFROM\n(\nSELECT now() AS sample,now() - stats_reset AS uptime,EXTRACT(EPOCH FROM \nnow()) - extract(EPOCH FROM stats_reset) AS seconds, \nb.*,p.setting::integer AS block_size FROM pg_stat_bgwriter b,pg_settings \np WHERE p.name='block_size'\n) bgw;\n\nThat only works on 9.1 and later where there is a stats_reset time \navailable on pg_stat_bgwriter. Sample from a busy system with \nmoderately tuned BGW and checkpoint_timeout at 15 minutes:\n\n-[ RECORD 1 ]---+-------------------\nalloc_mbps | 246.019686474412\ncheckpoint_mbps | 0.0621780475463596\nclean_mbps | 2.38631188442859\nbackend_mbps | 0.777490109599045\nwrite_mbps | 3.22598004157399\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Jul 2013 21:15:54 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter autotuning might be unnecessarily penalizing bursty\n workloads"
},
{
"msg_contents": "Greg,\n\n> There look to be a good number of buffers on this server that are only\n> being written at checkpoint time. The background writer will only deal\n> with buffers when their usage count is low. Fast servers can cycle over\n> shared_buffers such that as soon as their usage counts get low, they're\n> immediately reallocated by a hungry backend. You might try to quantify\n> how many buffers the BGW can possibly do something with using\n> pg_buffercache.\n\nYeah, that's going to be the next step.\n\n> 2007's defaults can be a bummer in 2013. I don't hesitate to bump that\n> up to 500 on a server with decent hardware.\n\nRight, that's what I just tested. The results are interesting. I\nchanged the defaults as follows:\n\nbgwriter_delay = 100ms\nbgwriter_lru_maxpages = 512\nbgwriter_lru_multiplier = 3.0\n\n... and the number of buffers being written by the bgwriter went *down*,\nalmost to zero. Mind you, I wanna gather a full week of data, but there\nseems to be something counterintuitive going on here.\n\nOne potential factor is that they have their shared_buffers set\nunusually high (5GB out of 16GB).\n\nHere's the stats:\n\npostgres=# select * from pg_stat_bgwriter;\n-[ RECORD 1 ]---------+------------------------------\ncheckpoints_timed | 330\ncheckpoints_req | 47\ncheckpoint_write_time | 55504727\ncheckpoint_sync_time | 286743\nbuffers_checkpoint | 2809031\nbuffers_clean | 789\nmaxwritten_clean | 0\nbuffers_backend | 457456\nbuffers_backend_fsync | 0\nbuffers_alloc | 943734\nstats_reset | 2013-07-17 17:09:18.945194-07\n\nSo we're not hitting maxpages anymore, at all. So why isn't the\nbgwriter doing any work?\n\n-[ RECORD 1 ]-----------+--------\npct_checkpoints_req | 12.0\navg_frequency_min | 2.78\navg_write_time_s | 146.91\navg_sync_time_s | 0.76\nmb_written | 25617.8\nmb_written_per_min | 24.42\nmb_per_checkpoint | 58.27\npct_checkpoint_buffers | 86.0\npct_bgwriter_buffers | 0.0\npct_backend_buffers | 14.0\nbgwriter_halt_freq | 0.00\nbgwriter_halt_potential | 0.00\nbuffer_allocation_ratio | 0.288\n\n\nAnd your query, with some rounding added:\n\n-[ RECORD 1 ]---+------\nalloc_mbps | 0.116\ncheckpoint_mbps | 0.340\nclean_mbps | 0.000\nbackend_mbps | 0.056\nwrite_mbps | 0.396\n\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jul 2013 10:40:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: bgwriter autotuning might be unnecessarily penalizing\n bursty workloads"
},
{
"msg_contents": "On Thu, Jul 18, 2013 at 10:40 AM, Josh Berkus <[email protected]> wrote:\n>\n> Right, that's what I just tested. The results are interesting. I\n> changed the defaults as follows:\n>\n> bgwriter_delay = 100ms\n> bgwriter_lru_maxpages = 512\n> bgwriter_lru_multiplier = 3.0\n>\n> ... and the number of buffers being written by the bgwriter went *down*,\n> almost to zero. Mind you, I wanna gather a full week of data, but there\n> seems to be something counterintuitive going on here.\n>\n> One potential factor is that they have their shared_buffers set\n> unusually high (5GB out of 16GB).\n>\n> Here's the stats:\n>\n> postgres=# select * from pg_stat_bgwriter;\n> -[ RECORD 1 ]---------+------------------------------\n> checkpoints_timed | 330\n> checkpoints_req | 47\n> checkpoint_write_time | 55504727\n> checkpoint_sync_time | 286743\n> buffers_checkpoint | 2809031\n> buffers_clean | 789\n> maxwritten_clean | 0\n> buffers_backend | 457456\n> buffers_backend_fsync | 0\n> buffers_alloc | 943734\n> stats_reset | 2013-07-17 17:09:18.945194-07\n>\n> So we're not hitting maxpages anymore, at all. So why isn't the\n> bgwriter doing any work?\n\n\nDoes their workload have a lot of bulk operations, which use a\nring-buffer strategy and so intentionally evict their own buffers?\n\nDo you have a simple select * from pg_stat_bgwriter from the period\nbefore the change? You posted the query that does averaging and\naggregation, but I couldn't figure out how to backtrack from that to\nthe original numbers.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Jul 2013 12:24:25 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: bgwriter autotuning might be unnecessarily\n penalizing bursty workloads"
}
] |
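Following up on the pg_buffercache suggestion in the thread above, a minimal sketch that summarizes shared_buffers by usage count; buffers with low usage counts are the only ones the background writer can clean, so this shows how much work is actually available to it. It assumes the pg_buffercache contrib extension is installed.

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT usagecount,
       count(*) AS buffers,
       sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty_buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;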
[
{
"msg_contents": "Hi\n\nAt 2013/2/8 I wrote:\n> I have problems with the performance of FTS in a query like this:\n>\n> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n> plainto_tsquery('english', 'good');\n>\n> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n> The planner obviously always chooses table scan\n\nNow, I've identified (but only partially resolved) the issue: Here are\nmy comments:\n\nThats the query in question (see commented log below):\n\nselect id,title,left(content,100)\nfrom fulltextsearch\nwhere plainto_tsquery('pg_catalog.english','good') @@\nto_tsvector('pg_catalog.english',content);\n\nAfter having created the GIN index, the FTS query unexpectedly is fast\nbecause planner chooses \"Bitmap Index Scan\". After the index\nstatistics have been updated, the same query becomes slow. Only when\nusing the \"trick\" with the function in the WHERE clause. I think GIST\ndoes'nt change anything.\n\nselect id,title,left(content,100)\nfrom fulltextsearch, plainto_tsquery('pg_catalog.english','good') query\nwhere query @@ to_tsvector('pg_catalog.english',content);\n\n=> This hint should mentioned in the docs!\n\nThen, setting enable_seqscan to off makes original query fast again.\nBut that's a setting I want to avoid in a multi-user database.\nFinally, setting random_page_cost to 1 helps also - but I don't like\nthis setting neither.\n\n=> To me the planner should be updated to recognize immutable\nplainto_tsquery() function in the WHERE clause and choose \"Bitmap\nIndex Scan\" at the first place.\n\nWhat do you think?\n\nYours, Stefan\n\n\n----\nLets look at table fulltextsearch:\n\nmovies=# \\d fulltextsearch\n Table \"public.fulltextsearch\"\n Column | Type | Modifiers\n---------+---------+-------------------------------------------------------------\n id | integer | not null default nextval('fulltextsearch_id_seq'::regclass)\n docid | integer | default 0\n title | text |\n content | text | not null\n\nmovies=# CREATE INDEX fulltextsearch_gincontent ON fulltextsearch\nUSING gin(to_tsvector('pg_catalog.english',content));\n\nmovies=# SELECT * FROM pg_class c WHERE relname LIKE 'fullt%';\n oid | name | kind | tuples | pages |\nallvisible | toastrelid | hasindex\n--------+---------------------------+------+-------------+-------+------------+------------+----------\n 476289 | fulltextsearch | r | 27886 | 555 |\n 0 | 476293 | t\n 503080 | fulltextsearch_gincontent | i | 8.97135e+06 | 11133 |\n 0 | 0 | f\n 476296 | fulltextsearch_id_seq | S | 1 | 1 |\n 0 | 0 | f\n 503075 | fulltextsearch_pkey | i | 27886 | 79 |\n 0 | 0 | f\n(4 rows)\n\n=> fulltextsearch_gincontent has an arbitrary large number of tuples\n(statistics is wrong and not yet updated)\n\nmovies=#\nexplain (analyze,costs,timing,buffers)\nselect id,title,left(content,100)\nfrom fulltextsearch\nwhere plainto_tsquery('pg_catalog.english','good') @@\nto_tsvector('pg_catalog.english',content);\n=> Unexpectedly, the query is fast!\nSee query plan http://explain.depesz.com/s/ewn\n\nLet's update the statistics:\n\nmovies=# VACUUM ANALYZE VERBOSE fulltextsearch ;\n\nSELECT * FROM pg_class c WHERE relname LIKE 'fullt%';\n oid | name | kind | tuples | pages |\nallvisible | toastrelid | hasindex\n--------+---------------------------+------+--------+-------+------------+------------+----------\n 476289 | fulltextsearch | r | 27886 | 555 |\n555 | 476293 | t\n 503080 | fulltextsearch_gincontent | i | 27886 | 11133 |\n0 | 0 | f\n 476296 | fulltextsearch_id_seq | S | 1 | 1 |\n0 | 0 | f\n 503075 | 
fulltextsearch_pkey | i | 27886 | 79 |\n0 | 0 | f\n(4 rows)\n\n=> Now after having update statistics (see especially tuples of\nfulltextsearch_gincontent ) the original query is slow!\nSee query plan http://explain.depesz.com/s/MQ60\n\nNow, let's reformulate the original query and move the function call\nto plainto_tsquery to the FROM clause:\n\nmovies=# explain (analyze,costs,timing,buffers)\nselect id,title,left(content,100)\nfrom fulltextsearch, plainto_tsquery('pg_catalog.english','good') query\nwhere query @@ to_tsvector('pg_catalog.english',content);\n=> This special query is fast again! See query plan\nhttp://explain.depesz.com/s/FVT\n\nSetting enable_seqscan to off makes query fast again: See query plan\nhttp://explain.depesz.com/s/eOr\n\nFinally, setting random_page_cost to 1 helps also (default is 4):\n\nmovies=# set enable_seqscan to default;\nmovies=# set random_page_cost to 1.0;\n=> Query is fast. See query plan http://explain.depesz.com/s/M5Ke\n\n----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jul 2013 01:39:20 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "FTS performance issue - planner problem identified (but only\n partially resolved)"
},
{
"msg_contents": "Hi\n\nSorry, referring to GIST index in my mail before was no good idea.\n\nThe bottom line still is, that the query (as recommended by the docs)\nand the planner don't choose the index which makes it slow - unless\nthe original query...\n\n> select id,title,left(content,100)\n> from fulltextsearch\n> where plainto_tsquery('pg_catalog.english','good') @@\n> to_tsvector('pg_catalog.english',content);\n\nis reformulated by this\n\n> select id,title,left(content,100)\n> from fulltextsearch, plainto_tsquery('pg_catalog.english','good') query\n> where query @@\n> to_tsvector('pg_catalog.english',content);\n\n... using default values for enable_seqscan and set random_page_cost.\n\nYours, S.\n\n\n2013/7/19 Stefan Keller <[email protected]>:\n> Hi\n>\n> At 2013/2/8 I wrote:\n>> I have problems with the performance of FTS in a query like this:\n>>\n>> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n>> plainto_tsquery('english', 'good');\n>>\n>> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n>> The planner obviously always chooses table scan\n>\n> Now, I've identified (but only partially resolved) the issue: Here are\n> my comments:\n>\n> Thats the query in question (see commented log below):\n>\n> select id,title,left(content,100)\n> from fulltextsearch\n> where plainto_tsquery('pg_catalog.english','good') @@\n> to_tsvector('pg_catalog.english',content);\n>\n> After having created the GIN index, the FTS query unexpectedly is fast\n> because planner chooses \"Bitmap Index Scan\". After the index\n> statistics have been updated, the same query becomes slow. Only when\n> using the \"trick\" with the function in the WHERE clause. I think GIST\n> does'nt change anything.\n>\n> select id,title,left(content,100)\n> from fulltextsearch, plainto_tsquery('pg_catalog.english','good') query\n> where query @@ to_tsvector('pg_catalog.english',content);\n>\n> => This hint should mentioned in the docs!\n>\n> Then, setting enable_seqscan to off makes original query fast again.\n> But that's a setting I want to avoid in a multi-user database.\n> Finally, setting random_page_cost to 1 helps also - but I don't like\n> this setting neither.\n>\n> => To me the planner should be updated to recognize immutable\n> plainto_tsquery() function in the WHERE clause and choose \"Bitmap\n> Index Scan\" at the first place.\n>\n> What do you think?\n>\n> Yours, Stefan\n>\n>\n> ----\n> Lets look at table fulltextsearch:\n>\n> movies=# \\d fulltextsearch\n> Table \"public.fulltextsearch\"\n> Column | Type | Modifiers\n> ---------+---------+-------------------------------------------------------------\n> id | integer | not null default nextval('fulltextsearch_id_seq'::regclass)\n> docid | integer | default 0\n> title | text |\n> content | text | not null\n>\n> movies=# CREATE INDEX fulltextsearch_gincontent ON fulltextsearch\n> USING gin(to_tsvector('pg_catalog.english',content));\n>\n> movies=# SELECT * FROM pg_class c WHERE relname LIKE 'fullt%';\n> oid | name | kind | tuples | pages |\n> allvisible | toastrelid | hasindex\n> --------+---------------------------+------+-------------+-------+------------+------------+----------\n> 476289 | fulltextsearch | r | 27886 | 555 |\n> 0 | 476293 | t\n> 503080 | fulltextsearch_gincontent | i | 8.97135e+06 | 11133 |\n> 0 | 0 | f\n> 476296 | fulltextsearch_id_seq | S | 1 | 1 |\n> 0 | 0 | f\n> 503075 | fulltextsearch_pkey | i | 27886 | 79 |\n> 0 | 0 | f\n> (4 rows)\n>\n> => fulltextsearch_gincontent has an arbitrary large number of 
tuples\n> (statistics is wrong and not yet updated)\n>\n> movies=#\n> explain (analyze,costs,timing,buffers)\n> select id,title,left(content,100)\n> from fulltextsearch\n> where plainto_tsquery('pg_catalog.english','good') @@\n> to_tsvector('pg_catalog.english',content);\n> => Unexpectedly, the query is fast!\n> See query plan http://explain.depesz.com/s/ewn\n>\n> Let's update the statistics:\n>\n> movies=# VACUUM ANALYZE VERBOSE fulltextsearch ;\n>\n> SELECT * FROM pg_class c WHERE relname LIKE 'fullt%';\n> oid | name | kind | tuples | pages |\n> allvisible | toastrelid | hasindex\n> --------+---------------------------+------+--------+-------+------------+------------+----------\n> 476289 | fulltextsearch | r | 27886 | 555 |\n> 555 | 476293 | t\n> 503080 | fulltextsearch_gincontent | i | 27886 | 11133 |\n> 0 | 0 | f\n> 476296 | fulltextsearch_id_seq | S | 1 | 1 |\n> 0 | 0 | f\n> 503075 | fulltextsearch_pkey | i | 27886 | 79 |\n> 0 | 0 | f\n> (4 rows)\n>\n> => Now after having update statistics (see especially tuples of\n> fulltextsearch_gincontent ) the original query is slow!\n> See query plan http://explain.depesz.com/s/MQ60\n>\n> Now, let's reformulate the original query and move the function call\n> to plainto_tsquery to the FROM clause:\n>\n> movies=# explain (analyze,costs,timing,buffers)\n> select id,title,left(content,100)\n> from fulltextsearch, plainto_tsquery('pg_catalog.english','good') query\n> where query @@ to_tsvector('pg_catalog.english',content);\n> => This special query is fast again! See query plan\n> http://explain.depesz.com/s/FVT\n>\n> Setting enable_seqscan to off makes query fast again: See query plan\n> http://explain.depesz.com/s/eOr\n>\n> Finally, setting random_page_cost to 1 helps also (default is 4):\n>\n> movies=# set enable_seqscan to default;\n> movies=# set random_page_cost to 1.0;\n> => Query is fast. See query plan http://explain.depesz.com/s/M5Ke\n>\n> ----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jul 2013 15:38:55 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FTS performance issue - planner problem identified (but only\n partially resolved)"
},
{
"msg_contents": "\n> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n> plainto_tsquery('english', 'good');\n>\n> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n> The planner obviously always chooses table scan\n\n\nHello,\n\nA probable reason for the time difference is the cost for decompressing toasted content.\nAt lest in 8.3, the planner was not good at estimating it.\n\nI'm getting better overall performances since I've stopped collect statistic on tsvectors.\nAn alternative would have been to disallow compression on them.\n\nI'm aware this is a drastic way and would not recommend it without testing. The benefit may depend on the type of data you are indexing.\nIn our use case these are error logs with many java stack traces, hence with many lexemes poorly discriminative.\n\nsee: http://www.postgresql.org/message-id/[email protected]\nas a comment on\nhttp://www.postgresql.org/message-id/C4DAC901169B624F933534A26ED7DF310861B363@JENMAIL01.ad.intershop.net\n\nregards,\n\nMarc Mamin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jul 2013 19:57:43 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FTS performance issue - planner problem identified\n (but only partially resolved)"
},
{
"msg_contents": "Hi Marc\n\nThanks a lot for your hint!\n\nYou mean doing a \"SET track_counts (true);\" for the whole session?\nThat would be ok if it would be possible just for the gin index.\n\nIt's obviously an issue of the planner estimation costs.\nThe data I'm speaking about (\"movies\") has a text attribute which has\na length of more than 8K so it's obviously having to do with\ndetoasting.\nBut the thoughts about @@ operators together with this GIN index seem\nalso to be valid.\n\nI hope this issue is being tracked in preparation for 9.3.\n\nRegards, Stefan\n\n\n2013/7/19 Marc Mamin <[email protected]>:\n>\n>> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n>> plainto_tsquery('english', 'good');\n>>\n>> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n>> The planner obviously always chooses table scan\n>\n>\n> Hello,\n>\n> A probable reason for the time difference is the cost for decompressing toasted content.\n> At least in 8.3, the planner was not good at estimating it.\n>\n> I'm getting better overall performances since I've stopped collect statistic on tsvectors.\n> An alternative would have been to disallow compression on them.\n>\n> I'm aware this is a drastic way and would not recommend it without testing. The benefit may depend on the type of data you are indexing.\n> In our use case these are error logs with many java stack traces, hence with many lexemes poorly discriminative.\n>\n> see: http://www.postgresql.org/message-id/[email protected]\n> as a comment on\n> http://www.postgresql.org/message-id/C4DAC901169B624F933534A26ED7DF310861B363@JENMAIL01.ad.intershop.net\n>\n> regards,\n>\n> Marc Mamin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Jul 2013 01:55:29 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FTS performance issue - planner problem identified (but\n only partially resolved)"
},
{
"msg_contents": "\n\n________________________________________\nVon: Stefan Keller [[email protected]]\n>Gesendet: Samstag, 20. Juli 2013 01:55\n>\n>Hi Marc\n>\n>Thanks a lot for your hint!\n>\n>You mean doing a \"SET track_counts (true);\" for the whole session?\n\nNo, \nI mean \n\nALTER TABLE <table> ALTER <ts_vector_column> SET STATISTICS 0;\n\nAnd remove existing statistics\n\nDELETE FROM pg_catalog.pg_statistic \nwhere starelid='<table>':: regclass\nAND staattnum = (SELECT attnum FROM \tpg_attribute\n WHERE attrelid = '<table>':: regclass\n AND attname = '<ts_vector_column>'::name\n )\n\nBut you should first try to find out which proportion of your ts queries are faster \nwhen using a table scan as they will probably not happen anymore afterwards !\n(Except if further columns on your table 'FullTextSearch' are considered by the planner)\n\n\n\n\n>That would be ok if it would be possible just for the gin index.\n>\n>It's obviously an issue of the planner estimation costs.\n>The data I'm speaking about (\"movies\") has a text attribute which has\n>a length of more than 8K so it's obviously having to do with\n>detoasting.\n>But the thoughts about @@ operators together with this GIN index seem\n>also to be valid.\n>\n>I hope this issue is being tracked in preparation for 9.3.\n>\n>Regards, Stefan\n>\n>\n>2013/7/19 Marc Mamin <[email protected]>:\n>>\n>>> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n>>> plainto_tsquery('english', 'good');\n>>>\n>>> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n>>> The planner obviously always chooses table scan\n>>\n>>\n>> Hello,\n>>\n>> A probable reason for the time difference is the cost for decompressing toasted content.\n>> At least in 8.3, the planner was not good at estimating it.\n>>\n>> I'm getting better overall performances since I've stopped collect statistic on tsvectors.\n>> An alternative would have been to disallow compression on them.\n>>\n>> I'm aware this is a drastic way and would not recommend it without testing. The benefit may depend on the type of data you are indexing.\n>> In our use case these are error logs with many java stack traces, hence with many lexemes poorly discriminative.\n>>\n>> see: http://www.postgresql.org/message-id/[email protected]\n>> as a comment on\n>> http://www.postgresql.org/message-id/C4DAC901169B624F933534A26ED7DF310861B363@JENMAIL01.ad.intershop.net\n>>\n>> regards,\n>>\n>> Marc Mamin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Jul 2013 08:46:21 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FTS performance issue - planner problem identified\n (but only partially resolved)"
},
{
"msg_contents": "Stefan Keller <[email protected]> wrote:\n\n> Finally, setting random_page_cost to 1 helps also - but I don't\n> like this setting neither.\n\nWell, you should learn to like whichever settings best model your\nactual costs given your level of caching and your workload. ;-)\nFWIW, I have found page costs less volatile and easier to tune\nwith cpu_tuple_cost increased. I just always start by bumping\nthat to 0.03.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 29 Jul 2013 13:28:56 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FTS performance issue - planner problem identified (but only\n partially resolved)"
},
{
"msg_contents": "Hi Kevin\n\nWell, you're right :-) But my use cases are un-specific \"by design\"\nsince I'm using FTS as a general purpose function.\n\nSo I still propose to enhance the planner too as Tom Lane and your\ncolleague suggest based on repeated similar complaints [1].\n\nYours, Stefan\n\n[1] http://www.postgresql.org/message-id/CA+TgmoZgQBeu2KN305hwDS+aXW7YP0YN9vZwBsbWA8Unst+cew@mail.gmail.com\n\n\n2013/7/29 Kevin Grittner <[email protected]>:\n> Stefan Keller <[email protected]> wrote:\n>\n>> Finally, setting random_page_cost to 1 helps also - but I don't\n>> like this setting neither.\n>\n> Well, you should learn to like whichever settings best model your\n> actual costs given your level of caching and your workload. ;-)\n> FWIW, I have found page costs less volatile and easier to tune\n> with cpu_tuple_cost increased. I just always start by bumping\n> that to 0.03.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Jul 2013 01:29:46 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FTS performance issue - planner problem identified (but\n only partially resolved)"
}
] |
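A sketch of the "materialize the tsvector and disallow compression" alternative mentioned by Marc Mamin in the thread above, using the poster's table name; the new column and index names are invented, and a trigger to keep content_tsv current on updates is omitted.

ALTER TABLE fulltextsearch ADD COLUMN content_tsv tsvector;
UPDATE fulltextsearch SET content_tsv = to_tsvector('pg_catalog.english', content);
ALTER TABLE fulltextsearch ALTER COLUMN content_tsv SET STORAGE EXTERNAL;   -- keep the tsvector uncompressed
ALTER TABLE fulltextsearch ALTER COLUMN content_tsv SET STATISTICS 0;       -- skip statistics collection on it
CREATE INDEX fulltextsearch_tsv_gin ON fulltextsearch USING gin (content_tsv);
ANALYZE fulltextsearch;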
[
{
"msg_contents": "I tried sending this a couple days ago but I wasn't a member of the group\nso I think it's in limbo. Apologies if a 2nd copy shows up at some point.\n\nWe recently migrated a 1.3TB database from 8.4 to 9.2.2 on a new server. As\npart of this migration we added partitions to the largest tables so we\ncould start removing old data to an archive database. Large queries perform\nmuch better due to not hitting the older data as expected. Small queries\nserved from records in memory are suffering a much bigger performance hit\nthan anticipated due to the partitioning.\n\nI'm able to duplicate this issue on our server trivially with these\ncommands: http://pgsql.privatepaste.com/7223545173\n\nRunning the queries from the command line 10k times (time psql testdb <\ntest1.sql >/dev/null) results in a 2x slowdown for the queries not using\ntesttable_90 directly. (~4s vs ~2s).\n\nRunning a similar single record select on a non-partitioned table averages\n10k in 2s.\n\nRunning \"select 1;\" 10k times in the same method averages 1.8 seconds.\n\nThis matches exactly what I'm seeing in our production database. The\nnumbers are different, but the 2x slowdown persists. Doing a similar test\non another table on production with 7 children and 3 check constraints per\nchild results in a 3x slowdown.\n\nI'm aware that partitioning has an impact on the planner, but doubling the\ntime of in memory queries with only 5 partitions with 1 check each is much\ngreater than anticipated. Are my expectations off and this is normal\nbehavior or is there something I can do to try and speed these in memory\nqueries up? I was unable to find any information online as to the expected\nplanner impact of X # of partitions.\n\nDatabase information follows:\n\nRed Hat Enterprise Linux Server release 6.4 (Santiago)\nLinux hostname.domainname 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29\n16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n20120305 (Red Hat 4.4.6-4), 64-bit\n\nServer info:\n4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n128gb RAM\n\n DateStyle | ISO,\nMDY\n| configuration file\n default_statistics_target |\n5000\n| configuration file\n default_text_search_config |\npg_catalog.english\n| configuration file\n effective_cache_size |\n64000MB\n| configuration file\n effective_io_concurrency |\n2\n| configuration file\n fsync |\non\n| configuration file\n lc_messages |\nC\n| configuration file\n lc_monetary |\nC\n| configuration file\n lc_numeric |\nC\n| configuration file\n lc_time |\nC\n| configuration file\n max_connections |\n500\n| configuration file\n max_stack_depth |\n2MB\n| environment\n shared_buffers |\n32000MB\n| configuration file\n synchronous_commit |\non\n| configuration file\n TimeZone |\nCST6CDT\n| configuration file\n wal_buffers |\n16MB\n| configuration file\n wal_level |\narchive\n| configuration file\n wal_sync_method |\nfdatasync\n| configuration file\n\nI tried sending this a couple days ago but I wasn't a member of the group so I think it's in limbo. Apologies if a 2nd copy shows up at some point.\r\nWe recently migrated a 1.3TB database from 8.4 to 9.2.2 on a new server. As part of this migration we added partitions to the largest tables so we could start removing old data to an archive database. Large queries perform much better due to not hitting the older data as expected. 
Small queries served from records in memory are suffering a much bigger performance hit than anticipated due to the partitioning.\nI'm able to duplicate this issue on our server trivially with these commands: http://pgsql.privatepaste.com/7223545173Running the queries from the command line 10k times (time psql testdb < test1.sql >/dev/null) results in a 2x slowdown for the queries not using testtable_90 directly. (~4s vs ~2s). \nRunning a similar single record select on a non-partitioned table averages 10k in 2s. Running \"select 1;\" 10k times in the same method averages 1.8 seconds.This matches exactly what I'm seeing in our production database. The numbers are different, but the 2x slowdown persists. Doing a similar test on another table on production with 7 children and 3 check constraints per child results in a 3x slowdown.\nI'm aware that partitioning has an impact on the planner, but doubling the time of in memory queries with only 5 partitions with 1 check each is much greater than anticipated. Are my expectations off and this is normal behavior or is there something I can do to try and speed these in memory queries up? I was unable to find any information online as to the expected planner impact of X # of partitions.\nDatabase information follows:Red Hat Enterprise Linux Server release 6.4 (Santiago)Linux hostname.domainname 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bitServer info:4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz128gb RAM DateStyle | ISO, MDY | configuration file\r\n\r\n default_statistics_target | 5000 | configuration file default_text_search_config | pg_catalog.english | configuration file\r\n\r\n effective_cache_size | 64000MB | configuration file effective_io_concurrency | 2 | configuration file\r\n\r\n fsync | on | configuration file lc_messages | C | configuration file\r\n\r\n lc_monetary | C | configuration file lc_numeric | C | configuration file\r\n\r\n lc_time | C | configuration file max_connections | 500 | configuration file\r\n\r\n max_stack_depth | 2MB | environment shared_buffers | 32000MB | configuration file\r\n\r\n synchronous_commit | on | configuration file TimeZone | CST6CDT | configuration file\r\n\r\n wal_buffers | 16MB | configuration file wal_level | archive | configuration file\r\n\r\n wal_sync_method | fdatasync | configuration file",
"msg_date": "Fri, 19 Jul 2013 08:52:00 -0500",
"msg_from": "Skarsol <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Relatively high planner overhead on partitions?"
},
{
"msg_contents": "Hello\n\n\n\n2013/7/19 Skarsol <[email protected]>:\n> I tried sending this a couple days ago but I wasn't a member of the group so\n> I think it's in limbo. Apologies if a 2nd copy shows up at some point.\n>\n> We recently migrated a 1.3TB database from 8.4 to 9.2.2 on a new server. As\n> part of this migration we added partitions to the largest tables so we could\n> start removing old data to an archive database. Large queries perform much\n> better due to not hitting the older data as expected. Small queries served\n> from records in memory are suffering a much bigger performance hit than\n> anticipated due to the partitioning.\n>\n> I'm able to duplicate this issue on our server trivially with these\n> commands: http://pgsql.privatepaste.com/7223545173\n>\n> Running the queries from the command line 10k times (time psql testdb <\n> test1.sql >/dev/null) results in a 2x slowdown for the queries not using\n> testtable_90 directly. (~4s vs ~2s).\n\nif all data in your test living in memory - then bottleneck is in CPU\n- and any other node in execution plan is significant.\n\nIt is not surprise, because OLTP relation databases are not well\noptimized for this use case. A designers expected much more\nsignificant impact of IO operations, and these databases are designed\nto minimize bottleneck in IO - with relative low memory using. This\nuse case is better solved in OLAP databases (read optimized databases)\ndesigned after 2000 year - like monetdb, verticadb, or last year cool\ndb HANA.\n\nRegards\n\nPavel\n\n\n>\n> Running a similar single record select on a non-partitioned table averages\n> 10k in 2s.\n>\n> Running \"select 1;\" 10k times in the same method averages 1.8 seconds.\n>\n> This matches exactly what I'm seeing in our production database. The numbers\n> are different, but the 2x slowdown persists. Doing a similar test on another\n> table on production with 7 children and 3 check constraints per child\n> results in a 3x slowdown.\n>\n> I'm aware that partitioning has an impact on the planner, but doubling the\n> time of in memory queries with only 5 partitions with 1 check each is much\n> greater than anticipated. Are my expectations off and this is normal\n> behavior or is there something I can do to try and speed these in memory\n> queries up? 
I was unable to find any information online as to the expected\n> planner impact of X # of partitions.\n>\n> Database information follows:\n>\n> Red Hat Enterprise Linux Server release 6.4 (Santiago)\n> Linux hostname.domainname 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29\n> 16:51:51 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux\n> PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6\n> 20120305 (Red Hat 4.4.6-4), 64-bit\n>\n> Server info:\n> 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n> 128gb RAM\n>\n> DateStyle | ISO, MDY\n> | configuration file\n> default_statistics_target | 5000\n> | configuration file\n> default_text_search_config | pg_catalog.english\n> | configuration file\n> effective_cache_size | 64000MB\n> | configuration file\n> effective_io_concurrency | 2\n> | configuration file\n> fsync | on\n> | configuration file\n> lc_messages | C\n> | configuration file\n> lc_monetary | C\n> | configuration file\n> lc_numeric | C\n> | configuration file\n> lc_time | C\n> | configuration file\n> max_connections | 500\n> | configuration file\n> max_stack_depth | 2MB\n> | environment\n> shared_buffers | 32000MB\n> | configuration file\n> synchronous_commit | on\n> | configuration file\n> TimeZone | CST6CDT\n> | configuration file\n> wal_buffers | 16MB\n> | configuration file\n> wal_level | archive\n> | configuration file\n> wal_sync_method | fdatasync\n> | configuration file\n>\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Jul 2013 18:16:30 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Relatively high planner overhead on partitions?"
}
] |
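One hedged way to sidestep the per-query planning overhead measured above for single-row lookups is to address the known child partition directly, optionally caching the plan with PREPARE; the table name comes from the poster's test case, but the id column and value are assumed.

SELECT * FROM testtable_90 WHERE id = 12345;          -- planner only considers one child table

PREPARE get_row(int) AS
    SELECT * FROM testtable_90 WHERE id = $1;         -- plan the lookup once per session
EXECUTE get_row(12345);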
[
{
"msg_contents": "Hello,\n\nI want test the PostgreSQL performance using DBT5, where my PostgreSQL running on server (say 192.168.56.101) and want to test the performance executing DBT5 on other machine (say 192.168.56.102).\n\nMy aim is test the database performance excluding overhead to testing tool running on same machine.\n\nIs this possible using existing DBT5 code or need to modify somehow ?\n\nThanks and Regards,\nAmul Sul\n Hello,I want test the PostgreSQL performance using DBT5, where my PostgreSQL running on server (say 192.168.56.101) and want to test the performance executing DBT5 on other machine (say 192.168.56.102).My aim is test the database performance excluding overhead to testing tool running on same machine.Is this possible using existing DBT5 code or need to modify somehow ?Thanks and Regards,Amul Sul",
"msg_date": "Tue, 23 Jul 2013 10:13:01 +0800 (SGT)",
"msg_from": "amul sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: [osdldbt-general] Running DBT5 on remote database server"
},
{
"msg_contents": ">I want test the PostgreSQL performance using DBT5, where my PostgreSQL\nrunning on server (say >192.168.56.101) and want to test the performance\nexecuting DBT5 on other machine (say 192.168.56.102).\n>My aim is test the database performance excluding overhead to testing tool\nrunning on same machine.\n>Is this possible using existing DBT5 code or need to modify somehow ?\n\nIts always beneficial to use a wrapper script on the top of a benchmark.\nIn such a scenarios you can use two servers (Database server,Test server)\nYou can run this wrapper script on test server against database server. \nIn this case the resources of Database servers are used only for running the\nbenchmark, and Test server can be used for many activities, such as Results\ncollection ,resource usage collection etc. IOW you can avoid bottleneck of\ntesting tool.\n \nGreg smith has written a very nice script for pgbnech benchmark.\nYou can reuse most of the part.\nhere is the link for that:\nhttps://github.com/gregs1104/pgbench-tools\nhope this will be helpful.\n\nThank you,\nSamrat Revagade,\nNTT-DATA-OSS Center (Pune)\n\n\n\n\n-----\nThank You,\nSamrat.R.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Fw-osdldbt-general-Running-DBT5-on-remote-database-server-tp5764754p5765128.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Jul 2013 06:03:07 -0700 (PDT)",
"msg_from": "Samrat Revagade <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [osdldbt-general] Running DBT5 on remote database server"
},
{
"msg_contents": "It is always better to run performance script and database on different\nmachines.\n\n From DBT5 documentation it doesn't seems like you can connect to remote\nhost.\nIf yo found you can use that.\n\nIf it is not present then:-\n\nUse wrapper tools like pg_bench tool which samrat has suggested .\n\nwhere you can specify dbt5 command as in this tool command for pg_bench is\nwritten.\n\nAlso check host variable in dbt5 and if it is possible to change values for\nthat variable then pass address of remote host. \n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Fw-osdldbt-general-Running-DBT5-on-remote-database-server-tp5764754p5765132.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Jul 2013 06:26:18 -0700 (PDT)",
"msg_from": "sachin kotwal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [osdldbt-general] Running DBT5 on remote database server"
},
{
"msg_contents": "On 7/22/13 10:13 PM, amul sul wrote:\n> I want test the PostgreSQL performance using DBT5, where my PostgreSQL\n> running on server (say 192.168.56.101) and want to test the performance\n> executing DBT5 on other machine (say 192.168.56.102).\n\nYou can do this. Here's the help for the run_workload.sh command:\n\nusage: run_workload.sh -c <number of customers> -d <duration of test> -u \n<number of users>\nother options:\n -a <pgsql>\n -b <database parameters>\n -f <scale factor. (default 500)>\n -h <database host name. (default localhost)>\n...\n\n\"-h\" is the setting that points toward another server. You will also \nneed to setup the postgresql.conf and pg_hba.conf on the system to allow \nremote connections, the same way as this is normally done with Postgres.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 31 Jul 2013 12:02:50 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [osdldbt-general] Running DBT5 on remote database\n server"
}
] |
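As a small hedged check to confirm the driver machine is really exercising the remote database once -h is used: connect from the test machine and ask the server where it is listening. The host, port, and database name below are only example values.

-- run from the test machine, e.g.: psql -h 192.168.56.101 -p 5432 -d dbt5
SELECT inet_server_addr() AS server_ip,
       inet_server_port() AS server_port;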
[
{
"msg_contents": "I am trying to understand how memory is allocated/used by our Postgresql Database connections. From reading the documentation it appears that work_mem and temp_buffers are the 2 largest contributors (and work_mem usage can grow) to the memory utilized by the Database connections. In addition, if I understand correctly, work_mem and temp_buffers each have their own pool, and thus database connections use (when needed) and return memory to these pools. I have not read this anywhere, but based on observation it appears that once these pools grow, they never release any of the memory (e.g. they do not shrink in size if some of the memory has not been for a given period of time).\r\n\r\nWith that said, are there any mechanisms available to determine how much work_mem and temp_buffers memory has been allocated by the Postgres database (and by database connection/process would be very useful as well)? Also, which postgres process manages the memory pools for work_mem and temp_buffers?\r\n\r\nFYI – I am using smem (on a linux server) to monitor the memory allocated to our Database connections. In an attempt to lower our memory footprint, I lowered our setting for work_mem from 1MB down to 500kB (in addition I enabled log_temp_files to see the SQL statements that now use temp files for sorting and hash operations). As I expected the memory used by the connections that were doing large sorts went down in size. However, one of those DB connections dramatically increased in memory usage with this change. It went from approx. 6MB up to 37MB in memory usage? Are temp_buffers used in conjunction with some sorting operations that use temp_files (and thus this connection allocated several temp_buffers? Although I thought the temp_buffers parameter was the amount of memory that would be allocated for a session (e.g. there is no multiplying factor like the work_mem for a session)?\r\n\r\nWe are using Postgres 9.0.13\r\n shared_buffers = 800MB\r\n work_mem = 1MB\r\n temp_buffers = 8MB (our applications do not use temp tables)\r\n effective_cache_size = 1500MB\r\n\r\nThanks,\r\nAlan\r\n\r\n\n\n\n\n\n\n\n\n\n\nI am trying to understand how memory is allocated/used by our Postgresql Database connections. From reading the documentation it appears that work_mem and temp_buffers are the 2 largest contributors (and work_mem usage can grow) to the memory utilized\r\nby the Database connections. In addition, if I understand correctly, work_mem and temp_buffers each have their own pool, and thus database connections use (when needed) and return memory to these pools. I have not read this anywhere, but based on observation\r\nit appears that once these pools grow, they never release any of the memory (e.g. they do not shrink in size if some of the memory has not been for a given period of time).\n \nWith that said, are there any mechanisms available to determine how much work_mem and temp_buffers memory has been allocated by the Postgres database (and by database connection/process would be very useful as well)? Also, which postgres process manages\r\nthe memory pools for work_mem and temp_buffers?\n \nFYI – I am using smem (on a linux server) to monitor the memory allocated to our Database connections. In an attempt to lower our memory footprint, I lowered our setting for work_mem from 1MB down to 500kB (in addition I enabled log_temp_files to see\r\nthe SQL statements that now use temp files for sorting and hash operations). 
As I expected the memory used by the connections that were doing large sorts went down in size. However, one of those DB connections dramatically increased in memory usage with this\r\nchange. It went from approx. 6MB up to 37MB in memory usage? Are temp_buffers used in conjunction with some sorting operations that use temp_files (and thus this connection allocated several temp_buffers? Although I thought the temp_buffers parameter was\r\nthe amount of memory that would be allocated for a session (e.g. there is no multiplying factor like the work_mem for a session)?\n \nWe are using Postgres 9.0.13\n shared_buffers = 800MB\n work_mem = 1MB \n temp_buffers = 8MB (our applications do not use temp tables)\n effective_cache_size = 1500MB\n \nThanks,\nAlan",
"msg_date": "Thu, 25 Jul 2013 13:23:46 +0000",
"msg_from": "\"McKinzie, Alan (Alan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How is memory allocated/used by Postgresql Database connections"
},
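A minimal sketch of per-backend measurement for the question above, assuming smem is installed and that its -t (totals) and -P (process filter) options behave as in current releases; the PID in the second query is only a placeholder:

    # per-process USS/PSS/RSS for all backends, with a totals row
    smem -t -P "postgres:"

    -- map an interesting PID back to its session (9.0 column names)
    SELECT procpid, usename, datname, current_query
    FROM pg_stat_activity
    WHERE procpid = 12345;   -- PID taken from the smem output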
{
"msg_contents": "\"McKinzie, Alan (Alan)\" <[email protected]> writes:\n> I am trying to understand how memory is allocated/used by our Postgresql Database connections. From reading the documentation it appears that work_mem and temp_buffers are the 2 largest contributors (and work_mem usage can grow) to the memory utilized by the Database connections. In addition, if I understand correctly, work_mem and temp_buffers each have their own pool, and thus database connections use (when needed) and return memory to these pools. I have not read this anywhere, but based on observation it appears that once these pools grow, they never release any of the memory (e.g. they do not shrink in size if some of the memory has not been for a given period of time).\n\nTemp buffers, once used within a particular backend process, are kept\nfor the life of that process. Memory consumed for work_mem will be\nreleased back to libc at the end of the query. The net effect of that\nis platform-dependent --- my experience is that glibc on Linux is able\nto give memory back to the OS, but on other platforms the process memory\nsize doesn't shrink.\n\n> With that said, are there any mechanisms available to determine how much work_mem and temp_buffers memory has been allocated by the Postgres database (and by database connection/process would be very useful as well)? Also, which postgres process manages the memory pools for work_mem and temp_buffers?\n\nThere's no \"pool\", these allocations are process-local.\n\n> FYI – I am using smem (on a linux server) to monitor the memory allocated to our Database connections. In an attempt to lower our memory footprint, I lowered our setting for work_mem from 1MB down to 500kB (in addition I enabled log_temp_files to see the SQL statements that now use temp files for sorting and hash operations). As I expected the memory used by the connections that were doing large sorts went down in size. However, one of those DB connections dramatically increased in memory usage with this change. It went from approx. 6MB up to 37MB in memory usage?\n\nKeep in mind that work_mem is the max per sort or hash operation, so a\ncomplex query could consume a multiple of that. The most obvious theory\nabout your result is that the work_mem change caused the planner to\nswitch to another plan that involved more sorts or hashes than before.\nBut without a lot more detail than this, we can only speculate.\n\n> Are temp_buffers used in conjunction with some sorting operations that use temp_files (and thus this connection allocated several temp_buffers?\n\nNo, they're only used in connection with temp tables.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Jul 2013 09:41:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How is memory allocated/used by Postgresql Database connections"
},
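A quick way to see the per-operation behaviour Tom describes is EXPLAIN ANALYZE under the reduced work_mem; the tables and joins below are placeholders, not the application's real statements:

    SET work_mem = '500kB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT o.customer_id, count(*)
    FROM orders o
    JOIN payments p ON p.order_id = o.order_id   -- hash join: one work_mem budget
    GROUP BY o.customer_id                       -- grouping/aggregation: another
    ORDER BY count(*) DESC;                      -- sort: another
    -- "Sort Method: external merge  Disk: ..." or a hash reporting Batches > 1
    -- means that step spilled to temp files; several in-memory steps at once is
    -- how a single backend ends up using a multiple of work_mem.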
{
"msg_contents": "On Thu, Jul 25, 2013 at 6:23 AM, McKinzie, Alan (Alan)\n<[email protected]> wrote:\n\n> FYI – I am using smem (on a linux server) to monitor the memory allocated to\n> our Database connections. In an attempt to lower our memory footprint, I\n> lowered our setting for work_mem from 1MB down to 500kB (in addition I\n> enabled log_temp_files to see the SQL statements that now use temp files for\n> sorting and hash operations).\n\n1MB is already pretty small. If you have a lot of connections all\nusing temp space at the same time, you should probably consider using\na connection pooler to limit that number and then increasing work_mem,\nrather than decreasing it.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Jul 2013 10:32:24 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How is memory allocated/used by Postgresql Database connections"
}
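For reference, a pooled setup in the direction Jeff suggests could start from a sketch like this; the database name, ports and pool sizes are illustrative only:

    ; pgbouncer.ini sketch
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction   ; server connections are reused between transactions
    max_client_conn = 500     ; what the application side may open
    default_pool_size = 20    ; real PostgreSQL connections per database/user pair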
] |
[
{
"msg_contents": "Hi everybody,\n\nIn recent days, we have seen many processes in reaching the lock held \n5000. At that time my machine will become sluggish and no response from \nthe database. I tried to change configuration parameters, but have not \nfound anything satisfactory. further in meeting log messages like the \nfollowing:\nCOTidleERROR: out of memory\nCOTidleDETAIL: Can not enlarge string buffer container containing 0 \nbytes by 1476395004 more bytes.\nCOTidleLOG: incomplete message from client\nCOTUPDATE waitingLOG: process 20761 still waiting for ShareLock on \ntransaction 10,580,510 1664,674 ms after\n\nMy machine is on linux postgres version 9.2.2, and the following settings:\n\nmemory ram: 128 GB\ncores: 32\n\nmax_connections: 900\nshared_buffers = 2048MB\nwork_mem = 1024MB\nmaintenance_work_mem = 1024MB\ntemp_buffers = 512MB\ncheckpoint_segments = 103\n\nregrads\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Jul 2013 05:52:28 -0500",
"msg_from": "Jeison Bedoya <[email protected]>",
"msg_from_op": true,
"msg_subject": "to many locks held"
},
{
"msg_contents": "On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <[email protected]>wrote:\n\n> Hi everybody,\n>\n> In recent days, we have seen many processes in reaching the lock held 5000.\n\n\n\nDo you know what queries are holding locks? Is that behaviour expected?\n\n\n\n> At that time my machine will become sluggish and no response from the\n> database. I tried to change configuration parameters, but have not found\n> anything satisfactory. further in meeting log messages like the following:\n> COTidleERROR: out of memory\n> COTidleDETAIL: Can not enlarge string buffer container containing 0 bytes\n> by 1476395004 more bytes.\n>\n\nI've never come across that message before, so someone wiser will need to\ncomment on that.\n\n\n> COTidleLOG: incomplete message from client\n> COTUPDATE waitingLOG: process 20761 still waiting for ShareLock on\n> transaction 10,580,510 1664,674 ms after\n>\n> My machine is on linux postgres version 9.2.2, and the following settings:\n>\n\nYou will want to upgrade to the latest point release (9.2.4) as there was a\nserious security vulnerability fixed in 9.2.3. Details:\nhttp://www.postgresql.org/about/news/1446/\n\n\n>\n> memory ram: 128 GB\n> cores: 32\n>\n> max_connections: 900\n>\n\nI would say you might be better off using a connection pooler if you need\nthis many connections.\n\n\nwork_mem = 1024MB\n>\n\nwork_mem is pretty high. It would make sense in a data warehouse-type\nenvironment, but with a max of 900 connections, that can get used up in a\nhurry. Do you find your queries regularly spilling sorts to disk (something\nlike \"External merge Disk\" in your EXPLAIN ANALYZE plans)?\n\nHave you looked at swapping and disk I/O during these periods of\nsluggishness?\n\nOn Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <[email protected]> wrote:\nHi everybody,\n\nIn recent days, we have seen many processes in reaching the lock held 5000.Do you know what queries are holding locks? Is that behaviour expected? \n At that time my machine will become sluggish and no response from the database. I tried to change configuration parameters, but have not found anything satisfactory. further in meeting log messages like the following:\n\nCOTidleERROR: out of memory\nCOTidleDETAIL: Can not enlarge string buffer container containing 0 bytes by 1476395004 more bytes.I've never come across that message before, so someone wiser will need to comment on that.\n \nCOTidleLOG: incomplete message from client\nCOTUPDATE waitingLOG: process 20761 still waiting for ShareLock on transaction 10,580,510 1664,674 ms after\n\nMy machine is on linux postgres version 9.2.2, and the following settings:You will want to upgrade to the latest point release (9.2.4) as there was a serious security vulnerability fixed in 9.2.3. Details: http://www.postgresql.org/about/news/1446/\n \n\nmemory ram: 128 GB\ncores: 32\n\nmax_connections: 900I would say you might be better off using a connection pooler if you need this many connections.\n\nwork_mem = 1024MBwork_mem is pretty high. It would make sense in a data warehouse-type environment, but with a max of 900 connections, that can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (something like \"External merge Disk\" in your EXPLAIN ANALYZE plans)?\nHave you looked at swapping and disk I/O during these periods of sluggishness?",
"msg_date": "Tue, 30 Jul 2013 07:48:08 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to many locks held"
},
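To answer the two questions above on 9.2, something along these lines is usually enough; column names follow 9.2's pg_stat_activity:

    -- sessions holding the most locks, with how long their transaction has run
    SELECT a.pid, a.state, a.waiting, now() - a.xact_start AS xact_age,
           count(*) AS locks_held, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE l.granted
    GROUP BY a.pid, a.state, a.waiting, a.xact_start, a.query
    ORDER BY locks_held DESC
    LIMIT 20;

    -- lock requests that are currently waiting
    SELECT a.pid, l.locktype, l.mode, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted;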
{
"msg_contents": "On Tue, Jul 30, 2013 at 11:48 PM, bricklen <[email protected]> wrote:\n\n> On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <[email protected]>wrote:\n>\n memory ram: 128 GB\n>> cores: 32\n>>\n>> max_connections: 900\n>>\n>\n> I would say you might be better off using a connection pooler if you need\n> this many connections.\n>\nYeah that's a lot. pgbouncer might be a good option in your case.\n\nwork_mem = 1024MB\n>>\n>\n> work_mem is pretty high. It would make sense in a data warehouse-type\n> environment, but with a max of 900 connections, that can get used up in a\n> hurry. Do you find your queries regularly spilling sorts to disk (something\n> like \"External merge Disk\" in your EXPLAIN ANALYZE plans)?\n>\nwork_mem is a per-operation setting for sort/hash operations. So in your\ncase you might finish with a maximum of 900GB of memory allocated based on\nthe maximum number of sessions that can run in parallel on your server.\nSimply reduce the value of work_mem to something your server can manage and\nyou should be able to solve your problems of OOM.\n-- \nMichael\n\nOn Tue, Jul 30, 2013 at 11:48 PM, bricklen <[email protected]> wrote:\nOn Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <[email protected]> wrote:\n\n\nmemory ram: 128 GB\ncores: 32\n\nmax_connections: 900I would say you might be better off using a connection pooler if you need this many connections.Yeah that's a lot. pgbouncer might be a good option in your case. \n\n\n\nwork_mem = 1024MBwork_mem is pretty high. It would make sense in a data warehouse-type environment, but with a max of 900 connections, that can get used up in a hurry. Do you find your queries regularly spilling sorts to disk (something like \"External merge Disk\" in your EXPLAIN ANALYZE plans)?\nwork_mem is a per-operation setting for sort/hash operations. So in your case you might finish with a maximum of 900GB of memory allocated based on the maximum number of sessions that can run in parallel on your server. Simply reduce the value of work_mem to something your server can manage and you should be able to solve your problems of OOM.\n-- Michael",
"msg_date": "Wed, 31 Jul 2013 14:18:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to many locks held"
},
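The worst case Michael describes is easy to put into numbers for the settings quoted in this thread (one work_mem allocation per backend; a complex plan can take several):

    SELECT pg_size_pretty(900 * 1024::bigint * 1024 * 1024) AS work_mem_worst_case,      -- 900 GB
           pg_size_pretty(900 * 512::bigint * 1024 * 1024)  AS temp_buffers_worst_case;  -- 450 GB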
{
"msg_contents": "Jeison Bedoya <[email protected]> wrote:\n\n> memory ram: 128 GB\n> cores: 32\n>\n> max_connections: 900\n\n> temp_buffers = 512MB\n\nIn addition to the other comments, be aware that temp_buffers is\nthe limit of how much RAM *each connection* can acquire to avoid\nwriting temporary table data to disk. Once allocated to a\nconnection, it will be reserved for that use on that connection\nuntil the connection closes. So temp_buffers could lock down 450\nGB of RAM even while all connections are idle. If the maximum\nconnections become active, and they average one work_mem allocation\napiece, that's an *additional* 900 GB of RAM which would be needed\nto avoid problems.\n\nReducing connections through a pooler is strongly indicated, and\nyou may still need to reduce work_mem or temp_buffers.\n\nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 14:03:28 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to many locks held"
}
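Put as concrete settings, Kevin's advice points in roughly this direction; the numbers are illustrative and assume a pooler in front of the database:

    # postgresql.conf sketch, values to be validated against the workload
    max_connections = 100     # the application connects through a pooler instead
    work_mem = 16MB           # per sort/hash operation, per backend
    temp_buffers = 16MB       # reserved per session once temp tables are touched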
] |
[
{
"msg_contents": "Hello team,\n\nWe have serious performance issues with a production EDB 9.2AS installation\n\nThe issue is mostly related to storage I/O bottlenecks during peak hours\nand we are looking for tunables on any level that could reduce the I/O\nspikes on SAN and improve overall DB performance.\n\nOur storage array consists of 16 disks in RAID-10 topology (device 253-2 on\nour OS configuration). We are also using RAID-5 for archive_log storage\n(also presented by SAN to the same machine - device 253-3)\n\nWe have set synchronous_commit to off but since almost all of application\nqueries are using prepared statements we don't get any real benefit.\n\nWe are using VMware , VMFS and LVM so we need your feedback on any kind of\ntunable that could remove load from storage during peak hours (FYI\napplication peak hours are 13:00-23:00 UTC, during night (04:00-06:00 UTC)\nthere are some heavy reporting activity + backups)\nArchive logs are rsync-ed to a remote backup server every 20 minutes.\n\nAlso please advise on any postgres.conf modification that could\nsignificantly affect storage load (WAL-checkpoint configuration etc.) (we\nhave not tried to move pg_xlog to a separate LUN since this is not an\noption - any other LUN would be using the same storage pool as the rest of\nthe /pgdata files)\nWe had some issues in the past with autovaccum deamon failing to work\nefficiently under high load so we have already applied your instructions\nfor a more aggressive auto-vacumm policy (changes already applied on\npostgresql.conf)\n\nLet me know if you want me to attach all the usual info for tickets\nregarding (OS, disks, PG conf, etc) plus the sar output and server logs\nfrom the last 3 days (24,25,26 June).\n\nThanks,\nTasos\n\nHello team,We have serious performance issues with a production EDB 9.2AS installationThe issue is mostly related to storage I/O bottlenecks during peak hours and we are looking for tunables on any level that could reduce the I/O spikes on SAN and improve overall DB performance.\nOur storage array consists of 16 disks in RAID-10 topology (device 253-2 on our OS configuration). We are also using RAID-5 for archive_log storage (also presented by SAN to the same machine - device 253-3)We have set synchronous_commit to off but since almost all of application queries are using prepared statements we don't get any real benefit.\nWe are using VMware , VMFS and LVM so we need your feedback on any kind of tunable that could remove load from storage during peak hours (FYI application peak hours are 13:00-23:00 UTC, during night (04:00-06:00 UTC) there are some heavy reporting activity + backups)\nArchive logs are rsync-ed to a remote backup server every 20 minutes.Also please advise on any postgres.conf modification that could significantly affect storage load (WAL-checkpoint configuration etc.) (we have not tried to move pg_xlog to a separate LUN since this is not an option - any other LUN would be using the same storage pool as the rest of the /pgdata files)\nWe had some issues in the past with autovaccum deamon failing to work efficiently under high load so we have already applied your instructions for a more aggressive auto-vacumm policy (changes already applied on postgresql.conf)\nLet me know if you want me to attach all the usual info for tickets regarding (OS, disks, PG conf, etc) plus the sar output and server logs from the last 3 days (24,25,26 June). Thanks,Tasos",
"msg_date": "Wed, 31 Jul 2013 14:58:53 +0300",
"msg_from": "Tasos Petalas <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG performance issues related to storage I/O waits"
},
{
"msg_contents": "On 31 Červenec 2013, 13:58, Tasos Petalas wrote:\n> Hello team,\n>\n> We have serious performance issues with a production EDB 9.2AS\n> installation\n\nAre you sure you've sent this to the right list? Because this is a\ncommunity mailing list for PostgreSQL, not for EDB. If you have a support\ncontract with EDB it's probably better to ask them directly (e.g. they\nmight give you advices about some custom features not available in vanilla\nPostgreSQL).\n\n> The issue is mostly related to storage I/O bottlenecks during peak hours\n> and we are looking for tunables on any level that could reduce the I/O\n> spikes on SAN and improve overall DB performance.\n\nSo is that a dedicated DWH machine, and PostgreSQL is responsible for most\nof the I/O load? Which processes are doing that? Backends handling queries\nor some background processes (say, checkpoints)? Is that random or\nsequential I/O, reads or writes, ...?\n\nHow much I/O are we talking about? Could it be that the SAN is overloaded\nby someone else (in case it's not dedicated to the database)?\n\nIt might turn out that the most effective solution is tuning the queries\nthat are responsible for the I/O activity.\n\n> Our storage array consists of 16 disks in RAID-10 topology (device 253-2\n> on\n> our OS configuration). We are also using RAID-5 for archive_log storage\n> (also presented by SAN to the same machine - device 253-3)\n\nI have no clue what device 253-3 is, but I assume you're using SAS disks.\n\n>\n> We have set synchronous_commit to off but since almost all of application\n> queries are using prepared statements we don't get any real benefit.\n\nUmmmm, how is this related? AFAIK those are rather orthogonal features,\ni.e. prepared statements should benefit from synchronous_commit=off just\nlike any other queries.\n\n> We are using VMware , VMFS and LVM so we need your feedback on any kind of\n> tunable that could remove load from storage during peak hours (FYI\n> application peak hours are 13:00-23:00 UTC, during night (04:00-06:00 UTC)\n> there are some heavy reporting activity + backups)\n> Archive logs are rsync-ed to a remote backup server every 20 minutes.\n>\n> Also please advise on any postgres.conf modification that could\n> significantly affect storage load (WAL-checkpoint configuration etc.) (we\n> have not tried to move pg_xlog to a separate LUN since this is not an\n> option - any other LUN would be using the same storage pool as the rest of\n> the /pgdata files)\n> We had some issues in the past with autovaccum deamon failing to work\n> efficiently under high load so we have already applied your instructions\n> for a more aggressive auto-vacumm policy (changes already applied on\n> postgresql.conf)\n>\n> Let me know if you want me to attach all the usual info for tickets\n> regarding (OS, disks, PG conf, etc) plus the sar output and server logs\n> from the last 3 days (24,25,26 June).\n\nWell, we can't really help you unless you give us this, so yes - attach\nthis info. And please try to identify what is actually causing most I/O\nactivity (e.g. using \"iotop\").\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 31 Jul 2013 14:40:54 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG performance issues related to storage I/O waits"
},
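For the "which processes cause the I/O" question, a batch-mode iotop capture over the peak window is usually the quickest answer, assuming iotop is available inside the VM:

    # only processes doing I/O, 5-second samples with timestamps, about one hour
    iotop -b -o -t -d 5 -n 720 | tee /tmp/iotop_peak.log
    # afterwards pull out the postgres lines for the report
    grep 'postgres' /tmp/iotop_peak.log | head -100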
{
"msg_contents": "Hi Tasos,\n\n(2013/07/31 20:58), Tasos Petalas wrote:\n> We have set synchronous_commit to off but since almost all of application queries\n> are using prepared statements we don't get any real benefit.\n\n> Also please advise on any postgres.conf modification that could significantly\n> affect storage load (WAL-checkpoint configuration etc.)\nPlease send us to your postgresql.conf and detail of your server (memory ,raid \ncache size and OS version).\n\nI think your server's memory is big. You should set small dirty_background_ratio \nor dirty_background_byte. If your server's memory is big, PostgreSQL's checkpoint \nexecutes stupid disk-write.\n\nBy the way, I make new checkpoint scheduler and method now. If you could send \nmore detail information which is like response time or load average, you will \nhelp me to make better my patch.\n\nBest regards,\n--\nMitsumasa KONDO\nNTT Open Source Software Center\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Aug 2013 11:02:06 +0900",
"msg_from": "KONDO Mitsumasa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG performance issues related to storage I/O waits"
},
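The suggestion above maps to the vm.dirty_* sysctls; the byte values below are only an illustration of "small" relative to 128 GB of RAM, and setting the *_bytes form overrides the corresponding *_ratio:

    # /etc/sysctl.conf sketch, apply with: sysctl -p
    vm.dirty_background_bytes = 268435456   # start background writeback at 256 MB
    vm.dirty_bytes = 1073741824             # block writers once 1 GB is dirty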
{
"msg_contents": "Hi,\n\nOn 1.8.2013 21:47, Tasos Petalas wrote:> I attach postgresql.conf, sar\noutput and server/os/disk info\n> \n> Will update you with iotop outputs shortly.\n\nSo, have you found anything interesting using iotop?\n\n>> On Wed, Jul 31, 2013 at 3:40 PM, Tomas Vondra <[email protected] \n>> <mailto:[email protected]>> wrote:\n>> \n>> On 31 Červenec 2013, 13:58, Tasos Petalas wrote:\n>>> Hello team,\n>>> \n>>> We have serious performance issues with a production EDB 9.2AS \n>>> installation\n>> \n>> Are you sure you've sent this to the right list? Because this is a \n>> community mailing list for PostgreSQL, not for EDB. If you have a \n>> support contract with EDB it's probably better to ask them directly\n>> (e.g. they might give you advices about some custom features not\n>> available in vanilla PostgreSQL).\n> \n> We do have a support contract with did not provide any valuable \n> feedback so far. As a person coming from the community world I \n> believe I can get answers here.\n\nYou can certainly get help here, although probably only for plain\nPostgreSQL, not for the features available only in EDB.\n\n>>> The issue is mostly related to storage I/O bottlenecks during \n>>> peak hours and we are looking for tunables on any level that \n>>> could reduce the I/O spikes on SAN and improve overall DB \n>>> performance.\n>> \n>> So is that a dedicated DWH machine, and PostgreSQL is responsible \n>> for most of the I/O load? Which processes are doing that? Backends \n>> handling queries or some background processes (say, checkpoints)? \n>> Is that random or sequential I/O, reads or writes, ...?\n> \n> The system is a heavy OLTP system. Bottlenecks are mostly related to \n> I/O writes (many small chunks). Machine is only hosting postgres.\n\nThat however still doesn't say which processes are responsible for that.\nIs that background writer, backends running queries or what? The iotop\nshould give you answer to this (or at least a hint).\n\n> How much I/O are we talking about? Could it be that the SAN is \n> overloaded by someone else (in case it's not dedicated to the \n> database)?\n> \n> Check SAR results (pg_data resides on dev 253-2 (RAID-10), \n> pg_archives on 253-3 (RAID-5)\n\nWhat is pg_archives? Also RAID-5 is generally poor choice for write\nintensive workloads.\n\nAlso, how are these volumes defined? Do they use distinct sets of disks?\nHow many disks are used for each volume?\n\n> It might turn out that the most effective solution is tuning the \n> queries that are responsible for the I/O activity.\n> \n> SAN is almost dedicated to the host running postgres.\n\nAnd? How does that contradict my suggestion to tune the queries? For\neach piece of hardware there are bottlenecks and a query that can hit\nthem. In those cases the best solution often is tuning the query.\n\n>>> Our storage array consists of 16 disks in RAID-10 topology \n>>> (device 253-2 on our OS configuration). We are also using RAID-5\n>>> for archive_log storage (also presented by SAN to the same \n>>> machine - device 253-3)\n>> \n>> I have no clue what device 253-3 is, but I assume you're using SAS\n>> disks.\n> \n> Yes we are using 15K SAS disks in RAID 10. (253-2 dev refers to sar \n> output for disks)\n\nOK, so the pg_archives is probably for xlog archive, right?\n\n>>> We have set synchronous_commit to off but since almost all of \n>>> application queries are using prepared statements we don't get \n>>> any real benefit.\n>> \n>> Ummmm, how is this related? 
AFAIK those are rather orthogonal \n>> features, i.e. prepared statements should benefit from \n>> synchronous_commit=off just like any other queries.\n> \n> it is not prepared statements. It is distributed transactions \n> (queries inside a PREPARE TRANSACTION block). / / /Certain utility \n> commands, for instance DROP TABLE, are forced to commit\n> synchronously regardless of the setting of synchronous_commit. This\n> is to ensure consistency between the server's file system and the\n> logical state of the database. The commands supporting two-phase\n> commit, such as PREPARE TRANSACTION, are also always synchronous./\n> \n> _/Taken form \n> http://www.postgresql.org/docs/9.2/static/wal-async-commit.html/_\n\nWell, then please use the proper term - prepared transactions and\nprepared statements are two very different things.\n\nBut yes, if you're using prepared transactions (for 2PC), then yes,\nsynchronous_commit=off is not going to improve anything as it simply has\nto be synchronous.\n\n>>> We are using VMware , VMFS and LVM so we need your feedback on \n>>> any kind of tunable that could remove load from storage during \n>>> peak hours (FYI application peak hours are 13:00-23:00 UTC, \n>>> during night (04:00-06:00 UTC) there are some heavy reporting \n>>> activity + backups) Archive logs are rsync-ed to a remote backup \n>>> server every 20 minutes.\n>>> \n>>> Also please advise on any postgres.conf modification that could \n>>> significantly affect storage load (WAL-checkpoint configuration \n>>> etc.) (we have not tried to move pg_xlog to a separate LUN since \n>>> this is not an option - any other LUN would be using the same \n>>> storage pool as the rest of the /pgdata files) We had some\n>>> issues in the past with autovaccum deamon failing to work\n>>> efficiently under high load so we have already applied your\n>>> instructions for a more aggressive auto-vacumm policy (changes\n>>> already applied on postgresql.conf)\n>>> \n>>> Let me know if you want me to attach all the usual info for \n>>> tickets regarding (OS, disks, PG conf, etc) plus the sar output \n>>> and server logs from the last 3 days (24,25,26 June).\n>> \n>> Well, we can't really help you unless you give us this, so yes - \n>> attach this info. And please try to identify what is actually \n>> causing most I/O activity (e.g. using \"iotop\").\n> \n> SAR outputs, postgresql.conf, other os/system h/w info attached.\n\nI've checked the conf, and I think you should really consider increasing\ncheckpoint_segments - it's set to 3 (= 64MB) but I think something like\n32 (=512MB) or even more would be more appropriate.\n\nI see you've enabled log_checkpoints - can you check your logs how often\nthe checkpoints happen?\n\nAlso, can you check pg_stat_bgwriter view? I'd bet the value in\ncheckpoints_timed is very low, compared to checkpoints_req. 
Or even\nbetter, get the values from this view before / after running the batch jobs.\n\nAnyway, I don't think writes are the main problem here - see this:\n\n DEV tps rd_sec/s wr_sec/s await %util\n04:00:01 dev253-2 176.56 134.07 1378.08 3.70 3.53\n04:10:01 dev253-2 895.22 11503.99 6735.08 16.63 8.24\n04:20:01 dev253-2 455.35 25523.80 1362.37 2.38 16.81\n04:30:01 dev253-2 967.29 95471.88 4193.50 6.70 37.44\n04:40:01 dev253-2 643.31 80754.86 2456.40 3.35 29.70\n04:50:01 dev253-2 526.35 84990.05 1323.28 2.07 29.41\n05:00:01 dev253-2 652.68 73192.18 1297.20 1.89 28.51\n05:10:01 dev253-2 1256.31 34786.32 5840.08 9.25 53.08\n05:20:01 dev253-2 549.84 14530.45 3522.85 8.12 9.89\n05:30:01 dev253-2 1363.27 170743.78 5490.38 7.53 59.75\n05:40:01 dev253-2 978.88 180199.97 1796.90 2.54 74.08\n05:50:01 dev253-2 1690.10 166467.91 8013.10 35.45 66.32\n06:00:01 dev253-2 2441.94 111316.65 15245.05 34.90 41.78\n\nit's a slightly modified sar output for the main data directory (on\n253-2). It clearly shows you're doing ~50MB/s of (random) reads compared\nto less than 5MB/s of writes (assuming a sector is 512B).\n\nThere's almost no activity on the pg_archives (253-3) device:\n\n00:00:01 DEV tps rd_sec/s wr_sec/s await %util\n04:00:01 dev253-3 20.88 0.01 167.00 14.10 0.02\n04:10:01 dev253-3 211.06 0.05 1688.43 43.61 0.12\n04:20:01 dev253-3 14.30 0.00 114.40 9.95 0.01\n04:30:01 dev253-3 112.78 0.45 901.75 17.81 0.06\n04:40:01 dev253-3 14.11 0.00 112.92 10.66 0.01\n04:50:01 dev253-3 7.39 56.94 56.85 10.91 0.04\n05:00:01 dev253-3 14.21 0.00 113.67 10.92 0.01\n05:10:01 dev253-3 7.05 0.26 56.15 17.03 0.02\n05:20:01 dev253-3 28.38 18.20 208.87 8.68 0.29\n05:30:01 dev253-3 41.71 0.03 333.63 14.70 0.03\n05:40:01 dev253-3 6.95 0.00 55.62 10.39 0.00\n05:50:01 dev253-3 105.36 386.44 830.83 9.62 0.19\n06:00:01 dev253-3 13.92 0.01 111.34 10.41 0.01\n\nIn the afternoon it's a different story - for 253-2 it looks like this:\n\n DEV tps rd_sec/s wr_sec/s await %util\n15:50:01 dev253-2 4742.91 33828.98 29156.17 84.84 105.14\n16:00:01 dev253-2 2781.05 12737.41 18878.52 19.24 80.53\n16:10:01 dev253-2 3661.51 20950.64 23758.96 36.86 99.03\n16:20:01 dev253-2 5011.45 32454.33 31895.05 72.75 102.38\n16:30:01 dev253-2 2638.08 14661.23 17853.16 25.24 75.64\n16:40:01 dev253-2 1988.95 5764.73 14190.12 45.05 58.80\n16:50:01 dev253-2 2185.15 88296.73 11806.38 7.46 84.37\n17:00:01 dev253-2 2031.19 12835.56 12997.34 8.90 82.62\n17:10:01 dev253-2 4009.24 34288.71 23974.92 38.07 103.01\n17:20:01 dev253-2 3605.86 26107.83 22457.41 45.76 90.90\n17:30:01 dev253-2 2550.47 7496.85 18267.07 19.10 65.87\n\nSo this is about 50:50 reads and writes, and this is also the time when\nwith some measurable activity on the 253-3 device:\n\n00:00:01 DEV tps rd_sec/s wr_sec/s await %util\n15:50:01 dev253-3 1700.97 9739.48 13249.75 22.63 8.53\n16:00:01 dev253-3 807.44 512.95 6439.17 15.21 0.82\n16:10:01 dev253-3 1236.72 22.92 9892.26 28.74 0.95\n16:20:01 dev253-3 1709.15 0.52 13672.70 40.89 1.69\n16:30:01 dev253-3 919.26 8217.60 7051.60 20.40 11.74\n16:40:01 dev253-3 601.66 0.37 4812.94 18.99 0.39\n16:50:01 dev253-3 476.40 0.42 3810.95 10.02 0.28\n17:00:01 dev253-3 636.03 0.15 5088.08 11.01 0.35\n17:10:01 dev253-3 1259.55 165.64 10069.65 15.18 1.01\n17:20:01 dev253-3 1194.10 0.29 9552.49 26.11 0.94\n17:30:01 dev253-3 785.40 2000.52 6201.21 33.12 3.06\n\nStill, this is pretty low write activity, and it's sequential.\n\nWhat I think you could/should do:\n\n* increase checkpoint_segments to 64 (or something)\n\n* move pg_xlog to a separate device (not simply a 
volume on the SAN,\n sharing disks with the other volumes - that won't give you anything)\n\nI'd expect these changes to improve the afternoon peak, as it's doing\nabout 50% writes. However I would not expect this to improve the morning\npeak, because that's doing a lot of reads (not writes).\n\nThe trick here is that your database is ~300GB - you're doing a lot of\nseeks across the whole database, and it's way more than the RAM. So it\nhas to actually access the disks. 15k disks have ~200 IOPS each, adding\nthem to RAID-10 generally gives you an array with\n\n (n/2) * (IOPS for a single disk)\n\nI don't know how exactly are your volumes defined on your SAN, but\nassuming you have RAID-10 on 12 drives gives you ~1200 IOPS.\n\nThis is the bottleneck you're hitting, and there's not much you can do\nabout it:\n\n * getting better storage (giving you more seeks) - say SSD-like disks\n * improving the application/queries/schema to minimize the seeks\n\nAnd so on.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 03 Aug 2013 22:38:25 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG performance issues related to storage I/O waits"
},
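One way to get the before/after pg_stat_bgwriter numbers asked for above is to snapshot the view around each peak or batch window and diff consecutive rows; a rough sketch:

    CREATE TABLE bgwriter_snap AS
        SELECT now() AS ts, * FROM pg_stat_bgwriter;

    -- repeat before and after each window of interest
    INSERT INTO bgwriter_snap
        SELECT now(), * FROM pg_stat_bgwriter;

    -- deltas between consecutive snapshots
    SELECT ts,
           checkpoints_timed  - lag(checkpoints_timed)  OVER (ORDER BY ts) AS timed,
           checkpoints_req    - lag(checkpoints_req)    OVER (ORDER BY ts) AS requested,
           buffers_checkpoint - lag(buffers_checkpoint) OVER (ORDER BY ts) AS ckpt_buffers,
           buffers_backend    - lag(buffers_backend)    OVER (ORDER BY ts) AS backend_buffers
    FROM bgwriter_snap
    ORDER BY ts;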
{
"msg_contents": "Hi,\n\nOn 5.8.2013 17:55, Tasos Petalas wrote:\n> \n> Seems most of the I/O is caused by SELECT backend processes (READ), \n> whereas (WRITE) requests of wal writer and checkpointer processes do\n> not appear as top IO proceses (correct me if I am wrong)\n> \n> E.g. check the follwoing heavy write process that reports 0% I/O ...!\n> \n> 14:09:40 769 be/4 enterpri 0.00 B/s 33.65 M/s 0.00 % 0.00 %\n> postgres: wal writer process \n\nThat's because the WAL writer does sequential I/O (writes), which is a\nperfect match for SAS drives.\n\nOTOH the queries do a lot of random reads, which is a terrible match for\nspinners.\n\n> That however still doesn't say which processes are responsible\n> for that.\n> Is that background writer, backends running queries or what? The\n> iotop\n> should give you answer to this (or at least a hint).\n> \n> \n> It seems most of I/O reported from backends running heavy concurrent\n> select queries (See iotop attachment in previous email)\n\nYes, that seems to be the case.\n\n> Also, how are these volumes defined? Do they use distinct sets\n> of disks?\n> How many disks are used for each volume?\n> \n> \n> These are LUNs from SAN (we have dedicated 16 SAS 2,5'' disks in RAID-10\n> topology in Storage)\n\nI do understand these are LUNs from the SAN. I was asking whether there\nare separate sets of disks for the data directory (which you mentioned\nto be RAID-10) and pg_archives (which you mentioned to be RAID-5).\n\nAlthough I doubt it'd be possible to use the same disk for two LUNs.\n\n> > Yes we are using 15K SAS disks in RAID 10. (253-2 dev refers\n> to sar\n> > output for disks)\n> \n> OK, so the pg_archives is probably for xlog archive, right?\n> \n> NO.\n> /pg_archives is the target mount_point where we copy archive_logs to\n> (archive_command = 'test ! -f /pg_archives/%f && cp %p /pg_archives/%f')\n\n... which is exactly what WAL archive is. That's why the GUC is called\narchive_command.\n\n> I've checked the conf, and I think you should really consider\n> increasing\n> checkpoint_segments - it's set to 3 (= 64MB) but I think\n> something like\n> 32 (=512MB) or even more would be more appropriate.\n> \n> We use EDB dynatune. Actual setting can be found in file\n> (Ticket.Usual.Info.27.07.13.txt) of initial e-mail --> check show all;\n> section\n> Current checkpoint_segments is set to 64\n\nOK, I'm not familiar with dynatune, and I got confused by the\npostgresql.conf that you sent. 64 seems fine to me.\n\n> I see you've enabled log_checkpoints - can you check your logs\n> how often\n> the checkpoints happen?\n> \n> \n> This is the output of the checkpoints during peak hours (avg. every 2-5\n> minutes)\n> \n> 2013-08-02 14:00:20 UTC [767]: [19752]: [0]LOG: checkpoint complete:\n> wrote 55926 buffers (5.3%); 0 transaction log file(s) added, 0 removed,\n> 41 recycled; write=220.619 s, sync=\n> 5.443 s, total=226.152 s; sync files=220, longest=1.433 s, average=0.024 s\n> 2013-08-02 14:05:14 UTC [767]: [19754]: [0]LOG: checkpoint complete:\n> wrote 109628 buffers (10.5%); 0 transaction log file(s) added, 0\n> removed, 31 recycled; write=209.714 s, syn\n> c=9.513 s, total=219.252 s; sync files=222, longest=3.472 s, average=0.042 s\n\nMeh, seems OK to me. This was based on the incorrect number of\ncheckpoint segments ...\n> \n> \n> \n> Also, can you check pg_stat_bgwriter view? I'd bet the value in\n> checkpoints_timed is very low, compared to checkpoints_req. 
Or even\n> better, get the values from this view before / after running the\n> batch jobs.\n> \n> Results during load:\n> checkpoints_timed : 12432 , checkpoints_req = 3058\n\nAgain, seems fine.\n\n> In the afternoon it's a different story - for 253-2 it looks\n> like this:\n> \n> DEV tps rd_sec/s wr_sec/s await %util\n> 15:50:01 dev253-2 4742.91 33828.98 29156.17 84.84 105.14\n> 16:00:01 dev253-2 2781.05 12737.41 18878.52 19.24 80.53\n> 16:10:01 dev253-2 3661.51 20950.64 23758.96 36.86 99.03\n> 16:20:01 dev253-2 5011.45 32454.33 31895.05 72.75 102.38\n> 16:30:01 dev253-2 2638.08 14661.23 17853.16 25.24 75.64\n> 16:40:01 dev253-2 1988.95 5764.73 14190.12 45.05 58.80\n> 16:50:01 dev253-2 2185.15 88296.73 11806.38 7.46 84.37\n> 17:00:01 dev253-2 2031.19 12835.56 12997.34 8.90 82.62\n> 17:10:01 dev253-2 4009.24 34288.71 23974.92 38.07 103.01\n> 17:20:01 dev253-2 3605.86 26107.83 22457.41 45.76 90.90\n> 17:30:01 dev253-2 2550.47 7496.85 18267.07 19.10 65.87\n> \n> \n> This is when the actual problem arises\n\nWell, then I think it's mostly about the SELECT queries.\n\n> What I think you could/should do:\n> \n> * move pg_xlog to a separate device (not simply a volume on the SAN,\n> sharing disks with the other volumes - that won't give you\n> anything)\n> \n> Unfortunately we cannot do so at the moment (alll available SAN\n> resources are assigned to the pg_data directory of the server)\n>\n> I'd expect these changes to improve the afternoon peak, as it's\n> doing\n> about 50% writes. However I would not expect this to improve the\n> morning\n> peak, because that's doing a lot of reads (not writes).\n> \n> Afternoon peak is what we need to troubleshoot (will check if we can\n> assign pg_xlog to a different LUN - not an option currently)\n\nOK, understood. It's difficult to predict the gain and given the iotop\noutput it might even cause harm.\n\n> \n> Will SSD improve write performance? We are thinking of moving towards\n> this direction.\n\nIt'll certainly improve the random I/O in general, which is the main\nissue with SELECT queries. Sequential read/write improvement probably\nwon't be that significant.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 05 Aug 2013 22:28:37 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG performance issues related to storage I/O waits"
},
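Applying the (n/2) * per-disk IOPS rule quoted earlier in the thread to the 16 x 15k SAS array gives a rough ceiling for the random reads that dominate here; 200 IOPS per spindle is only a rule of thumb, and the SAN cache will absorb part of the load:

    SELECT 16 / 2 * 200 AS est_random_read_iops;
    -- roughly 1600, while the sar output above showed device tps peaks near 5000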
{
"msg_contents": "On Mon, Aug 5, 2013 at 11:28 PM, Tomas Vondra <[email protected]> wrote:\n\n> Hi,\n>\n> On 5.8.2013 17:55, Tasos Petalas wrote:\n> >\n> > Seems most of the I/O is caused by SELECT backend processes (READ),\n> > whereas (WRITE) requests of wal writer and checkpointer processes do\n> > not appear as top IO proceses (correct me if I am wrong)\n> >\n> > E.g. check the follwoing heavy write process that reports 0% I/O ...!\n> >\n> > 14:09:40 769 be/4 enterpri 0.00 B/s 33.65 M/s 0.00 % 0.00 %\n> > postgres: wal writer process\n>\n> That's because the WAL writer does sequential I/O (writes), which is a\n> perfect match for SAS drives.\n>\n> OTOH the queries do a lot of random reads, which is a terrible match for\n> spinners.\n>\n> > That however still doesn't say which processes are responsible\n> > for that.\n> > Is that background writer, backends running queries or what? The\n> > iotop\n> > should give you answer to this (or at least a hint).\n> >\n> >\n> > It seems most of I/O reported from backends running heavy concurrent\n> > select queries (See iotop attachment in previous email)\n>\n> Yes, that seems to be the case.\n>\n> > Also, how are these volumes defined? Do they use distinct sets\n> > of disks?\n> > How many disks are used for each volume?\n> >\n> >\n> > These are LUNs from SAN (we have dedicated 16 SAS 2,5'' disks in RAID-10\n> > topology in Storage)\n>\n> I do understand these are LUNs from the SAN. I was asking whether there\n> are separate sets of disks for the data directory (which you mentioned\n> to be RAID-10) and pg_archives (which you mentioned to be RAID-5).\n>\n> Although I doubt it'd be possible to use the same disk for two LUNs.\n>\n\nSorry I didn't get you question right. Yes there are different disk sets\nfor RAID-10 (data) and RAID-5 (wal archives)\n\n>\n> > > Yes we are using 15K SAS disks in RAID 10. (253-2 dev refers\n> > to sar\n> > > output for disks)\n> >\n> > OK, so the pg_archives is probably for xlog archive, right?\n> >\n> > NO.\n> > /pg_archives is the target mount_point where we copy archive_logs to\n> > (archive_command = 'test ! -f /pg_archives/%f && cp %p /pg_archives/%f')\n>\n> ... which is exactly what WAL archive is. That's why the GUC is called\n> archive_command.\n>\n\nAgain misunderstood your question. I wrongly got you're asking for separate\nLUN for WAL (pg_xlog to a separate device and not WAL archives)\n\n>\n> > I've checked the conf, and I think you should really consider\n> > increasing\n> > checkpoint_segments - it's set to 3 (= 64MB) but I think\n> > something like\n> > 32 (=512MB) or even more would be more appropriate.\n> >\n> > We use EDB dynatune. Actual setting can be found in file\n> > (Ticket.Usual.Info.27.07.13.txt) of initial e-mail --> check show all;\n> > section\n> > Current checkpoint_segments is set to 64\n>\n> OK, I'm not familiar with dynatune, and I got confused by the\n> postgresql.conf that you sent. 64 seems fine to me.\n>\n\nUnderstood. EDB dynatune is a specific feature that ships with EDB PG\nversions and suppose to take care of most of the PG conf parameters (found\nin postgresql.conf) automatically and adjust them in run time (You can\nalways override them).\n\n\"Show all\" command in psql promt gives you the actual values at any given\ntime.\n\n\n> > I see you've enabled log_checkpoints - can you check your logs\n> > how often\n> > the checkpoints happen?\n> >\n> >\n> > This is the output of the checkpoints during peak hours (avg. 
every 2-5\n> > minutes)\n> >\n> > 2013-08-02 14:00:20 UTC [767]: [19752]: [0]LOG: checkpoint complete:\n> > wrote 55926 buffers (5.3%); 0 transaction log file(s) added, 0 removed,\n> > 41 recycled; write=220.619 s, sync=\n> > 5.443 s, total=226.152 s; sync files=220, longest=1.433 s, average=0.024\n> s\n> > 2013-08-02 14:05:14 UTC [767]: [19754]: [0]LOG: checkpoint complete:\n> > wrote 109628 buffers (10.5%); 0 transaction log file(s) added, 0\n> > removed, 31 recycled; write=209.714 s, syn\n> > c=9.513 s, total=219.252 s; sync files=222, longest=3.472 s,\n> average=0.042 s\n>\n> Meh, seems OK to me. This was based on the incorrect number of\n> checkpoint segments ...\n> >\n> >\n> >\n> > Also, can you check pg_stat_bgwriter view? I'd bet the value in\n> > checkpoints_timed is very low, compared to checkpoints_req. Or\n> even\n> > better, get the values from this view before / after running the\n> > batch jobs.\n> >\n> > Results during load:\n> > checkpoints_timed : 12432 , checkpoints_req = 3058\n>\n> Again, seems fine.\n>\n>\nUpdate values for pg_stat_bgwriter after batch activity (off-peak)\n checkpoints_timed : 12580 checkpoints_req : 3070\n\nI don't see any significant difference here.\n\n\n> > In the afternoon it's a different story - for 253-2 it looks\n> > like this:\n> >\n> > DEV tps rd_sec/s wr_sec/s await\n> %util\n> > 15:50:01 dev253-2 4742.91 33828.98 29156.17 84.84\n> 105.14\n> > 16:00:01 dev253-2 2781.05 12737.41 18878.52 19.24\n> 80.53\n> > 16:10:01 dev253-2 3661.51 20950.64 23758.96 36.86\n> 99.03\n> > 16:20:01 dev253-2 5011.45 32454.33 31895.05 72.75\n> 102.38\n> > 16:30:01 dev253-2 2638.08 14661.23 17853.16 25.24\n> 75.64\n> > 16:40:01 dev253-2 1988.95 5764.73 14190.12 45.05\n> 58.80\n> > 16:50:01 dev253-2 2185.15 88296.73 11806.38 7.46\n> 84.37\n> > 17:00:01 dev253-2 2031.19 12835.56 12997.34 8.90\n> 82.62\n> > 17:10:01 dev253-2 4009.24 34288.71 23974.92 38.07\n> 103.01\n> > 17:20:01 dev253-2 3605.86 26107.83 22457.41 45.76\n> 90.90\n> > 17:30:01 dev253-2 2550.47 7496.85 18267.07 19.10\n> 65.87\n> >\n> >\n> > This is when the actual problem arises\n>\n> Well, then I think it's mostly about the SELECT queries.\n>\n> > What I think you could/should do:\n> >\n> > * move pg_xlog to a separate device (not simply a volume on the\n> SAN,\n> > sharing disks with the other volumes - that won't give you\n> > anything)\n> >\n> > Unfortunately we cannot do so at the moment (alll available SAN\n> > resources are assigned to the pg_data directory of the server)\n> >\n> > I'd expect these changes to improve the afternoon peak, as it's\n> > doing\n> > about 50% writes. However I would not expect this to improve the\n> > morning\n> > peak, because that's doing a lot of reads (not writes).\n> >\n> > Afternoon peak is what we need to troubleshoot (will check if we can\n> > assign pg_xlog to a different LUN - not an option currently)\n>\n> OK, understood. It's difficult to predict the gain and given the iotop\n> output it might even cause harm.\n>\n> >\n> > Will SSD improve write performance? We are thinking of moving towards\n> > this direction.\n>\n> It'll certainly improve the random I/O in general, which is the main\n> issue with SELECT queries. 
Sequential read/write improvement probably\n> won't be that significant.\n>\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Mon, Aug 5, 2013 at 11:28 PM, Tomas Vondra <[email protected]> wrote:\nHi,\n\nOn 5.8.2013 17:55, Tasos Petalas wrote:\n>\n> Seems most of the I/O is caused by SELECT backend processes (READ),\n> whereas (WRITE) requests of wal writer and checkpointer processes do\n> not appear as top IO proceses (correct me if I am wrong)\n>\n> E.g. check the follwoing heavy write process that reports 0% I/O ...!\n>\n> 14:09:40 769 be/4 enterpri 0.00 B/s 33.65 M/s 0.00 % 0.00 %\n> postgres: wal writer process\n\nThat's because the WAL writer does sequential I/O (writes), which is a\nperfect match for SAS drives.\n\nOTOH the queries do a lot of random reads, which is a terrible match for\nspinners.\n\n> That however still doesn't say which processes are responsible\n> for that.\n> Is that background writer, backends running queries or what? The\n> iotop\n> should give you answer to this (or at least a hint).\n>\n>\n> It seems most of I/O reported from backends running heavy concurrent\n> select queries (See iotop attachment in previous email)\n\nYes, that seems to be the case.\n\n> Also, how are these volumes defined? Do they use distinct sets\n> of disks?\n> How many disks are used for each volume?\n>\n>\n> These are LUNs from SAN (we have dedicated 16 SAS 2,5'' disks in RAID-10\n> topology in Storage)\n\nI do understand these are LUNs from the SAN. I was asking whether there\nare separate sets of disks for the data directory (which you mentioned\nto be RAID-10) and pg_archives (which you mentioned to be RAID-5).\n\nAlthough I doubt it'd be possible to use the same disk for two LUNs.Sorry I didn't get you question right. Yes there are different disk sets for RAID-10 (data) and RAID-5 (wal archives) \n\n\n> > Yes we are using 15K SAS disks in RAID 10. (253-2 dev refers\n> to sar\n> > output for disks)\n>\n> OK, so the pg_archives is probably for xlog archive, right?\n>\n> NO.\n> /pg_archives is the target mount_point where we copy archive_logs to\n> (archive_command = 'test ! -f /pg_archives/%f && cp %p /pg_archives/%f')\n\n... which is exactly what WAL archive is. That's why the GUC is called\narchive_command.Again misunderstood your question. I wrongly got you're asking for separate LUN for WAL (pg_xlog to a separate device and not WAL archives) \n\n> I've checked the conf, and I think you should really consider\n> increasing\n> checkpoint_segments - it's set to 3 (= 64MB) but I think\n> something like\n> 32 (=512MB) or even more would be more appropriate.\n>\n> We use EDB dynatune. Actual setting can be found in file\n> (Ticket.Usual.Info.27.07.13.txt) of initial e-mail --> check show all;\n> section\n> Current checkpoint_segments is set to 64\n\nOK, I'm not familiar with dynatune, and I got confused by the\npostgresql.conf that you sent. 64 seems fine to me.Understood. 
EDB dynatune is a specific feature that ships with EDB PG versions and suppose to take care of most of the PG conf parameters (found in postgresql.conf) automatically and adjust them in run time (You can always override them).\n\"Show all\" command in psql promt gives you the actual values at any given time.\n\n> I see you've enabled log_checkpoints - can you check your logs\n> how often\n> the checkpoints happen?\n>\n>\n> This is the output of the checkpoints during peak hours (avg. every 2-5\n> minutes)\n>\n> 2013-08-02 14:00:20 UTC [767]: [19752]: [0]LOG: checkpoint complete:\n> wrote 55926 buffers (5.3%); 0 transaction log file(s) added, 0 removed,\n> 41 recycled; write=220.619 s, sync=\n> 5.443 s, total=226.152 s; sync files=220, longest=1.433 s, average=0.024 s\n> 2013-08-02 14:05:14 UTC [767]: [19754]: [0]LOG: checkpoint complete:\n> wrote 109628 buffers (10.5%); 0 transaction log file(s) added, 0\n> removed, 31 recycled; write=209.714 s, syn\n> c=9.513 s, total=219.252 s; sync files=222, longest=3.472 s, average=0.042 s\n\nMeh, seems OK to me. This was based on the incorrect number of\ncheckpoint segments ...\n>\n>\n>\n> Also, can you check pg_stat_bgwriter view? I'd bet the value in\n> checkpoints_timed is very low, compared to checkpoints_req. Or even\n> better, get the values from this view before / after running the\n> batch jobs.\n>\n> Results during load:\n> checkpoints_timed : 12432 , checkpoints_req = 3058\n\nAgain, seems fine.\n Update values for pg_stat_bgwriter after batch activity (off-peak) checkpoints_timed : 12580 checkpoints_req : 3070I don't see any significant difference here.\n \n> In the afternoon it's a different story - for 253-2 it looks\n> like this:\n>\n> DEV tps rd_sec/s wr_sec/s await %util\n> 15:50:01 dev253-2 4742.91 33828.98 29156.17 84.84 105.14\n> 16:00:01 dev253-2 2781.05 12737.41 18878.52 19.24 80.53\n> 16:10:01 dev253-2 3661.51 20950.64 23758.96 36.86 99.03\n> 16:20:01 dev253-2 5011.45 32454.33 31895.05 72.75 102.38\n> 16:30:01 dev253-2 2638.08 14661.23 17853.16 25.24 75.64\n> 16:40:01 dev253-2 1988.95 5764.73 14190.12 45.05 58.80\n> 16:50:01 dev253-2 2185.15 88296.73 11806.38 7.46 84.37\n> 17:00:01 dev253-2 2031.19 12835.56 12997.34 8.90 82.62\n> 17:10:01 dev253-2 4009.24 34288.71 23974.92 38.07 103.01\n> 17:20:01 dev253-2 3605.86 26107.83 22457.41 45.76 90.90\n> 17:30:01 dev253-2 2550.47 7496.85 18267.07 19.10 65.87\n>\n>\n> This is when the actual problem arises\n\nWell, then I think it's mostly about the SELECT queries.\n\n> What I think you could/should do:\n>\n> * move pg_xlog to a separate device (not simply a volume on the SAN,\n> sharing disks with the other volumes - that won't give you\n> anything)\n>\n> Unfortunately we cannot do so at the moment (alll available SAN\n> resources are assigned to the pg_data directory of the server)\n>\n> I'd expect these changes to improve the afternoon peak, as it's\n> doing\n> about 50% writes. However I would not expect this to improve the\n> morning\n> peak, because that's doing a lot of reads (not writes).\n>\n> Afternoon peak is what we need to troubleshoot (will check if we can\n> assign pg_xlog to a different LUN - not an option currently)\n\nOK, understood. It's difficult to predict the gain and given the iotop\noutput it might even cause harm.\n\n>\n> Will SSD improve write performance? We are thinking of moving towards\n> this direction.\n\nIt'll certainly improve the random I/O in general, which is the main\nissue with SELECT queries. 
Sequential read/write improvement probably\nwon't be that significant.\n\nTomas\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 6 Aug 2013 07:46:58 +0300",
"msg_from": "Tasos Petalas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG performance issues related to storage I/O waits"
}
] |
[
{
"msg_contents": "Hello, i have a problem with planning time, I do not understand why this can\nhappen.\n\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real\n(Debian 4.4.5-8) 4.4.5, 64-bit\n\n# explain\n# select i.item_id, u.user_id from items i\n# left join users u on u.user_id = i.user_id\n# where item_id = 169946840;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1\nwidth=16)\n Index Cond: (item_id = 169946840)\n -> Index Only Scan using users_user_id_pkey on users u\n (cost=0.00..38.30 rows=1 width=8)\n Index Cond: (user_id = i.user_id)\n\ntime: 55919.910 ms\n\n# set enable_mergejoin to off;\n\n# explain\nselect i.item_id, u.user_id from items i\nleft join users u on u.user_id = i.user_id\nwhere item_id = 169946840;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1\nwidth=16)\n Index Cond: (item_id = 169946840)\n -> Index Only Scan using users_user_id_pkey on users u\n (cost=0.00..38.30 rows=1 width=8)\n Index Cond: (user_id = i.user_id)\n\ntime: 28.874 ms\n\n-- \nSergey Burladyan\n\nHello, i have a problem with planning time, I do not understand why this can happen.\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit# explain# select i.item_id, u.user_id from items i\n# left join users u on u.user_id = i.user_id# where item_id = 169946840; QUERY PLAN ----------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16) -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1 width=16) Index Cond: (item_id = 169946840)\n -> Index Only Scan using users_user_id_pkey on users u (cost=0.00..38.30 rows=1 width=8) Index Cond: (user_id = i.user_id)time: 55919.910 ms\n# set enable_mergejoin to off;# explainselect i.item_id, u.user_id from items ileft join users u on u.user_id = i.user_idwhere item_id = 169946840; QUERY PLAN \n---------------------------------------------------------------------------------------------- Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16) -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1 width=16)\n Index Cond: (item_id = 169946840) -> Index Only Scan using users_user_id_pkey on users u (cost=0.00..38.30 rows=1 width=8) Index Cond: (user_id = i.user_id)\ntime: 28.874 ms-- Sergey Burladyan",
"msg_date": "Thu, 1 Aug 2013 13:55:07 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Le 01/08/2013 11:55, Sergey Burladyan a écrit :\n> Hello, i have a problem with planning time, I do not understand why this\n> can happen.\n> \n> PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real\n> (Debian 4.4.5-8) 4.4.5, 64-bit\n> \n> # explain\n> # select i.item_id, u.user_id from items i\n> # left join users u on u.user_id = i.user_id\n> # where item_id = 169946840;\n> QUERY PLAN \n> \n> ----------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n> -> Index Scan using items_item_ux on items i (cost=0.00..358.84\n> rows=1 width=16)\n> Index Cond: (item_id = 169946840)\n> -> Index Only Scan using users_user_id_pkey on users u\n> (cost=0.00..38.30 rows=1 width=8)\n> Index Cond: (user_id = i.user_id)\n> \n> time: 55919.910 ms\n> \n> # set enable_mergejoin to off;\n> \n> # explain\n> select i.item_id, u.user_id from items i\n> left join users u on u.user_id = i.user_id\n> where item_id = 169946840;\n> QUERY PLAN \n> \n> ----------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n> -> Index Scan using items_item_ux on items i (cost=0.00..358.84\n> rows=1 width=16)\n> Index Cond: (item_id = 169946840)\n> -> Index Only Scan using users_user_id_pkey on users u\n> (cost=0.00..38.30 rows=1 width=8)\n> Index Cond: (user_id = i.user_id)\n> \n> time: 28.874 ms\n> \n> -- \n> Sergey Burladyan\n\nHello,\n\nIf you leave enable_mergejoin to on, what happens if you run the explain\ntwo time in a row ? Do you get the same planning time ?\n\nAt first look, this reminds me some catalog bloat issue. Can you provide\nthe result of these queries :\nSELECT pg_size_pretty(pg_table_size('pg_class')) AS size_pg_class;\nSELECT pg_size_pretty(pg_table_size('pg_attribute')) AS size_pg_attribute;\n\nThanks\n-- \nThomas Reiss\nConsultant Dalibo\nhttp://dalibo.com - http://dalibo.org\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Aug 2013 12:04:15 +0200",
"msg_from": "Thomas Reiss <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55\n seconds"
},
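When catalog bloat is suspected, the same size check is usually extended to pg_statistic (which the planner reads for every selectivity estimate) and to the catalog indexes. A minimal sketch using only standard size functions:

SELECT pg_size_pretty(pg_table_size('pg_statistic')) AS size_pg_statistic;
SELECT pg_size_pretty(pg_indexes_size('pg_class')) AS idx_size_pg_class;
SELECT pg_size_pretty(pg_indexes_size('pg_attribute')) AS idx_size_pg_attribute;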
{
"msg_contents": "01.08.2013 14:05 пользователь \"Thomas Reiss\" <[email protected]>\nнаписал:\n>\n> If you leave enable_mergejoin to on, what happens if you run the explain\n> two time in a row ? Do you get the same planning time ?\n\nYes, I get the same planning time.\n\n\n01.08.2013 14:05 пользователь \"Thomas Reiss\" <[email protected]> написал:\n>\n> If you leave enable_mergejoin to on, what happens if you run the explain\n> two time in a row ? Do you get the same planning time ?\nYes, I get the same planning time.",
"msg_date": "Thu, 1 Aug 2013 15:27:14 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 2:04 PM, Thomas Reiss <[email protected]>wrote:\n\n> Le 01/08/2013 11:55, Sergey Burladyan a écrit :\n> At first look, this reminds me some catalog bloat issue. Can you provide\n> the result of these queries :\n> SELECT pg_size_pretty(pg_table_size('pg_class')) AS size_pg_class;\n> SELECT pg_size_pretty(pg_table_size('pg_attribute')) AS size_pg_attribute;\n>\n\nSELECT pg_size_pretty(pg_table_size('pg_class')) AS size_pg_class; --- '16\nMB'\nSELECT pg_size_pretty(pg_table_size('pg_attribute')) AS size_pg_attribute;\n--- '63 MB'\n\n-- \nSergey Burladyan\n\nOn Thu, Aug 1, 2013 at 2:04 PM, Thomas Reiss <[email protected]> wrote:\nLe 01/08/2013 11:55, Sergey Burladyan a écrit :\nAt first look, this reminds me some catalog bloat issue. Can you provide\nthe result of these queries :\nSELECT pg_size_pretty(pg_table_size('pg_class')) AS size_pg_class;\nSELECT pg_size_pretty(pg_table_size('pg_attribute')) AS size_pg_attribute;SELECT pg_size_pretty(pg_table_size('pg_class')) AS size_pg_class; --- '16 MB'\nSELECT pg_size_pretty(pg_table_size('pg_attribute')) AS size_pg_attribute; --- '63 MB'-- Sergey Burladyan",
"msg_date": "Thu, 1 Aug 2013 15:45:15 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "I find another query with big planning time:\nexplain select * from xview.user_items_v v where ( v.item_id = 132358330 );\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..363.28 rows=1 width=44)\n Join Filter: (ief.item_id = ix.item_id)\n -> Index Scan using items_item_ux on items ix (cost=0.00..359.20\nrows=1 width=36)\n Index Cond: (item_id = 132358330)\n Filter: ((xa_txtime IS NULL) AND (user_id > 0) AND (status_id <\n20))\n -> Index Scan using item_enabled_flags_item_id_idx on\nitem_enabled_flags ief (cost=0.00..4.06 rows=1 width=8)\n Index Cond: (item_id = 132358330)\n(7 rows)\n\nTime: 44037.758 ms\n\nlooks like planning algorithm hang on 'items' table statistics. Setting\nenable_mergejoin to off does not help with this query.\n\n-- \nSergey Burladyan\n\nI find another query with big planning time:explain select * from xview.user_items_v v where ( v.item_id = 132358330 ); QUERY PLAN \n------------------------------------------------------------------------------------------------------------------- Nested Loop Left Join (cost=0.00..363.28 rows=1 width=44)\n Join Filter: (ief.item_id = ix.item_id) -> Index Scan using items_item_ux on items ix (cost=0.00..359.20 rows=1 width=36) Index Cond: (item_id = 132358330)\n Filter: ((xa_txtime IS NULL) AND (user_id > 0) AND (status_id < 20)) -> Index Scan using item_enabled_flags_item_id_idx on item_enabled_flags ief (cost=0.00..4.06 rows=1 width=8)\n Index Cond: (item_id = 132358330)(7 rows)Time: 44037.758 ms\nlooks like planning algorithm hang on 'items' table statistics. Setting enable_mergejoin to off does not help with this query.-- Sergey Burladyan",
"msg_date": "Thu, 1 Aug 2013 18:30:39 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Sergey Burladyan <[email protected]> writes:\n\n> # explain\n> # select i.item_id, u.user_id from items i\n> # left join users u on u.user_id = i.user_id\n> # where item_id = 169946840;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n> -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1 width=16)\n> Index Cond: (item_id = 169946840)\n> -> Index Only Scan using users_user_id_pkey on users u (cost=0.00..38.30 rows=1 width=8)\n> Index Cond: (user_id = i.user_id)\n>\n> time: 55919.910 ms\n\nWhile running this EXPLAIN backend use disk for a long time:\n TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND \n21638 be/4 postgres 2.10 M/s 9.45 M/s 0.00 % 69.04 % postgres: postgres xxxxx xxx.xxx.xxx.xxx(50987) EXPLAIN\n\nWhy it read and write to disk 10 megabytes per second for EXPLAIN query? Cannot understand what is going on here :(\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Aug 2013 19:17:27 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
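A rough way to confirm that I/O from the SQL side, without strace, is to difference pg_stat_database counters around the slow EXPLAIN (run on an otherwise quiet system, from a second session, before and after):

SELECT blks_read, blks_hit, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

A large jump in blks_read while only a plain EXPLAIN ran points at planner-side index/heap probes rather than query execution.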
{
"msg_contents": "On Thu, Aug 01, 2013 at 07:17:27PM +0400, Sergey Burladyan wrote:\n- Sergey Burladyan <[email protected]> writes:\n- \n- > # explain\n- > # select i.item_id, u.user_id from items i\n- > # left join users u on u.user_id = i.user_id\n- > # where item_id = 169946840;\n- > QUERY PLAN \n- > ----------------------------------------------------------------------------------------------\n- > Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n- > -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1 width=16)\n- > Index Cond: (item_id = 169946840)\n- > -> Index Only Scan using users_user_id_pkey on users u (cost=0.00..38.30 rows=1 width=8)\n- > Index Cond: (user_id = i.user_id)\n- >\n- > time: 55919.910 ms\n- \n- While running this EXPLAIN backend use disk for a long time:\n- TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND \n- 21638 be/4 postgres 2.10 M/s 9.45 M/s 0.00 % 69.04 % postgres: postgres xxxxx xxx.xxx.xxx.xxx(50987) EXPLAIN\n- \n- Why it read and write to disk 10 megabytes per second for EXPLAIN query? Cannot understand what is going on here :(\n\n\nThat sounds familiar - is it possible you're running into this?\nhttp://www.postgresql.org/message-id/[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 10:58:03 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 8:17 AM, Sergey Burladyan <[email protected]> wrote:\n> Sergey Burladyan <[email protected]> writes:\n>\n>> # explain\n>> # select i.item_id, u.user_id from items i\n>> # left join users u on u.user_id = i.user_id\n>> # where item_id = 169946840;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------\n>> Nested Loop Left Join (cost=0.00..397.14 rows=1 width=16)\n>> -> Index Scan using items_item_ux on items i (cost=0.00..358.84 rows=1 width=16)\n>> Index Cond: (item_id = 169946840)\n>> -> Index Only Scan using users_user_id_pkey on users u (cost=0.00..38.30 rows=1 width=8)\n>> Index Cond: (user_id = i.user_id)\n>>\n>> time: 55919.910 ms\n>\n> While running this EXPLAIN backend use disk for a long time:\n> TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND\n> 21638 be/4 postgres 2.10 M/s 9.45 M/s 0.00 % 69.04 % postgres: postgres xxxxx xxx.xxx.xxx.xxx(50987) EXPLAIN\n>\n> Why it read and write to disk 10 megabytes per second for EXPLAIN query? Cannot understand what is going on here :(\n\nI'd use strace to find what file handle is being read and written, and\nlsof to figure out what file that is.\n\nIt looks like it is more write than read, which does seem strange.\n\nAny chance you can create a self-contained test case?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 11:23:54 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n\n> I'd use strace to find what file handle is being read and written, and\n> lsof to figure out what file that is.\n\nI use strace, it is more read then write:\n$ cut -d '(' -f 1 /var/tmp/pg.trace | sort | uniq -c | sort -n\n 49 select\n 708 close\n 1021 open\n 7356 write\n 212744 read\n 219650 lseek\n\ntop reads:\n7859 read(150 open(\"base/16444/17685.129\", O_RDWR|O_CREAT, 0600) = 150\n9513 read(149 open(\"base/16444/17685.128\", O_RDWR|O_CREAT, 0600) = 149\n10529 read(151 open(\"base/16444/17685.130\", O_RDWR|O_CREAT, 0600) = 151\n12155 read(152 open(\"base/16444/17685.131\", O_RDWR|O_CREAT, 0600) = 152\n12768 read(154 open(\"base/16444/17685.133\", O_RDWR|O_CREAT, 0600) = 154\n16210 read(153 open(\"base/16444/17685.132\", O_RDWR|O_CREAT, 0600) = 153\n\nit is 'items' table:\nselect relname from pg_class where relfilenode = 17685;\n relname \n---------\n items\n\neach read is 8192 bytes, so for EXPLAIN query with two simple index scan, *without* ANALYZE postgres\nread (7859 + 9513 + 10529 + 12155 + 12768 + 16210) * 8192 = 565 526 528 bytes from it.\n\n> It looks like it is more write than read, which does seem strange.\n\nWhy it read something for simple EXPLAIN, without real executing query? :-)\n\n> Any chance you can create a self-contained test case?\n\nI think I cannot do this, it is ~1 Tb heavily load database. This is at standby server.\n\nPS: two strace for quick and slow explain:\n\nexplain\nselect i.item_id from items i\nwhere item_id = 169946840\n\n$ cut -d '(' -f 1 /var/tmp/pg-all-normal.trace | sort | uniq -c\n 313 lseek\n 308 open\n 2 read\n 13 recvfrom\n 6 sendto\n\nexplain\nselect i.item_id, u.user_id from items i\nleft join users u on u.user_id = i.user_id\nwhere item_id = 169946840\n\n$ cut -d '(' -f 1 /var/tmp/pg-all-slow.trace | sort | uniq -c\n 963 close\n 1 fsync\n5093393 lseek\n 925 open\n6004995 read\n 14 recvfrom\n 1 rt_sigreturn\n 9 select\n 4361 semop\n 7 sendto\n 1 --- SIGUSR1 \n 685605 write\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Aug 2013 23:13:26 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 12:13 PM, Sergey Burladyan <[email protected]> wrote:\n> Jeff Janes <[email protected]> writes:\n>\n>> I'd use strace to find what file handle is being read and written, and\n>> lsof to figure out what file that is.\n>\n> I use strace, it is more read then write:\n> $ cut -d '(' -f 1 /var/tmp/pg.trace | sort | uniq -c | sort -n\n> 49 select\n> 708 close\n> 1021 open\n> 7356 write\n> 212744 read\n> 219650 lseek\n\nBased on your iotop (or whatever that was that you posted previously)\nmost of the reads must be coming from the file system cache.\n\n>\n> top reads:\n> 7859 read(150 open(\"base/16444/17685.129\", O_RDWR|O_CREAT, 0600) = 150\n> 9513 read(149 open(\"base/16444/17685.128\", O_RDWR|O_CREAT, 0600) = 149\n> 10529 read(151 open(\"base/16444/17685.130\", O_RDWR|O_CREAT, 0600) = 151\n> 12155 read(152 open(\"base/16444/17685.131\", O_RDWR|O_CREAT, 0600) = 152\n> 12768 read(154 open(\"base/16444/17685.133\", O_RDWR|O_CREAT, 0600) = 154\n> 16210 read(153 open(\"base/16444/17685.132\", O_RDWR|O_CREAT, 0600) = 153\n>\n> it is 'items' table:\n> select relname from pg_class where relfilenode = 17685;\n> relname\n> ---------\n> items\n>\n> each read is 8192 bytes, so for EXPLAIN query with two simple index scan, *without* ANALYZE postgres\n> read (7859 + 9513 + 10529 + 12155 + 12768 + 16210) * 8192 = 565 526 528 bytes from it.\n>\n>> It looks like it is more write than read, which does seem strange.\n>\n> Why it read something for simple EXPLAIN, without real executing query? :-)\n\nI figured it was reading some system catalogs or something. I don't\nknow why it would be reading the table files. Or writing much of\nanything, either.\n\nI think the next step would be to run gdb -p <pid> (but don't start\ngdb until backend is in the middle of a slow explain), then:\n\nbreak read\nc\nbt\n\nThen repeat the c and bt combination a few more times, to build up a\ndataset on what the call stack is which is causing the reads to\nhappen.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 12:51:04 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n\n> I think the next step would be to run gdb -p <pid> (but don't start\n> gdb until backend is in the middle of a slow explain), then:\n\nSorry, I am lack debug symbols, so call trace is incomplete:\n\nexplain select i.item_id, u.user_id from items i left join users u on u.user_id = i.user_id where item_id = 169946840\n\n#0 0x00007ff766967620 in read () from /lib/libc.so.6\n#1 0x00007ff7689cfc25 in FileRead ()\n#2 0x00007ff7689ea2f6 in mdread ()\n#3 0x00007ff7689cc473 in ?? ()\n#4 0x00007ff7689ccf54 in ReadBufferExtended ()\n#5 0x00007ff7688050ca in index_fetch_heap ()\n#6 0x00007ff76880523e in index_getnext ()\n#7 0x00007ff768a63306 in ?? ()\n#8 0x00007ff768a67624 in ?? ()\n#9 0x00007ff768a67d9c in ?? ()\n#10 0x00007ff768a68376 in mergejoinscansel ()\n#11 0x00007ff76896faa6 in initial_cost_mergejoin ()\n#12 0x00007ff768977695 in ?? ()\n#13 0x00007ff76897816c in add_paths_to_joinrel ()\n#14 0x00007ff76897981b in make_join_rel ()\n#15 0x00007ff768979ac9 in join_search_one_level ()\n#16 0x00007ff76896a3ab in standard_join_search ()\n#17 0x00007ff7689837c1 in query_planner ()\n#18 0x00007ff768985260 in ?? ()\n#19 0x00007ff7689870a9 in subquery_planner ()\n#20 0x00007ff76898736e in standard_planner ()\n#21 0x00007ff7689ef3ce in pg_plan_query ()\n#22 0x00007ff7688c94a3 in ?? ()\n#23 0x00007ff7688c9809 in ExplainQuery ()\n#24 0x00007ff7648095e2 in ?? () from /usr/lib/postgresql/9.2/lib/pg_stat_statements.so\n#25 0x00007ff7689f1f27 in ?? ()\n#26 0x00007ff7689f3295 in ?? ()\n#27 0x00007ff7689f388f in PortalRun ()\n#28 0x00007ff7689ef96d in ?? ()\n#29 0x00007ff7689f0950 in PostgresMain ()\n#30 0x00007ff7689aa7a3 in ?? ()\n#31 0x00007ff7689ad73c in PostmasterMain ()\n#32 0x00007ff768948e4b in main ()\n\n#0 0x00007ff766973950 in lseek64 () from /lib/libc.so.6\n#1 0x00007ff7689cf88d in FileSeek ()\n#2 0x00007ff7689ea09c in mdwrite ()\n#3 0x00007ff7689cb12f in ?? ()\n#4 0x00007ff7689cca43 in ?? ()\n#5 0x00007ff7689ccf54 in ReadBufferExtended ()\n#6 0x00007ff7688050ca in index_fetch_heap ()\n#7 0x00007ff76880523e in index_getnext ()\n#8 0x00007ff768a63306 in ?? ()\n#9 0x00007ff768a67624 in ?? ()\n#10 0x00007ff768a67d9c in ?? ()\n#11 0x00007ff768a68376 in mergejoinscansel ()\n#12 0x00007ff76896faa6 in initial_cost_mergejoin ()\n#13 0x00007ff768977695 in ?? ()\n#14 0x00007ff76897816c in add_paths_to_joinrel ()\n#15 0x00007ff76897981b in make_join_rel ()\n#16 0x00007ff768979ac9 in join_search_one_level ()\n#17 0x00007ff76896a3ab in standard_join_search ()\n#18 0x00007ff7689837c1 in query_planner ()\n#19 0x00007ff768985260 in ?? ()\n#20 0x00007ff7689870a9 in subquery_planner ()\n#21 0x00007ff76898736e in standard_planner ()\n#22 0x00007ff7689ef3ce in pg_plan_query ()\n#23 0x00007ff7688c94a3 in ?? ()\n#24 0x00007ff7688c9809 in ExplainQuery ()\n#25 0x00007ff7648095e2 in ?? () from /usr/lib/postgresql/9.2/lib/pg_stat_statements.so\n#26 0x00007ff7689f1f27 in ?? ()\n#27 0x00007ff7689f3295 in ?? ()\n#28 0x00007ff7689f388f in PortalRun ()\n#29 0x00007ff7689ef96d in ?? ()\n#30 0x00007ff7689f0950 in PostgresMain ()\n#31 0x00007ff7689aa7a3 in ?? ()\n#32 0x00007ff7689ad73c in PostmasterMain ()\n#33 0x00007ff768948e4b in main ()\n\n#0 0x00007ff766973950 in lseek64 () from /lib/libc.so.6\n#1 0x00007ff7689cf88d in FileSeek ()\n#2 0x00007ff7689ea2b9 in mdread ()\n#3 0x00007ff7689cc473 in ?? ()\n#4 0x00007ff7689ccf54 in ReadBufferExtended ()\n#5 0x00007ff7688050ca in index_fetch_heap ()\n#6 0x00007ff76880523e in index_getnext ()\n#7 0x00007ff768a63306 in ?? 
()\n#8 0x00007ff768a67624 in ?? ()\n#9 0x00007ff768a67d9c in ?? ()\n#10 0x00007ff768a68376 in mergejoinscansel ()\n#11 0x00007ff76896faa6 in initial_cost_mergejoin ()\n#12 0x00007ff768977695 in ?? ()\n#13 0x00007ff76897816c in add_paths_to_joinrel ()\n#14 0x00007ff76897981b in make_join_rel ()\n#15 0x00007ff768979ac9 in join_search_one_level ()\n#16 0x00007ff76896a3ab in standard_join_search ()\n#17 0x00007ff7689837c1 in query_planner ()\n#18 0x00007ff768985260 in ?? ()\n#19 0x00007ff7689870a9 in subquery_planner ()\n#20 0x00007ff76898736e in standard_planner ()\n#21 0x00007ff7689ef3ce in pg_plan_query ()\n#22 0x00007ff7688c94a3 in ?? ()\n#23 0x00007ff7688c9809 in ExplainQuery ()\n#24 0x00007ff7648095e2 in ?? () from /usr/lib/postgresql/9.2/lib/pg_stat_statements.so\n#25 0x00007ff7689f1f27 in ?? ()\n#26 0x00007ff7689f3295 in ?? ()\n#27 0x00007ff7689f388f in PortalRun ()\n#28 0x00007ff7689ef96d in ?? ()\n#29 0x00007ff7689f0950 in PostgresMain ()\n#30 0x00007ff7689aa7a3 in ?? ()\n#31 0x00007ff7689ad73c in PostmasterMain ()\n#32 0x00007ff768948e4b in main ()\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 01:50:23 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "I also find this trace for other query:\nexplain select * from xview.user_items_v v where ( v.item_id = 132358330 );\n\n#0 0x00007ff766967620 in read () from /lib/libc.so.6\n#1 0x00007ff7689cfc25 in FileRead ()\n#2 0x00007ff7689ea2f6 in mdread ()\n#3 0x00007ff7689cc473 in ?? ()\n#4 0x00007ff7689ccf54 in ReadBufferExtended ()\n#5 0x00007ff7688050ca in index_fetch_heap ()\n#6 0x00007ff76880523e in index_getnext ()\n#7 0x00007ff768a63306 in ?? ()\n#8 0x00007ff768a67624 in ?? ()\n#9 0x00007ff768a67d9c in ?? ()\n#10 0x00007ff768a688fc in scalargtsel ()\n#11 0x00007ff768ac5211 in OidFunctionCall4Coll ()\n#12 0x00007ff768998ce5 in restriction_selectivity ()\n#13 0x00007ff76896c71e in clause_selectivity ()\n#14 0x00007ff76896bf60 in clauselist_selectivity ()\n#15 0x00007ff76896ddfd in set_baserel_size_estimates ()\n#16 0x00007ff76896abf2 in ?? ()\n#17 0x00007ff76896bc97 in make_one_rel ()\n#18 0x00007ff7689837c1 in query_planner ()\n#19 0x00007ff768985260 in ?? ()\n#20 0x00007ff7689870a9 in subquery_planner ()\n#21 0x00007ff76898736e in standard_planner ()\n#22 0x00007ff7689ef3ce in pg_plan_query ()\n#23 0x00007ff7688c94a3 in ?? ()\n#24 0x00007ff7688c9809 in ExplainQuery ()\n#25 0x00007ff7648095e2 in ?? () from /usr/lib/postgresql/9.2/lib/pg_stat_statements.so\n#26 0x00007ff7689f1f27 in ?? ()\n#27 0x00007ff7689f3295 in ?? ()\n#28 0x00007ff7689f388f in PortalRun ()\n#29 0x00007ff7689ef96d in ?? ()\n#30 0x00007ff7689f0950 in PostgresMain ()\n#31 0x00007ff7689aa7a3 in ?? ()\n#32 0x00007ff7689ad73c in PostmasterMain ()\n#33 0x00007ff768948e4b in main ()\n\nI see two code paths:\n#6 0x00007ff76880523e in index_getnext ()\n...\n#9 0x00007ff768a67d9c in ?? ()\n#10 0x00007ff768a688fc in scalargtsel ()\n...\n\nand \n\n#6 0x00007ff76880523e in index_getnext ()\n...\n#9 0x00007ff768a67d9c in ?? ()\n#10 0x00007ff768a68376 in mergejoinscansel ()\n...\n\nIf I not mistaken, may be two code paths like this here:\n(1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n(2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n\nAnd may be get_actual_variable_range() function is too expensive for\ncall with my bloated table items with bloated index items_user_id_idx on it?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 04:16:46 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Sergey Burladyan escribi�:\n> I also find this trace for other query:\n> explain select * from xview.user_items_v v where ( v.item_id = 132358330 );\n> \n> #0 0x00007ff766967620 in read () from /lib/libc.so.6\n> #1 0x00007ff7689cfc25 in FileRead ()\n> #2 0x00007ff7689ea2f6 in mdread ()\n> #3 0x00007ff7689cc473 in ?? ()\n> #4 0x00007ff7689ccf54 in ReadBufferExtended ()\n> #5 0x00007ff7688050ca in index_fetch_heap ()\n> #6 0x00007ff76880523e in index_getnext ()\n> #7 0x00007ff768a63306 in ?? ()\n> #8 0x00007ff768a67624 in ?? ()\n> #9 0x00007ff768a67d9c in ?? ()\n> #10 0x00007ff768a688fc in scalargtsel ()\n\nIt'd be useful to see what's in frames 7-9, but this might be related to\nget_actual_variable_range(). I don't see anything else nearby that\nwould try to read portions of the table.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 23:19:15 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55\n seconds"
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <[email protected]> wrote:\n> I also find this trace for other query:\n> explain select * from xview.user_items_v v where ( v.item_id = 132358330 );\n>\n>\n> If I not mistaken, may be two code paths like this here:\n> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n\nYeah, I think you are correct.\n\n> And may be get_actual_variable_range() function is too expensive for\n> call with my bloated table items with bloated index items_user_id_idx on it?\n\nBut why is it bloated in this way? It must be visiting many thousands\nof dead/invisible rows before finding the first visible one. But,\nBtree index have a mechanism to remove dead tuples from indexes, so it\ndoesn't follow them over and over again (see \"kill_prior_tuple\"). So\nis that mechanism not working, or are the tuples not dead but just\ninvisible (i.e. inserted by a still open transaction)?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 08:29:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
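Two quick checks that usually separate the dead-row case from the open-transaction case, sketched with the table name from this thread (run on the master, since the standby keeps its own statistics):

SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'items';

SELECT pid, xact_start, state
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour'
ORDER BY xact_start;

Many dead tuples with no recent (auto)vacuum fits the bloat theory; a long-open transaction instead keeps the tuples invisible but not yet removable, which also defeats kill_prior_tuple.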
{
"msg_contents": "On Fri, Aug 2, 2013 at 2:58 AM, Sergey Burladyan <[email protected]> wrote:\n>\n> PS: I think my main problem is here:\n> select min(user_id) from items;\n> min\n> -----\n> 1\n> (1 row)\n>\n> Time: 504.520 ms\n\nThat is a long time, but still 100 fold less than the planner is taking.\n\nWhat about max(user_id)?\n\n>\n> also, i cannot reindex it concurrently now, because it run autovacuum: VACUUM ANALYZE public.items (to prevent wraparound)\n\nThat is going to take a long time if you have the cost settings at\ntheir defaults.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 08:35:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
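For reference, the settings that throttle that anti-wraparound vacuum can be checked with something like:

SELECT name, setting
FROM pg_settings
WHERE name IN ('vacuum_cost_delay', 'vacuum_cost_limit',
               'autovacuum_vacuum_cost_delay', 'autovacuum_vacuum_cost_limit');

With autovacuum_vacuum_cost_delay at its default of 20ms, vacuuming a table of this size can easily take days.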
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <[email protected]> wrote:\n>> If I not mistaken, may be two code paths like this here:\n>> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n>> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n\n> Yeah, I think you are correct.\n\nmergejoinscansel does *not* call scalarineqsel, nor get_actual_variable_range.\nIt calls get_variable_range, which only looks at the pg_statistic entries.\n\nI think we need to see the actual stack traces, not incomplete versions.\nIt's possible that the situation here involves bloat in pg_statistic, but\nwe're just leaping to conclusions if we assume that that's where the index\nfetches are occurring.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 11:50:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Jeff Janes <[email protected]> writes:\n> > On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <[email protected]> wrote:\n> >> If I not mistaken, may be two code paths like this here:\n> >> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n> >> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n>\n> > Yeah, I think you are correct.\n>\n> mergejoinscansel does *not* call scalarineqsel, nor get_actual_variable_range.\n> It calls get_variable_range, which only looks at the pg_statistic\n> entries.\n\nHmm, I speak about 9.2.2 but in current HEAD this call still exist,\nplease see: http://doxygen.postgresql.org/selfuncs_8c_source.html#l02976\n\n> I think we need to see the actual stack traces, not incomplete versions.\n> It's possible that the situation here involves bloat in pg_statistic, but\n> we're just leaping to conclusions if we assume that that's where the index\n> fetches are occurring.\n\nI found debug symbols and send stack trace to mail list, but it blocked\nby size, try again with zip\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 02 Aug 2013 20:20:22 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
{
"msg_contents": "Tom Lane escribi�:\n> Jeff Janes <[email protected]> writes:\n> > On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <[email protected]> wrote:\n> >> If I not mistaken, may be two code paths like this here:\n> >> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n> >> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext\n> \n> > Yeah, I think you are correct.\n> \n> mergejoinscansel does *not* call scalarineqsel, nor get_actual_variable_range.\n> It calls get_variable_range, which only looks at the pg_statistic entries.\n\nUh? It's right there in line 2976 in HEAD.\n\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 12:23:44 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55\n seconds"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n\n> On Fri, Aug 2, 2013 at 2:58 AM, Sergey Burladyan <[email protected]> wrote:\n> >\n> > PS: I think my main problem is here:\n> > select min(user_id) from items;\n> > min\n> > -----\n> > 1\n> > (1 row)\n> >\n> > Time: 504.520 ms\n>\n> That is a long time, but still 100 fold less than the planner is taking.\n>\n> What about max(user_id)?\n\nmax is good, only rows with user_id = 0 was updated:\n\nselect max(user_id) from items;\nTime: 59.646 ms\n\n> > also, i cannot reindex it concurrently now, because it run autovacuum: VACUUM ANALYZE public.items (to prevent wraparound)\n>\n> That is going to take a long time if you have the cost settings at\n> their defaults.\n\nYes, I have custom setting, more slow, it will last about a week.\n\n> But why is it bloated in this way? \n\nDon't known. It has been updated many items last week. ~ 10% of table.\n\n> It must be visiting many thousands of dead/invisible rows before\n> finding the first visible one. But, Btree index have a mechanism to\n> remove dead tuples from indexes, so it doesn't follow them over and\n> over again (see \"kill_prior_tuple\"). So is that mechanism not\n> working, or are the tuples not dead but just invisible (i.e. inserted\n> by a still open transaction)?\n\nIt is deleted, but VACUUM still not completed.\n\nBTW, it is standby server, and it query plan (block read) is very\ndifferent from master:\n\nHot standby:\n\nexplain (analyze,verbose,buffers) select min(user_id) from items;\n\n'Result (cost=0.12..0.13 rows=1 width=0) (actual time=56064.514..56064.514 rows=1 loops=1)'\n' Output: $0'\n' Buffers: shared hit=3694164 read=6591224 written=121652'\n' InitPlan 1 (returns $0)'\n' -> Limit (cost=0.00..0.12 rows=1 width=8) (actual time=56064.502..56064.503 rows=1 loops=1)'\n' Output: public.items.user_id'\n' Buffers: shared hit=3694164 read=6591224 written=121652'\n' -> Index Only Scan using items_user_id_idx on public.items (cost=0.00..24165743.48 rows=200673143 width=8) (actual time=56064.499..56064.499 rows=1 loops=1)'\n' Output: public.items.user_id'\n' Index Cond: (public.items.user_id IS NOT NULL)'\n' Heap Fetches: 8256426'\n' Buffers: shared hit=3694164 read=6591224 written=121652'\n'Total runtime: 56064.571 ms'\n\nMaster:\n\n'Result (cost=0.12..0.13 rows=1 width=0) (actual time=202.759..202.759 rows=1 loops=1)'\n' Output: $0'\n' Buffers: shared hit=153577 read=1'\n' InitPlan 1 (returns $0)'\n' -> Limit (cost=0.00..0.12 rows=1 width=8) (actual time=202.756..202.757 rows=1 loops=1)'\n' Output: public.items.user_id'\n' Buffers: shared hit=153577 read=1'\n' -> Index Only Scan using items_user_id_idx on public.items (cost=0.00..24166856.02 rows=200680528 width=8) (actual time=202.756..202.756 rows=1 loops=1)'\n' Output: public.items.user_id'\n' Index Cond: (public.items.user_id IS NOT NULL)'\n' Heap Fetches: 0'\n' Buffers: shared hit=153577 read=1'\n'Total runtime: 202.786 ms'\n\nAnd from backup, before index|heap bloated :)\n\n Result (cost=0.87..0.88 rows=1 width=0) (actual time=16.002..16.003 rows=1 loops=1)\n Output: $0\n Buffers: shared hit=3 read=4\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.87 rows=1 width=8) (actual time=15.993..15.995 rows=1 loops=1)\n Output: public.items.user_id\n Buffers: shared hit=3 read=4\n -> Index Only Scan using items_user_id_idx on public.items (cost=0.00..169143085.72 rows=193309210 width=8) (actual time=15.987..15.987 rows=1 loops=1)\n Output: public.items.user_id\n Index Cond: (public.items.user_id IS NOT NULL)\n Heap Fetches: 
1\n Buffers: shared hit=3 read=4\n Total runtime: 16.057 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 21:11:24 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
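The Heap Fetches difference above is the visibility map at work: an index-only scan has to visit the heap for rows on pages that are not marked all-visible. A quick, approximate way to see the coverage (relallvisible is only refreshed by VACUUM/ANALYZE):

SELECT relname, relpages, relallvisible,
       round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
FROM pg_class
WHERE relname = 'items';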
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane escribi�:\n>> It calls get_variable_range, which only looks at the pg_statistic entries.\n\n> Uh? It's right there in line 2976 in HEAD.\n\nMeh. You're right, I was thinking of this bit in get_variable_range()\n\n /*\n * XXX It's very tempting to try to use the actual column min and max, if\n * we can get them relatively-cheaply with an index probe. However, since\n * this function is called many times during join planning, that could\n * have unpleasant effects on planning speed. Need more investigation\n * before enabling this.\n */\n#ifdef NOT_USED\n if (get_actual_variable_range(root, vardata, sortop, min, max))\n return true;\n#endif\n\nI think when that was written, we didn't have the code in scalarineqsel\nthat tries to go out and get the actual endpoints from an index. Now\nthat we do, the planning cost impact that I was afraid of here can\nactually bite us, and it seems that at least for Sergey's case it's pretty\nbad. Another problem is that we'll end up comparing endpoints gotten from\npg_statistic to endpoints gotten from the index, making the resulting\nnumbers at least self-inconsistent and very possibly meaningless.\n\nThe planner already caches the results of mergejoinscansel in hopes of\nalleviating its cost, but I wonder if we need another lower-level cache\nfor the min/max values of each variable that participates in a\nmergejoinable clause.\n\nHaving said that, it's still not clear why these probes are so expensive\nin Sergey's case. I favor your idea about lots of dead rows, but we don't\nhave actual proof of it. Maybe pgstattuple could help here?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 15:43:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
},
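A sketch of the pgstattuple check suggested here; the extension ships in contrib, pgstattuple() scans the whole relation (expensive on a ~1 TB table), while pgstatindex() only reads the index:

CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT tuple_percent, dead_tuple_percent, free_percent
FROM pgstattuple('public.items');

SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('public.items_user_id_idx');

A high dead_tuple_percent, or a very low avg_leaf_density on the index the probes go through, would support the dead-rows explanation for the expensive get_actual_variable_range() calls.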
{
"msg_contents": "Sergey Burladyan <[email protected]> writes:\n\n> Hot standby:\n...\n> ' -> Index Only Scan using items_user_id_idx on public.items (cost=0.00..24165743.48 rows=200673143 width=8) (actual time=56064.499..56064.499 rows=1 loops=1)'\n> ' Output: public.items.user_id'\n> ' Index Cond: (public.items.user_id IS NOT NULL)'\n> ' Heap Fetches: 8256426'\n> ' Buffers: shared hit=3694164 read=6591224 written=121652'\n> 'Total runtime: 56064.571 ms'\n>\n> Master:\n>\n...\n> ' -> Index Only Scan using items_user_id_idx on public.items (cost=0.00..24166856.02 rows=200680528 width=8) (actual time=202.756..202.756 rows=1 loops=1)'\n> ' Output: public.items.user_id'\n> ' Index Cond: (public.items.user_id IS NOT NULL)'\n> ' Heap Fetches: 0'\n> ' Buffers: shared hit=153577 read=1'\n> 'Total runtime: 202.786 ms'\n\nLooks like visibility map is not replicated into slave somehow?\n\nIf it matters, Master was restarted yesterday, Standby was not.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 03 Aug 2013 01:17:13 +0400",
"msg_from": "Sergey Burladyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looks like merge join planning time is too big, 55 seconds"
}
] |
[
{
"msg_contents": "Howdy. I seem to have inherited this problem:\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nBasically a subselect with no offset is resulting in really poor\nperformance with 120s queries but adding an offset 0 to the inner sub\nselect results in 0.5s query times, and I get the same output.\n\nThe original answer Robert Haas asks for a self contained test case.\n\nI am running 8.4.15 and can try 8.4.17 if some patch has been applied\nto it to address this issue. I just want to know should I\n\nA: upgrade to 8.4.17\nor\nB: create a self contained test case.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 13:40:41 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "subselect requires offset 0 for good performance."
},
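For readers who have not seen the trick: OFFSET 0 keeps a subquery from being pulled up into the outer query, because the planner never flattens a subquery that contains LIMIT or OFFSET. A schematic example with invented table names:

-- flattened: the subquery is merged into the outer query's join search
SELECT o.*
FROM orders o
LEFT JOIN (SELECT customer_id FROM blacklisted_customers) b
       ON b.customer_id = o.customer_id
WHERE b.customer_id IS NULL;

-- fenced: OFFSET 0 forces the subquery to keep its own plan node,
-- which is the behaviour being relied on in this report
SELECT o.*
FROM orders o
LEFT JOIN (SELECT customer_id FROM blacklisted_customers OFFSET 0) b
       ON b.customer_id = o.customer_id
WHERE b.customer_id IS NULL;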
{
"msg_contents": "On Thu, Aug 1, 2013 at 2:40 PM, Scott Marlowe <[email protected]> wrote:\n> Howdy. I seem to have inherited this problem:\n>\n> http://www.postgresql.org/message-id/[email protected]\n>\n> Basically a subselect with no offset is resulting in really poor\n> performance with 120s queries but adding an offset 0 to the inner sub\n> select results in 0.5s query times, and I get the same output.\n>\n> The original answer Robert Haas asks for a self contained test case.\n>\n> I am running 8.4.15 and can try 8.4.17 if some patch has been applied\n> to it to address this issue. I just want to know should I\n>\n> A: upgrade to 8.4.17\n> or\n> B: create a self contained test case.\n\nIMNSHO, I would pursue both (unless A solves your problem in which\ncase B is moot).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 16:15:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> I am running 8.4.15 and can try 8.4.17 if some patch has been applied\n> to it to address this issue. I just want to know should I\n\n> A: upgrade to 8.4.17\n> or\n> B: create a self contained test case.\n\nA quick look at the release notes shows no planner fixes in 8.4.16 or\n8.4.17, so it would be rather surprising if (A) helps.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Aug 2013 19:44:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 5:44 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> I am running 8.4.15 and can try 8.4.17 if some patch has been applied\n>> to it to address this issue. I just want to know should I\n>\n>> A: upgrade to 8.4.17\n>> or\n>> B: create a self contained test case.\n>\n> A quick look at the release notes shows no planner fixes in 8.4.16 or\n> 8.4.17, so it would be rather surprising if (A) helps.\n\nOK. I was doing some initial testing and if I select out the 4 columns\ninto a test table the query runs fast. If I select all the columns\ninto a test table it runs slow, so it appears table width affects\nthis. Will have more to report tomorrow on it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Aug 2013 19:22:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "On 08/02/2013 03:22 AM, Scott Marlowe wrote:\n> On Thu, Aug 1, 2013 at 5:44 PM, Tom Lane <[email protected]> wrote:\n>> Scott Marlowe <[email protected]> writes:\n>>> I am running 8.4.15 and can try 8.4.17 if some patch has been applied\n>>> to it to address this issue. I just want to know should I\n>>> A: upgrade to 8.4.17\n>>> or\n>>> B: create a self contained test case.\n>> A quick look at the release notes shows no planner fixes in 8.4.16 or\n>> 8.4.17, so it would be rather surprising if (A) helps.\n> OK. I was doing some initial testing and if I select out the 4 columns\n> into a test table the query runs fast. If I select all the columns\n> into a test table it runs slow, so it appears table width affects\n> this. Will have more to report tomorrow on it.\n\nI don't know what your query is, but here's one I was working on\nyesterday that shows the problem. It may not be the smallest test case\npossible, but it works.\n\nEXPLAIN ANALYZE\nWITH RECURSIVE\nx (start_time) AS\n(\n SELECT generate_series(1, 1000000)\n),\nt (time, timeround) AS\n(\n SELECT time, time - time % 900000 AS timeround\n FROM (SELECT min(start_time) AS time FROM x) AS tmp\n UNION ALL\n SELECT time, time - time % 900000\n FROM (SELECT (SELECT min(start_time) AS time\n FROM x\n WHERE start_time >= t.timeround + 900000)\n FROM t\n WHERE t.time IS NOT NULL OFFSET 0\n ) tmp\n)\nSELECT count(*) FROM t WHERE time IS NOT NULL;\n\nIf you remove the OFFSET 0, you'll see two more subplans (because \"time\"\nis referenced three times). The difference is much more noticeable if\nyou make the x CTE its own table.\n\nVik\n\nPS: This query is emulating a LooseIndexScan.\nhttp://wiki.postgresql.org/wiki/Loose_indexscan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 09:37:44 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "On Thu, Aug 1, 2013 at 7:22 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Aug 1, 2013 at 5:44 PM, Tom Lane <[email protected]> wrote:\n>> Scott Marlowe <[email protected]> writes:\n>>> I am running 8.4.15 and can try 8.4.17 if some patch has been applied\n>>> to it to address this issue. I just want to know should I\n>>\n>>> A: upgrade to 8.4.17\n>>> or\n>>> B: create a self contained test case.\n>>\n>> A quick look at the release notes shows no planner fixes in 8.4.16 or\n>> 8.4.17, so it would be rather surprising if (A) helps.\n>\n> OK. I was doing some initial testing and if I select out the 4 columns\n> into a test table the query runs fast. If I select all the columns\n> into a test table it runs slow, so it appears table width affects\n> this. Will have more to report tomorrow on it.\n\nHere's the query:\nSELECT * FROM dba.pp_test_wide p LEFT JOIN\n (\n SELECT tree_sortkey FROM dba.pp_test_wide\n WHERE tree_sortkey BETWEEN '00000000000101010000010001010100'::VARBIT\n AND public.tree_right('00000000000101010000010001010100'::VARBIT)\n AND product_name IS NOT NULL AND tree_sortkey <>\n'00000000000101010000010001010100'::VARBIT\n ) pp\nON p.tree_sortkey BETWEEN pp.tree_sortkey AND public.tree_right(pp.tree_sortkey)\nWHERE\n p.tree_sortkey BETWEEN '00000000000101010000010001010100'::VARBIT\nAND public.tree_right('00000000000101010000010001010100'::VARBIT)\n AND p.tree_sortkey BETWEEN '00000000'::VARBIT AND\npublic.tree_right('00000000'::VARBIT)\n AND p.deleted_at IS NULL\n AND pp.tree_sortkey IS NULL\n\nI extracted all the data like so:\n\nselect * into dba.pp_test_wide from original table;\n\nand get this query plan from explain analyze:\nhttp://explain.depesz.com/s/EPx which takes 20 minutes to run.\n\nIf I extract it this way:\n\nselect tree_sortkey, product_name, deleted_at into db.pp_test_3col\nfrom original table;\n\nI get this plan: http://explain.depesz.com/s/gru which gets a\nmaterialize in it, and suddenly takes 106 ms.\n\nthe factor in performance increase is therefore ~ 11,342. that's\npretty huge. I'll try to make a self contained test case now.\nHopefully that at least points in the right direction tho to a bug of\nsome kind.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 11:08:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> I extracted all the data like so:\n> select * into dba.pp_test_wide from original table;\n> and get this query plan from explain analyze:\n> http://explain.depesz.com/s/EPx which takes 20 minutes to run.\n> If I extract it this way:\n> select tree_sortkey, product_name, deleted_at into db.pp_test_3col\n> from original table;\n> I get this plan: http://explain.depesz.com/s/gru which gets a\n> materialize in it, and suddenly takes 106 ms.\n\nThere's no reason why suppressing some unrelated columns would change the\nrowcount estimates, but those two plans show different rowcount estimates.\n\nI suspect the *actual* reason for the plan change was that autovacuum had\nhad a chance to update statistics for the one table, and not yet for the\nother. Please do a manual ANALYZE on both tables and see if there's\nstill a plan difference.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 15:31:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
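One way to act on that suggestion, and to see whether autovacuum had in fact analyzed only one of the copies, is roughly:

ANALYZE dba.pp_test_wide;
ANALYZE dba.pp_test_3col;

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('pp_test_wide', 'pp_test_3col');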
{
"msg_contents": "On Fri, Aug 2, 2013 at 1:31 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> I extracted all the data like so:\n>> select * into dba.pp_test_wide from original table;\n>> and get this query plan from explain analyze:\n>> http://explain.depesz.com/s/EPx which takes 20 minutes to run.\n>> If I extract it this way:\n>> select tree_sortkey, product_name, deleted_at into db.pp_test_3col\n>> from original table;\n>> I get this plan: http://explain.depesz.com/s/gru which gets a\n>> materialize in it, and suddenly takes 106 ms.\n>\n> There's no reason why suppressing some unrelated columns would change the\n> rowcount estimates, but those two plans show different rowcount estimates.\n>\n> I suspect the *actual* reason for the plan change was that autovacuum had\n> had a chance to update statistics for the one table, and not yet for the\n> other. Please do a manual ANALYZE on both tables and see if there's\n> still a plan difference.\n\nInteresting. I ran analyze on both tables and sure enough the new test\ntable runs fast. Ran analyze on the old table and it runs slow. The\nonly thing the old table and its plan are missing is the materialize.\nSo what is likely to change from the old table to the new one? Here's\nthe explain analyze output from the old table and the same query\nagainst it: http://explain.depesz.com/s/CtZ and here's the plan with\noffset 0 in it: http://explain.depesz.com/s/Gug note that while the\nestimates are a bit off, the really huge difference here says to me\nsome suboptimal method is getting deployed in the background\nsomewhere. Do we need a stack trace?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 13:58:30 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "On Fri, Aug 2, 2013 at 1:58 PM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Aug 2, 2013 at 1:31 PM, Tom Lane <[email protected]> wrote:\n>> Scott Marlowe <[email protected]> writes:\n>>> I extracted all the data like so:\n>>> select * into dba.pp_test_wide from original table;\n>>> and get this query plan from explain analyze:\n>>> http://explain.depesz.com/s/EPx which takes 20 minutes to run.\n>>> If I extract it this way:\n>>> select tree_sortkey, product_name, deleted_at into db.pp_test_3col\n>>> from original table;\n>>> I get this plan: http://explain.depesz.com/s/gru which gets a\n>>> materialize in it, and suddenly takes 106 ms.\n>>\n>> There's no reason why suppressing some unrelated columns would change the\n>> rowcount estimates, but those two plans show different rowcount estimates.\n>>\n>> I suspect the *actual* reason for the plan change was that autovacuum had\n>> had a chance to update statistics for the one table, and not yet for the\n>> other. Please do a manual ANALYZE on both tables and see if there's\n>> still a plan difference.\n>\n> Interesting. I ran analyze on both tables and sure enough the new test\n> table runs fast. Ran analyze on the old table and it runs slow. The\n> only thing the old table and its plan are missing is the materialize.\n> So what is likely to change from the old table to the new one? Here's\n> the explain analyze output from the old table and the same query\n> against it: http://explain.depesz.com/s/CtZ and here's the plan with\n> offset 0 in it: http://explain.depesz.com/s/Gug note that while the\n> estimates are a bit off, the really huge difference here says to me\n> some suboptimal method is getting deployed in the background\n> somewhere. Do we need a stack trace?\n\nSo as a followup. I ran vacuum verbose analyze on the original table,\nthinking it might be bloated but it wasn't. Out of 320k or so rows\nthere were 4k dead tuples recovered, and none that it couldn't\nrecover. So now I'm trying to recreate the original table with a\nselect into with an order by random() on the end. Nope it gets a\nmaterialize in it and runs fast. Well it's danged hard to make a test\ncase when copying the table with random ordering results in a much\nfaster query against the same data. I'm at a loss on how to reproduce\nthis. Are the indexes on the master table leading it astray maybe?\nYep. Added the indexes and performance went right into the dumper. New\nplan on new table with old data added in random order now looks like\nthe old table, only worse because it's on a slower drive. Just to be\ncomplete here's the plan: http://explain.depesz.com/s/PYH Note that I\ncreated new table with order by random() and created indexes. Ran\nanalyze on it, and the select plan looks similar now:\nhttp://explain.depesz.com/s/bsE\n\nSo maybe I can make a test case now. But to summarize, when it can use\nindexes this query gets REAL slow because it lacks a materialize step.\nThat seem about right?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 14:26:09 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> Yep. Added the indexes and performance went right into the dumper. New\n> plan on new table with old data added in random order now looks like\n> the old table, only worse because it's on a slower drive. Just to be\n> complete here's the plan: http://explain.depesz.com/s/PYH Note that I\n> created new table with order by random() and created indexes. Ran\n> analyze on it, and the select plan looks similar now:\n> http://explain.depesz.com/s/bsE\n\n> So maybe I can make a test case now. But to summarize, when it can use\n> indexes this query gets REAL slow because it lacks a materialize step.\n> That seem about right?\n\nWell, the plans shown here could *not* use a materialize step because the\ninner scan makes use of a value from the current outer row. The\nmaterialized plan has to omit one of the index conditions from the inner\nscan and then apply it as a join condition.\n\nI suspect the real reason that the fast case is fast is that the inner\nrelation, even without the p.tree_sortkey >= pro_partners.tree_sortkey\ncondition, is empty, and thus the join runs very quickly. But the planner\ndoesn't know that. Its estimate of the row count isn't very large, but\nit's definitely not zero, plus it thinks that adding the additional index\ncondition reduces the rowcount by a factor of 3 from there. So it comes\nto the wrong conclusion about the value of materializing a fixed inner\nrelation as opposed to using a parameterized indexscan.\n\nHave you tried increasing the statistics targets for these join columns?\nIt's also possible that what you need to do is adjust the planner's\ncost parameters ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 16:51:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
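Raising the per-column targets for the columns this query touches would look roughly like this (1000 is only an example value; the ANALYZE afterwards is what actually rebuilds the histograms):

ALTER TABLE dba.pp_test_wide ALTER COLUMN tree_sortkey SET STATISTICS 1000;
ALTER TABLE dba.pp_test_wide ALTER COLUMN product_name SET STATISTICS 1000;
ALTER TABLE dba.pp_test_wide ALTER COLUMN deleted_at SET STATISTICS 1000;
ANALYZE dba.pp_test_wide;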
{
"msg_contents": "On Fri, Aug 2, 2013 at 2:51 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> Yep. Added the indexes and performance went right into the dumper. New\n>> plan on new table with old data added in random order now looks like\n>> the old table, only worse because it's on a slower drive. Just to be\n>> complete here's the plan: http://explain.depesz.com/s/PYH Note that I\n>> created new table with order by random() and created indexes. Ran\n>> analyze on it, and the select plan looks similar now:\n>> http://explain.depesz.com/s/bsE\n>\n>> So maybe I can make a test case now. But to summarize, when it can use\n>> indexes this query gets REAL slow because it lacks a materialize step.\n>> That seem about right?\n>\n> Well, the plans shown here could *not* use a materialize step because the\n> inner scan makes use of a value from the current outer row. The\n> materialized plan has to omit one of the index conditions from the inner\n> scan and then apply it as a join condition.\n>\n> I suspect the real reason that the fast case is fast is that the inner\n> relation, even without the p.tree_sortkey >= pro_partners.tree_sortkey\n> condition, is empty, and thus the join runs very quickly. But the planner\n> doesn't know that. Its estimate of the row count isn't very large, but\n> it's definitely not zero, plus it thinks that adding the additional index\n> condition reduces the rowcount by a factor of 3 from there. So it comes\n> to the wrong conclusion about the value of materializing a fixed inner\n> relation as opposed to using a parameterized indexscan.\n>\n> Have you tried increasing the statistics targets for these join columns?\n> It's also possible that what you need to do is adjust the planner's\n> cost parameters ...\n\nI've tried changing random_page_cost, sequential_page_cost, the cpu*\ncosts, and setting effective_cache_size all over the place and it\nstays just as slow.\n\nour default stats target is 100. Did a stats target = 1000 on the\nthree cols we access. Same terrible performance. Plan here:\nhttp://explain.depesz.com/s/XVt\nstats target=10000, same bad performance, plan:\nhttp://explain.depesz.com/s/kJ54 pretty much the same. Setting\neffective_cache_size='1000GB' make no difference, still slow.\n\nIf I set random_page_cost to 75 makes it work, i.e. a materialize\nshows up. Note that we run on FusionIO cards, and the whole db fits in\nmemory, so a very large effective cache size and random page cost of\n1.0 is actually accurate for our hardware.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Aug 2013 15:27:41 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
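The cost experiments above can be scoped per session, or persisted per tablespace instead of cluster-wide; a sketch (the tablespace name is invented):

-- per-session, for comparing plans
SET random_page_cost = 1.0;
SET effective_cache_size = '1000GB';

-- persisted for relations on the fast device (per-tablespace costs exist since 9.0)
ALTER TABLESPACE fusionio_ts SET (random_page_cost = 1.0, seq_page_cost = 1.0);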
{
"msg_contents": "On Fri, Aug 2, 2013 at 3:27 PM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Aug 2, 2013 at 2:51 PM, Tom Lane <[email protected]> wrote:\n>> Scott Marlowe <[email protected]> writes:\n>>> Yep. Added the indexes and performance went right into the dumper. New\n>>> plan on new table with old data added in random order now looks like\n>>> the old table, only worse because it's on a slower drive. Just to be\n>>> complete here's the plan: http://explain.depesz.com/s/PYH Note that I\n>>> created new table with order by random() and created indexes. Ran\n>>> analyze on it, and the select plan looks similar now:\n>>> http://explain.depesz.com/s/bsE\n>>\n>>> So maybe I can make a test case now. But to summarize, when it can use\n>>> indexes this query gets REAL slow because it lacks a materialize step.\n>>> That seem about right?\n>>\n>> Well, the plans shown here could *not* use a materialize step because the\n>> inner scan makes use of a value from the current outer row. The\n>> materialized plan has to omit one of the index conditions from the inner\n>> scan and then apply it as a join condition.\n>>\n>> I suspect the real reason that the fast case is fast is that the inner\n>> relation, even without the p.tree_sortkey >= pro_partners.tree_sortkey\n>> condition, is empty, and thus the join runs very quickly. But the planner\n>> doesn't know that. Its estimate of the row count isn't very large, but\n>> it's definitely not zero, plus it thinks that adding the additional index\n>> condition reduces the rowcount by a factor of 3 from there. So it comes\n>> to the wrong conclusion about the value of materializing a fixed inner\n>> relation as opposed to using a parameterized indexscan.\n>>\n>> Have you tried increasing the statistics targets for these join columns?\n>> It's also possible that what you need to do is adjust the planner's\n>> cost parameters ...\n>\n> I've tried changing random_page_cost, sequential_page_cost, the cpu*\n> costs, and setting effective_cache_size all over the place and it\n> stays just as slow.\n>\n> our default stats target is 100. Did a stats target = 1000 on the\n> three cols we access. Same terrible performance. Plan here:\n> http://explain.depesz.com/s/XVt\n> stats target=10000, same bad performance, plan:\n> http://explain.depesz.com/s/kJ54 pretty much the same. Setting\n> effective_cache_size='1000GB' make no difference, still slow.\n>\n> If I set random_page_cost to 75 makes it work, i.e. a materialize\n> shows up. Note that we run on FusionIO cards, and the whole db fits in\n> memory, so a very large effective cache size and random page cost of\n> 1.0 is actually accurate for our hardware.\n\nOK I've done some more research. 
Here's the plan for it with a low\nrandom page cost:\n\nNested Loop Anti Join (cost=0.00..1862107.69 rows=9274 width=994)\n(actual time=0.181..183413.479 rows=17391 loops=1)\n Join Filter: (p.tree_sortkey <= tree_right(pp_test_wide.tree_sortkey))\n -> Index Scan using pp_test_wide_tree_sortkey_idx on pp_test_wide\np (cost=0.00..10108.15 rows=10433 width=983) (actual\ntime=0.164..218.230 rows=17391 loops=1)\n Index Cond: ((tree_sortkey >=\nB'00000000000101010000010001010100'::bit varying) AND (tree_sortkey <=\nB'0000000000010101000001000101010011111111111111111111111111111111'::bit\nvarying) AND (tree_sortkey >= B'00000000'::bit varying) AND\n(tree_sortkey <= B'0000000011111111111111111111111111111111'::bit\nvarying))\n Filter: (deleted_at IS NULL)\n -> Index Scan using pp_test_wide_tree_sortkey_idx on pp_test_wide\n(cost=0.00..194.86 rows=14 width=11) (actual time=10.530..10.530\nrows=0 loops=17391)\n Index Cond: ((pp_test_wide.tree_sortkey >=\nB'00000000000101010000010001010100'::bit varying) AND\n(pp_test_wide.tree_sortkey <=\nB'0000000000010101000001000101010011111111111111111111111111111111'::bit\nvarying) AND (p.tree_sortkey >= pp_test_wide.tree_sortkey))\n Filter: ((pp_test_wide.product_name IS NOT NULL) AND\n(pp_test_wide.tree_sortkey <> B'00000000000101010000010001010100'::bit\nvarying))\n Total runtime: 183421.226 ms\n\nNote that it's got 0 rows with 17391 loops on the bottom half, time of\n10ms each. which adds up to just under the total cost in the nested\nloop anti-join of ~183,000 ms. If I crank up random_page_cost to say\n100 I get this plan:\n\nNested Loop Anti Join (cost=15202.10..144269.58 rows=9292 width=997)\n(actual time=71.281..271.018 rows=17391 loops=1)\n Join Filter: ((p.tree_sortkey >= pp_test_wide.tree_sortkey) AND\n(p.tree_sortkey <= tree_right(pp_test_wide.tree_sortkey)))\n -> Seq Scan on pp_test_wide p (cost=0.00..16009.92 rows=10453\nwidth=986) (actual time=0.024..183.341 rows=17391 loops=1)\n Filter: ((deleted_at IS NULL) AND (tree_sortkey >=\nB'00000000000101010000010001010100'::bit varying) AND (tree_sortkey <=\nB'0000000000010101000001000101010011111111111111111111111111111111'::bit\nvarying) AND (tree_sortkey >= B'00000000'::bit varying) AND\n(tree_sortkey <= B'0000000011111111111111111111111111111111'::bit\nvarying))\n -> Materialize (cost=15202.10..15202.54 rows=44 width=11) (actual\ntime=0.004..0.004 rows=0 loops=17391)\n -> Seq Scan on pp_test_wide (cost=0.00..15202.06 rows=44\nwidth=11) (actual time=71.245..71.245 rows=0 loops=1)\n Filter: ((product_name IS NOT NULL) AND (tree_sortkey\n>= B'00000000000101010000010001010100'::bit varying) AND (tree_sortkey\n<= B'0000000000010101000001000101010011111111111111111111111111111111'::bit\nvarying) AND (tree_sortkey <> B'00000000000101010000010001010100'::bit\nvarying))\n Total runtime: 272.204 ms\n\nNote that here we feed two seq scans into the anti-join and the speed\nof both seq scan is quite low. I've played around with the\nselectivity to where the inner scan delivers 53 rows instead of none,\nand the query performance was about the same for both plans (i.e.\nrandom_page_cost = 1 is super slow lots of loops of 10ms,\nrandom_page_cost = 100 is fast with two seq scans.).\n\nIt seems to me that deciding to do the index scan is a mistake whether\nthe lower portion of the esitmates 14 or 0 rows seems like a mistake,\nsince it is going to run an index scan's worth of loops from the upper\nindex scan. I.e. it should be guestimating that it's gonna do ~10,000\nloops in the index scan query. 
It's like somehow the query planner\nthinks that each loop is going to be MUCH faster than 10ms.\n\nI can create an extract of this table for testing if someone wants one\nto explore with. Let me know.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 16:12:18 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "On 08/02/2013 09:37 AM, Vik Fearing wrote:\n> EXPLAIN ANALYZE\n> WITH RECURSIVE\n> x (start_time) AS\n> (\n> SELECT generate_series(1, 1000000)\n> ),\n> t (time, timeround) AS\n> (\n> SELECT time, time - time % 900000 AS timeround\n> FROM (SELECT min(start_time) AS time FROM x) AS tmp\n> UNION ALL\n> SELECT time, time - time % 900000\n> FROM (SELECT (SELECT min(start_time) AS time\n> FROM x\n> WHERE start_time >= t.timeround + 900000)\n> FROM t\n> WHERE t.time IS NOT NULL OFFSET 0\n> ) tmp\n> )\n> SELECT count(*) FROM t WHERE time IS NOT NULL;\n>\n> If you remove the OFFSET 0, you'll see two more subplans (because \"time\"\n> is referenced three times).\n\nIs this not interesting to anyone?\n\nVik\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 09 Aug 2013 00:09:17 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
{
"msg_contents": "OK I'm bumping this one last time in the hopes that someone has an\nidea what to do to fix it.\n\nQuery plan: http://explain.depesz.com/s/kJ54\n\nThis query takes 180 seconds. It loops 17391 times across the lower\nindex using entries from the upper index. That seems buggy to me.\nWhile the exact estimates are off, the fact that it estimates 10,433\nrows in the above index scan means that it's expecting to do that many\nloops on the bottom index scan.\n\nThis is one of those queries where adding an offset 0 to the inner\nselect reduces run time from 180s to 0.2s. And I can't imagine a plan\nthat thinks running 10k loops on an index scan is a good idea.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Aug 2013 15:01:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
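The "offset 0" trick referred to here is the usual optimization fence: OFFSET 0 keeps the planner from flattening the subselect into the outer query, so the subselect is evaluated on its own and its result joined afterwards. A minimal sketch with placeholder table and column names (not the actual query from this thread):

    SELECT o.*
    FROM   outer_tab o
    WHERE  o.id IN (SELECT i.id
                    FROM   inner_tab i
                    WHERE  i.flag
                    OFFSET 0);   -- the OFFSET 0 is what prevents the pull-up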
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> OK I'm bumping this one last time in the hopes that someone has an\n> idea what to do to fix it.\n\n> Query plan: http://explain.depesz.com/s/kJ54\n\n> This query takes 180 seconds. It loops 17391 times across the lower\n> index using entries from the upper index. That seems buggy to me.\n\nThere isn't all that much that the planner can do with that query. There\nare no equality join clauses, so no possibility of a merge or hash join;\nthe only way to implement the join is a nestloop.\n\nThings would probably be better if it left out the one join clause it's\nputting into the inner indexscan condition, so it could materialize the\nresult of the inner indexscan and then do a nestloop join against the\nMaterial node. I'd expect 9.0 and up to consider that a good idea ...\nbut looking back, I see this is 8.4, which means you're probably out of\nluck on getting a better plan. 8.4's nearly out of warranty anyway ---\nconsider upgrading.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Aug 2013 18:50:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subselect requires offset 0 for good performance."
},
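The join-strategy point can be illustrated with a toy example (hypothetical tables): when the only join conditions are range comparisons there is nothing for a hash or merge join to key on, so a nested loop is the planner's only option; an equality clause changes that.

    -- range-only join: only a nested loop (possibly parameterized) is possible
    EXPLAIN SELECT *
    FROM   a
    JOIN   b ON b.sortkey >= a.sortkey AND b.sortkey <= a.sortkey_right;

    -- adding an equality clause makes hash and merge joins available as well
    EXPLAIN SELECT *
    FROM   a
    JOIN   b ON a.grp = b.grp AND b.sortkey >= a.sortkey;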
{
"msg_contents": "On Tue, Aug 13, 2013 at 4:50 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> OK I'm bumping this one last time in the hopes that someone has an\n>> idea what to do to fix it.\n>\n>> Query plan: http://explain.depesz.com/s/kJ54\n>\n>> This query takes 180 seconds. It loops 17391 times across the lower\n>> index using entries from the upper index. That seems buggy to me.\n>\n> There isn't all that much that the planner can do with that query. There\n> are no equality join clauses, so no possibility of a merge or hash join;\n> the only way to implement the join is a nestloop.\n>\n> Things would probably be better if it left out the one join clause it's\n> putting into the inner indexscan condition, so it could materialize the\n> result of the inner indexscan and then do a nestloop join against the\n> Material node. I'd expect 9.0 and up to consider that a good idea ...\n> but looking back, I see this is 8.4, which means you're probably out of\n> luck on getting a better plan. 8.4's nearly out of warranty anyway ---\n> consider upgrading.\n\nThanks for the hints, we'll try them. As for the upgrade an upgrade to\n9.1, possibly 9.2 is already in the planning stages. But you know how\nproduction upgrades go, slow...\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Aug 2013 17:42:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subselect requires offset 0 for good performance."
}
] |
[
{
"msg_contents": "Good day,\n \nI have a performance issue when JOINing a view within another view more than once.\nThe query takes over three seconds to execute, which is too long in this case. It's not a problem if the tables are nearly empty, but that isn't the case on the production database.\n \nI suspect the planner thinks it's better to first put together the v_address view and JOIN it to the parcel table later on, but the function \"fx_get_user_tree_subordinates_by_id\" should be JOINed to the parcel table first, as it reduces the number of rows to less than 200 and any following JOINs would be much faster.\n \nI have also ran vacuum, reindex and analyze on the whole database, but it seems to have had to effect.\n \nIs there any way to nudge the planner toward that way of execution?\n \nThis is the query:\nhttps://app.box.com/s/jzxiuuxoyj28q4q8rzxr\n \nThis is the query plan:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h (plain text)\nhttps://app.box.com/s/jzxiuuxoyj28q4q8rzxr (graphical output)\n \nThese are the views:\nhttps://app.box.com/s/uibzidsazwv3eeauovuk (paginated view)\nhttps://app.box.com/s/v71vyexmdyl97m4f3m6u (used three times in the paginated view).\n \n \nThank you.\n \nPeter Slapansky\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 Aug 2013 15:43:16 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
},
{
"msg_contents": "I apologise, I have neglected to mention Postgres versions tested. It occurs with 9.0 and 9.2\nI have typo in my previous message - the sentence about vacuum, reindex and analyze should be:\n\"I had also run vacuum, reindex and analyze on the whole database, but it seems to have had no effect.\"\n \nThanks for any thoughts on the issue.\n \nPeter Slapansky\n\n\n______________________________________________________________\n> Od: <[email protected]>\n> Komu: <[email protected]>\n> Dátum: 02.08.2013 15:43\n> Predmet: Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n\nGood day,\n \nI have a performance issue when JOINing a view within another view more than once.\nThe query takes over three seconds to execute, which is too long in this case. It's not a problem if the tables are nearly empty, but that isn't the case on the production database.\n \nI suspect the planner thinks it's better to first put together the v_address view and JOIN it to the parcel table later on, but the function \"fx_get_user_tree_subordinates_by_id\" should be JOINed to the parcel table first, as it reduces the number of rows to less than 200 and any following JOINs would be much faster.\n \nI have also ran vacuum, reindex and analyze on the whole database, but it seems to have had to effect.\n \nIs there any way to nudge the planner toward that way of execution?\n \nThis is the query:\nhttps://app.box.com/s/jzxiuuxoyj28q4q8rzxr <https://app.box.com/s/jzxiuuxoyj28q4q8rzxr>\n \nThis is the query plan:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h <https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h> (plain text)\nhttps://app.box.com/s/jzxiuuxoyj28q4q8rzxr <https://app.box.com/s/jzxiuuxoyj28q4q8rzxr> (graphical output)\n \nThese are the views:\nhttps://app.box.com/s/uibzidsazwv3eeauovuk <https://app.box.com/s/uibzidsazwv3eeauovuk> (paginated view)\nhttps://app.box.com/s/v71vyexmdyl97m4f3m6u <https://app.box.com/s/v71vyexmdyl97m4f3m6u> (used three times in the paginated view).\n \n \nThank you.\n \nPeter Slapansky\n\n\nI apologise, I have neglected to mention Postgres versions tested. It occurs with 9.0 and 9.2\nI have typo in my previous message - the sentence about vacuum, reindex and analyze should be:\n\"I had also run vacuum, reindex and analyze on the whole database, but it seems to have had no effect.\"\n \nThanks for any thoughts on the issue.\n \nPeter Slapansky\n\n\n______________________________________________________________\n> Od: <[email protected]>\n> Komu: <[email protected]>\n> Dátum: 02.08.2013 15:43\n> Predmet: Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n\nGood day,\n \nI have a performance issue when JOINing a view within another view more than once.\nThe query takes over three seconds to execute, which is too long in this case. 
It's not a problem if the tables are nearly empty, but that isn't the case on the production database.\n \nI suspect the planner thinks it's better to first put together the v_address view and JOIN it to the parcel table later on, but the function \"fx_get_user_tree_subordinates_by_id\" should be JOINed to the parcel table first, as it reduces the number of rows to less than 200 and any following JOINs would be much faster.\n \nI have also ran vacuum, reindex and analyze on the whole database, but it seems to have had to effect.\n \nIs there any way to nudge the planner toward that way of execution?\n \nThis is the query:\nhttps://app.box.com/s/jzxiuuxoyj28q4q8rzxr\n \nThis is the query plan:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h (plain text)\nhttps://app.box.com/s/jzxiuuxoyj28q4q8rzxr (graphical output)\n \nThese are the views:\nhttps://app.box.com/s/uibzidsazwv3eeauovuk (paginated view)\nhttps://app.box.com/s/v71vyexmdyl97m4f3m6u (used three times in the paginated view).\n \n \nThank you.\n \nPeter Slapansky",
"msg_date": "Mon, 05 Aug 2013 10:14:45 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
},
{
"msg_contents": "Hello\n\nplease, send result of EXPLAIN ANALYZE\n\nplease, use a http://explain.depesz.com/ for saving a plan\n\nthere is a more than 8 joins - so try to set geqo_threshold to 16,\njoin_collapse_limit to 16, and from_collapse_limit to 16.\n\nRegards\n\nPavel Stehule\n\n2013/8/2 <[email protected]>:\n> Good day,\n>\n> I have a performance issue when JOINing a view within another view more than once.\n> The query takes over three seconds to execute, which is too long in this case. It's not a problem if the tables are nearly empty, but that isn't the case on the production database.\n>\n> I suspect the planner thinks it's better to first put together the v_address view and JOIN it to the parcel table later on, but the function \"fx_get_user_tree_subordinates_by_id\" should be JOINed to the parcel table first, as it reduces the number of rows to less than 200 and any following JOINs would be much faster.\n>\n> I have also ran vacuum, reindex and analyze on the whole database, but it seems to have had to effect.\n>\n> Is there any way to nudge the planner toward that way of execution?\n>\n> This is the query:\n> https://app.box.com/s/jzxiuuxoyj28q4q8rzxr\n>\n> This is the query plan:\n> https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h (plain text)\n> https://app.box.com/s/jzxiuuxoyj28q4q8rzxr (graphical output)\n>\n> These are the views:\n> https://app.box.com/s/uibzidsazwv3eeauovuk (paginated view)\n> https://app.box.com/s/v71vyexmdyl97m4f3m6u (used three times in the paginated view).\n>\n>\n> Thank you.\n>\n> Peter Slapansky\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 21:01:16 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sub-optimal plan for a paginated query on a view with\n another view inside of it."
},
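These three parameters are plain session settings, so they can be changed and tested without touching postgresql.conf; a minimal sketch:

    SET geqo_threshold      = 16;
    SET from_collapse_limit = 16;
    SET join_collapse_limit = 16;
    -- then re-run EXPLAIN ANALYZE on the paginated query and compare plans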
{
"msg_contents": "Good day,\n \nI have included a link to the result of EXPLAIN ANALYZE. It's this one:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h <https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h>\n \nHere's a link to Depesz's explain (if links to the site are okay):\nhttp://explain.depesz.com/s/gCk <http://explain.depesz.com/s/gCk>\n \nI have just tried setting geqo_threshold, join_collapse_limit and from_collapse_limit to 16, but it yielded no improvement.\nChanging those three parameters to 32 did speed up the query from about 3.3 seconds to about a second (give or take 50 ms), which is a pretty good improvement, but not quite there, as I'm looking to bring it down to about 300 ms if possible. Changing those three settings to 48 yielded no improvements over 32.\nIs there possibly something something else to tweak there?\n \nHere's EXPLAIN ANALYZE output when the three settings have been set to 32:\nhttp://explain.depesz.com/s/cj2 <http://explain.depesz.com/s/cj2>\n \nThank you.\n \nPeter Slapansky\n \n______________________________________________________________\n> Od: Pavel Stehule <[email protected]>\n> Komu: <[email protected]>\n> Dátum: 06.08.2013 21:01\n> Predmet: Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: [email protected]\nHello\n\nplease, send result of EXPLAIN ANALYZE\n\nplease, use a http://explain.depesz.com/ <http://explain.depesz.com/> for saving a plan\n\nthere is a more than 8 joins - so try to set geqo_threshold to 16,\njoin_collapse_limit to 16, and from_collapse_limit to 16.\n\nRegards\n\nPavel Stehule\n\n2013/8/2 <[email protected]>:\n> Good day,\n>\n> I have a performance issue when JOINing a view within another view more than once.\n> The query takes over three seconds to execute, which is too long in this case. It's not a problem if the tables are nearly empty, but that isn't the case on the production database.\n>\n> I suspect the planner thinks it's better to first put together the v_address view and JOIN it to the parcel table later on, but the function \"fx_get_user_tree_subordinates_by_id\" should be JOINed to the parcel table first, as it reduces the number of rows to less than 200 and any following JOINs would be much faster.\n>\n> I have also ran vacuum, reindex and analyze on the whole database, but it seems to have had to effect.\n>\n> Is there any way to nudge the planner toward that way of execution?\n>\n> This is the query:\n> https://app.box.com/s/jzxiuuxoyj28q4q8rzxr <https://app.box.com/s/jzxiuuxoyj28q4q8rzxr>\n>\n> This is the query plan:\n> https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h <https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h> (plain text)\n> https://app.box.com/s/jzxiuuxoyj28q4q8rzxr <https://app.box.com/s/jzxiuuxoyj28q4q8rzxr> (graphical output)\n>\n> These are the views:\n> https://app.box.com/s/uibzidsazwv3eeauovuk <https://app.box.com/s/uibzidsazwv3eeauovuk> (paginated view)\n> https://app.box.com/s/v71vyexmdyl97m4f3m6u <https://app.box.com/s/v71vyexmdyl97m4f3m6u> (used three times in the paginated view).\n>\n>\n> Thank you.\n>\n> Peter Slapansky\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance <http://www.postgresql.org/mailpref/pgsql-performance>\n\n\nGood day,\n \nI have included a link to the result of EXPLAIN ANALYZE. 
It's this one:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h\n \nHere's a link to Depesz's explain (if links to the site are okay):\nhttp://explain.depesz.com/s/gCk\n \nI have just tried setting geqo_threshold, join_collapse_limit and from_collapse_limit to 16, but it yielded no improvement.\nChanging those three parameters to 32 did speed up the query from about 3.3 seconds to about a second (give or take 50 ms), which is a pretty good improvement, but not quite there, as I'm looking to bring it down to about 300 ms if possible. Changing those three settings to 48 yielded no improvements over 32.\nIs there possibly something something else to tweak there?\n \nHere's EXPLAIN ANALYZE output when the three settings have been set to 32:\nhttp://explain.depesz.com/s/cj2\n \nThank you.\n \nPeter Slapansky\n \n______________________________________________________________\n> Od: Pavel Stehule <[email protected]>\n> Komu: <[email protected]>\n> Dátum: 06.08.2013 21:01\n> Predmet: Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: [email protected]\nHello\n\nplease, send result of EXPLAIN ANALYZE\n\nplease, use a http://explain.depesz.com/ for saving a plan\n\nthere is a more than 8 joins - so try to set geqo_threshold to 16,\njoin_collapse_limit to 16, and from_collapse_limit to 16.\n\nRegards\n\nPavel Stehule\n\n2013/8/2 <[email protected]>:\n> Good day,\n>\n> I have a performance issue when JOINing a view within another view more than once.\n> The query takes over three seconds to execute, which is too long in this case. It's not a problem if the tables are nearly empty, but that isn't the case on the production database.\n>\n> I suspect the planner thinks it's better to first put together the v_address view and JOIN it to the parcel table later on, but the function \"fx_get_user_tree_subordinates_by_id\" should be JOINed to the parcel table first, as it reduces the number of rows to less than 200 and any following JOINs would be much faster.\n>\n> I have also ran vacuum, reindex and analyze on the whole database, but it seems to have had to effect.\n>\n> Is there any way to nudge the planner toward that way of execution?\n>\n> This is the query:\n> https://app.box.com/s/jzxiuuxoyj28q4q8rzxr\n>\n> This is the query plan:\n> https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h (plain text)\n> https://app.box.com/s/jzxiuuxoyj28q4q8rzxr (graphical output)\n>\n> These are the views:\n> https://app.box.com/s/uibzidsazwv3eeauovuk (paginated view)\n> https://app.box.com/s/v71vyexmdyl97m4f3m6u (used three times in the paginated view).\n>\n>\n> Thank you.\n>\n> Peter Slapansky\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 07 Aug 2013 14:42:39 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_=5BPERFORM=5D_Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
},
{
"msg_contents": "\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\r\nSent: Wednesday, August 07, 2013 8:43 AM\r\nTo: Pavel Stehule\r\nCc: [email protected]\r\nSubject: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\r\n\r\nGood day,\r\n \r\nI have included a link to the result of EXPLAIN ANALYZE. It's this one:\r\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h\r\n \r\nHere's a link to Depesz's explain (if links to the site are okay):\r\nhttp://explain.depesz.com/s/gCk\r\n \r\nI have just tried setting geqo_threshold, join_collapse_limit and from_collapse_limit to 16, but it yielded no improvement.\r\nChanging those three parameters to 32 did speed up the query from about 3.3 seconds to about a second (give or take 50 ms), which is a pretty good improvement, but not quite there, as I'm looking to bring it down to about 300 ms if possible. Changing those three settings to 48 yielded no improvements over 32.\r\nIs there possibly something something else to tweak there?\r\n \r\nHere's EXPLAIN ANALYZE output when the three settings have been set to 32:\r\nhttp://explain.depesz.com/s/cj2\r\n \r\nThank you.\r\n \r\nPeter Slapansky\r\n\r\n-----\r\n\r\nYour last explain analyze (with 3 settings set to 32) shows query duration 10ms, not 1sec.\r\nAm I wrong? \r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 13:46:50 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Sub-optimal plan for a paginated query\n on a view with another view inside of it."
},
{
"msg_contents": "You're right, it does... but it's quite odd, because I re-ran the explain-analyze statement and got the same results.\nStill, the query now runs for about a second as mentioned before, so it's almost like something's missing from the explain, but I'm certain I copied it all.\n \nI did this via pgadmin, but that shouldn't matter, should it?\n \nThank you,\n \nPeter Slapansky\n______________________________________________________________\n> Od: Igor Neyman <[email protected]>\n> Komu: \"[email protected]\" <[email protected]>, Pavel Stehule <[email protected]>\n> Dátum: 07.08.2013 15:47\n> Predmet: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: \"[email protected]\"\nYour last explain analyze (with 3 settings set to 32) shows query duration 10ms, not 1sec.\nAm I wrong? \n\nRegards,\nIgor Neyman\n \n______________________________________________________________\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\nSent: Wednesday, August 07, 2013 8:43 AM\nTo: Pavel Stehule\nCc: [email protected]\nSubject: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n\nGood day,\n \nI have included a link to the result of EXPLAIN ANALYZE. It's this one:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h <https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h>\n \nHere's a link to Depesz's explain (if links to the site are okay):\nhttp://explain.depesz.com/s/gCk <http://explain.depesz.com/s/gCk>\n \nI have just tried setting geqo_threshold, join_collapse_limit and from_collapse_limit to 16, but it yielded no improvement.\nChanging those three parameters to 32 did speed up the query from about 3.3 seconds to about a second (give or take 50 ms), which is a pretty good improvement, but not quite there, as I'm looking to bring it down to about 300 ms if possible. Changing those three settings to 48 yielded no improvements over 32.\nIs there possibly something something else to tweak there?\n \nHere's EXPLAIN ANALYZE output when the three settings have been set to 32:\nhttp://explain.depesz.com/s/cj2 <http://explain.depesz.com/s/cj2>\n \nThank you.\n \nPeter Slapansky\n\n\n\n\nYou're right, it does... but it's quite odd, because I re-ran the explain-analyze statement and got the same results.\nStill, the query now runs for about a second as mentioned before, so it's almost like something's missing from the explain, but I'm certain I copied it all.\n \nI did this via pgadmin, but that shouldn't matter, should it?\n \nThank you,\n \nPeter Slapansky\n______________________________________________________________\n> Od: Igor Neyman <[email protected]>\n> Komu: \"[email protected]\" <[email protected]>, Pavel Stehule <[email protected]>\n> Dátum: 07.08.2013 15:47\n> Predmet: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: \"[email protected]\"\nYour last explain analyze (with 3 settings set to 32) shows query duration 10ms, not 1sec.\nAm I wrong? \n\nRegards,\nIgor Neyman\n \n______________________________________________________________\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\nSent: Wednesday, August 07, 2013 8:43 AM\nTo: Pavel Stehule\nCc: [email protected]\nSubject: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n\nGood day,\n \nI have included a link to the result of EXPLAIN ANALYZE. 
It's this one:\nhttps://app.box.com/s/u8nk6qvkjs4ae7l7dh4h\n \nHere's a link to Depesz's explain (if links to the site are okay):\nhttp://explain.depesz.com/s/gCk\n \nI have just tried setting geqo_threshold, join_collapse_limit and from_collapse_limit to 16, but it yielded no improvement.\nChanging those three parameters to 32 did speed up the query from about 3.3 seconds to about a second (give or take 50 ms), which is a pretty good improvement, but not quite there, as I'm looking to bring it down to about 300 ms if possible. Changing those three settings to 48 yielded no improvements over 32.\nIs there possibly something something else to tweak there?\n \nHere's EXPLAIN ANALYZE output when the three settings have been set to 32:\nhttp://explain.depesz.com/s/cj2\n \nThank you.\n \nPeter Slapansky",
"msg_date": "Wed, 07 Aug 2013 16:42:50 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?RE=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
},
{
"msg_contents": "\r\n\r\nFrom: [email protected] [mailto:[email protected]] \r\nSent: Wednesday, August 07, 2013 10:43 AM\r\nTo: Igor Neyman; Pavel Stehule\r\nCc: [email protected]\r\nSubject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\r\n\r\nYou're right, it does... but it's quite odd, because I re-ran the explain-analyze statement and got the same results.\r\nStill, the query now runs for about a second as mentioned before, so it's almost like something's missing from the explain, but I'm certain I copied it all.\r\n \r\nI did this via pgadmin, but that shouldn't matter, should it?\r\n \r\nThank you,\r\n \r\nPeter Slapansky\r\n______________________________________________________________\r\n_________________________________________________________\r\n\r\nAt very end of explain analyze output there should be a line:\r\n\r\nTotal runtime: ....\r\n\r\nWhat do you get there?\r\n\r\nRegards,\r\nIgor Neyman\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 14:48:14 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Sub-optimal plan for a paginated query\n on a view with another view inside of it."
},
{
"msg_contents": "2013/8/7 Igor Neyman <[email protected]>:\n>\n>\n> From: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\n> Sent: Wednesday, August 07, 2013 8:43 AM\n> To: Pavel Stehule\n> Cc: [email protected]\n> Subject: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> Good day,\n>\n> I have included a link to the result of EXPLAIN ANALYZE. It's this one:\n> https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h\n>\n> Here's a link to Depesz's explain (if links to the site are okay):\n> http://explain.depesz.com/s/gCk\n>\n> I have just tried setting geqo_threshold, join_collapse_limit and from_collapse_limit to 16, but it yielded no improvement.\n> Changing those three parameters to 32 did speed up the query from about 3.3 seconds to about a second (give or take 50 ms), which is a pretty good improvement, but not quite there, as I'm looking to bring it down to about 300 ms if possible. Changing those three settings to 48 yielded no improvements over 32.\n> Is there possibly something something else to tweak there?\n>\n> Here's EXPLAIN ANALYZE output when the three settings have been set to 32:\n> http://explain.depesz.com/s/cj2\n>\n> Thank you.\n>\n> Peter Slapansky\n>\n> -----\n>\n> Your last explain analyze (with 3 settings set to 32) shows query duration 10ms, not 1sec.\n> Am I wrong?\n\nI afraid so 1 sec is planning time :( .. So execution is fast, but\nplanning is expensive and relatively slow .. maybe prepared statements\ncan helps in this case.\n\nRegards\n\nPavel\n\n>\n> Regards,\n> Igor Neyman\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 16:48:22 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Sub-optimal plan for a paginated query on\n a view with another view inside of it."
},
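A hedged sketch of the prepared-statement suggestion (names and parameters here are illustrative, not the real view): the plan can be built once and reused across executions, so repeated runs avoid paying the roughly one-second planning cost each time, provided the good plan does not depend on the specific parameter values.

    PREPARE page_query(int, int) AS
        SELECT * FROM some_paginated_view   -- placeholder for the real statement
        ORDER BY id
        LIMIT $1 OFFSET $2;

    EXECUTE page_query(20, 0);
    EXECUTE page_query(20, 20);             -- later pages reuse the stored plan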
{
"msg_contents": "2013/8/7 <[email protected]>:\n> You're right, it does... but it's quite odd, because I re-ran the\n> explain-analyze statement and got the same results.\n>\n> Still, the query now runs for about a second as mentioned before, so it's\n> almost like something's missing from the explain, but I'm certain I copied\n> it all.\n\nwhat is time of EXPLAIN only ?\n\nPavel\n\n>\n>\n>\n> I did this via pgadmin, but that shouldn't matter, should it?\n>\n>\n>\n> Thank you,\n>\n>\n>\n> Peter Slapansky\n>\n> ______________________________________________________________\n>> Od: Igor Neyman <[email protected]>\n>> Komu: \"[email protected]\" <[email protected]>, Pavel Stehule\n>> <[email protected]>\n>> Dátum: 07.08.2013 15:47\n>> Predmet: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated\n>> query on a view with another view inside of it.\n>>\n>\n>> CC: \"[email protected]\"\n>\n> Your last explain analyze (with 3 settings set to 32) shows query duration\n> 10ms, not 1sec.\n> Am I wrong?\n>\n> Regards,\n> Igor Neyman\n>\n>\n>\n> ______________________________________________________________\n>\n>\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of\n> [email protected]\n> Sent: Wednesday, August 07, 2013 8:43 AM\n> To: Pavel Stehule\n> Cc: [email protected]\n> Subject: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a\n> view with another view inside of it.\n>\n> Good day,\n>\n> I have included a link to the result of EXPLAIN ANALYZE. It's this one:\n> https://app.box.com/s/u8nk6qvkjs4ae7l7dh4h\n>\n> Here's a link to Depesz's explain (if links to the site are okay):\n> http://explain.depesz.com/s/gCk\n>\n> I have just tried setting geqo_threshold, join_collapse_limit and\n> from_collapse_limit to 16, but it yielded no improvement.\n> Changing those three parameters to 32 did speed up the query from about 3.3\n> seconds to about a second (give or take 50 ms), which is a pretty good\n> improvement, but not quite there, as I'm looking to bring it down to about\n> 300 ms if possible. Changing those three settings to 48 yielded no\n> improvements over 32.\n> Is there possibly something something else to tweak there?\n>\n> Here's EXPLAIN ANALYZE output when the three settings have been set to 32:\n> http://explain.depesz.com/s/cj2\n>\n> Thank you.\n>\n> Peter Slapansky\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 16:49:11 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Sub-optimal plan for a paginated query on\n a view with another view inside of it."
},
{
"msg_contents": "I got:\n\"Total runtime: 9.313 ms\" in pgAdmin\n\"Total runtime: 9.363 ms\" in psql.\nBut timing after the query finished was 912.842 ms in psql.\n \nCheers,\n \nPeter Slapansky\n______________________________________________________________\n> Od: Igor Neyman <[email protected]>\n> Komu: \"[email protected]\" <[email protected]>, Pavel Stehule <[email protected]>\n> Dátum: 07.08.2013 16:48\n> Predmet: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: \"[email protected]\"\n\n\nFrom: [email protected] [mailto:[email protected]] \nSent: Wednesday, August 07, 2013 10:43 AM\nTo: Igor Neyman; Pavel Stehule\nCc: [email protected]\nSubject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n\nYou're right, it does... but it's quite odd, because I re-ran the explain-analyze statement and got the same results.\nStill, the query now runs for about a second as mentioned before, so it's almost like something's missing from the explain, but I'm certain I copied it all.\n \nI did this via pgadmin, but that shouldn't matter, should it?\n \nThank you,\n \nPeter Slapansky\n______________________________________________________________\n_________________________________________________________\n\nAt very end of explain analyze output there should be a line:\n\nTotal runtime: ....\n\nWhat do you get there?\n\nRegards,\nIgor Neyman\n\n\nI got:\n\"Total runtime: 9.313 ms\" in pgAdmin\n\"Total runtime: 9.363 ms\" in psql.\nBut timing after the query finished was 912.842 ms in psql.\n \nCheers,\n \nPeter Slapansky\n______________________________________________________________\n> Od: Igor Neyman <[email protected]>\n> Komu: \"[email protected]\" <[email protected]>, Pavel Stehule <[email protected]>\n> Dátum: 07.08.2013 16:48\n> Predmet: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: \"[email protected]\"\n\n\nFrom: [email protected] [mailto:[email protected]] \nSent: Wednesday, August 07, 2013 10:43 AM\nTo: Igor Neyman; Pavel Stehule\nCc: [email protected]\nSubject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n\nYou're right, it does... but it's quite odd, because I re-ran the explain-analyze statement and got the same results.\nStill, the query now runs for about a second as mentioned before, so it's almost like something's missing from the explain, but I'm certain I copied it all.\n \nI did this via pgadmin, but that shouldn't matter, should it?\n \nThank you,\n \nPeter Slapansky\n______________________________________________________________\n_________________________________________________________\n\nAt very end of explain analyze output there should be a line:\n\nTotal runtime: ....\n\nWhat do you get there?\n\nRegards,\nIgor Neyman",
"msg_date": "Wed, 07 Aug 2013 17:33:49 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?RE=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
},
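Worth noting about the two numbers: the "Total runtime" printed inside EXPLAIN ANALYZE covers execution of the plan only, while psql's \timing (and pgAdmin's status line) measures the whole round trip, including the planning step discussed above. In psql the comparison looks like:

    \timing
    -- run EXPLAIN ANALYZE on the query under test, then compare the plan's
    -- "Total runtime:" line with the "Time:" line psql prints afterwards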
{
"msg_contents": "\r\n\r\nFrom: [email protected] [mailto:[email protected]] \r\nSent: Wednesday, August 07, 2013 11:34 AM\r\nTo: Igor Neyman; Pavel Stehule\r\nCc: [email protected]\r\nSubject: RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\r\n\r\nI got:\r\n\"Total runtime: 9.313 ms\" in pgAdmin\r\n\"Total runtime: 9.363 ms\" in psql.\r\nBut timing after the query finished was 912.842 ms in psql.\r\n \r\nCheers,\r\n \r\nPeter Slapansky\r\n______________________________________________________________\r\n\r\nThat proves what Pavel suggested regarding planning time.\r\n\r\nRegards,\r\nIgor Neyman\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 15:50:02 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Sub-optimal plan for a paginated query\n on a view with another view inside of it."
},
{
"msg_contents": "<[email protected]> writes:\n> \"Total runtime: 9.313 ms\" in pgAdmin\n> \"Total runtime: 9.363 ms\" in psql.\n> But timing after the query finished was 912.842 ms in psql.\n\nWell, that's the downside of increasing join_collapse_limit and\nfrom_collapse_limit: you might get a better plan, but it takes a lot\nlonger to get it because the planner is considering many more options.\n\nIf you're sufficiently desperate, you could consider rewriting the query\nso that its JOIN structure matches the join order that the planner chooses\nat the high collapse_limit settings. Then you can reduce the limits back\ndown and it'll still find the same plan. This tends to suck from a query\nreadability/maintainability standpoint though :-(.\n\nThe prepared-query approach might offer a solution too, if the good plan\nisn't dependent on specific parameter values.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 Aug 2013 11:53:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:\n =?utf-8?q?RE=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
},
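The rewrite Tom describes leans on the documented behaviour that explicit JOIN syntax constrains the join order once join_collapse_limit is lowered. Schematically, with placeholder table names, the idea is to spell the joins in the order the good plan used and then pin them:

    SET join_collapse_limit = 1;   -- planner keeps the textual JOIN order
    SELECT d.*
    FROM   driving_table d
    JOIN   mid_table  m ON m.d_id = d.id     -- written in the join order the
    JOIN   large_view v ON v.m_id = m.id;    -- fast plan chose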
{
"msg_contents": "I was afraid of something worse but hoping for something better in terms of maintainability. At least now I have a good explanation. :-)\nI just hope the embedded view use won't interfere too much.\n \nThanks everyone.\n \nRegards,\n \nPeter Slapansky\n \n______________________________________________________________\n> Od: Tom Lane <[email protected]>\n> Komu: <[email protected]>\n> Dátum: 07.08.2013 17:53\n> Predmet: Re: [PERFORM] RE: [PERFORM] Re: [PERFORM] Sub-optimal plan for a paginated query on a view with another view inside of it.\n>\n> CC: \"Igor Neyman\", \"Pavel Stehule\", \"[email protected]\"\n<[email protected]> writes:\n> \"Total runtime: 9.313 ms\" in pgAdmin\n> \"Total runtime: 9.363 ms\" in psql.\n> But timing after the query finished was 912.842 ms in psql.\n\nWell, that's the downside of increasing join_collapse_limit and\nfrom_collapse_limit: you might get a better plan, but it takes a lot\nlonger to get it because the planner is considering many more options.\n\nIf you're sufficiently desperate, you could consider rewriting the query\nso that its JOIN structure matches the join order that the planner chooses\nat the high collapse_limit settings. Then you can reduce the limits back\ndown and it'll still find the same plan. This tends to suck from a query\nreadability/maintainability standpoint though :-(.\n\nThe prepared-query approach might offer a solution too, if the good plan\nisn't dependent on specific parameter values.\n\n regards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 08 Aug 2013 08:59:42 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?q?Re=3A_=5BPERFORM=5D_RE=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Sub=2Doptimal_plan_for_a_paginated_query_on_a_view_with_another_view_inside_of_it=2E?="
}
] |
[
{
"msg_contents": "Hello,\n\nAssuming I have a huge table (doesn't fit in RAM), of which the most\nimportant fields are \"id\" which is a SERIAL PRIMARY KEY and \"active\"\nwhich is a boolean, and I'm issuing a query like:\n\nSELECT * FROM table ORDER BY id DESC LIMIT 10\n\n... is pgsql smart enough to use the index to fetch only the 10\nrequired rows instead of reading the whole table, then sorting it,\nthen trimming the result set? How about in the following queries:\n\nSELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10\n\nSELECT * FROM table WHERE active ORDER BY id DESC LIMIT 10 OFFSET 10\n\nOr, more generally, is there some set of circumstances under which the\ncatastrophic scenario will happen?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 01:04:10 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Mon, Aug 5, 2013 at 8:04 PM, Ivan Voras <[email protected]> wrote:\n> SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10\n>\n> SELECT * FROM table WHERE active ORDER BY id DESC LIMIT 10 OFFSET 10\n\nDid you try explain?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 5 Aug 2013 20:25:58 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Tue, Aug 6, 2013 at 8:25 AM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Aug 5, 2013 at 8:04 PM, Ivan Voras <[email protected]> wrote:\n> > SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10\n> >\n> > SELECT * FROM table WHERE active ORDER BY id DESC LIMIT 10 OFFSET 10\n>\n> Did you try explain?\n>\nAnd did you run ANALYZE on your table to be sure that you generate correct\nplans?\n-- \nMichael\n\nOn Tue, Aug 6, 2013 at 8:25 AM, Claudio Freire <[email protected]> wrote:\nOn Mon, Aug 5, 2013 at 8:04 PM, Ivan Voras <[email protected]> wrote:\n\n> SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10\n>\n> SELECT * FROM table WHERE active ORDER BY id DESC LIMIT 10 OFFSET 10\n\nDid you try explain?And did you run ANALYZE on your table to be sure that you generate correct plans?-- Michael",
"msg_date": "Tue, 6 Aug 2013 09:20:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "Ivan,\n\n> Or, more generally, is there some set of circumstances under which the\n> catastrophic scenario will happen?\n\nYes:\n\nSELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 100000\n\nThis is the \"high offset\" problem, and affects all databases which\nsupport applications with paginated results, including non-relational\nones like SOLR. The basic problem is that you can't figure out what is\nOFFSET 100000 without first sorting the first 100000 results.\n\nThe easiest solution is to limit the number of pages your users can\n\"flip through\". Generally anyone asking for page 10,000 is a bot\nscreen-scraping your site, anyway.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 05 Aug 2013 18:22:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
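This is easy to confirm on a large table: with a big OFFSET, the scan feeding the Limit node has to produce offset-plus-limit rows before all but the last ten are thrown away, and EXPLAIN ANALYZE shows that in its actual row counts. For example, on the one-million-row test table (lt) used later in this thread:

    EXPLAIN ANALYZE
    SELECT * FROM lt ORDER BY id DESC LIMIT 10 OFFSET 100000;
    -- expect the index scan under the Limit to report on the order of
    -- 100010 rows actually read, versus about 20 when the offset is only 10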
{
"msg_contents": "On Mon, Aug 5, 2013 at 6:22 PM, Josh Berkus <[email protected]> wrote:\n>> Or, more generally, is there some set of circumstances under which the\n>> catastrophic scenario will happen?\n>\n> Yes:\n>\n> SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 100000\n>\n> This is the \"high offset\" problem, and affects all databases which\n> support applications with paginated results, including non-relational\n> ones like SOLR. The basic problem is that you can't figure out what is\n> OFFSET 100000 without first sorting the first 100000 results.\n>\n> The easiest solution is to limit the number of pages your users can\n> \"flip through\". Generally anyone asking for page 10,000 is a bot\n> screen-scraping your site, anyway.\n\nIn addition to Josh's answer I would like to mention that it might be\nworth to use partial index like this\n\nCREATE INDEX i_table_id_active ON table (is) WHERE active\n\nin this particular case\n\nSELECT * FROM table\nWHERE active\nORDER BY id DESC\nLIMIT 10 OFFSET 10\n\nso it will prevent from long filtering tons of rows in case of long\n\"NOT active\" gaps in the beginning of the scanning sequence.\n\nAs an alternative solution for pagination (OFFSET) problem you might\nalso use the \"prev/next\" technique, like\n\nSELECT * FROM table\nWHERE id > :current_last_id\nORDER BY id LIMIT 10\n\nfor \"next\", and\n\nSELECT * FROM (\n SELECT * FROM table\n WHERE id < :current_first_id\n ORDER BY id DESC\n LIMIT 10\n) AS sq ORDER BY id\n\nfor \"prev\". It will be very fast.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 5 Aug 2013 18:42:49 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
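One small detail in the advice above: the partial index statement indexes a column named "is", which looks like a typo for "id" given the queries that follow. The intended definition would presumably be along these lines (some_table standing in for the real table, since "table" itself is a reserved word):

    CREATE INDEX i_table_id_active ON some_table (id) WHERE active;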
{
"msg_contents": "Sergey Konoplev-2 wrote\n> As an alternative solution for pagination (OFFSET) problem you might\n> also use the \"prev/next\" technique, like\n> \n> SELECT * FROM table\n> WHERE id > :current_last_id\n> ORDER BY id LIMIT 10\n> \n> for \"next\", and\n> \n> SELECT * FROM (\n> SELECT * FROM table\n> WHERE id < :current_first_id\n> ORDER BY id DESC\n> LIMIT 10\n> ) AS sq ORDER BY id\n> \n> for \"prev\". It will be very fast.\n\nEven being fairly experienced at SQL generally because I haven't explored\npagination that much my awareness of the OFFSET issue led me to conclude bad\nthings. Thank you for thinking to take the time for a brief moment of\nenlightenment of something you likely take for granted by now.\n\nCurious how much slower/faster these queries would run if you added:\n\nSELECT *, first_value(id) OVER (...), last_value(id) OVER (...) \n--note the window specifications need to overcome the \"ORDER BY\" limitation\nnoted in the documentation.\n\nto the query. Using the window functions you know at each record what the\nfirst and last ids are for its window. Applicability would be\napplication/need specific but it would avoid having to calculate/maintain\nthese two values in a separate part of the application.\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/ORDER-BY-LIMIT-and-indexes-tp5766413p5766429.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 5 Aug 2013 18:54:28 -0700 (PDT)",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
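A sketch of what that could look like (hypothetical table name; the explicit frame clause is what works around the last_value/ORDER BY default-frame limitation referred to above):

    SELECT t.*,
           first_value(id) OVER w AS first_id_on_page,
           last_value(id)  OVER w AS last_id_on_page
    FROM  (SELECT * FROM items ORDER BY id DESC LIMIT 10) AS t
    WINDOW w AS (ORDER BY id DESC
                 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING);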
{
"msg_contents": "On 6 August 2013 02:20, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Aug 6, 2013 at 8:25 AM, Claudio Freire <[email protected]>\n> wrote:\n>>\n>> On Mon, Aug 5, 2013 at 8:04 PM, Ivan Voras <[email protected]> wrote:\n>> > SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10\n>> >\n>> > SELECT * FROM table WHERE active ORDER BY id DESC LIMIT 10 OFFSET 10\n>>\n>> Did you try explain?\n>\n> And did you run ANALYZE on your table to be sure that you generate correct\n> plans?\n\nMy question was a theoretical one - about general pgsql abilities, not\na specific case.\n\nBut after prodded by you, here's an EXPLAIN ANALYZE for the first case:\n\nivoras=# explain analyze select * from lt order by id desc limit 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.29 rows=10 width=9) (actual time=0.034..0.053\nrows=10 loops=1)\n -> Index Scan Backward using lt_pkey on lt (cost=0.00..28673.34\nrows=1000000 width=9) (actual time=0.031..0.042 rows=10 loops=1)\n Total runtime: 0.115 ms\n(3 rows)\n\n(This test table has only 1 mil. records.)\n\nI'm not sure how to interpret the last line:\n(cost=0.00..28673.34 rows=1000000 width=9) (actual time=0.031..0.042\nrows=10 loops=1)\n\nIt seems to me like the planner thinks the Index Scan operation will\nreturn the entire table, and expects both a huge cost and all of the\nrecords, but its actual implementation does return only 10 records.\nThe Limit operation is a NOP in this case. Is this correct?\n\nIn the second case (with OFFSET):\nivoras=# explain analyze select * from lt order by id desc limit 10 offset 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.29..0.57 rows=10 width=9) (actual time=0.060..0.082\nrows=10 loops=1)\n -> Index Scan Backward using lt_pkey on lt (cost=0.00..28673.34\nrows=1000000 width=9) (actual time=0.040..0.061 rows=20 loops=1)\n Total runtime: 0.136 ms\n(3 rows)\n\nIt looks like the Index Scan implementation actually returns the first\n20 records - which means that for the last \"page\" of the supposed\npaginated display, pgsql will actually internally read the entire\ntable as an operation result in memory and discard almost all of it in\nthe Limit operation. Right?\n\nAnd for the third case (with another column in WITH):\nivoras=# update lt set active = false where (id % 2) = 0;\nUPDATE 500000\nivoras=# explain analyze select * from lt where active order by id\ndesc limit 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.32 rows=10 width=9) (actual time=0.135..0.186\nrows=10 loops=1)\n -> Index Scan Backward using lt_pkey on lt (cost=0.00..47380.43\nrows=1500000 width=9) (actual time=0.132..0.174 rows=10 loops=1)\n Filter: active\n Total runtime: 0.244 ms\n(4 rows)\n\nIt looks like filtering is performed in the Index Scan operation -\nwhich I never expected. Is this usually done or only for simple\nqueries. In other words, is there a circumstance where it is NOT done\nin this way, and should I care?\n\nBased on the results with OFFSET it looks like it would be a bad idea\nto use pgsql for allowing the user to browse through the records in a\npaginated form if the table is huge. 
I would very much like to be\nproven wrong :)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 12:04:13 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "Here are two more unexpected results. Same test table (1 mil. records,\n\"id\" is SERIAL PRIMARY KEY, PostgreSQL 9.1, VACUUM ANALYZE performed\nbefore the experiments):\n\nivoras=# explain analyze select * from lt where id > 900000 limit 10;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.71 rows=10 width=9) (actual\ntime=142.669..142.680 rows=10 loops=1)\n -> Seq Scan on lt (cost=0.00..17402.00 rows=101630 width=9)\n(actual time=142.665..142.672 rows=10 loops=1)\n Filter: (id > 900000)\n Total runtime: 142.735 ms\n(4 rows)\n\nNote the Seq Scan.\n\nivoras=# explain analyze select * from lt where id > 900000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on lt (cost=1683.97..7856.35 rows=101630 width=9)\n(actual time=38.462..85.780 rows=100000 loops=1)\n Recheck Cond: (id > 900000)\n -> Bitmap Index Scan on lt_pkey (cost=0.00..1658.56 rows=101630\nwidth=0) (actual time=38.310..38.310 rows=100000 loops=1)\n Index Cond: (id > 900000)\n Total runtime: 115.674 ms\n(5 rows)\n\nThis somewhat explains the above case - we are simply fetching 100,000\nrecords here, and it's slow enough even with the index scan, so\nplanner skips the index in the former case. BUT, if it did use the\nindex, it would have been expectedly fast:\n\nivoras=# set enable_seqscan to off;\nSET\nivoras=# explain analyze select * from lt where id > 900000 limit 10;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.74 rows=10 width=9) (actual time=0.081..0.112\nrows=10 loops=1)\n -> Index Scan using lt_pkey on lt (cost=0.00..17644.17\nrows=101630 width=9) (actual time=0.078..0.100 rows=10 loops=1)\n Index Cond: (id > 900000)\n Total runtime: 0.175 ms\n(4 rows)\n\nIt looks like the problem is in the difference between what the\nplanner expects and what the Filter or Index operations deliver:\n(cost=0.00..17402.00 rows=101630 width=9) (actual\ntime=142.665..142.672 rows=10 loops=1).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 12:46:29 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Tue, Aug 6, 2013 at 7:46 AM, Ivan Voras <[email protected]> wrote:\n> ivoras=# explain analyze select * from lt where id > 900000 limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1.71 rows=10 width=9) (actual\n> time=142.669..142.680 rows=10 loops=1)\n> -> Seq Scan on lt (cost=0.00..17402.00 rows=101630 width=9)\n> (actual time=142.665..142.672 rows=10 loops=1)\n> Filter: (id > 900000)\n> Total runtime: 142.735 ms\n> (4 rows)\n\n\nI think the problem lies in assuming the sequential scan will have 0\nstartup cost, which is not the case here (it will have to scan up to\nthe first page with an id > 900000).\n\nIf that were to be fixed, the index scan would be chosen instead.\n\nI don't see a way to fix it without querying the index, which could be\na costly operation... except with the newly proposed minmax indexes. I\nguess here's another case where those indexes would be useful.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 12:47:01 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Mon, Aug 5, 2013 at 9:22 PM, Josh Berkus <[email protected]> wrote:\n\n> Ivan,\n>\n> > Or, more generally, is there some set of circumstances under which the\n> > catastrophic scenario will happen?\n>\n> Yes:\n>\n> SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 100000\n>\n> This is the \"high offset\" problem, and affects all databases which\n> support applications with paginated results, including non-relational\n> ones like SOLR. The basic problem is that you can't figure out what is\n> OFFSET 100000 without first sorting the first 100000 results.\n>\n> The easiest solution is to limit the number of pages your users can\n> \"flip through\". Generally anyone asking for page 10,000 is a bot\n> screen-scraping your site, anyway.\n\n\nAnother solution is to build pages from the maximum id you pulled in the\nlast page so page one is:\nSELECT * FROM table ORDER BY id DESC LIMIT 10\nand page 2 is:\nSELECT * FROM table WHERE id > 19 ORDER BY id DESC LIMIT 10\nand page 3 is:\nSELECT * FROM table WHERE id > 37 ORDER BY id DESC LIMIT 10\nand so on. You build your urls like this:\nhttp://yousite.com/browse\nhttp://yousite.com/browse?after=19\nhttp://yousite.com/browse?after=37\nand so on.\n\nOn Mon, Aug 5, 2013 at 9:22 PM, Josh Berkus <[email protected]> wrote:\nIvan,\n\n> Or, more generally, is there some set of circumstances under which the\n> catastrophic scenario will happen?\n\nYes:\n\nSELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 100000\n\nThis is the \"high offset\" problem, and affects all databases which\nsupport applications with paginated results, including non-relational\nones like SOLR. The basic problem is that you can't figure out what is\nOFFSET 100000 without first sorting the first 100000 results.\n\nThe easiest solution is to limit the number of pages your users can\n\"flip through\". Generally anyone asking for page 10,000 is a bot\nscreen-scraping your site, anyway.Another solution is to build pages from the maximum id you pulled in the last page so page one is:SELECT * FROM table ORDER BY id DESC LIMIT 10\nand page 2 is:SELECT * FROM table WHERE id > 19 ORDER BY id DESC LIMIT 10and page 3 is:SELECT * FROM table WHERE id > 37 ORDER BY id DESC LIMIT 10and so on. You build your urls like this:\nhttp://yousite.com/browsehttp://yousite.com/browse?after=19http://yousite.com/browse?after=37\nand so on.",
"msg_date": "Tue, 6 Aug 2013 12:09:44 -0400",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
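A sketch of the keyset ("browse by last id") approach described in the message above, using the lt test table from earlier in this thread. Note that with ORDER BY id DESC the next-page condition is id < the smallest id already shown; the id > form in the example fits an ascending sort. The literal 42 is only a stand-in for that last-seen value:

    -- first page
    SELECT * FROM lt ORDER BY id DESC LIMIT 10;

    -- later pages: pass the smallest id from the previous page (42 is hypothetical)
    SELECT * FROM lt WHERE id < 42 ORDER BY id DESC LIMIT 10;

Each page is then a short backward scan on lt_pkey, no matter how deep the user has paged.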
{
"msg_contents": "On Tue, Aug 6, 2013 at 3:04 AM, Ivan Voras <[email protected]> wrote:\n>\n> ivoras=# explain analyze select * from lt order by id desc limit 10;\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.29 rows=10 width=9) (actual time=0.034..0.053\n> rows=10 loops=1)\n> -> Index Scan Backward using lt_pkey on lt (cost=0.00..28673.34\n> rows=1000000 width=9) (actual time=0.031..0.042 rows=10 loops=1)\n> Total runtime: 0.115 ms\n> (3 rows)\n>\n> (This test table has only 1 mil. records.)\n>\n> I'm not sure how to interpret the last line:\n> (cost=0.00..28673.34 rows=1000000 width=9) (actual time=0.031..0.042\n> rows=10 loops=1)\n>\n> It seems to me like the planner thinks the Index Scan operation will\n> return the entire table, and expects both a huge cost and all of the\n> records, but its actual implementation does return only 10 records.\n> The Limit operation is a NOP in this case. Is this correct?\n\nThe Index Scan line reports the cost estimate of the entire scan, the\nLimit line chops that cost estimate down based on the limit.\n\n>\n> In the second case (with OFFSET):\n> ivoras=# explain analyze select * from lt order by id desc limit 10 offset 10;\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.29..0.57 rows=10 width=9) (actual time=0.060..0.082\n> rows=10 loops=1)\n> -> Index Scan Backward using lt_pkey on lt (cost=0.00..28673.34\n> rows=1000000 width=9) (actual time=0.040..0.061 rows=20 loops=1)\n> Total runtime: 0.136 ms\n> (3 rows)\n>\n> It looks like the Index Scan implementation actually returns the first\n> 20 records - which means that for the last \"page\" of the supposed\n> paginated display, pgsql will actually internally read the entire\n> table as an operation result in memory and discard almost all of it in\n> the Limit operation. Right?\n\nIt would not necessarily do it in memory. If it used an index scan\n(which it probably would not), it would just need to count the rows\nskipped over, not store them. If it used a sort, it might spill to\ndisk.\n\n\n>\n> And for the third case (with another column in WITH):\n> ivoras=# update lt set active = false where (id % 2) = 0;\n> UPDATE 500000\n> ivoras=# explain analyze select * from lt where active order by id\n> desc limit 10;\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.32 rows=10 width=9) (actual time=0.135..0.186\n> rows=10 loops=1)\n> -> Index Scan Backward using lt_pkey on lt (cost=0.00..47380.43\n> rows=1500000 width=9) (actual time=0.132..0.174 rows=10 loops=1)\n> Filter: active\n> Total runtime: 0.244 ms\n> (4 rows)\n>\n> It looks like filtering is performed in the Index Scan operation -\n> which I never expected. Is this usually done or only for simple\n> queries. In other words, is there a circumstance where it is NOT done\n> in this way, and should I care?\n\nI would expect the filter to be done at the first point it can\nconveniently be done at. You probably shouldn't care.\n\n> Based on the results with OFFSET it looks like it would be a bad idea\n> to use pgsql for allowing the user to browse through the records in a\n> paginated form if the table is huge. 
I would very much like to be\n> proven wrong :)\n\nDo you expect to have a lot of users who will hit \"next page\" 50,000\ntimes and carefully read through each set of results? I think this is\na problem I would worry about when it knocked on my door. Unless I\nwas mostly worried about denial of service attacks.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 09:57:17 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On 06/08/13 22:46, Ivan Voras wrote:\n> Here are two more unexpected results. Same test table (1 mil. records,\n> \"id\" is SERIAL PRIMARY KEY, PostgreSQL 9.1, VACUUM ANALYZE performed\n> before the experiments):\n>\n> ivoras=# explain analyze select * from lt where id > 900000 limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1.71 rows=10 width=9) (actual\n> time=142.669..142.680 rows=10 loops=1)\n> -> Seq Scan on lt (cost=0.00..17402.00 rows=101630 width=9)\n> (actual time=142.665..142.672 rows=10 loops=1)\n> Filter: (id > 900000)\n> Total runtime: 142.735 ms\n> (4 rows)\n>\n> Note the Seq Scan.\n>\n> ivoras=# explain analyze select * from lt where id > 900000;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on lt (cost=1683.97..7856.35 rows=101630 width=9)\n> (actual time=38.462..85.780 rows=100000 loops=1)\n> Recheck Cond: (id > 900000)\n> -> Bitmap Index Scan on lt_pkey (cost=0.00..1658.56 rows=101630\n> width=0) (actual time=38.310..38.310 rows=100000 loops=1)\n> Index Cond: (id > 900000)\n> Total runtime: 115.674 ms\n> (5 rows)\n>\n> This somewhat explains the above case - we are simply fetching 100,000\n> records here, and it's slow enough even with the index scan, so\n> planner skips the index in the former case. BUT, if it did use the\n> index, it would have been expectedly fast:\n>\n> ivoras=# set enable_seqscan to off;\n> SET\n> ivoras=# explain analyze select * from lt where id > 900000 limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1.74 rows=10 width=9) (actual time=0.081..0.112\n> rows=10 loops=1)\n> -> Index Scan using lt_pkey on lt (cost=0.00..17644.17\n> rows=101630 width=9) (actual time=0.078..0.100 rows=10 loops=1)\n> Index Cond: (id > 900000)\n> Total runtime: 0.175 ms\n> (4 rows)\n>\n> It looks like the problem is in the difference between what the\n> planner expects and what the Filter or Index operations deliver:\n> (cost=0.00..17402.00 rows=101630 width=9) (actual\n> time=142.665..142.672 rows=10 loops=1).\n>\n>\n\nHmm - I wonder if the lack or ORDER BY is part of the problem here. 
\nConsider a similar query on pgbench_accounts:\n\nbench=# explain analyze select aid from pgbench_accounts where aid > \n100000 limit 20;\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.91 rows=20 width=4) (actual time=0.005..0.464 \nrows=20 loops=1)\n -> Seq Scan on pgbench_accounts (cost=0.00..499187.31 rows=10994846 \nwidth=4) (actual time=0.005..0.463 rows=20 loops=1)\n Filter: (aid > 100000)\n Total runtime: 0.474 ms\n(4 rows)\n\nbench=# explain analyze select aid from pgbench_accounts where aid > \n10000000 limit 20;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.014..0.018 \nrows=20 loops=1)\n -> Index Scan using pgbench_accounts_pkey on pgbench_accounts \n(cost=0.00..207204.06 rows=1844004 width=4) (actual time=0.014..0.017 \nrows=20 loops=1)\n Index Cond: (aid > 10000000)\n Total runtime: 0.030 ms\n(4 rows)\n\n\nSo at some point you get index scans. Now add an ORDER BY:\n\nbench=# explain analyze select aid from pgbench_accounts where aid > \n100000 order by aid limit 20;\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n--\n Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.008..0.012 \nrows=20 loops=1)\n -> Index Scan using pgbench_accounts_pkey on pgbench_accounts \n(cost=0.00..1235355.34 rows=10994846 width=4) (actual time=0.008..0.011 \nrows=20 loops=1\n)\n Index Cond: (aid > 100000)\n Total runtime: 0.023 ms\n(4 rows)\n\nbench=# explain analyze select aid from pgbench_accounts where aid > \n10000000 order by aid limit 20;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.014..0.018 \nrows=20 loops=1)\n -> Index Scan using pgbench_accounts_pkey on pgbench_accounts \n(cost=0.00..207204.06 rows=1844004 width=4) (actual time=0.014..0.016 \nrows=20 loops=1)\n Index Cond: (aid > 10000000)\n Total runtime: 0.029 ms\n(4 rows)\n\n\n...and we have index scans for both cases.\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 Aug 2013 10:56:31 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Tue, Aug 6, 2013 at 7:56 PM, Mark Kirkwood\n<[email protected]> wrote:\n> Hmm - I wonder if the lack or ORDER BY is part of the problem here. Consider\n> a similar query on pgbench_accounts:\n>\n> bench=# explain analyze select aid from pgbench_accounts where aid > 100000\n> limit 20;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.91 rows=20 width=4) (actual time=0.005..0.464 rows=20\n> loops=1)\n> -> Seq Scan on pgbench_accounts (cost=0.00..499187.31 rows=10994846\n> width=4) (actual time=0.005..0.463 rows=20 loops=1)\n> Filter: (aid > 100000)\n> Total runtime: 0.474 ms\n> (4 rows)\n>\n> bench=# explain analyze select aid from pgbench_accounts where aid >\n> 10000000 limit 20;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.014..0.018 rows=20\n> loops=1)\n> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n> (cost=0.00..207204.06 rows=1844004 width=4) (actual time=0.014..0.017\n> rows=20 loops=1)\n> Index Cond: (aid > 10000000)\n> Total runtime: 0.030 ms\n> (4 rows)\n>\n>\n> So at some point you get index scans. Now add an ORDER BY:\n>\n> bench=# explain analyze select aid from pgbench_accounts where aid > 100000\n> order by aid limit 20;\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> --\n> Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.008..0.012 rows=20\n> loops=1)\n> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n> (cost=0.00..1235355.34 rows=10994846 width=4) (actual time=0.008..0.011\n> rows=20 loops=1\n> )\n> Index Cond: (aid > 100000)\n> Total runtime: 0.023 ms\n> (4 rows)\n>\n> bench=# explain analyze select aid from pgbench_accounts where aid >\n> 10000000 order by aid limit 20;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.014..0.018 rows=20\n> loops=1)\n> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n> (cost=0.00..207204.06 rows=1844004 width=4) (actual time=0.014..0.016\n> rows=20 loops=1)\n> Index Cond: (aid > 10000000)\n> Total runtime: 0.029 ms\n> (4 rows)\n>\n>\n> ...and we have index scans for both cases.\n>\n> Cheers\n>\n> Mark\n\nYes, but those index scans decisions are driven by the wrong factor.\nIn the last two cases, the need for rows to be ordered. In the second\ncase, the estimated number of tuples in the scan.\n\nIn both cases, that's not the driving factor for the right decision.\nThe driving factor *should* be startup cost, which is nonzero because\nthere is a filter being applied to that sequential scan that filters\nmany of the initial tuples. With a nonzero startup cost, the cost of\nthe limited plan would be \"startup cost + scan cost * scanned\nfraction\". When scanned fraction is low enough, startup cost dominates\nthe equation.\n\nWith a min/max index, a cheap query to that index could estimate at\nleast a lower bound to that initial zero-rows-output cost. 
With b-tree\nindexes, not so much (the b-tree would have to be traversed until the\nfirst filter-passing tuple, which could be a while).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 20:03:59 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Tue, Aug 6, 2013 at 8:03 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Aug 6, 2013 at 7:56 PM, Mark Kirkwood\n> <[email protected]> wrote:\n>> Hmm - I wonder if the lack or ORDER BY is part of the problem here. Consider\n>> a similar query on pgbench_accounts:\n>>\n>> bench=# explain analyze select aid from pgbench_accounts where aid > 100000\n>> limit 20;\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..0.91 rows=20 width=4) (actual time=0.005..0.464 rows=20\n>> loops=1)\n>> -> Seq Scan on pgbench_accounts (cost=0.00..499187.31 rows=10994846\n>> width=4) (actual time=0.005..0.463 rows=20 loops=1)\n>> Filter: (aid > 100000)\n>> Total runtime: 0.474 ms\n>> (4 rows)\n>>\n>> bench=# explain analyze select aid from pgbench_accounts where aid >\n>> 10000000 limit 20;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.014..0.018 rows=20\n>> loops=1)\n>> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n>> (cost=0.00..207204.06 rows=1844004 width=4) (actual time=0.014..0.017\n>> rows=20 loops=1)\n>> Index Cond: (aid > 10000000)\n>> Total runtime: 0.030 ms\n>> (4 rows)\n>>\n>>\n>> So at some point you get index scans. Now add an ORDER BY:\n>>\n>> bench=# explain analyze select aid from pgbench_accounts where aid > 100000\n>> order by aid limit 20;\n>> QUERY PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> --\n>> Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.008..0.012 rows=20\n>> loops=1)\n>> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n>> (cost=0.00..1235355.34 rows=10994846 width=4) (actual time=0.008..0.011\n>> rows=20 loops=1\n>> )\n>> Index Cond: (aid > 100000)\n>> Total runtime: 0.023 ms\n>> (4 rows)\n>>\n>> bench=# explain analyze select aid from pgbench_accounts where aid >\n>> 10000000 order by aid limit 20;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..2.25 rows=20 width=4) (actual time=0.014..0.018 rows=20\n>> loops=1)\n>> -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n>> (cost=0.00..207204.06 rows=1844004 width=4) (actual time=0.014..0.016\n>> rows=20 loops=1)\n>> Index Cond: (aid > 10000000)\n>> Total runtime: 0.029 ms\n>> (4 rows)\n>>\n>>\n>> ...and we have index scans for both cases.\n>>\n>> Cheers\n>>\n>> Mark\n>\n> Yes, but those index scans decisions are driven by the wrong factor.\n> In the last two cases, the need for rows to be ordered. In the second\n> case, the estimated number of tuples in the scan.\n>\n> In both cases, that's not the driving factor for the right decision.\n> The driving factor *should* be startup cost, which is nonzero because\n> there is a filter being applied to that sequential scan that filters\n> many of the initial tuples. With a nonzero startup cost, the cost of\n> the limited plan would be \"startup cost + scan cost * scanned\n> fraction\". 
When scanned fraction is low enough, startup cost dominates\n> the equation.\n>\n> With a min/max index, a cheap query to that index could estimate at\n> least a lower bound to that initial zero-rows-output cost. With b-tree\n> indexes, not so much (the b-tree would have to be traversed until the\n> first filter-passing tuple, which could be a while).\n\nOk, silly me. A min/max index would *elliminate* the startup cost.\n\nSo, what's a good way to estimate it?\n\nPerhaps value-page correlation could be used, but it would cover a\nreally narrow case (monotonous sequences).\n\nAlternatively, a token startup cost could be added to those kinds of\nfiltered sequential scans, when the filtering term is selective\nenough. That would offset the cost just a little bit, but enough to\nfavor index over sequential on the right cases.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 20:09:57 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Mon, Aug 5, 2013 at 6:54 PM, David Johnston <[email protected]> wrote:\n> Curious how much slower/faster these queries would run if you added:\n>\n> SELECT *, first_value(id) OVER (...), last_value(id) OVER (...)\n> --note the window specifications need to overcome the \"ORDER BY\" limitation\n> noted in the documentation.\n\nTo be honest I can not understand how are you going to specify partition here.\n\nOr you are talking about wrapping the original query like this\n\nSELECT *, first_value(id) OVER (), last_value(id) OVER () FROM (\n SELECT * FROM table\n WHERE id > :current_last_id\n ORDER BY id LIMIT 10\n) AS sq2;\n\n?\n\nHowever, in this case using min()/max() instead of\nfist_value()/last_value() will be faster as it does not require to do\nadditional scan on subquery results.\n\nIn general I do not think it would be much slower if we are not\ntalking about thousands of results on one page.\n\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 16:50:49 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
{
"msg_contents": "On Tue, Aug 6, 2013 at 3:46 AM, Ivan Voras <[email protected]> wrote:\n> Here are two more unexpected results. Same test table (1 mil. records,\n> \"id\" is SERIAL PRIMARY KEY, PostgreSQL 9.1, VACUUM ANALYZE performed\n> before the experiments):\n>\n> ivoras=# explain analyze select * from lt where id > 900000 limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1.71 rows=10 width=9) (actual\n> time=142.669..142.680 rows=10 loops=1)\n> -> Seq Scan on lt (cost=0.00..17402.00 rows=101630 width=9)\n> (actual time=142.665..142.672 rows=10 loops=1)\n> Filter: (id > 900000)\n> Total runtime: 142.735 ms\n> (4 rows)\n\n[skipped]\n\n> ivoras=# set enable_seqscan to off;\n> SET\n> ivoras=# explain analyze select * from lt where id > 900000 limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1.74 rows=10 width=9) (actual time=0.081..0.112\n> rows=10 loops=1)\n> -> Index Scan using lt_pkey on lt (cost=0.00..17644.17\n> rows=101630 width=9) (actual time=0.078..0.100 rows=10 loops=1)\n> Index Cond: (id > 900000)\n> Total runtime: 0.175 ms\n> (4 rows)\n>\n> It looks like the problem is in the difference between what the\n> planner expects and what the Filter or Index operations deliver:\n> (cost=0.00..17402.00 rows=101630 width=9) (actual\n> time=142.665..142.672 rows=10 loops=1).\n\nThis might be caused by not accurate random_page_cost setting. This\nparameter gives planner a hint of how much it would cost to perform a\nrandom page read used by index scans. It looks like you need to\ndecrease random_page_cost.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nProfile: http://www.linkedin.com/in/grayhemp\nPhone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 Aug 2013 17:00:22 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
},
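A hedged way to try the suggestion above for a single session before changing postgresql.conf; the value 1.1 is only an illustrative assumption (it suits fully cached or SSD-backed data) and does not come from this thread:

    SET random_page_cost = 1.1;   -- session-local experiment only
    EXPLAIN ANALYZE SELECT * FROM lt WHERE id > 900000 LIMIT 10;
    RESET random_page_cost;       -- back to the configured value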
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> Alternatively, a token startup cost could be added to those kinds of\n> filtered sequential scans, when the filtering term is selective\n> enough. That would offset the cost just a little bit, but enough to\n> favor index over sequential on the right cases.\n\nMaybe not so \"token\". Really, if there's a filter condition having a\nselectivity of say 1/N, we should expect to have to skip over O(N) tuples\nbefore finding a match. (Not sure at this late hour if the expectation is\nN, or N/2, or something else ... but anyway it's in that ballpark.) We\ndon't take that into account in computing the startup time of a plan node,\nbut I'm thinking we should. In this particular example, that would\npenalize the seqscan plan and not the indexscan plan, because in the\nindexscan case the condition is being applied by the index; it's not an\nafter-the-fact filter.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 Aug 2013 00:52:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY, LIMIT and indexes"
}
] |
[
{
"msg_contents": "Hi,\n\nI've seen a couple of bad queries go through one instance and I'm wondering\nwhether there's something simple that can be done to help.\n\nNot running the query in the first place is what I am looking to do\nultimately but in the meantime, I'm interested in understanding more about\nthe plan below.\n\nThe query itself is very simple: a primary key lookup on a 1.5x10^7 rows.\nThe issue is that we are looking up over 11,000 primary keys at once,\ncausing the db to consume a lot of CPU.\n\nWhere I'm not sure I follow, is the discrepancy between the planned and\nactual rows.\n\nBitmap Heap Scan on dim_context c (cost=6923.33..11762.31 rows=1\nwidth=329) (actual time=17128.121..22031.783 rows=10858 loops=1)\n\nWould a sequential scan be more beneficial? The table itself is about 10GB\non a 64GB box, 30% of these 10GB are buffered in RAM from what I can tell.\n\nThanks for your help,\n\nAlexis\n\nHere are the full details.\n\nexplain (analyze, buffers)\nSELECT c.key,\n c.x_id,\n c.tags,\n c.source_type_id,\n x.api_key\n FROM dim_context c\n join x on c.x_id = x.id\n WHERE c.key = ANY (ARRAY[15368196, (11,000 other keys)])\n AND ((c.x_id = 1 AND c.tags @> ARRAY[E'blah']))\n\nHere is the plan, abridged\n\n Nested Loop (cost=6923.33..11770.59 rows=1 width=362) (actual\ntime=17128.188..22109.283 rows=10858 loops=1)\n Buffers: shared hit=83494\n -> Bitmap Heap Scan on dim_context c (cost=6923.33..11762.31 rows=1\nwidth=329) (actual time=17128.121..22031.783 rows=10858 loops=1)\n Recheck Cond: ((tags @> '{blah}'::text[]) AND (x_id = 1))\n Filter: (key = ANY ('{15368196,(a lot more keys\nhere)}'::integer[]))\n Buffers: shared hit=50919\n -> BitmapAnd (cost=6923.33..6923.33 rows=269 width=0) (actual\ntime=132.910..132.910 rows=0 loops=1)\n Buffers: shared hit=1342\n -> Bitmap Index Scan on dim_context_tags_idx\n (cost=0.00..1149.61 rows=15891 width=0) (actual time=64.614..64.614\nrows=264777 loops=1)\n Index Cond: (tags @> '{blah}'::text[])\n Buffers: shared hit=401\n -> Bitmap Index Scan on dim_context_x_id_source_type_id_idx\n (cost=0.00..5773.47 rows=268667 width=0) (actual time=54.648..54.648\nrows=267659 loops=1)\n Index Cond: (x_id = 1)\n Buffers: shared hit=941\n -> Index Scan using x_pkey on x (cost=0.00..8.27 rows=1 width=37)\n(actual time=0.003..0.004 rows=1 loops=10858)\n Index Cond: (x.id = 1)\n Buffers: shared hit=32575\n Total runtime: 22117.417 ms\n\nAnd here are the stats\n\n attname | null_frac | avg_width | n_distinct | correlation\n----------------+-----------+-----------+------------+-------------\n key | 0 | 4 | -1 | 0.999558\n x_id | 0 | 4 | 1498 | 0.351316\n h_id | 0.05632 | 4 | 116570 | 0.653092\n tags | 0.0544567 | 284 | 454877 | -0.169626\n source_type_id | 0 | 4 | 23 | 0.39552\n handle | 0 | 248 | -1 | 0.272456\n created | 0 | 8 | -0.645231 | 0.999559\n modified | 0 | 8 | -0.645231 | 0.999559\n\nHi,I've seen a couple of bad queries go through one instance and I'm wondering whether there's something simple that can be done to help.Not running the query in the first place is what I am looking to do ultimately but in the meantime, I'm interested in understanding more about the plan below.\nThe query itself is very simple: a primary key lookup on a 1.5x10^7 rows. 
The issue is that we are looking up over 11,000 primary keys at once, causing the db to consume a lot of CPU.\nWhere I'm not sure I follow, is the discrepancy between the planned and actual rows.Bitmap Heap Scan on dim_context c (cost=6923.33..11762.31 rows=1 width=329) (actual time=17128.121..22031.783 rows=10858 loops=1)\nWould a sequential scan be more beneficial? The table itself is about 10GB on a 64GB box, 30% of these 10GB are buffered in RAM from what I can tell.Thanks for your help,\nAlexisHere are the full details.explain (analyze, buffers)SELECT c.key, c.x_id, c.tags,\n c.source_type_id,\n x.api_key FROM dim_context c join x on c.x_id = x.id WHERE c.key = ANY (ARRAY[15368196, (11,000 other keys)]) AND ((c.x_id = 1 AND c.tags @> ARRAY[E'blah']))\nHere is the plan, abridged Nested Loop (cost=6923.33..11770.59 rows=1 width=362) (actual time=17128.188..22109.283 rows=10858 loops=1) Buffers: shared hit=83494\n -> Bitmap Heap Scan on dim_context c (cost=6923.33..11762.31 rows=1 width=329) (actual time=17128.121..22031.783 rows=10858 loops=1) Recheck Cond: ((tags @> '{blah}'::text[]) AND (x_id = 1))\n Filter: (key = ANY ('{15368196,(a lot more keys here)}'::integer[])) Buffers: shared hit=50919 -> BitmapAnd (cost=6923.33..6923.33 rows=269 width=0) (actual time=132.910..132.910 rows=0 loops=1)\n Buffers: shared hit=1342 -> Bitmap Index Scan on dim_context_tags_idx (cost=0.00..1149.61 rows=15891 width=0) (actual time=64.614..64.614 rows=264777 loops=1) Index Cond: (tags @> '{blah}'::text[])\n Buffers: shared hit=401 -> Bitmap Index Scan on dim_context_x_id_source_type_id_idx (cost=0.00..5773.47 rows=268667 width=0) (actual time=54.648..54.648 rows=267659 loops=1)\n Index Cond: (x_id = 1) Buffers: shared hit=941 -> Index Scan using x_pkey on x (cost=0.00..8.27 rows=1 width=37) (actual time=0.003..0.004 rows=1 loops=10858)\n Index Cond: (x.id = 1) Buffers: shared hit=32575 Total runtime: 22117.417 msAnd here are the stats\n attname | null_frac | avg_width | n_distinct | correlation\n----------------+-----------+-----------+------------+------------- key | 0 | 4 | -1 | 0.999558 x_id | 0 | 4 | 1498 | 0.351316\n h_id | 0.05632 | 4 | 116570 | 0.653092 tags | 0.0544567 | 284 | 454877 | -0.169626 source_type_id | 0 | 4 | 23 | 0.39552\n handle | 0 | 248 | -1 | 0.272456 created | 0 | 8 | -0.645231 | 0.999559 modified | 0 | 8 | -0.645231 | 0.999559",
"msg_date": "Wed, 7 Aug 2013 11:38:47 -0400",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Better performance possible for a pathological query?"
},
{
"msg_contents": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]> writes:\n> The query itself is very simple: a primary key lookup on a 1.5x10^7 rows.\n> The issue is that we are looking up over 11,000 primary keys at once,\n> causing the db to consume a lot of CPU.\n\nIt looks like most of the runtime is probably going into checking the\nc.key = ANY (ARRAY[...]) construct. PG isn't especially smart about that\nif it fails to optimize the construct into an index operation --- I think\nit's just searching the array linearly for each row meeting the other\nrestrictions on c.\n\nYou could try writing the test like this:\n c.key = ANY (VALUES (1), (17), (42), ...)\nto see if the sub-select code path gives better results than the array\ncode path. In a quick check it looked like this might produce a hash\njoin, which seemed promising anyway.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 Aug 2013 12:07:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better performance possible for a pathological query?"
},
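Spelled out against the original query, the rewrite suggested above looks roughly like this (a sketch only; the key list stays elided exactly as in the original post, and only the ANY(ARRAY[...]) construct changes):

    SELECT c.key, c.x_id, c.tags, c.source_type_id, x.api_key
      FROM dim_context c
      JOIN x ON c.x_id = x.id
     WHERE c.key = ANY (VALUES (15368196) /* , ... the remaining ~11,000 keys ... */)
       AND c.x_id = 1
       AND c.tags @> ARRAY[E'blah'];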
{
"msg_contents": "On Wed, Aug 7, 2013 at 12:07 PM, Tom Lane <[email protected]> wrote:\n\n> =?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]> writes:\n> > The query itself is very simple: a primary key lookup on a 1.5x10^7 rows.\n> > The issue is that we are looking up over 11,000 primary keys at once,\n> > causing the db to consume a lot of CPU.\n>\n> It looks like most of the runtime is probably going into checking the\n> c.key = ANY (ARRAY[...]) construct. PG isn't especially smart about that\n> if it fails to optimize the construct into an index operation --- I think\n> it's just searching the array linearly for each row meeting the other\n> restrictions on c.\n>\n> You could try writing the test like this:\n> c.key = ANY (VALUES (1), (17), (42), ...)\n> to see if the sub-select code path gives better results than the array\n> code path. In a quick check it looked like this might produce a hash\n> join, which seemed promising anyway.\n>\n> regards, tom lane\n>\n\nThank you very much Tom, your suggestion is spot on. Runtime decreased\n100-fold, from 20s to 200ms with a simple search-and-replace.\n\nHere's the updated plan for the record.\n\n Nested Loop (cost=168.22..2116.29 rows=148 width=362) (actual\ntime=22.134..256.531 rows=10858 loops=1)\n Buffers: shared hit=44967\n -> Index Scan using x_pkey on x (cost=0.00..8.27 rows=1 width=37)\n(actual time=0.071..0.073 rows=1 loops=1)\n Index Cond: (id = 1)\n Buffers: shared hit=4\n -> Nested Loop (cost=168.22..2106.54 rows=148 width=329) (actual\ntime=22.060..242.406 rows=10858 loops=1)\n Buffers: shared hit=44963\n -> HashAggregate (cost=168.22..170.22 rows=200 width=4) (actual\ntime=21.529..32.820 rows=11215 loops=1)\n -> Values Scan on \"*VALUES*\" (cost=0.00..140.19 rows=11215\nwidth=4) (actual time=0.005..9.527 rows=11215 loops=1)\n -> Index Scan using dim_context_pkey on dim_context c\n (cost=0.00..9.67 rows=1 width=329) (actual time=0.015..0.016 rows=1\nloops=11215)\n Index Cond: (c.key = \"*VALUES*\".column1)\n Filter: ((c.tags @> '{blah}'::text[]) AND (c.org_id = 1))\n Buffers: shared hit=44963\n Total runtime: 263.639 ms\n\nOn Wed, Aug 7, 2013 at 12:07 PM, Tom Lane <[email protected]> wrote:\n=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]> writes:\n> The query itself is very simple: a primary key lookup on a 1.5x10^7 rows.\n> The issue is that we are looking up over 11,000 primary keys at once,\n> causing the db to consume a lot of CPU.\n\nIt looks like most of the runtime is probably going into checking the\nc.key = ANY (ARRAY[...]) construct. PG isn't especially smart about that\nif it fails to optimize the construct into an index operation --- I think\nit's just searching the array linearly for each row meeting the other\nrestrictions on c.\n\nYou could try writing the test like this:\n c.key = ANY (VALUES (1), (17), (42), ...)\nto see if the sub-select code path gives better results than the array\ncode path. In a quick check it looked like this might produce a hash\njoin, which seemed promising anyway.\n\n regards, tom lane\nThank you very much Tom, your suggestion is spot on. Runtime decreased 100-fold, from 20s to 200ms with a simple search-and-replace.\nHere's the updated plan for the record. 
Nested Loop (cost=168.22..2116.29 rows=148 width=362) (actual time=22.134..256.531 rows=10858 loops=1) \n Buffers: shared hit=44967 -> Index Scan using x_pkey on x (cost=0.00..8.27 rows=1 width=37) (actual time=0.071..0.073 rows=1 loops=1)\n\n Index Cond: (id = 1) Buffers: shared hit=4 -> Nested Loop (cost=168.22..2106.54 rows=148 width=329) (actual time=22.060..242.406 rows=10858 loops=1)\n Buffers: shared hit=44963 -> HashAggregate (cost=168.22..170.22 rows=200 width=4) (actual time=21.529..32.820 rows=11215 loops=1)\n\n -> Values Scan on \"*VALUES*\" (cost=0.00..140.19 rows=11215 width=4) (actual time=0.005..9.527 rows=11215 loops=1) -> Index Scan using dim_context_pkey on dim_context c (cost=0.00..9.67 rows=1 width=329) (actual time=0.015..0.016 rows=1 loops=11215)\n Index Cond: (c.key = \"*VALUES*\".column1) Filter: ((c.tags @> '{blah}'::text[]) AND (c.org_id = 1))\n\n Buffers: shared hit=44963 Total runtime: 263.639 ms",
"msg_date": "Wed, 7 Aug 2013 12:28:45 -0400",
"msg_from": "=?UTF-8?B?QWxleGlzIEzDqi1RdcO0Yw==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better performance possible for a pathological query?"
}
] |
[
{
"msg_contents": "Let's say I have a table something like this:\n\n create table call_activity (\n id int8 not null,\n called timestamp,\n user_id int8 not null,\n primary key (id)\n foreign key (user_id) references my_users\n )\n\n\nI want to get the last call_activity record for a single user. Is there\nANY way to efficiently retrieve the last record for a specified user_id, or\ndo I need to de-normalize and update a table with a single row for each\nuser each time a new call_activity record is inserted? I know I how to do\nthe query without the summary table (subquery or GROUP BY with MAX) but\nthat seems like it will never perform well for large data sets. Or am I\nfull of beans and it should perform just fine for a huge data set as long\nas I have an index on \"called\"?\n\nThanks in advance!\n\nLet's say I have a table something like this: create table call_activity ( id int8 not null, called timestamp,\n user_id int8 not null, primary key (id) foreign key (user_id) references my_users )I want to get the last call_activity record for a single user. Is there ANY way to efficiently retrieve the last record for a specified user_id, or do I need to de-normalize and update a table with a single row for each user each time a new call_activity record is inserted? I know I how to do the query without the summary table (subquery or GROUP BY with MAX) but that seems like it will never perform well for large data sets. Or am I full of beans and it should perform just fine for a huge data set as long as I have an index on \"called\"? \nThanks in advance!",
"msg_date": "Wed, 7 Aug 2013 11:12:48 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficiently query for the most recent record for a given user"
},
{
"msg_contents": "On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco <[email protected]> wrote:\n> Let's say I have a table something like this:\n>\n> create table call_activity (\n> id int8 not null,\n> called timestamp,\n> user_id int8 not null,\n> primary key (id)\n> foreign key (user_id) references my_users\n> )\n>\n>\n> I want to get the last call_activity record for a single user. Is there ANY\n> way to efficiently retrieve the last record for a specified user_id, or do I\n> need to de-normalize and update a table with a single row for each user each\n> time a new call_activity record is inserted? I know I how to do the query\n> without the summary table (subquery or GROUP BY with MAX) but that seems\n> like it will never perform well for large data sets. Or am I full of beans\n> and it should perform just fine for a huge data set as long as I have an\n> index on \"called\"?\n\n\nCreate an index over (user_id, called desc), and do\n\nselect * from call_activity where user_id = blarg order by called desc limit 1\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 15:19:32 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
},
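The same advice written out as DDL plus query, sketched against the call_activity definition above; the index name and the user id 42 are made up for illustration, and the DESC in the index definition is optional, as the next message notes:

    CREATE INDEX call_activity_user_called_idx
        ON call_activity (user_id, called DESC);

    SELECT *
      FROM call_activity
     WHERE user_id = 42          -- 42 stands in for the user of interest
     ORDER BY called DESC
     LIMIT 1;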
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco <[email protected]> wrote:\n>> I want to get the last call_activity record for a single user.\n\n> Create an index over (user_id, called desc), and do\n> select * from call_activity where user_id = blarg order by called desc limit 1\n\nNote that there's no particular need to specify \"desc\" in the index\ndefinition. This same index can support searches in either direction\non the \"called\" column.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 Aug 2013 14:34:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a given user"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Claudio Freire\n> Sent: Wednesday, August 07, 2013 2:20 PM\n> To: Robert DiFalco\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Efficiently query for the most recent record for a\n> given user\n> \n> On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco <[email protected]>\n> wrote:\n> > Let's say I have a table something like this:\n> >\n> > create table call_activity (\n> > id int8 not null,\n> > called timestamp,\n> > user_id int8 not null,\n> > primary key (id)\n> > foreign key (user_id) references my_users\n> > )\n> >\n> >\n> > I want to get the last call_activity record for a single user. Is\n> > there ANY way to efficiently retrieve the last record for a specified\n> > user_id, or do I need to de-normalize and update a table with a single\n> > row for each user each time a new call_activity record is inserted? I\n> > know I how to do the query without the summary table (subquery or\n> > GROUP BY with MAX) but that seems like it will never perform well for\n> > large data sets. Or am I full of beans and it should perform just fine\n> > for a huge data set as long as I have an index on \"called\"?\n> \n> \n> Create an index over (user_id, called desc), and do\n> \n> select * from call_activity where user_id = blarg order by called desc limit 1\n> \n\nAnd most recent call for every user:\n\nSELECT id, user_id, MAX(called) OVER (PARTITION BY user_id) FROM call_activity;\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 18:35:05 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
},
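For the "most recent call for every user" variant, a DISTINCT ON query is a common alternative sketch that returns one whole row per user rather than annotating each row with the per-user maximum, and it can reuse the same (user_id, called) index:

    SELECT DISTINCT ON (user_id) *
      FROM call_activity
     ORDER BY user_id, called DESC;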
{
"msg_contents": "Thanks guys!\n\n\nOn Wed, Aug 7, 2013 at 11:35 AM, Igor Neyman <[email protected]> wrote:\n\n> > -----Original Message-----\n> > From: [email protected] [mailto:pgsql-\n> > [email protected]] On Behalf Of Claudio Freire\n> > Sent: Wednesday, August 07, 2013 2:20 PM\n> > To: Robert DiFalco\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Efficiently query for the most recent record for a\n> > given user\n> >\n> > On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco <[email protected]\n> >\n> > wrote:\n> > > Let's say I have a table something like this:\n> > >\n> > > create table call_activity (\n> > > id int8 not null,\n> > > called timestamp,\n> > > user_id int8 not null,\n> > > primary key (id)\n> > > foreign key (user_id) references my_users\n> > > )\n> > >\n> > >\n> > > I want to get the last call_activity record for a single user. Is\n> > > there ANY way to efficiently retrieve the last record for a specified\n> > > user_id, or do I need to de-normalize and update a table with a single\n> > > row for each user each time a new call_activity record is inserted? I\n> > > know I how to do the query without the summary table (subquery or\n> > > GROUP BY with MAX) but that seems like it will never perform well for\n> > > large data sets. Or am I full of beans and it should perform just fine\n> > > for a huge data set as long as I have an index on \"called\"?\n> >\n> >\n> > Create an index over (user_id, called desc), and do\n> >\n> > select * from call_activity where user_id = blarg order by called desc\n> limit 1\n> >\n>\n> And most recent call for every user:\n>\n> SELECT id, user_id, MAX(called) OVER (PARTITION BY user_id) FROM\n> call_activity;\n>\n> Regards,\n> Igor Neyman\n>\n>\n\nThanks guys!On Wed, Aug 7, 2013 at 11:35 AM, Igor Neyman <[email protected]> wrote:\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Claudio Freire\n> Sent: Wednesday, August 07, 2013 2:20 PM\n> To: Robert DiFalco\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Efficiently query for the most recent record for a\n> given user\n>\n> On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco <[email protected]>\n> wrote:\n> > Let's say I have a table something like this:\n> >\n> > create table call_activity (\n> > id int8 not null,\n> > called timestamp,\n> > user_id int8 not null,\n> > primary key (id)\n> > foreign key (user_id) references my_users\n> > )\n> >\n> >\n> > I want to get the last call_activity record for a single user. Is\n> > there ANY way to efficiently retrieve the last record for a specified\n> > user_id, or do I need to de-normalize and update a table with a single\n> > row for each user each time a new call_activity record is inserted? I\n> > know I how to do the query without the summary table (subquery or\n> > GROUP BY with MAX) but that seems like it will never perform well for\n> > large data sets. Or am I full of beans and it should perform just fine\n> > for a huge data set as long as I have an index on \"called\"?\n>\n>\n> Create an index over (user_id, called desc), and do\n>\n> select * from call_activity where user_id = blarg order by called desc limit 1\n>\n\nAnd most recent call for every user:\n\nSELECT id, user_id, MAX(called) OVER (PARTITION BY user_id) FROM call_activity;\n\nRegards,\nIgor Neyman",
"msg_date": "Wed, 7 Aug 2013 11:39:07 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
},
{
"msg_contents": "On Wed, Aug 7, 2013 at 3:34 PM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> On Wed, Aug 7, 2013 at 3:12 PM, Robert DiFalco <[email protected]> wrote:\n>>> I want to get the last call_activity record for a single user.\n>\n>> Create an index over (user_id, called desc), and do\n>> select * from call_activity where user_id = blarg order by called desc limit 1\n>\n> Note that there's no particular need to specify \"desc\" in the index\n> definition. This same index can support searches in either direction\n> on the \"called\" column.\n\n\nYeah, but it's faster if it's in the same direction, because the\nkernel read-ahead code detects sequential reads, whereas it doesn't\nwhen it goes backwards. The difference can be up to a factor of 10 for\nlong index scans.\n\nThough... true... for a limit 1... it wouldn't matter that much. But\nit's become habit to match index sort order by now.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 15:39:45 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Wed, Aug 7, 2013 at 3:34 PM, Tom Lane <[email protected]> wrote:\n>> Note that there's no particular need to specify \"desc\" in the index\n>> definition. This same index can support searches in either direction\n>> on the \"called\" column.\n\n> Yeah, but it's faster if it's in the same direction, because the\n> kernel read-ahead code detects sequential reads, whereas it doesn't\n> when it goes backwards. The difference can be up to a factor of 10 for\n> long index scans.\n\nColor me skeptical. Index searches are seldom purely sequential block\naccesses. Maybe if you had a freshly built index that'd never yet\nsuffered any inserts/updates, but in practice any advantage would\ndisappear very quickly after a few index page splits.\n\n> Though... true... for a limit 1... it wouldn't matter that much.\n\nThat's the other point.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 Aug 2013 15:04:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a given user"
},
{
"msg_contents": "Claudio Freire escribi�:\n> On Wed, Aug 7, 2013 at 3:34 PM, Tom Lane <[email protected]> wrote:\n\n> > Note that there's no particular need to specify \"desc\" in the index\n> > definition. This same index can support searches in either direction\n> > on the \"called\" column.\n> \n> Yeah, but it's faster if it's in the same direction, because the\n> kernel read-ahead code detects sequential reads, whereas it doesn't\n> when it goes backwards. The difference can be up to a factor of 10 for\n> long index scans.\n\nThat might be true when an index is new, but as it grows, the leaf pages\nare not going to be sequential anymore. And this doesn't much apply for\nan equality lookup anyway, does it?\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 15:05:48 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
},
{
"msg_contents": "On Wed, Aug 7, 2013 at 4:04 PM, Tom Lane <[email protected]> wrote:\n>> Yeah, but it's faster if it's in the same direction, because the\n>> kernel read-ahead code detects sequential reads, whereas it doesn't\n>> when it goes backwards. The difference can be up to a factor of 10 for\n>> long index scans.\n>\n> Color me skeptical. Index searches are seldom purely sequential block\n> accesses. Maybe if you had a freshly built index that'd never yet\n> suffered any inserts/updates, but in practice any advantage would\n> disappear very quickly after a few index page splits.\n\nMaybe.\n\nI've tested on pgbench test databases, which I'm not sure whether\nthey're freshly built indexes or incrementally built ones, and it\napplies there (in fact backward index-only scans was one of the\nworkloads the read-ahead patch improved the most).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Aug 2013 16:13:48 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
},
{
"msg_contents": "On Thu, Aug 8, 2013 at 5:01 PM, Kevin Grittner <[email protected]> wrote:\n> Claudio Freire <[email protected]> wrote:\n>> On Wed, Aug 7, 2013 at 4:04 PM, Tom Lane <[email protected]> wrote:\n>>>> Yeah, but it's faster if it's in the same direction, because the\n>>>> kernel read-ahead code detects sequential reads, whereas it doesn't\n>>>> when it goes backwards. The difference can be up to a factor of 10 for\n>>>> long index scans.\n>>>\n>>> Color me skeptical. Index searches are seldom purely sequential block\n>>> accesses. Maybe if you had a freshly built index that'd never yet\n>>> suffered any inserts/updates, but in practice any advantage would\n>>> disappear very quickly after a few index page splits.\n>>\n>> Maybe.\n>>\n>> I've tested on pgbench test databases, which I'm not sure whether\n>> they're freshly built indexes or incrementally built ones, and it\n>> applies there (in fact backward index-only scans was one of the\n>> workloads the read-ahead patch improved the most).\n>\n> It's been a while, but when I was touching the btree code for the\n> SSI implementation I thought I saw something about a reverse scan\n> needing to visit the parent page in cases where a forward scan\n> doesn't, due to the locking techniques used in btree. I don't know\n> how significant those extra trips up and down the tree are, but\n> they must cost *something*.\n\n From my benchmarks at the time (with pgbench), they seldom ever\nhappen, so even if they cost a lot, they don't add up to much.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 8 Aug 2013 17:08:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently query for the most recent record for a\n given user"
}
] |
[
{
"msg_contents": "In my system a user can have external contacts. When I am bringing in\nexternal contacts I want to correlate any other existing users in the\nsystem with those external contacts. A users external contacts may or may\nnot be users in my system. I have a user_id field in \"contacts\" that is\nNULL if that contact is not a user in my system\n\nCurrently I do something like this after reading in external contacts:\n\n UPDATE contacts SET user_id = u.id\n FROM my_users u\n JOIN phone_numbers pn ON u.phone_significant = pn.significant\n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND\ncontacts.id= pn.ref_contact_id;\n\nIf any of the fields are not self explanatory let me know. \"Significant\" is\njust the right 7 most digits of a raw phone number.\n\nI'm more interested in possible improvements to my relational logic than\nthe details of the \"significant\" condition. IOW, I'm start enough to\noptimize the \"significant\" query but not smart enough to know if this is\nthe best approach for the overall correlated UPDATE query. :)\n\nSo yeah, is this the best way to update a contact's user_id reference based\non a contacts phone number matching the phone number of a user?\n\nOne detail from the schema -- A contact can have many phone numbers but a\nuser in my system will only ever have just one phone number. Hence the JOIN\nto \"phone_numbers\" versus the column in \"my_users\".\n\nThanks.\n\nIn my system a user can have external contacts. When I am bringing in external contacts I want to correlate any other existing users in the system with those external contacts. A users external contacts may or may not be users in my system. I have a user_id field in \"contacts\" that is NULL if that contact is not a user in my system\nCurrently I do something like this after reading in external contacts: UPDATE contacts SET user_id = u.id FROM my_users u JOIN phone_numbers pn ON u.phone_significant = pn.significant \n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id = pn.ref_contact_id;If any of the fields are not self explanatory let me know. \"Significant\" is just the right 7 most digits of a raw phone number. \nI'm more interested in possible improvements to my relational logic than the details of the \"significant\" condition. IOW, I'm start enough to optimize the \"significant\" query but not smart enough to know if this is the best approach for the overall correlated UPDATE query. :)\nSo yeah, is this the best way to update a contact's user_id reference based on a contacts phone number matching the phone number of a user?One detail from the schema -- A contact can have many phone numbers but a user in my system will only ever have just one phone number. Hence the JOIN to \"phone_numbers\" versus the column in \"my_users\".\nThanks.",
"msg_date": "Thu, 8 Aug 2013 11:06:41 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficient Correlated Update"
},
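A sketch of indexes that a correlated UPDATE like the one above can use, reusing the table and column names from the message (the index names are made up):

CREATE INDEX contacts_owner_unmatched_idx
    ON contacts (owner_id) WHERE user_id IS NULL;
CREATE INDEX phone_numbers_significant_idx
    ON phone_numbers (significant);
CREATE INDEX phone_numbers_ref_contact_idx
    ON phone_numbers (ref_contact_id);
CREATE INDEX my_users_phone_significant_idx
    ON my_users (phone_significant);

With these in place the planner can start from the small set of unmatched contacts for one owner and probe the other tables by index instead of hashing whole tables.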
{
"msg_contents": "Guys, let me know if I have not provided enough information on this post.\nThanks!\n\n\nOn Thu, Aug 8, 2013 at 11:06 AM, Robert DiFalco <[email protected]>wrote:\n\n> In my system a user can have external contacts. When I am bringing in\n> external contacts I want to correlate any other existing users in the\n> system with those external contacts. A users external contacts may or may\n> not be users in my system. I have a user_id field in \"contacts\" that is\n> NULL if that contact is not a user in my system\n>\n> Currently I do something like this after reading in external contacts:\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u\n> JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND\n> contacts.id = pn.ref_contact_id;\n>\n> If any of the fields are not self explanatory let me know. \"Significant\"\n> is just the right 7 most digits of a raw phone number.\n>\n> I'm more interested in possible improvements to my relational logic than\n> the details of the \"significant\" condition. IOW, I'm start enough to\n> optimize the \"significant\" query but not smart enough to know if this is\n> the best approach for the overall correlated UPDATE query. :)\n>\n> So yeah, is this the best way to update a contact's user_id reference\n> based on a contacts phone number matching the phone number of a user?\n>\n> One detail from the schema -- A contact can have many phone numbers but a\n> user in my system will only ever have just one phone number. Hence the JOIN\n> to \"phone_numbers\" versus the column in \"my_users\".\n>\n> Thanks.\n>\n\nGuys, let me know if I have not provided enough information on this post. Thanks!On Thu, Aug 8, 2013 at 11:06 AM, Robert DiFalco <[email protected]> wrote:\nIn my system a user can have external contacts. When I am bringing in external contacts I want to correlate any other existing users in the system with those external contacts. A users external contacts may or may not be users in my system. I have a user_id field in \"contacts\" that is NULL if that contact is not a user in my system\nCurrently I do something like this after reading in external contacts: UPDATE contacts SET user_id = u.id FROM my_users u \n JOIN phone_numbers pn ON u.phone_significant = pn.significant \n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id = pn.ref_contact_id;If any of the fields are not self explanatory let me know. \"Significant\" is just the right 7 most digits of a raw phone number. \nI'm more interested in possible improvements to my relational logic than the details of the \"significant\" condition. IOW, I'm start enough to optimize the \"significant\" query but not smart enough to know if this is the best approach for the overall correlated UPDATE query. :)\nSo yeah, is this the best way to update a contact's user_id reference based on a contacts phone number matching the phone number of a user?One detail from the schema -- A contact can have many phone numbers but a user in my system will only ever have just one phone number. Hence the JOIN to \"phone_numbers\" versus the column in \"my_users\".\nThanks.",
"msg_date": "Thu, 8 Aug 2013 15:21:31 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficient Correlated Update"
},
{
"msg_contents": "Robert DiFalco <[email protected]> wrote:\n\n> In my system a user can have external contacts. When I am\n> bringing in external contacts I want to correlate any other\n> existing users in the system with those external contacts. A\n> users external contacts may or may not be users in my system. I\n> have a user_id field in \"contacts\" that is NULL if that contact\n> is not a user in my system\n>\n> Currently I do something like this after reading in external\n> contacts:\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u \n> JOIN phone_numbers pn ON u.phone_significant = pn.significant \n> WHERE contacts.owner_id = 7\n> AND contacts.user_id IS NULL\n> AND contacts.id = pn.ref_contact_id;\n>\n> If any of the fields are not self explanatory let me know.\n> \"Significant\" is just the right 7 most digits of a raw phone\n> number. \n>\n> I'm more interested in possible improvements to my relational\n> logic than the details of the \"significant\" condition. IOW, I'm\n> start enough to optimize the \"significant\" query but not smart\n> enough to know if this is the best approach for the overall\n> correlated UPDATE query. :)\n>\n> So yeah, is this the best way to update a contact's user_id\n> reference based on a contacts phone number matching the phone\n> number of a user?\n>\n> One detail from the schema -- A contact can have many phone\n> numbers but a user in my system will only ever have just one\n> phone number. Hence the JOIN to \"phone_numbers\" versus the column\n> in \"my_users\".\n \nIn looking it over, nothing jumped out at me as a problem. Are you\nhaving some problem with it, like poor performance or getting\nresults different from what you expected?\n\n-- \nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Aug 2013 08:44:15 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficient Correlated Update"
},
{
"msg_contents": "I sometimes experience that updating smaller sets is more efficient than\ndoing all at once in one transaction (talking about 10000+)\n\nAlways make sure the update references can make use of indices\n\n\nOn Fri, Aug 9, 2013 at 5:44 PM, Kevin Grittner <[email protected]> wrote:\n\n> Robert DiFalco <[email protected]> wrote:\n>\n> > In my system a user can have external contacts. When I am\n> > bringing in external contacts I want to correlate any other\n> > existing users in the system with those external contacts. A\n> > users external contacts may or may not be users in my system. I\n> > have a user_id field in \"contacts\" that is NULL if that contact\n> > is not a user in my system\n> >\n> > Currently I do something like this after reading in external\n> > contacts:\n> >\n> > UPDATE contacts SET user_id = u.id\n> > FROM my_users u\n> > JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> > WHERE contacts.owner_id = 7\n> > AND contacts.user_id IS NULL\n> > AND contacts.id = pn.ref_contact_id;\n> >\n> > If any of the fields are not self explanatory let me know.\n> > \"Significant\" is just the right 7 most digits of a raw phone\n> > number.\n> >\n> > I'm more interested in possible improvements to my relational\n> > logic than the details of the \"significant\" condition. IOW, I'm\n> > start enough to optimize the \"significant\" query but not smart\n> > enough to know if this is the best approach for the overall\n> > correlated UPDATE query. :)\n> >\n> > So yeah, is this the best way to update a contact's user_id\n> > reference based on a contacts phone number matching the phone\n> > number of a user?\n> >\n> > One detail from the schema -- A contact can have many phone\n> > numbers but a user in my system will only ever have just one\n> > phone number. Hence the JOIN to \"phone_numbers\" versus the column\n> > in \"my_users\".\n>\n> In looking it over, nothing jumped out at me as a problem. Are you\n> having some problem with it, like poor performance or getting\n> results different from what you expected?\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI sometimes experience that updating smaller sets is more efficient than doing all at once in one transaction (talking about 10000+)Always make sure the update references can make use of indices\nOn Fri, Aug 9, 2013 at 5:44 PM, Kevin Grittner <[email protected]> wrote:\nRobert DiFalco <[email protected]> wrote:\n\n> In my system a user can have external contacts. When I am\n> bringing in external contacts I want to correlate any other\n> existing users in the system with those external contacts. A\n> users external contacts may or may not be users in my system. 
I\n> have a user_id field in \"contacts\" that is NULL if that contact\n> is not a user in my system\n>\n> Currently I do something like this after reading in external\n> contacts:\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u\n> JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> WHERE contacts.owner_id = 7\n> AND contacts.user_id IS NULL\n> AND contacts.id = pn.ref_contact_id;\n>\n> If any of the fields are not self explanatory let me know.\n> \"Significant\" is just the right 7 most digits of a raw phone\n> number.\n>\n> I'm more interested in possible improvements to my relational\n> logic than the details of the \"significant\" condition. IOW, I'm\n> start enough to optimize the \"significant\" query but not smart\n> enough to know if this is the best approach for the overall\n> correlated UPDATE query. :)\n>\n> So yeah, is this the best way to update a contact's user_id\n> reference based on a contacts phone number matching the phone\n> number of a user?\n>\n> One detail from the schema -- A contact can have many phone\n> numbers but a user in my system will only ever have just one\n> phone number. Hence the JOIN to \"phone_numbers\" versus the column\n> in \"my_users\".\n \nIn looking it over, nothing jumped out at me as a problem. Are you\nhaving some problem with it, like poor performance or getting\nresults different from what you expected?\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 9 Aug 2013 17:49:27 +0200",
"msg_from": "Klaus Ita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficient Correlated Update"
},
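A minimal sketch of the batched variant suggested above, assuming contacts.id is an integer key (the window bounds are only illustrative):

-- Run repeatedly, sliding the id window forward; each statement commits
-- on its own, so locks and old row versions are released between batches.
UPDATE contacts SET user_id = u.id
FROM my_users u
JOIN phone_numbers pn ON u.phone_significant = pn.significant
WHERE contacts.owner_id = 7
  AND contacts.user_id IS NULL
  AND contacts.id = pn.ref_contact_id
  AND contacts.id BETWEEN 1 AND 10000;  -- next pass: BETWEEN 10001 AND 20000, and so on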
{
"msg_contents": "Well, heh I'm no SQL expert. I kinda piece things together the best I can\nfrom what I can read and this was really the only way I could make the\nUPDATE work correctly. But the plan looks complicated with a lot of hash\nconditions, hash joins, and scans. I'm worried it wont perform with a very\nlarge dataset.\n\nHere's the plan:\n\nUpdate on public.contacts (cost=16.64..27.22 rows=42 width=163) (actual\ntime=1.841..1.841 rows=0 loops=1)\n -> Hash Join (cost=16.64..27.22 rows=42 width=163) (actual\ntime=1.837..1.837 rows=0 loops=1)\n Output: contacts.dtype, contacts.id, contacts.blocked,\ncontacts.fname, contacts.last_call, contacts.lname, contacts.hash,\ncontacts.record_id, contacts.fb_id, contacts.owner_id, u.id,\ncontacts.device, contacts.ctid, u.ctid, e.ctid\n Hash Cond: ((u.phone_short)::text = (e.significant)::text)\n -> Seq Scan on public.wai_users u (cost=0.00..10.36 rows=120\nwidth=46) (actual time=0.022..0.028 rows=6 loops=1)\n Output: u.id, u.ctid, u.phone_short\n -> Hash (cost=16.24..16.24 rows=116 width=157) (actual\ntime=1.744..1.744 rows=87 loops=1)\n Output: contacts.dtype, contacts.id, contacts.blocked,\ncontacts.fname, contacts.last_call, contacts.lname, contacts.hash,\ncontacts.record_id, contacts.fb_id, contacts.owner_id, contacts.device,\ncontacts.ctid, e.ctid, e.significant\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> Hash Join (cost=10.47..16.24 rows=116 width=157)\n(actual time=0.636..1.583 rows=87 loops=1)\n Output: contacts.dtype, contacts.id, contacts.blocked,\ncontacts.fname, contacts.last_call, contacts.lname, contacts.hash,\ncontacts.record_id, contacts.fb_id, contacts.owner_id, contacts.device,\ncontacts.ctid, e.ctid, e.significant\n Hash Cond: (e.owner_id = contacts.id)\n -> Seq Scan on public.phone_numbers e\n (cost=0.00..5.13 rows=378 width=22) (actual time=0.008..0.467 rows=378\nloops=1)\n Output: e.ctid, e.significant, e.owner_id\n -> Hash (cost=9.89..9.89 rows=166 width=143) (actual\ntime=0.578..0.578 rows=124 loops=1)\n Output: contacts.dtype, contacts.id,\ncontacts.blocked, contacts.fname, contacts.last_call, contacts.lname,\ncontacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id,\ncontacts.device, contacts.ctid\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\n -> Seq Scan on public.contacts\n (cost=0.00..9.89 rows=166 width=143) (actual time=0.042..0.365 rows=124\nloops=1)\n Output: contacts.dtype, contacts.id,\ncontacts.blocked, contacts.fname, contacts.last_call, contacts.lname,\ncontacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id,\ncontacts.device, contacts.ctid\n Filter: ((contacts.user_id IS NULL) AND\n(contacts.owner_id = 7))\n Rows Removed by Filter: 290\n Total runtime: 2.094 ms\n(22 rows)\n\nIf I wasn't having to update I could write a query like this which seems\nlike it has a much better plan:\n\ndfmvu2a0bvs93n=> explain analyze verbose SELECT c.id\n\n FROM wai_users u\n\n JOIN\nphone_numbers e ON u.phone_short = e.significant\n\n JOIN contacts c ON c.id = e.owner_id\n\n WHERE\nc.owner_id = 5 AND c.user_id IS NULL\n\n ;\n\n\n QUERY PLAN\n\n -------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..7.18 rows=1 width=8) (actual time=0.091..0.091\nrows=0 loops=1)\n Output: c.id\n -> Nested Loop (cost=0.00..7.06 rows=1 width=16) (actual\ntime=0.089..0.089 rows=0 loops=1)\n Output: e.significant, c.id\n -> Index Scan using idx_contact_owner on public.contacts c\n (cost=0.00..3.00 
rows=1 width=8) (actual time=0.086..0.086 rows=0 loops=1)\n Output: c.dtype, c.id, c.blocked, c.fname, c.last_call,\nc.lname, c.hash, c.record_id, c.fb_id, c.owner_id, c.user_id, c.device\n Index Cond: (c.owner_id = 5)\n Filter: (c.user_id IS NULL)\n -> Index Scan using idx_phone_owner on public.phone_numbers e\n (cost=0.00..4.06 rows=1 width=16) (never executed)\n Output: e.id, e.raw_number, e.significant, e.owner_id\n Index Cond: (e.owner_id = c.id)\n -> Index Only Scan using idx_user_short_phone on public.wai_users u\n (cost=0.00..0.12 rows=1 width=32) (never executed)\n Output: u.phone_short\n Index Cond: (u.phone_short = (e.significant)::text)\n Heap Fetches: 0\n Total runtime: 0.158 ms\n(16 rows)\n\n\n\n\nOn Fri, Aug 9, 2013 at 8:44 AM, Kevin Grittner <[email protected]> wrote:\n\n> Robert DiFalco <[email protected]> wrote:\n>\n> > In my system a user can have external contacts. When I am\n> > bringing in external contacts I want to correlate any other\n> > existing users in the system with those external contacts. A\n> > users external contacts may or may not be users in my system. I\n> > have a user_id field in \"contacts\" that is NULL if that contact\n> > is not a user in my system\n> >\n> > Currently I do something like this after reading in external\n> > contacts:\n> >\n> > UPDATE contacts SET user_id = u.id\n> > FROM my_users u\n> > JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> > WHERE contacts.owner_id = 7\n> > AND contacts.user_id IS NULL\n> > AND contacts.id = pn.ref_contact_id;\n> >\n> > If any of the fields are not self explanatory let me know.\n> > \"Significant\" is just the right 7 most digits of a raw phone\n> > number.\n> >\n> > I'm more interested in possible improvements to my relational\n> > logic than the details of the \"significant\" condition. IOW, I'm\n> > start enough to optimize the \"significant\" query but not smart\n> > enough to know if this is the best approach for the overall\n> > correlated UPDATE query. :)\n> >\n> > So yeah, is this the best way to update a contact's user_id\n> > reference based on a contacts phone number matching the phone\n> > number of a user?\n> >\n> > One detail from the schema -- A contact can have many phone\n> > numbers but a user in my system will only ever have just one\n> > phone number. Hence the JOIN to \"phone_numbers\" versus the column\n> > in \"my_users\".\n>\n> In looking it over, nothing jumped out at me as a problem. Are you\n> having some problem with it, like poor performance or getting\n> results different from what you expected?\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nWell, heh I'm no SQL expert. I kinda piece things together the best I can from what I can read and this was really the only way I could make the UPDATE work correctly. But the plan looks complicated with a lot of hash conditions, hash joins, and scans. 
I'm worried it wont perform with a very large dataset.\nHere's the plan:Update on public.contacts (cost=16.64..27.22 rows=42 width=163) (actual time=1.841..1.841 rows=0 loops=1) -> Hash Join (cost=16.64..27.22 rows=42 width=163) (actual time=1.837..1.837 rows=0 loops=1)\n Output: contacts.dtype, contacts.id, contacts.blocked, contacts.fname, contacts.last_call, contacts.lname, contacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id, u.id, contacts.device, contacts.ctid, u.ctid, e.ctid\n Hash Cond: ((u.phone_short)::text = (e.significant)::text) -> Seq Scan on public.wai_users u (cost=0.00..10.36 rows=120 width=46) (actual time=0.022..0.028 rows=6 loops=1) Output: u.id, u.ctid, u.phone_short\n -> Hash (cost=16.24..16.24 rows=116 width=157) (actual time=1.744..1.744 rows=87 loops=1) Output: contacts.dtype, contacts.id, contacts.blocked, contacts.fname, contacts.last_call, contacts.lname, contacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id, contacts.device, contacts.ctid, e.ctid, e.significant\n Buckets: 1024 Batches: 1 Memory Usage: 12kB -> Hash Join (cost=10.47..16.24 rows=116 width=157) (actual time=0.636..1.583 rows=87 loops=1) Output: contacts.dtype, contacts.id, contacts.blocked, contacts.fname, contacts.last_call, contacts.lname, contacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id, contacts.device, contacts.ctid, e.ctid, e.significant\n Hash Cond: (e.owner_id = contacts.id) -> Seq Scan on public.phone_numbers e (cost=0.00..5.13 rows=378 width=22) (actual time=0.008..0.467 rows=378 loops=1)\n Output: e.ctid, e.significant, e.owner_id -> Hash (cost=9.89..9.89 rows=166 width=143) (actual time=0.578..0.578 rows=124 loops=1) Output: contacts.dtype, contacts.id, contacts.blocked, contacts.fname, contacts.last_call, contacts.lname, contacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id, contacts.device, contacts.ctid\n Buckets: 1024 Batches: 1 Memory Usage: 16kB -> Seq Scan on public.contacts (cost=0.00..9.89 rows=166 width=143) (actual time=0.042..0.365 rows=124 loops=1)\n Output: contacts.dtype, contacts.id, contacts.blocked, contacts.fname, contacts.last_call, contacts.lname, contacts.hash, contacts.record_id, contacts.fb_id, contacts.owner_id, contacts.device, contacts.ctid\n Filter: ((contacts.user_id IS NULL) AND (contacts.owner_id = 7)) Rows Removed by Filter: 290 Total runtime: 2.094 ms(22 rows)\nIf I wasn't having to update I could write a query like this which seems like it has a much better plan:dfmvu2a0bvs93n=> explain analyze verbose SELECT c.id FROM wai_users u JOIN phone_numbers e ON u.phone_short = e.significant JOIN contacts c ON c.id = e.owner_id WHERE c.owner_id = 5 AND c.user_id IS NULL ; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..7.18 rows=1 width=8) (actual time=0.091..0.091 rows=0 loops=1) Output: c.id -> Nested Loop (cost=0.00..7.06 rows=1 width=16) (actual time=0.089..0.089 rows=0 loops=1)\n Output: e.significant, c.id -> Index Scan using idx_contact_owner on public.contacts c (cost=0.00..3.00 rows=1 width=8) (actual time=0.086..0.086 rows=0 loops=1)\n Output: c.dtype, c.id, c.blocked, c.fname, c.last_call, c.lname, c.hash, c.record_id, c.fb_id, c.owner_id, c.user_id, c.device Index Cond: (c.owner_id = 5)\n Filter: (c.user_id IS NULL) -> Index Scan using idx_phone_owner on public.phone_numbers e (cost=0.00..4.06 rows=1 width=16) (never executed) Output: e.id, 
e.raw_number, e.significant, e.owner_id\n Index Cond: (e.owner_id = c.id) -> Index Only Scan using idx_user_short_phone on public.wai_users u (cost=0.00..0.12 rows=1 width=32) (never executed)\n Output: u.phone_short Index Cond: (u.phone_short = (e.significant)::text) Heap Fetches: 0 Total runtime: 0.158 ms(16 rows)\nOn Fri, Aug 9, 2013 at 8:44 AM, Kevin Grittner <[email protected]> wrote:\nRobert DiFalco <[email protected]> wrote:\n\n> In my system a user can have external contacts. When I am\n> bringing in external contacts I want to correlate any other\n> existing users in the system with those external contacts. A\n> users external contacts may or may not be users in my system. I\n> have a user_id field in \"contacts\" that is NULL if that contact\n> is not a user in my system\n>\n> Currently I do something like this after reading in external\n> contacts:\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u\n> JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> WHERE contacts.owner_id = 7\n> AND contacts.user_id IS NULL\n> AND contacts.id = pn.ref_contact_id;\n>\n> If any of the fields are not self explanatory let me know.\n> \"Significant\" is just the right 7 most digits of a raw phone\n> number.\n>\n> I'm more interested in possible improvements to my relational\n> logic than the details of the \"significant\" condition. IOW, I'm\n> start enough to optimize the \"significant\" query but not smart\n> enough to know if this is the best approach for the overall\n> correlated UPDATE query. :)\n>\n> So yeah, is this the best way to update a contact's user_id\n> reference based on a contacts phone number matching the phone\n> number of a user?\n>\n> One detail from the schema -- A contact can have many phone\n> numbers but a user in my system will only ever have just one\n> phone number. Hence the JOIN to \"phone_numbers\" versus the column\n> in \"my_users\".\n \nIn looking it over, nothing jumped out at me as a problem. Are you\nhaving some problem with it, like poor performance or getting\nresults different from what you expected?\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 9 Aug 2013 08:52:39 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficient Correlated Update"
},
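One way to see how the UPDATE itself behaves on a larger data set, without keeping the changes, is to run it under EXPLAIN (ANALYZE, BUFFERS) in a transaction that is rolled back. EXPLAIN ANALYZE really executes the statement, which is why the ROLLBACK matters; this is only a testing sketch:

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE contacts SET user_id = u.id
FROM my_users u
JOIN phone_numbers pn ON u.phone_significant = pn.significant
WHERE contacts.owner_id = 7
  AND contacts.user_id IS NULL
  AND contacts.id = pn.ref_contact_id;
ROLLBACK;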
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Kevin Grittner\n> Sent: Friday, August 09, 2013 11:44 AM\n> To: Robert DiFalco; [email protected]\n> Subject: Re: [PERFORM] Efficient Correlated Update\n> \n> Robert DiFalco <[email protected]> wrote:\n> \n> > In my system a user can have external contacts. When I am bringing in\n> > external contacts I want to correlate any other existing users in the\n> > system with those external contacts. A users external contacts may or\n> > may not be users in my system. I have a user_id field in \"contacts\"\n> > that is NULL if that contact is not a user in my system\n> >\n> > Currently I do something like this after reading in external\n> > contacts:\n> >\n> > UPDATE contacts SET user_id = u.id\n> > FROM my_users u\n> > JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> > WHERE contacts.owner_id = 7\n> > AND contacts.user_id IS NULL\n> > AND contacts.id = pn.ref_contact_id;\n> >\n> > If any of the fields are not self explanatory let me know.\n> > \"Significant\" is just the right 7 most digits of a raw phone number.\n> >\n> > I'm more interested in possible improvements to my relational logic\n> > than the details of the \"significant\" condition. IOW, I'm start enough\n> > to optimize the \"significant\" query but not smart enough to know if\n> > this is the best approach for the overall correlated UPDATE query. :)\n> >\n> > So yeah, is this the best way to update a contact's user_id reference\n> > based on a contacts phone number matching the phone number of a user?\n> >\n> > One detail from the schema -- A contact can have many phone numbers\n> > but a user in my system will only ever have just one phone number.\n> > Hence the JOIN to \"phone_numbers\" versus the column in \"my_users\".\n> \n> In looking it over, nothing jumped out at me as a problem. Are you having\n> some problem with it, like poor performance or getting results different from\n> what you expected?\n> \n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n\nThere is an illness that sometimes strikes database developers/administrators.\nIt is called CTD - Compulsive Tuning Disorder :)\n\nIgor Neyman\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Aug 2013 15:54:36 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficient Correlated Update"
}
] |
[
{
"msg_contents": "Hi!\nI can't explain why function is slow down on same data.\nPostgresql.conf the same, hardware is more powerful.\nDiffrents is postgresql version\n\nHere it;s my tests\n\nServer 1 PSQL 9.1\n\nFIRST RUN\nEXPLAIN (ANALYZE, BUFFERS) SELECT webclient.prc_ti_cache_alloc_dbl_update(\n 21325134\n);\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1399.586..1399.587 rows=1 loops=1)'\n' Buffers: shared hit=40343 read=621'\n'Total runtime: 1399.613 ms'\n\nSECOND RUN SAME QUERY\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=42.540..42.541 \nrows=1 loops=1)'\n' Buffers: shared hit=37069'\n'Total runtime: 42.558 ms'\n\nTHIRD RUN SAME QUERY\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=198.893..198.894 \nrows=1 loops=1)'\n' Buffers: shared hit=37069'\n'Total runtime: 198.908 ms'\n\n\nServer 2 PSQL 9.2\n\nFIRST RUN\nEXPLAIN (ANALYZE, BUFFERS) SELECT webclient.prc_ti_cache_alloc_dbl_update(\n 21325134\n);\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1328.103..1328.104 rows=1 loops=1)'\n' Buffers: shared hit=43081 read=233 written=36'\n'Total runtime: 1328.129 ms'\n\nSECOND RUN SAME QUERY\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1699.711..1699.712 rows=1 loops=1)'\n' Buffers: shared hit=42919'\n'Total runtime: 1699.737 ms'\n\nTHIRD RUN SAME QUERY\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1907.947..1907.948 rows=1 loops=1)'\n' Buffers: shared hit=42869'\n'Total runtime: 1907.965 ms'\n\n\n\nCan some one explaine this?\nThe data and indexes the same.\nI have made vacuumdb on both srvers.\n\n\n\n\nHereis the function BODY\n\n-- Function: webclient.prc_ti_cache_alloc_dbl_update(integer)\n\n-- DROP FUNCTION webclient.prc_ti_cache_alloc_dbl_update(integer);\n\nCREATE OR REPLACE FUNCTION \nwebclient.prc_ti_cache_alloc_dbl_update(v_allspo integer)\n RETURNS integer AS\n$BODY$DECLARE\n spo_list RECORD;\n v_insert_cnt integer;\n v_delete_cnt integer;\n v_counter_sorting integer;\n\nBEGIN\n\n IF NOT webclient.fn_wc_condition() THEN\n RETURN 1;\n END IF;\n\n\n UPDATE webclient.ti_cache_alloc_price_dbl s SET\n offer=q.id, country=q.country, resort=q.resort, \nresort_place=q.resort_place, alloccat=q.alloccat, price=q.price,\n real_price=q.real_price, cash_type=q.cash_type, allspo=q.allspo, \nduration=q.duration, departure=q.departure,\n \"operator\"=q.\"operator\"\n FROM (SELECT DISTINCT ON\n (al.allocation, o.city, o.operator)\n o.id,\n al.allocation,\n o.city,\n al.alloccat,\n o.country,\n al.resort,\n al.resort_place,\n o.price,\n o.real_price,\n o.cash_type,\n o.allspo,\n o.duration,\n o.departure,\n o.OPERATOR\n FROM ti.ti_offer_price o\n JOIN ti.ti_offer_allocation2 al ON al.alloc_id = o.alloc_id\n LEFT JOIN ti.vw_ti_stop_allocation sa on sa.alloc_id=o.alloc_id \nAND sa.departure=o.departure AND sa.operator=o.operator AND \n(sa.room_size=o.room_size OR sa.room_size=0)\n LEFT JOIN ti.vw_ti_stop_flight sf on sf.back=false and \nsf.date_flight=o.departure and sf.operator=o.operator and sf.city=o.city \nand sf.resort=al.resort and sf.stop=true\n LEFT JOIN ti.vw_ti_stop_flight sfb on sfb.back=true and \nsfb.date_flight=o.arrival and sfb.operator=o.operator and \nsfb.city=o.city and sfb.resort=al.resort and sfb.stop=true\n WHERE o.allspo<>0 AND o.allspo = v_allspo\n AND o.departure>=current_date+10\n AND o.duration BETWEEN 7 AND 14\n AND o.ticket>0\n AND o.room_size=14\n AND sa.id is null\n AND coalesce(sf.stop,false)=false\n AND coalesce(sfb.stop,false)=false\n ORDER BY al.allocation, o.city, o.operator, o.real_price ASC, 
\no.departure ASC, o.allspo DESC) q\n WHERE s.allocation = q.allocation\n AND s.city = q.city\n AND s.operator = q.operator\n AND s.real_price < q.real_price;\n\n\n\n v_delete_cnt := 0; --будем использовать для проверки необходимости \nобновить counter_sorting\n\n FOR spo_list IN SELECT DISTINCT s.allocation, s.city, s.operator \nFROM webclient.ti_cache_alloc_price_dbl s\n JOIN ti.ti_offer_price o ON s.city = o.city AND s.operator = \no.operator\n JOIN ti.ti_offer_allocation2 al ON al.alloc_id = o.alloc_id AND \ns.allocation = al.allocation\n WHERE o.allspo<>0 AND o.allspo = v_allspo\n AND NOT EXISTS(SELECT id FROM ti.ti_offer_price WHERE \nallspo<>0 AND id=s.offer)\n AND o.departure>=current_date+10\n AND o.duration BETWEEN 7 AND 14\n AND o.ticket>0\n AND o.room_size=14\n LOOP\n SELECT counter_sorting INTO v_counter_sorting FROM \nwebclient.ti_cache_alloc_price_dbl WHERE allocation = \nspo_list.allocation AND city = spo_list.city AND \"operator\" = \nspo_list.operator;\n\n DELETE FROM webclient.ti_cache_alloc_price_dbl WHERE allocation \n= spo_list.allocation AND city = spo_list.city AND \"operator\" = \nspo_list.operator;\n\n v_delete_cnt := v_delete_cnt + 1;\n\n INSERT INTO webclient.ti_cache_alloc_price_dbl (offer, \nallocation, city, alloccat, country, resort, resort_place, price, \nreal_price, cash_type, allspo, duration, departure, operator, \ncounter_sorting)\n SELECT --DISTINCT ON (al.allocation, o.city)\n o.id,\n al.allocation,\n o.city,\n al.alloccat,\n o.country,\n al.resort,\n al.resort_place,\n o.price,\n o.real_price,\n o.cash_type,\n o.allspo,\n o.duration,\n o.departure,\n o.OPERATOR,\n v_counter_sorting\n FROM ti.ti_offer_price o\n JOIN ti.ti_offer_allocation2 al ON al.alloc_id = o.alloc_id\n LEFT JOIN ti.vw_ti_stop_allocation sa on sa.alloc_id=o.alloc_id \nAND sa.departure=o.departure AND sa.operator=o.operator AND \n(sa.room_size=o.room_size OR sa.room_size=0)\n LEFT JOIN ti.vw_ti_stop_flight sf on sf.back=false and \nsf.date_flight=o.departure and sf.operator=o.operator and sf.city=o.city \nand sf.resort=al.resort and sf.stop=true\n LEFT JOIN ti.vw_ti_stop_flight sfb on sfb.back=true and \nsfb.date_flight=o.arrival and sfb.operator=o.operator and \nsfb.city=o.city and sfb.resort=al.resort and sfb.stop=true\n WHERE o.allspo<>0 AND al.allocation = spo_list.allocation AND \no.country = al.country\n AND o.city = spo_list.city\n AND o.operator = spo_list.operator\n AND o.departure>=current_date+10\n AND o.duration BETWEEN 7 AND 14\n AND o.ticket>0\n AND o.room_size=14\n AND sa.id is null\n AND coalesce(sf.stop,false)=false\n AND coalesce(sfb.stop,false)=false\n ORDER BY o.real_price ASC, o.departure ASC, o.allspo DESC\n LIMIT 1;\n\n GET DIAGNOSTICS v_insert_cnt = ROW_COUNT;\n\n v_delete_cnt := v_delete_cnt - v_insert_cnt;\n END LOOP;\n\n--\n IF v_delete_cnt > 0 THEN\n --пересчитаем counter_sorting\n SELECT setval('webclient.seq_prc_pregen_ti_cache_alloc_dbl', 1, \nfalse) INTO v_delete_cnt;\n UPDATE webclient.ti_cache_alloc_price_dbl SET counter_sorting = \nnextval('webclient.seq_prc_pregen_ti_cache_alloc_dbl');\n END IF;\n\n\n\n INSERT INTO webclient.ti_cache_alloc_price_dbl (offer, \nallocation, city, alloccat, country, resort, resort_place, price, \nreal_price, cash_type, allspo, duration, departure, operator, \ncounter_sorting)\n SELECT DISTINCT ON\n (al.allocation, o.city)\n o.id,\n al.allocation,\n o.city,\n al.alloccat,\n o.country,\n al.resort,\n al.resort_place,\n o.price,\n o.real_price,\n o.cash_type,\n o.allspo,\n o.duration,\n o.departure,\n o.OPERATOR,\n 
nextval('webclient.seq_prc_pregen_ti_cache_alloc_dbl')\n FROM ti.ti_offer_price o\n JOIN ti.ti_offer_allocation2 al ON al.alloc_id = o.alloc_id\n LEFT JOIN ti.vw_ti_stop_allocation sa on sa.alloc_id=o.alloc_id \nAND sa.departure=o.departure AND sa.operator=o.operator AND \n(sa.room_size=o.room_size OR sa.room_size=0)\n LEFT JOIN ti.vw_ti_stop_flight sf on sf.back=false and \nsf.date_flight=o.departure and sf.operator=o.operator and sf.city=o.city \nand sf.resort=al.resort and sf.stop=true\n LEFT JOIN ti.vw_ti_stop_flight sfb on sfb.back=true and \nsfb.date_flight=o.arrival and sfb.operator=o.operator and \nsfb.city=o.city and sfb.resort=al.resort and sfb.stop=true\n WHERE o.allspo<>0 AND o.allspo = v_allspo\n AND NOT EXISTS (SELECT 1 FROM \nwebclient.ti_cache_alloc_price_dbl WHERE allocation = al.allocation AND \ncity = o.city AND \"operator\"=o.operator)\n AND o.departure>=current_date+10\n AND o.duration BETWEEN 7 AND 14\n AND o.ticket>0\n AND o.room_size=14\n AND sa.id is null\n AND coalesce(sf.stop,false)=false\n AND coalesce(sfb.stop,false)=false\n ORDER BY al.allocation, o.city, o.real_price ASC, o.departure \nASC, o.allspo DESC;\n\n\n RETURN 1;\nEND;$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\nALTER FUNCTION webclient.prc_ti_cache_alloc_dbl_update(integer)\n OWNER TO tix;\nGRANT EXECUTE ON FUNCTION \nwebclient.prc_ti_cache_alloc_dbl_update(integer) TO tix;\nGRANT EXECUTE ON FUNCTION \nwebclient.prc_ti_cache_alloc_dbl_update(integer) TO public;\nGRANT EXECUTE ON FUNCTION \nwebclient.prc_ti_cache_alloc_dbl_update(integer) TO lst_web;\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 12 Aug 2013 15:21:41 +0300",
"msg_from": "=?UTF-8?B?0JDQu9C10LrRgdCw0L3QtNGAINCR0LXQu9C40L3RgdC60LjQuQ==?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "function execute on v.9.2 slow down"
},
{
"msg_contents": "On Mon, Aug 12, 2013 at 8:21 AM, Александр Белинский <[email protected]> wrote:\n> Hi!\n> I can't explain why function is slow down on same data.\n> Postgresql.conf the same, hardware is more powerful.\n> Diffrents is postgresql version\n\nHmm. PostgreSQL 9.2 will sometimes replan queries a number of times\nwhere older releases, looking to see whether the choice of bind\nvariables affects the optimal plan choice, where older versions would\ncreate a generic plan on first execution and use it forever. I'm not\nsure whether that behavior applies in this situation, though. If you\nrun it say 15 times does it eventually start running faster?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Sep 2013 19:40:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function execute on v.9.2 slow down"
},
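The replanning behaviour described above can also be observed outside PL/pgSQL with a prepared statement. A rough sketch against one of the tables that appears later in the thread (the query itself is only an example):

PREPARE spo_count(int) AS
    SELECT count(*) FROM ti.ti_offer_price o
    WHERE o.allspo = $1 AND o.ticket > 0;

-- Repeat this several times; in 9.2 the server may switch from a custom
-- plan to a generic plan after a handful of executions, and the timing
-- can change at that point.
EXPLAIN ANALYZE EXECUTE spo_count(21325134);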
{
"msg_contents": "17.09.2013 02:40, Robert Haas пишет:\n> On Mon, Aug 12, 2013 at 8:21 AM, Александр Белинский <[email protected]> wrote:\n>> Hi!\n>> I can't explain why function is slow down on same data.\n>> Postgresql.conf the same, hardware is more powerful.\n>> Diffrents is postgresql version\n> Hmm. PostgreSQL 9.2 will sometimes replan queries a number of times\n> where older releases, looking to see whether the choice of bind\n> variables affects the optimal plan choice, where older versions would\n> create a generic plan on first execution and use it forever. I'm not\n> sure whether that behavior applies in this situation, though. If you\n> run it say 15 times does it eventually start running faster?\nIf i run function 1000 times it eventually have same execution time \nforever in 9.2 and 9.3\nBut 9.1 version have performance benefit at second run and forever\n\nI made test and found that in 9.2 and 9.3 versions if i use variable in \nquery pg replan it forever.\n\nHere is my tests\nPostgresql 9.3\n\nEXPLAIN ANALYZE SELECT DISTINCT s.allocation, s.city, s.operator FROM \nwebclient.ti_cache_alloc_price_dbl s\n JOIN ti.ti_offer_price o ON s.city = o.city AND s.operator = \no.operator\n JOIN ti.ti_offer_allocation2 al ON al.alloc_id = o.alloc_id AND \ns.allocation = al.allocation\n WHERE o.allspo = 21600254\n AND NOT EXISTS(SELECT id FROM ti.ti_offer_price WHERE \nid=s.offer)\n AND o.departure>=current_date+10\n AND o.duration BETWEEN 7 AND 14\n AND o.ticket>0\n AND o.room_size=14;\n\n'Total runtime: 5.371 ms'\n\nBut if i use this query inside the fumction i have big performance problem\nWhy?\n\nCREATE OR REPLACE FUNCTION sql_test(v_allspo integer)\n RETURNS integer AS\n$BODY$\nBEGIN\n\n PERFORM DISTINCT s.allocation, s.city, s.operator FROM \nwebclient.ti_cache_alloc_price_dbl s\n JOIN ti.ti_offer_price o ON s.city = o.city AND s.operator = \no.operator\n JOIN ti.ti_offer_allocation2 al ON al.alloc_id = o.alloc_id AND \ns.allocation = al.allocation\n WHERE o.allspo = v_allspo\n AND NOT EXISTS(SELECT id FROM ti.ti_offer_price WHERE \nid=s.offer)\n AND o.departure>=current_date+10\n AND o.duration BETWEEN 7 AND 14\n AND o.ticket>0\n AND o.room_size=14;\n\n RETURN 1;\nEND;$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n\n\nEXPLAIN ANALYZE SELECT sql_test(\n 21600254\n);\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=607.557..607.558 \nrows=1 loops=1)'\n' Buffers: shared hit=2059'\n'Total runtime: 607.570 ms'\n\nAnd forever .....\n\nIn 9.1 same function, same query works well!\n\nFirst run\nEXPLAIN (ANALYZE,BUFFERS) SELECT sql_test(\n 21600254\n);\n\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=486.003..486.004 \nrows=1 loops=1)'\n' Buffers: shared hit=5645 read=68 written=4'\n'Total runtime: 486.028 ms'\n\nSecond run\nEXPLAIN (ANALYZE,BUFFERS) SELECT sql_test(\n 21600254\n);\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=4.561..4.562 \nrows=1 loops=1)'\n' Buffers: shared hit=2852'\n'Total runtime: 4.576 ms'\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Sep 2013 14:24:11 +0300",
"msg_from": "=?UTF-8?B?0JDQu9C10LrRgdCw0L3QtNGAINCR0LXQu9C40L3RgdC60LjQuQ==?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: function execute on v.9.2 slow down"
}
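When a cached generic plan inside PL/pgSQL is the problem, one workaround available in 9.2 is to run the affected statement through EXECUTE ... USING, which plans it with the actual parameter value on every call. A hedged sketch, using a trimmed-down version of the query from the message above, meant to sit inside the function body:

-- Instead of PERFORM ... WHERE o.allspo = v_allspo:
EXECUTE 'SELECT count(*)
           FROM webclient.ti_cache_alloc_price_dbl s
           JOIN ti.ti_offer_price o
             ON s.city = o.city AND s.operator = o.operator
          WHERE o.allspo = $1
            AND o.departure >= current_date + 10'
    USING v_allspo;

The trade-off is that the statement is re-planned on every execution, so this only helps when planning with the real value is what makes the difference.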
] |
[
{
"msg_contents": "Hi!\nI can't explain why function is slow down on same data.\nPostgresql.conf the same, hardware is more powerful.\nDiffrents is postgresql version\n\nHere it;s my tests\n\nServer 1 PSQL 9.1\n\nFIRST RUN\nEXPLAIN (ANALYZE, BUFFERS) SELECT webclient.prc_ti_cache_alloc_dbl_update(\n 21325134\n);\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1399.586..1399.587 rows=1 loops=1)'\n' Buffers: shared hit=40343 read=621'\n'Total runtime: 1399.613 ms'\n\nSECOND RUN\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=42.540..42.541 \nrows=1 loops=1)'\n' Buffers: shared hit=37069'\n'Total runtime: 42.558 ms'\n\nTHIRD RUN\n'Result (cost=0.00..0.26 rows=1 width=0) (actual time=198.893..198.894 \nrows=1 loops=1)'\n' Buffers: shared hit=37069'\n'Total runtime: 198.908 ms'\n\n\nServer 2 PSQL 9.2\n\nFIRST RUN\nEXPLAIN (ANALYZE, BUFFERS) SELECT webclient.prc_ti_cache_alloc_dbl_update(\n 21325134\n);\n\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1328.103..1328.104 rows=1 loops=1)'\n' Buffers: shared hit=43081 read=233 written=36'\n'Total runtime: 1328.129 ms'\n\nSECOND RUN\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1699.711..1699.712 rows=1 loops=1)'\n' Buffers: shared hit=42919'\n'Total runtime: 1699.737 ms'\n\nTHIRD RUN\n'Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=1907.947..1907.948 rows=1 loops=1)'\n' Buffers: shared hit=42869'\n'Total runtime: 1907.965 ms'\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 12 Aug 2013 17:59:43 +0300",
"msg_from": "=?UTF-8?B?0JDQu9C10LrRgdCw0L3QtNGAINCR0LXQu9C40L3RgdC60LjQuQ==?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Function execute slow down in 9.2"
},
{
"msg_contents": "Hello\n\nit looks like known issue of sometimes dysfunctional plan cache in\nplpgsql in 9.2.\n\nsimilar issue http://postgresql.1045698.n5.nabble.com/Performance-problem-in-PLPgSQL-td5764796.html\n\nRegards\n\nPavel Stehule\n\n2013/8/12 Александр Белинский <[email protected]>:\n> Hi!\n> I can't explain why function is slow down on same data.\n> Postgresql.conf the same, hardware is more powerful.\n> Diffrents is postgresql version\n>\n> Here it;s my tests\n>\n> Server 1 PSQL 9.1\n>\n> FIRST RUN\n> EXPLAIN (ANALYZE, BUFFERS) SELECT webclient.prc_ti_cache_alloc_dbl_update(\n> 21325134\n> );\n>\n> 'Result (cost=0.00..0.26 rows=1 width=0) (actual time=1399.586..1399.587\n> rows=1 loops=1)'\n> ' Buffers: shared hit=40343 read=621'\n> 'Total runtime: 1399.613 ms'\n>\n> SECOND RUN\n> 'Result (cost=0.00..0.26 rows=1 width=0) (actual time=42.540..42.541 rows=1\n> loops=1)'\n> ' Buffers: shared hit=37069'\n> 'Total runtime: 42.558 ms'\n>\n> THIRD RUN\n> 'Result (cost=0.00..0.26 rows=1 width=0) (actual time=198.893..198.894\n> rows=1 loops=1)'\n> ' Buffers: shared hit=37069'\n> 'Total runtime: 198.908 ms'\n>\n>\n> Server 2 PSQL 9.2\n>\n> FIRST RUN\n> EXPLAIN (ANALYZE, BUFFERS) SELECT webclient.prc_ti_cache_alloc_dbl_update(\n> 21325134\n> );\n>\n> 'Result (cost=0.00..0.26 rows=1 width=0) (actual time=1328.103..1328.104\n> rows=1 loops=1)'\n> ' Buffers: shared hit=43081 read=233 written=36'\n> 'Total runtime: 1328.129 ms'\n>\n> SECOND RUN\n> 'Result (cost=0.00..0.26 rows=1 width=0) (actual time=1699.711..1699.712\n> rows=1 loops=1)'\n> ' Buffers: shared hit=42919'\n> 'Total runtime: 1699.737 ms'\n>\n> THIRD RUN\n> 'Result (cost=0.00..0.26 rows=1 width=0) (actual time=1907.947..1907.948\n> rows=1 loops=1)'\n> ' Buffers: shared hit=42869'\n> 'Total runtime: 1907.965 ms'\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 12 Aug 2013 17:24:17 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function execute slow down in 9.2"
},
{
"msg_contents": "12.08.2013 18:24, Pavel Stehule пишет:\n> Hello\n>\n> it looks like known issue of sometimes dysfunctional plan cache in\n> plpgsql in 9.2.\n>\n> similar issuehttp://postgresql.1045698.n5.nabble.com/Performance-problem-in-PLPgSQL-td5764796.html\nThanks for the link ) I read about issue, but I can't understand what \nshould I do?\n\ni chage values of seq_page_cost\n=1.0\n=10.0\n= 100.0\n= 0.1\n=0.01\n\n but nothing chage, time of function execution the same.\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Aug 2013 15:27:39 +0300",
"msg_from": "=?UTF-8?B?0JDQu9C10LrRgdCw0L3QtNGAINCR0LXQu9C40L3RgdC60LjQuQ==?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Function execute slow down in 9.2"
},
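To see which plans are actually chosen for the statements inside the function, rather than just the outer cost of the function call, auto_explain with nested-statement logging can help. A sketch, assuming a superuser session (or the library preloaded via shared_preload_libraries):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_analyze = on;
SET auto_explain.log_nested_statements = on;

-- The plan of every statement executed inside the function now shows up
-- in the server log.
SELECT webclient.prc_ti_cache_alloc_dbl_update(21325134);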
{
"msg_contents": "2013/8/16 Александр Белинский <[email protected]>\n\n> 12.08.2013 18:24, Pavel Stehule пишет:\n>\n> Hello\n>>\n>> it looks like known issue of sometimes dysfunctional plan cache in\n>> plpgsql in 9.2.\n>>\n>> similar issuehttp://postgresql.**1045698.n5.nabble.com/**\n>> Performance-problem-in-**PLPgSQL-td5764796.html<http://postgresql.1045698.n5.nabble.com/Performance-problem-in-PLPgSQL-td5764796.html>\n>>\n> Thanks for the link ) I read about issue, but I can't understand what\n> should I do?\n>\n>\nYou can do nothing. You can check, so described issue is same as your\nissue, and that is all :(.\n\nIt is bug in plan cache implementation.\n\nRegards\n\nPavel\n\n\n> i chage values of seq_page_cost\n> =1.0\n> =10.0\n> = 100.0\n> = 0.1\n> =0.01\n>\n> but nothing chage, time of function execution the same.\n>\n>\n>\n>\n>\n>\n\n2013/8/16 Александр Белинский <[email protected]>\n\n12.08.2013 18:24, Pavel Stehule пишет:\n\nHello\n\nit looks like known issue of sometimes dysfunctional plan cache in\nplpgsql in 9.2.\n\nsimilar issuehttp://postgresql.1045698.n5.nabble.com/Performance-problem-in-PLPgSQL-td5764796.html\n\nThanks for the link ) I read about issue, but I can't understand what should I do?\nYou can do nothing. You can check, so described issue is same as your issue, and that is all :(.It is bug in plan cache implementation.Regards\nPavel \ni chage values of seq_page_cost\n=1.0\n=10.0\n= 100.0\n= 0.1\n=0.01\n\n but nothing chage, time of function execution the same.",
"msg_date": "Wed, 21 Aug 2013 11:03:28 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Function execute slow down in 9.2"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm trying to simplify a schema, where I had many ranges floating around.\nMy idea is to put them all in an array field and query like this:\n\nSELECT\n event.*\nFROM event\nJOIN participant_details\n USING (participant_id)\nWHERE\n tsrange(event.start, event.end) && ANY (participant_details.periods);\n\nperiods is tsrange[].\n\nI've tryed and it worked, but without indexes. I've tried something, but\ndidn't found anything... Does someone know how to index this kind of field\n(tsrange[])?\n\n From the docs I learn that there is some GIST magic, but I would need to\ncode in C. Is that true?\n\nRegards,\n-- \nDaniel Cristian Cruz\nクルズ クリスチアン ダニエル\n\nHello,I'm trying to simplify a schema, where I had many ranges floating around. My idea is to put them all in an array field and query like this:SELECT\n event.*FROM eventJOIN participant_details USING (participant_id)WHERE tsrange(event.start, event.end) && ANY (participant_details.periods);\nperiods is tsrange[].I've tryed and it worked, but without indexes. I've tried something, but didn't found anything... Does someone know how to index this kind of field (tsrange[])?\nFrom the docs I learn that there is some GIST magic, but I would need to code in C. Is that true?Regards,-- Daniel Cristian Cruzクルズ クリスチアン ダニエル",
"msg_date": "Tue, 13 Aug 2013 17:47:52 -0300",
"msg_from": "Daniel Cristian Cruz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index on a range array"
},
{
"msg_contents": "I guess this is not a performance question... What kind of question would\nit be? Admin, General or SQL?\n\n\n2013/8/13 Daniel Cristian Cruz <[email protected]>\n\n> Hello,\n>\n> I'm trying to simplify a schema, where I had many ranges floating around.\n> My idea is to put them all in an array field and query like this:\n>\n> SELECT\n> event.*\n> FROM event\n> JOIN participant_details\n> USING (participant_id)\n> WHERE\n> tsrange(event.start, event.end) && ANY (participant_details.periods);\n>\n> periods is tsrange[].\n>\n> I've tryed and it worked, but without indexes. I've tried something, but\n> didn't found anything... Does someone know how to index this kind of field\n> (tsrange[])?\n>\n> From the docs I learn that there is some GIST magic, but I would need to\n> code in C. Is that true?\n>\n> Regards,\n> --\n> Daniel Cristian Cruz\n> クルズ クリスチアン ダニエル\n>\n\n\n\n-- \nDaniel Cristian Cruz\nクルズ クリスチアン ダニエル\n\nI guess this is not a performance question... What kind of question would it be? Admin, General or SQL?2013/8/13 Daniel Cristian Cruz <[email protected]>\nHello,I'm trying to simplify a schema, where I had many ranges floating around. My idea is to put them all in an array field and query like this:\nSELECT\n event.*FROM eventJOIN participant_details USING (participant_id)WHERE tsrange(event.start, event.end) && ANY (participant_details.periods);\nperiods is tsrange[].I've tryed and it worked, but without indexes. I've tried something, but didn't found anything... Does someone know how to index this kind of field (tsrange[])?\nFrom the docs I learn that there is some GIST magic, but I would need to code in C. Is that true?Regards,-- Daniel Cristian Cruz\nクルズ クリスチアン ダニエル\n\n-- Daniel Cristian Cruzクルズ クリスチアン ダニエル",
"msg_date": "Wed, 14 Aug 2013 09:21:39 -0300",
"msg_from": "Daniel Cristian Cruz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on a range array"
},
{
"msg_contents": "On 13.08.2013 23:47, Daniel Cristian Cruz wrote:\n> Hello,\n>\n> I'm trying to simplify a schema, where I had many ranges floating around.\n> My idea is to put them all in an array field and query like this:\n>\n> SELECT\n> event.*\n> FROM event\n> JOIN participant_details\n> USING (participant_id)\n> WHERE\n> tsrange(event.start, event.end)&& ANY (participant_details.periods);\n>\n> periods is tsrange[].\n>\n> I've tryed and it worked, but without indexes. I've tried something, but\n> didn't found anything... Does someone know how to index this kind of field\n> (tsrange[])?\n>\n> From the docs I learn that there is some GIST magic, but I would need to\n> code in C. Is that true?\n\nYeah. It might be somewhat tricky to write an efficient GIST \nimplementation for this anyway. What you'd really want to do is to index \neach value in the array separately, which is more like what GIN does. \nWith the \"partial match\" infrastructure in GIN, it might be possible to \nwrite a GIN implementation that can speed up range overlap queries. \nHowever, that certainly requires C coding too.\n\nA couple of alternatives come to mind:\n\nYou could create the index on just the min and max values of the \nperiods, and in the query check for overlap with that. If there \ntypically aren't big gaps between the periods of each participant, that \nmight work well.\n\nOr you could split the range of expected timestamps into discrete steps, \nfor example at one-day granularity. Create a function to convert a range \ninto an array of steps, e.g convert each range into an array of days \nthat the range overlaps with. Create a GIN index on that array, and use \nit in the query. Something like this:\n\n-- Returns an int representing the day the given timestamp falls into\ncreate function epochday(timestamp) returns int4 as $$\n select extract (epoch from $1)::int4/(24*3600)\n$$ language sql immutable;\n\n-- Same for a range. Returns an array of ints representing all the\n-- days that the given range overlaps with.\ncreate function epochdays(tsrange) returns integer[]\nas $$\n select array_agg(g) from generate_series(epochday(lower($1)), \nepochday(upper($1))) g\n$$\nlanguage sql immutable;\n\n-- Same for an array of ranges. Returns an array of ints representing -- \nall the days that overlap with any of the given timestamp ranges\ncreate function epochdays(ranges tsrange[]) returns integer[]\nas $$\ndeclare\n r tsrange;\n result integer[];\nbegin\n foreach r in array ranges loop\n result = result || (select array_agg(g) from \ngenerate_series(epochday(lower(r)), epochday(upper(r))) g);\n end loop;\n return result;\nend;\n$$ language plpgsql immutable;\n\n-- Create the index on that:\ncreate index period_days on participant_details using gin \n(epochdays(periods));\n\n-- Query like this:\nSELECT event.* FROM event\nJOIN participant_details USING (participant_id)\n-- This WHERE-clause is for correctness:\nWHERE tsrange(event.start, event.end) && ANY (participant_details.periods);\n-- and this is to make use of the index:\nAND epochdays(tsrange(event.start, event.end)) && \nepochdays((participant_details.periods));\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Aug 2013 10:30:55 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on a range array"
}
] |
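A sketch of the first alternative mentioned above (indexing just the overall min/max span of each participant's periods). The helper function name is made up, and it assumes the stored ranges are bounded and non-empty:

CREATE FUNCTION periods_span(tsrange[]) RETURNS tsrange AS $$
    SELECT tsrange(min(lower(r)), max(upper(r)))
    FROM unnest($1) AS r;
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX participant_details_span_idx
    ON participant_details USING gist (periods_span(periods));

SELECT event.*
FROM event
JOIN participant_details USING (participant_id)
WHERE tsrange(event.start, event.end) && periods_span(participant_details.periods)  -- coarse, indexable filter
  AND tsrange(event.start, event.end) && ANY (participant_details.periods);         -- exact check

If the gaps between a participant's periods are large, the coarse filter lets through more rows and the exact check has to discard them, which is the caveat noted above.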
[
{
"msg_contents": "Hi folks\n\nI've run into an interesting Stack Overflow post where the user shows\nthat marking a particular function as IMMUTABLE significantly hurts the\nperformance of a query.\n\nhttp://stackoverflow.com/q/18220761/398670\n\nCREATE OR REPLACE FUNCTION\n to_datestamp_immutable(time_int double precision) RETURNS date AS $$\n SELECT date_trunc('day', to_timestamp($1))::date;\n$$ LANGUAGE SQL IMMUTABLE;\n\nWith IMMUTABLE: 33060.918\nWith STABLE: 6063.498\n\nThe plans are the same for both, though the cost estimate for the\nIMMUTABLE variant is (surprisingly) massively higher.\n\nThe question contains detailed instructions to reproduce the issue, and\nI can confirm the same results on my machine.\n\nIt looks like the difference is created by to_timestamp , in that if\nto_timestamp is replaced with interval maths the difference goes away.\n\nI'm very curious and am doing a quick profile now, but I wanted to raise\nthis on the list for comment/opinions, since it's very\ncounter-intuitive. IIRC docs don't suggest that IMMUTABLE can ever be\nmore expensive.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 08:41:36 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interesting case of IMMUTABLE significantly hurting performance"
},
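Since to_timestamp() and date_trunc() on timestamptz depend on the session's TimeZone setting, STABLE is also the semantically safer label here, and the timings above suggest it is the faster one. A sketch of that variant:

CREATE OR REPLACE FUNCTION
    to_datestamp_stable(time_int double precision) RETURNS date AS $$
    SELECT date_trunc('day', to_timestamp($1))::date;
$$ LANGUAGE SQL STABLE;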
{
"msg_contents": "On 08/14/2013 08:41 AM, Craig Ringer wrote:\n> Hi folks\n> \n> I've run into an interesting Stack Overflow post where the user shows\n> that marking a particular function as IMMUTABLE significantly hurts the\n> performance of a query.\n\nHere's `perf` report data for the two.\n\nWith IMMUTABLE:\n\nSamples: 90K of event 'cycles', Event count (approx.): 78028435735\n 7.74% postgres postgres [.] base_yyparse\n 6.54% postgres postgres [.] SearchCatCache\n 5.75% postgres postgres [.] AllocSetAlloc\n 3.14% postgres postgres [.] core_yylex\n 1.88% postgres libc-2.17.so [.] _int_malloc\n 1.58% postgres postgres [.] MemoryContextAllocZeroAligned\n 1.57% postgres libc-2.17.so [.] __strcmp_sse42\n 1.44% postgres libc-2.17.so [.] __memcpy_ssse3_back\n 1.42% postgres postgres [.] expression_tree_walker\n 1.37% postgres libc-2.17.so [.] vfprintf\n 1.35% postgres postgres [.] MemoryContextAlloc\n 1.32% postgres postgres [.] fmgr_info_cxt_security\n 1.31% postgres postgres [.] fmgr_sql\n 1.17% postgres postgres [.] ExecInitExpr\n\n\nwithout IMMUTABLE (i.e. VOLATILE):\n\nSamples: 16K of event 'cycles', Event count (approx.): 14348843004\n 6.78% postgres postgres [.] AllocSetAlloc\n 5.37% postgres libc-2.17.so [.] vfprintf\n 2.82% postgres postgres [.] SearchCatCache\n 2.82% postgres libc-2.17.so [.] _int_malloc\n 2.45% postgres postgres [.] timesub.isra.1\n 2.26% postgres postgres [.] ExecInitExpr\n 1.79% postgres postgres [.] MemoryContextAlloc\n 1.63% postgres postgres [.] MemoryContextAllocZeroAligned\n 1.60% postgres libc-2.17.so [.] _int_free\n 1.55% postgres postgres [.] j2date\n 1.52% postgres postgres [.] fmgr_info_cxt_security\n 1.41% postgres libc-2.17.so [.] _IO_default_xsputn\n 1.41% postgres postgres [.] fmgr_sql\n 1.40% postgres postgres [.] expression_tree_walker\n 1.39% postgres libc-2.17.so [.] __memset_sse2\n 1.39% postgres postgres [.] timestamp2tm\n 1.13% postgres postgres [.] MemoryContextCreate\n 1.11% postgres postgres [.] standard_ExecutorStart\n 1.11% postgres postgres [.] ExecProject\n 1.08% postgres postgres [.] AllocSetFree\n\n\n\nSo ... are we re-parsing the function every time if it's declared\nIMMUTABLE or something like that? 
If I break in base_yyparse I hit the\nbreakpoint 3x for the volatile case, and basically endlessly for the\nimmutable case, with a trace like:\n\n> #0 base_yyparse (yyscanner=yyscanner@entry=0x1d8f868) at gram.c:19604\n> #1 0x0000000000500381 in raw_parser (str=<optimized out>) at parser.c:52\n> #2 0x000000000064b552 in pg_parse_query (query_string=<optimized out>) at postgres.c:564\n> #3 0x0000000000587650 in init_sql_fcache (lazyEvalOK=1 '\\001', collation=0, finfo=<optimized out>) at functions.c:680\n> #4 fmgr_sql (fcinfo=0x1d76300) at functions.c:1040\n> #5 0x00000000005817e5 in ExecMakeFunctionResult (fcache=0x1d76290, econtext=0x1d75000, isNull=0x1d75db1 \"\", isDone=0x7fff103946bc) at execQual.c:1927\n> #6 0x000000000057dd05 in ExecEvalFuncArgs (fcinfo=fcinfo@entry=0x1d75a70, argList=argList@entry=0x1d76260, econtext=econtext@entry=0x1d75000) at execQual.c:1475\n> #7 0x00000000005816d5 in ExecMakeFunctionResult (fcache=0x1d75a00, econtext=0x1d75000, isNull=0x1d755a0 \"\", isDone=0x7fff103947dc) at execQual.c:1706\n> #8 0x000000000057dd05 in ExecEvalFuncArgs (fcinfo=fcinfo@entry=0x1d75260, argList=argList@entry=0x1d76b60, econtext=econtext@entry=0x1d75000) at execQual.c:1475\n> #9 0x00000000005816d5 in ExecMakeFunctionResult (fcache=0x1d751f0, econtext=0x1d75000, isNull=0x1d76d08 \"\", isDone=0x1d76e60) at execQual.c:1706\n> #10 0x0000000000583cfd in ExecTargetList (isDone=0x7fff1039497c, itemIsDone=0x1d76e60, isnull=<optimized out>, values=0x1d76cf0, econtext=0x1d75000, targetlist=0x1d76e30)\n> at execQual.c:5221\n> #11 ExecProject (projInfo=<optimized out>, isDone=isDone@entry=0x7fff1039497c) at execQual.c:5436\n> #12 0x0000000000594a12 in ExecResult (node=node@entry=0x1d74ef0) at nodeResult.c:155\n> #13 0x000000000057cff8 in ExecProcNode (node=node@entry=0x1d74ef0) at execProcnode.c:372\n> #14 0x000000000057a8e0 in ExecutePlan (dest=0x1d70d50, direction=<optimized out>, numberTuples=1, sendTuples=1 '\\001', operation=CMD_SELECT, planstate=0x1d74ef0, \n> estate=0x1d74de0) at execMain.c:1395\n> #15 standard_ExecutorRun (queryDesc=0x1d72f80, direction=<optimized out>, count=1) at execMain.c:303\n> #16 0x0000000000587bc2 in postquel_getnext (es=0x1d70c80, es=0x1d70c80, fcache=0x1d6edc0) at functions.c:844\n> #17 fmgr_sql (fcinfo=<optimized out>) at functions.c:1140\n> #18 0x000000000057e602 in ExecMakeFunctionResultNoSets (fcache=0x1d849e0, econtext=0x1d848d0, isNull=0x1d859b8 \"\", isDone=<optimized out>) at execQual.c:1993\n> #19 0x0000000000583cfd in ExecTargetList (isDone=0x7fff10394bfc, itemIsDone=0x1d85ad0, isnull=<optimized out>, values=0x1d859a0, econtext=0x1d848d0, targetlist=0x1d85aa0)\n> at execQual.c:5221\n> #20 ExecProject (projInfo=projInfo@entry=0x1d859d0, isDone=isDone@entry=0x7fff10394bfc) at execQual.c:5436\n> #21 0x000000000058409a in ExecScan (node=node@entry=0x1d847c0, accessMtd=accessMtd@entry=0x594c70 <SeqNext>, recheckMtd=recheckMtd@entry=0x594c60 <SeqRecheck>)\n> at execScan.c:207\n> #22 0x0000000000594cdf in ExecSeqScan (node=node@entry=0x1d847c0) at nodeSeqscan.c:113\n> #23 0x000000000057cfa8 in ExecProcNode (node=node@entry=0x1d847c0) at execProcnode.c:399\n> #24 0x000000000057a8e0 in ExecutePlan (dest=0x1d67180, direction=<optimized out>, numberTuples=0, sendTuples=1 '\\001', operation=CMD_SELECT, planstate=0x1d847c0, \n> estate=0x1d84660) at execMain.c:1395\n> #25 standard_ExecutorRun (queryDesc=0x1c95fd0, direction=<optimized out>, count=0) at execMain.c:303\n> #26 0x000000000064eb50 in PortalRunSelect (portal=portal@entry=0x1c93fc0, 
forward=forward@entry=1 '\\001', count=0, count@entry=9223372036854775807, dest=dest@entry=0x1d67180)\n> at pquery.c:944\n> #27 0x000000000064fe8b in PortalRun (portal=portal@entry=0x1c93fc0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\\001', dest=dest@entry=0x1d67180, \n> altdest=altdest@entry=0x1d67180, completionTag=completionTag@entry=0x7fff103950e0 \"\") at pquery.c:788\n> #28 0x000000000064be96 in exec_simple_query (query_string=0x1d1f200 \"SELECT to_datestamp_immutable(time_int) FROM random_times;\") at postgres.c:1046\n> #29 PostgresMain (argc=<optimized out>, argv=argv@entry=0x1c742a0, dbname=0x1c74158 \"regress\", username=<optimized out>) at postgres.c:3959\n> #30 0x000000000060d7be in BackendRun (port=0x1c98020) at postmaster.c:3614\n> #31 BackendStartup (port=0x1c98020) at postmaster.c:3304\n> #32 ServerLoop () at postmaster.c:1367\n\n\n... so it's looking a lot like the function is parsed for each call.\nThat seems like a bit of a WTF.\n\nNo time to read the code path right now, but I plan to this evening.\n\nIn the mean time, any thoughts?\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 10:46:52 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interesting case of IMMUTABLE significantly hurting\n performance"
},
{
"msg_contents": "2013/8/14 Craig Ringer <[email protected]>\n\n> Hi folks\n>\n> I've run into an interesting Stack Overflow post where the user shows\n> that marking a particular function as IMMUTABLE significantly hurts the\n> performance of a query.\n>\n> http://stackoverflow.com/q/18220761/398670\n>\n> CREATE OR REPLACE FUNCTION\n> to_datestamp_immutable(time_int double precision) RETURNS date AS $$\n> SELECT date_trunc('day', to_timestamp($1))::date;\n> $$ LANGUAGE SQL IMMUTABLE;\n>\n> With IMMUTABLE: 33060.918\n> With STABLE: 6063.498\n>\n> The plans are the same for both, though the cost estimate for the\n> IMMUTABLE variant is (surprisingly) massively higher.\n>\n> The question contains detailed instructions to reproduce the issue, and\n> I can confirm the same results on my machine.\n>\n> It looks like the difference is created by to_timestamp , in that if\n> to_timestamp is replaced with interval maths the difference goes away.\n>\n> I'm very curious and am doing a quick profile now, but I wanted to raise\n> this on the list for comment/opinions, since it's very\n> counter-intuitive. IIRC docs don't suggest that IMMUTABLE can ever be\n> more expensive.\n>\n\n\nIf I understand, a used IMMUTABLE flag disables inlining. What you see, is\nSQL eval overflow.\n\nMy rule is - don't use flags in SQL functions, when it is possible.\n\nPavel\n\n\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2013/8/14 Craig Ringer <[email protected]>\n\nHi folks\n\nI've run into an interesting Stack Overflow post where the user shows\nthat marking a particular function as IMMUTABLE significantly hurts the\nperformance of a query.\n\nhttp://stackoverflow.com/q/18220761/398670\n\nCREATE OR REPLACE FUNCTION\n to_datestamp_immutable(time_int double precision) RETURNS date AS $$\n SELECT date_trunc('day', to_timestamp($1))::date;\n$$ LANGUAGE SQL IMMUTABLE;\n\nWith IMMUTABLE: 33060.918\nWith STABLE: 6063.498\n\nThe plans are the same for both, though the cost estimate for the\nIMMUTABLE variant is (surprisingly) massively higher.\n\nThe question contains detailed instructions to reproduce the issue, and\nI can confirm the same results on my machine.\n\nIt looks like the difference is created by to_timestamp , in that if\nto_timestamp is replaced with interval maths the difference goes away.\n\nI'm very curious and am doing a quick profile now, but I wanted to raise\nthis on the list for comment/opinions, since it's very\ncounter-intuitive. IIRC docs don't suggest that IMMUTABLE can ever be\nmore expensive.If I understand, a used IMMUTABLE flag disables inlining. What you see, is SQL eval overflow. My rule is - don't use flags in SQL functions, when it is possible.\nPavel \n\n--\n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 14 Aug 2013 05:52:36 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting case of IMMUTABLE significantly hurting performance"
},
{
"msg_contents": "On 08/14/2013 11:52 AM, Pavel Stehule wrote:\n> \n> If I understand, a used IMMUTABLE flag disables inlining. What you see,\n> is SQL eval overflow.\n> \n> My rule is - don't use flags in SQL functions, when it is possible.\n\nInteresting. I knew that was the case for STRICT, but am surprised to\nhear it's the case for IMMUTABLE as well. That seems ...\ncounter-intuitive. Not to mention undocumented as far as I can see.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 11:57:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interesting case of IMMUTABLE significantly hurting\n performance"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> I've run into an interesting Stack Overflow post where the user shows\n> that marking a particular function as IMMUTABLE significantly hurts the\n> performance of a query.\n\n> http://stackoverflow.com/q/18220761/398670\n\n> CREATE OR REPLACE FUNCTION\n> to_datestamp_immutable(time_int double precision) RETURNS date AS $$\n> SELECT date_trunc('day', to_timestamp($1))::date;\n> $$ LANGUAGE SQL IMMUTABLE;\n\n[ shrug... ] Using IMMUTABLE to lie about the mutability of a function\n(in this case, date_trunc) is a bad idea. It's likely to lead to wrong\nanswers, never mind performance issues. In this particular case, I\nimagine the performance problem comes from having suppressed the option\nto inline the function body ... but you should be more worried about\nwhether you aren't getting flat-out bogus answers in other cases.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 00:17:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting case of IMMUTABLE significantly hurting performance"
},
{
"msg_contents": "On 08/14/2013 12:17 PM, Tom Lane wrote:\n> [ shrug... ] Using IMMUTABLE to lie about the mutability of a function\n> (in this case, date_trunc) is a bad idea. It's likely to lead to wrong\n> answers, never mind performance issues. In this particular case, I\n> imagine the performance problem comes from having suppressed the option\n> to inline the function body ... but you should be more worried about\n> whether you aren't getting flat-out bogus answers in other cases.\n\nOh, I totally agree, and I'd never do this myself. I was just curious\nabout the behaviour.\n\nIt's interesting that this variant doesn't seem to be slow:\n\ncreate or replace function to_datestamp_immutable(\n time_int double precision\n) returns date as $$\n select date_trunc('day', timestamp 'epoch' + $1 * interval '1\nsecond')::date;\n$$ language sql immutable;\n\n\nand there's no sign it's parsed each time. So it's not just the\nIMMUTABLE flag.\n\nIf nothing else this strongly suggests that the docs don't cover this\narea particularly comprehensively.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 12:44:22 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Interesting case of IMMUTABLE significantly hurting\n performance"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> It's interesting that this variant doesn't seem to be slow:\n\n> create or replace function to_datestamp_immutable(\n> time_int double precision\n> ) returns date as $$\n> select date_trunc('day', timestamp 'epoch' + $1 * interval '1\n> second')::date;\n> $$ language sql immutable;\n\n> and there's no sign it's parsed each time. So it's not just the\n> IMMUTABLE flag.\n\nIf you're working with timestamp not timestamptz, I think the functions\nbeing called here actually are immutable (they don't have any dependency\non the timezone parameter). So this function is safely inline-able\nand there's no performance hit from multiple executions.\n\nAs Pavel mentioned upthread, the safest rule of thumb for SQL functions\nthat you want to get inlined is to not mark them as to either mutability\nor strictness. That lets the planner inline them without any possible\nchange of semantics. (The basic point here is that a function marked\nvolatile can be expanded to its contained functions even if they're\nimmutable; but the other way around represents a potential semantic\nchange, so the planner won't do it.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 15:05:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interesting case of IMMUTABLE significantly hurting performance"
}
] |
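A sketch for illustration, not from the archived thread above: per the discussion, the wrapper can be left unmarked so the planner is free to inline it, and IMMUTABLE is only truthful for the timezone-independent timestamp arithmetic. The name to_datestamp_inlined is an assumption; to_datestamp_epoch mirrors the variant Craig tested, and random_times is the table from his backtrace.

    -- Unmarked (defaults to VOLATILE): the planner may still inline the body,
    -- and the contained functions keep their real volatility.
    CREATE OR REPLACE FUNCTION to_datestamp_inlined(time_int double precision)
    RETURNS date AS $$
        SELECT date_trunc('day', to_timestamp($1))::date;
    $$ LANGUAGE SQL;

    -- timestamp (not timestamptz) arithmetic has no timezone dependency,
    -- so IMMUTABLE is accurate here and inlining remains possible.
    CREATE OR REPLACE FUNCTION to_datestamp_epoch(time_int double precision)
    RETURNS date AS $$
        SELECT date_trunc('day', timestamp 'epoch' + $1 * interval '1 second')::date;
    $$ LANGUAGE SQL IMMUTABLE;

    -- e.g. SELECT to_datestamp_inlined(time_int) FROM random_times;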
[
{
"msg_contents": "Hi all,\n\nI am new in this group and need some help from your side.\n\nWe have a mediation product which is initially using Oracle as database.\n\nSome of our customer interested to move Postgres 9.1.\n\nOur mediation product storing some configuration related information in data base and some type of logging data.\n\nWe are using Hibernate in Java to interact with Postgres 9.1. \n\nCan you please suggest some test cases or some issues which may hamper us?\n\nRegards\nTarkeshwar\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 05:38:12 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need some basic information"
},
{
"msg_contents": "\n> We are using Hibernate in Java to interact with Postgres 9.1. \n> \n> Can you please suggest some test cases or some issues which may hamper us?\n\nPort your application and run your smoke tests?\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Aug 2013 17:57:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need some basic information"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've run into a strange plan difference on 9.1.9 - the first query does\n\"DISTINCT\" by doing a GROUP BY on the columns (both INT).\n\nSELECT\n \"f_account\".\"name_id\" AS \"a_1550\",\n \"f_message\".\"text_id\" AS \"a_1562\"\nFROM \"f_accountmessagefact\"\n INNER JOIN \"f_message\" ON ( \"f_accountmessagefact\".\"message_id\" =\n\"f_message\".\"id\" )\n INNER JOIN \"f_account\" ON ( \"f_accountmessagefact\".\"account_id\" =\n\"f_account\".\"id\" )\nGROUP BY 1, 2;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Group (cost=3575011.59..3721066.43 rows=19473978 width=8)\n -> Sort (cost=3575011.59..3623696.54 rows=19473978 width=8)\n Sort Key: f_account.name_id, f_message.text_id\n -> Hash Join (cost=51718.44..1217195.39 rows=19473978 width=8)\n Hash Cond: (f_accountmessagefact.account_id = f_account.id)\n -> Hash Join (cost=51699.42..949409.18 rows=19473978\nwidth=8)\n Hash Cond: (f_accountmessagefact.message_id =\nf_message.id)\n -> Seq Scan on f_accountmessagefact \n(cost=0.00..435202.78 rows=19473978 width=8)\n -> Hash (cost=37002.52..37002.52 rows=1175752 width=8)\n -> Seq Scan on f_message (cost=0.00..37002.52\nrows=1175752 width=8)\n -> Hash (cost=11.23..11.23 rows=623 width=8)\n -> Seq Scan on f_account (cost=0.00..11.23 rows=623\nwidth=8)\n(12 rows)\n\nNow, this takes ~45 seconds to execute, but after rewriting the query to\nuse the regular DISTINCT it suddenly switches to HashAggregate with ~1/3\nthe cost (although it produces the same output, AFAIK), and it executes in\n~15 seconds.\n\nSELECT DISTINCT\n \"f_account\".\"name_id\" AS \"a_1550\",\n \"f_message\".\"text_id\" AS \"a_1562\"\nFROM \"f_accountmessagefact\"\n INNER JOIN \"f_message\" ON ( \"f_accountmessagefact\".\"message_id\" =\n\"f_message\".\"id\" )\n INNER JOIN \"f_account\" ON ( \"f_accountmessagefact\".\"account_id\" =\n\"f_account\".\"id\" );\n\n QUERY PLAN\n------------------------------------------------------------------------------------------\n HashAggregate (cost=1314565.28..1509305.06 rows=19473978 width=8)\n -> Hash Join (cost=51718.44..1217195.39 rows=19473978 width=8)\n Hash Cond: (f_accountmessagefact.account_id = f_account.id)\n -> Hash Join (cost=51699.42..949409.18 rows=19473978 width=8)\n Hash Cond: (f_accountmessagefact.message_id = f_message.id)\n -> Seq Scan on f_accountmessagefact (cost=0.00..435202.78\nrows=19473978 width=8)\n -> Hash (cost=37002.52..37002.52 rows=1175752 width=8)\n -> Seq Scan on f_message (cost=0.00..37002.52\nrows=1175752 width=8)\n -> Hash (cost=11.23..11.23 rows=623 width=8)\n -> Seq Scan on f_account (cost=0.00..11.23 rows=623 width=8)\n(10 rows)\n\nI've tested this with other queries and those actually behave as expected\n(both using HashAggregate), so I'm wondering what's wrong with this one\nand why it's discarding a plan with much lower cost. Any ideas?\n\nThe estimates are quite exact (and exactly the same for both queries).\n\nBTW I can't test this on 9.2 or 9.3 easily, as this is our production\nenvironment and I can't just export the data. I've tried to simulate this\nbut so far no luck.\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 17:33:53 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "\"Tomas Vondra\" <[email protected]> writes:\n> I've run into a strange plan difference on 9.1.9 - the first query does\n> \"DISTINCT\" by doing a GROUP BY on the columns (both INT). ...\n> Now, this takes ~45 seconds to execute, but after rewriting the query to\n> use the regular DISTINCT it suddenly switches to HashAggregate with ~1/3\n> the cost (although it produces the same output, AFAIK), and it executes in\n> ~15 seconds.\n\n[ scratches head... ] I guess you're running into some corner case where\nchoose_hashed_grouping and choose_hashed_distinct make different choices.\nIt's going to be tough to debug without a test case though. I couldn't\nreproduce the behavior in a few tries here.\n\n> BTW I can't test this on 9.2 or 9.3 easily, as this is our production\n> environment and I can't just export the data. I've tried to simulate this\n> but so far no luck.\n\nI suppose they won't yet you step through those two functions with a\ndebugger either ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Aug 2013 14:35:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "On 14.8.2013 20:35, Tom Lane wrote:\n> \"Tomas Vondra\" <[email protected]> writes:\n>> I've run into a strange plan difference on 9.1.9 - the first query\n>> does \"DISTINCT\" by doing a GROUP BY on the columns (both INT). ... \n>> Now, this takes ~45 seconds to execute, but after rewriting the\n>> query to use the regular DISTINCT it suddenly switches to\n>> HashAggregate with ~1/3 the cost (although it produces the same\n>> output, AFAIK), and it executes in ~15 seconds.\n> \n> [ scratches head... ] I guess you're running into some corner case\n> where choose_hashed_grouping and choose_hashed_distinct make\n> different choices. It's going to be tough to debug without a test\n> case though. I couldn't reproduce the behavior in a few tries here.\n> \n>> BTW I can't test this on 9.2 or 9.3 easily, as this is our\n>> production environment and I can't just export the data. I've tried\n>> to simulate this but so far no luck.\n> \n> I suppose they won't yet you step through those two functions with a \n> debugger either ...\n\nI've managed to get the data to a different machine, and I've spent some\ntime on debugging it. It seems that the difference is in evaluating\nhashentrysize - while\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Aug 2013 21:20:17 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "On 14.8.2013 20:35, Tom Lane wrote:\n> \"Tomas Vondra\" <[email protected]> writes:\n>> I've run into a strange plan difference on 9.1.9 - the first query\n>> does \"DISTINCT\" by doing a GROUP BY on the columns (both INT). ... \n>> Now, this takes ~45 seconds to execute, but after rewriting the\n>> query to use the regular DISTINCT it suddenly switches to\n>> HashAggregate with ~1/3 the cost (although it produces the same\n>> output, AFAIK), and it executes in ~15 seconds.\n> \n> [ scratches head... ] I guess you're running into some corner case\n> where choose_hashed_grouping and choose_hashed_distinct make\n> different choices. It's going to be tough to debug without a test\n> case though. I couldn't reproduce the behavior in a few tries here.\n> \n>> BTW I can't test this on 9.2 or 9.3 easily, as this is our\n>> production environment and I can't just export the data. I've tried\n>> to simulate this but so far no luck.\n> \n> I suppose they won't yet you step through those two functions with a \n> debugger either ...\n\nOK, this time the complete message ...\n\nI've managed to get the data to a different machine, and I've spent some\ntime on debugging it. It seems that the difference is in evaluating\nhashentrysize - while choose_hashed_distinct does this:\n\n /*\n * Don't do it if it doesn't look like the hashtable will fit into\n * work_mem.\n */\n hashentrysize = MAXALIGN(path_width)\n + MAXALIGN(sizeof(MinimalTupleData));\n\n if (hashentrysize * dNumDistinctRows > work_mem * 1024L)\n return false;\n\nwhile choose_hashed_grouping does this:\n\n /* Estimate per-hash-entry space at tuple width... */\n hashentrysize = MAXALIGN(path_width)\n + MAXALIGN(sizeof(MinimalTupleData));\n /* plus space for pass-by-ref transition values... */\n hashentrysize += agg_costs->transitionSpace;\n /* plus the per-hash-entry overhead */\n hashentrysize += hash_agg_entry_size(agg_costs->numAggs);\n\n if (hashentrysize * dNumGroups > work_mem * 1024L)\n return false;\n\nIn both cases the common parameter values are\n\n dNumGroups = dNumDistinctRows = 20451018\n work_mem = 819200\n\nbut the hashentrysize size is 24 (choose_hashed_distinct) or 56\n(choose_hashed_grouping). This causes that while _distinct evaluates the\ncondition as false, and _grouping as true (and thus returns false).\n\nNow, the difference between 24 and 56 is caused by hash_agg_entry_size.\nIt's called with numAggs=0 but returns 32. I'm wondering if it should\nreturn 0 in such cases, i.e. something like this:\n\n Size\n hash_agg_entry_size(int numAggs)\n {\n Size entrysize;\n\n if (numAggs == 0)\n return 0;\n\n /* This must match build_hash_table */\n entrysize = sizeof(AggHashEntryData) +\n (numAggs - 1) * sizeof(AggStatePerGroupData);\n entrysize = MAXALIGN(entrysize);\n /* Account for hashtable overhead (assuming fill factor = 1) */\n entrysize += 3 * sizeof(void *);\n return entrysize;\n }\n\nI've tested that after this both queries use HashAggregate (which is the\nright choice), but I haven't done any extensive checking so maybe I'm\nmissing something.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Aug 2013 21:36:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "On 16.8.2013 21:36, Tomas Vondra wrote:\n>\n> Now, the difference between 24 and 56 is caused by hash_agg_entry_size.\n> It's called with numAggs=0 but returns 32. I'm wondering if it should\n> return 0 in such cases, i.e. something like this:\n> \n> Size\n> hash_agg_entry_size(int numAggs)\n> {\n> Size entrysize;\n> \n> if (numAggs == 0)\n> return 0;\n> \n> /* This must match build_hash_table */\n> entrysize = sizeof(AggHashEntryData) +\n> (numAggs - 1) * sizeof(AggStatePerGroupData);\n> entrysize = MAXALIGN(entrysize);\n> /* Account for hashtable overhead (assuming fill factor = 1) */\n> entrysize += 3 * sizeof(void *);\n> return entrysize;\n> }\n> \n> I've tested that after this both queries use HashAggregate (which is the\n> right choice), but I haven't done any extensive checking so maybe I'm\n> missing something.\n\nSo, is this a sufficient / correct explanation? Any comments about the\nfix I suggested? Or should I try to get a permission to provide the data\nso that you can reproduce the issue on your own? That might take a few\ndays to get through.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 17:13:09 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> I've managed to get the data to a different machine, and I've spent some\n> time on debugging it.\n\nGreat, thanks for looking into it!\n\n> It seems that the difference is in evaluating hashentrysize\n> [ choose_hashed_distinct omits hash_agg_entry_size() ]\n> but the hashentrysize size is 24 (choose_hashed_distinct) or 56\n> (choose_hashed_grouping). This causes that while _distinct evaluates the\n> condition as false, and _grouping as true (and thus returns false).\n\nHah.\n\n> Now, the difference between 24 and 56 is caused by hash_agg_entry_size.\n> It's called with numAggs=0 but returns 32. I'm wondering if it should\n> return 0 in such cases, i.e. something like this:\n\nNo, I don't think so. I'm pretty sure the reason choose_hashed_distinct\nis like that is that I subconsciously assumed hash_agg_entry_size would\nproduce zero for numAggs = 0; but in fact it does not and should not,\nbecause there's still some overhead for the per-group hash entry whether\nor not there's any aggregates. So the right fix is that\nchoose_hashed_distinct should add hash_agg_entry_size(0) onto its\nhashentrysize estimate.\n\nA separate issue is that the use of numAggs-1 in hash_agg_entry_size's\ncalculations seems a bit risky if numAggs can be zero - I'm not sure we\ncan rely on compilers to get that right. I'm inclined to replace that\nwith use of offsetof. Likewise in build_hash_table.\n\n> I've tested that after this both queries use HashAggregate (which is the\n> right choice), but I haven't done any extensive checking so maybe I'm\n> missing something.\n\nIt might be the preferable choice in this example, but you're looking at\nan edge case. If you want the thing to be using a hash aggregate for\nthis size of problem, you should increase work_mem.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 12:24:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "On 20.8.2013 18:24, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> I've managed to get the data to a different machine, and I've spent\n>> some time on debugging it.\n> \n> Great, thanks for looking into it!\n> \n>> It seems that the difference is in evaluating hashentrysize [\n>> choose_hashed_distinct omits hash_agg_entry_size() ] but the\n>> hashentrysize size is 24 (choose_hashed_distinct) or 56 \n>> (choose_hashed_grouping). This causes that while _distinct\n>> evaluates the condition as false, and _grouping as true (and thus\n>> returns false).\n> \n> Hah.\n> \n>> Now, the difference between 24 and 56 is caused by\n>> hash_agg_entry_size. It's called with numAggs=0 but returns 32. I'm\n>> wondering if it should return 0 in such cases, i.e. something like\n>> this:\n> \n> No, I don't think so. I'm pretty sure the reason\n> choose_hashed_distinct is like that is that I subconsciously assumed\n> hash_agg_entry_size would produce zero for numAggs = 0; but in fact\n> it does not and should not, because there's still some overhead for\n> the per-group hash entry whether or not there's any aggregates. So\n> the right fix is that choose_hashed_distinct should add\n> hash_agg_entry_size(0) onto its hashentrysize estimate.\n> \n> A separate issue is that the use of numAggs-1 in\n> hash_agg_entry_size's calculations seems a bit risky if numAggs can\n> be zero - I'm not sure we can rely on compilers to get that right.\n> I'm inclined to replace that with use of offsetof. Likewise in\n> build_hash_table.\n> \n>> I've tested that after this both queries use HashAggregate (which\n>> is the right choice), but I haven't done any extensive checking so\n>> maybe I'm missing something.\n> \n> It might be the preferable choice in this example, but you're looking\n> at an edge case. If you want the thing to be using a hash aggregate\n> for this size of problem, you should increase work_mem.\n\nHmmm. I think the main 'issue' here is that the queries behave quite\ndifferently although it seems like they should do the same thing (well,\nI understand they're not the same).\n\nWe're already using work_mem='800MB' so there's not much room to\nincrease this. Actually, this is probably the main reason why we haven't\nseen this issue more often, because the other dataset are smaller (but\nthat won't last for long, because of steady growth).\n\nA complete explain analyze for the HashAggregate plan is available here:\nhttp://explain.depesz.com/s/jCO The estimates seem to be pretty exact,\nexcept for the very last step:\n\n HashAggregate (cost=1399795.00..1604305.06 rows=20451006 width=8)\n (actual time=13985.580..14106.708 rows=355600 loops=1)\n\nSo, the estimate is ~60x higher than the actual value, which then\nhappens to work for choose_hashed_distinct (because it uses much lower\nvalue for hashentrysize), but for choose_hashed_grouping this is\nactually above the threshold.\n\nBut then again, the actual number of rows is much lower than the\nestimate so that the amount of memory is actually well within work_mem\nso it does not cause any trouble with OOM.\n\nSo I don't think increasing the work_mem is a good long-term solution\nhere, because the main problem here is the estimate. 
Another sign I\nshould probably start working on the multi-column indexes as I planned\nfor a long time ...\n\nAnyway, I still don't understand why the same logic around\nhash_agg_entry_size should not apply to choose_hashed_grouping as well?\nWell, it would make it slower in this particular corner case, but\nwouldn't it be more correct?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 19:56:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 20.8.2013 18:24, Tom Lane wrote:\n>> No, I don't think so. I'm pretty sure the reason\n>> choose_hashed_distinct is like that is that I subconsciously assumed\n>> hash_agg_entry_size would produce zero for numAggs = 0; but in fact\n>> it does not and should not, because there's still some overhead for\n>> the per-group hash entry whether or not there's any aggregates. So\n>> the right fix is that choose_hashed_distinct should add\n>> hash_agg_entry_size(0) onto its hashentrysize estimate.\n\n> Hmmm. I think the main 'issue' here is that the queries behave quite\n> differently although it seems like they should do the same thing (well,\n> I understand they're not the same).\n\nThey are the same, assuming we choose to use hashed grouping for both.\n\nIt's somewhat unfortunate that the planner treats work_mem as a hard\nboundary; if the estimated memory requirement is 1 byte over the limit,\nit will not consider a hashed aggregation, period. It might be better if\nwe applied some kind of sliding penalty. On the other hand, given that\nthe calculation depends on an ndistinct estimate that's frequently pretty\nbad, it doesn't seem wise to me to have a calculation that's encouraging\nuse of hashed aggregation by underestimating the space needed per row;\nand the current code in choose_hashed_distinct is definitely doing that.\nA quick experiment (grouping a single float8 row) says that the actual\nspace consumption per group is about 80 bytes, using HEAD on a 64-bit\nmachine. This compares to choose_hashed_grouping's estimate of 56 bytes\nand choose_hashed_distinct's estimate of 24. I think the remaining\ndiscrepancy is because the estimation code isn't allowing for palloc\noverhead --- the actual space for each representative tuple is really\nmore than MAXALIGN(data_width) + MAXALIGN(sizeof(tuple_header)).\nI'm not entirely sure if we should add in the palloc overhead, but\nI am pretty sure that choose_hashed_distinct isn't doing anyone any\nfavors by being wrong by a factor of 3.\n\n> Anyway, I still don't understand why the same logic around\n> hash_agg_entry_size should not apply to choose_hashed_grouping as well?\n> Well, it would make it slower in this particular corner case, but\n> wouldn't it be more correct?\n\nchoose_hashed_grouping has it right, or at least more nearly right.\nchoose_hashed_distinct is simply failing to account for space that\nwill in fact be consumed. Not fixing that is not a good way to\ndeal with inaccurate number-of-groups estimates; if that estimate\nis low rather than high, the consequences will be a lot worse than\nthey are here.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 17:02:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "On 20.8.2013 23:02, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n> \n>> Anyway, I still don't understand why the same logic around \n>> hash_agg_entry_size should not apply to choose_hashed_grouping as\n>> well? Well, it would make it slower in this particular corner case,\n>> but wouldn't it be more correct?\n\nMeh, I meant it the other way around - applying the hashentrysize logic\nfrom hashed_grouping to hashed_distinct. So that both use 56B.\n\n> choose_hashed_grouping has it right, or at least more nearly right. \n> choose_hashed_distinct is simply failing to account for space that \n> will in fact be consumed. Not fixing that is not a good way to deal\n> with inaccurate number-of-groups estimates; if that estimate is low\n> rather than high, the consequences will be a lot worse than they are\n> here.\n\nNot quite sure how to parse this (not a native speaker here, sorry).\nDoes that mean we want to keep it as it is now (because fixing it would\ncause even worse errors with low estimates)? Or do we want to fix\nhashed_distinct so that it behaves like hashed_grouping?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 21 Aug 2013 00:21:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Not quite sure how to parse this (not a native speaker here, sorry).\n> Does that mean we want to keep it as it is now (because fixing it would\n> cause even worse errors with low estimates)? Or do we want to fix\n> hashed_distinct so that it behaves like hashed_grouping?\n\nWe need to fix hashed_distinct like this:\n\ndiff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c\nindex bcc0d45..99284cb 100644\n*** a/src/backend/optimizer/plan/planner.c\n--- b/src/backend/optimizer/plan/planner.c\n*************** choose_hashed_distinct(PlannerInfo *root\n*** 2848,2854 ****\n--- 2848,2858 ----\n \t * Don't do it if it doesn't look like the hashtable will fit into\n \t * work_mem.\n \t */\n+ \n+ \t/* Estimate per-hash-entry space at tuple width... */\n \thashentrysize = MAXALIGN(path_width) + MAXALIGN(sizeof(MinimalTupleData));\n+ \t/* plus the per-hash-entry overhead */\n+ \thashentrysize += hash_agg_entry_size(0);\n \n \tif (hashentrysize * dNumDistinctRows > work_mem * 1024L)\n \t\treturn false;\n\nI've started a thread over in -hackers about whether it's prudent to\nback-patch this change or not.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 19:32:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with DISTINCT / GROUP BY giving different plans"
}
] |
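A sketch for illustration, not from the archived thread above: the switch between HashAggregate and a sort-based plan hinges on whether the estimated hash table (estimated distinct rows times the per-entry size discussed here) fits in work_mem, so comparing plans under two work_mem settings shows the crossover. The table and column names are placeholders, not the ones from the thread.

    SET work_mem = '64MB';
    EXPLAIN SELECT DISTINCT account_id, message_id FROM some_fact_table;

    SET work_mem = '2GB';
    EXPLAIN SELECT DISTINCT account_id, message_id FROM some_fact_table;

    RESET work_mem;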
[
{
"msg_contents": "Hello ,\n\nI am trying to run DBT5 to test performance of PG9.2.4,\n\nBut execution failed due to undefined symbol: PQescapeLiteral error in <output_dir>/bh/bh.out\n\nFull error as follow: \n----------------------------------------------------------------------\ndbt5 - Brokerage House\nListening on port: 30000\n\nUsing the following database settings:\nHost:\nDatabase port:\nDatabase name: dbt5\nBrokerage House opened for business, waiting traders...\nWARNING: Query CPF2_1 should return 10-30 rows.\nNOTICE: CPF2: INPUTS START\nNOTICE: CPF2: acct_id 0\nNOTICE: CPF2: INPUTS END\nWARNING: UNEXPECTED EXECUTION RESULT: customer_position.c 456\nSELECT t_id,\n t_s_symb,\n t_qty,\n st_name,\n th_dts\nFROM (SELECT t_id AS id\n FROM trade\n WHERE t_ca_id = 0\n ORDER BY t_dts DESC\n LIMIT 10) AS t,\n trade,\n trade_history,\n status_type\nWHERE t_id = id\n AND th_t_id = t_id\n AND st_id = th_st_id\nORDER BY th_dts DESC\nLIMIT 30\nBrokerageHouseMain: symbol lookup error: BrokerageHouseMain: undefined symbol: PQescapeLiteral\n--------------------------------------------------------------------------\n\nEnvironment :\n \nCentOS 6.4(Final)\nPG installed using source code\n\nDBT5 is cloned from http://github.com/petergeoghegan/dbt5.git\nalso tried with http://git.code.sf.net/p/osdldbt/dbt repo with pg9.0 \n\nsame error occur when used with PG9.0,PG9.1 PG9.3beta2\n\nExecution: \n\n1. Database loading :\n dbt5-pgsql-build-db -c 1000 -s 500 -w 300 -r\n\n\n2. Test \ndbt5-run-workload -a pgsql -c 1000 -d 300 -u 5 -n dbt5 -t 5000 -w 200 -o /tmp/result\n\n\nDo I missing something?\nIs there anything more need to do? \n\nThank you !!\n\nRegards,\nAmul Sul\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Aug 2013 12:13:16 +0800 (SGT)",
"msg_from": "amul sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "DBT5 execution failed due to undefined symbol: PQescapeLiteral"
},
{
"msg_contents": "amul sul wrote:\r\n> I am trying to run DBT5 to test performance of PG9.2.4,\r\n> \r\n> But execution failed due to undefined symbol: PQescapeLiteral error in <output_dir>/bh/bh.out\r\n> \r\n> Full error as follow:\r\n[...]\r\n> BrokerageHouseMain: symbol lookup error: BrokerageHouseMain: undefined symbol: PQescapeLiteral\r\n> --------------------------------------------------------------------------\r\n> \r\n> Environment :\r\n> \r\n> CentOS 6.4(Final)\r\n> PG installed using source code\r\n> \r\n> DBT5 is cloned from http://github.com/petergeoghegan/dbt5.git\r\n> also tried with http://git.code.sf.net/p/osdldbt/dbt repo with pg9.0\r\n> \r\n> same error occur when used with PG9.0,PG9.1 PG9.3beta2\r\n\r\nThat is a problem on the client side.\r\n\r\nI guess that your binary was built and linked with PostgreSQL client library (libpq)\r\nfrom version 9.0 or later, but you are trying to run it with a libpq.so\r\nfrom version 8.4 or earlier (where PQescapeLiteral was not defined).\r\n\r\nIn that case you should upgrade the PostgreSQL client.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Aug 2013 07:26:09 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DBT5 execution failed due to undefined symbol:\n PQescapeLiteral"
},
{
"msg_contents": "Hi Laurenz Albe ,\n\n\nThanks for reply.\nYour guess was correct :).\n\n\nIt worked for me exporting LD_LIBRARY_PATH to postgres9.2 lib directory.\n\n\nThanks again for your help.\n\nRegards,\nAmul Sul\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/DBT5-execution-failed-due-to-undefined-symbol-PQescapeLiteral-tp5767580p5767992.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 00:13:56 -0700 (PDT)",
"msg_from": "amulsul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DBT5 execution failed due to undefined symbol: PQescapeLiteral"
}
] |
[
{
"msg_contents": "Currently I run two queries back-to-back to correlate users with contacts.\n\nUPDATE contacts SET user_id = u.id\n FROM my_users u\n JOIN phone_numbers pn ON u.phone_significant = pn.significant\n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id =\npn.ref_contact_id;\n\nUPDATE contacts SET user_id = u.id\n FROM my_users u\n JOIN email_addresses em ON u.email = em.email\n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id =\nem.ref_contact_id;\n\nFor some reason I cannot figure out how to combine these into one update\nquery. They are running slower than I'd like them to even though I have\nindices on user_id, owner_id, email, and significant. So I'd like to try\nthem in a single query to see if that helps.\n\nAs always, thanks for your sage advice.\n\nCurrently I run two queries back-to-back to correlate users with contacts.UPDATE contacts SET user_id = u.id\n FROM my_users u JOIN phone_numbers pn ON u.phone_significant = pn.significant \n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id = pn.ref_contact_id;\nUPDATE contacts SET user_id = u.id\n FROM my_users u JOIN email_addresses em ON u.email = em.email WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id = em.ref_contact_id;\nFor some reason I cannot figure out how to combine these into one update query. They are running slower than I'd like them to even though I have indices on user_id, owner_id, email, and significant. So I'd like to try them in a single query to see if that helps.\nAs always, thanks for your sage advice.",
"msg_date": "Sat, 17 Aug 2013 13:19:10 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create one query out of two"
},
{
"msg_contents": "What does EXPLAIN show?\n\nWhat proportion of contacts have owner_id=7 and user_id is null?\n\nIf it's a large number of contacts, I'd try the following:\n\ncreate temporary table tusers as\nselect coalesce(p.ref_contact_id,e.ref_contact_id) as id, u.id as user_id\nfrom my_users u\n left join phone_number p on on p.significant=u.phone_significant\n left join email_addresses e on e.email=u.email\nwhere p.ref_contact_id is not null or e.ref_contact_id is not null;\n\ncreate unique index tusers_idx on tusers(id);\n\nupdate contacts set user_id=t.user_id\nfrom tusers t\nwhere t.id=contacts.id and contacts.owner=7 and contacts.user_id is null;\n\nIf it's a small number of contacts, then it might be worth creating a\ntemporary table of that subset, indexing it, then replacing \"where\np.ref_contact_id is not null or e.ref_contact_id is not null\" with \"where\np.ref_contact_id in (select id from TEMPTABLE) or e.ref_contact_id in\n(select id from TEMPTABLE)\"\n\n\nCalvin Dodge\n\n\nOn Sat, Aug 17, 2013 at 3:19 PM, Robert DiFalco <[email protected]>wrote:\n\n> Currently I run two queries back-to-back to correlate users with contacts.\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u\n> JOIN phone_numbers pn ON u.phone_significant = pn.significant\n> WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND\n> contacts.id = pn.ref_contact_id;\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u\n> JOIN email_addresses em ON u.email = em.email\n> WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND\n> contacts.id = em.ref_contact_id;\n>\n> For some reason I cannot figure out how to combine these into one update\n> query. They are running slower than I'd like them to even though I have\n> indices on user_id, owner_id, email, and significant. So I'd like to try\n> them in a single query to see if that helps.\n>\n> As always, thanks for your sage advice.\n>\n\nWhat does EXPLAIN show? What proportion of contacts have owner_id=7 and user_id is null?\nIf it's a large number of contacts, I'd try the following:\n\ncreate temporary table tusers as\nselect coalesce(p.ref_contact_id,e.ref_contact_id) as id, u.id as user_id \n\nfrom my_users u left join phone_number p on on p.significant=u.phone_significant left join email_addresses e on e.email=u.email\nwhere p.ref_contact_id is not null or e.ref_contact_id is not null;create unique index tusers_idx on tusers(id);\nupdate contacts set user_id=t.user_id\n\nfrom tusers twhere t.id=contacts.id and contacts.owner=7 and contacts.user_id is null;\nIf it's a small number of contacts, then it might be worth creating a temporary table of that subset, indexing it, then replacing \"where p.ref_contact_id is not null or e.ref_contact_id is not null\" with \"where p.ref_contact_id in (select id from TEMPTABLE) or e.ref_contact_id in (select id from TEMPTABLE)\"\nCalvin DodgeOn Sat, Aug 17, 2013 at 3:19 PM, Robert DiFalco <[email protected]> wrote:\nCurrently I run two queries back-to-back to correlate users with contacts.\nUPDATE contacts SET user_id = u.id\n FROM my_users u JOIN phone_numbers pn ON u.phone_significant = pn.significant \n WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id = pn.ref_contact_id;\nUPDATE contacts SET user_id = u.id\n FROM my_users u JOIN email_addresses em ON u.email = em.email WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL AND contacts.id = em.ref_contact_id;\nFor some reason I cannot figure out how to combine these into one update query. 
They are running slower than I'd like them to even though I have indices on user_id, owner_id, email, and significant. So I'd like to try them in a single query to see if that helps.\nAs always, thanks for your sage advice.",
"msg_date": "Sat, 17 Aug 2013 19:31:48 -0500",
"msg_from": "Calvin Dodge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create one query out of two"
},
{
"msg_contents": "Robert DiFalco <[email protected]> wrote:>\n\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u \n> JOIN phone_numbers pn ON u.phone_significant = pn.significant \n> WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL\n> AND contacts.id = pn.ref_contact_id;\n>\n> UPDATE contacts SET user_id = u.id\n> FROM my_users u \n> JOIN email_addresses em ON u.email = em.email \n> WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL\n> AND contacts.id = em.ref_contact_id;\n>\n> They are running slower than I'd like them to even though I have\n> indices on user_id, owner_id, email, and significant.\n\nHave you tried those queries with an index like this?:\n\nCREATE INDEX contacts_owner_null_user\n ON contacts (owner_id)\n WHERE user_id IS NULL;\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 18 Aug 2013 13:23:41 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create one query out of two"
}
] |
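A sketch for illustration, not from the archived thread above: one way to fold the two updates into a single statement, assuming a contact is not expected to match different users by phone and by e-mail (the original pair gives the phone match precedence, which this version does not guarantee). Whether it beats the two separate statements is not a given, since the OR may defeat index use; it is only something to compare under EXPLAIN ANALYZE.

    UPDATE contacts SET user_id = u.id
      FROM my_users u
      LEFT JOIN phone_numbers pn ON u.phone_significant = pn.significant
      LEFT JOIN email_addresses em ON u.email = em.email
      WHERE contacts.owner_id = 7 AND contacts.user_id IS NULL
        AND (contacts.id = pn.ref_contact_id OR contacts.id = em.ref_contact_id);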
[
{
"msg_contents": "Hi,\n\nI'm on 9.2.4 with Ubuntu server. There are usually hundereds of \nconnections doing the same insert with different data from different \nnetworks every minute, through pgbouncer in the same network of the \ndatabase server. The database has been running for about one year \nwithout problem. Yesterday I got a problem that the connection count \nlimit of the database server is reached. I checked the connections and \nfound that there are many inserts hanging there. I checked the \nload(cpu,memory,io) of the db server but seems everything is fine. I \nalso checked pg log and I only found there are one \"incomplete message \nfrom client\" error message every several minute. The I recycled \npgbouncer and kept monitoring the connections. I found the majority of \nthe inserts finish quickly but every minute there are several inserts \nleft and seems hanging there . So after a while, the connection limit is \nreached again. Besides those inserts, there are no other long run \nqueries and auto vacuums. I also checked the locks of the inserts and \nfound they were all granted. The insert statement itself is very simple \nand it only inserts one row but there are some triggers involved. They \nmight impact the performance but I have never experience any since the \nmajority of the inserts are fine. The problem persisted about 1-2 hours. \nI didn't do anything except recycling pgbouncer a few times. After that \nperiod, everything goes back to normal. It's has been 24 hours and it \ndidn't happen again.\n\n From the error message in pg log, I supect it might be the network \nproblem from some clients. Could anyone point out if there are other \npossible causes? I'm also wondering what those inserts are doing \nactually when they are hanging there, such as if they are in the trigger \nor not. Anything I can get similar with the connection snapshots in db2?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 09:44:39 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to investiage slow insert problem"
},
{
"msg_contents": "On Mon, Aug 19, 2013 at 6:44 PM, Rural Hunter <[email protected]> wrote:\n> I'm on 9.2.4 with Ubuntu server. There are usually hundereds of connections\n> doing the same insert with different data from different networks every\n> minute, through pgbouncer in the same network of the database server. The\n> database has been running for about one year without problem. Yesterday I\n> got a problem that the connection count limit of the database server is\n> reached. I checked the connections and found that there are many inserts\n> hanging there. I checked the load(cpu,memory,io) of the db server but seems\n> everything is fine. I also checked pg log and I only found there are one\n> \"incomplete message from client\" error message every several minute. The I\n> recycled pgbouncer and kept monitoring the connections. I found the majority\n> of the inserts finish quickly but every minute there are several inserts\n> left and seems hanging there . So after a while, the connection limit is\n> reached again. Besides those inserts, there are no other long run queries\n> and auto vacuums. I also checked the locks of the inserts and found they\n> were all granted. The insert statement itself is very simple and it only\n> inserts one row but there are some triggers involved. They might impact the\n> performance but I have never experience any since the majority of the\n> inserts are fine. The problem persisted about 1-2 hours. I didn't do\n> anything except recycling pgbouncer a few times. After that period,\n> everything goes back to normal. It's has been 24 hours and it didn't happen\n> again.\n>\n> From the error message in pg log, I supect it might be the network problem\n> from some clients. Could anyone point out if there are other possible\n> causes? I'm also wondering what those inserts are doing actually when they\n> are hanging there, such as if they are in the trigger or not. Anything I can\n> get similar with the connection snapshots in db2?\n\nWhat do you mean by recycling pgbouncer?\n\nHaven't you noticed what was in the state column of the\npg_state_activity view? In 9.2 the query column in this view shows the\nlast statement that was executed in this connection, and it does not\nmean that this statement is working at the moment of monitoring. If\nthe state is active, than it was working, however, my assumption is\nthat it was IDLE in transaction. You mentioned the \"incomplete message\nfrom client\" error, so it might somehow be a network problem that led\nto a hunging connection to pgbouncer, that made pgbouncer kept a\nconnection to postgres after transaction was started.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 19 Aug 2013 19:38:25 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to investiage slow insert problem"
},
{
"msg_contents": "于 2013/8/20 10:38, Sergey Konoplev 写道:\n> On Mon, Aug 19, 2013 at 6:44 PM, Rural Hunter <[email protected]> wrote:\n> What do you mean by recycling pgbouncer? \nI mean restarting pgbouncer.\n> Haven't you noticed what was in the state column of the \n> pg_state_activity view? In 9.2 the query column in this view shows the \n> last statement that was executed in this connection, and it does not \n> mean that this statement is working at the moment of monitoring. If \n> the state is active, than it was working, however, my assumption is \n> that it was IDLE in transaction. \nNo, they are alll with 'active' state.\n> You mentioned the \"incomplete message from client\" error, so it might \n> somehow be a network problem that led to a hunging connection to \n> pgbouncer, that made pgbouncer kept a connection to postgres after \n> transaction was started. \npgbouncer and the db server are in the same local network and there \nshouldn't be any network problem between them. I also ran ping from \npgbouncer server to the db server and there was no problem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Aug 2013 10:45:18 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to investiage slow insert problem"
},
{
"msg_contents": "On Mon, Aug 19, 2013 at 7:45 PM, Rural Hunter <[email protected]> wrote:\n>> You mentioned the \"incomplete message from client\" error, so it might\n>> somehow be a network problem that led to a hunging connection to pgbouncer,\n>> that made pgbouncer kept a connection to postgres after transaction was\n>> started.\n>\n> pgbouncer and the db server are in the same local network and there\n> shouldn't be any network problem between them. I also ran ping from\n> pgbouncer server to the db server and there was no problem.\n\nNext time, when you face this again, set log_min_duration_statement to\nthe value less that the age of hunging inserts and debug_print_parse,\ndebug_print_rewritten, debug_print_plan and debug_pretty_print to\n'on'. It will allow you to log what is happening with these inserts\nand what takes so many time.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 19 Aug 2013 20:01:10 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to investiage slow insert problem"
},
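A sketch for illustration, not from the archived thread: the settings suggested above, with an arbitrary 5-second threshold as the assumption. log_min_duration_statement is superuser-only to change at the session level (otherwise set it in postgresql.conf or via ALTER ROLE for the application user), while the debug_print_* parameters can be set per session; the parse/rewrite/plan trees then go to the server log for every statement, independent of the duration threshold.

    SET log_min_duration_statement = '5s';  -- the 5s threshold is an example only
    SET debug_print_parse = on;
    SET debug_print_rewritten = on;
    SET debug_print_plan = on;
    SET debug_pretty_print = on;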
{
"msg_contents": "On Monday, August 19, 2013, Rural Hunter wrote:\n\n> Hi,\n>\n> I'm on 9.2.4 with Ubuntu server. There are usually hundereds of\n> connections doing the same insert with different data from different\n> networks every minute, through pgbouncer in the same network of the\n> database server. The database has been running for about one year without\n> problem. Yesterday I got a problem that the connection count limit of the\n> database server is reached.\n\n\nI think that this should generally not happen at the server if you are\nusing pgbouncer, as you should configure it so that pgbouncer has a lower\nlimit than postgresql itself does. What pooling method (session,\ntransaction, statement) are you using?\n\n\n> I checked the connections and found that there are many inserts hanging\n> there. I checked the load(cpu,memory,io) of the db server but seems\n> everything is fine.\n\n\nCan you provide some example numbers for the io load?\n\n\n> I also checked pg log and I only found there are one \"incomplete message\n> from client\" error message every several minute.\n\n\nCould you post the complete log message and a few lines of context around\nit?\n\n\n> The I recycled pgbouncer and kept monitoring the connections. I found the\n> majority of the inserts finish quickly but every minute there are several\n> inserts left and seems hanging there .\n\n\nHow long had they been hanging there? It makes a big difference whether\nthere are several hanging there at one moment, but a few milliseconds later\nthere are several different ones, versus the same few that hang around of\nmany seconds or minutes at a time.\n\n...\n\n From the error message in pg log, I supect it might be the network problem\n> from some clients. Could anyone point out if there are other possible\n> causes?\n\n\nIf the identities of the \"hung\" processes are rapidly changing, it could\njust be that you are hitting a throughput limit. When you do a lot of\ninserts into indexed the tables, the performance can drop precipitously\nonce the size of the actively updated part of the indexes exceeds\nshared_buffers. This would usually show up in the io stats, but if you\nalways have a lot of io going on, it might not be obvious.\n\nIf it is the same few processes hung for long periods, I would strace them,\nor gdb them and get a backtrace.\n\n\n\n> I'm also wondering what those inserts are doing actually when they are\n> hanging there, such as if they are in the trigger or not. Anything I can\n> get similar with the connection snapshots in db2?\n>\n\nSorry, I don't know what a connection snapshot in db2 looks like.\n\n\nCheers,\n\nJeff\n\nOn Monday, August 19, 2013, Rural Hunter wrote:Hi,\n\nI'm on 9.2.4 with Ubuntu server. There are usually hundereds of connections doing the same insert with different data from different networks every minute, through pgbouncer in the same network of the database server. The database has been running for about one year without problem. Yesterday I got a problem that the connection count limit of the database server is reached.\nI think that this should generally not happen at the server if you are using pgbouncer, as you should configure it so that pgbouncer has a lower limit than postgresql itself does. What pooling method (session, transaction, statement) are you using?\n I checked the connections and found that there are many inserts hanging there. I checked the load(cpu,memory,io) of the db server but seems everything is fine. \nCan you provide some example numbers for the io load? 
I also checked pg log and I only found there are one \"incomplete message from client\" error message every several minute. \nCould you post the complete log message and a few lines of context around it? The I recycled pgbouncer and kept monitoring the connections. I found the majority of the inserts finish quickly but every minute there are several inserts left and seems hanging there . \nHow long had they been hanging there? It makes a big difference whether there are several hanging there at one moment, but a few milliseconds later there are several different ones, versus the same few that hang around of many seconds or minutes at a time.\n ...From the error message in pg log, I supect it might be the network problem from some clients. Could anyone point out if there are other possible causes? \nIf the identities of the \"hung\" processes are rapidly changing, it could just be that you are hitting a throughput limit. When you do a lot of inserts into indexed the tables, the performance can drop precipitously once the size of the actively updated part of the indexes exceeds shared_buffers. This would usually show up in the io stats, but if you always have a lot of io going on, it might not be obvious.\nIf it is the same few processes hung for long periods, I would strace them, or gdb them and get a backtrace. \nI'm also wondering what those inserts are doing actually when they are hanging there, such as if they are in the trigger or not. Anything I can get similar with the connection snapshots in db2?\nSorry, I don't know what a connection snapshot in db2 looks like. Cheers,Jeff",
"msg_date": "Mon, 19 Aug 2013 21:34:07 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to investiage slow insert problem"
},
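One way to see whether the same backends stay stuck (the question Jeff raises above), sketched against the 9.2 pg_stat_activity columns; the one-minute threshold is an arbitrary assumption, not something from the thread:

    -- Run this a few times, a minute apart, and compare the pid lists.
    SELECT pid, state, waiting,
           now() - query_start AS running_for,
           query
    FROM pg_stat_activity
    WHERE state <> 'idle'
      AND now() - query_start > interval '1 minute'
    ORDER BY running_for DESC;

If the same pids keep reappearing with a growing running_for, the backends really are hung rather than the system simply being slow under load.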
{
"msg_contents": "\n\n\n\n\n于 2013/8/20 12:34, Jeff Janes 写道:\n\nOn Monday, August 19, 2013, Rural Hunter wrote:\n \n\nI think that this should generally not happen at the server\n if you are using pgbouncer, as you should configure it so that\n pgbouncer has a lower limit than postgresql itself does. What\n pooling method (session, transaction, statement) are you using?\n\n statement. Currently, I set the limit of pgbouncer connection to\n same as db connection. But I also have a few connections connecting\n to db server directly.\n \n\n\nCan you provide some example numbers for the io load?\n\n I get some when the connection limit is reached(The database related\n storage is on sdb/sdd/sde/sdf):\n root@ubtserver:~# iostat -xm 3\n Linux 3.5.0-22-generic (ubuntu) 2013年08月19日 _x86_64_ (32\n CPU)\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 14.71 0.00 2.86 0.48 0.00 81.96\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 0.26 0.04 0.36 0.00 0.00 \n 24.71 0.00 0.55 3.01 0.30 0.29 0.01\n sdb 0.00 0.26 0.18 2.32 0.02 0.38 \n 329.50 0.01 5.36 1.26 5.69 0.21 0.05\n sdc 0.01 4.59 10.13 45.75 0.30 0.92 \n 44.65 0.05 5.14 7.49 4.62 0.63 3.50\n dm-0 0.00 0.00 0.00 0.01 0.00 \n 0.00 8.00 0.00 6.37 6.38 6.36 3.62 0.00\n sdd 0.00 0.42 0.02 42.87 0.00 0.46 \n 22.12 0.03 0.78 14.09 0.77 0.49 2.10\n sde 0.00 3.68 10.23 156.41 0.19 1.45 \n 20.06 0.03 1.59 21.34 0.29 0.51 8.55\n sdf 0.00 2.56 6.29 66.00 0.29 0.71 \n 28.42 0.04 0.56 4.52 0.19 0.37 2.71\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 13.99 0.00 1.91 1.04 0.00 83.06\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 0.00 0.33 0.00 0.00 0.00 \n 16.00 0.00 4.00 4.00 0.00 4.00 0.13\n sdb 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdc 0.00 15.33 5.33 14.33 0.13 0.21 \n 34.98 0.03 1.63 6.00 0.00 1.02 2.00\n dm-0 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdd 0.00 0.00 0.00 31.33 0.00 0.26 \n 17.19 0.01 0.34 0.00 0.34 0.34 1.07\n sde 0.00 0.00 43.00 163.67 0.59 1.29 \n 18.55 2.56 21.34 72.06 8.01 1.69 34.93\n sdf 0.00 0.00 6.00 62.00 0.17 0.55 \n 21.88 0.49 7.16 5.56 7.31 0.27 1.87\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 15.84 0.00 2.63 1.70 0.00 79.83\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 1.67 0.00 2.00 0.00 0.01 \n 14.67 0.07 33.33 0.00 33.33 25.33 5.07\n sdb 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdc 0.00 0.00 4.67 0.00 0.06 0.00 \n 26.29 0.13 6.29 6.29 0.00 25.14 11.73\n dm-0 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdd 0.00 0.33 0.00 49.00 0.00 0.39 \n 16.49 0.02 0.35 0.00 0.35 0.35 1.73\n sde 0.00 11.00 30.67 81.33 0.38 0.71 \n 19.98 36.46 143.19 43.91 180.62 2.69 30.13\n sdf 0.00 9.33 3.00 326.00 0.09 2.75 \n 17.69 3.51 10.66 5.33 10.71 0.11 3.60\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 14.99 0.00 2.39 4.89 0.00 77.74\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdb 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdc 0.00 19.67 7.33 29.00 0.09 0.60 \n 38.61 1.18 35.41 175.45 0.00 15.93 57.87\n dm-0 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdd 0.00 0.33 0.00 39.33 0.00 0.31 \n 15.93 0.01 0.37 0.00 0.37 0.37 1.47\n 
sde 0.00 11.33 29.67 312.67 0.39 2.51 \n 17.34 87.15 314.23 108.13 333.78 2.84 97.20\n sdf 0.00 0.00 8.33 0.00 0.17 0.00 \n 42.24 0.05 6.56 6.56 0.00 2.40 2.00\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 14.98 0.00 2.23 5.45 0.00 77.34\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 0.00 0.00 0.67 0.00 0.01 \n 20.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdb 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdc 0.00 9.67 10.00 6.00 0.12 0.10 \n 27.83 0.08 5.08 8.13 0.00 1.42 2.27\n dm-0 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdd 0.00 0.00 0.00 44.33 0.00 0.35 \n 16.00 0.03 0.72 0.00 0.72 0.72 3.20\n sde 0.00 0.00 47.33 0.00 0.58 0.00 \n 25.18 5.26 111.04 111.04 0.00 19.10 90.40\n sdf 0.00 11.00 3.33 683.33 0.12 7.38 \n 22.37 12.05 17.54 244.00 16.44 0.49 33.33\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 15.21 0.00 2.54 0.56 0.00 81.69\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 2.00 0.00 1.00 0.00 0.01 \n 24.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdb 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdc 0.00 0.00 14.33 2.00 0.20 0.39 \n 73.80 0.07 4.08 4.65 0.00 2.37 3.87\n dm-0 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdd 0.00 0.33 0.00 62.00 0.00 0.52 \n 17.08 0.02 0.34 0.00 0.34 0.34 2.13\n sde 0.00 9.67 30.67 157.33 0.43 1.27 \n 18.54 1.75 9.33 15.91 8.04 1.09 20.53\n sdf 0.00 9.67 6.67 0.67 0.13 0.04 \n 46.91 0.04 5.09 5.60 0.00 2.36 1.73\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 14.72 0.00 1.95 0.58 0.00 82.76\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n avgrq-sz avgqu-sz await r_await w_await svctm %util\n sda 0.00 0.00 0.00 2.00 0.00 \n 0.01 8.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdb 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdc 0.00 13.67 5.33 32.33 0.07 0.31 \n 20.46 0.04 1.03 7.25 0.00 0.46 1.73\n dm-0 0.00 0.00 0.00 0.00 0.00 \n 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n sdd 0.00 0.00 0.00 42.00 0.00 0.35 \n 17.27 0.03 0.79 0.00 0.79 0.79 3.33\n sde 0.00 0.33 48.00 804.00 0.61 6.34 \n 16.71 8.38 9.82 14.11 9.57 0.23 19.20\n sdf 0.00 0.00 8.00 463.00 0.09 4.12 \n 18.30 5.00 10.62 7.17 10.68 0.11 5.20\n\n\n\n\n\nCould you post the complete log message and a few lines of\n context around it?\n\n There is no context from the same connection around that message. \n \nHow long had they been hanging there? It makes a big\n difference whether there are several hanging there at one\n moment, but a few milliseconds later there are several different\n ones, versus the same few that hang around of many seconds or\n minutes at a time.\n\n The hanging connections never disappear until I restart pgbouncer.\n It's like this, At minute 1, 3 connections left. At minute 2,\n another 3 left, total 6. Another minute, another 3 left, total\n 9....till the limit reaches.\n\nIf the identities of the \"hung\" processes are rapidly\n changing, it could just be that you are hitting a throughput\n limit. When you do a lot of inserts into indexed the tables, the\n performance can drop precipitously once the size of the actively\n updated part of the indexes exceeds shared_buffers. 
This would\n usually show up in the io stats, but if you always have a lot of\n io going on, it might not be obvious.\n\n\nIf it is the same few processes hung for long periods, I\n would strace them, or gdb them and get a backtrace.\n\n any detail guide to use strace/gdb on pg process?\n \nSorry, I don't know what a connection snapshot in db2 looks\n like.\n\nhttp://pic.dhe.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.cmd.doc%2Fdoc%2Fr0001945.html\n search for \"get snapshot for application\". Note: some items in\n the sample are marked as \"Not Collected\" because some\n monitor flags are turned off.\n\n\n \n\n\nCheers,\n\n\nJeff\n\n\n\n\n",
"msg_date": "Tue, 20 Aug 2013 13:30:18 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to investiage slow insert problem"
},
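Jeff's earlier point about the actively updated part of the indexes outgrowing shared_buffers can be checked with a sketch like the following; 'my_insert_table' is a placeholder for the real insert target, not a table mentioned in the thread:

    -- Compare shared_buffers with the sizes of the indexes on the insert target.
    SELECT current_setting('shared_buffers')           AS shared_buffers,
           indexrelname                                 AS index_name,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE relname = 'my_insert_table'
    ORDER BY pg_relation_size(indexrelid) DESC;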
{
"msg_contents": "On Mon, Aug 19, 2013 at 10:30 PM, Rural Hunter\n<[email protected]<javascript:;>>\nwrote:\n> 于 2013/8/20 12:34, Jeff Janes 写道:\n>\n\n> > How long had they been hanging there? It makes a big difference whether\n> > there are several hanging there at one moment, but a few milliseconds\nlater\n> > there are several different ones, versus the same few that hang around\nof\n> > many seconds or minutes at a time.\n>\n> The hanging connections never disappear until I restart pgbouncer. It's\nlike\n> this, At minute 1, 3 connections left. At minute 2, another 3 left, total\n6.\n> Another minute, another 3 left, total 9....till the limit reaches.\n\nOK, that certainly does sound like network problems and not disk\ncontention. But what I don't see is why it would be listed as \"active\" in\npg_stat_activity. If it is blocked on a network connection, I would think\nit would show 'idle'.\n\n>\n> > If it is the same few processes hung for long periods, I would strace\nthem,\n> > or gdb them and get a backtrace.\n>\n> any detail guide to use strace/gdb on pg process?\n\nstrace and gdb are aggressive forms of monitoring, so you should talk to\nyour sys admins before using them (if your organization has that type of\nsegregation of duties) or read their documentation.\n\nIt is best to run them on a test system if you can get the problem to occur\nthere. If you can't, then I would be willing to run them against a hung\nbackend on a production system, but I might be more adventurous than most.\n\nI have occasionally seen strace cause the straced program to seg-fault, but\nI've only seen this with GUI programs and never with postgresql.\n\nIn theory attaching gdb could make a backend pause while holding a\nspinlock, and if left that way for long enough could cause a\npostgresql-wide panic over a stuck spinlock. But I've never seen this\nhappen without intentionally causing. If the backend you attach is truly\nhung and hasn't already causes a panic, then it almost surely can't be\nhappen, but just to be sure you should quit gdb as soon as possible so that\nthe backend can continue unimpeded.\n\nAnyway, both are fairly easy to run once you find the pid of one of a stuck\nbackend (either using ps, or using pg_stat_activity). Then you give the\npid to the debugging program with -p option.\n\nYou probably have to run as the postgres user, or you won't have\npermissions to attach to the backend.\n\nWith strace, once you attach you will see a stream of system calls go to\nyour screen, until you hit ctrl-C to detach. But if the backend is hung on\na network connection, there should really only be one system call, and then\nit just wait until you detach or the network connection times out, like\nthis:\n\n$ strace -p 21116\nProcess 21116 attached - interrupt to quit\nrecvfrom(9,\n\nSo it is waiting on a recvfrom call, and that call never returns until I\ngot sick of waiting and hit ctrl-C. Not very interesting, but it does show\nit is indeed stuck on the network\n\nfor gdb, it is similar to invoke:\n\n$ gdb -p 21116\n\nand it then produces several screenfuls of diagnostic gibberish and gives\nyou an interactive command-line environment. 
Once attached, you want to\nget a backtrace (\"bt\", return), and then quit promptly (\"q\", return, \"y\").\n\nThat produces something like this:\n\nLoaded symbols for /lib64/libnss_files.so.2\n0x00000032a80e9672 in __libc_recv (fd=<value optimized out>, buf=0xb33f60,\nn=8192, flags=0) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:30\n30 return INLINE_SYSCALL (recvfrom, 6, fd, buf, n, flags, NULL,\nNULL);\n(gdb) bt\n#0 0x00000032a80e9672 in __libc_recv (fd=<value optimized out>,\nbuf=0xb33f60, n=8192, flags=0) at\n../sysdeps/unix/sysv/linux/x86_64/recv.c:30\n#1 0x00000000005a4846 in secure_read (port=0x22f1190, ptr=0xb33f60,\nlen=8192) at be-secure.c:304\n#2 0x00000000005ae33b in pq_recvbuf () at pqcomm.c:824\n#3 0x00000000005ae73b in pq_getbyte () at pqcomm.c:865\n#4 0x0000000000651c11 in SocketBackend (argc=<value optimized out>,\nargv=<value optimized out>, dbname=0x22d2a28 \"jjanes\", username=<value\noptimized out>)\n at postgres.c:342\n#5 ReadCommand (argc=<value optimized out>, argv=<value optimized out>,\ndbname=0x22d2a28 \"jjanes\", username=<value optimized out>) at postgres.c:490\n#6 PostgresMain (argc=<value optimized out>, argv=<value optimized out>,\ndbname=0x22d2a28 \"jjanes\", username=<value optimized out>) at\npostgres.c:3923\n#7 0x000000000060e861 in BackendRun () at postmaster.c:3614\n#8 BackendStartup () at postmaster.c:3304\n#9 ServerLoop () at postmaster.c:1367\n#10 0x00000000006113b1 in PostmasterMain (argc=<value optimized out>,\nargv=<value optimized out>) at postmaster.c:1127\n#11 0x00000000005b0440 in main (argc=3, argv=0x22d1cb0) at main.c:199\n(gdb) q\nA debugging session is active.\n\n Inferior 1 [process 21116] will be detached.\n\nQuit anyway? (y or n) y\nDetaching from program: /usr/local/pgsql9_2/bin/postgres, process 21116\n\nCheers,\n\nJeff\n\nOn Mon, Aug 19, 2013 at 10:30 PM, Rural Hunter <[email protected]> wrote:\n> 于 2013/8/20 12:34, Jeff Janes 写道:\n>\n\n> > How long had they been hanging there? It makes a big difference whether\n> > there are several hanging there at one moment, but a few milliseconds later\n> > there are several different ones, versus the same few that hang around of\n> > many seconds or minutes at a time.\n>\n> The hanging connections never disappear until I restart pgbouncer. It's like\n> this, At minute 1, 3 connections left. At minute 2, another 3 left, total 6.\n> Another minute, another 3 left, total 9....till the limit reaches.\n\nOK, that certainly does sound like network problems and not disk contention. But what I don't see is why it would be listed as \"active\" in pg_stat_activity. If it is blocked on a network connection, I would think it would show 'idle'.\n\n>\n> > If it is the same few processes hung for long periods, I would strace them,\n> > or gdb them and get a backtrace.\n>\n> any detail guide to use strace/gdb on pg process?\n\nstrace and gdb are aggressive forms of monitoring, so you should talk to your sys admins before using them (if your organization has that type of segregation of duties) or read their documentation.\n\nIt is best to run them on a test system if you can get the problem to occur there. 
If you can't, then I would be willing to run them against a hung backend on a production system, but I might be more adventurous than most.\n\nI have occasionally seen strace cause the straced program to seg-fault, but I've only seen this with GUI programs and never with postgresql.\n\nIn theory attaching gdb could make a backend pause while holding a spinlock, and if left that way for long enough could cause a postgresql-wide panic over a stuck spinlock. But I've never seen this happen without intentionally causing. If the backend you attach is truly hung and hasn't already causes a panic, then it almost surely can't be happen, but just to be sure you should quit gdb as soon as possible so that the backend can continue unimpeded.\n\nAnyway, both are fairly easy to run once you find the pid of one of a stuck backend (either using ps, or using pg_stat_activity). Then you give the pid to the debugging program with -p option.\n\nYou probably have to run as the postgres user, or you won't have permissions to attach to the backend.\n\nWith strace, once you attach you will see a stream of system calls go to your screen, until you hit ctrl-C to detach. But if the backend is hung on a network connection, there should really only be one system call, and then it just wait until you detach or the network connection times out, like this:\n\n$ strace -p 21116\nProcess 21116 attached - interrupt to quit\nrecvfrom(9,\n\nSo it is waiting on a recvfrom call, and that call never returns until I got sick of waiting and hit ctrl-C. Not very interesting, but it does show it is indeed stuck on the network\n\nfor gdb, it is similar to invoke:\n\n$ gdb -p 21116\n\nand it then produces several screenfuls of diagnostic gibberish and gives you an interactive command-line environment. Once attached, you want to get a backtrace (\"bt\", return), and then quit promptly (\"q\", return, \"y\").\n\nThat produces something like this:\n\nLoaded symbols for /lib64/libnss_files.so.2\n0x00000032a80e9672 in __libc_recv (fd=<value optimized out>, buf=0xb33f60, n=8192, flags=0) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:30\n30 return INLINE_SYSCALL (recvfrom, 6, fd, buf, n, flags, NULL, NULL);\n(gdb) bt\n#0 0x00000032a80e9672 in __libc_recv (fd=<value optimized out>, buf=0xb33f60, n=8192, flags=0) at ../sysdeps/unix/sysv/linux/x86_64/recv.c:30\n#1 0x00000000005a4846 in secure_read (port=0x22f1190, ptr=0xb33f60, len=8192) at be-secure.c:304\n#2 0x00000000005ae33b in pq_recvbuf () at pqcomm.c:824\n#3 0x00000000005ae73b in pq_getbyte () at pqcomm.c:865\n#4 0x0000000000651c11 in SocketBackend (argc=<value optimized out>, argv=<value optimized out>, dbname=0x22d2a28 \"jjanes\", username=<value optimized out>)\n at postgres.c:342\n#5 ReadCommand (argc=<value optimized out>, argv=<value optimized out>, dbname=0x22d2a28 \"jjanes\", username=<value optimized out>) at postgres.c:490\n#6 PostgresMain (argc=<value optimized out>, argv=<value optimized out>, dbname=0x22d2a28 \"jjanes\", username=<value optimized out>) at postgres.c:3923\n#7 0x000000000060e861 in BackendRun () at postmaster.c:3614\n#8 BackendStartup () at postmaster.c:3304\n#9 ServerLoop () at postmaster.c:1367\n#10 0x00000000006113b1 in PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1127\n#11 0x00000000005b0440 in main (argc=3, argv=0x22d1cb0) at main.c:199\n(gdb) q\nA debugging session is active.\n\n Inferior 1 [process 21116] will be detached.\n\nQuit anyway? 
(y or n) y\nDetaching from program: /usr/local/pgsql9_2/bin/postgres, process 21116\nCheers,Jeff",
"msg_date": "Tue, 20 Aug 2013 17:24:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "How to investiage slow insert problem"
},
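To pick the pid to hand to strace -p or gdb -p as described above, something along these lines works on 9.2; the five-minute cutoff and the focus on 'active' backends are assumptions:

    -- List candidate backends to attach to, with the client socket noted
    -- in case the network side (client_addr/client_port) needs checking too.
    SELECT pid, client_addr, client_port,
           now() - query_start AS stuck_for, query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '5 minutes'
    ORDER BY stuck_for DESC;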
{
"msg_contents": "Hi Jeff,\n\nThanks a lot for such a detailed guide!\n\n于 2013/8/21 8:24, Jeff Janes 写道:\n>\n> OK, that certainly does sound like network problems and not disk \n> contention. But what I don't see is why it would be listed as \n> \"active\" in pg_stat_activity. If it is blocked on a network \n> connection, I would think it would show 'idle'.\n>\n> strace and gdb are aggressive forms of monitoring, so you should talk \n> to your sys admins before using them (if your organization has that \n> type of segregation of duties) or read their documentation.\n>\n> It is best to run them on a test system if you can get the problem to \n> occur there. If you can't, then I would be willing to run them \n> against a hung backend on a production system, but I might be more \n> adventurous than most.\n>\n> I have occasionally seen strace cause the straced program to \n> seg-fault, but I've only seen this with GUI programs and never with \n> postgresql.\n>\n> In theory attaching gdb could make a backend pause while holding a \n> spinlock, and if left that way for long enough could cause a \n> postgresql-wide panic over a stuck spinlock. But I've never seen this \n> happen without intentionally causing. If the backend you attach is \n> truly hung and hasn't already causes a panic, then it almost surely \n> can't be happen, but just to be sure you should quit gdb as soon as \n> possible so that the backend can continue unimpeded.\n>\n> Anyway, both are fairly easy to run once you find the pid of one of a \n> stuck backend (either using ps, or using pg_stat_activity). Then you \n> give the pid to the debugging program with -p option.\n>\n> You probably have to run as the postgres user, or you won't have \n> permissions to attach to the backend.\n>\n> With strace, once you attach you will see a stream of system calls go \n> to your screen, until you hit ctrl-C to detach. But if the backend is \n> hung on a network connection, there should really only be one system \n> call, and then it just wait until you detach or the network connection \n> times out, like this:\n>\n> $ strace -p 21116\n> Process 21116 attached - interrupt to quit\n> recvfrom(9,\n>\n> So it is waiting on a recvfrom call, and that call never returns until \n> I got sick of waiting and hit ctrl-C. Not very interesting, but it \n> does show it is indeed stuck on the network\n>\n> for gdb, it is similar to invoke:\n>\n> $ gdb -p 21116\n>\n> and it then produces several screenfuls of diagnostic gibberish and \n> gives you an interactive command-line environment. 
Once attached, you \n> want to get a backtrace (\"bt\", return), and then quit promptly (\"q\", \n> return, \"y\").\n>\n> That produces something like this:\n>\n> Loaded symbols for /lib64/libnss_files.so.2\n> 0x00000032a80e9672 in __libc_recv (fd=<value optimized out>, \n> buf=0xb33f60, n=8192, flags=0) at \n> ../sysdeps/unix/sysv/linux/x86_64/recv.c:30\n> 30 return INLINE_SYSCALL (recvfrom, 6, fd, buf, n, flags, \n> NULL, NULL);\n> (gdb) bt\n> #0 0x00000032a80e9672 in __libc_recv (fd=<value optimized out>, \n> buf=0xb33f60, n=8192, flags=0) at \n> ../sysdeps/unix/sysv/linux/x86_64/recv.c:30\n> #1 0x00000000005a4846 in secure_read (port=0x22f1190, ptr=0xb33f60, \n> len=8192) at be-secure.c:304\n> #2 0x00000000005ae33b in pq_recvbuf () at pqcomm.c:824\n> #3 0x00000000005ae73b in pq_getbyte () at pqcomm.c:865\n> #4 0x0000000000651c11 in SocketBackend (argc=<value optimized out>, \n> argv=<value optimized out>, dbname=0x22d2a28 \"jjanes\", username=<value \n> optimized out>)\n> at postgres.c:342\n> #5 ReadCommand (argc=<value optimized out>, argv=<value optimized \n> out>, dbname=0x22d2a28 \"jjanes\", username=<value optimized out>) at \n> postgres.c:490\n> #6 PostgresMain (argc=<value optimized out>, argv=<value optimized \n> out>, dbname=0x22d2a28 \"jjanes\", username=<value optimized out>) at \n> postgres.c:3923\n> #7 0x000000000060e861 in BackendRun () at postmaster.c:3614\n> #8 BackendStartup () at postmaster.c:3304\n> #9 ServerLoop () at postmaster.c:1367\n> #10 0x00000000006113b1 in PostmasterMain (argc=<value optimized out>, \n> argv=<value optimized out>) at postmaster.c:1127\n> #11 0x00000000005b0440 in main (argc=3, argv=0x22d1cb0) at main.c:199\n> (gdb) q\n> A debugging session is active.\n>\n> Inferior 1 [process 21116] will be detached.\n>\n> Quit anyway? (y or n) y\n> Detaching from program: /usr/local/pgsql9_2/bin/postgres, process 21116\n>\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 21 Aug 2013 09:33:14 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to investiage slow insert problem"
},
{
"msg_contents": "(@Jeff, sorry I sent this message only to you by mistake, sending to the\nlist now...)\n\nOn Tue, Aug 20, 2013 at 9:24 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Aug 19, 2013 at 10:30 PM, Rural Hunter <[email protected]>\n> wrote:\n> > 于 2013/8/20 12:34, Jeff Janes 写道:\n> >\n>\n> > > How long had they been hanging there? It makes a big difference\n> whether\n> > > there are several hanging there at one moment, but a few milliseconds\n> later\n> > > there are several different ones, versus the same few that hang around\n> of\n> > > many seconds or minutes at a time.\n> >\n> > The hanging connections never disappear until I restart pgbouncer. It's\n> like\n> > this, At minute 1, 3 connections left. At minute 2, another 3 left,\n> total 6.\n> > Another minute, another 3 left, total 9....till the limit reaches.\n>\n> OK, that certainly does sound like network problems and not disk\n> contention. But what I don't see is why it would be listed as \"active\" in\n> pg_stat_activity. If it is blocked on a network connection, I would think\n> it would show 'idle'.\n\n\nIIRC, the \"state\" column will show if the query on \"query\" column is really\nrunning or not (by not I mean, it is \"idle[ in transaction]\"), the column\n\"waiting\" is the one that we should look at to see if the backend is really\nblocked, which is the case if waiting is true. If it is true, then we\nshould check at pg_locks to see who is blocking it, [1] and [2] has good\nqueries for that.\n\n[1] http://wiki.postgresql.org/wiki/Lock_Monitoring\n[2] http://wiki.postgresql.org/wiki/Lock_dependency_information\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\n(@Jeff, sorry I sent this message only to you by mistake, sending to the list now...)On Tue, Aug 20, 2013 at 9:24 PM, Jeff Janes <[email protected]> wrote:\nOn Mon, Aug 19, 2013 at 10:30 PM, Rural Hunter <[email protected]> wrote:\n\n\n\n\n> 于 2013/8/20 12:34, Jeff Janes 写道:\n>\n\n> > How long had they been hanging there? It makes a big difference whether\n> > there are several hanging there at one moment, but a few milliseconds later\n> > there are several different ones, versus the same few that hang around of\n> > many seconds or minutes at a time.\n>\n> The hanging connections never disappear until I restart pgbouncer. It's like\n> this, At minute 1, 3 connections left. At minute 2, another 3 left, total 6.\n> Another minute, another 3 left, total 9....till the limit reaches.\n\nOK, that certainly does sound like network problems and not disk \ncontention. But what I don't see is why it would be listed as \"active\" \nin pg_stat_activity. If it is blocked on a network connection, I would \nthink it would show 'idle'.\nIIRC, the \"state\" column \nwill show if the query on \"query\" column is really running or not (by \nnot I mean, it is \"idle[ in transaction]\"), the column \"waiting\" is the \none that we should look at to see if the backend is really blocked, \nwhich is the case if waiting is true. If it is true, then we should \ncheck at pg_locks to see who is blocking it, [1] and [2] has good \nqueries for that.\n[1] http://wiki.postgresql.org/wiki/Lock_Monitoring[2] http://wiki.postgresql.org/wiki/Lock_dependency_information\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Wed, 21 Aug 2013 10:10:15 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to investiage slow insert problem"
},
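A simplified variant of the wiki lock-monitoring query Matheus links to; it only joins on the most common lock fields, so treat it as a sketch rather than a complete blocker report:

    SELECT blocked.pid        AS blocked_pid,
           blocked_act.query  AS blocked_query,
           blocking.pid       AS blocking_pid,
           blocking_act.query AS blocking_query
    FROM pg_locks blocked
    JOIN pg_stat_activity blocked_act  ON blocked_act.pid = blocked.pid
    JOIN pg_locks blocking
      ON blocking.locktype                 = blocked.locktype
     AND blocking.database      IS NOT DISTINCT FROM blocked.database
     AND blocking.relation      IS NOT DISTINCT FROM blocked.relation
     AND blocking.transactionid IS NOT DISTINCT FROM blocked.transactionid
     AND blocking.pid          <> blocked.pid
    JOIN pg_stat_activity blocking_act ON blocking_act.pid = blocking.pid
    WHERE NOT blocked.granted
      AND blocking.granted;

If the hung inserts show waiting = true but this returns nothing, the blocker is more likely outside the lock manager (network, trigger work, or I/O) than a conflicting transaction.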
{
"msg_contents": "On Mon, Aug 19, 2013 at 10:44 PM, Rural Hunter <[email protected]>wrote:\n\n> Hi,\n>\n> I'm on 9.2.4 with Ubuntu server. There are usually hundereds of\n> connections doing the same insert with different data from different\n> networks every minute, through pgbouncer in the same network of the\n> database server. The database has been running for about one year without\n> problem. Yesterday I got a problem that the connection count limit of the\n> database server is reached. I checked the connections and found that there\n> are many inserts hanging there. I checked the load(cpu,memory,io) of the db\n> server but seems everything is fine. I also checked pg log and I only found\n> there are one \"incomplete message from client\" error message every several\n> minute.\n\n\nIt may not be related, it can be some kind of monitoring tool checking if\nPostgreSQL is listening on 5432 (or whatever) port. Do you have it?\n\n\n> The I recycled pgbouncer and kept monitoring the connections. I found the\n> majority of the inserts finish quickly but every minute there are several\n> inserts left and seems hanging there . So after a while, the connection\n> limit is reached again. Besides those inserts, there are no other long run\n> queries and auto vacuums. I also checked the locks of the inserts and found\n> they were all granted. The insert statement itself is very simple and it\n> only inserts one row but there are some triggers involved. They might\n> impact the performance but I have never experience any since the majority\n> of the inserts are fine.\n\n\nI would check this triggers first. If you execute (by hand) the same insert\n(perhaps inside a transaction, followed by a rollback) does it hangs? If\nso, you can try to trace what these triggers are doing, perhaps the\neasier/faster way would be the old and good RAISE NOTICE (if it is\nPL/pgSQL). Or even, try to execute the trigger's source by hand, if it is\nnot really huge; a EXPLAIN ANALYZE of the queries inside it may help.\n\nI already have problems with a system were some UPDATEs suddenly started\nhungging (like your case), and it was really an SELECT inside a trigger\nthat was with bad plans (some adjustment on ANALYZE parameters for one\ntable helped in the case).\n\n\n> The problem persisted about 1-2 hours. I didn't do anything except\n> recycling pgbouncer a few times. After that period, everything goes back to\n> normal. It's has been 24 hours and it didn't happen again.\n>\n> From the error message in pg log, I supect it might be the network problem\n> from some clients. Could anyone point out if there are other possible\n> causes? I'm also wondering what those inserts are doing actually when they\n> are hanging there, such as if they are in the trigger or not. Anything I\n> can get similar with the connection snapshots in db2?\n>\n>\n>\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Aug 19, 2013 at 10:44 PM, Rural Hunter <[email protected]> wrote:\nHi,\n\nI'm on 9.2.4 with Ubuntu server. There are usually hundereds of connections doing the same insert with different data from different networks every minute, through pgbouncer in the same network of the database server. The database has been running for about one year without problem. Yesterday I got a problem that the connection count limit of the database server is reached. I checked the connections and found that there are many inserts hanging there. 
I checked the load(cpu,memory,io) of the db server but seems everything is fine. I also checked pg log and I only found there are one \"incomplete message from client\" error message every several minute.\nIt may not be related, it can be some kind of monitoring tool checking if PostgreSQL is listening on 5432 (or whatever) port. Do you have it? \n\n The I recycled pgbouncer and kept monitoring the connections. I found the majority of the inserts finish quickly but every minute there are several inserts left and seems hanging there . So after a while, the connection limit is reached again. Besides those inserts, there are no other long run queries and auto vacuums. I also checked the locks of the inserts and found they were all granted. The insert statement itself is very simple and it only inserts one row but there are some triggers involved. They might impact the performance but I have never experience any since the majority of the inserts are fine.\nI would check this triggers first. If you execute (by hand) the same insert (perhaps inside a transaction, followed by a rollback) does it hangs? If so, you can try to trace what these triggers are doing, perhaps the easier/faster way would be the old and good RAISE NOTICE (if it is PL/pgSQL). Or even, try to execute the trigger's source by hand, if it is not really huge; a EXPLAIN ANALYZE of the queries inside it may help.\nI already have problems with a system were some UPDATEs suddenly started hungging (like your case), and it was really an SELECT inside a trigger that was with bad plans (some adjustment on ANALYZE parameters for one table helped in the case).\n The problem persisted about 1-2 hours. I didn't do anything except recycling pgbouncer a few times. After that period, everything goes back to normal. It's has been 24 hours and it didn't happen again.\n\n>From the error message in pg log, I supect it might be the network problem from some clients. Could anyone point out if there are other possible causes? I'm also wondering what those inserts are doing actually when they are hanging there, such as if they are in the trigger or not. Anything I can get similar with the connection snapshots in db2?\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Wed, 21 Aug 2013 10:17:01 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to investiage slow insert problem"
}
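Matheus's suggestion of re-running the insert by hand inside a rolled-back transaction can be combined with EXPLAIN ANALYZE, which also reports per-trigger timing; the table and column names below are placeholders, not the application's real statement:

    BEGIN;
    -- Substitute the real INSERT the application runs.
    EXPLAIN (ANALYZE, BUFFERS)
    INSERT INTO my_insert_table (id, payload) VALUES ('test-1', 'test payload');
    -- Per-trigger times are printed at the end of the EXPLAIN output,
    -- which shows whether the triggers are where the time goes.
    ROLLBACK;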
]