[
{
"msg_contents": "Hi all,\n i have 5 servers that have been installing postgresql .In order to\nknow the postgresql working status and monitor them ,moreover i don't want\nto use the monitor tools .I want to use the SQL commands to monitoring\npostgresql system . please suggest any SQL COMMANDS to work successfully.if\nyou have some good suggestion ,you can email to me\n([email protected]) or sky :xqwbx163\n \n\n\nbest regards \n\ncharles_xie\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/HELP-Need-to-Sql-commands-to-monitoring-Postgresql-tp5722548.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Tue, 4 Sep 2012 00:12:59 -0700 (PDT)",
"msg_from": "charles_xie <[email protected]>",
"msg_from_op": true,
"msg_subject": "HELP!!!-----Need to Sql commands to monitoring Postgresql"
},
{
"msg_contents": "On Tue, Sep 4, 2012 at 12:12 AM, charles_xie <[email protected]> wrote:\n> Hi all,\n> i have 5 servers that have been installing postgresql .In order to\n> know the postgresql working status and monitor them ,moreover i don't want\n> to use the monitor tools .I want to use the SQL commands to monitoring\n> postgresql system . please suggest any SQL COMMANDS to work successfully.if\n> you have some good suggestion ,you can email to me\n> ([email protected]) or sky :xqwbx163\n\nHello,\n\nYou might want to try pgsql-general or the wiki. The right stuff also\ndepends on what you are monitoring for.\n\nBasic uptime and information: \"SELECT 1\" (\"can I log in?\"), but also\ncounting the number of connections (select count(*) from\npg_stat_activity), the number of contending connections (select\ncount(*) from pg_stat_activity where waiting = 't'), the number of\ntables (select count(*) from pg_tables), database size (select\npg_database_size(<dbnamehere>)), and database version (select\nversion()) we find useful. It's so useful we put it into a very\ncondensed and cryptic status line (which can optionally have more\ninformation in more exceptional conditions) like:\n\n[100.5GB:140T:7C], (v9.0.6, --other statuses if they occur--)\n\nThe space of queries used for tuning and capacity are much larger, but\nI find these basic chunks of information a useful fingerprint of most\ndatabases and activity levels in a relatively small amount of space.\n\n-- \nfdr\n\n",
"msg_date": "Thu, 6 Sep 2012 00:43:26 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HELP!!!-----Need to Sql commands to monitoring Postgresql"
},
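The basic checks Daniel lists (can I log in, connection count, waiting backends, table count, database size, version) are easy to glue together. A minimal sketch of that, including the condensed `[100.5GB:140T:7C]` status line he describes; the `CHECKS` dict and `format_status` helper are names of my own choosing, and the `waiting` boolean matches the pre-9.2 `pg_stat_activity` layout the post assumes:

```python
# The monitoring queries from the post, collected in one place.  Feed each
# one to any client (psql, a cron job, a DB-API cursor) and combine the
# results into the condensed fingerprint described in the reply.

CHECKS = {
    "alive":       "SELECT 1",
    "connections": "SELECT count(*) FROM pg_stat_activity",
    "waiting":     "SELECT count(*) FROM pg_stat_activity WHERE waiting",
    "tables":      "SELECT count(*) FROM pg_tables",
    "db_size":     "SELECT pg_database_size(current_database())",
    "version":     "SELECT version()",
}

def format_status(db_size_bytes, n_tables, n_connections):
    """Render the condensed status line, e.g. [100.5GB:140T:7C]."""
    gb = db_size_bytes / (1024.0 ** 3)
    return "[%.1fGB:%dT:%dC]" % (gb, n_tables, n_connections)
```

The status line packs size, table count, and connection count into a few characters, which is the point: a human can scan one line per server across a fleet of five.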
{
"msg_contents": "On 9/4/12 12:12 AM, charles_xie wrote:\n> Hi all,\n> i have 5 servers that have been installing postgresql .In order to\n> know the postgresql working status and monitor them ,moreover i don't want\n> to use the monitor tools .I want to use the SQL commands to monitoring\n> postgresql system . please suggest any SQL COMMANDS to work successfully.if\n> you have some good suggestion ,you can email to me\n> ([email protected]) or sky :xqwbx163\n\nActually, the Nagios extension for PostgreSQL, check_postgres.pl, has a\nreally good, very complete set of queries in its code. You could mine\nthem from there.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Thu, 06 Sep 2012 12:44:55 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HELP!!!-----Need to Sql commands to monitoring Postgresql"
},
{
    "msg_contents": "Also probably some good info to be mined out of postbix.\nhttp://www.zabbix.com/wiki/howto/monitor/db/postbix/monitor_postgres_with_zabbix\n\nOn Thu, Sep 6, 2012 at 12:44 PM, Josh Berkus <[email protected]> wrote:\n\n> On 9/4/12 12:12 AM, charles_xie wrote:\n> > Hi all,\n> >            i have 5 servers that have been installing postgresql .In order\n> to\n> > know the postgresql working status and monitor them ,moreover i don't\n> want\n> > to use the monitor tools .I want to use the SQL commands to monitoring\n> > postgresql system . please suggest any SQL COMMANDS to work\n> successfully.if\n> > you have some good suggestion ,you can email to me\n> > ([email protected]) or sky :xqwbx163\n>\n> Actually, the Nagios extension for PostgreSQL, check_postgres.pl, has a\n> really good, very complete set of queries in its code.  You could mine\n> them from there.\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 6 Sep 2012 12:50:18 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HELP!!!-----Need to Sql commands to monitoring Postgresql"
},
{
"msg_contents": "Hi,\n Thanks for your advice.i know the basic monitoring skill,because the\npostgresql database is used for the factory production , so I hope they can\nrun normal and exert more perfect performance. so i need to be considered\nfrom the point of view ,eg : threading ,locks and so on.\n\n\nDaniel Farina-4 wrote\n> \n> On Tue, Sep 4, 2012 at 12:12 AM, charles_xie <xqwyy163@> wrote:\n>> Hi all,\n>> i have 5 servers that have been installing postgresql .In order\n>> to\n>> know the postgresql working status and monitor them ,moreover i don't\n>> want\n>> to use the monitor tools .I want to use the SQL commands to monitoring\n>> postgresql system . please suggest any SQL COMMANDS to work\n>> successfully.if\n>> you have some good suggestion ,you can email to me\n>> (charles.xie@) or sky :xqwbx163\n> \n> Hello,\n> \n> You might want to try pgsql-general or the wiki. The right stuff also\n> depends on what you are monitoring for.\n> \n> Basic uptime and information: \"SELECT 1\" (\"can I log in?\"), but also\n> counting the number of connections (select count(*) from\n> pg_stat_activity), the number of contending connections (select\n> count(*) from pg_stat_activity where waiting = 't'), the number of\n> tables (select count(*) from pg_tables), database size (select\n> pg_database_size(<dbnamehere>)), and database version (select\n> version()) we find useful. 
It's so useful we put it into a very\n> condensed and cryptic status line (which can optionally have more\n> information in more exceptional conditions) like:\n> \n> [100.5GB:140T:7C], (v9.0.6, --other statuses if they occur--)\n> \n> The space of queries used for tuning and capacity are much larger, but\n> I find these basic chunks of information a useful fingerprint of most\n> databases and activity levels in a relatively small amount of space.\n> \n> -- \n> fdr\n> \n> \n> -- \n> Sent via pgsql-performance mailing list (pgsql-performance@)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/HELP-Need-to-Sql-commands-to-monitoring-Postgresql-tp5722548p5723150.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Thu, 6 Sep 2012 18:50:11 -0700 (PDT)",
"msg_from": "charles_xie <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HELP!!!-----Need to Sql commands to monitoring Postgresql"
},
{
"msg_contents": "On Thu, Sep 6, 2012 at 6:50 PM, charles_xie <[email protected]> wrote:\n> Hi,\n> Thanks for your advice.i know the basic monitoring skill,because the\n> postgresql database is used for the factory production , so I hope they can\n> run normal and exert more perfect performance. so i need to be considered\n> from the point of view ,eg : threading ,locks and so on.\n\nI think the key structures you are looking for, then, are queries on\npg_stat_activity, pg_locks, the pg_statio table, and also \"bloat\" of\ntables and indexes (the wiki has several slightly different relatively\nlarge queries that help track bloat).\n\nAs others have mentioned, there are existing tools with an impressive\nnumber of detailed queries, but knowing about these can help you\ninformally categorize what you are looking at. check_postgres.pl is\nespecially useful to copy queries from, if not using it in a Nagios\ninstallation entirely.\n\n-- \nfdr\n\n",
"msg_date": "Thu, 6 Sep 2012 22:45:23 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HELP!!!-----Need to Sql commands to monitoring Postgresql"
}
] |
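Daniel's last reply points at pg_stat_activity, pg_locks, and the bloat queries on the wiki. As a concrete illustration of the pg_locks territory, here is one shape a "who blocks whom" check can take: a simplified self-join in the style of the pre-9.6 wiki queries, not a query taken from check_postgres.pl; `BLOCKED_QUERIES_SQL` and `run_check` are illustrative names:

```python
# Sketch: find ungranted locks and pair them with a granted lock on the
# same object held by a different backend.  run_check() executes the SQL
# through any DB-API cursor the caller supplies.

BLOCKED_QUERIES_SQL = """
SELECT blocked.pid  AS blocked_pid,
       blocking.pid AS blocking_pid
FROM pg_locks blocked
JOIN pg_locks blocking
  ON blocking.locktype = blocked.locktype
 AND blocking.relation IS NOT DISTINCT FROM blocked.relation
 AND blocking.pid <> blocked.pid
WHERE NOT blocked.granted
  AND blocking.granted
"""

def run_check(cursor, sql=BLOCKED_QUERIES_SQL):
    cursor.execute(sql)
    return cursor.fetchall()
```

Joining the resulting pids back to pg_stat_activity then shows the actual queries involved, which is the "threading, locks and so on" view charles_xie asked for.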
[
{
    "msg_contents": "Using explain analyze I saw that many of my queries run really fast, less\nthan 1 milliseconds, for example the analyze output of a simple query over a\ntable with 5millions of records return \"Total runtime: 0.078 ms\"\n\n \n\nBut the real time is a lot more, about 15 ms, in fact the pgadmin show this\nvalue.\n\n \n\nSo, where goes the others 14.2 ms?\n\n \n\nNetwork transfer (TCP)?\n\n \n\nOr analyze Total runtime don't represent the query runtime?\n\n \n\nThanks!\n",
"msg_date": "Wed, 5 Sep 2012 17:48:16 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query performance, where goes time?"
},
{
"msg_contents": "On 09/06/2012 07:48 AM, Anibal David Acosta wrote:\n> Using explain analyze I saw that many of my queries run really fast,\n> less than 1 milliseconds, for example the analyze output of a simple\n> query over a table with 5millions of records return \"Total runtime:\n> 0.078 ms\"\n>\n> But the real time is a lot more, about 15 ms, in fact the pgadmin show\n> this value.\n>\n> So, where goes the others 14.2 ms?\n\nClient-side latency, time spent transmitting query results, and network \nlatency.\n\nYou'll see much less difference in queries that take more meaningful \namounts of time. This query is so fast that timing accuracy will be an \nissue on some systems, and so will scheduler jitter etc.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Thu, 06 Sep 2012 11:26:00 +1000",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance, where goes time?"
}
] |
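Craig's point can be stated as simple arithmetic: EXPLAIN ANALYZE's "Total runtime" covers only server-side execution, so everything else the client observes is parse/plan overhead, protocol round trips, result transfer, and client-side work. A trivial sketch of that decomposition (the function name is my own):

```python
# Wall-clock time seen by the client minus the executor time reported by
# EXPLAIN ANALYZE gives the overhead spent outside the executor: network
# latency, result transmission, client processing, scheduler jitter.

def unaccounted_ms(wall_clock_ms, explain_total_runtime_ms):
    """Time spent outside the server-side executor, in milliseconds."""
    return wall_clock_ms - explain_total_runtime_ms

# With the numbers from the post: 15 ms observed in pgAdmin vs 0.078 ms
# reported by EXPLAIN ANALYZE leaves roughly 14.9 ms of such overhead.
```

For queries this fast, that overhead dominates completely; for queries taking meaningful amounts of time it becomes negligible, which is why the gap only shows up on sub-millisecond plans.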
[
{
"msg_contents": "Hello,\n\nI'm working with an application that connects to a remote server database using \"libpq\" library over internet, but making a simple query is really slow even though I've done PostgreSQL Tunning and table being indexed, so I want to know:\n\n-Why is postgresql or libpq that slow when working over internet?\n-What else should I do to solve this issue in addition of postgresql tunning?\n-Why if I connect to the remote server desktop (using RDP or any Remote Desktop Application) and run the application using the same internet connection, it runs really fast when making requests to postgresql; but if I run the application locally by connecting to the remote postgresql server through \"libpq\", it's really slow?.\n\nThanks in advance,\n\nAriel Rodriguez\n\n\n",
"msg_date": "Thu, 6 Sep 2012 13:04:31 -0700 (PDT)",
"msg_from": "Aryan Ariel Rodriguez Chalas <[email protected]>",
"msg_from_op": true,
"msg_subject": "libpq or postgresql performance"
},
{
    "msg_contents": "Aryan Ariel Rodriguez Chalas <[email protected]> wrote:\n\n> Hello,\n> \n> I'm working with an application that connects to a remote server database using \"libpq\" library over internet, but making a simple query is really slow even though I've done PostgreSQL Tunning and table being indexed, so I want to know:\n\ndefine slow.\n\n> \n> -Why is postgresql or libpq that slow when working over internet?\n> -What else should I do to solve this issue in addition of postgresql tunning?\n> -Why if I connect to the remote server desktop (using RDP or any Remote Desktop Application) and run the application using the same internet connection, it runs really fast when making requests to postgresql; but if I run the application locally by connecting to the remote postgresql server through \"libpq\", it's really slow?.\n\nMaybe DNS-Resolving - Problems...\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n\n",
"msg_date": "Fri, 7 Sep 2012 07:35:34 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq or postgresql performance"
},
{
"msg_contents": "Aryan Ariel Rodriguez Chalas wrote:\r\n> I'm working with an application that connects to a remote server database using \"libpq\" library over\r\n> internet, but making a simple query is really slow even though I've done PostgreSQL Tunning and table\r\n> being indexed, so I want to know:\r\n> \r\n> -Why is postgresql or libpq that slow when working over internet?\r\n> -What else should I do to solve this issue in addition of postgresql tunning?\r\n> -Why if I connect to the remote server desktop (using RDP or any Remote Desktop Application) and run\r\n> the application using the same internet connection, it runs really fast when making requests to\r\n> postgresql; but if I run the application locally by connecting to the remote postgresql server through\r\n> \"libpq\", it's really slow?.\r\n\r\nThere are a million possible reasons; it would be a good\r\nidea to trace at different levels to see where the time\r\nis lost.\r\n\r\nOne thing that comes to mind and that is often the cause of\r\nwhat you observe would be that there is a lot of traffic\r\nbetween the database and the application, but little traffic\r\nbetween the application and the user.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 7 Sep 2012 08:45:38 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq or postgresql performance"
},
{
"msg_contents": "W dniu 2012-09-06 22:04, Aryan Ariel Rodriguez Chalas pisze:\n> -Why if I connect to the remote server desktop (using RDP or any Remote Desktop Application) and run the application using the same internet connection, it runs really fast when making requests to postgresql; but if I run the application locally by connecting to the remote postgresql server through \"libpq\", it's really slow?.\nIt might look like the client side fetches too much data or sends too many queries over the \nconnection to the database server and then further processes that data locally. Are you using some \nkind of ORM in your application?\n\nIf that is the case, you might need to refactor your application to do as much as possible \ncomputation at server side and deal only with computation results over the connection, not the raw data.\n\nTry to see in server log what SQL statements are executed while you are running your application. \nYou need to SET log_statement TO 'all' for that.\n\nWith psql, try to see how much data (how many rows) are returned from that query you call \"simple \nquery\". Even simple query may return a lot of rows. Server backend might execute it quickly, but \nreturning a huge result over the Internet might take a long time.\n\n",
"msg_date": "Fri, 07 Sep 2012 08:52:24 +0200",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq or postgresql performance"
},
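One way to check the hypothesis above (too many queries, or too many rows, crossing the wire) from the client side is to wrap the cursor and count statements and fetched rows per unit of work, then compare against the server log with `log_statement` set to `'all'`. A sketch; `CountingCursor` is an illustrative name, not part of libpq or Zeoslib:

```python
# Wrap any DB-API cursor and tally how chatty the application really is:
# one screen's worth of UI work should not issue hundreds of statements or
# pull thousands of rows it then filters locally.

class CountingCursor:
    def __init__(self, real_cursor):
        self._cur = real_cursor
        self.statements = 0   # statements sent to the server
        self.rows = 0         # rows pulled back over the connection

    def execute(self, sql, params=None):
        self.statements += 1
        return self._cur.execute(sql, params)

    def fetchall(self):
        result = self._cur.fetchall()
        self.rows += len(result)
        return result
```

High counts here, with a fast server log, point at an ORM fetching raw data for client-side processing, which is exactly the refactoring case described in the reply.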
{
"msg_contents": "\nW dniu 2012-09-07 17:43, Aryan Ariel Rodriguez Chalas pisze:\n> Thank you all for answering that quickly...\n>\n> A simple query could be \"SELECT LOCALTIMESTAMP(0)\" which could last even 5 seconds for returning data.\n>\n> The other queries that I would like to be faster, return less than 50 rows until now; and two of those queries (the ones the application uses the most), return only a record.\n>\n> I agree that a huge result over internet might take a long time, but until now my application is not returning that amount of rows...\n>\n> Just for the record, the server bandwidth is 2.0 mbps and the client is 1.89 mbps and other applications run really fast for my expectatives (such as opening a remote desktop and also my application through a remote desktop session runs very fast on this connection). But I'm still far of understanding why over a remote desktop session my application runs very fast on the same connection but when running it locally and connecting to the server, it's super slow. Could be because when running over a remote desktop session the application is connecting with \"libpq\" through \"Unix Domain Socket\", but when running locally; \"libpq\" works over \"TCP Sockets\"?.\nso change connection mode when running the application _at remote_ site _locally_ to the server to \ntcp too. And then try with connection to localhost, as well as to the public IP of that server, the \nsame which you connect to from your own desktop.\n> By the way I'm using Linux on both ends as Operating System, Lazarus 1.1 as IDE and Zeoslib for connecting to postgresql and I've noticed that when I compile the application for running on Windows as the client, it moves more or less acceptable; so it brings to me to another question: Why \"libpq\" is faster on Windows than Linux?.\ntry to benchmark your elementary queries with psql run from your site (remotely to the server) and \nfrom remote site (locally to the server). 
Use `psql -h hostaddr` at remote site to force tcp connection.\n\n>\n> Best regards...\n>\n> Ariel Rodriguez\n>\n> ----- Mensaje original -----\n> De: Ireneusz Pluta <[email protected]>\n> Para: Aryan Ariel Rodriguez Chalas <[email protected]>\n> CC: \"[email protected]\" <[email protected]>\n> Enviado: Viernes, 7 de septiembre, 2012 2:52 A.M.\n> Asunto: Re: [PERFORM] libpq or postgresql performance\n>\n> W dniu 2012-09-06 22:04, Aryan Ariel Rodriguez Chalas pisze:\n>> -Why if I connect to the remote server desktop (using RDP or any Remote Desktop Application) and run the application using the same internet connection, it runs really fast when making requests to postgresql; but if I run the application locally by connecting to the remote postgresql server through \"libpq\", it's really slow?.\n> It might look like the client side fetches too much data or sends too many queries over the connection to the database server and then further processes that data locally. Are you using some kind of ORM in your application?\n>\n> If that is the case, you might need to refactor your application to do as much as possible computation at server side and deal only with computation results over the connection, not the raw data.\n>\n> Try to see in server log what SQL statements are executed while you are running your application. You need to SET log_statement TO 'all' for that.\n>\n> With psql, try to see how much data (how many rows) are returned from that query you call \"simple query\". Even simple query may return a lot of rows. Server backend might execute it quickly, but returning a huge result over the Internet might take a long time.\n>\n>\n>\n>\n>\n>\n\n\n",
"msg_date": "Fri, 07 Sep 2012 19:24:53 +0200",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq or postgresql performance"
},
{
"msg_contents": "Hi Aryan\n\nwhen responding, please include the list as recipient too, so others have a chance to help you.\n\nW dniu 2012-09-08 08:00, Aryan Ariel Rodriguez Chalas pisze:\n> I didn't understand this part:\n>> so change connection mode when running the application _at remote_ site _locally_ to the server to\n> tcp too. And then try with connection to localhost, as well as to the public IP of that server, the\n> same which you connect to from your own desktop.\n>\n> Can you please be more specific, please?\n>\n> And about this part:\n>> try to benchmark your elementary queries with psql run from your site (remotely to the server) and\n> from remote site (locally to the server). Use `psql -h hostaddr` at remote site to force tcp connection.\n\nI generally wanted to say: make sure you use the same connection mode in both scenarios, use TCP in \nthe case you use Unix domain socket now.\n\nAlso, what could confuse you, except my bad English, there seem to be some inconsistency in naming \nconvention about what is called \"local\" and what is called \"remote\", particulary, when you also \nbring \"remote desktop\" into the mix.\n\nLet's make that clear, and focus on just the connection between database client and server:\n\nLocal connection is the one which runs quickly in your case - the one when client runs on the same \nmachine the server runs.\nRemote connection is when you connect from your site to the server over the internet - the one you \nhave problems with.\n\nI will use this convention from now on.\n\n> Well, I didn't mention it, but I've tested it before. For example I've done \"psql -h hostaddr -d dbname\" and it's\nis hostaddr here a raw IP or domain name? Is it just equivalent to \"servername.d2g.com\" which you \nused as an example to ssh connection?\n\n> really slow over internet, but if I do \"[email protected]\" and then \"su - postgres\" and then \"psql -d dbname\",\n\nuse -h in local connection too. 
This forces TCP connection instead of Unix sockect. Try to use both \n-h localhost (will resolve to 127.0.0.1) and -h servername.d2g.com.\n\n> it's really fast in the same connection for both test. I insist that this issue is not about network connection, or network latency, or any network problem, because one test is slow and the other one is fast when using the same connection.\n\nSome more ideas:\n\nFirst of all, as you seem to prefer suspecting your libpq and not believe in network problems: setup \nanother server at your site and connect to it with the same libpq.\n\nUsing two terminal sessions you may trace server logs in one session\n(ssh [email protected]; su - postgres; tail -f /path/to/pg_lof/current_postresql_logfile)\nand at the same time, in the other session, make remote connection with psql and run your queries. \nYou need to set log_statement to 'all' to see the queries in the log. This will not tell you exactly \nwhat is wrong, but you may observe latency.\n\nIn the similar way you may also track network packets:\ntcpdump port 5432\nRun it at the server as root (can you?) and try to catch the moments when packets are received and \nsent, delays between them and the moment you hit enter. With, say, -s 500 -X options you may also \nsee some packet contents (unless you use SSL) to have an idea. Make sure which interface you monitor \n- use -i option. Your finger and eye are not a perfect measurement tool, and tcp output is not \nperfectly immediate, but with the delays you say you have, and with some patience, you may come up \nwith some useful observations.\n\nSomeone here suggested DNS problems - try to see what happens when you use IP address of the server \ninstead of domain name (I am not sure what you use now). However, you seem to ssh to the server \nwithout problems, so that's not the case.\n\nMaybe the 5432 port (or whatever port your server runs on) has some problems on the connection \nroute. 
Stop the server and use the nc (or netcat) utility on the two machines talking to each other \non this port.\n\nJust use the tools Linux gives you, understand what happens behind the scenes.\n\n\nHope this helps\nIrek.\n> Any other ideas?.\n>\n> Thanks in advance...\n>\n>\n> ----- Mensaje original -----\n> De: Ireneusz Pluta<[email protected]>\n> Para: Aryan Ariel Rodriguez Chalas<[email protected]>;\"[email protected]\" <[email protected]>\n> CC:\n> Enviado: Viernes, 7 de septiembre, 2012 1:24 P.M.\n> Asunto: Re: [PERFORM] libpq or postgresql performance\n>\n>\n> W dniu 2012-09-07 17:43, Aryan Ariel Rodriguez Chalas pisze:\n>> Thank you all for answering that quickly...\n>>\n>> A simple query could be \"SELECT LOCALTIMESTAMP(0)\" which could last even 5 seconds for returning data.\n>>\n>> The other queries that I would like to be faster, return less than 50 rows until now; and two of those queries (the ones the application uses the most), return only a record.\n>>\n>> I agree that a huge result over internet might take a long time, but until now my application is not returning that amount of rows...\n>>\n>> Just for the record, the server bandwidth is 2.0 mbps and the client is 1.89 mbps and other applications run really fast for my expectatives (such as opening a remote desktop and also my application through a remote desktop session runs very fast on this connection). But I'm still far of understanding why over a remote desktop session my application runs very fast on the same connection but when running it locally and connecting to the server, it's super slow. 
Could be because when running over a remote desktop session the application is connecting with \"libpq\" through \"Unix Domain Socket\", but when running locally; \"libpq\" works over \"TCP Sockets\"?.\n> so change connection mode when running the application _at remote_ site _locally_ to the server to\n> tcp too. And then try with connection to localhost, as well as to the public IP of that server, the\n> same which you connect to from your own desktop.\n>> By the way I'm using Linux on both ends as Operating System, Lazarus 1.1 as IDE and Zeoslib for connecting to postgresql and I've noticed that when I compile the application for running on Windows as the client, it moves more or less acceptable; so it brings to me to another question: Why \"libpq\" is faster on Windows than Linux?.\n> try to benchmark your elementary queries with psql run from your site (remotely to the server) and\n> from remote site (locally to the server). 
Use `psql -h hostaddr` at remote site to force tcp connection.\n>\n>> Best regards...\n>>\n>> Ariel Rodriguez\n>>\n>> ----- Mensaje original -----\n>> De: Ireneusz Pluta<[email protected]>\n>> Para: Aryan Ariel Rodriguez Chalas<[email protected]>\n>> CC:\"[email protected]\" <[email protected]>\n>> Enviado: Viernes, 7 de septiembre, 2012 2:52 A.M.\n>> Asunto: Re: [PERFORM] libpq or postgresql performance\n>>\n>> W dniu 2012-09-06 22:04, Aryan Ariel Rodriguez Chalas pisze:\n>>> -Why if I connect to the remote server desktop (using RDP or any Remote Desktop Application) and run the application using the same internet connection, it runs really fast when making requests to postgresql; but if I run the application locally by connecting to the remote postgresql server through \"libpq\", it's really slow?.\n>> It might look like the client side fetches too much data or sends too many queries over the connection to the database server and then further processes that data locally. Are you using some kind of ORM in your application?\n>>\n>> If that is the case, you might need to refactor your application to do as much as possible computation at server side and deal only with computation results over the connection, not the raw data.\n>>\n>> Try to see in server log what SQL statements are executed while you are running your application. You need to SET log_statement TO 'all' for that.\n>>\n>> With psql, try to see how much data (how many rows) are returned from that query you call \"simple query\". Even simple query may return a lot of rows. Server backend might execute it quickly, but returning a huge result over the Internet might take a long time.\n>>\n>>\n>>\n>>\n>>\n>>\n>\n\n\n\n",
"msg_date": "Mon, 10 Sep 2012 11:01:28 +0200",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: libpq or postgresql performance"
}
] |
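Irek's tcpdump/nc suggestion boils down to measuring raw TCP round trips to the server port independently of PostgreSQL, to separate network-path delay from database delay. A minimal probe under that assumption (`tcp_connect_ms` is a hypothetical helper; point it at the server's host and port 5432):

```python
# Time a bare TCP connect to a host:port.  If this alone takes seconds,
# the problem is in the network path (routing, DNS, a lossy link), not in
# libpq or the PostgreSQL backend.

import socket
import time

def tcp_connect_ms(host, port, timeout=5.0):
    """Return the wall-clock cost of one TCP connect, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0
```

Comparing this number against the time `SELECT LOCALTIMESTAMP(0)` takes through psql over the same route tells you whether the five-second delays live below or above the socket layer.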
[
{
"msg_contents": "Hello Community,\n\nI intend to understand further on PostgreSQL Index behavior on a \"SELECT\"\nstatement.\n\nWe have a situation where-in Index on unique column is not being picked up\nas expected when used with-in the WHERE clause with other non-unique\ncolumns using AND operator.\n\nexplain SELECT tv.short_code, tv.chn as pkg_subscription_chn,\n tv.vert as pkg_vert, ubs.campaign_id as campaign,\n'none'::varchar as referer,\n CAST('CAMPAIGNWISE_SUBSCRIBER_BASE' AS VARCHAR) as vn,\ncount(tv.msisdn) as n_count, '0'::numeric AS tot_revenue\n FROM campaign_base ubs\n JOIN tab_current_day_v2 tv\n ON ubs.ubs_seq_id = tv.ubs_seq_id\n AND tv.dt = CAST('2012-09-08' AS DATE)\n GROUP BY tv.short_code, tv.vert, tv.chn, ubs.campaign_id, vn;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n HashAggregate (cost=77754.57..77754.58 rows=1 width=38)\n -> Nested Loop (cost=0.00..77754.56 rows=1 width=38)\n -> Seq Scan on tab_current_day_v2 tv (cost=0.00..77746.26 rows=1\nwidth=39)\n Filter: (dt = '2012-09-08'::date)\n -> Index Scan using cb_ubs_id_idx on campaign_base ubs\n (cost=0.00..8.28 rows=1 width=15)\n Index Cond: (ubs.ubs_seq_id = tv.ubs_seq_id)\n(6 rows)\n\nThe above plan shows \"seq scan\" on tab_current_day_v2 table, though there\nis an index on \"ubs_seq_id\" column which is an unique column.\n\nCan anyone please help us understand, why PostgreSQL optimizer is not\nprioritizing the unique column and hitting ubs_seq_id_idx Index here ?\n\nLater -\n\nWe have created composite Index on \"dt\" (one distinct value) and\n\"ubs_seq_id\" (no duplicate values) and the index has been picked up.\n\nBelow is the scenario where-in the same query's plan picking up the\ncomposite Index.\n\nprod-db=# create index concurrently tab_dt_ubs_seq_id_idx on\ntab_current_day_v2(dt,ubs_seq_id);\nCREATE INDEX\n\nprod-db=# explain SELECT tv.short_code, tv.chn as pkg_subscription_chn,\n tv.vert as pkg_vert, 
ubs.campaign_id as campaign,\n'none'::varchar as referer,\n CAST('CAMPAIGNWISE_SUBSCRIBER_BASE' AS VARCHAR) as vn,\ncount(tv.msisdn) as n_count, '0'::numeric AS tot_revenue\n FROM campaign_base ubs\n JOIN tab_current_day_v2 tv\n ON ubs.ubs_seq_id = tv.ubs_seq_id\n AND tv.dt = CAST('2012-09-08' AS DATE)\n GROUP BY tv.short_code, tv.vert, tv.chn, ubs.campaign_id, vn;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=16.88..16.89 rows=1 width=38)\n -> Nested Loop (cost=0.00..16.86 rows=1 width=38)\n -> Index Scan using tab_dt_ubs_seq_id_idx on tab_current_day_v2\ntv (cost=0.00..8.57 rows=1 width=39)\n Index Cond: (dt = '2012-09-08'::date)\n -> Index Scan using cb_ubs_id_idx on campaign_base ubs\n (cost=0.00..8.28 rows=1 width=15)\n Index Cond: (ubs.ubs_seq_id = tv.ubs_seq_id)\n(6 rows)\n\nI was expecting the above behavior without a composite Index. A column with\nmost unique values must be picked up when multiple columns are used in\nWHERE clause using AND operator. 
Any thoughts ?\n\nprod-db# \\d tab_current_day_v2\n\n Table \"public.tab_current_day_v2\"\n Column | Type | Modifiers\n--------------------------+--------------------------+-----------\n dt | date |\n chn | character varying(10) |\n vert | character varying(20) |\n isdn | character varying |\n bc | character varying(40) |\n status | text |\n is_rene | boolean |\n age_in_sys | integer |\n age_in_grace | integer |\n has_prof | boolean |\n short_code | character varying |\n sub_vert | character varying(30) |\n mode | character varying |\n ubs_seq_id | bigint |\n pkg_name | character varying(200) |\n pkg_id | integer |\n subs_charge | money |\n subs_time | timestamp with time zone |\n ulq_seq_id | bigint |\n valid_till_time | timestamp with time zone |\n valid_from_time | timestamp with time zone |\n latest_ube_seq_id | bigint |\n latest_pkg_id | integer |\n price | integer |\nIndexes:\n \"tab_dt_ubs_seq_id_idx\" btree (dt, ubs_seq_id)\n \"tab_isdn_idx\" btree (msisdn)\n \"tab_status_idx\" btree (status)\n \"ubs_seq_id_idx\" btree (ubs_seq_id)\n\nBelow is the table structure and the uniqueness of each of the columns.\n\nairtel_user_data_oltp=# select attname, n_distinct from pg_Stats where\ntablename='tab_current_day_v2';\n\n attname | n_distinct\n--------------------------+------------\n dt | 1\n chn | 7\n vert | 94\n isdn | -0.727331\n bc | 4\n status | 3\n is_rene | 2\n age_in_sys | 1018\n age_in_grac | 369\n has_prof | 2\n short_code | 23\n sub_vert | 5\n mode | 0\n ubs_seq_id | -1\n pkg_name | 461\n pkg_id | 461\n subs_charge | 7\n subs_time | -1\n ulq_seq_id | 122887\n valid_till_time | -0.966585\n valid_from_time | -0.962563\n latest_ube_seq_id | -1\n latest_pkg_id | 475\n price | 18\n\n(24 rows)\n\nThis is not an issue, but, would like to understand how PostgreSQL\noptimizer picks up Indexes in SELECT queries.\n\nIn an other scenario, we had used 4 columns in WHERE clause with AND\noperator with an Index on the column with most unique values -- The Index\nwas 
picked up.\n\nLooking forward for your help !\n\nRegards,\nVB\n\n-- \n \n\nDISCLAIMER:\n\nPlease note that this message and any attachments may contain confidential \nand proprietary material and information and are intended only for the use \nof the intended recipient(s). If you are not the intended recipient, you \nare hereby notified that any review, use, disclosure, dissemination, \ndistribution or copying of this message and any attachments is strictly \nprohibited. If you have received this email in error, please immediately \nnotify the sender and delete this e-mail , whether electronic or printed. \nPlease also note that any views, opinions, conclusions or commitments \nexpressed in this message are those of the individual sender and do not \nnecessarily reflect the views of *Ver sé Innovation Pvt Ltd*.",
"msg_date": "Mon, 10 Sep 2012 18:09:35 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": PostgreSQL Index behavior"
},
{
"msg_contents": "On Mon, Sep 10, 2012 at 5:39 AM, Venkat Balaji <[email protected]> wrote:\n> Hello Community,\n>\n> I intend to understand further on PostgreSQL Index behavior on a \"SELECT\"\n> statement.\n>\n> We have a situation where-in Index on unique column is not being picked up\n> as expected when used with-in the WHERE clause with other non-unique columns\n> using AND operator.\n>\n> explain SELECT tv.short_code, tv.chn as pkg_subscription_chn,\n> tv.vert as pkg_vert, ubs.campaign_id as campaign,\n> 'none'::varchar as referer,\n> CAST('CAMPAIGNWISE_SUBSCRIBER_BASE' AS VARCHAR) as vn,\n> count(tv.msisdn) as n_count, '0'::numeric AS tot_revenue\n> FROM campaign_base ubs\n> JOIN tab_current_day_v2 tv\n> ON ubs.ubs_seq_id = tv.ubs_seq_id\n> AND tv.dt = CAST('2012-09-08' AS DATE)\n> GROUP BY tv.short_code, tv.vert, tv.chn, ubs.campaign_id, vn;\n...\n>\n> The above plan shows \"seq scan\" on tab_current_day_v2 table, though there is\n> an index on \"ubs_seq_id\" column which is an unique column.\n>\n> Can anyone please help us understand, why PostgreSQL optimizer is not\n> prioritizing the unique column and hitting ubs_seq_id_idx Index here ?\n\nThe query where clause does not specify a constant value for\nubs_seq_id. So it is likely that the only way to use that index would\nbe to reverse the order of the nested loop and seq scan the other\ntable. Is there any reason to think that doing that would be faster?\n\n\n>\n> Later -\n>\n> We have created composite Index on \"dt\" (one distinct value) and\n> \"ubs_seq_id\" (no duplicate values) and the index has been picked up.\n\nPostgres seems to think that \"dt\" has no duplicate values, the\nopposite of having one distinct value.\nThat is based on the estimates given in the explain plan, that teh seq\nscan will return only one row after the filter on Filter: \"(dt =\n'2012-09-08'::date)\". 
This does seem to conflict with what you\nreport from pg_stats, but I'm not familiar with that view, and you\nhaven't told us what version of pgsql you are using.\n\n\n\n> Below is the scenario where-in the same query's plan picking up the\n> composite Index.\n\nIt is only using the first column of that composite index. So if you\nbuilt a single column index just on dt, it would be picked up as well.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 10 Sep 2012 08:36:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PostgreSQL Index behavior"
},
{
"msg_contents": "Thank you Jeff !\n\nMy comments are inline.\n\n> explain SELECT tv.short_code, tv.chn as pkg_subscription_chn,\n> > tv.vert as pkg_vert, ubs.campaign_id as campaign,\n> > 'none'::varchar as referer,\n> > CAST('CAMPAIGNWISE_SUBSCRIBER_BASE' AS VARCHAR) as vn,\n> > count(tv.msisdn) as n_count, '0'::numeric AS tot_revenue\n> > FROM campaign_base ubs\n> > JOIN tab_current_day_v2 tv\n> > ON ubs.ubs_seq_id = tv.ubs_seq_id\n> > AND tv.dt = CAST('2012-09-08' AS DATE)\n> > GROUP BY tv.short_code, tv.vert, tv.chn, ubs.campaign_id,\n> vn;\n> ...\n> >\n> > The above plan shows \"seq scan\" on tab_current_day_v2 table, though\n> there is\n> > an index on \"ubs_seq_id\" column which is an unique column.\n> >\n> > Can anyone please help us understand, why PostgreSQL optimizer is not\n> > prioritizing the unique column and hitting ubs_seq_id_idx Index here ?\n>\n> The query where clause does not specify a constant value for\n> ubs_seq_id. So it is likely that the only way to use that index would\n> be to reverse the order of the nested loop and seq scan the other\n> table. Is there any reason to think that doing that would be faster?\n>\n\nI believe, I missed an important point here. Yes, since the constant value\nis not provided for ubs_seq_id, Index scan is not a prime preference. Makes\nsense. Further analysis is explained below.\n\n\n> > Later -\n> >\n> > We have created composite Index on \"dt\" (one distinct value) and\n> > \"ubs_seq_id\" (no duplicate values) and the index has been picked up.\n>\n> Postgres seems to think that \"dt\" has no duplicate values, the\n> opposite of having one distinct value.\n> That is based on the estimates given in the explain plan, that teh seq\n> scan will return only one row after the filter on Filter: \"(dt =\n> '2012-09-08'::date)\". 
This does seem to conflict with what you\n> report from pg_stats, but I'm not familiar with that view, and you\n> haven't told us what version of pgsql you are using.\n>\n\nWe are using PostgreSQL-9.0.1.\n\nYes, \"dt\" has one distinct value all the time is generated on daily basis.\n\"2012-09-08\" is an non-existent value, so, Postgres seems to think there\nare no duplicates.\n\nIf i pass on the value which is existing in the table \"2012-09-11\",\nPostgreSQL optimizer is picking up \"Seq Scan\" (what ever Index is existing).\n\nIn our scenario, we cannot expect an Index scan to happen, because I\nbelieve, following are the reasons -\n\nubs_seq_id column in campaign_base table has 1.2 m rows -- all distinct\nubs_seq_id column in tab_current_day_v2 table has 1.9 m rows -- all distinct\ndt has only 1 distinct value.\n\nAll being used with AND operator, extracted rows will be minimum 1.2 m. So,\nI believe, \"seq scan\" is the best choice PG is opting for.\n\nI got the point. Thanks !\n\nRegards,\nVenkat\n\n-- \n \n\nDISCLAIMER:\n\nPlease note that this message and any attachments may contain confidential \nand proprietary material and information and are intended only for the use \nof the intended recipient(s). If you are not the intended recipient, you \nare hereby notified that any review, use, disclosure, dissemination, \ndistribution or copying of this message and any attachments is strictly \nprohibited. If you have received this email in error, please immediately \nnotify the sender and delete this e-mail , whether electronic or printed. 
\nPlease also note that any views, opinions, conclusions or commitments \nexpressed in this message are those of the individual sender and do not \nnecessarily reflect the views of *Ver sé Innovation Pvt Ltd*.",
"msg_date": "Wed, 12 Sep 2012 12:27:55 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PostgreSQL Index behavior"
},
{
"msg_contents": "On Wed, Sep 12, 2012 at 12:57 AM, Venkat Balaji <[email protected]> wrote:\n\n> We are using PostgreSQL-9.0.1.\n\nYou are missing almost 2 years of updates, bug fixes, and security fixes.\n\n",
"msg_date": "Wed, 12 Sep 2012 08:12:59 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PostgreSQL Index behavior"
},
{
"msg_contents": "On Wed, Sep 12, 2012 at 7:42 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Sep 12, 2012 at 12:57 AM, Venkat Balaji <[email protected]>\n> wrote:\n>\n> > We are using PostgreSQL-9.0.1.\n>\n> You are missing almost 2 years of updates, bug fixes, and security fixes.\n>\n\nThank you Scott, We are planning to upgrade to the latest version (9.2) in\nthe near future.\n\nRegards,\nVB\n\n-- \n \n\nDISCLAIMER:\n\nPlease note that this message and any attachments may contain confidential \nand proprietary material and information and are intended only for the use \nof the intended recipient(s). If you are not the intended recipient, you \nare hereby notified that any review, use, disclosure, dissemination, \ndistribution or copying of this message and any attachments is strictly \nprohibited. If you have received this email in error, please immediately \nnotify the sender and delete this e-mail , whether electronic or printed. \nPlease also note that any views, opinions, conclusions or commitments \nexpressed in this message are those of the individual sender and do not \nnecessarily reflect the views of *Ver sé Innovation Pvt Ltd*.\n\n\nOn Wed, Sep 12, 2012 at 7:42 PM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Sep 12, 2012 at 12:57 AM, Venkat Balaji <[email protected]> wrote:\n\n> We are using PostgreSQL-9.0.1.\n\nYou are missing almost 2 years of updates, bug fixes, and security fixes.Thank you Scott, We are planning to upgrade to the latest version (9.2) in the near future.\nRegards,VB\n\n\nDISCLAIMER:\nPlease note that this message and\nany attachments may contain confidential and proprietary material and\ninformation and are intended only for the use of the intended recipient(s). If\nyou are not the intended recipient, you are hereby notified that any review,\nuse, disclosure, dissemination, distribution or copying of this message and any\nattachments is strictly prohibited. 
If you have received this email in error,\nplease immediately notify the sender and delete this e-mail , whether\nelectronic or printed. Please also note that any views, opinions, conclusions\nor commitments expressed in this message are those of the individual sender and\ndo not necessarily reflect the views of Ver sé Innovation Pvt Ltd.",
"msg_date": "Thu, 13 Sep 2012 14:52:39 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : PostgreSQL Index behavior"
},
{
"msg_contents": "Dne 13.09.2012 11:22, Venkat Balaji napsal:\n> On Wed, Sep 12, 2012 at 7:42 PM, Scott Marlowe\n> <[email protected] [2]> wrote:\n>\n>> On Wed, Sep 12, 2012 at 12:57 AM, Venkat Balaji\n>> <[email protected] [1]> wrote:\n>>\n>> > We are using PostgreSQL-9.0.1.\n>>\n>> You are missing almost 2 years of updates, bug fixes, and security\n>> fixes.\n>\n> Thank you Scott, We are planning to upgrade to the latest version\n> (9.2) in the near future.\n\nThat was not the point. The last minor update in this branch (9.0) is \n9.0.9. You're missing fixes and improvements that happened between 9.0.1 \nand 9.0.9, that's what Scott probably meant. And some of those fixes may \nbe quite important, so do the upgrade ASAP.\n\nThese minor updates are binary compatible, so all you need to do is \nshutting down the DB, updating the binaries (e.g. installing a new \npackage) and starting the database again. Upgrading to 9.2 means you'll \nhave to do a dump/restore and possibly more.\n\nTomas\n\n",
"msg_date": "Thu, 13 Sep 2012 11:39:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : PostgreSQL Index behavior"
}
] |
[
{
"msg_contents": "Hi All\n\nI´ve ft_simple_core_content_content_idx\n ON core_content\n USING gin\n (to_tsvector('simple'::regconfig, content) );\n\n \nIf I´m seaching for a word which is NOT in the column content the query plan and the execution time differs with the given limit.\nIf I choose 3927 or any higher number the query execution took only few milliseconds. \n \n core_content content where\nto_tsvector('simple', content.content) @@ tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar || ':*')=true\nLimit 3927\n\n\"Limit (cost=0.00..19302.23 rows=3926 width=621) (actual time=52147.149..52147.149 rows=0 loops=1)\"\n\" -> Seq Scan on core_content content (cost=0.00..98384.34 rows=20011 width=621) (actual time=52147.147..52147.147 rows=0 loops=1)\"\n\" Filter: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\n\"Total runtime: 52147.173 ms\"\n\nIs there any posibility to improve the performance even if the limit is only 10? Is it possible to determine that the query optimizer takes only the fast bitmap heap scan instead of the slow seq scan?\n\nRegards,\nBill Martin\n\n\n---\nE-Mail ist da wo du bist! Jetzt mit freenetMail ganz bequem auch unterwegs E-Mails verschicken.\nAm besten gleich informieren unter http://mail.freenet.de/mobile-email/index.html\nHi AllI´ve ft_simple_core_content_content_idx ON core_content USING gin (to_tsvector('simple'::regconfig, content) ); If I´m seaching for a word which is NOT in the column content the query plan and the execution time differs with the given limit.If I choose 3927 or any higher number the query execution took only few milliseconds. 
core_content content whereto_tsvector('simple', content.content) @@ tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar || ':*')=trueLimit 3927\"Limit (cost=0.00..19302.23 rows=3926 width=621) (actual time=52147.149..52147.149 rows=0 loops=1)\"\" -> Seq Scan on core_content content (cost=0.00..98384.34 rows=20011 width=621) (actual time=52147.147..52147.147 rows=0 loops=1)\"\" Filter: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\"Total runtime: 52147.173 ms\"Is there any posibility to improve the performance even if the limit is only 10? Is it possible to determine that the query optimizer takes only the fast bitmap heap scan instead of the slow seq scan?Regards,\nBill Martin---E-Mail ist da wo du bist! Jetzt mit freenetMail ganz bequem auch unterwegs E-Mails verschicken.Am besten gleich informieren unter http://mail.freenet.de/mobile-email/index.html",
"msg_date": "Mon, 10 Sep 2012 16:24:30 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Planner selects different execution plans depending on limit"
},
{
"msg_contents": "On 10/09/12 16:24, [email protected] wrote:\n>\n> Hi All\n>\n> I´ve ft_simple_core_content_content_idx\n> ON core_content\n> USING gin\n> (to_tsvector('simple'::regconfig, content) );\n>\n>\n> If I´m seaching for a word which is NOT in the column content the \n> query plan and the execution time differs with the given limit.\n> If I choose 3927 or any higher number the query execution took only \n> few milliseconds.\n>\n> core_content content where\n> to_tsvector('simple', content.content) @@ \n> tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar || ':*')=true\n> Limit 3927\n>\n> \"Limit (cost=0.00..19302.23 rows=3926 width=621) (actual \n> time=52147.149..52147.149 rows=0 loops=1)\"\n> \" -> Seq Scan on core_content content (cost=0.00..98384.34 \n> rows=20011 width=621) (actual time=52147.147..52147.147 rows=0 loops=1)\"\n> \" Filter: (to_tsvector('simple'::regconfig, content) @@ \n> '''asdasdadas'':*'::tsquery)\"\n> \"Total runtime: 52147.173 ms\"\n>\n> Is there any posibility to improve the performance even if the limit \n> is only 10? Is it possible to determine that the query optimizer takes \n> only the fast bitmap heap scan instead of the slow seq scan?\n>\n\nThe big hammer is: \"set enable_seqscan = off\", but if you tell which PG \nversion you're on there may be something to do. I suggest you'd start by \nbumping the statistics target for the column to 10000 and run analyze to \nsee what that changes.\n\n-- \nJesper\n\n\n\n\n\n\n On 10/09/12 16:24, [email protected] wrote:\n \n\n\n\nHi All\n\n I´ve ft_simple_core_content_content_idx\n ON core_content\n USING gin\n (to_tsvector('simple'::regconfig, content) );\n\n \n If I´m seaching for a word which is NOT in the column content\n the query plan and the execution time differs with the given\n limit.\n If I choose 3927 or any higher number the query execution took\n only few milliseconds. 
\n \n core_content content where\n to_tsvector('simple', content.content) @@\n tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar ||\n ':*')=true\n Limit 3927\n\n \"Limit (cost=0.00..19302.23 rows=3926 width=621) (actual\n time=52147.149..52147.149 rows=0 loops=1)\"\n \" -> Seq Scan on core_content content \n (cost=0.00..98384.34 rows=20011 width=621) (actual\n time=52147.147..52147.147 rows=0 loops=1)\"\n \" Filter: (to_tsvector('simple'::regconfig, content) @@\n '''asdasdadas'':*'::tsquery)\"\n \"Total runtime: 52147.173 ms\"\n\n Is there any posibility to improve the performance even if the\n limit is only 10? Is it possible to determine that the query\n optimizer takes only the fast bitmap heap scan instead of the\n slow seq scan?\n\n\n\n\n The big hammer is: \"set enable_seqscan = off\", but if you tell which\n PG version you're on there may be something to do. I suggest you'd\n start by bumping the statistics target for the column to 10000 and\n run analyze to see what that changes. \n\n -- \n Jesper",
"msg_date": "Mon, 10 Sep 2012 20:18:38 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects different execution plans depending\n on limit"
}
] |
[
{
"msg_contents": "Hi,\n\nI want to force deafults, and wonder about the performance.\nThe trigger i use (below) makes the query (also below) take 45% more time.\nThe result is the same now, but i do have a use for using the trigger (see\n\"background info\").\n\nIsn't there a more efficient way to force the defaults (in the database)\nwhen the application layer inserts a NULL?\n\nCheers,\n\nWBL\n\n\n<backgorund info>\n I'm building an application where users can upload data to a database,\nwith a website as a gui.\n\n In the legacy database that we're \"pimping\", some attributes are optional,\nbut there are also codes that explicitly mark the attribute as 'Unknown' in\nthe same field.\n That would mean that there are 2 things that mean the same thing: NULL and\n'U' for unknown.\n I don't want to bother our customer with the NULL issues in queries, so i\nwould like to make those fields NOT NULL.\n\n The users will use an Excel or CSV form to upload the data and they can\njust leave a blank for the optional fields if they like.\n We'll use php to insert the data in a table, from which we'll check if the\ninput satisfies our demands before inserting into the actual tables that\nmatter.\n\n When the users leave a blank, php is bound to insert a NULL (or even an\nempty string) into the upload table.\n I want to use a default, even if php explicitly inserts a NULL.\n</backgorund info>\n\n\n--the TRIGGER\ncreate or replace function force_defaults () returns trigger as $$\nbegin\n new.val:=coalesce(new.val, 'U');\nreturn new;\nend;\n$$ language plpgsql;\n\n--the QUERIES (on my laptop, no postgres config, pg 9.1):\ncreate table accounts (like pgbench_accounts including all);\n--(1)\nalter table accounts add column val text default 'U';\ninsert into accounts(aid, bid, abalance, filler) select * from\npgbench_accounts;\n INSERT 0 50000000\n Time: 538760.542 ms\n\n--(2)\nalter table accounts alter column val set default null;\ncreate trigger bla before insert or update on 
accounts for each row ...etc\nvacuum accounts;\ninsert into accounts(aid, bid, abalance, filler) select * from\npgbench_accounts;\n INSERT 0 50000000\n Time: 780421.041 ms\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Mon, 10 Sep 2012 16:40:28 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "force defaults"
}
] |
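The trigger in the thread above exists because a column DEFAULT fires only when the column is omitted from the INSERT, while the poster's php layer inserts an explicit NULL. A minimal sketch of that distinction, with plain Python functions standing in for the two SQL mechanisms (the `MISSING` sentinel and function names are illustrative, not from the thread):

```python
# Column DEFAULT vs. BEFORE trigger, modelled as per-row transforms.
# MISSING marks "column omitted from the INSERT"; None models SQL NULL.
MISSING = object()

def with_column_default(val=MISSING):
    """DEFAULT 'U' applies only when the column is left out entirely."""
    return 'U' if val is MISSING else val

def with_before_trigger(val=MISSING):
    """The thread's trigger body, new.val := coalesce(new.val, 'U'):
    it also rewrites an explicitly inserted NULL."""
    v = with_column_default(val)
    return 'U' if v is None else v

print(with_column_default(None))   # None -- an explicit NULL defeats the DEFAULT
print(with_before_trigger(None))   # U    -- the trigger forces the default anyway
```

The ~45% slowdown the poster measured (538760 ms vs. 780421 ms for 50 million rows) is the price of invoking that per-row trigger function on every insert.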
[
{
"msg_contents": "On 10/09/12 16:24, [email protected]<mailto:[email protected]> wrote:\r\n\r\nHi All\r\n\r\nI´ve ft_simple_core_content_content_idx\r\n ON core_content\r\n USING gin\r\n (to_tsvector('simple'::regconfig, content) );\r\n\r\n\r\nIf I´m seaching for a word which is NOT in the column content the query plan and the execution time differs with the given limit.\r\nIf I choose 3927 or any higher number the query execution took only few milliseconds.\r\n\r\ncore_content content where\r\nto_tsvector('simple', content.content) @@ tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar || ':*')=true\r\nLimit 3927\r\n\r\n\"Limit (cost=0.00..19302.23 rows=3926 width=621) (actual time=52147.149..52147.149 rows=0 loops=1)\"\r\n\" -> Seq Scan on core_content content (cost=0.00..98384.34 rows=20011 width=621) (actual time=52147.147..52147.147 rows=0 loops=1)\"\r\n\" Filter: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\r\n\"Total runtime: 52147.173 ms\"\r\n\r\nIs there any posibility to improve the performance even if the limit is only 10? Is it possible to determine that the query optimizer takes only the fast bitmap heap scan instead of the slow seq scan?\r\n\r\nThe big hammer is: \"set enable_seqscan = off\", but if you tell which PG version you're on there may be something to do. I suggest you'd start by bumping the statistics target for the column to 10000 and run analyze to see what that changes.\r\n\r\n--\r\nJesper\r\n\r\nHi,\r\nmy email client delete a lot of the content of the original thread message. 
Here is the full content:\r\n\r\nHi All\r\n\r\nI´ve created following table which contains one million records.\r\n\r\nCREATE TABLE core_content\r\n(\r\n id bigint NOT NULL,\r\n content text NOT NULL,\r\n short_content text,\r\n CONSTRAINT core_content_pkey PRIMARY KEY (id )\r\n)\r\n\r\nCREATE INDEX ft_simple_core_content_content_idx\r\n ON core_content\r\n USING gin\r\n (to_tsvector('simple'::regconfig, content) );\r\n\r\n\r\nIf I´m seaching for a word which is not in the column content the query plan and the execution time differs with the given limit.\r\nIf I choose 3927 or any higher number the query execution took only few milliseconds.\r\n\r\nselect * from core_content content where\r\nto_tsvector('simple', content.content) @@ tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar || ':*')=true\r\nLimit 3927\r\n\r\n\"Limit (cost=10091.09..19305.68 rows=3927 width=621) (actual time=0.255..0.255 rows=0 loops=1)\"\r\n\" -> Bitmap Heap Scan on core_content content (cost=10091.09..57046.32 rows=20011 width=621) (actual time=0.254..0.254 rows=0 loops=1)\"\r\n\" Recheck Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\r\n\" -> Bitmap Index Scan on ft_simple_core_content_content_idx (cost=0.00..10086.09 rows=20011 width=0) (actual time=0.251..0.251 rows=0 loops=1)\"\r\n\" Index Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\r\n\"Total runtime: 0.277 ms\"\r\n\r\nIf I choose 3926 or any lower number (e.g. 
10) the query execution took more than fifty seconds.\r\n\r\nselect * from core_content content where\r\nto_tsvector('simple', content.content) @@ tsquery(plainto_tsquery('simple', 'asdasdadas') :: varchar || ':*')=true\r\nLimit 3927\r\n\r\n\"Limit (cost=0.00..19302.23 rows=3926 width=621) (actual time=52147.149..52147.149 rows=0 loops=1)\"\r\n\" -> Seq Scan on core_content content (cost=0.00..98384.34 rows=20011 width=621) (actual time=52147.147..52147.147 rows=0 loops=1)\"\r\n\" Filter: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\r\n\"Total runtime: 52147.173 ms\"\r\n\r\nIs there any posibility to tune up the performance even if the limit is only 10? Is it possible to determine that the query optimizer takes\r\nonly the fast bitmap heap scan instead of the slow seq scan?\r\n\r\nI use PostgreSQL 9.1.5.; Intel i5-2400 @ 3.1 GHz, 16GB; Windows 7 64 Bit\r\n\r\nRegards,\r\nBill Martin",
"msg_date": "Tue, 11 Sep 2012 07:20:24 +0000",
"msg_from": "Bill Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects different execution plans depending on limit"
},
{
"msg_contents": "Bill Martin <[email protected]> writes:\n> I´ve created following table which contains one million records.\n> ...\n\n> \"Limit (cost=10091.09..19305.68 rows=3927 width=621) (actual time=0.255..0.255 rows=0 loops=1)\"\n> \" -> Bitmap Heap Scan on core_content content (cost=10091.09..57046.32 rows=20011 width=621) (actual time=0.254..0.254 rows=0 loops=1)\"\n> \" Recheck Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\n> \" -> Bitmap Index Scan on ft_simple_core_content_content_idx (cost=0.00..10086.09 rows=20011 width=0) (actual time=0.251..0.251 rows=0 loops=1)\"\n> \" Index Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\n> \"Total runtime: 0.277 ms\"\n\n> Is there any posibility to tune up the performance even if the limit is only 10?\n\nThe problem is the way-off rowcount estimate (20011 rows when it's\nreally none); with a smaller estimate there, the planner wouldn't decide\nto switch to a seqscan.\n\nDid you take the advice to increase the column's statistics target?\nBecause 20011 looks suspiciously close to the default estimate that\ntsquery_opr_selec will fall back on if it hasn't got enough stats\nto come up with a trustworthy estimate for a *-pattern query.\n\n(I think there are probably some bugs in tsquery_opr_selec's estimate\nfor this, as I just posted about on pgsql-hackers. But this number\nlooks like you're not even getting to the estimation code, for lack\nof enough statistics entries.)\n\nThe other thing that seems kind of weird here is that the cost estimate\nfor the bitmap index scan seems out of line even given the\n20000-entries-to-fetch estimate. I'd have expected a cost estimate of a\nfew hundred for that, not 10000. Perhaps this index is really bloated,\nand it's time to REINDEX it?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 11 Sep 2012 13:19:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects different execution plans depending on limit"
}
] |
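The sharp flip at LIMIT 3927 in the thread above falls straight out of the cost numbers printed in the two EXPLAIN outputs: for a LIMIT, the planner charges each plan its startup cost plus a share of its run cost proportional to the fraction of the estimated 20011 rows it expects to fetch, and picks the cheaper plan. A back-of-the-envelope sketch using those printed costs (the linear interpolation mirrors PostgreSQL's LIMIT costing, but this standalone calculation is an approximation, not planner code):

```python
# Reproducing the planner's choice from the costs in the thread's EXPLAIN
# output. rows=20011 is the (badly wrong) estimate that drives the problem.
def limit_cost(startup, total, est_rows, limit):
    """Cost charged to a plan under LIMIT: startup cost plus a share of the
    run cost proportional to the fraction of estimated rows fetched."""
    frac = min(limit / est_rows, 1.0)
    return startup + frac * (total - startup)

EST_ROWS = 20011

def seqscan(n):   # Seq Scan plan: cost=0.00..98384.34
    return limit_cost(0.00, 98384.34, EST_ROWS, n)

def bitmap(n):    # Bitmap Heap Scan plan: cost=10091.09..57046.32
    return limit_cost(10091.09, 57046.32, EST_ROWS, n)

# Smallest LIMIT at which the bitmap plan looks cheaper than the seqscan:
crossover = next(n for n in range(1, EST_ROWS) if bitmap(n) <= seqscan(n))
print(crossover)  # 3927 -- exactly the threshold Bill Martin observed
```

The model even reproduces the printed top-level costs: it gives about 19305.68 for the bitmap plan at LIMIT 3927 and about 19302.23 for the seqscan at LIMIT 3926, matching the two plans. Which is why Tom Lane's diagnosis below targets the rows=20011 estimate itself: with a realistic near-zero row estimate, the seqscan's proportional share would never look cheap.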
[
{
"msg_contents": "I have a table as follows:\n\\d entity\n Table \"public.entity\"\n Column | Type | Modifiers\n--------------+-----------------------------+--------------------\n crmid | integer | not null\n smcreatorid | integer | not null default 0\n smownerid | integer | not null default 0\n modifiedby | integer | not null default 0\n setype | character varying(30) | not null\n description | text |\n createdtime | timestamp without time zone | not null\n modifiedtime | timestamp without time zone | not null\n viewedtime | timestamp without time zone |\n status | character varying(50) |\n version | integer | not null default 0\n presence | integer | default 1\n deleted | integer | not null default 0\nIndexes:\n \"entity_pkey\" PRIMARY KEY, btree (crmid)\n \"entity_createdtime_idx\" btree (createdtime)\n \"entity_modifiedby_idx\" btree (modifiedby)\n \"entity_modifiedtime_idx\" btree (modifiedtime)\n \"entity_setype_idx\" btree (setype) WHERE deleted = 0\n \"entity_smcreatorid_idx\" btree (smcreatorid)\n \"entity_smownerid_idx\" btree (smownerid)\n \"ftx_en_entity_description\" gin (to_tsvector('vcrm_en'::regconfig,\nfor_fts(description)))\n \"entity_deleted_idx\" btree (deleted)\nReferenced by:\n TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\nREFERENCES entity(crmid) ON DELETE CASCADE\n TABLE \"servicecontracts\" CONSTRAINT \"fk_1_servicecontracts\" FOREIGN KEY\n(servicecontractsid) REFERENCES entity(crmid) ON DELETE CASCADE\n TABLE \"vantage_cc2entity\" CONSTRAINT \"fk_vantage_cc2entity_entity\"\nFOREIGN KEY (crm_id) REFERENCES entity(crmid) ON UPDATE CASCADE ON DELETE\nCASCADE\n TABLE \"vantage_emails_optout_history\" CONSTRAINT\n\"fk_vantage_emails_optout_history_crmid\" FOREIGN KEY (crmid) REFERENCES\nentity(crmid) ON DELETE CASCADE\n TABLE \"vantage_emails_optout_history\" CONSTRAINT\n\"fk_vantage_emails_optout_history_emailid\" FOREIGN KEY (emailid) REFERENCES\nentity(crmid) ON DELETE CASCADE\n\nI execued the query:\nALTER TABLE 
entity ADD COLUMN owner_type char(1) NOT NULL default 'U';\n\nThe db is stuck. The enity table has 2064740 records;\n\nWatching locks:\nselect\n pg_stat_activity.datname,pg_class.relname,pg_locks.mode, pg_locks.granted,\npg_stat_activity.usename,substr(pg_stat_activity.current_query,1,10),\npg_stat_activity.query_start,\nage(now(),pg_stat_activity.query_start) as \"age\", pg_stat_activity.procpid\nfrom pg_stat_activity,pg_locks left\nouter join pg_class on (pg_locks.relation = pg_class.oid)\nwhere pg_locks.pid=pg_stat_activity.procpid order by query_start;\n\n\n datname | relname | mode\n | granted | usename | substr | query_start |\n age | procpid\n-------------------+-------------------------------------+---------------------+---------+----------+------------+-------------------------------+-----------------+---------\n db_test | entity_modifiedtime_idx | AccessExclusiveLock | t\n| user | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 |\n 13574\n db_test | | ExclusiveLock | t\n | user | ALTER TABL | 2012-09-11 12:26:20.269965+06 |\n00:45:46.101971 | 13574\n db_test | entity_modifiedby_idx | AccessExclusiveLock | t\n| user | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 |\n 13574\n db_test | entity_createdtime_idx | AccessExclusiveLock | t\n| user | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 |\n 13574\n db_test | entity | ShareLock | t | user\n | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 | 13574\n db_test | entity | AccessExclusiveLock | t | user\n | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 | 13574\n db_test | | AccessExclusiveLock | t\n | user | ALTER TABL | 2012-09-11 12:26:20.269965+06 |\n00:45:46.101971 | 13574\n db_test | | AccessExclusiveLock | t\n | user | ALTER TABL | 2012-09-11 12:26:20.269965+06 |\n00:45:46.101971 | 13574\n db_test | | ExclusiveLock | t\n | user | ALTER TABL | 2012-09-11 12:26:20.269965+06 |\n00:45:46.101971 | 13574\n db_test | entity_pkey | 
AccessExclusiveLock | t | user\n | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 | 13574\n db_test | | ShareLock | t\n | user | ALTER TABL | 2012-09-11 12:26:20.269965+06 |\n00:45:46.101971 | 13574\n db_test | ftx_en_entity_description | AccessExclusiveLock | t | user\n | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 | 13574\n db_test | | AccessShareLock | t\n | user | ALTER TABL | 2012-09-11 12:26:20.269965+06 |\n00:45:46.101971 | 13574\n db_test | entity_smcreatorid_idx | AccessExclusiveLock | t\n| user | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 |\n 13574\n db_test | entity_smownerid_idx | AccessExclusiveLock | t\n| user | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 |\n 13574\n db_test | entity_setype_idx | AccessExclusiveLock | t\n| user | ALTER TABL | 2012-09-11 12:26:20.269965+06 | 00:45:46.101971 |\n 13574\n\nAny idea for the db stuck?",
"msg_date": "Tue, 11 Sep 2012 19:20:28 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "add column with default value is very slow"
},
{
"msg_contents": "AI Rumman wrote:\n> I execued the query:\n> ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL default 'U';\n> \n> The db is stuck. The enity table has 2064740 records;\n> \n> Watching locks:\n[all locks are granted]\n\n> Any idea for the db stuck?\n\nTo add the column, PostgreSQL has to modify all rows in the table.\n\nBut then 2064740 records is not very much, so it shouldn't take forever.\n\nDo you see processor or I/O activity?\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 11 Sep 2012 15:41:04 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add column with default value is very slow"
},
{
"msg_contents": "On Tue, Sep 11, 2012 at 07:20:28PM +0600, AI Rumman wrote:\n> I have a table as follows:\n> I execued the query:\n> ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL default 'U';\n> \n> The db is stuck. The enity table has 2064740 records;\n\nsuch alter table has to rewrite whole table. So it will take a while\n\n> Watching locks:\n\noutput of this was perfectly unreadable, because your email client\nwrapped lines at some random places.\n\nIn future - please put such dumps on some paste site, or just attach it\nto mail, and not copy/paste them to body of message.\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n\n",
"msg_date": "Tue, 11 Sep 2012 15:44:25 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add column with default value is very slow"
},
{
"msg_contents": "I added the excel file for locks data.\nI was surprised to see that while I was updating a single column value for\nall records in a tables, all indexes are locked by the server.\n\nOn Tue, Sep 11, 2012 at 7:44 PM, hubert depesz lubaczewski <\[email protected]> wrote:\n\n> On Tue, Sep 11, 2012 at 07:20:28PM +0600, AI Rumman wrote:\n> > I have a table as follows:\n> > I execued the query:\n> > ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL default 'U';\n> >\n> > The db is stuck. The enity table has 2064740 records;\n>\n> such alter table has to rewrite whole table. So it will take a while\n>\n> > Watching locks:\n>\n> output of this was perfectly unreadable, because your email client\n> wrapped lines at some random places.\n>\n> In future - please put such dumps on some paste site, or just attach it\n> to mail, and not copy/paste them to body of message.\n>\n> Best regards,\n>\n> depesz\n>\n> --\n> The best thing about modern society is how easy it is to avoid contact\n> with it.\n>\n> http://depesz.com/\n>",
"msg_date": "Tue, 11 Sep 2012 19:55:24 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add column with default value is very slow"
},
{
"msg_contents": "On Tue, Sep 11, 2012 at 07:55:24PM +0600, AI Rumman wrote:\n> I added the excel file for locks data.\n\nwell, it worked, but why didn't you just make it text file, in notepad or\nsomething like this?\n\n> I was surprised to see that while I was updating a single column value for\n> all records in a tables, all indexes are locked by the server.\n\nalter table is not locked (At least looking at the pg_locks data you\nshowed).\n\nthis means - it just takes long time.\n\nPlease do:\nselect pg_total_relation_size('entity');\nto see how much data it has to rewrite.\n\nfor future - just don't do alter table, with default, and not null.\ndoing it via add column; set default; batch-backfill data, set not null\nwill take longer but will be done with much shorter locks.\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n\n",
"msg_date": "Tue, 11 Sep 2012 15:59:40 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add column with default value is very slow"
},
{
"msg_contents": "Table size is 1186 MB.\nI split the command in three steps as you said, but the result same during\nthe update operation.\nOne more thing, I have just restored the db from dump and analyzed it and\nI am using Postgresql 9.1 with 3 GB Ram with dual core machine.\n\n\nOn Tue, Sep 11, 2012 at 7:59 PM, hubert depesz lubaczewski <\[email protected]> wrote:\n\n> On Tue, Sep 11, 2012 at 07:55:24PM +0600, AI Rumman wrote:\n> > I added the excel file for locks data.\n>\n> well, it worked, but why didn't you just make it text file, in notepad or\n> something like this?\n>\n> > I was surprised to see that while I was updating a single column value\n> for\n> > all records in a tables, all indexes are locked by the server.\n>\n> alter table is not locked (At least looking at the pg_locks data you\n> showed).\n>\n> this means - it just takes long time.\n>\n> Please do:\n> select pg_total_relation_size('entity');\n> to see how much data it has to rewrite.\n>\n> for future - just don't do alter table, with default, and not null.\n> doing it via add column; set default; batch-backfill data, set not null\n> will take longer but will be done with much shorter locks.\n>\n> Best regards,\n>\n> depesz\n>\n> --\n> The best thing about modern society is how easy it is to avoid contact\n> with it.\n>\n> http://depesz.com/\n>",
"msg_date": "Tue, 11 Sep 2012 20:04:06 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add column with default value is very slow"
},
{
"msg_contents": "On Tue, Sep 11, 2012 at 08:04:06PM +0600, AI Rumman wrote:\n> Table size is 1186 MB.\n\nif it takes long, it just means that your IO is slow.\n\n> I split the command in three steps as you said, but the result same during\n> the update operation.\n\nthree? I was showing four steps, and one of them is usually consisting\nhundreds, if not thousands, of queries.\n\n> One more thing, I have just restored the db from dump and analyzed it and\n> I am using Postgresql 9.1 with 3 GB Ram with dual core machine.\n\nso it looks like your IO channel is slow.\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n\n",
"msg_date": "Tue, 11 Sep 2012 16:05:54 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add column with default value is very slow"
},
{
"msg_contents": "\nOn 09/11/2012 09:55 AM, AI Rumman wrote:\n> I added the excel file for locks data.\n> I was surprised to see that while I was updating a single column value \n> for all records in a tables, all indexes are locked by the server.\n\n\nAny ALTER TABLE command locks the whole table in ACCESS EXCLUSIVE mode, \nindexes included. See the description of ACCESS EXCLUSIVE lock at \n<http://www.postgresql.org/docs/current/static/explicit-locking.html>\n\ncheers\n\nandrew\n\n\n\n",
"msg_date": "Tue, 11 Sep 2012 10:07:04 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add column with default value is very slow"
}
] |
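depesz's low-lock recipe in the thread above (add the column, set the default, backfill in batches, then set NOT NULL) can be sketched as a plan generator. This is an illustrative sketch, not code from the thread: the batch size is arbitrary, and it assumes `crmid` is roughly dense from 1 up to the row count the poster gave.

```python
# Sketch of depesz's low-lock recipe. On the poster's 9.1 server (and anything
# before PostgreSQL 11), the one-shot
#   ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL DEFAULT 'U';
# rewrites all ~2M rows under AccessExclusiveLock. Splitting it keeps each
# lock short.
def backfill_plan(table, column, default, max_id, batch_size=100_000):
    steps = [
        f"ALTER TABLE {table} ADD COLUMN {column} char(1);",                    # instant
        f"ALTER TABLE {table} ALTER COLUMN {column} SET DEFAULT '{default}';",  # instant
    ]
    for lo in range(1, max_id + 1, batch_size):   # batched backfill, short locks
        hi = min(lo + batch_size - 1, max_id)
        steps.append(
            f"UPDATE {table} SET {column} = '{default}' "
            f"WHERE crmid BETWEEN {lo} AND {hi} AND {column} IS NULL;"
        )
    steps.append(f"ALTER TABLE {table} ALTER COLUMN {column} SET NOT NULL;")    # one scan
    return steps

plan = backfill_plan("entity", "owner_type", "U", max_id=2_064_740)
print(len(plan))  # 24: two setup statements, 21 batches, one final ALTER
```

Committing between batches (and pausing briefly) is what turns one long table-wide lock into many short row-range ones; the total work is the same or more, but concurrent readers and writers are never blocked for long.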
[
{
"msg_contents": "Regarding the wiki page on reporting slow queries:\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\nWe currently recommend EXPLAIN ANALYZE over just EXPLAIN. Should we\nrecommend EXPLAIN (ANALYZE, BUFFERS) instead? I know I very often\nwish I could see that data. I don't think turning buffer accounting\non adds much cost over a mere ANALYZE.\n\n\nAlso, an additional thing that would be nice for people to report is\nwhether long running queries are CPU bound or IO bound. Should we add\nthat recommendation with links to how to do that in a couple OS, say,\nLinux and Windows. If so, does anyone know of good links that explain\nit for those OS?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 12 Sep 2012 09:00:50 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Guide to Posting Slow Query Questions"
},
{
"msg_contents": "On Wed, Sep 12, 2012 at 7:00 PM, Jeff Janes <[email protected]> wrote:\n> Regarding the wiki page on reporting slow queries:\n> We currently recommend EXPLAIN ANALYZE over just EXPLAIN. Should we\n> recommend EXPLAIN (ANALYZE, BUFFERS) instead? I know I very often\n> wish I could see that data. I don't think turning buffer accounting\n> on adds much cost over a mere ANALYZE.\n\nGiven the amount of version 8 installs out there the recommendation\nshould be qualified with version >9.0. Otherwise a strong +1\n\n> Also, an additional thing that would be nice for people to report is\n> whether long running queries are CPU bound or IO bound. Should we add\n> that recommendation with links to how to do that in a couple OS, say,\n> Linux and Windows. If so, does anyone know of good links that explain\n> it for those OS?\n\nI don't have any links for OS level monitoring, but with version 9.2\ntrack_io_timing would do the job.\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n",
"msg_date": "Thu, 13 Sep 2012 09:40:11 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to Posting Slow Query Questions"
},
{
"msg_contents": "On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma <[email protected]> wrote:\n> On Wed, Sep 12, 2012 at 7:00 PM, Jeff Janes <[email protected]> wrote:\n>> Regarding the wiki page on reporting slow queries:\n>> We currently recommend EXPLAIN ANALYZE over just EXPLAIN. Should we\n>> recommend EXPLAIN (ANALYZE, BUFFERS) instead? I know I very often\n>> wish I could see that data. I don't think turning buffer accounting\n>> on adds much cost over a mere ANALYZE.\n>\n> Given the amount of version 8 installs out there the recommendation\n> should be qualified with version >9.0. Otherwise a strong +1\n\nEdit made.\n\n>\n>> Also, an additional thing that would be nice for people to report is\n>> whether long running queries are CPU bound or IO bound. Should we add\n>> that recommendation with links to how to do that in a couple OS, say,\n>> Linux and Windows. If so, does anyone know of good links that explain\n>> it for those OS?\n>\n> I don't have any links for OS level monitoring, but with version 9.2\n> track_io_timing would do the job.\n\nI don't know how to advice people on how to use this to obtain\ninformation on a specific query. Would someone else like to take a\nstab at explaining that?\n\nThanks,\n\nJeff\n\n",
"msg_date": "Wed, 26 Sep 2012 13:11:49 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guide to Posting Slow Query Questions"
},
{
"msg_contents": "On Wed, Sep 26, 2012 at 11:11 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma <[email protected]> wrote:\n>> I don't have any links for OS level monitoring, but with version 9.2\n>> track_io_timing would do the job.\n>\n> I don't know how to advice people on how to use this to obtain\n> information on a specific query. Would someone else like to take a\n> stab at explaining that?\n\nI added a line suggesting that 9.2 users turn it on via SET\ntrack_io_timing TO on;\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n",
"msg_date": "Sun, 7 Oct 2012 17:43:45 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guide to Posting Slow Query Questions"
},
{
"msg_contents": "On Sun, Oct 7, 2012 at 7:43 AM, Ants Aasma <[email protected]> wrote:\n> On Wed, Sep 26, 2012 at 11:11 PM, Jeff Janes <[email protected]> wrote:\n>> On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma <[email protected]> wrote:\n>>> I don't have any links for OS level monitoring, but with version 9.2\n>>> track_io_timing would do the job.\n>>\n>> I don't know how to advice people on how to use this to obtain\n>> information on a specific query. Would someone else like to take a\n>> stab at explaining that?\n>\n> I added a line suggesting that 9.2 users turn it on via SET\n> track_io_timing TO on;\n\nThat was easy. I thought there was more to it because I didn't get\nany IO timing output when I tried it. But that was just because there\nwas nothing to output, as all data was in shared_buffers by the time I\nturned the timing on.\n\nThanks,\n\nJeff\n\n",
"msg_date": "Tue, 16 Oct 2012 10:38:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guide to Posting Slow Query Questions"
}
] |
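The thread above boils down to two diagnostics when reporting a slow query — a minimal sketch in a psql session; the table name and WHERE clause below are placeholders, not taken from the thread:

```sql
-- Requires PostgreSQL 9.2+ for track_io_timing; BUFFERS alone works on 9.0+.
SET track_io_timing TO on;           -- adds per-node I/O wait times to the plan
EXPLAIN (ANALYZE, BUFFERS)           -- shows buffer hits/reads for each plan node
SELECT * FROM orders WHERE customer_id = 42;
```

As Jeff notes later in the thread, I/O timing lines only appear when blocks are actually read from outside shared_buffers — fully cached data produces no I/O output.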
[
{
"msg_contents": "Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:\n> Bill Martin <bill(dot)martin(at)communote(dot)com> writes:\n>> I´ve created following table which contains one million records.\n>> ...\n\n>> \"Limit (cost=10091.09..19305.68 rows=3927 width=621) (actual time=0.255..0.255 rows=0 loops=1)\"\n>> \" -> Bitmap Heap Scan on core_content content (cost=10091.09..57046.32 rows=20011 width=621) (actual time=0.254..0.254 rows=0 loops=1)\"\n>> \" Recheck Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\n>> \" -> Bitmap Index Scan on ft_simple_core_content_content_idx (cost=0.00..10086.09 rows=20011 width=0) (actual time=0.251..0.251 rows=0 loops=1)\"\n>> \" Index Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)\"\n>> \"Total runtime: 0.277 ms\"\n\n>> Is there any posibility to tune up the performance even if the limit is only 10?\n\n> The problem is the way-off rowcount estimate (20011 rows when it's\n> really none); with a smaller estimate there, the planner wouldn't decide\n> to switch to a seqscan.\n>\n> Did you take the advice to increase the column's statistics target?\n> Because 20011 looks suspiciously close to the default estimate that\n> tsquery_opr_selec will fall back on if it hasn't got enough stats\n> to come up with a trustworthy estimate for a *-pattern query.\n>\n> (I think there are probably some bugs in tsquery_opr_selec's estimate\n> for this, as I just posted about on pgsql-hackers. But this number\n> looks like you're not even getting to the estimation code, for lack\n> of enough statistics entries.)\n>\n> The other thing that seems kind of weird here is that the cost estimate\n> for the bitmap index scan seems out of line even given the\n> 20000-entries-to-fetch estimate. I'd have expected a cost estimate of a\n> few hundred for that, not 10000. Perhaps this index is really bloated,\n> and it's time to REINDEX it?\n>\n> regards, tom lane\n\nHi,\nthank you for helping me.\n\nI´ve tried different values for the statistics but it is all the same (the planner decide to switch to a seqscan if the limit is 10).\n\n\nALTER TABLE core_content ALTER column content SET STATISTICS 1000;\n\n\nI also tried to reindex the index but the planner decide to switch to a seqscan.\n\n\nREINDEX INDEX ft_simple_core_content_content_idx;\n\n\n\nDisable the seqscan helps me but is this a good decision for all use cases?\n\n\n\nSET enable_seqscan = off;\n\n\n\nAre there any other possibilities to solve my problem?\n\n\n\nBest regards,\n\nBill Martin",
"msg_date": "Thu, 13 Sep 2012 09:05:26 +0000",
"msg_from": "Bill Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects different execution plans depending\n on limit"
},
{
"msg_contents": "Bill Martin <[email protected]> writes:\n> I�ve tried different values for the statistics but it is all the same (the planner decide to switch to a seqscan if the limit is 10).\n\n> ALTER TABLE core_content ALTER column content SET STATISTICS 1000;\n\nUm, did you actually do an ANALYZE after changing that?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 13 Sep 2012 10:04:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects different execution plans depending on limit"
}
] |
[
{
"msg_contents": "> Tom Lane <[email protected]> writes:\r\n>> Bill Martin <[email protected]> writes:\r\n>> I've tried different values for the statistics but it is all the same (the planner decide to switch to a seqscan if the limit is 10).\r\n\r\n>> ALTER TABLE core_content ALTER column content SET STATISTICS 1000;\r\n\r\n> Um, did you actually do an ANALYZE after changing that?\r\n> \r\n> \t\t\tregards, tom lane\r\n\r\nYes, I've run the ANALYZE command.\r\n\r\nRegards,\r\nBill Martin\r\n",
"msg_date": "Thu, 13 Sep 2012 14:42:07 +0000",
"msg_from": "Bill Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects different execution plans depending\n on limit"
},
{
"msg_contents": "On 13/09/12 16:42, Bill Martin wrote:\n> Yes, I've run the ANALYZE command. Regards, Bill Martin \nThe main problem in your case is actually that you dont store the \ntsvector in the table.\n\nIf you store to_tsvector('simple',content.content) in a column in\nthe database and search against that instead\nthen you'll allow PG to garther statistics on the column and make the\nquery-planner act according to that.\n\nJesper\n\n",
"msg_date": "Thu, 13 Sep 2012 16:48:07 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects different execution plans depending\n on limit"
},
{
"msg_contents": "Jesper Krogh <[email protected]> writes:\n> On 13/09/12 16:42, Bill Martin wrote:\n>> Yes, I've run the ANALYZE command. Regards, Bill Martin \n\n> The main problem in your case is actually that you dont store the \n> tsvector in the table.\n\nOh, duh, obviously I lack caffeine this morning.\n\n> If you store to_tsvector('simple',content.content) in a column in\n> the database and search against that instead\n> then you'll allow PG to garther statistics on the column and make the\n> query-planner act according to that.\n\nHe can do it without having to change his schema --- but it's the index\ncolumn, not the underlying content column, that needs its statistics\ntarget adjusted.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 13 Sep 2012 10:54:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects different execution plans depending on limit"
},
{
"msg_contents": "> Tom Lane <[email protected]> writes:\n\n> He can do it without having to change his schema --- but it's the index\n> column, not the underlying content column, that needs its statistics\n> target adjusted.\n\n> regards, tom lane\n\nHow can I adjust the statistics target of the index?\n",
"msg_date": "Thu, 13 Sep 2012 17:19:10 +0000",
"msg_from": "Bill Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects different execution plans depending\n on limit"
},
{
"msg_contents": "Bill Martin <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> He can do it without having to change his schema --- but it's the index\n>> column, not the underlying content column, that needs its statistics\n>> target adjusted.\n\n> How can I adjust the statistics target of the index?\n\nJust pretend it's a table.\n\n\tALTER TABLE index_name ALTER COLUMN column_name SET STATISTICS ...\n\nYou'll need to look at the index (eg with \\d) to see what the name of\nthe desired column is, since index expressions have system-assigned\ncolumn names.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 13 Sep 2012 13:33:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner selects different execution plans depending on limit"
},
{
"msg_contents": "On Thu, Sep 13, 2012 at 10:33 AM, Tom Lane <[email protected]> wrote:\n> Bill Martin <[email protected]> writes:\n>\n>> How can I adjust the statistics target of the index?\n>\n> Just pretend it's a table.\n>\n> ALTER TABLE index_name ALTER COLUMN column_name SET STATISTICS ...\n>\n> You'll need to look at the index (eg with \\d) to see what the name of\n> the desired column is, since index expressions have system-assigned\n> column names.\n\nIs this documented anywhere? I couldn't find it. If not, which\nsection would be the best one to add it to?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sun, 16 Sep 2012 14:39:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Planner selects different execution plans depending on\n\tlimit"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Thu, Sep 13, 2012 at 10:33 AM, Tom Lane <[email protected]> wrote:\n>> Just pretend it's a table.\n>> ALTER TABLE index_name ALTER COLUMN column_name SET STATISTICS ...\n>> \n>> You'll need to look at the index (eg with \\d) to see what the name of\n>> the desired column is, since index expressions have system-assigned\n>> column names.\n\n> Is this documented anywhere? I couldn't find it. If not, which\n> section would be the best one to add it to?\n\nIt's not documented, mainly because it hasn't reached the level of being\na supported feature. I'd like to figure out how to get pg_dump to dump\nsuch settings before we call it supported. (The stumbling block is\nexactly that index column names aren't set in stone, so it's not clear\nthat the ALTER command would do the right thing on dump-and-reload.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 16 Sep 2012 18:16:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Planner selects different execution plans depending on\n\tlimit"
},
{
"msg_contents": "Tom Lane <mailto:[email protected]> writes:\r\n> Bill Martin <[email protected]> writes:\r\n>> Tom Lane <[email protected]> writes:\r\n>>> He can do it without having to change his schema --- but it's the \r\n>>> index column, not the underlying content column, that needs its \r\n>>> statistics target adjusted.\r\n\r\n>> How can I adjust the statistics target of the index?\r\n\r\n> Just pretend it's a table.\r\n\r\n>\tALTER TABLE index_name ALTER COLUMN column_name SET STATISTICS ...\r\n\r\n> You'll need to look at the index (eg with \\d) to see what the name of the desired column is, since index expressions have system-assigned\r\n> column names.\r\n\r\n>\t\tregards, tom lane\r\n\r\nI tried: \r\nALTER TABLE ft_simple_core_content_content_idx ALTER column to_tsvector SET STATISTICS 10000;\r\nANALYZE;\r\n\r\nand\r\nREINDEX INDEX ft_simple_core_content_content_idx;\r\n\r\nAll the trouble was for nothing.\r\n\r\nAre there any other possibilities to solve my problem?\r\n\r\nBest regards,\r\nBill Martin\r\n",
"msg_date": "Tue, 18 Sep 2012 07:28:25 +0000",
"msg_from": "Bill Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner selects different execution plans depending\n on limit"
},
{
"msg_contents": "On Sun, Sep 16, 2012 at 06:16:55PM -0400, Tom Lane wrote:\n> Jeff Janes <[email protected]> writes:\n> > On Thu, Sep 13, 2012 at 10:33 AM, Tom Lane <[email protected]> wrote:\n> >> Just pretend it's a table.\n> >> ALTER TABLE index_name ALTER COLUMN column_name SET STATISTICS ...\n> >> \n> >> You'll need to look at the index (eg with \\d) to see what the name of\n> >> the desired column is, since index expressions have system-assigned\n> >> column names.\n> \n> > Is this documented anywhere? I couldn't find it. If not, which\n> > section would be the best one to add it to?\n> \n> It's not documented, mainly because it hasn't reached the level of being\n> a supported feature. I'd like to figure out how to get pg_dump to dump\n> such settings before we call it supported. (The stumbling block is\n> exactly that index column names aren't set in stone, so it's not clear\n> that the ALTER command would do the right thing on dump-and-reload.)\n\nIs this TODO?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 26 Sep 2012 11:52:12 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Planner selects different execution plans\n\tdepending on limit"
}
] |
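Collecting Tom Lane's recipe from this thread in one place — a hedged sketch using the thread's own object names; expression-index column names are system-assigned, so verify yours with \d before running this:

```sql
-- Raise the statistics target on the *index* expression column (not the
-- underlying table column), then gather fresh statistics.
-- Check the column name with:  \d ft_simple_core_content_content_idx
ALTER TABLE ft_simple_core_content_content_idx
    ALTER COLUMN to_tsvector SET STATISTICS 10000;
ANALYZE core_content;
```

Note that this was an undocumented, not-yet-officially-supported feature at the time (it did not survive pg_dump/reload), and the original poster reports that it did not change the chosen plan in his case.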
[
{
"msg_contents": "Hey PostgreSQL speed demons -\nAt work, we're considering an AppScale deployment (that's the Google App Engine\nroll-your-own http://appscale.cs.ucsb.edu/). It supports multiple technologies\nto back the datastore part of the platform (HBase, Hypertable, MySQL Cluster,\nCassandra, Voldemort, MongoDB, MemcacheDB, Redis). Oddly enough, in performance\ntests, the MySQL Cluster seems like the general winner (their tests, we haven't\ndone any yet)\n\nSo, my immediate thought was \"How hard would it be to replace the MySQL\nCluster bit w/ PostgreSQL?\" I'm thinking hot standby/streaming rep. May or may\nnot need a pooling solution in there as well (I need to look at the AppScale\nabstraction code, it may already be doing the pooling/direction bit.)\n\nAny thoughts from those with more experience using/building PostgreSQL clusters?\nReplacing MySQL Cluster? Clearly they must be using a subset of functionality,\nsince they support so many different backend stores. I'll probably have to\nset up an instance of all this, run some example apps, and see what's actually\nstored to get a handle on it. The GAE api for the datastore is sort of a ORM,\nw/ yet another query language, that seems to map to SQL better than to NoSQL,\nin any case. There seems to be a fairly explicit exposure of a table==class\nsort of mapping.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n\n",
"msg_date": "Thu, 13 Sep 2012 13:11:13 -0500",
"msg_from": "Ross Reedstrom <[email protected]>",
"msg_from_op": true,
"msg_subject": "AppScale backend datastore (NoSQL again kind of)"
},
{
"msg_contents": "Regards, Ross.\nDimitri Fontaine gave a excellent talk in the last PgCon about the \nmigration of Fotolog from MySQL to\nPostgreSQL with amazing advices around this, so you can contact him for \nhis advice.\n\nOn 09/13/2012 02:11 PM, Ross Reedstrom wrote:\n> Hey PostgreSQL speed demons -\n> At work, we're considering an AppScale deployment (that's the Google App Engine\n> roll-your-own http://appscale.cs.ucsb.edu/). It supports multiple technologies\n> to back the datastore part of the platform (HBase, Hypertable, MySQL Cluster,\n> Cassandra, Voldemort, MongoDB, MemcacheDB, Redis). Oddly enough, in performance\n> tests, the MySQL Cluster seems like the general winner (their tests, we haven't\n> done any yet)\n>\n> So, my immediate thought was \"How hard would it be to replace the MySQL\n> Cluster bit w/ PostgreSQL?\" I'm thinking hot standby/streaming rep. May or may\n> not need a pooling solution in there as well (I need to look at the AppScale\n> abstraction code, it may already be doing the pooling/direction bit.)\nIt depends of many factors:\n- size of the MySQL cluster\n- size of the involving data, etc\n\nFor the pooling solution, I recommend you to see PgBouncer, it´s a great\nproject widely used for this topic.\n>\n> Any thoughts from those with more experience using/building PostgreSQL clusters?\n> Replacing MySQL Cluster? Clearly they must be using a subset of functionality,\n> since they support so many different backend stores. I'll probably have to\n> set up an instance of all this, run some example apps, and see what's actually\n> stored to get a handle on it. The GAE api for the datastore is sort of a ORM,\n> w/ yet another query language, that seems to map to SQL better than to NoSQL,\n> in any case. There seems to be a fairly explicit exposure of a table==class\n> sort of mapping.\n>\n> Ross\nBest wishes\n-- \n\nMarcos Luis Ortíz Valmaseda\n*Data Engineer && Sr. System Administrator at UCI*\nabout.me/marcosortiz <http://about.me/marcosortiz>\nMy Blog <http://marcosluis2186.posterous.com>\nTumblr's blog <http://marcosortiz.tumblr.com/>\n@marcosluis2186 <http://twitter.com/marcosluis2186>",
"msg_date": "Thu, 13 Sep 2012 14:30:16 -0400",
"msg_from": "Marcos Ortiz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AppScale backend datastore (NoSQL again kind of)"
},
{
"msg_contents": "On 14/09/12 06:11, Ross Reedstrom wrote:\n> Hey PostgreSQL speed demons -\n> At work, we're considering an AppScale deployment (that's the Google App Engine\n> roll-your-own http://appscale.cs.ucsb.edu/). It supports multiple technologies\n> to back the datastore part of the platform (HBase, Hypertable, MySQL Cluster,\n> Cassandra, Voldemort, MongoDB, MemcacheDB, Redis). Oddly enough, in performance\n> tests, the MySQL Cluster seems like the general winner (their tests, we haven't\n> done any yet)\n>\n> So, my immediate thought was \"How hard would it be to replace the MySQL\n> Cluster bit w/ PostgreSQL?\" I'm thinking hot standby/streaming rep. May or may\n> not need a pooling solution in there as well (I need to look at the AppScale\n> abstraction code, it may already be doing the pooling/direction bit.)\n>\n> Any thoughts from those with more experience using/building PostgreSQL clusters?\n> Replacing MySQL Cluster? Clearly they must be using a subset of functionality,\n> since they support so many different backend stores. I'll probably have to\n> set up an instance of all this, run some example apps, and see what's actually\n> stored to get a handle on it. The GAE api for the datastore is sort of a ORM,\n> w/ yet another query language, that seems to map to SQL better than to NoSQL,\n> in any case. There seems to be a fairly explicit exposure of a table==class\n> sort of mapping.\n>\n\nPostgres-xc might be a good option to consider too.\n\nRegards\n\nMark\n\n\n",
"msg_date": "Fri, 14 Sep 2012 10:15:37 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AppScale backend datastore (NoSQL again kind of)"
}
] |
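For the pooling layer Marcos suggests, a minimal PgBouncer configuration sketch — every hostname, path, and pool size here is an illustrative assumption, not something from the thread:

```ini
; pgbouncer.ini — hypothetical values for an AppScale-style deployment
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; suits many short-lived app-server connections
max_client_conn = 200
default_pool_size = 20
```

Application servers would then connect to port 6432 instead of 5432; transaction pooling lets a small number of server connections serve many clients, at the cost of losing session-level state such as prepared statements.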
[
{
"msg_contents": "-- \nhttp://www.feteknoloji.com\nRegards, Feridun Türk",
"msg_date": "Thu, 13 Sep 2012 21:25:18 +0300",
"msg_from": "=?UTF-8?Q?Feridun_t=C3=BCrk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Hi\nI compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\nthe 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n990X).\n\nCentOS 6.3 x86_64\nPostgreSQL 9.2\ncpufreq scaling_governor - performance\n\n# /etc/init.d/postgresql initdb\n# echo \"fsync = off\" >> /var/lib/pgsql/data/postgresql.conf\n# /etc/init.d/postgresql start\n# su - postgres\n$ psql\n# create database pgbench;\n# \\q\n\n# pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\ntps = 4670.635648 (including connections establishing)\ntps = 4673.630345 (excluding connections establishing)[/code]\n\nOn kernel 3.5.3:\ntps = ~5800\n\n1) Host 1 - 15-20% performance drop\nAMD Phenom(tm) II X6 1090T Processor\nMB: AMD 880G\nRAM: 16 Gb DDR3\nSSD: PLEXTOR PX-256M3 256Gb\n\n2) Host 2 - 15-20% performance drop\nAMD Phenom(tm) II X6 1055T Processor\nMB: AMD 990X\nRAM: 32 Gb DDR3\nSSD: Corsair Performance Pro 128Gb\n\n3) Host 3 - no problems - same performance\nIntel E6300\nMB: Intel® P43 / ICH10\nRAM: 4 Gb DDR3\nHDD: SATA 7200 rpm\n\nKernel config - http://pastebin.com/cFpg5JSJ\n\nAny ideas?\n\nThx\n\n",
"msg_date": "Fri, 14 Sep 2012 10:40:27 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "20% performance drop on PostgreSQL 9.2 from kernel 3.5.3 to 3.6-rc5\n\ton AMD chipsets"
},
{
"msg_contents": "On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky <[email protected]> wrote:\n> Hi\n> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\n> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n> 990X).\n>\n> CentOS 6.3 x86_64\n> PostgreSQL 9.2\n> cpufreq scaling_governor - performance\n>\n> # /etc/init.d/postgresql initdb\n> # echo \"fsync = off\" >> /var/lib/pgsql/data/postgresql.conf\n> # /etc/init.d/postgresql start\n> # su - postgres\n> $ psql\n> # create database pgbench;\n> # \\q\n>\n> # pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n> tps = 4670.635648 (including connections establishing)\n> tps = 4673.630345 (excluding connections establishing)[/code]\n>\n> On kernel 3.5.3:\n> tps = ~5800\n>\n> 1) Host 1 - 15-20% performance drop\n> AMD Phenom(tm) II X6 1090T Processor\n> MB: AMD 880G\n> RAM: 16 Gb DDR3\n> SSD: PLEXTOR PX-256M3 256Gb\n>\n> 2) Host 2 - 15-20% performance drop\n> AMD Phenom(tm) II X6 1055T Processor\n> MB: AMD 990X\n> RAM: 32 Gb DDR3\n> SSD: Corsair Performance Pro 128Gb\n>\n> 3) Host 3 - no problems - same performance\n> Intel E6300\n> MB: Intel® P43 / ICH10\n> RAM: 4 Gb DDR3\n> HDD: SATA 7200 rpm\n>\n> Kernel config - http://pastebin.com/cFpg5JSJ\n>\n> Any ideas?\n\nDid you tell LKML? It seems like a kind of change that could be found\nusing git bisect of Linux, albiet laboriously.\n\n-- \nfdr\n\n",
"msg_date": "Fri, 14 Sep 2012 01:45:32 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "Regards, Nikolay.\nLike Daniel said to you, I encourage to inform all your findings to the \nLKML to\nreport all these problems.\n\nOnly one last question: Did you tune the postgresql.conf for every \nsystem? or\nDid you use the default configuration ?\n\nBest wishes\nOn 09/14/2012 04:45 AM, Daniel Farina wrote:\n> On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky <[email protected]> wrote:\n>> Hi\n>> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\n>> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n>> 990X).\n>>\n>> CentOS 6.3 x86_64\n>> PostgreSQL 9.2\n>> cpufreq scaling_governor - performance\n>>\n>> # /etc/init.d/postgresql initdb\n>> # echo \"fsync = off\" >> /var/lib/pgsql/data/postgresql.conf\n>> # /etc/init.d/postgresql start\n>> # su - postgres\n>> $ psql\n>> # create database pgbench;\n>> # \\q\n>>\n>> # pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n>> tps = 4670.635648 (including connections establishing)\n>> tps = 4673.630345 (excluding connections establishing)[/code]\n>>\n>> On kernel 3.5.3:\n>> tps = ~5800\n>>\n>> 1) Host 1 - 15-20% performance drop\n>> AMD Phenom(tm) II X6 1090T Processor\n>> MB: AMD 880G\n>> RAM: 16 Gb DDR3\n>> SSD: PLEXTOR PX-256M3 256Gb\n>>\n>> 2) Host 2 - 15-20% performance drop\n>> AMD Phenom(tm) II X6 1055T Processor\n>> MB: AMD 990X\n>> RAM: 32 Gb DDR3\n>> SSD: Corsair Performance Pro 128Gb\n>>\n>> 3) Host 3 - no problems - same performance\n>> Intel E6300\n>> MB: Intel® P43 / ICH10\n>> RAM: 4 Gb DDR3\n>> HDD: SATA 7200 rpm\n>>\n>> Kernel config - http://pastebin.com/cFpg5JSJ\n>>\n>> Any ideas?\n> Did you tell LKML? It seems like a kind of change that could be found\n> using git bisect of Linux, albiet laboriously.\n>\n\n-- \n\nMarcos Luis Ortíz Valmaseda\n*Data Engineer && Sr. System Administrator at UCI*\nabout.me/marcosortiz <http://about.me/marcosortiz>\nMy Blog <http://marcosluis2186.posterous.com>\nTumblr's blog <http://marcosortiz.tumblr.com/>\n@marcosluis2186 <http://twitter.com/marcosluis2186>",
"msg_date": "Fri, 14 Sep 2012 10:56:26 -0400",
"msg_from": "Marcos Ortiz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "On 14 September 2012 11:45, Daniel Farina <[email protected]> wrote:\n> Did you tell LKML? It seems like a kind of change that could be found\n> using git bisect of Linux, albiet laboriously.\n\nHi, Daniel\nI sent it to [email protected] on Fri, 14 Sep 2012 10:47:44\n+0300.\n\n\n\nOn 14 September 2012 17:56, Marcos Ortiz <[email protected]> wrote:\n> Only one las t question: Did you tune the postgresql.conf for every\nsystem? or\n> Did you use the default configuration ?\n\n\nHi, Marcos\n\nI have the issue with default and tuned configuration on all AMD systems:\n\nTuned config for 32Gb RAM:\n#------------------------------------------------------------------------------\n# Connection Settings -\n#------------------------------------------------------------------------------\nlisten_addresses = '*'\nport = 5432\nmax_connections = 50\n\n#------------------------------------------------------------------------------\n# OPTIMIZATIONS\n#------------------------------------------------------------------------------\nshared_buffers = 7680MB\neffective_cache_size = 22GB\nwork_mem = 576MB\nmaintenance_work_mem = 2GB\nwal_buffers = 16MB\nfsync = off\nsynchronous_commit = off\n\n#------------------------------------------------------------------------------\n# PLANNER\n#------------------------------------------------------------------------------\ndefault_statistics_target = 100\nconstraint_exclusion = off\n\n#------------------------------------------------------------------------------\n# CHECKPOINTS\n#------------------------------------------------------------------------------\ncheckpoint_timeout = 5min\ncheckpoint_segments = 16\ncheckpoint_completion_target = 0.9\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\nautovacuum = on\nautovacuum_naptime = 5min\nautovacuum_max_workers = 
1\n\nautovacuum_vacuum_scale_factor = 0.0001\nautovacuum_analyze_scale_factor = 0.0001\n\nautovacuum_vacuum_threshold = 100\nautovacuum_analyze_threshold = 100\n\nautovacuum_vacuum_cost_delay = 1ms\n\n#------------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#------------------------------------------------------------------------------\nlogging_collector = on\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%a.log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1d\nlog_rotation_size = 0\nlog_line_prefix = '%t %d %u '",
"msg_date": "Fri, 14 Sep 2012 18:04:12 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky <[email protected]>wrote:\n\n> Hi\n> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\n> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n> 990X).\n>\n\nDid you compile the AMD code on the AMD system?\n\nWe use a different open-source project that provides chemistry\nfunctionality, and discovered the hard way that the code optimizer is\nspecific to each chip. Code compiled on Intel chips would sometimes run\n50% slower on AMD chips (and vice versa). When we compiled the Intel code\nusing Intel computers and AMD code using AMD computers, the performance\ndifference disappeared.\n\nThere's probably an optimizer flag somewhere that would allow you to force\nit to compile for one chip or the other, but by default it seems to pick\nthe one you're running on.\n\nCraig\n\n\n>\n> CentOS 6.3 x86_64\n> PostgreSQL 9.2\n> cpufreq scaling_governor - performance\n>\n> # /etc/init.d/postgresql initdb\n> # echo \"fsync = off\" >> /var/lib/pgsql/data/postgresql.conf\n> # /etc/init.d/postgresql start\n> # su - postgres\n> $ psql\n> # create database pgbench;\n> # \\q\n>\n> # pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n> tps = 4670.635648 (including connections establishing)\n> tps = 4673.630345 (excluding connections establishing)[/code]\n>\n> On kernel 3.5.3:\n> tps = ~5800\n>\n> 1) Host 1 - 15-20% performance drop\n> AMD Phenom(tm) II X6 1090T Processor\n> MB: AMD 880G\n> RAM: 16 Gb DDR3\n> SSD: PLEXTOR PX-256M3 256Gb\n>\n> 2) Host 2 - 15-20% performance drop\n> AMD Phenom(tm) II X6 1055T Processor\n> MB: AMD 990X\n> RAM: 32 Gb DDR3\n> SSD: Corsair Performance Pro 128Gb\n>\n> 3) Host 3 - no problems - same performance\n> Intel E6300\n> MB: Intel® P43 / ICH10\n> RAM: 4 Gb DDR3\n> HDD: SATA 7200 rpm\n>\n> Kernel config - http://pastebin.com/cFpg5JSJ\n>\n> Any ideas?\n>\n> Thx\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make 
changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Fri, 14 Sep 2012 08:29:00 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "Hi, Craig\n\nOn 14 September 2012 18:29, Craig James <[email protected]> wrote:\n> Did you compile the AMD code on the AMD system?\n\nYes\nAnd it is optimized for Generic-x86-64 (CONFIG_GENERIC_CPU).\n\n",
"msg_date": "Fri, 14 Sep 2012 18:35:20 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "On 09/14/2012 10:45 AM, Daniel Farina wrote:\n> On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky <[email protected]> wrote:\n>> Hi\n>> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\n>> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n>> 990X).\n\n[cut]\n\n>> Kernel config - http://pastebin.com/cFpg5JSJ\n>>\n>> Any ideas?\n>\n> Did you tell LKML? It seems like a kind of change that could be found\n> using git bisect of Linux, albiet laboriously.\n\n\njust a pointer to LKML thread:\n\nhttps://lkml.org/lkml/2012/9/14/99\n\nit seems that kernel dev were able to find the root cause\nafter bisecting kernel source.\n\nBorislav Petkov says that regression disappears\nafter reverting this commit:\n\ncommit 970e178985cadbca660feb02f4d2ee3a09f7fdda\nAuthor: Mike Galbraith <[email protected]>\nDate: Tue Jun 12 05:18:32 2012 +0200\n\nAndrea",
"msg_date": "Tue, 18 Sep 2012 09:44:13 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "On Tue, Sep 18, 2012 at 2:44 AM, Andrea Suisani <[email protected]> wrote:\n> On 09/14/2012 10:45 AM, Daniel Farina wrote:\n>>\n>> On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky <[email protected]>\n>> wrote:\n>>>\n>>> Hi\n>>> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\n>>> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n>>> 990X).\n>\n>\n> [cut]\n>\n>\n>>> Kernel config - http://pastebin.com/cFpg5JSJ\n>>>\n>>> Any ideas?\n>>\n>>\n>> Did you tell LKML? It seems like a kind of change that could be found\n>> using git bisect of Linux, albiet laboriously.\n>\n>\n>\n> just a pointer to LKML thread:\n>\n> https://lkml.org/lkml/2012/9/14/99\n\nThere's some interesting discussion of postgres spinlocks in the thread:\n\n\"Yes, postgress performs loads better with it's spinlocks, but due to\nthat, it necessarily _hates_ preemption.\"\n\nmerlin\n\n",
"msg_date": "Tue, 18 Sep 2012 08:54:36 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "[cut]\n\n>>>> Kernel config - http://pastebin.com/cFpg5JSJ\n>>>>\n>>>> Any ideas?\n>>>\n>>>\n>>> Did you tell LKML? It seems like a kind of change that could be found\n>>> using git bisect of Linux, albiet laboriously.\n>>\n>> just a pointer to LKML thread:\n>>\n>> https://lkml.org/lkml/2012/9/14/99\n>\n> There's some interesting discussion of postgres spinlocks in the thread:\n>\n> \"Yes, postgress performs loads better with it's spinlocks, but due to\n> that, it necessarily _hates_ preemption.\"\n\nanother one:\n\nhttps://lkml.org/lkml/2012/9/15/39\n\nquoting the relevant piece:\n\nOn Sat, Sep 15, 2012 at 06:11:02AM +0200, Mike Galbraith wrote:\n > My wild (and only) theory is that this is userspace spinlock related.\n > If so, starting the server and benchmark SCHED_BATCH should not only\n > kill the regression, but likely improve throughput as well.\n\nafter that message Borislav Petkov tried to do\nexactly that using schedtool(8) (tool to query and set CPU\nscheduling parameters)\n\nand he confirmed that performance is \"even better than the results with 3.5\n(had something around 3900ish on that particular configuration).\"\n\n\nAndrea\n\n\n",
"msg_date": "Tue, 18 Sep 2012 16:25:20 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
},
{
"msg_contents": "Hi\n\nOn 09/18/2012 09:44 AM, Andrea Suisani wrote:\n> On 09/14/2012 10:45 AM, Daniel Farina wrote:\n>> On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky <[email protected]> wrote:\n>>> Hi\n>>> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got\n>>> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,\n>>> 990X).\n>\n> [cut]\n>\n>>> Kernel config - http://pastebin.com/cFpg5JSJ\n>>>\n>>> Any ideas?\n>>\n>> Did you tell LKML? It seems like a kind of change that could be found\n>> using git bisect of Linux, albiet laboriously.\n>\n>\n> just a pointer to LKML thread:\n>\n> https://lkml.org/lkml/2012/9/14/99\n\n[cut]\n\ntoday Jonathan Corbet has posted a good write-up on lwn.net\n\n\"How 3.6 nearly broke PostgreSQL\"\n\nhttp://lwn.net/SubscriberLink/518329/672d5c68286f9c18/\n\nthis is definitely a worth reading piece.\n\nAndrea\n\n",
"msg_date": "Wed, 03 Oct 2012 13:45:54 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 20% performance drop on PostgreSQL 9.2 from kernel\n\t3.5.3 to 3.6-rc5 on AMD chipsets"
}
] |
[
{
"msg_contents": "Hi, \n\nI have a Centos 6.2 Virtual machine that contains Postgresql version 8.4 database. The application installed in this Virtual machine uses this database that is local to this Virtual machine. I have tried to offload the database, by installing it on a remote Virtual machine, on another server, and tried to connect to it from my local Virtual machine. The application configuration remains the same, only database is offloaded to a remote Virtual machine on another server and the connection parameters have changed. The connection is all fine and the application can access the remote database.\n\nI have observed that the Postgresql is responding extremely slow. What should I do to improve its performance?\n\nPlease suggest.\n\n\nKind Regards,\nManoj Agarwal",
"msg_date": "Fri, 14 Sep 2012 11:02:09 +0200",
"msg_from": "\"Manoj Agarwal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remote access to Postgresql slow"
},
{
"msg_contents": "Is your network link between server and client across the public internet?\n\nYou need to check bandwidth and latency characteristics of your network.\n\nA simple test: run the following on the server host and run it again on the client\nhost.\n\ntime psql [connect details] -c 'select now()'\n\nI access postgresql database across the public internet (by tunnelling port\n5432 across compressed ssh sessions). In my case latency is a significant\npenalty. Locally the response time for the above is <10ms but remotely it is 30\ntimes slower (350ms)\n\nYou may need to install wireshark or similar and monitor client traffic in\norder to figure out the network penalty. Maybe your app goes back and\nforward to postgres multiple times; does lots of chatter. If so then\nlatency cost becomes very significant. You want to try and minimise the\nnumber of postgresql calls; retrieve more data with less SQL operations.\n\n\n\nOn Fri, Sep 14, 2012 at 7:02 PM, Manoj Agarwal <[email protected]> wrote:\n\n> **\n>\n> Hi,\n>\n> I have a Centos 6.2 Virtual machine that contains Postgresql version 8.4\n> database. The application installed in this Virtual machine uses this\n> database that is local to this Virtual machine. I have tried to offload\n> the database, by installing it on a remote Virtual machine, on another\n> server, and tried to connect to it from my local Virtual machine. The\n> application configuration remains the same, only database is offloaded to a\n> remote Virtual machine on another server and the connection parameters have\n> changed. The connection is all fine and the application can access the\n> remote database.\n>\n> I have observed that the Postgresql is responding extremely slow. What\n> should I do to improve its performance?\n>\n> Please suggest.\n>\n>\n> Kind Regards,\n> Manoj Agarwal\n>\n>",
"msg_date": "Sun, 16 Sep 2012 07:17:19 +1000",
"msg_from": "Andrew Barnham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remote access to Postgresql slow"
},
{
"msg_contents": "On Fri, Sep 14, 2012 at 3:02 AM, Manoj Agarwal <[email protected]> wrote:\n> changed. The connection is all fine and the application can access the\n> remote database.\n>\n> I have observed that the Postgresql is responding extremely slow. What\n> should I do to improve its performance?\n\nAre you connecting via host name? If so have you tried connecting via IP?\n\n",
"msg_date": "Sat, 15 Sep 2012 15:20:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remote access to Postgresql slow"
}
] |
[
{
"msg_contents": "I am looking at changing all of the foreign key definitions to be deferrable (initially immediate). Then during a few scenarios performed by the application, set all foreign key constraints to be deferred (initially deferred) for that given transaction.\n\nMy underlying question/concern is \"will this change have any adverse effects (on performance) during normal operations when the foreign keys are set to deferrable initially immediate\" vs. the foreign keys being defined as NOT DEFERRABLE.\n\nI have read that there can be a difference in behavior/performance when a Primary Key/Unique Key is changed to deferred, due to assumptions the optimizer can or cannot make regarding whether the associated index is unique. But I have not found any negatives in regard to changing foreign key definitions to be deferrable.\n\nThanks,\nAlan",
"msg_date": "Fri, 14 Sep 2012 11:56:04 -0400",
"msg_from": "\"McKinzie, Alan (Alan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are there known performance issues with defining all Foreign Keys\n\tas deferrable initially immediate"
},
{
"msg_contents": "On 09/14/2012 11:56 PM, McKinzie, Alan (Alan) wrote:\n\n> My underlying question/concern is \"will this change have any adverse\n> affects (on performance) during normal operations when the foreign keys\n> are set to deferrable initially immediate\" .vs. the foreign keys being\n> defined as NOT DEFERRABLE.\n\nAFAIK in PostgreSQL DEFERRABLE INITIALLY IMMEDIATE is different to NOT \nDEFERRABLE.\n\nDEFERRABLE INITIALLY IMMEDIATE is executed at the end of the statement, \nwhile NOT DEFERRABLE is executed as soon as it arises.\n\nhttp://www.postgresql.org/docs/current/static/sql-set-constraints.html\n\nhttp://stackoverflow.com/questions/10032272/constraint-defined-deferrable-initially-immediate-is-still-deferred\n\nAgain from memory there's a performance cost to deferring constraint \nchecks to the end of the statement rather than doing them as soon as \nthey arise, so NOT DEFERRED can potentially perform better or at least \nnot hit limits that DEFERRABLE INITIALLY DEFERRED might hit in Pg.\n\nThis seems under-documented and I haven't found much good info on it, so \nthe best thing to do is test it.\n\n--\nCraig Ringer\n\n",
"msg_date": "Sun, 16 Sep 2012 21:45:35 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there known performance issues with defining all\n\tForeign Keys as deferrable initially immediate"
},
{
"msg_contents": "On 09/16/2012 09:45 PM, Craig Ringer wrote:\n\n> This seems under-documented and I haven't found much good info on it, \n> so the best thing to do is test it.\n\nFound it, it's in the NOTES for CREATE TABLE.\n\nhttp://www.postgresql.org/docs/current/static/sql-createtable.html:\n\nWhen a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL \nchecks for uniqueness immediately whenever a row is inserted or \nmodified. The SQL standard says that uniqueness should be enforced only \nat the end of the statement; this makes a difference when, for example, \na single command updates multiple key values. To obtain \nstandard-compliant behavior, declare the constraint as DEFERRABLE but \nnot deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be \nsignificantly slower than immediate uniqueness checking.\n\n\n--\nCraig Ringer",
"msg_date": "Sun, 16 Sep 2012 22:12:13 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there known performance issues with defining all\n\tForeign Keys as deferrable initially immediate"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> Found it, it's in the NOTES for CREATE TABLE.\n> http://www.postgresql.org/docs/current/static/sql-createtable.html:\n\n> When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL \n> checks for uniqueness immediately whenever a row is inserted or \n> modified. The SQL standard says that uniqueness should be enforced only \n> at the end of the statement; this makes a difference when, for example, \n> a single command updates multiple key values. To obtain \n> standard-compliant behavior, declare the constraint as DEFERRABLE but \n> not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be \n> significantly slower than immediate uniqueness checking.\n\nNote that that is addressing uniqueness constraints, and *only*\nuniqueness constraints. Foreign key constraints are implemented\ndifferently. There is no equivalent to an immediate check of a foreign\nkey constraint --- it's checked either at end of statement or end of\ntransaction, depending on the DEFERRED property. So there's really no\nperformance difference for FKs, unless you let a large number of pending\nchecks accumulate over multiple commands within a transaction.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 16 Sep 2012 11:37:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there known performance issues with defining all Foreign Keys\n\tas deferrable initially immediate"
},
{
"msg_contents": "On 09/16/2012 11:37 PM, Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n>> Found it, it's in the NOTES for CREATE TABLE.\n>> http://www.postgresql.org/docs/current/static/sql-createtable.html:\n>\n>> When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL\n>> checks for uniqueness immediately whenever a row is inserted or\n>> modified. The SQL standard says that uniqueness should be enforced only\n>> at the end of the statement; this makes a difference when, for example,\n>> a single command updates multiple key values. To obtain\n>> standard-compliant behavior, declare the constraint as DEFERRABLE but\n>> not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be\n>> significantly slower than immediate uniqueness checking.\n>\n> Note that that is addressing uniqueness constraints, and *only*\n> uniqueness constraints. Foreign key constraints are implemented\n> differently. There is no equivalent to an immediate check of a foreign\n> key constraint --- it's checked either at end of statement or end of\n> transaction, depending on the DEFERRED property. So there's really no\n> performance difference for FKs, unless you let a large number of pending\n> checks accumulate over multiple commands within a transaction.\n\nAh, thanks. I missed that detail.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Mon, 17 Sep 2012 10:59:20 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there known performance issues with defining all\n\tForeign Keys as deferrable initially immediate"
},
{
"msg_contents": "Thanks for the information guys. And Yes, I am only updating the Foreign Key definitions to be deferrable. I am not modifying the Unique/Primary Key definitions.\n\nThanks again,\nAlan\n\n-----Original Message-----\nFrom: Craig Ringer [mailto:[email protected]] \nSent: Sunday, September 16, 2012 9:59 PM\nTo: Tom Lane\nCc: McKinzie, Alan (Alan); [email protected]\nSubject: Re: [PERFORM] Are there known performance issues with defining all Foreign Keys as deferrable initially immediate\n\nOn 09/16/2012 11:37 PM, Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n>> Found it, it's in the NOTES for CREATE TABLE.\n>> http://www.postgresql.org/docs/current/static/sql-createtable.html:\n>\n>> When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL\n>> checks for uniqueness immediately whenever a row is inserted or\n>> modified. The SQL standard says that uniqueness should be enforced only\n>> at the end of the statement; this makes a difference when, for example,\n>> a single command updates multiple key values. To obtain\n>> standard-compliant behavior, declare the constraint as DEFERRABLE but\n>> not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be\n>> significantly slower than immediate uniqueness checking.\n>\n> Note that that is addressing uniqueness constraints, and *only*\n> uniqueness constraints. Foreign key constraints are implemented\n> differently. There is no equivalent to an immediate check of a foreign\n> key constraint --- it's checked either at end of statement or end of\n> transaction, depending on the DEFERRED property. So there's really no\n> performance difference for FKs, unless you let a large number of pending\n> checks accumulate over multiple commands within a transaction.\n\nAh, thanks. I missed that detail.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Mon, 17 Sep 2012 09:38:27 -0400",
"msg_from": "\"McKinzie, Alan (Alan)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are there known performance issues with defining all\n\tForeign Keys as deferrable initially immediate"
}
] |
[
{
"msg_contents": "I am pondering about this... My thinking is that since *_scale_factor need\nto be set manually for largish tables (>1M), why not\nset autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor, and\nincrease the value of autovacuum_vacuum_threshold to, say, 10000, and\nautovacuum_analyze_threshold\nto 2500 ? What do you think ?\n\nAlso, with systems handling 8k-10k tps and dedicated to a single database,\nwould there be any cons to decreasing autovacuum_naptime to say 15s, so\nthat the system perf is less spiky ?\n\nSébastien",
"msg_date": "Fri, 14 Sep 2012 16:11:05 -0400",
"msg_from": "=?UTF-8?Q?S=C3=A9bastien_Lorion?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting autovacuum_vacuum_scale_factor to 0 a good idea ?"
},
{
"msg_contents": "\n> I am pondering about this... My thinking is that since *_scale_factor need\n> to be set manually for largish tables (>1M), why not\n> set autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor, and\n> increase the value of autovacuum_vacuum_threshold to, say, 10000, and\n> autovacuum_analyze_threshold\n> to 2500 ? What do you think ?\n\nI really doubt you want to be vacuuming a large table every 10,000 rows.\n Or analyzing every 2500 rows, for that matter. These things aren't\nfree, or we'd just do them constantly.\n\nManipulating the analyze thresholds for a large table makes sense; on\ntables of over 10m rows, I often lower autovacuum_analyze_scale_factor\nto 0.02 or 0.01, to get them analyzed a bit more often. But vacuuming\nthem more often makes no sense.\n\n> Also, with systems handling 8k-10k tps and dedicated to a single database,\n> would there be any cons to decreasing autovacuum_naptime to say 15s, so\n> that the system perf is less spiky ?\n\nYou might also want to consider more autovacuum workers. Although if\nyou've set the thresholds as above, that's the reason autovacuum is\nalways busy and not keeping up ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Fri, 14 Sep 2012 14:49:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting autovacuum_vacuum_scale_factor to 0 a good\n idea ?"
},
{
"msg_contents": "Ah I see... I thought that by running the vacuum more often, its cost would\nbe divided in a more or less linear fashion, with a base constant cost.\nWhile I read about the vacuum process, I did not check the source code or\neven read about the actual algorithm, so I am sorry for having asked a\nnonsensical question :)\n\nIt was theoretical, my current database does what you suggest, but I might\nincrease workers as about 10 tables see a heavy update rate and are quite\nlarge compared to the others.\n\nSébastien\n\nOn Fri, Sep 14, 2012 at 5:49 PM, Josh Berkus <[email protected]> wrote:\n\n>\n> > I am pondering about this... My thinking is that since *_scale_factor\n> need\n> > to be set manually for largish tables (>1M), why not\n> > set autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor,\n> and\n> > increase the value of autovacuum_vacuum_threshold to, say, 10000, and\n> > autovacuum_analyze_threshold\n> > to 2500 ? What do you think ?\n>\n> I really doubt you want to be vacuuming a large table every 10,000 rows.\n> Or analyzing every 2500 rows, for that matter. These things aren't\n> free, or we'd just do them constantly.\n>\n> Manipulating the analyze thresholds for a large table make sense; on\n> tables of over 10m rows, I often lower autovacuum_analyze_scale_factor\n> to 0.02 or 0.01, to get them analyzed a bit more often. But vacuuming\n> them more often makes no sense.\n>\n> > Also, with systems handling 8k-10k tps and dedicated to a single\n> database,\n> > would there be any cons to decreasing autovacuum_naptime to say 15s, so\n> > that the system perf is less spiky ?\n>\n> You might also want to consider more autovacuum workers. Although if\n> you've set the thresholds as above, that's the reason autovacuum is\n> always busy and not keeping up ...\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Fri, 14 Sep 2012 22:35:14 -0400",
"msg_from": "=?UTF-8?Q?S=C3=A9bastien_Lorion?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting autovacuum_vacuum_scale_factor to 0 a good idea ?"
}
] |
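The trade-off debated in the thread above follows directly from the autovacuum trigger formula in the PostgreSQL documentation (vacuum threshold = autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, and the analogous formula for analyze). A minimal Python sketch (not from the thread itself) makes Josh's objection concrete: with scale_factor = 0 and threshold = 10000, a 100M-row table gets vacuumed every 10,000 updated rows, versus every ~20M rows with the defaults.

```python
# Sketch of the autovacuum trigger formula from the PostgreSQL docs:
#   vacuum threshold = autovacuum_vacuum_threshold
#                      + autovacuum_vacuum_scale_factor * reltuples
# Autovacuum vacuums a table once n_dead_tup exceeds this value.

def vacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead tuples needed before autovacuum vacuums the table (defaults
    match the stock 9.x settings)."""
    return threshold + scale_factor * reltuples

# Default settings vs. the scale_factor=0 / threshold=10000 idea from the thread
for rows in (10_000, 1_000_000, 100_000_000):
    default = vacuum_trigger(rows)
    proposed = vacuum_trigger(rows, threshold=10_000, scale_factor=0.0)
    print(f"{rows:>11,} rows: default triggers at {default:>12,.0f} dead tuples, "
          f"proposed at {proposed:,.0f}")
```

The same arithmetic applies to the analyze side with autovacuum_analyze_threshold and autovacuum_analyze_scale_factor, which is why lowering the scale factor (rather than zeroing it) is the usual way to analyze big tables more often.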
[
{
"msg_contents": "I am not able to set wal_sync_method to anything but fsync on FreeBSD 9.0\nfor a DB created on ZFS (I have not tested on UFS). Is that expected ? Has\nit anything to do with running on EC2 ?\n\nSébastien",
"msg_date": "Fri, 14 Sep 2012 16:19:28 -0400",
"msg_from": "=?UTF-8?Q?S=C3=A9bastien_Lorion?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "wal_sync_method on FreeBSD 9.0 - ZFS"
},
{
"msg_contents": "On 14/09/2012 22:19, Sébastien Lorion wrote:\n> I am not able to set wal_sync_method to anything but fsync on FreeBSD 9.0\n> for a DB created on ZFS (I have not tested on UFS). Is that expected ? Has\n> it anything to do with running on EC2 ?\n\nCan you explain what prevents you for setting the wal_sync_method?",
"msg_date": "Mon, 17 Sep 2012 12:56:49 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_sync_method on FreeBSD 9.0 - ZFS"
},
{
"msg_contents": "I don't remember the exact error message, but basically, if I set it to\nanything else but fsync, when I start PostgreSQL, it tells me that the new\nmethod is not available on my platform.\n\nSébastien\n\nOn Mon, Sep 17, 2012 at 6:56 AM, Ivan Voras <[email protected]> wrote:\n\n> On 14/09/2012 22:19, Sébastien Lorion wrote:\n> > I am not able to set wal_sync_method to anything but fsync on FreeBSD 9.0\n> > for a DB created on ZFS (I have not tested on UFS). Is that expected ?\n> Has\n> > it anything to do with running on EC2 ?\n>\n> Can you explain what prevents you for setting the wal_sync_method?\n>\n>",
"msg_date": "Sat, 22 Sep 2012 10:01:20 -0400",
"msg_from": "=?UTF-8?Q?S=C3=A9bastien_Lorion?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wal_sync_method on FreeBSD 9.0 - ZFS"
}
] |
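The behaviour reported in this thread is expected: PostgreSQL decides at build time which wal_sync_method values to offer, based on which sync primitives the platform provides, and FreeBSD 9 lacks fdatasync() and a distinct O_DSYNC flag, so fsync is essentially the only choice (this has nothing to do with ZFS or EC2). A rough, hypothetical Python probe of the same idea for whatever OS runs it — this is an illustration, not how the server itself performs the check:

```python
# Illustrative only: map wal_sync_method names to the OS primitives they
# need, and report which this platform could support. PostgreSQL makes the
# equivalent decision at configure/compile time, not at runtime.
import os

def available_sync_methods():
    methods = ["fsync"]                      # POSIX fsync() is always present
    if hasattr(os, "fdatasync"):
        methods.append("fdatasync")          # needs fdatasync()
    if hasattr(os, "O_SYNC"):
        methods.append("open_sync")          # needs O_SYNC open flag
    if getattr(os, "O_DSYNC", 0) not in (0, getattr(os, "O_SYNC", 0)):
        methods.append("open_datasync")      # needs a distinct O_DSYNC flag
    return methods

print(available_sync_methods())
```

On a typical Linux box this prints all four families; on FreeBSD 9 it would collapse to little more than fsync, matching the error Sébastien saw.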
[
{
"msg_contents": "Hello All,\n \nWe are migrating our product from 32 bit CentOS version 5.0 (kernel 2.6.18) to 64 bit CentOS version 6.0 (kernel 2.6.32)\nSo we decided to upgrade the PostgreSQL version from 8.2.2 to 9.0.4\n \nWe are compiling the PostgreSQL source on our build machine to create an RPM before using it in our product.\n \nThe issue we have noticed is the 9.0.4 (64 bit) version of PostgreSQL has slower performance as compared to 8.2.2 (32 bit) version on an identical hardware.\n \nTo investigate further we tried monitoring the PostgreSQL process using strace and found that the earlier version of PostgreSQL was using _llseek() system call whereas the later version is using lseek() system call.\n \nWill this impact the PostgreSQL performance? When the timing is on we found every query executed on 9.0.4 was taking longer than the query executed on 8.2.2\n \nPlease guide me what should I see while compiling the PostgreSQL 9.0.4 version to improve its performance.\n \nThank you,\nUmesh",
"msg_date": "Sun, 16 Sep 2012 11:48:15 +0800 (SGT)",
"msg_from": "Umesh Kirdat <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL performance on 64 bit as compared to 32 bit"
},
{
"msg_contents": "On Sun, Sep 16, 2012 at 12:48 AM, Umesh Kirdat <[email protected]> wrote:\n> The issue we have noticed is the 9.0.4 (64 bit) version of PostgreSQL has\n> slower performance as compared to 8.2.2 (32 bit) version on an identical\n> hardware.\n\nFirst of all, that's comparing apples and oranges. Compare the same\nversion in 32-vs-64, and different versions on same-arch.\n\n> To investigate further we tried monitoring the PostgreSQL process using\n> strace and found that the earlier version of PostgreSQL was using _llseek()\n> system call whereas the later version is using lseek() system call.\n\nSecond, I doubt that's the problem. It's most likely increased memory\nfootprint due to 64-bit pointers, a known overhead of the 64-bit arch,\nbut a price you have to pay if you want access to more than 3-4GB of\nRAM. You'll be better off using a profiler, like oprofile, and comparing\nthe profile between the two arches.\n\n",
"msg_date": "Fri, 21 Sep 2012 23:43:49 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance on 64 bit as compared to 32 bit"
}
] |
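Claudio's point about 64-bit pointer overhead can be made concrete: every pointer grows from 4 to 8 bytes, so pointer-heavy in-memory structures get larger and fewer of them fit in CPU cache, which is the usual source of the small slowdown seen when only the architecture changes. A small illustrative sketch (the node layout below is hypothetical, purely to show the arithmetic):

```python
# Show the pointer size of the current build, and what it does to a
# hypothetical linked structure holding 3 pointers and a 4-byte key.
import ctypes

pointer_bytes = ctypes.sizeof(ctypes.c_void_p)   # 4 on 32-bit, 8 on 64-bit
print(f"{8 * pointer_bytes}-bit build, {pointer_bytes}-byte pointers")

per_node = 3 * pointer_bytes + 4                 # hypothetical node layout
print(f"illustrative node size: {per_node} bytes "
      f"({3 * 4 + 4} bytes on a 32-bit build)")
```

On a 64-bit build the illustrative node is 28 bytes versus 16 on 32-bit, a ~75% growth for the same logical data — which is why profiling (oprofile/perf), not syscall names like lseek vs. _llseek, is the right way to compare the two builds.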
[
{
"msg_contents": "Using postgreSQL 9.2 with the following settings:\n\nmax_connections = 1000 # (change requires restart)\nshared_buffers = 65536MB # min 128kB\nwork_mem = 16MB # min 64kB\neffective_io_concurrency = 48 # 1-1000; 0 disables prefetching\nwal_buffers = 16MB # min 32kB, -1 sets based on\nshared_buffers\ncommit_delay = 10000 # range 0-100000, in microseconds\ncommit_siblings = 100 # range 1-1000\ncheckpoint_segments = 256 # in logfile segments, min 1, 16MB\neach\ncheckpoint_timeout = 10min # range 30s-1h\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 -\n1.0\neffective_cache_size = 65536MB\n\nServer spec is Xeon 8 Core/16threads, 512Gb memory. Database size is 100G. \nUnderlying SAN is raid 1/0.\nOS is Linux Redhat 6.2. Kernel 2.6.32\n\nRunning hammer ora TPC-C type load. Under 20 user load (no key and think)\ngetting approx 180,000 TPM - which is about half of what I get with another\ndatabase vendor. \n\ntracing the process (strace -r) I get outtput like that below - a lot of the\ntime seems to be doing semop type operations (eg 0.001299 semop(13369414,\n{{3, -1, 0}}, 1) = 0)\n\nJust wondered if anyone could tell me what is going on there and any\npossibilities that I might have to decrease this wait time ?\n\n\n 0.000176 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000031 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.001102 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000049 semop(13369414, {{4, 1, 0}}, 1) = 0\n 0.000405 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000049 semop(13369414, {{10, 1, 0}}, 1) = 0\n 0.000337 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000057 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000074 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000779 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000847 sendto(10,\n\"T\\0\\0\\0\\37\\0\\1neword\\0\\0\\0\\0\\0\\0\\0\\0\\0\\6\\244\\377\\377\\377\\377\\377\\377\\0\\0\"...,\n63, 0, NULL, 0) = 63\n 0.000063 recvfrom(10, \"Q\\0\\0\\0(select neword(142,1001,8,23\"..., 8192,\n0, NULL, NULL) = 41\n 
 0.000463 lseek(12, 0, SEEK_END) = 52486144\n 0.000057 lseek(13, 0, SEEK_END) = 6356992\n 0.001299 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000864 semop(13402183, {{2, 1, 0}}, 1) = 0\n 0.000420 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000675 semop(13402183, {{7, 1, 0}}, 1) = 0\n 0.000445 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.000156 semop(13369414, {{3, -1, 0}}, 1) = 0\n 0.001458 semop(13369414, {{6, 1, 0}}, 1) = 0\n\nCheers,\n\nMal\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Newbie-performance-problem-semop-taking-most-of-time-tp5724566.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 19 Sep 2012 05:34:59 -0700 (PDT)",
"msg_from": "\"mal.oracledba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Newbie performance problem - semop taking most of time ?"
},
{
"msg_contents": "On Wed, Sep 19, 2012 at 5:34 AM, mal.oracledba <[email protected]> wrote:\n> Running hammer ora TPC-C type load. Under 20 user load (no key and think)\n> getting approx 180,000 TPM - which is about half of what I get with another\n> database vendor.\n>\n> tracing the process (strace -r) I get outtput like that below - a lot of the\n> time seems to be doing semop type operations (eg 0.001299 semop(13369414,\n> {{3, -1, 0}}, 1) = 0)\n>\n> Just wondered if anyone could tell me what is going on there and any\n> possibilities that I might have to decrease this wait time ?\n\nI don't think system-call traces alone are enough to find a\nperformance issue; if using a sufficiently new Linux I'd highly\nrecommend posting the results of the tool 'perf'. Robert Haas writes\nsome of his favorite incantations of it here:\n\nhttp://rhaas.blogspot.com/2012/06/perf-good-bad-ugly.html\n\nYou might also want to offer some qualitative information...for\nexample, does the problem seem to be contention (wherein there is\nspare CPU time that should be getting used, but isn't) or maybe just\ntoo many cycles are being expended by Postgres vs Your Other Database\nVendor.\n\n-- \nfdr\n\n",
"msg_date": "Fri, 21 Sep 2012 22:50:25 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Newbie performance problem - semop taking most of time ?"
},
{
"msg_contents": "\nCPU on the server shows approx 60% used/40 % idle\n\n\n vmstat 5 5\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n18 0 0 160465072 455200 352185248 0 0 37 421 2 0 4 \n1 95 0 0\n14 1 0 160321088 455200 352315040 0 0 0 30559 30496 48583 48 \n4 46 1 0\n 1 0 0 160153584 455204 352466624 0 0 13 57266 35147 56949 56 \n5 38 2 0\n19 1 0 160030544 455204 352577504 0 0 13 27765 27924 41981 41 \n3 54 1 0\n10 0 0 159862800 455204 352731776 0 0 29 37807 35591 58193 57 \n5 37 2 0\n\n\n\nA snapshot of 'perf' output for one of the users below. It doesnt change\nmuch from that profile.\n\n\n\n PerfTop: 319 irqs/sec kernel: 7.5% exact: 0.0% [1000Hz cycles], \n(target_pid: 18450)\n---------------------------------------------------------------------------------------------------------------------------------------------\n\n samples pcnt function DSO\n _______ _____ ___________________________\n________________________________________________\n\n 235.00 7.5% AllocSetAlloc \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 155.00 5.0% SearchCatCache \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 134.00 4.3% hash_search_with_hash_value\n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 128.00 4.1% LWLockAcquire \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 110.00 3.5% _bt_compare \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 106.00 3.4% __memcpy_ssse3_back /lib64/libc-2.12.so\n 106.00 3.4% XLogInsert \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 92.00 2.9% cmp_numerics \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 73.00 2.3% ExecInitExpr \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 64.00 2.0% MemoryContextAlloc \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 61.00 1.9% GetSnapshotData \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 60.00 1.9% _int_malloc /lib64/libc-2.12.so\n 58.00 1.9% nocache_index_getattr 
\n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 49.00 1.6% PinBuffer \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n 49.00 1.6% cmp_abs_common \n/post/PostgreSQL2/postgresql-9.2rc1/bin/postgres\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Newbie-performance-problem-semop-taking-most-of-time-tp5724566p5725062.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Sun, 23 Sep 2012 12:42:39 -0700 (PDT)",
"msg_from": "\"mal.oracledba\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Newbie performance problem - semop taking most of time ?"
},
{
"msg_contents": "\"mal.oracledba\" <[email protected]> writes:\n> A snapshot of 'perf' output for one of the users below. It doesnt change\n> much from that profile.\n\nYou might want to look into whether you could use int or bigint instead\nof numeric for your indexed columns. That trace is showing 4.5% of\nruntime directly blamable on cmp_numerics plus its subroutine\ncmp_abs_common, and I'll bet that a noticeable fraction of the\nAllocSetAlloc, memcpy, and malloc traffic is attributable to numeric\noperations too. You could probably not expect to get much more than 5%\nsavings, but still, if you don't need fractions then that's 5% that's\njust being wasted.\n\nThe bigger picture here though is a lot of context swaps, and I wonder\nhow much of that is blamable on your having activated commit_delay\n(especially with such silly settings as you chose ... more is not\nbetter there).\n\nThe fact that XLogInsert shows up high on the profile might indicate\nthat contention for WALInsertLock is a factor, though it's hardly proof\nof that. If that's the main problem there's not too much that can be\ndone about it in the near term. (There's some work going on to reduce\ncontention for that lock, but it's not done yet, let alone in 9.2.)\nIn a real application you could possibly reduce the problem by\nrearranging operations, but that would be cheating of course for a\nbenchmark.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 23 Sep 2012 16:36:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Newbie performance problem - semop taking most of time ?"
}
] |
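To go beyond eyeballing the trace, the `strace -r` output quoted above can be aggregated per syscall. A minimal sketch (not from the thread); note that `-r` prints the delta since the *previous* syscall, so this only approximates time attributable to each call — `strace -T` (time inside each call) or `strace -c` (a built-in summary) report that directly:

```python
# Aggregate `strace -r` output into per-syscall totals, e.g. to see how
# much of the traced interval is spent around semop() vs. lseek().
# Assumes the relative-timestamp format shown in the thread:
#   "     0.001299 semop(13369414, {{3, -1, 0}}, 1) = 0"
import re
from collections import Counter

LINE = re.compile(r"^\s*(\d+\.\d+)\s+([a-z_0-9]+)\(")

def totals_by_syscall(lines):
    acc = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            acc[m.group(2)] += float(m.group(1))
    return acc

sample = [
    "     0.000176 semop(13369414, {{3, -1, 0}}, 1) = 0",
    "     0.001299 semop(13369414, {{3, -1, 0}}, 1) = 0",
    "     0.000463 lseek(12, 0, SEEK_END) = 52486144",
]
print(totals_by_syscall(sample).most_common())
```

The perf profile later in the thread is the better tool here (semop is how LWLock/spinlock waits surface in a syscall trace, so heavy semop traffic usually means lock contention rather than semop itself being slow), but a summary like this is a quick way to confirm where the traced time clusters.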
[
{
"msg_contents": "Our production database, postgres 8.4 has an approximate size of 200 GB,\nmost of the data are large objects (174 GB), until a few months ago we used\npg_dump to perform backups, took about 3-4 hours to perform all the\nprocess. Some time ago the process became interminable, take one or two\ndays to process, we noticed that the decay process considerably to startup\nbackup of large object, so we had to opt for physical backups.\n\nWe perform various tests on similar servers with the same version and\npostgres 9.2 and it is exactly the same, the database does not have other\nproblems, nor has performance problems during everyday use.\n\n\nCould someone suggest a solution? thanks\n\nSergio",
"msg_date": "Thu, 20 Sep 2012 09:06:35 -0300",
"msg_from": "Sergio Gabriel Rodriguez <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems with large objects dump"
},
{
"msg_contents": "Sergio Gabriel Rodriguez <[email protected]> writes:\n> Our production database, postgres 8.4 has an approximate size of 200 GB,\n> most of the data are large objects (174 GB), until a few months ago we used\n> pg_dump to perform backups, took about 3-4 hours to perform all the\n> process. Some time ago the process became interminable, take one or two\n> days to process, we noticed that the decay process considerably to startup\n> backup of large object, so we had to opt for physical backups.\n\nHm ... there's been some recent work to reduce O(N^2) behaviors in\npg_dump when there are many objects to dump, but I'm not sure that's\nrelevant to your situation, because before 9.0 pg_dump didn't treat\nblobs as full-fledged database objects. You wouldn't happen to be\ntrying to use a 9.0 or later pg_dump would you? Exactly what 8.4.x\nrelease is this, anyway?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 20 Sep 2012 10:35:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "On Thu, Sep 20, 2012 at 11:35 AM, Tom Lane <[email protected]> wrote:\n\n> You wouldn't happen to be\n> trying to use a 9.0 or later pg_dump would you? Exactly what 8.4.x\n> release is this, anyway?\n>\n>\n>\nTom, thanks for replying, yes, we tried it with postgres postgres 9.1 and\n9.2 and the behavior is exactly the same. The production version is 8.4.9\n\nGreetings,\n\nsergio.",
"msg_date": "Thu, 20 Sep 2012 12:53:10 -0300",
"msg_from": "Sergio Gabriel Rodriguez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "Sergio Gabriel Rodriguez <[email protected]> writes:\n> On Thu, Sep 20, 2012 at 11:35 AM, Tom Lane <[email protected]> wrote:\n>> You wouldn't happen to be\n>> trying to use a 9.0 or later pg_dump would you? Exactly what 8.4.x\n>> release is this, anyway?\n\n> Tom, thanks for replying, yes, we tried it with postgres postgres 9.1 and\n> 9.2 and the behavior is exactly the same. The production version is 8.4.9\n\nWell, I see three different fixes for O(N^2) pg_dump performance\nproblems in the 8.4.x change logs since 8.4.9, so you're a bit behind\nthe times there. However, all of those fixes would have been in 9.2.0,\nso if you saw no improvement with a 9.2.0 pg_dump then the problem is\nsomething else. Can you put together a test case for somebody else to\ntry, or try to locate the bottleneck yourself using oprofile or perf?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 20 Sep 2012 12:33:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "Hi,\n I tried with Postgresql 9.2 and the process used to take almost a day\nand a half, was significantly reduced to 6 hours, before failing even used\nto take four hours. My question now is, how long should it take the backup\nfor a 200GB database with 80% of large objects?\n\nHp proliant Xeon G5\n32 GB RAM\nOS SLES 10 + logs --> raid 6\ndata-->raid 6\n\nthanks!\n\nOn Thu, Sep 20, 2012 at 12:53 PM, Sergio Gabriel Rodriguez <\[email protected]> wrote:\n\n> On Thu, Sep 20, 2012 at 11:35 AM, Tom Lane <[email protected]> wrote:\n>\n>> You wouldn't happen to be\n>> trying to use a 9.0 or later pg_dump would you? Exactly what 8.4.x\n>> release is this, anyway?\n>>\n>>\n>>\n> Tom, thanks for replying, yes, we tried it with postgres postgres 9.1 and\n> 9.2 and the behavior is exactly the same. The production version is 8.4.9\n>\n> Greetings,\n>\n> sergio.\n>\n>",
"msg_date": "Thu, 11 Oct 2012 18:46:04 -0300",
"msg_from": "Sergio Gabriel Rodriguez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "Sergio Gabriel Rodriguez <[email protected]> writes:\n> I tried with Postgresql 9.2 and the process used to take almost a day\n> and a half, was significantly reduced to 6 hours, before failing even used\n> to take four hours. My question now is, how long should it take the backup\n> for a 200GB database with 80% of large objects?\n\nIt's pretty hard to say without knowing a lot more info about your system\nthan you provided. One thing that would shed some light is if you spent\nsome time finding out where the time is going --- is the system\nconstantly I/O busy, or is it CPU-bound, and if so in which process,\npg_dump or the connected backend?\n\nAlso, how many large objects is that? (If you don't know already,\n\"select count(*) from pg_largeobject_metadata\" would tell you.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 11 Oct 2012 18:16:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "On 10/11/2012 05:46 PM, Sergio Gabriel Rodriguez wrote:\n> Hi,\n> I tried with Postgresql 9.2 and the process used to take almost a \n> day and a half, was significantly reduced to 6 hours, before failing \n> even used to take four hours. My question now is, how long should it \n> take the backup for a 200GB database with 80% of large objects?\nRegards, Sergio.\nThat depends on several things.\n\n>\n> Hp proliant Xeon G5\n> 32 GB RAM\n> OS SLES 10 + logs --> raid 6\n> data-->raid 6\nCan you share your postgresql.conf here?\nWhich filesystem are you using for your data directory?\nWhat options are you using to do the backup?\n\n>\n> thanks!\n>\n> On Thu, Sep 20, 2012 at 12:53 PM, Sergio Gabriel Rodriguez \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> On Thu, Sep 20, 2012 at 11:35 AM, Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> You wouldn't happen to be\n> trying to use a 9.0 or later pg_dump would you? Exactly what\n> 8.4.x\n> release is this, anyway?\n>\n>\n>\n> Tom, thanks for replying, yes, we tried it with postgres postgres\n> 9.1 and 9.2 and the behavior is exactly the same. The production\n> version is 8.4.9\n>\n> Greetings,\n>\n> sergio.\n>\n>\n\n-- \n\nMarcos Luis Ortíz Valmaseda\nabout.me/marcosortiz <http://about.me/marcosortiz>\n@marcosluis2186 <http://twitter.com/marcosluis2186>\n\n\n\n10mo. ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS INFORMATICAS...\nCONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION\n\nhttp://www.uci.cu\nhttp://www.facebook.com/universidad.uci\nhttp://www.flickr.com/photos/universidad_uci",
"msg_date": "Thu, 11 Oct 2012 18:26:19 -0400",
"msg_from": "Marcos Ortiz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "On Thu, Oct 11, 2012 at 7:16 PM, Tom Lane <[email protected]> wrote:\n\n>\n> It's pretty hard to say without knowing a lot more info about your system\n> than you provided. One thing that would shed some light is if you spent\n> some time finding out where the time is going --- is the system\n> constantly I/O busy, or is it CPU-bound, and if so in which process,\n> pg_dump or the connected backend?\n>\n>\n the greatest amount of time is lost in I/O busy.\n\ndatabase_test=# select count(*) from pg_largeobject_metadata;\n count\n---------\n 5231973\n(1 row)\n\nI never use oprofile, but for a few hours into the process, I could take\nthis report:\n\n\nopreport -l /var/lib/pgsql/bin/pg_dump\nUsing /var/lib/oprofile/samples/ for samples directory.\nCPU: Core 2, speed 2333.42 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit\nmask of 0x00 (Unhalted core cycles) count 100000\nsamples % symbol name\n1202449 56.5535 sortDumpableObjects\n174626 8.2130 DOTypeNameCompare\n81181 3.8181 DeflateCompressorZlib\n70640 3.3223 _WriteByte\n68020 3.1991 DOCatalogIdCompare\n53789 2.5298 WriteInt\n39797 1.8717 WriteToc\n38252 1.7991 WriteDataToArchive\n32947 1.5496 WriteStr\n32488 1.5280 pg_qsort\n30122 1.4167 dumpTableData_copy\n27706 1.3031 dumpDumpableObject\n26078 1.2265 dumpBlobs\n25591 1.2036 _tocEntryRequired\n23030 1.0831 WriteData\n21171 0.9957 buildTocEntryArrays\n20825 0.9794 _WriteData\n18936 0.8906 _WriteBuf\n18113 0.8519 BuildArchiveDependencies\n12607 0.5929 findComments\n11642 0.5475 EndCompressor\n10833 0.5095 _CustomWriteFunc\n10562 0.4968 WriteDataChunks\n10247 0.4819 dumpBlob\n5947 0.2797 EndBlob\n5824 0.2739 _EndBlob\n5047 0.2374 main\n5030 0.2366 dumpComment\n4959 0.2332 AllocateCompressor\n4762 0.2240 dumpSecLabel\n4705 0.2213 StartBlob\n4052 0.1906 WriteOffset\n3285 0.1545 ArchiveEntry\n2640 0.1242 _StartBlob\n2391 0.1125 pg_calloc\n2233 0.1050 findObjectByDumpId\n2197 0.1033 SetArchiveRestoreOptions\n2149 
0.1011 pg_strdup\n1760 0.0828 getDumpableObjects\n1311 0.0617 ParseCompressionOption\n1288 0.0606 med3\n1248 0.0587 _WriteExtraToc\n944 0.0444 AssignDumpId\n916 0.0431 findSecLabels\n788 0.0371 pg_malloc\n340 0.0160 addObjectDependency\n317 0.0149 _ArchiveEntry\n144 0.0068 swapfunc\n72 0.0034 ScanKeywordLookup\n60 0.0028 findObjectByCatalogId\n41 0.0019 fmtId\n27 0.0013 ExecuteSqlQuery\n20 9.4e-04 dumpTable\n10 4.7e-04 getTableAttrs\n8 3.8e-04 fmtCopyColumnList\n6 2.8e-04 shouldPrintColumn\n5 2.4e-04 findObjectByOid\n3 1.4e-04 dumpFunc\n3 1.4e-04 format_function_signature\n3 1.4e-04 getTypes\n2 9.4e-05 _StartData\n2 9.4e-05 buildACLCommands\n2 9.4e-05 findLoop\n2 9.4e-05 getTables\n2 9.4e-05 parseOidArray\n2 9.4e-05 selectSourceSchema\n1 4.7e-05 TocIDRequired\n1 4.7e-05 _EndData\n1 4.7e-05 archprintf\n1 4.7e-05 dumpACL\n1 4.7e-05 dumpCollation\n1 4.7e-05 dumpConstraint\n1 4.7e-05 dumpOpr\n1 4.7e-05 expand_schema_name_patterns\n1 4.7e-05 findDumpableDependencies\n1 4.7e-05 fmtQualifiedId\n1 4.7e-05 getCollations\n1 4.7e-05 getExtensions\n1 4.7e-05 getFormattedTypeName\n1 4.7e-05 getIndexes\n1 4.7e-05 makeTableDataInfo\n1 4.7e-05 vwrite_msg\n\n\nthank you very much for your help\n\nregards.\n\nSergio",
"msg_date": "Fri, 12 Oct 2012 19:35:06 -0300",
"msg_from": "Sergio Gabriel Rodriguez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "Sergio Gabriel Rodriguez <[email protected]> writes:\n> On Thu, Oct 11, 2012 at 7:16 PM, Tom Lane <[email protected]> wrote:\n>> It's pretty hard to say without knowing a lot more info about your system\n>> than you provided. One thing that would shed some light is if you spent\n>> some time finding out where the time is going --- is the system\n>> constantly I/O busy, or is it CPU-bound, and if so in which process,\n>> pg_dump or the connected backend?\n\n> the greatest amount of time is lost in I/O busy.\n\nIn that case there's not going to be a whole lot you can do about it,\nprobably. Or at least not that's very practical --- I assume \"buy\nfaster disks\" isn't a helpful answer.\n\nIf the blobs are relatively static, it's conceivable that clustering\npg_largeobject would help, but you're probably not going to want to take\ndown your database for as long as that would take --- and the potential\ngains are unclear anyway.\n\n> I never use oprofile, but for a few hours into the process, I could take\n> this report:\n\n> 1202449 56.5535 sortDumpableObjects\n\nHm. I suspect a lot of that has to do with the large objects; and it's\nreally overkill to treat them as full-fledged objects since they never\nhave unique dependencies. This wasn't a problem when commit\nc0d5be5d6a736d2ee8141e920bc3de8e001bf6d9 went in, but I think now it\nmight be because of the additional constraints added in commit\na1ef01fe163b304760088e3e30eb22036910a495. I wonder if it's time to try\nto optimize pg_dump's handling of blobs a bit better. But still, any\nsuch fix probably wouldn't make a huge difference for you. Most of the\ntime is going into pushing the blob data around, I think.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 12 Oct 2012 19:18:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with large objects dump"
},
{
"msg_contents": "I wrote:\n> Sergio Gabriel Rodriguez <[email protected]> writes:\n>> I never use oprofile, but for a few hours into the process, I could take\n>> this report:\n>> 1202449 56.5535 sortDumpableObjects\n\n> Hm. I suspect a lot of that has to do with the large objects; and it's\n> really overkill to treat them as full-fledged objects since they never\n> have unique dependencies. This wasn't a problem when commit\n> c0d5be5d6a736d2ee8141e920bc3de8e001bf6d9 went in, but I think now it\n> might be because of the additional constraints added in commit\n> a1ef01fe163b304760088e3e30eb22036910a495. I wonder if it's time to try\n> to optimize pg_dump's handling of blobs a bit better. But still, any\n> such fix probably wouldn't make a huge difference for you. Most of the\n> time is going into pushing the blob data around, I think.\n\nFor fun, I tried adding 5 million empty blobs to the standard regression\ndatabase, and then did a pg_dump. It took a bit under 9 minutes on my\nworkstation. oprofile showed about 32% of pg_dump's runtime going into\nsortDumpableObjects, which might make you think that's worth optimizing\n... until you look at the bigger picture system-wide:\n\n samples| %|\n------------------\n 727394 59.4098 kernel\n 264874 21.6336 postgres\n 136734 11.1677 /lib64/libc-2.14.90.so\n 39878 3.2570 pg_dump\n 37025 3.0240 libpq.so.5.6\n 17964 1.4672 /usr/bin/wc\n 354 0.0289 /usr/bin/oprofiled\n\nSo actually sortDumpableObjects took only about 1% of the CPU cycles.\nAnd remember this is with empty objects. If we'd been shoving 200GB of\ndata through the dump, the data pipeline would surely have swamped all\nelse.\n\nSo I think the original assumption that we didn't need to optimize\npg_dump's object management infrastructure for blobs still holds good.\nIf there's anything that is worth fixing here, it's the number of server\nroundtrips being used ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 12 Oct 2012 21:31:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with large objects dump"
},
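Tom's "bigger picture" arithmetic above can be reproduced directly from the quoted sample counts: pg_dump is only a small slice of the system-wide samples, so even a function consuming 32% of pg_dump's own runtime is roughly 1% of total CPU. A quick sketch (the per-image counts are copied from the oprofile table above; the 32% figure is from the text):

```python
# System-wide oprofile sample counts quoted in the message above.
samples = {
    "kernel": 727394,
    "postgres": 264874,
    "libc": 136734,
    "pg_dump": 39878,
    "libpq": 37025,
    "wc": 17964,
    "oprofiled": 354,
}

total = sum(samples.values())
pg_dump_share = samples["pg_dump"] / total       # pg_dump's slice of all CPU cycles
sort_share_of_pg_dump = 0.32                     # sortDumpableObjects within pg_dump
sort_share_overall = pg_dump_share * sort_share_of_pg_dump

print(f"pg_dump: {pg_dump_share:.1%} of all samples")        # -> 3.3%
print(f"sortDumpableObjects: {sort_share_overall:.1%}")      # -> 1.0% system-wide
```

This is why optimizing sortDumpableObjects alone would barely move the wall-clock time of the dump.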
{
"msg_contents": "On Fri, Oct 12, 2012 at 10:31 PM, Tom Lane <[email protected]> wrote:\n\n> So I think the original assumption that we didn't need to optimize\n> pg_dump's object management infrastructure for blobs still holds good.\n> If there's anything that is worth fixing here, it's the number of server\n> roundtrips being used ...\n>\n\n\nI found something similar\n\n samples| %|\n------------------\n233391664 60.5655 no-vmlinux\n 78789949 20.4461 libz.so.1.2.3\n 31984753 8.3001 postgres\n 21564413 5.5960 libc-2.4.so\n 4086941 1.0606 ld-2.4.so\n 2427151 0.6298 bash\n 2355895 0.6114 libc-2.4.so\n 2173558 0.5640 pg_dump\n 1771931 0.4598 oprofiled\n\nIs there anything I can do to improve this?\n\nThanks\n\nSergio",
"msg_date": "Mon, 15 Oct 2012 07:19:30 -0300",
"msg_from": "Sergio Gabriel Rodriguez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with large objects dump"
}
] |
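For anyone wanting to reproduce Tom's empty-blob experiment from the thread above, the test data can plausibly be generated with something like the following. This is an assumption about the setup, not his actual script; `lo_create(0)` asks the server to assign the large object's OID:

```sql
-- In a scratch copy of the regression database: create 5 million empty
-- large objects, then time a pg_dump of the result.
SELECT count(lo_create(0)) FROM generate_series(1, 5000000);
```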
[
{
"msg_contents": "Hello!\n\nI'm one of the developers of the Ruby on Rails web framework.\n\nIn some situations, the framework generates an empty transaction block.\nI.e. we sent a BEGIN and then later a COMMIT, with no other queries in\nthe middle.\n\nWe currently can't avoid doing this, because a user *may* send queries\ninside the transaction.\n\nI am considering the possibility of making the transaction lazy. So we\nwould delay sending the BEGIN until we have the first query ready to go.\nIf that query never comes then neither BEGIN nor COMMIT would ever be sent.\n\nSo my question is: is this a worthwhile optimisation to make? In\nparticular, I am wondering whether empty transactions increase the work\nthe database has to do when there are several other connections open?\nI.e. does it cause contention?\n\nIf anyone has any insight about other database servers that would also\nbe welcome.\n\nThanks!\n\nJon Leighton\n\n-- \nhttp://jonathanleighton.com/\n\n",
"msg_date": "Fri, 21 Sep 2012 11:46:58 +0100",
"msg_from": "Jon Leighton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cost of opening and closing an empty transaction"
},
{
"msg_contents": "On Fri, Sep 21, 2012 at 7:46 AM, Jon Leighton <[email protected]> wrote:\n> So my question is: is this a worthwhile optimisation to make? In\n> particular, I am wondering whether empty transactions increase the work\n> the database has to do when there are several other connections open?\n> I.e. does it cause contention?\n\nI found myself on a similar situation, with a different framework\n(SQLAlchemy), and it turned out to be worthwhile, mainly because\nregardless of the load generated on the database, which may or may not\nbe of consequence to a particular application, the very significant\nsaving of at least 4 roundtrips (send begin, receive ack, send commit,\nreceive ack) can be worth the effort.\n\nIn particular, my application had many and very likely such cases\nwhere no query would be issued (because of caching), and we were able\nto reduce overall latency from 20ms to 1ms. Presumably, the high\nlatencies were due to busy links since it was all on a private (but\nvery busy) network.\n\nNow, from the point of view of what resources would this idle\ntransaction consume on the server, you will at least consume a\nconnection (and hold a worker process idle for no reason). If you want\nhigh concurrency, you don't want to take a connection from the\nconnection pool unless you're going to use it, because you'll be\nblocking other clients.\n\n",
"msg_date": "Sat, 22 Sep 2012 00:08:15 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of opening and closing an empty transaction"
},
{
"msg_contents": "Jon Leighton wrote:\n> I'm one of the developers of the Ruby on Rails web framework.\n> \n> In some situations, the framework generates an empty transaction block.\n> I.e. we sent a BEGIN and then later a COMMIT, with no other queries in\n> the middle.\n> \n> We currently can't avoid doing this, because a user *may* send queries\n> inside the transaction.\n> \n> I am considering the possibility of making the transaction lazy. So we\n> would delay sending the BEGIN until we have the first query ready to go.\n> If that query never comes then neither BEGIN nor COMMIT would ever be sent.\n> \n> So my question is: is this a worthwhile optimisation to make? In\n> particular, I am wondering whether empty transactions increase the work\n> the database has to do when there are several other connections open?\n> I.e. does it cause contention?\n> \n> If anyone has any insight about other database servers that would also\n> be welcome.\n\nThe one thing that will be the same for all databases is that\nsaving the two client-server round trips for BEGIN and COMMIT\nis probably worth the effort if it happens often enough.\n\nThe question of which resources an empty transaction consumes\nis probably database specific; for PostgreSQL the expense is\nnot high, as far as I can tell.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 24 Sep 2012 11:48:23 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of opening and closing an empty transaction"
},
{
"msg_contents": "N.B. I realize this is an ancient email, but there's a significant issue that\ndidn't get raised. Opening a transaction and leaving it idle can be a major\npain on an MVCC database like PostgreSQL. The reason is that this is the\ndreaded 'idle in transaction' state. If these transactions become long lived\n(waiting for a form submit, etc.) they can easily become oldest transaction in\nthe cluster, forcing the system to keep data for snapshots that far back. I'm\nnot an Oracle expert, but I understand this is an issue there as well, since\nthey have to keep replay logs to recreate that state as well. So besides the\nwasted round trips, the issue of idle open transactions can be significant.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n\nOn Mon, Sep 24, 2012 at 11:48:23AM +0200, Albe Laurenz wrote:\n> Jon Leighton wrote:\n> > I'm one of the developers of the Ruby on Rails web framework.\n> > \n> > In some situations, the framework generates an empty transaction\n> block.\n> > I.e. we sent a BEGIN and then later a COMMIT, with no other queries in\n> > the middle.\n> > \n> > We currently can't avoid doing this, because a user *may* send queries\n> > inside the transaction.\n> > \n> > I am considering the possibility of making the transaction lazy. So we\n> > would delay sending the BEGIN until we have the first query ready to\n> go.\n> > If that query never comes then neither BEGIN nor COMMIT would ever be\n> sent.\n> > \n> > So my question is: is this a worthwhile optimisation to make? In\n> > particular, I am wondering whether empty transactions increase the\n> work\n> > the database has to do when there are several other connections open?\n> > I.e. 
does it cause contention?\n> > \n> > If anyone has any insight about other database servers that would also\n> > be welcome.\n> \n> The one thing that will be the same for all databases is that\n> saving the two client-server roud trips for BEGIN and COMMIT\n> is probably worth the effort if it happens often enough.\n> \n> The question which resources an empty transaction consumes\n> is probably database specific; for PostgreSQL the expense is\n> not high, as far as I can tell.\n> \n> Yours,\n> Laurenz Albe\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 May 2013 10:22:42 -0500",
"msg_from": "Ross Reedstrom <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of opening and closing an empty transaction"
}
] |
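The "lazy BEGIN" idea discussed in this thread can be sketched on the framework side: wrap the connection so that BEGIN is only sent once the first statement inside the block actually executes, and skip COMMIT entirely if nothing ran. A minimal Python sketch with a stand-in connection object (the `FakeConn` class and its `send` method are illustrative, not any real driver or Rails API):

```python
class LazyTransaction:
    """Defer BEGIN until the first query; send nothing for an empty block."""
    def __init__(self, conn):
        self.conn = conn
        self.began = False

    def execute(self, sql):
        if not self.began:
            self.conn.send("BEGIN")   # first real query: open the transaction now
            self.began = True
        self.conn.send(sql)

    def commit(self):
        if self.began:                # no BEGIN was ever sent -> no COMMIT needed
            self.conn.send("COMMIT")


class FakeConn:
    """Stand-in connection that just records what would hit the wire."""
    def __init__(self):
        self.sent = []
    def send(self, sql):
        self.sent.append(sql)


# Empty block: no network traffic at all.
conn = FakeConn()
tx = LazyTransaction(conn)
tx.commit()
print(conn.sent)    # -> []

# Non-empty block: BEGIN is injected just before the first statement.
conn2 = FakeConn()
tx2 = LazyTransaction(conn2)
tx2.execute("UPDATE t SET x = 1")
tx2.commit()
print(conn2.sent)   # -> ['BEGIN', 'UPDATE t SET x = 1', 'COMMIT']
```

Besides saving the round trips discussed above, this also avoids ever entering the "idle in transaction" state for blocks that turn out to be empty.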
[
{
"msg_contents": "I've run into a query planner issue while querying my data with a\nlarge offset (100,000). My schema is\nhttp://pgsql.privatepaste.com/ce7cc05a66 . I have about 220,000 rows\nin audit_spoke.audits. The original query\nhttp://pgsql.privatepaste.com/61cbdd51c2 ( explain:\nhttp://explain.depesz.com/s/84d ) takes quite a bit longer than this query\nhttp://pgsql.privatepaste.com/45ad8c7135 ( explain:\nhttp://explain.depesz.com/s/KmT ). Is this just an edge case for the\nquery planner or am I doing something wrong in the first query?\n",
"msg_date": "Fri, 21 Sep 2012 12:08:28 -0500",
"msg_from": "Brandon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Planner Optimization?"
}
] |
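A large OFFSET forces the executor to generate and discard every preceding row, so plans commonly degrade as the offset grows. When that is the bottleneck, keyset ("seek method") pagination avoids it. A hedged sketch against the table named in the message; the `id` ordering column is an assumption, since the linked schema paste is no longer available:

```sql
-- OFFSET pagination: the server still produces and throws away 100000 rows.
SELECT * FROM audit_spoke.audits ORDER BY id LIMIT 50 OFFSET 100000;

-- Keyset pagination: remember the last id seen and seek past it instead.
SELECT * FROM audit_spoke.audits
WHERE id > :last_seen_id   -- id of the previous page's final row
ORDER BY id
LIMIT 50;
```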
[
{
"msg_contents": "Hi,\nI'm using an Amazon EC2 instance with the following spec and the\napplication that I'm running uses a postgres DB 9.1.\nThe app has 3 main cron jobs.\n\n*Ubuntu 12, High-Memory Extra Large Instance\n17.1 GB of memory\n6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n420 GB of instance storage\n64-bit platform*\n\nI've changed the main default values under file *postgresql.conf* to:\nshared_buffers = 4GB\nwork_mem = 16MB\nwal_buffers = 16MB\ncheckpoint_segments = 32\neffective_cache_size = 8GB\n\nWhen I run the app, after an hour or two, free -m looks like below and the\ncrons can't run due to memory loss or similar (I'm new to postgres and db\nadmin).\nThanks!\n\nfree -m, errors:\n\ntotal used free shared buffers cached\nMem: 17079 13742 3337 0 64 11882\n-/+ buffers/cache: 1796 15283\nSwap: 511 0 511\n\ntotal used *free* shared buffers cached\nMem: 17079 16833 *245 *0 42 14583\n-/+ buffers/cache: 2207 14871\nSwap: 511 0 511\n\n**errors:\n*DBI connect('database=---;host=localhost','postgres',...) failed: could\nnot fork new process for connection: Cannot allocate memory*\ncould not fork new process for connection: Cannot allocate memory\n\nand\nexecute failed: ERROR: out of memory\nDETAIL: Failed on request of size 968. [for Statement \"\nSELECT DISTINCT....\n\nThank you!",
"msg_date": "Mon, 24 Sep 2012 08:41:13 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory issues"
}
] |
[
{
"msg_contents": "Hi,\nI'm using an Amazon EC2 instance with the following spec and the\napplication that I'm running uses a postgres DB 9.1.\nThe app has 3 main cron jobs.\n\n*Ubuntu 12, High-Memory Extra Large Instance\n17.1 GB of memory\n6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n420 GB of instance storage\n64-bit platform*\n\nI've changed the main default values under file *postgresql.conf* to:\nshared_buffers = 4GB\nwork_mem = 16MB\nwal_buffers = 16MB\ncheckpoint_segments = 32\neffective_cache_size = 8GB\n\nWhen I run the app, after an hour or two, free -m looks like below and the\ncrons can't run due to memory loss or similar (I'm new to postgres and db\nadmin).\nThanks!\n\nfree -m, errors:\n\ntotal used free shared buffers cached\nMem: 17079 13742 3337 0 64 11882\n-/+ buffers/cache: 1796 15283\nSwap: 511 0 511\n\ntotal used *free* shared buffers cached\nMem: 17079 16833 *245 *0 42 14583\n-/+ buffers/cache: 2207 14871\nSwap: 511 0 511\n\n**free above stays low even when nothing is running.\n\n**errors:\n*DBI connect('database=---;host=localhost','postgres',...) failed: could\nnot fork new process for connection: Cannot allocate memory*\ncould not fork new process for connection: Cannot allocate memory\n\nand\nexecute failed: ERROR: out of memory\nDETAIL: Failed on request of size 968. [for Statement \"\nSELECT DISTINCT....\n\nThank you!",
"msg_date": "Mon, 24 Sep 2012 08:45:06 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory issues"
},
{
"msg_contents": "On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman <[email protected]> wrote:\n>\n>\n> Hi,\n> I'm using and Amazon ec2 instance with the following spec and the\n> application that I'm running uses a postgres DB 9.1.\n> The app has 3 main cron jobs.\n>\n> Ubuntu 12, High-Memory Extra Large Instance\n> 17.1 GB of memory\n> 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n> 420 GB of instance storage\n> 64-bit platform\n>\n> I've changed the main default values under file postgresql.conf to:\n> shared_buffers = 4GB\n> work_mem = 16MB\n> wal_buffers = 16MB\n> checkpoint_segments = 32\n> effective_cache_size = 8GB\n>\n> When I run the app, after an hour or two, free -m looks like below ans the\n> crons can't run due to memory loss or similar (i'm new to postgres and db\n> admin).\n> Thanks!\n>\n> free -m, errors:\n>\n> total used free shared buffers cached\n> Mem: 17079 13742 3337 0 64 11882\n> -/+ buffers/cache: 1796 15283\n> Swap: 511 0 511\n\nYou have 11.8G cached, that's basically free memory on demand.\n\n> total used free shared buffers cached\n> Mem: 17079 16833 245 0 42 14583\n> -/+ buffers/cache: 2207 14871\n> Swap: 511 0 511\n\nHere you have 14.5G cached, again that's free memory so to speak.\nI.e. when something needs it it gets allocated.\n\n> **free above stays low even when nothing is running.\n>\n>\n> **errors:\n> DBI connect('database=---;host=localhost','postgres',...) failed: could not\n> fork new process for connection: Cannot allocate memory\n> could not fork new process for connection: Cannot allocate memory\n\nThis error is happening in your client process. Maybe it's 32 bit or\nsomething and running out of local memory in its process space? Maybe\nmemory is so fragmented that no large blocks can get allocated or\nsomething? Either way, your machine has plenty of memory according to\nfree. 
BTW, it's pretty common for folks new to unix to mis-read free\nand not realize that cached memory + free memory is what's really\navailable.\n\n",
"msg_date": "Tue, 25 Sep 2012 18:56:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory issues"
},
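Scott's point above — that "cached" memory is effectively reclaimable, so the memory actually available is roughly free + buffers + cached (the "-/+ buffers/cache" line printed by older versions of `free`) — can be checked against the numbers Shiran posted. A small sketch:

```python
# Figures in MB from the second `free -m` output quoted in this thread.
mem_total, mem_used, mem_free = 17079, 16833, 245
buffers, cached = 42, 14583

# What the "-/+ buffers/cache" line reports on older versions of free:
used_minus_cache = mem_used - buffers - cached   # memory apps are really using
free_plus_cache = mem_free + buffers + cached    # memory really available

print(used_minus_cache, free_plus_cache)
# -> 2208 14870 (free -m itself printed 2207/14871; per-field MB rounding)
```

So despite "free" showing only 245 MB, roughly 14.5 GB was available to be allocated on demand, which is why the out-of-memory error points at the client process rather than the machine.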
{
"msg_contents": "Hi\nThanks for your answer.\nI understood that the server is ok memory wise.\nWhat can I check on the client side or the DB queries?\n\nThank u.\nOn Wed, Sep 26, 2012 at 2:56 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Sep 24, 2012 at 12:45 AM, Shiran Kleiderman <[email protected]>\n> wrote:\n> >\n> >\n> > Hi,\n> > I'm using and Amazon ec2 instance with the following spec and the\n> > application that I'm running uses a postgres DB 9.1.\n> > The app has 3 main cron jobs.\n> >\n> > Ubuntu 12, High-Memory Extra Large Instance\n> > 17.1 GB of memory\n> > 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n> > 420 GB of instance storage\n> > 64-bit platform\n> >\n> > I've changed the main default values under file postgresql.conf to:\n> > shared_buffers = 4GB\n> > work_mem = 16MB\n> > wal_buffers = 16MB\n> > checkpoint_segments = 32\n> > effective_cache_size = 8GB\n> >\n> > When I run the app, after an hour or two, free -m looks like below ans\n> the\n> > crons can't run due to memory loss or similar (i'm new to postgres and db\n> > admin).\n> > Thanks!\n> >\n> > free -m, errors:\n> >\n> > total used free shared buffers cached\n> > Mem: 17079 13742 3337 0 64 11882\n> > -/+ buffers/cache: 1796 15283\n> > Swap: 511 0 511\n>\n> You have 11.8G cached, that's basically free memory on demand.\n>\n> > total used free shared buffers cached\n> > Mem: 17079 16833 245 0 42 14583\n> > -/+ buffers/cache: 2207 14871\n> > Swap: 511 0 511\n>\n> Here you have 14.5G cached, again that's free memory so to speak.\n> I.e. when something needs it it gets allocated.\n>\n> > **free above stays low even when nothing is running.\n> >\n> >\n> > **errors:\n> > DBI connect('database=---;host=localhost','postgres',...) failed: could\n> not\n> > fork new process for connection: Cannot allocate memory\n> > could not fork new process for connection: Cannot allocate memory\n>\n> This error is happening in your client process. 
Maybe it's 32 bit or\n> something and running out of local memory in its process space? Maybe\n> memory is so fragmented that no large blocks can get allocated or\n> something? Either way, your machine has plenty of memory according to\n> free. BTW, it's pretty common for folks new to unix to mis-read free\n> and not realize that cached memory + free memory is what's really\n> available.\n>\n\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Wed, 26 Sep 2012 03:00:53 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory issues"
},
{
"msg_contents": "On Tue, Sep 25, 2012 at 7:00 PM, Shiran Kleiderman <[email protected]> wrote:\n>\n> Hi\n> Thanks for your answer.\n> I understood that the server is ok memory wise.\n> What can I check on the client side or the DB queries?\n\nWell you're connecting to localhost so I'd expect you to show a memory\nissue in free I'm not seeing. Are you really connecting to localhost\nor not?\n\n",
"msg_date": "Wed, 26 Sep 2012 10:29:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory issues"
},
{
"msg_contents": "Hi\nThanks again.\nRight now, this is *free -m and ps aux* and non of the crons can run -\ncan't allocate memory.\n\ncif@domU-12-31-39-08-06-20:~$ free -m\n total used free shared buffers cached\nMem: 17079 12051 5028 0 270 9578\n-/+ buffers/cache: 2202 14877\nSwap: 511 0 511\n\n\ncif@domU-12-31-39-08-06-20:~$ ps aux\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nroot 1 0.0 0.0 24316 2280 ? Ss Sep24 0:00 /sbin/init\nroot 2 0.0 0.0 0 0 ? S Sep24 0:00 [kthreadd]\nroot 3 0.0 0.0 0 0 ? S Sep24 0:00\n[ksoftirqd/0]\nroot 4 0.0 0.0 0 0 ? S Sep24 0:00\n[kworker/0:0]\nroot 5 0.0 0.0 0 0 ? S Sep24 0:00\n[kworker/u:0]\nroot 6 0.0 0.0 0 0 ? S Sep24 0:00\n[migration/0]\nroot 7 0.0 0.0 0 0 ? S Sep24 0:00\n[watchdog/0]\nroot 8 0.0 0.0 0 0 ? S Sep24 0:00\n[migration/1]\nroot 9 0.0 0.0 0 0 ? S Sep24 0:00\n[kworker/1:0]\nroot 10 0.0 0.0 0 0 ? S Sep24 0:01\n[ksoftirqd/1]\nroot 11 0.0 0.0 0 0 ? S Sep24 0:00\n[watchdog/1]\nroot 12 0.0 0.0 0 0 ? S< Sep24 0:00 [cpuset]\nroot 13 0.0 0.0 0 0 ? S< Sep24 0:00 [khelper]\nroot 14 0.0 0.0 0 0 ? S Sep24 0:00 [kdevtmpfs]\nroot 15 0.0 0.0 0 0 ? S< Sep24 0:00 [netns]\nroot 16 0.0 0.0 0 0 ? S Sep24 0:00\n[kworker/u:1]\nroot 17 0.0 0.0 0 0 ? S Sep24 0:00 [xenwatch]\nroot 18 0.0 0.0 0 0 ? S Sep24 0:00 [xenbus]\nroot 19 0.0 0.0 0 0 ? S Sep24 0:00\n[sync_supers]\nroot 20 0.0 0.0 0 0 ? S Sep24 0:00\n[bdi-default]\nroot 21 0.0 0.0 0 0 ? S< Sep24 0:00\n[kintegrityd]\nroot 22 0.0 0.0 0 0 ? S< Sep24 0:00 [kblockd]\nroot 23 0.0 0.0 0 0 ? S< Sep24 0:00 [ata_sff]\nroot 24 0.0 0.0 0 0 ? S Sep24 0:00 [khubd]\nroot 25 0.0 0.0 0 0 ? S< Sep24 0:00 [md]\nroot 26 0.0 0.0 0 0 ? S Sep24 0:02\n[kworker/0:1]\nroot 28 0.0 0.0 0 0 ? S Sep24 0:00\n[khungtaskd]\nroot 29 0.0 0.0 0 0 ? S Sep24 0:00 [kswapd0]\nroot 30 0.0 0.0 0 0 ? SN Sep24 0:00 [ksmd]\nroot 31 0.0 0.0 0 0 ? S Sep24 0:00\n[fsnotify_mark]\nroot 32 0.0 0.0 0 0 ? S Sep24 0:00\n[ecryptfs-kthrea]\nroot 33 0.0 0.0 0 0 ? S< Sep24 0:00 [crypto]\nroot 41 0.0 0.0 0 0 ? 
S< Sep24 0:00 [kthrotld]\nroot 42 0.0 0.0 0 0 ? S Sep24 0:00 [khvcd]\nroot 43 0.0 0.0 0 0 ? S Sep24 0:01\n[kworker/1:1]\nroot 62 0.0 0.0 0 0 ? S< Sep24 0:00\n[devfreq_wq]\nroot 176 0.0 0.0 0 0 ? S< Sep24 0:00 [kdmflush]\nroot 187 0.0 0.0 0 0 ? S Sep24 0:01\n[jbd2/xvda1-8]\nroot 188 0.0 0.0 0 0 ? S< Sep24 0:00\n[ext4-dio-unwrit]\nroot 258 0.0 0.0 17224 640 ? S Sep24 0:00\nupstart-udev-bridge --daemon\nroot 265 0.0 0.0 21460 1196 ? Ss Sep24 0:00\n/sbin/udevd --daemon\nroot 328 0.0 0.0 21456 712 ? S Sep24 0:00\n/sbin/udevd --daemon\nroot 329 0.0 0.0 21456 716 ? S Sep24 0:00\n/sbin/udevd --daemon\nroot 389 0.0 0.0 15180 392 ? S Sep24 0:00\nupstart-socket-bridge --daemon\nroot 419 0.0 0.0 7256 1008 ? Ss Sep24 0:00 dhclient3\n-e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf\n/var/lib/dhcp/dhclient.eth0.leases -1 eth\nroot 574 0.0 0.0 0 0 ? S Sep24 0:03\n[jbd2/dm-0-8]\nroot 575 0.0 0.0 0 0 ? S< Sep24 0:00\n[ext4-dio-unwrit]\nroot 610 0.0 0.0 49948 2880 ? Ss Sep24 0:00\n/usr/sbin/sshd -D\nsyslog 625 0.0 0.0 253708 1552 ? Sl Sep24 0:11 rsyslogd\n-c5\n102 630 0.0 0.0 23808 944 ? Ss Sep24 0:00\ndbus-daemon --system --fork --activation=upstart\nroot 687 0.0 0.0 14496 968 tty4 Ss+ Sep24 0:00\n/sbin/getty -8 38400 tty4\nroot 696 0.0 0.0 14496 972 tty5 Ss+ Sep24 0:00\n/sbin/getty -8 38400 tty5\nroot 708 0.0 0.0 14496 968 tty2 Ss+ Sep24 0:00\n/sbin/getty -8 38400 tty2\nroot 710 0.0 0.0 14496 964 tty3 Ss+ Sep24 0:00\n/sbin/getty -8 38400 tty3\nroot 715 0.0 0.0 14496 968 tty6 Ss+ Sep24 0:00\n/sbin/getty -8 38400 tty6\nroot 720 0.0 0.0 4320 660 ? Ss Sep24 0:00 acpid -c\n/etc/acpi/events -s /var/run/acpid.socket\nroot 728 0.0 11.9 2194848 2097324 ? Ss Sep24 0:09\n/usr/bin/searchd --nodetach\nroot 733 0.0 0.0 19104 928 ? Ss Sep24 0:00 cron\ndaemon 735 0.0 0.0 16900 376 ? Ss Sep24 0:00 atd\nbind 739 0.0 0.0 235540 13404 ? Ssl Sep24 0:00\n/usr/sbin/named -u bind\nmysql 755 0.0 0.2 558104 47940 ? Ssl Sep24 0:34\n/usr/sbin/mysqld\nwhoopsie 790 0.0 0.0 187576 4236 ? 
Ssl Sep24 0:00 whoopsie\nroot 924 0.0 0.0 0 0 ? S Sep24 0:00\n[flush-252:0]\nroot 999 0.0 0.0 99400 6496 ? Ss Sep24 0:04\n/usr/sbin/apache2 -k start\nwww-data 1018 0.0 1.0 427080 185684 ? S Sep24 0:10\n/usr/sbin/apache2 -k start\nwww-data 1019 0.0 1.0 427140 185852 ? S Sep24 0:33\n/usr/sbin/apache2 -k start\nroot 1032 0.0 0.1 80220 21276 ? Ss Sep24 0:17 starman\nmaster --port 5000 --daemonize -MMoose /usr/local/cif-rest-sphinx/CIF.psgi\nroot 1035 0.0 0.0 14496 968 tty1 Ss+ Sep24 0:00\n/sbin/getty -8 38400 tty1\nroot 1037 0.0 0.1 184400 28532 ? S Sep24 0:00 starman\nworker --port 5000 --daemonize -MMoose /usr/local/cif-rest-sphinx/CIF.psgi\nroot 1038 0.0 0.1 184444 28592 ? S Sep24 0:00 starman\nworker --port 5000 --daemonize -MMoose /usr/local/cif-rest-sphinx/CIF.psgi\nroot 1039 0.0 0.1 184132 28040 ? S Sep24 0:00 starman\nworker --port 5000 --daemonize -MMoose /usr/local/cif-rest-sphinx/CIF.psgi\nroot 1040 0.0 0.1 184408 28600 ? S Sep24 0:00 starman\nworker --port 5000 --daemonize -MMoose /usr/local/cif-rest-sphinx/CIF.psgi\nroot 1041 0.0 0.1 184444 28588 ? S Sep24 0:00 starman\nworker --port 5000 --daemonize -MMoose /usr/local/cif-rest-sphinx/CIF.psgi\nwww-data 1055 0.0 1.2 469948 225732 ? S Sep24 1:04\n/usr/sbin/apache2 -k start\nwww-data 1056 0.0 1.0 427180 185924 ? S Sep24 0:28\n/usr/sbin/apache2 -k start\nwww-data 3452 0.0 1.0 426964 185624 ? S Sep25 0:28\n/usr/sbin/apache2 -k start\nwww-data 3775 0.0 1.0 426900 185696 ? S Sep25 0:14\n/usr/sbin/apache2 -k start\npostgres 4717 0.0 0.6 4411584 113372 ? S Sep25 0:01\n/usr/lib/postgresql/9.1/bin/postgres -D /mnt/dbstorage/9.1/main -c\nconfig_file=/etc/postgresql/9.1/main\npostgres 4720 0.0 0.1 4413628 31392 ? Ss Sep25 0:09 postgres:\nwriter process\npostgres 4721 0.0 0.0 4413636 1808 ? Ss Sep25 0:08 postgres:\nwal writer process\npostgres 4722 0.0 0.0 4414344 3044 ? Ss Sep25 1:36 postgres:\nautovacuum launcher process\npostgres 4723 0.0 0.0 94920 1752 ? 
Ss Sep25 0:41 postgres:\nstats collector process\npostgres 4738 0.0 7.0 4491580 1229488 ? Ss Sep25 0:58 postgres:\npostgres cif 127.0.0.1(56867) idle\npostgres 4740 0.0 6.9 4417748 1221100 ? Ss Sep25 0:18 postgres:\npostgres cif 127.0.0.1(56869) idle\npostgres 4741 0.0 7.0 4425720 1228856 ? Ss Sep25 0:31 postgres:\npostgres cif 127.0.0.1(56870) idle\npostgres 4742 0.0 6.9 4417376 1220464 ? Ss Sep25 0:08 postgres:\npostgres cif 127.0.0.1(56871) idle\npostgres 4743 0.0 7.0 4421104 1225328 ? Ss Sep25 0:24 postgres:\npostgres cif 127.0.0.1(56872) idle\npostgres 4745 0.0 7.0 4421124 1225040 ? Ss Sep25 0:27 postgres:\npostgres cif 127.0.0.1(56874) idle\nwww-data 4746 0.0 1.2 455984 212284 ? S Sep25 0:30\n/usr/sbin/apache2 -k start\npostgres 4754 0.0 6.9 4417728 1220988 ? Ss Sep25 0:30 postgres:\npostgres cif 127.0.0.1(56879) idle\nwww-data 4755 0.0 0.9 403836 163240 ? S Sep25 0:06\n/usr/sbin/apache2 -k start\npostgres 4765 0.0 6.9 4482528 1220704 ? Ss Sep25 0:23 postgres:\npostgres cif 127.0.0.1(56881) idle\nwww-data 4995 0.0 0.9 403464 162872 ? S Sep25 0:03\n/usr/sbin/apache2 -k start\nwww-data 4997 0.0 0.3 305460 64996 ? S Sep25 0:00\n/usr/sbin/apache2 -k start\npostgres 5002 0.0 6.9 4417384 1220172 ? Ss Sep25 0:09 postgres:\npostgres cif 127.0.0.1(56895) idle\npostgres 5003 0.0 7.1 4417920 1243772 ? Ss Sep25 0:06 postgres:\npostgres cif 127.0.0.1(56896) idle\nroot 5218 0.0 0.0 0 0 ? S Sep25 0:00\n[flush-202:1]\nroot 5820 0.0 0.0 73352 3568 ? Ss 16:37 0:00 sshd:\nubuntu [priv]\nubuntu 5950 0.0 0.0 73352 1676 ? 
S 16:37 0:00 sshd:\nubuntu@pts/0\nubuntu 5952 0.6 0.0 25872 8432 pts/0 Ss 16:37 0:00 -bash\nroot 6048 0.0 0.0 41896 1752 pts/0 S 16:38 0:00 sudo su -\ncif\ncif 6049 0.0 0.0 39516 1388 pts/0 S 16:38 0:00 su - cif\ncif 6050 0.8 0.0 25912 8472 pts/0 S 16:38 0:00 -su\ncif 6161 0.0 0.0 16872 1272 pts/0 R+ 16:38 0:00 ps aux\n\n\nOn Wed, Sep 26, 2012 at 6:29 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Tue, Sep 25, 2012 at 7:00 PM, Shiran Kleiderman <[email protected]>\n> wrote:\n> >\n> > Hi\n> > Thanks for your answer.\n> > I understood that the server is ok memory wise.\n> > What can I check on the client side or the DB queries?\n>\n> Well you're connecting to localhost so I'd expect you to show a memory\n> issue in free I'm not seeing. Are you really connecting to localhost\n> or not?\n>\n\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Wed, 26 Sep 2012 18:41:48 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory issues"
},
{
"msg_contents": "On Wed, Sep 26, 2012 at 10:41 AM, Shiran Kleiderman <[email protected]> wrote:\n> Hi\n> Thanks again.\n> Right now, this is free -m and ps aux and non of the crons can run - can't\n> allocate memory.\n\nOK, so is the machine you're running free -m on the same as the one\nrunning postgresql and the same one you're running cron jobs on and\nthe same one you're running apache on?\n\nAlso please don't remove the cc for the list, others might have an\ninsight I'd miss.\n\n> cif@domU-12-31-39-08-06-20:~$ free -m\n> total used free shared buffers cached\n> Mem: 17079 12051 5028 0 270 9578\n> -/+ buffers/cache: 2202 14877\n> Swap: 511 0 511\n>\n\n",
"msg_date": "Wed, 26 Sep 2012 14:55:00 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory issues"
},
{
"msg_contents": "Hi\nYes, same machine.\n\nThanks for your help.\n\nOn Wed, Sep 26, 2012 at 10:55 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Sep 26, 2012 at 10:41 AM, Shiran Kleiderman <[email protected]>\n> wrote:\n> > Hi\n> > Thanks again.\n> > Right now, this is free -m and ps aux and non of the crons can run -\n> can't\n> > allocate memory.\n>\n> OK, so is the machine you're running free -m on the same as the one\n> running postgresql and the same one you're running cron jobs on and\n> the same one you're running apache on?\n>\n> Also please don't remove the cc for the list, others might have an\n> insight I'd miss.\n>\n> > cif@domU-12-31-39-08-06-20:~$ free -m\n> > total used free shared buffers cached\n> > Mem: 17079 12051 5028 0 270 9578\n> > -/+ buffers/cache: 2202 14877\n> > Swap: 511 0 511\n> >\n>\n\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Wed, 26 Sep 2012 23:00:56 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Memory issues"
},
{
"msg_contents": "OK then I'm lost. It's got to either be a bug in how amazon ec2\ninstances work or severely fragmented memory because you've got a TON\nof kernel cache available.\n\nOn Wed, Sep 26, 2012 at 3:00 PM, Shiran Kleiderman <[email protected]> wrote:\n> Hi\n> Yes, same machine.\n>\n> Thanks for your help.\n>\n>\n> On Wed, Sep 26, 2012 at 10:55 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Wed, Sep 26, 2012 at 10:41 AM, Shiran Kleiderman <[email protected]>\n>> wrote:\n>> > Hi\n>> > Thanks again.\n>> > Right now, this is free -m and ps aux and non of the crons can run -\n>> > can't\n>> > allocate memory.\n>>\n>> OK, so is the machine you're running free -m on the same as the one\n>> running postgresql and the same one you're running cron jobs on and\n>> the same one you're running apache on?\n>>\n>> Also please don't remove the cc for the list, others might have an\n>> insight I'd miss.\n>>\n>> > cif@domU-12-31-39-08-06-20:~$ free -m\n>> > total used free shared buffers\n>> > cached\n>> > Mem: 17079 12051 5028 0 270\n>> > 9578\n>> > -/+ buffers/cache: 2202 14877\n>> > Swap: 511 0 511\n>> >\n>\n>\n>\n>\n> --\n> Best,\n> Shiran Kleiderman\n> +972 - 542380838\n> Skype - shirank1\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n",
"msg_date": "Wed, 26 Sep 2012 15:11:19 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Memory issues"
},
{
"msg_contents": "Hi\nI contact amazon with this issue.\nWhat can I check for the fragmented memory issue?\n\nThanks (:\n\nOn Wed, Sep 26, 2012 at 11:11 PM, Scott Marlowe <[email protected]>wrote:\n\n> OK then I'm lost. It's got to either be a bug in how amazon ec2\n> instances work or severely fragmented memory because you've got a TON\n> of kernel cache available.\n>\n> On Wed, Sep 26, 2012 at 3:00 PM, Shiran Kleiderman <[email protected]>\n> wrote:\n> > Hi\n> > Yes, same machine.\n> >\n> > Thanks for your help.\n> >\n> >\n> > On Wed, Sep 26, 2012 at 10:55 PM, Scott Marlowe <[email protected]\n> >\n> > wrote:\n> >>\n> >> On Wed, Sep 26, 2012 at 10:41 AM, Shiran Kleiderman <[email protected]\n> >\n> >> wrote:\n> >> > Hi\n> >> > Thanks again.\n> >> > Right now, this is free -m and ps aux and non of the crons can run -\n> >> > can't\n> >> > allocate memory.\n> >>\n> >> OK, so is the machine you're running free -m on the same as the one\n> >> running postgresql and the same one you're running cron jobs on and\n> >> the same one you're running apache on?\n> >>\n> >> Also please don't remove the cc for the list, others might have an\n> >> insight I'd miss.\n> >>\n> >> > cif@domU-12-31-39-08-06-20:~$ free -m\n> >> > total used free shared buffers\n> >> > cached\n> >> > Mem: 17079 12051 5028 0 270\n> >> > 9578\n> >> > -/+ buffers/cache: 2202 14877\n> >> > Swap: 511 0 511\n> >> >\n> >\n> >\n> >\n> >\n> > --\n> > Best,\n> > Shiran Kleiderman\n> > +972 - 542380838\n> > Skype - shirank1\n> >\n>\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Wed, 26 Sep 2012 23:36:24 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Memory issues"
},
{
"msg_contents": "Hi\nAnother thing that may help,\nWhen I restart the postgres db, then I have a little bit of \"grace\" time\n(the memory free field is also lifted a bit).\nI can run the crons and then after an hour or two the status returns to\nregular... \"out of memory\" errors.\n\nThanks!\n\n\nOn Wed, Sep 26, 2012 at 11:36 PM, Shiran Kleiderman <[email protected]>wrote:\n\n> Hi\n> I contact amazon with this issue.\n> What can I check for the fragmented memory issue?\n>\n> Thanks (:\n>\n>\n> On Wed, Sep 26, 2012 at 11:11 PM, Scott Marlowe <[email protected]>wrote:\n>\n>> OK then I'm lost. It's got to either be a bug in how amazon ec2\n>> instances work or severely fragmented memory because you've got a TON\n>> of kernel cache available.\n>>\n>> On Wed, Sep 26, 2012 at 3:00 PM, Shiran Kleiderman <[email protected]>\n>> wrote:\n>> > Hi\n>> > Yes, same machine.\n>> >\n>> > Thanks for your help.\n>> >\n>> >\n>> > On Wed, Sep 26, 2012 at 10:55 PM, Scott Marlowe <\n>> [email protected]>\n>> > wrote:\n>> >>\n>> >> On Wed, Sep 26, 2012 at 10:41 AM, Shiran Kleiderman <\n>> [email protected]>\n>> >> wrote:\n>> >> > Hi\n>> >> > Thanks again.\n>> >> > Right now, this is free -m and ps aux and non of the crons can run -\n>> >> > can't\n>> >> > allocate memory.\n>> >>\n>> >> OK, so is the machine you're running free -m on the same as the one\n>> >> running postgresql and the same one you're running cron jobs on and\n>> >> the same one you're running apache on?\n>> >>\n>> >> Also please don't remove the cc for the list, others might have an\n>> >> insight I'd miss.\n>> >>\n>> >> > cif@domU-12-31-39-08-06-20:~$ free -m\n>> >> > total used free shared buffers\n>> >> > cached\n>> >> > Mem: 17079 12051 5028 0 270\n>> >> > 9578\n>> >> > -/+ buffers/cache: 2202 14877\n>> >> > Swap: 511 0 511\n>> >> >\n>> >\n>> >\n>> >\n>> >\n>> > --\n>> > Best,\n>> > Shiran Kleiderman\n>> > +972 - 542380838\n>> > Skype - shirank1\n>> >\n>>\n>>\n>>\n>> --\n>> To understand recursion, one must first 
understand recursion.\n>>\n>\n>\n>\n> --\n> Best,\n> Shiran Kleiderman\n> +972 - 542380838\n> Skype - shirank1\n>\n>\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Thu, 27 Sep 2012 00:01:19 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Memory issues"
},
{
"msg_contents": "On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:\n> Hi,\n> I'm using and Amazon ec2 instance with the following spec and the\n> application that I'm running uses a postgres DB 9.1.\n> The app has 3 main cron jobs.\n> \n> *Ubuntu 12, High-Memory Extra Large Instance\n> 17.1 GB of memory\n> 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n> 420 GB of instance storage\n> 64-bit platform*\n> \n> I've changed the main default values under file *postgresql.conf* to:\n> shared_buffers = 4GB\n> work_mem = 16MB\n> wal_buffers = 16MB\n> checkpoint_segments = 32\n> effective_cache_size = 8GB\n> \n> When I run the app, after an hour or two, free -m looks like below ans the\n> crons can't run due to memory loss or similar (i'm new to postgres and db\n> admin).\n> Thanks!\n> \n> free -m, errors:\n> \n> total used free shared buffers cached\n> Mem: 17079 13742 3337 0 64 11882\n> -/+ buffers/cache: 1796 15283\n> Swap: 511 0 511\n> \n> total used *free* shared buffers cached\n> Mem: 17079 16833 *245 *0 42 14583\n> -/+ buffers/cache: 2207 14871\n> Swap: 511 0 511\n> \n> **free above stays low even when nothing is running.\n> \n> **errors:\n> *DBI connect('database=---;host=localhost','postgres',...) failed: could\n> not fork new process for connection: Cannot allocate memory*\n> could not fork new process for connection: Cannot allocate memory\n> \n> and\n> execute failed: ERROR: out of memory\n> DETAIL: Failed on request of size 968. [for Statement \"\n> SELECT DISTINCT....\ncould you show cat /proc/meminfo?\n\nGreetings,\n\nAndres\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 27 Sep 2012 08:59:09 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory issues"
},
{
"msg_contents": "Hi\nI've returned the memory configs to the default, erased data from my db and\nam testing the system again.\n\nThis is the output of *cat /proc/meminfo*\nThanks\n\nroot@ip-10-194-167-240:~# cat /proc/meminfo\nMemTotal: 7629508 kB\nMemFree: 170368 kB\nBuffers: 10272 kB\nCached: 6220848 kB\nSwapCached: 0 kB\nActive: 3249748 kB\nInactive: 3936960 kB\nActive(anon): 971336 kB\nInactive(anon): 2103844 kB\nActive(file): 2278412 kB\nInactive(file): 1833116 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 524284 kB\nSwapFree: 522716 kB\nDirty: 83068 kB\nWriteback: 3080 kB\nAnonPages: 955856 kB\nMapped: 2132564 kB\nShmem: 2119424 kB\nSlab: 157200 kB\nSReclaimable: 144488 kB\nSUnreclaim: 12712 kB\nKernelStack: 1184 kB\nPageTables: 21092 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 4339036 kB\nCommitted_AS: 3637424 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 26152 kB\nVmallocChunk: 34359710052 kB\nHardwareCorrupted: 0 kB\nAnonHugePages: 0 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 7872512 kB\nDirectMap2M: 0 kB\n\n\nOn Thu, Sep 27, 2012 at 8:59 AM, Andres Freund <[email protected]>wrote:\n\n> On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:\n> > Hi,\n> > I'm using and Amazon ec2 instance with the following spec and the\n> > application that I'm running uses a postgres DB 9.1.\n> > The app has 3 main cron jobs.\n> >\n> > *Ubuntu 12, High-Memory Extra Large Instance\n> > 17.1 GB of memory\n> > 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n> > 420 GB of instance storage\n> > 64-bit platform*\n> >\n> > I've changed the main default values under file *postgresql.conf* to:\n> > shared_buffers = 4GB\n> > work_mem = 16MB\n> > wal_buffers = 16MB\n> > checkpoint_segments = 32\n> > effective_cache_size = 8GB\n> >\n> > When I run the app, after an hour or two, free -m looks like below ans\n> the\n> > crons can't run due to 
memory loss or similar (i'm new to postgres and db\n> > admin).\n> > Thanks!\n> >\n> > free -m, errors:\n> >\n> > total used free shared buffers cached\n> > Mem: 17079 13742 3337 0 64 11882\n> > -/+ buffers/cache: 1796 15283\n> > Swap: 511 0 511\n> >\n> > total used *free* shared buffers cached\n> > Mem: 17079 16833 *245 *0 42 14583\n> > -/+ buffers/cache: 2207 14871\n> > Swap: 511 0 511\n> >\n> > **free above stays low even when nothing is running.\n> >\n> > **errors:\n> > *DBI connect('database=---;host=localhost','postgres',...) failed: could\n> > not fork new process for connection: Cannot allocate memory*\n> > could not fork new process for connection: Cannot allocate memory\n> >\n> > and\n> > execute failed: ERROR: out of memory\n> > DETAIL: Failed on request of size 968. [for Statement \"\n> > SELECT DISTINCT....\n> could you show cat /proc/meminfo?\n>\n> Greetings,\n>\n> Andres\n> --\n> Andres Freund http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Mon, 15 Oct 2012 02:45:06 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory issues"
},
{
"msg_contents": "Hello,\n\nI'm on a project which requires adding PostgreSQL tables to DB2 Federated\nServer. I'm getting an error with PostgreSQL data types boolean, text,\nbytea, and XML. I believe this can be solved with the CREATE TYPE MAPPING in\nFed Server. Does anyone know which values to use? I'm not that familiar with\nFed Server.\n\nAlso, the Postgres data is being extracted and inserted from the same table\nusing Optim Archive. Does this pose an additional challenge with setting up\nthe mapping?\n\nThank you in advance\n\nAlex",
"msg_date": "Sun, 14 Oct 2012 19:52:09 -0500",
"msg_from": "\"Alexander Gataric\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Mapping PostgreSQL data types to DB2 Federated Server"
},
{
"msg_contents": "On 10/14/12 5:52 PM, Alexander Gataric wrote:\n>\n> I'm on a project which requires adding PostgreSQL tables to DB2 \n> Federated Server. I'm getting an error with PostgreSQL data types \n> boolean, text, bytea, and XML. I believe this can be solved with the \n> CREATE TYPE MAPPING in Fed Server. Does anyone know which values to \n> use? I'm not that familiar with Fed Server.\n>\n> Also, the Postgres data is being extracted and inserted from the same \n> table using Optim Archive. Does this pose an additional challenge with \n> setting up the mapping?\n>\n\nI suggest you talk to your IBM support contacts for these issues, they \nreally have little to do with Postgres and are completely beyond \npostgres' control.\n\n\n\n-- \njohn r pierce N 37, W 122\nsanta cruz ca mid-left coast\n\n\n",
"msg_date": "Sun, 14 Oct 2012 18:05:28 -0700",
"msg_from": "John R Pierce <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mapping PostgreSQL data types to DB2 Federated Server"
},
{
"msg_contents": "Hi\nThis is the output of meminfo when the system is under some stress.\nThanks\n\ncif@ip-10-194-167-240:/tmp$ cat /proc/meminfo\nMemTotal: 7629508 kB\nMemFree: 37820 kB\nBuffers: 2108 kB\nCached: 5500200 kB\nSwapCached: 332 kB\nActive: 4172020 kB\nInactive: 3166244 kB\nActive(anon): 1864040 kB\nInactive(anon): 1568760 kB\nActive(file): 2307980 kB\nInactive(file): 1597484 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 524284 kB\nSwapFree: 0 kB\nDirty: 23336 kB\nWriteback: 0 kB\nAnonPages: 1835716 kB\nMapped: 1610460 kB\nShmem: 1596916 kB\nSlab: 136168 kB\nSReclaimable: 123820 kB\nSUnreclaim: 12348 kB\nKernelStack: 1176 kB\nPageTables: 23148 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 4339036 kB\nCommitted_AS: 4517524 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 26152 kB\nVmallocChunk: 34359710052 kB\nHardwareCorrupted: 0 kB\nAnonHugePages: 0 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 7872512 kB\nDirectMap2M: 0 kB\n\n\nOn Mon, Oct 15, 2012 at 2:45 AM, Shiran Kleiderman <[email protected]>wrote:\n\n> Hi\n> I've returned the memory configs to the default, erased data from my db\n> and am testing the system again.\n>\n> This is the output of *cat /proc/meminfo*\n> Thanks\n>\n> root@ip-10-194-167-240:~# cat /proc/meminfo\n> MemTotal: 7629508 kB\n> MemFree: 170368 kB\n> Buffers: 10272 kB\n> Cached: 6220848 kB\n> SwapCached: 0 kB\n> Active: 3249748 kB\n> Inactive: 3936960 kB\n> Active(anon): 971336 kB\n> Inactive(anon): 2103844 kB\n> Active(file): 2278412 kB\n> Inactive(file): 1833116 kB\n> Unevictable: 0 kB\n> Mlocked: 0 kB\n> SwapTotal: 524284 kB\n> SwapFree: 522716 kB\n> Dirty: 83068 kB\n> Writeback: 3080 kB\n> AnonPages: 955856 kB\n> Mapped: 2132564 kB\n> Shmem: 2119424 kB\n> Slab: 157200 kB\n> SReclaimable: 144488 kB\n> SUnreclaim: 12712 kB\n> KernelStack: 1184 kB\n> PageTables: 21092 kB\n> NFS_Unstable: 0 kB\n> Bounce: 0 kB\n> WritebackTmp: 0 
kB\n> CommitLimit: 4339036 kB\n> Committed_AS: 3637424 kB\n> VmallocTotal: 34359738367 kB\n> VmallocUsed: 26152 kB\n> VmallocChunk: 34359710052 kB\n> HardwareCorrupted: 0 kB\n> AnonHugePages: 0 kB\n> HugePages_Total: 0\n> HugePages_Free: 0\n> HugePages_Rsvd: 0\n> HugePages_Surp: 0\n> Hugepagesize: 2048 kB\n> DirectMap4k: 7872512 kB\n> DirectMap2M: 0 kB\n>\n>\n> On Thu, Sep 27, 2012 at 8:59 AM, Andres Freund <[email protected]>wrote:\n>\n>> On Monday, September 24, 2012 08:45:06 AM Shiran Kleiderman wrote:\n>> > Hi,\n>> > I'm using and Amazon ec2 instance with the following spec and the\n>> > application that I'm running uses a postgres DB 9.1.\n>> > The app has 3 main cron jobs.\n>> >\n>> > *Ubuntu 12, High-Memory Extra Large Instance\n>> > 17.1 GB of memory\n>> > 6.5 EC2 Compute Units (2 virtual cores with 3.25 EC2 Compute Units each)\n>> > 420 GB of instance storage\n>> > 64-bit platform*\n>> >\n>> > I've changed the main default values under file *postgresql.conf* to:\n>> > shared_buffers = 4GB\n>> > work_mem = 16MB\n>> > wal_buffers = 16MB\n>> > checkpoint_segments = 32\n>> > effective_cache_size = 8GB\n>> >\n>> > When I run the app, after an hour or two, free -m looks like below ans\n>> the\n>> > crons can't run due to memory loss or similar (i'm new to postgres and\n>> db\n>> > admin).\n>> > Thanks!\n>> >\n>> > free -m, errors:\n>> >\n>> > total used free shared buffers cached\n>> > Mem: 17079 13742 3337 0 64 11882\n>> > -/+ buffers/cache: 1796 15283\n>> > Swap: 511 0 511\n>> >\n>> > total used *free* shared buffers cached\n>> > Mem: 17079 16833 *245 *0 42 14583\n>> > -/+ buffers/cache: 2207 14871\n>> > Swap: 511 0 511\n>> >\n>> > **free above stays low even when nothing is running.\n>> >\n>> > **errors:\n>> > *DBI connect('database=---;host=localhost','postgres',...) 
failed: could\n>> > not fork new process for connection: Cannot allocate memory*\n>> > could not fork new process for connection: Cannot allocate memory\n>> >\n>> > and\n>> > execute failed: ERROR: out of memory\n>> > DETAIL: Failed on request of size 968. [for Statement \"\n>> > SELECT DISTINCT....\n>> could you show cat /proc/meminfo?\n>>\n>> Greetings,\n>>\n>> Andres\n>> --\n>> Andres Freund http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>>\n>\n>\n>\n> --\n> Best,\n> Shiran Kleiderman\n> +972 - 542380838\n> Skype - shirank1\n>\n>\n\n\n-- \nBest,\nShiran Kleiderman\n+972 - 542380838\nSkype - shirank1",
"msg_date": "Mon, 15 Oct 2012 04:36:02 +0200",
"msg_from": "Shiran Kleiderman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory issues"
}
] |
[
{
"msg_contents": "Hi,\n\nThe problem : Postgres is becoming slow, day after day, and only a full vacuum fixes the problem.\n\nInformation you may need to evaluate :\n\nThe problem lies on all tables and queries, as far as I can tell, but we can focus on a single table for better comprehension.\n\nThe queries I am running to test the speed are :\nINSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', '1111', 1, '2012-06-16 13:39:19', '111');\nDELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\nSELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\n\nAfter a full vacuum, they run in about 100ms.\nToday, before the full vacuum, they were taking around 500ms.\n\nBelow is an explain analyze of the commands AFTER a full vacuum. I did not run it before, so I can not post relevant info before the vacuum. So, after the full vacuum :\n\nexplain analyze INSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', '1111', 1, '2012-06-16 13:39:19', '111');\n\"Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002 rows=1 loops=1)\"\n\"Trigger for constraint FK_AWAITINGSTATUSSMPP_MESSAGES: time=0.131 calls=1\"\n\"Trigger bucardo_add_delta: time=0.454 calls=1\"\n\"Trigger bucardo_triggerkick_MassSMs: time=0.032 calls=1\"\n\"Total runtime: 0.818 ms\"\n\nexplain analyze DELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\"Seq Scan on \"AWAITINGSTATUSSMPP\" (cost=0.00..2.29 rows=1 width=6) (actual time=0.035..0.035 rows=0 loops=1)\"\n\" Filter: (((\"SMSCMSGID\")::text = '1111'::text) AND (\"CONNECTIONID\" = 1))\"\n\"Trigger bucardo_triggerkick_MassSMs: time=0.066 calls=1\"\n\"Total runtime: 0.146 ms\"\n\nexplain analyze SELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\n\"Seq Scan on \"AWAITINGSTATUSSMPP\" (cost=0.00..2.29 rows=1 width=557) (actual time=0.028..0.028 rows=0 loops=1)\"\n\" Filter: (((\"SMSCMSGID\")::text = 
'1111'::text) AND (\"CONNECTIONID\" = 1))\"\n\"Total runtime: 0.053 ms\"\n\nBelow are the metadata of the table :\n=====================================\nCREATE TABLE \"AWAITINGSTATUSSMPP\"\n(\n \"MESSAGEID\" bigint NOT NULL,\n \"SMSCMSGID\" character varying(50) NOT NULL,\n \"CONNECTIONID\" smallint NOT NULL,\n \"EXPIRE_TIME\" timestamp without time zone NOT NULL,\n \"RECIPIENT\" character varying(20) NOT NULL,\n \"CLIENT_MSG_ID\" character varying(255),\n CONSTRAINT \"PK_AWAITINGSTATUSSMPP\" PRIMARY KEY (\"SMSCMSGID\", \"CONNECTIONID\"),\n CONSTRAINT \"FK_AWAITINGSTATUSSMPP_MESSAGES\" FOREIGN KEY (\"MESSAGEID\")\n REFERENCES \"MESSAGES\" (\"ID\") MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE \"AWAITINGSTATUSSMPP\" OWNER TO postgres;\nGRANT ALL ON TABLE \"AWAITINGSTATUSSMPP\" TO \"MassSMsUsers\";\n\nCREATE INDEX \"IX_AWAITINGSTATUSSMPP_MSGID_RCP\"\n ON \"AWAITINGSTATUSSMPP\"\n USING btree\n (\"MESSAGEID\", \"RECIPIENT\");\n\nCREATE TRIGGER bucardo_add_delta\n AFTER INSERT OR UPDATE OR DELETE\n ON \"AWAITINGSTATUSSMPP\"\n FOR EACH ROW\n EXECUTE PROCEDURE bucardo.\"bucardo_add_delta_SMSCMSGID|CONNECTIONID\"();\n\nCREATE TRIGGER \"bucardo_triggerkick_MassSMs\"\n AFTER INSERT OR UPDATE OR DELETE OR TRUNCATE\n ON \"AWAITINGSTATUSSMPP\"\n FOR EACH STATEMENT\n EXECUTE PROCEDURE bucardo.\"bucardo_triggerkick_MassSMs\"();\n=====================================\n\nThe table only has about 200 records because it is being used a temporary storage and records are constantly inserted and deleted.\nBUT please don't get hold on this fact, because as I already said, the speed problem is not restricted to this table. 
The same problems appear on the following query \nUPDATE \"MESSAGES\" SET \"SENT\" = \"SENT\" + 1 WHERE \"ID\" = 143447;\nand MESSAGES table has mainly inserts and few deletes...\n\nMy postgresql.conf file :\n======================\nport = 5433 # (change requires restart)\nmax_connections = 100 # (change requires restart)\nshared_buffers = 256MB # min 128kB. DoubleIP - Default was 32MB\nsynchronous_commit = off # immediate fsync at commit. DoubleIP - Default was on\neffective_cache_size = 512MB # DoubleIP - Default was 128MB\nlog_destination = 'stderr' # Valid values are combinations of\nlogging_collector = on # Enable capturing of stderr and csvlog\nsilent_mode = on # Run server silently.\nlog_line_prefix = '%t %d %u ' # special values:\nlog_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\nautovacuum_naptime = 28800 # time between autovacuum runs. DoubleIP - default was 1min\nautovacuum_vacuum_threshold = 100 # min number of row updates before\nautovacuum_vacuum_scale_factor = 0.0 # fraction of table size before vacuum. DoubleIP - default was 0.2\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n=======================\n\nAs you will see, I have altered the shared_buffers and synchronous_commit values.\nThe shared_buffers had the default value 32Mb. When I changed it to 256Mb the problem still appears but it takes more time to appear (3-4 days). With 32MB, it appeared faster, probably after 24 hours.\nAlso, I have changed the autovacuum daemon to work every 8 hours but I changed its values to make sure it vacuums pretty much all tables (the ones for which at least 100 rows have changed).\nPlease note, though, that my problem existed even before playing around with the autovacuum. 
This is why I tried to change its values in the first place.\n\nThe server is synchronized with another server using bucardo. Bucardo process is running on the other server.\nThe same problem appears on the 2nd server too... after 3-4 days, postgres is running slower and slower.\n\nOur server configuration :\nDELL PowerEdge T610 Tower Chassis for Up to 8x 3.5\" HDDs\n2x Intel Xeon E5520 Processor (2.26GHz, 8M Cache, 5.86 GT/s QPI, Turbo, HT), 1066MHz Max Memory\n8GB Memory,1333MHz\n2 x 146GB SAS 15k 3.5\" HD Hot Plug\n6 x 1TB SATA 7.2k 3.5\" Additional HD Hot Plug\nPERC 6/i RAID Controller Card 256MB PCIe, 2x4 Connectors\nSUSE Linux Enterprise Server 10, SP2\n\nThe 2 HDs are set up with RAID-1\nThe 6 HDs are set up with RAID-5\n\nLinux is running on the RAID-1 configuration\nPostgres is running on the RAID-5 configuration\n\n\nFinally a top before and after the full vacuum :\ntop - 11:27:44 up 72 days, 13:27, 37 users, load average: 1.05, 1.31, 1.45\nTasks: 279 total, 3 running, 276 sleeping, 0 stopped, 0 zombie\nCpu(s): 3.6%us, 0.8%sy, 0.0%ni, 95.5%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\nMem: 8166432k total, 7963116k used, 203316k free, 115344k buffers\nSwap: 2097144k total, 2097020k used, 124k free, 2337636k cached\n\ntop - 11:30:58 up 72 days, 13:31, 38 users, load average: 1.53, 1.59, 1.53\nTasks: 267 total, 2 running, 265 sleeping, 0 stopped, 0 zombie\nCpu(s): 1.3%us, 0.4%sy, 0.0%ni, 98.0%id, 0.3%wa, 0.0%hi, 0.1%si, 0.0%st\nMem: 8166432k total, 6016268k used, 2150164k free, 61092k buffers\nSwap: 2097144k total, 2010204k used, 86940k free, 2262896k cached\n\n\nI hope I have provided enough info and hope that someone can point me to the correct direction.\n\n\nThank you very much even for reading up to here !\n\nBest regards,\nKiriakos",
"msg_date": "Mon, 24 Sep 2012 13:33:25 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Sorry, forgot to mention the most obvious and important information :\nMy postgres is 8.4.2\n\nOn Sep 24, 2012, at 13:33, Kiriakos Tsourapas wrote:\n\n> Hi,\n> \n> The problem : Postgres is becoming slow, day after day, and only a full vacuum fixes the problem.\n> \n> Information you may need to evaluate :\n> \n> The problem lies on all tables and queries, as far as I can tell, but we can focus on a single table for better comprehension.\n> \n> The queries I am running to test the speed are :\n> INSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', '1111', 1, '2012-06-16 13:39:19', '111');\n> DELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\n> SELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\n> \n> After a full vacuum, they run in about 100ms.\n> Today, before the full vacuum, they were taking around 500ms.\n> \n> Below is an explain analyze of the commands AFTER a full vacuum. I did not run it before, so I can not post relevant info before the vacuum. 
So, after the full vacuum :\n> \n> explain analyze INSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', '1111', 1, '2012-06-16 13:39:19', '111');\n> \"Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002 rows=1 loops=1)\"\n> \"Trigger for constraint FK_AWAITINGSTATUSSMPP_MESSAGES: time=0.131 calls=1\"\n> \"Trigger bucardo_add_delta: time=0.454 calls=1\"\n> \"Trigger bucardo_triggerkick_MassSMs: time=0.032 calls=1\"\n> \"Total runtime: 0.818 ms\"\n> \n> explain analyze DELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\"Seq Scan on \"AWAITINGSTATUSSMPP\" (cost=0.00..2.29 rows=1 width=6) (actual time=0.035..0.035 rows=0 loops=1)\"\n> \" Filter: (((\"SMSCMSGID\")::text = '1111'::text) AND (\"CONNECTIONID\" = 1))\"\n> \"Trigger bucardo_triggerkick_MassSMs: time=0.066 calls=1\"\n> \"Total runtime: 0.146 ms\"\n> \n> explain analyze SELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \"CONNECTIONID\" = 1;\n> \"Seq Scan on \"AWAITINGSTATUSSMPP\" (cost=0.00..2.29 rows=1 width=557) (actual time=0.028..0.028 rows=0 loops=1)\"\n> \" Filter: (((\"SMSCMSGID\")::text = '1111'::text) AND (\"CONNECTIONID\" = 1))\"\n> \"Total runtime: 0.053 ms\"\n> \n> Below are the metadata of the table :\n> =====================================\n> CREATE TABLE \"AWAITINGSTATUSSMPP\"\n> (\n> \"MESSAGEID\" bigint NOT NULL,\n> \"SMSCMSGID\" character varying(50) NOT NULL,\n> \"CONNECTIONID\" smallint NOT NULL,\n> \"EXPIRE_TIME\" timestamp without time zone NOT NULL,\n> \"RECIPIENT\" character varying(20) NOT NULL,\n> \"CLIENT_MSG_ID\" character varying(255),\n> CONSTRAINT \"PK_AWAITINGSTATUSSMPP\" PRIMARY KEY (\"SMSCMSGID\", \"CONNECTIONID\"),\n> CONSTRAINT \"FK_AWAITINGSTATUSSMPP_MESSAGES\" FOREIGN KEY (\"MESSAGEID\")\n> REFERENCES \"MESSAGES\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE CASCADE\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> ALTER TABLE \"AWAITINGSTATUSSMPP\" OWNER TO postgres;\n> GRANT ALL ON TABLE 
\"AWAITINGSTATUSSMPP\" TO \"MassSMsUsers\";\n> \n> CREATE INDEX \"IX_AWAITINGSTATUSSMPP_MSGID_RCP\"\n> ON \"AWAITINGSTATUSSMPP\"\n> USING btree\n> (\"MESSAGEID\", \"RECIPIENT\");\n> \n> CREATE TRIGGER bucardo_add_delta\n> AFTER INSERT OR UPDATE OR DELETE\n> ON \"AWAITINGSTATUSSMPP\"\n> FOR EACH ROW\n> EXECUTE PROCEDURE bucardo.\"bucardo_add_delta_SMSCMSGID|CONNECTIONID\"();\n> \n> CREATE TRIGGER \"bucardo_triggerkick_MassSMs\"\n> AFTER INSERT OR UPDATE OR DELETE OR TRUNCATE\n> ON \"AWAITINGSTATUSSMPP\"\n> FOR EACH STATEMENT\n> EXECUTE PROCEDURE bucardo.\"bucardo_triggerkick_MassSMs\"();\n> =====================================\n> \n> The table only has about 200 records because it is being used a temporary storage and records are constantly inserted and deleted.\n> BUT please don't get hold on this fact, because as I already said, the speed problem is not restricted to this table. The same problems appear on the following query \n> UPDATE \"MESSAGES\" SET \"SENT\" = \"SENT\" + 1 WHERE \"ID\" = 143447;\n> and MESSAGES table has mainly inserts and few deletes...\n> \n> My postgresql.conf file :\n> ======================\n> port = 5433 # (change requires restart)\n> max_connections = 100 # (change requires restart)\n> shared_buffers = 256MB # min 128kB. DoubleIP - Default was 32MB\n> synchronous_commit = off # immediate fsync at commit. DoubleIP - Default was on\n> effective_cache_size = 512MB # DoubleIP - Default was 128MB\n> log_destination = 'stderr' # Valid values are combinations of\n> logging_collector = on # Enable capturing of stderr and csvlog\n> silent_mode = on # Run server silently.\n> log_line_prefix = '%t %d %u ' # special values:\n> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n> autovacuum_naptime = 28800 # time between autovacuum runs. DoubleIP - default was 1min\n> autovacuum_vacuum_threshold = 100 # min number of row updates before\n> autovacuum_vacuum_scale_factor = 0.0 # fraction of table size before vacuum. 
DoubleIP - default was 0.2\n> datestyle = 'iso, mdy'\n> lc_messages = 'en_US.UTF-8' # locale for system error message\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n> default_text_search_config = 'pg_catalog.english'\n> =======================\n> \n> As you will see, I have altered the shared_buffers and synchronous_commit values.\n> The shared_buffers had the default value 32Mb. When I changed it to 256Mb the problem still appears but it takes more time to appear (3-4 days). With 32MB, it appeared faster, probably after 24 hours.\n> Also, I have changed the autovacuum daemon to work every 8 hours but I changed its values to make sure it vacuums pretty much all tables (the ones for which at least 100 rows have changed).\n> Please note, though, that my problem existed even before playing around with the autovacuum. This is why I tried to change its values in the first place.\n> \n> The server is synchronized with another server using bucardo. Bucardo process is running on the other server.\n> The same problem appears on the 2nd server too... 
after 3-4 days, postgres is running slower and slower.\n> \n> Our server configuration :\n> DELL PowerEdge T610 Tower Chassis for Up to 8x 3.5\" HDDs\n> 2x Intel Xeon E5520 Processor (2.26GHz, 8M Cache, 5.86 GT/s QPI, Turbo, HT), 1066MHz Max Memory\n> 8GB Memory,1333MHz\n> 2 x 146GB SAS 15k 3.5\" HD Hot Plug\n> 6 x 1TB SATA 7.2k 3.5\" Additional HD Hot Plug\n> PERC 6/i RAID Controller Card 256MB PCIe, 2x4 Connectors\n> SUSE Linux Enterprise Server 10, SP2\n> \n> The 2 HDs are set up with RAID-1\n> The 6 HDs are set up with RAID-5\n> \n> Linux is running on the RAID-1 configuration\n> Postgres is running on the RAID-5 configuration\n> \n> \n> Finally a top before and after the full vacuum :\n> top - 11:27:44 up 72 days, 13:27, 37 users, load average: 1.05, 1.31, 1.45\n> Tasks: 279 total, 3 running, 276 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 3.6%us, 0.8%sy, 0.0%ni, 95.5%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\n> Mem: 8166432k total, 7963116k used, 203316k free, 115344k buffers\n> Swap: 2097144k total, 2097020k used, 124k free, 2337636k cached\n> \n> top - 11:30:58 up 72 days, 13:31, 38 users, load average: 1.53, 1.59, 1.53\n> Tasks: 267 total, 2 running, 265 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 1.3%us, 0.4%sy, 0.0%ni, 98.0%id, 0.3%wa, 0.0%hi, 0.1%si, 0.0%st\n> Mem: 8166432k total, 6016268k used, 2150164k free, 61092k buffers\n> Swap: 2097144k total, 2010204k used, 86940k free, 2262896k cached\n> \n> \n> I hope I have provided enough info and hope that someone can point me to the correct direction.\n> \n> \n> Thank you very much even for reading up to here !\n> \n> Best regards,\n> Kiriakos\n",
"msg_date": "Mon, 24 Sep 2012 14:55:01 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Hello,\n\n1) upgrade your PostgreSQL installation, there have been numerous \nbugfix releases since 8.4.2\n2) you'll have to show us an explain analyze of the slow queries. If I \ntake a look at those you provided, everything runs in less than 1ms.\n3) with 200 records you'll always have a seqscan\n4) how much memory do you have ? shared_buffers = 256MB and \neffective_cache_size = 512MB looks OK only if you have between 1 and 2GB \nof RAM\n5) synchronous_commit = off should only be used if you have a \nbattery-backed write cache.\n6) autovacuum_naptime should be changed only if autovacuum is constantly \nrunning (so if you have dozens of databases in your cluster)\n7) are you sure the problem isn't related to Bucardo ?\n\nJulien\n\nOn 09/24/2012 13:55, Kiriakos Tsourapas wrote:\n> Sorry, forgot to mention the most obvious and important information :\n> My postgres is 8.4.2\n>\n> On Sep 24, 2012, at 13:33, Kiriakos Tsourapas wrote:\n>\n>> Hi,\n>>\n>> The problem : *Postgres is becoming slow, day after day, and only a \n>> full vacuum fixes the problem*.\n>>\n>> Information you may need to evaluate :\n>>\n>> The problem lies on all tables and queries, as far as I can tell, but \n>> we can focus on a single table for better comprehension.\n>>\n>> The queries I am running to test the speed are :\n>> INSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', '1111', 1, \n>> '2012-06-16 13:39:19', '111');\n>> DELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \n>> \"CONNECTIONID\" = 1;\n>> SELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND \n>> \"CONNECTIONID\" = 1;\n>>\n>> After a full vacuum, they run in about 100ms.\n>> Today, before the full vacuum, they were taking around 500ms.\n>>\n>> Below is an explain analyze of the commands AFTER a full vacuum. I \n>> did not run it before, so I can not post relevant info before the \n>> vacuum. 
So, after the full vacuum :\n>>\n>> explain analyze INSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', \n>> '1111', 1, '2012-06-16 13:39:19', '111');\n>> \"Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002 \n>> rows=1 loops=1)\"\n>> \"Trigger for constraint FK_AWAITINGSTATUSSMPP_MESSAGES: time=0.131 \n>> calls=1\"\n>> \"Trigger bucardo_add_delta: time=0.454 calls=1\"\n>> \"Trigger bucardo_triggerkick_MassSMs: time=0.032 calls=1\"\n>> \"Total runtime: 0.818 ms\"\n>>\n>> explain analyze DELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = \n>> '1111' AND \"CONNECTIONID\" = 1;\"Seq Scan on \"AWAITINGSTATUSSMPP\" \n>> (cost=0.00..2.29 rows=1 width=6) (actual time=0.035..0.035 rows=0 \n>> loops=1)\"\n>> \" Filter: (((\"SMSCMSGID\")::text = '1111'::text) AND (\"CONNECTIONID\" \n>> = 1))\"\n>> \"Trigger bucardo_triggerkick_MassSMs: time=0.066 calls=1\"\n>> \"Total runtime: 0.146 ms\"\n>>\n>> explain analyze SELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" \n>> = '1111' AND \"CONNECTIONID\" = 1;\n>> \"Seq Scan on \"AWAITINGSTATUSSMPP\" (cost=0.00..2.29 rows=1 width=557) \n>> (actual time=0.028..0.028 rows=0 loops=1)\"\n>> \" Filter: (((\"SMSCMSGID\")::text = '1111'::text) AND (\"CONNECTIONID\" \n>> = 1))\"\n>> \"Total runtime: 0.053 ms\"\n>>\n>> Below are the metadata of the table :\n>> =====================================\n>> CREATE TABLE \"AWAITINGSTATUSSMPP\"\n>> (\n>> \"MESSAGEID\" bigint NOT NULL,\n>> \"SMSCMSGID\" character varying(50) NOT NULL,\n>> \"CONNECTIONID\" smallint NOT NULL,\n>> \"EXPIRE_TIME\" timestamp without time zone NOT NULL,\n>> \"RECIPIENT\" character varying(20) NOT NULL,\n>> \"CLIENT_MSG_ID\" character varying(255),\n>> CONSTRAINT \"PK_AWAITINGSTATUSSMPP\" PRIMARY KEY (\"SMSCMSGID\", \n>> \"CONNECTIONID\"),\n>> CONSTRAINT \"FK_AWAITINGSTATUSSMPP_MESSAGES\" FOREIGN KEY (\"MESSAGEID\")\n>> REFERENCES \"MESSAGES\" (\"ID\") MATCH SIMPLE\n>> ON UPDATE NO ACTION ON DELETE CASCADE\n>> )\n>> WITH (\n>> OIDS=FALSE\n>> );\n>> 
ALTER TABLE \"AWAITINGSTATUSSMPP\" OWNER TO postgres;\n>> GRANT ALL ON TABLE \"AWAITINGSTATUSSMPP\" TO \"MassSMsUsers\";\n>>\n>> CREATE INDEX \"IX_AWAITINGSTATUSSMPP_MSGID_RCP\"\n>> ON \"AWAITINGSTATUSSMPP\"\n>> USING btree\n>> (\"MESSAGEID\", \"RECIPIENT\");\n>>\n>> CREATE TRIGGER bucardo_add_delta\n>> AFTER INSERT OR UPDATE OR DELETE\n>> ON \"AWAITINGSTATUSSMPP\"\n>> FOR EACH ROW\n>> EXECUTE PROCEDURE bucardo.\"bucardo_add_delta_SMSCMSGID|CONNECTIONID\"();\n>>\n>> CREATE TRIGGER \"bucardo_triggerkick_MassSMs\"\n>> AFTER INSERT OR UPDATE OR DELETE OR TRUNCATE\n>> ON \"AWAITINGSTATUSSMPP\"\n>> FOR EACH STATEMENT\n>> EXECUTE PROCEDURE bucardo.\"bucardo_triggerkick_MassSMs\"();\n>> =====================================\n>>\n>> The table only has about 200 records because it is being used a \n>> temporary storage and records are constantly inserted and deleted.\n>> BUT please don't get hold on this fact, because as I already said, \n>> the speed problem is not restricted to this table. The same problems \n>> appear on the following query\n>> UPDATE \"MESSAGES\" SET \"SENT\" = \"SENT\" + 1 WHERE \"ID\" = 143447;\n>> and MESSAGES table has mainly inserts and few deletes...\n>>\n>> My postgresql.conf file :\n>> ======================\n>> port = 5433 # (change requires restart)\n>> max_connections = 100 # (change requires restart)\n>> shared_buffers = 256MB # min 128kB. DoubleIP - \n>> Default was 32MB\n>> synchronous_commit = off # immediate fsync at commit. \n>> DoubleIP - Default was on\n>> effective_cache_size = 512MB # DoubleIP - Default was 128MB\n>> log_destination = 'stderr' # Valid values are \n>> combinations of\n>> logging_collector = on # Enable capturing of stderr \n>> and csvlog\n>> silent_mode = on # Run server silently.\n>> log_line_prefix = '%t %d %u ' # special values:\n>> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all \n>> actions and\n>> autovacuum_naptime = 28800 # time between autovacuum \n>> runs. 
DoubleIP - default was 1min\n>> autovacuum_vacuum_threshold = 100 # min number of row updates \n>> before\n>> autovacuum_vacuum_scale_factor = 0.0 # fraction of table size \n>> before vacuum. DoubleIP - default was 0.2\n>> datestyle = 'iso, mdy'\n>> lc_messages = 'en_US.UTF-8' # locale for system \n>> error message\n>> lc_monetary = 'en_US.UTF-8' # locale for monetary \n>> formatting\n>> lc_numeric = 'en_US.UTF-8' # locale for number \n>> formatting\n>> lc_time = 'en_US.UTF-8' # locale for time \n>> formatting\n>> default_text_search_config = 'pg_catalog.english'\n>> =======================\n>>\n>> As you will see, I have altered the shared_buffers \n>> and synchronous_commit values.\n>> The shared_buffers had the default value 32Mb. When I changed it to \n>> 256Mb the problem still appears but it takes more time to appear (3-4 \n>> days). With 32MB, it appeared faster, probably after 24 hours.\n>> Also, I have changed the autovacuum daemon to work every 8 hours but \n>> I changed its values to make sure it vacuums pretty much all tables \n>> (the ones for which at least 100 rows have changed).\n>> Please note, though, that my problem existed even before playing \n>> around with the autovacuum. This is why I tried to change its values \n>> in the first place.\n>>\n>> The server is synchronized with another server using bucardo. Bucardo \n>> process is running on the other server.\n>> The same problem appears on the 2nd server too... 
after 3-4 days, \n>> postgres is running slower and slower.\n>>\n>> Our server configuration :\n>> DELL PowerEdge T610 Tower Chassis for Up to 8x 3.5\" HDDs\n>> 2x Intel Xeon E5520 Processor (2.26GHz, 8M Cache, 5.86 GT/s QPI, \n>> Turbo, HT), 1066MHz Max Memory\n>> 8GB Memory,1333MHz\n>> 2 x 146GB SAS 15k 3.5\" HD Hot Plug\n>> 6 x 1TB SATA 7.2k 3.5\" Additional HD Hot Plug\n>> PERC 6/i RAID Controller Card 256MB PCIe, 2x4 Connectors\n>> SUSE Linux Enterprise Server 10, SP2\n>>\n>> The 2 HDs are set up with RAID-1\n>> The 6 HDs are set up with RAID-5\n>>\n>> Linux is running on the RAID-1 configuration\n>> Postgres is running on the RAID-5 configuration\n>>\n>>\n>> Finally a top before and after the full vacuum :\n>> top - 11:27:44 up 72 days, 13:27, 37 users, load average: 1.05, \n>> 1.31, 1.45\n>> Tasks: 279 total, 3 running, 276 sleeping, 0 stopped, 0 zombie\n>> Cpu(s): 3.6%us, 0.8%sy, 0.0%ni, 95.5%id, 0.0%wa, 0.0%hi, \n>> 0.1%si, 0.0%st\n>> Mem: 8166432k total, 7963116k used, 203316k free, 115344k buffers\n>> Swap: 2097144k total, 2097020k used, 124k free, 2337636k cached\n>>\n>> top - 11:30:58 up 72 days, 13:31, 38 users, load average: 1.53, \n>> 1.59, 1.53\n>> Tasks: 267 total, 2 running, 265 sleeping, 0 stopped, 0 zombie\n>> Cpu(s): 1.3%us, 0.4%sy, 0.0%ni, 98.0%id, 0.3%wa, 0.0%hi, \n>> 0.1%si, 0.0%st\n>> Mem: 8166432k total, 6016268k used, 2150164k free, 61092k buffers\n>> Swap: 2097144k total, 2010204k used, 86940k free, 2262896k cached\n>>\n>>\n>> I hope I have provided enough info and hope that someone can point me \n>> to the correct direction.\n>>\n>>\n>> Thank you very much even for reading up to here !\n>>\n>> Best regards,\n>> Kiriakos\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Mon, 24 Sep 2012 14:21:09 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "I remember having a server with 8.4.4 where we had multiple problems with\nautovacuum.\nIf I am not mistaken there are some bugs related to vacuum until 8.4.7. \nI would suggest you upgrade to the latest 8.4.x version\n\nBR,\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Postgres-becoming-slow-only-full-vacuum-fixes-it-tp5725119p5725129.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Mon, 24 Sep 2012 05:23:13 -0700 (PDT)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "On Monday, September 24, 2012 02:21:09 PM Julien Cigar wrote:\n> 5) synchronous_commit = off should only be used if you have a \n> battery-backed write cache.\nHuh? Are you possibly confusing this with full_page_writes?\n\nGreetings,\n\nAndres\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 24 Sep 2012 14:34:56 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "On 09/24/2012 14:34, Andres Freund wrote:\n> On Monday, September 24, 2012 02:21:09 PM Julien Cigar wrote:\n>> 5) synchronous_commit = off should only be used if you have a\n>> battery-backed write cache.\n> Huh? Are you possibly confusing this with full_page_writes?\n\nindeed...! sorry for that\n(note that you still have a (very) small chance of losing data with \nsynchronous_commit = off if your server crashes between two \"commit chunks\")\n\n> Greetings,\n>\n> Andres\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Mon, 24 Sep 2012 14:53:59 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "On Monday, September 24, 2012 02:53:59 PM Julien Cigar wrote:\n> On 09/24/2012 14:34, Andres Freund wrote:\n> > On Monday, September 24, 2012 02:21:09 PM Julien Cigar wrote:\n> >> 5) synchronous_commit = off should only be used if you have a\n> >> battery-backed write cache.\n> > \n> > Huh? Are you possibly confusing this with full_page_writes?\n> \n> indeed...! sorry for that\n> (note that you still have a (very) small chance of losing data with\n> synchronous_commit = off if your server crashes between two \"commit\n> chunks\")\nSure, you have a chance of losing the last few transactions, but you won't \ncorrupt anything. That's the entire point of the setting ;)\n\nGreetings,\n\nAndres\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 24 Sep 2012 15:34:31 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Hi,\n\nThank you for your response.\nPlease find below my answers/comments.\n\n\nOn Sep 24, 2012, at 15:21, Julien Cigar wrote:\n\n> Hello,\n> \n> 1) upgrade your PostgreSQL installation, there have been numerous bugfixes releases since 8.4.2\nNot possible right now. It will have to be the last solution.\n> 2) you'll have to show us an explain analyze of the slow queries. If I take a look at those you provided everything run i less than 1ms.\nWill do so in a couple of days that it will get slow again.\n> 3) with 200 records you'll always have a seqscan\nDoes it really matter? I mean, with 200 records any query should be ultra fast. Right ?\n> 4) how much memory do you have ? shared_buffers = 256MB and effective_cache_size = 512MB looks OK only if you have between 1 and 2GB of RAM\nI have included the server specs and the results of top commands, showing that we have 8GB ram and how much memory is used/cached/swapped. Personally I don't quite understand the linux memory, but I have posted them hoping you may see something I don't.\n> 5) synchronous_commit = off should only be used if you have a battery-backed write cache.\nI agree with the comments that have followed my post. I have changed it, knowing there is a small risk, but hoping it will help our performance.\n> 6) autovacuum_naptime should be changed only if autovacuum is constantly running (so if you have dozen of databases in your cluster)\nAs I said, changing the autovacuum values have not changed the problem. So, you may as well consider that we have the default values for autovacuuming... the problem existed with the default values too.\n> 7) are you sure the problem isn't related to Bucardo ?\nNot at all sure... I have no idea. Can you suggest of a way to figure it out ?\n\n\nThank you\n",
"msg_date": "Mon, 24 Sep 2012 16:51:44 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "On 09/24/2012 15:51, Kiriakos Tsourapas wrote:\n> Hi,\n>\n> Thank you for your response.\n> Please find below my answers/comments.\n>\n>\n> On Sep 24, 2012, at 15:21, Julien Cigar wrote:\n>\n>> Hello,\n>>\n>> 1) upgrade your PostgreSQL installation, there have been numerous bugfixes releases since 8.4.2\n> Not possible right now. It will have to be the last solution.\n>> 2) you'll have to show us an explain analyze of the slow queries. If I take a look at those you provided everything run i less than 1ms.\n> Will do so in a couple of days that it will get slow again.\n>> 3) with 200 records you'll always have a seqscan\n> Does it really matter? I mean, with 200 records any query should be ultra fast. Right ?\n\nright..!\n\n>> 4) how much memory do you have ? shared_buffers = 256MB and effective_cache_size = 512MB looks OK only if you have between 1 and 2GB of RAM\n> I have included the server specs and the results of top commands, showing that we have 8GB ram and how much memory is used/cached/swapped. Personally I don't quite understand the linux memory, but I have posted them hoping you may see something I don't.\n\nwith 8GB of RAM I would start with shared_buffers to 1GB and \neffective_cache_size to 4GB. I would also change the default work_mem to \n32MB and maintenance_work_mem to 512MB\n\n>> 5) synchronous_commit = off should only be used if you have a battery-backed write cache.\n> I agree with the comments that have followed my post. I have changed it, knowing there is a small risk, but hoping it will help our performance.\n>> 6) autovacuum_naptime should be changed only if autovacuum is constantly running (so if you have dozen of databases in your cluster)\n> As I said, changing the autovacuum values have not changed the problem. So, you may as well consider that we have the default values for autovacuuming... the problem existed with the default values too.\n>> 7) are you sure the problem isn't related to Bucardo ?\n> Not at all sure... 
I have no idea. Can you suggest of a way to figure it out ?\n\nUnfortunately I never used Bucardo, but be sure that it's not a problem \nwith your network (and that you understand all the challenges involved \nin multi-master replication)\n\n>\n>\n> Thank you\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Mon, 24 Sep 2012 16:14:21 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
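The memory advice above (1GB shared_buffers and 4GB effective_cache_size on an 8GB box) tracks the common rules of thumb of roughly 25% of RAM for shared_buffers and 50-75% for effective_cache_size. A minimal sketch of that arithmetic, purely illustrative — the function name and exact percentages are assumptions, and real values should be tuned per workload:

```python
def suggest_memory_settings(ram_mb):
    """Rule-of-thumb PostgreSQL memory settings (in MB) for a dedicated host:
    shared_buffers ~25% of RAM, effective_cache_size ~50% (up to 75%)."""
    return {
        "shared_buffers": ram_mb // 4,        # ~25% of RAM
        "effective_cache_size": ram_mb // 2,  # ~50% of RAM, conservatively
    }

# The 8GB server from this thread: ~2GB shared_buffers, ~4GB effective_cache_size.
# (Julien's 1GB shared_buffers suggestion is on the conservative side of this.)
print(suggest_memory_settings(8192))
```

Note that effective_cache_size is only a planner hint; unlike shared_buffers it allocates nothing, so overshooting it is far less costly.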
{
"msg_contents": "Hi,\n\nOn 24 September 2012 20:33, Kiriakos Tsourapas <[email protected]> wrote:\n> The problem : Postgres is becoming slow, day after day, and only a full\n> vacuum fixes the problem.\n>\n> Information you may need to evaluate :\n>\n> The problem lies on all tables and queries, as far as I can tell, but we can\n> focus on a single table for better comprehension.\n>\n> The queries I am running to test the speed are :\n> INSERT INTO \"AWAITINGSTATUSSMPP\" VALUES('143428', '1111', 1, '2012-06-16\n> 13:39:19', '111');\n> DELETE FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND\n> \"CONNECTIONID\" = 1;\n> SELECT * FROM \"AWAITINGSTATUSSMPP\" WHERE \"SMSCMSGID\" = '1111' AND\n> \"CONNECTIONID\" = 1;\n>\n> After a full vacuum, they run in about 100ms.\n> Today, before the full vacuum, they were taking around 500ms.\n\nI had a similar issue and I disabled cost-based auto vacuum:\nautovacuum_vacuum_cost_delay = -1\n\n-1 says that vacuum_cost_delay will be used, and the default value for\nvacuum_cost_delay is 0 (ie. off)\n\nOf course you need to change other autovacuum settings but you did that.\n\n-- \nOndrej Ivanic\n([email protected])\n\n",
"msg_date": "Tue, 25 Sep 2012 07:43:30 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "On 24/09/12 22:33, Kiriakos Tsourapas wrote:\n> Hi,\n>\n> The problem : Postgres is becoming slow, day after day, and only a full vacuum fixes the problem.\n>\n>\n>\n> My postgresql.conf file :\n> ======================\n> port = 5433 # (change requires restart)\n> max_connections = 100 # (change requires restart)\n> shared_buffers = 256MB # min 128kB. DoubleIP - Default was 32MB\n> synchronous_commit = off # immediate fsync at commit. DoubleIP - Default was on\n> effective_cache_size = 512MB # DoubleIP - Default was 128MB\n> log_destination = 'stderr' # Valid values are combinations of\n> logging_collector = on # Enable capturing of stderr and csvlog\n> silent_mode = on # Run server silently.\n> log_line_prefix = '%t %d %u ' # special values:\n> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n> autovacuum_naptime = 28800 # time between autovacuum runs. DoubleIP - default was 1min\n> autovacuum_vacuum_threshold = 100 # min number of row updates before\n> autovacuum_vacuum_scale_factor = 0.0 # fraction of table size before vacuum. DoubleIP - default was 0.2\n> datestyle = 'iso, mdy'\n> lc_messages = 'en_US.UTF-8' # locale for system error message\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n> default_text_search_config = 'pg_catalog.english'\n>\n\nGiven that vacuum full fixes the issue I suspect you need to have \nautovacuum set to wake up much sooner, not later. So autovacuum_naptime = \n28800 or even = 60 (i.e. the default) is possibly too long. 
We have \nseveral databases here where I change this setting to 10, i.e:\n\nautovacuum_naptime = 10s\n\n\nin order to avoid massive database bloat and queries that get slower and \nslower...\n\nYou might want to be a bit *less* aggressive with \nautovacuum_vacuum_scale_factor - I usually have this at 0.1, i.e:\n\nautovacuum_vacuum_scale_factor = 0.1\n\n\notherwise you will be vacuuming all the time - which is usually not what \nyou want (not for all your tables anyway).\n\nregards\n\nMark\n\n",
"msg_date": "Tue, 25 Sep 2012 11:08:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
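The effect Mark describes can be made concrete with the documented autovacuum trigger condition: a table is vacuumed once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × reltuples. A small sketch of that formula (function name is illustrative):

```python
def autovacuum_trigger_point(reltuples, threshold=50, scale_factor=0.2):
    """Dead-tuple count at which autovacuum kicks in for a table, per the
    documented formula: threshold + scale_factor * reltuples."""
    return threshold + scale_factor * reltuples

# For a 2M-row table, the stock 0.2 scale factor waits for ~400k dead rows;
# Mark's suggested 0.1 halves that to ~200k.
print(autovacuum_trigger_point(2_000_000, scale_factor=0.2))
print(autovacuum_trigger_point(2_000_000, scale_factor=0.1))
```

This is why a single global scale factor is a compromise: on large tables even 0.1 lets substantial bloat accumulate between vacuums, while on tiny tables the fixed threshold dominates.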
{
"msg_contents": "Thank you,\n\nI will take this into consideration, since upgrading to 9 will be much harder I assume...\n\n\nOn Sep 24, 2012, at 15:23, MirrorX wrote:\n\n> i remember having a server with 8.4.4 where we had multiple problems with\n> autovacuum.\n> if i am not mistaken there are some bugs related with vacuum until 8.4.7. \n> i would suggest you to upgrade to the latest 8.4.x version\n\n\n",
"msg_date": "Tue, 25 Sep 2012 14:01:10 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Hi Mark,\n\nWhen the problem appears, vacuuming is not helping. I ran vacuum manually and the problem was still there. Only full vacuum worked.\n\nAs far as I have understood, autovacuuming is NOT doing FULL vacuum. So, messing around with its values should not help me in any way.\n\n\nThanks\n\n\n> \n> Given that vacuum full fixes the issue I suspect you need to have autovacuum set wake up much sooner, not later. So autovacuum_naptime = 28800 or even = 60 (i.e the default) is possibly too long. We have several database here where I change this setting to 10 i.e:\n> \n> autovacuum_naptime = 10s\n> \n> \n> in order to avoid massive database bloat and queries that get slower and slower...\n> \n> You might want to be a bit *less* aggressive with autovacuum_vacuum_scale_factor - I usually have this at 0.1, i.e:\n> \n> autovacuum_vacuum_scale_factor = 0.1\n> \n> \n> otherwise you will be vacuuming all the time - which is usually not what you want (not for all your tables anyway).\n> \n> regards\n> \n> Mark\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Tue, 25 Sep 2012 14:07:55 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Hi,\n\nSuggestion noted.\nNevertheless, I cannot imagine how it would help. Actually, the cost_delay makes autovacuum freeze when it takes more time than expected, therefore, having it enabled should help the system.\n\nI may try it as a last possible resolution (remember that I have to wait for a couple of days for the problem to appear, so any test I perform will be taking days to figure out if it helped !!!)\n\n\n> \n> I had similar issue and I disabled cost based auto vacuum:\n> autovacuum_vacuum_cost_delay = -1\n> \n> -1 says that vacuum_cost_delay will be used and default value for\n> vacuum_cost_delay is 0 (ie. off)\n> \n> Of course you need to change other autovacuum settings but you did that.\n> \n> -- \n> Ondrej Ivanic\n> ([email protected])\n\n\n",
"msg_date": "Tue, 25 Sep 2012 14:10:08 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Kiriakos Tsourapas, 25.09.2012 13:01:\n> Thank you,\n>\n> I will take this into consideration, since upgrading to 9 will be much harder I assume...\n>\n\nI think an upgrade from 8.3 to 8.4 was \"harder\" due to the removal of a lot of implicit type casts.\n8.4 to 9.x shouldn't be that problematic after all (but will take longer due to the required dump/reload)\n\n\n\n\n\n",
"msg_date": "Tue, 25 Sep 2012 13:24:04 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Hi,\n\nOn 25 September 2012 21:10, Kiriakos Tsourapas <[email protected]> wrote:\n> Suggestion noted.\n> Nevertheless, I cannot imagine what it would help. Actually, the cost_delay\n> makes autovacuum freeze when it takes more time than expected, therefore,\n> having it enabled should help the system.\n\nYes, and I think that \"freeze\" might be part of your problem. You can:\n- turn off cost-based auto vacuum\n- or properly set cost parameters: vacuum_cost_page_hit (1),\nvacuum_cost_page_miss (10), vacuum_cost_page_dirty (20) and\nvacuum_cost_limit (200)\n\nIn order to \"freeze\", ie. reach vacuum_cost_limit, auto vacuum needs to:\n- vacuum up to 200 buffers found in the shared buffer cache (200 /\nvacuum_cost_page_hit = 200)\n- or vacuum up to 20 buffers that have to be read from disk (200 /\nvacuum_cost_page_miss = 20)\n- or when vacuum modifies up to 10 blocks that were previously clean\n(200 / vacuum_cost_page_dirty = 10)\n\nBasically, you can fiddle with all three parameters until the cows\ncome home or just disable cost based auto vacuum. I think your\nconfiguration can handle aggressive auto vacuum.\n\n-- \nOndrej Ivanic\n([email protected])\n\n",
"msg_date": "Wed, 26 Sep 2012 09:14:06 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Dear all,\n\nI am taking your suggestions one step at a time.\n\nI changed my configuration to a much more aggressive autovacuum policy (0.5% for analyzing and 1% for autovacuum).\n\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 50\n#autovacuum_analyze_threshold = 50\nautovacuum_vacuum_scale_factor = 0.01\nautovacuum_analyze_scale_factor = 0.005\n\nI had tables with 180.000 record and another with 2M records, so the default values of 0.2 for autovacuum would mean that 18.000 and 200K records would have to change respectively, delaying the vacuum for many days.\n\nI will monitor for the next 2-3 days and post back the results.\n\n\nThank you all for your suggestions so far.\nKiriakos\n\n\n",
"msg_date": "Wed, 26 Sep 2012 12:41:46 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "On Tue, Sep 25, 2012 at 5:24 AM, Thomas Kellerer <[email protected]> wrote:\n> I think an upgrade from 8.3 to 8.4 was \"harder\" due to the removal of a lot\n> of implicit type casts.\n\nFYI that was from 8.2 to 8.3 that implicit casts were removed.\n\n",
"msg_date": "Sat, 29 Sep 2012 00:59:33 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "> -----Original Message-----\r\n> From: Thomas Kellerer [mailto:[email protected]]\r\n> Sent: Tuesday, September 25, 2012 7:24 AM\r\n> To: [email protected]\r\n> Subject: Re: Postgres becoming slow, only full vacuum fixes it\r\n> \r\n> Kiriakos Tsourapas, 25.09.2012 13:01:\r\n> > Thank you,\r\n> >\r\n> > I will take this into consideration, since upgrading to 9 will be\r\n> much harder I assume...\r\n> >\r\n> \r\n> I think an upgrade from 8.3 to 8.4 was \"harder\" due to the removal of a\r\n> lot of implicit type casts.\r\n> 8.4 to 9.x shouldn't be that problematic after all (but will take\r\n> longer due to the required dump/reload)\r\n> \r\n\r\nActually, 8.3 to 8.4 required db dump/restore.\r\nWhen upgrading from 8.4 to 9.x pg_upgrade could be used without dump/restore.\r\n\r\nRegards,\r\nIgor Neyman\r\n",
"msg_date": "Mon, 1 Oct 2012 14:09:25 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
}
] |
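The trigger point Kiriakos works out in the thread above follows the formula documented in the PostgreSQL manual: autovacuum processes a table once its dead tuples exceed `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples`. A quick sanity check of that arithmetic, using the row counts quoted in the thread (this is plain arithmetic run through SQL, not a recommendation):

```sql
-- Dead tuples needed before autovacuum fires, for the row counts
-- mentioned in the thread (base threshold 50 in all cases):
SELECT 50 + 0.2  * 180000   AS default_trigger_180k_rows,  -- 36050
       50 + 0.2  * 2000000  AS default_trigger_2m_rows,    -- 400050
       50 + 0.01 * 2000000  AS tuned_trigger_2m_rows;      -- 20050
```

With the default 0.2 scale factor a 2M-row table waits for 400K dead tuples between vacuums, which matches the multi-day bloat cycle described in the thread; the 0.01 setting triggers twenty times sooner.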
[
{
"msg_contents": "I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS 6.1) I'm executing a simple \"select ... for update\" query:\nSELECT\n\timporting\nFROM\n\tcustomer\nWHERE\n\tid = :customer_id\nFOR UPDATE NOWAIT\nOnce every 10 to 20 times Postgres fails to obtain the lock for no apparent reason:\n18:22:18,285 WARN [org.hibernate.util.JDBCExceptionReporter] SQL Error: 0, SQLState: 55P0318:22:18,285 ERROR [org.hibernate.util.JDBCExceptionReporter] ERROR: could not obtain lock on row in relation \"customer\"\nI'm \"pretty\" sure there's really no other process that has the lock, as I'm the only one on a test DB. If I execute the query immediately again, it does succeed in obtaining the lock. I can however not reproduce this via e.g. PGAdmin.\n\nIs it possible or perhaps even known that PG has this behavior, or should I look for the cause in the Java code? (I'm using Java EE\"s entity manager to execute a native query inside an EJB bean that lets a JDBC connection from a pool join a JTA transaction.)\nThanks!\n \t\t \t \t\t \n\n\n\n\nI'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS 6.1) I'm executing a simple \"select ... for update\" query:SELECT\n importing\nFROM\n customer\nWHERE\n id = :customer_id\nFOR UPDATE NOWAITOnce every 10 to 20 times Postgres fails to obtain the lock for no apparent reason:18:22:18,285 WARN [org.hibernate.util.JDBCExceptionReporter] SQL Error: 0, SQLState: 55P0318:22:18,285 ERROR [org.hibernate.util.JDBCExceptionReporter] ERROR: could not obtain lock on row in relation \"customer\"I'm \"pretty\" sure there's really no other process that has the lock, as I'm the only one on a test DB. If I execute the query immediately again, it does succeed in obtaining the lock. I can however not reproduce this via e.g. PGAdmin.Is it possible or perhaps even known that PG has this behavior, or should I look for the cause in the Java code? 
(I'm using Java EE\"s entity manager to execute a native query inside an EJB bean that lets a JDBC connection from a pool join a JTA transaction.)Thanks!",
"msg_date": "Mon, 24 Sep 2012 23:18:15 +0200",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Spurious failure to obtain row lock possible in PG 9.1?"
},
{
"msg_contents": "henk de wit wrote:\n> I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS\n6.1) I'm executing a simple\n> \"select ... for update\" query:\n> \n> \n> SELECT\n> \n> importing\n> \n> FROM\n> \n> customer\n> \n> WHERE\n> \n> id = :customer_id\n> \n> FOR UPDATE NOWAIT\n> \n> \n> Once every 10 to 20 times Postgres fails to obtain the lock for no\napparent reason:\n> \n> 18:22:18,285 WARN [org.hibernate.util.JDBCExceptionReporter] SQL\nError: 0, SQLState: 55P03\n> 18:22:18,285 ERROR [org.hibernate.util.JDBCExceptionReporter] ERROR:\ncould not obtain lock on row in\n> relation \"customer\"\n> \n> \n> I'm \"pretty\" sure there's really no other process that has the lock,\nas I'm the only one on a test DB.\n> If I execute the query immediately again, it does succeed in obtaining\nthe lock. I can however not\n> reproduce this via e.g. PGAdmin.\n> \n> \n> Is it possible or perhaps even known that PG has this behavior, or\nshould I look for the cause in the\n> Java code? (I'm using Java EE\"s entity manager to execute a native\nquery inside an EJB bean that lets\n> a JDBC connection from a pool join a JTA transaction.)\n\nThere must be at least a second database connection that holds\nlocks on the objects you need.\nLook in pg_stat_activity if you see other connections.\n\nIt is probably a race condition of some kind.\n\nTurn on logging og connections and disconnections.\nSet log_statement='all'\n\nThat way you should be able to see from the log entries\nwho issues what queries concurrently with you.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 25 Sep 2012 12:48:30 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Spurious failure to obtain row lock possible in PG 9.1?"
},
{
"msg_contents": "Hi there,\n\n> henk de wit wrote:\n> > I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS\n> > I'm \"pretty\" sure there's really no other process that has the lock,\n> as I'm the only one on a test DB.\n> > If I execute the query immediately again, it does succeed in obtaining\n> the lock. I can however not\n> > reproduce this via e.g. PGAdmin.\n> \n> \n> There must be at least a second database connection that holds\n> locks on the objects you need.\n> Look in pg_stat_activity if you see other connections.\n> \n> It is probably a race condition of some kind.\nIt indeed most likely was, but not exactly the kind of race condition I had in mind.\nI was (wrongfully) thinking that a \"... for update nowait\" lock, would only not wait for other \"... for update nowait\" locks. However, as it turned out it also immediately returns with the error code if there's a kind of transitive \"normal\" lock related to a plain insert or update elsewhere (plain = without a 'for update' clause).\nAs I was the only one on the Database, I was pretty sure there was no other \"... for update nowait\" query executing, but there *was* another parallel insert of a row that had a foreign key to the entry in the table I was trying to lock explicitly. That insert caused the lock in the other query to immediately fail. To me this was quite unexpected, but that's probably just me.\nWhat I thus actually need from PG is a \"nowaitforupdate\" or such thing; e.g. if there's a normal insert going on with a FK that happens to reference that row, it's okay to wait. The only thing I don't want to wait for is explicit locks that are hold by application code. I've worked around the issue by creating a separate table called \"customer_lock\" without any foreign keys from it or to it. It's used exclusively for obtaining those explicit locks. It violates the relational model a bit, but it does work.\nThanks for your help! 
\t\t \t \t\t \n\n\n\n\nHi there,> henk de wit wrote:> > I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS> > I'm \"pretty\" sure there's really no other process that has the lock,> as I'm the only one on a test DB.> > If I execute the query immediately again, it does succeed in obtaining> the lock. I can however not> > reproduce this via e.g. PGAdmin.> > > There must be at least a second database connection that holds> locks on the objects you need.> Look in pg_stat_activity if you see other connections.> > It is probably a race condition of some kind.It indeed most likely was, but not exactly the kind of race condition I had in mind.I was (wrongfully) thinking that a \"... for update nowait\" lock, would only not wait for other \"... for update nowait\" locks. However, as it turned out it also immediately returns with the error code if there's a kind of transitive \"normal\" lock related to a plain insert or update elsewhere (plain = without a 'for update' clause).As I was the only one on the Database, I was pretty sure there was no other \"... for update nowait\" query executing, but there *was* another parallel insert of a row that had a foreign key to the entry in the table I was trying to lock explicitly. That insert caused the lock in the other query to immediately fail. To me this was quite unexpected, but that's probably just me.What I thus actually need from PG is a \"nowaitforupdate\" or such thing; e.g. if there's a normal insert going on with a FK that happens to reference that row, it's okay to wait. The only thing I don't want to wait for is explicit locks that are hold by application code. I've worked around the issue by creating a separate table called \"customer_lock\" without any foreign keys from it or to it. It's used exclusively for obtaining those explicit locks. It violates the relational model a bit, but it does work.Thanks for your help!",
"msg_date": "Sat, 29 Sep 2012 14:40:35 +0200",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Spurious failure to obtain row lock possible in PG\n 9.1?"
}
] |
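The workaround henk describes, a side table that carries no foreign keys so that FK checks from concurrent inserts never lock its rows, can be sketched as follows (table and column names are illustrative, not taken from a real schema):

```sql
-- Surrogate lock table: nothing references it and it references nothing,
-- so the only locks ever taken on its rows are the explicit ones below.
CREATE TABLE customer_lock (
    customer_id integer PRIMARY KEY
);

BEGIN;
-- Fails immediately with SQLSTATE 55P03 only if another session holds
-- an explicit application-level lock on the same customer:
SELECT customer_id
  FROM customer_lock
 WHERE customer_id = 42
   FOR UPDATE NOWAIT;
-- ... perform the import work on the customer row ...
COMMIT;
```

For what it's worth, PostgreSQL 9.3 later added exactly the lock mode asked for here: foreign-key checks take `FOR KEY SHARE` row locks, which conflict with `FOR UPDATE` but not with `FOR NO KEY UPDATE`, so on 9.3+ a `SELECT ... FOR NO KEY UPDATE NOWAIT` avoids this conflict without a side table.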
[
{
"msg_contents": "Kiriakos Tsourapas wrote:\n\n> When the problem appears, vacuuming is not helping. I ran vacuum\n> manually and the problem was still there. Only full vacuum worked.\n> \n> As far as I have understood, autovacuuming is NOT doing FULL\n> vacuum. So, messing around with its values should not help me in\n> any way.\n\nThat is absolutely wrong. A regular vacuum, or autovacuum not\nhopelessly crippled by your configuration, will prevent the table\nbloat which is slowing things down. It does not, however, fix bloat\nonce it has occurred; a normal vacuum then is like closing the barn\ndoor after the horse has already bolted -- it would have prevented\nthe problem if done in time, but it won't cure it.\n\nA VACUUM FULL in version 8.4 will fix bloat of the table's heap, but\nwill tend to bloat the indexes. You should probably fix your\nautovcauum configuration (making it *at least* as aggressive as the\ndefault), CLUSTER the affected table(s) to fix both table and index\nbloat, and schedule an upgrade to the latest bug fix release of major\nversion 8.4.\n\nhttp://www.postgresql.org/support/versioning/\n\nMinor releases (where the version number only changes after the\nsecond dot) only contain fixes for bugs and security problems, and\nnever require a dump/load or pg_upgrade run. If you insist on\nrunning with known bugs, you can expect problems.\n\n-Kevin\n\n",
"msg_date": "Tue, 25 Sep 2012 08:40:55 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
}
] |
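To check whether autovacuum is actually keeping bloat in check, as Kevin recommends, the statistics views available in 8.4 already expose dead-tuple counts and the last time each table was vacuumed; a minimal check:

```sql
-- Tables with the most dead tuples, plus when they were last vacuumed.
-- A large, steadily growing n_dead_tup on a busy table means autovacuum
-- is configured too timidly (or keeps being interrupted).
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 20;
```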
[
{
"msg_contents": "Hi,\n\nI'm new here so i hope i don't do mistakes.\n\nI'm having a serious performance issue in postgresql.\n\nI have tables containing adresses with X,Y GPS coordinates and tables with\nzoning and square of gps coordinates.\n\nBasicly it looks like\n\nadresses_01 (id,X,Y)\ngps_01 (id,x_min,x_max,y_min,y_max).\n\n[code]\n\"\nSELECT\n t2.id,\nFROM\n tables_gps.gps_01 t1\nINNER JOIN\n tables_adresses.adresses_01 t2\nON\n t2.\"X\" BETWEEN t1.x_min AND t1.x_max AND t2.\"Y\" BETWEEN t1.y_min AND\nt1.y_max\nWHERE\n t2.id='0'\n\"\n[/code]\n\nI have something like 250000rows in each table.\n\nNow when i execute this on adresses_01 and gps_01, the request complete in a\nfew minutes.\nBut when doing it on adresses_02 and gps_02 (same number of rows\napproximately) the query takes 5hours.\n\nI have indexes on adresses on X,Y and an index in gps on\nx_min,y_min,x_max,y_max.\n\nNow i do updates in result of this query on ID (so i have an index on ID\ntoo).\n\nMy question is ... Why ? (;o). And also, do i need to use CLUSTER (i don't\nreally understand what it does). And if so. Do i need to CLUSTER the id ? Or\nthe X,Y index ?\n\nIt may be not really clear so just ask questions if you don't get when i\nmean or if you need specs or anything. I just moved from MySql to PostgreSql\nlast month.\n\nThanks in advance :)\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 26 Sep 2012 05:27:26 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Same query doing slow then quick"
},
{
"msg_contents": "Here is the answer to Ray Stell who send me the wiki page of Slow Query. I\nhope i detailed all you wanted (i basicly pasted the page and add my\nanswers).\n\nFull Table and Index Schema: \n\nschema tables_adresses\n\"Tables\"\ntables_adresses.adresses_XX (id (serial), X(Double precision),Y (Double\nprecision)).\n\"Indexes\"\nadresses_XX_pkey (Primary key, btree)\ncalcul_XX (non unique, Btree on X,Y)\n\nschema tables_gps\n\"Tables\"\ntables_gps.gps_XX (id (int),x_max(numeric(10,5)), y_max\n(numeric(10,5)),x_min(numeric(10,5)),y_min(numeric(10,5)))\n\"Indexes\"\ncalculs_XX (non unique Btree x_min,x_max,y_min,y_max)\ngps_10_pkey (Primary key on id btree)\n\nApproximate rows 250000.\nNo large objects in it (just data)\nNo NULL\nreceives a large number of UPDATEs or DELETEs regularly\nis growing daily\n\nI can't post an EXPLAIN ANALYZE because of the 6hour query time.\n\nPostgres version: 9.1\n\nHistory: was this query always slow, : \"YES\"\n\nHardware: Ubuntu server last version 32bits\n\nDaily VACUUM FULL ANALYZE, REINDEX TABLE on all the tables.\n\nWAL Configuration: Whats a WAL ?\n\nGUC Settings: i didn't change anything. All is standard.\n\nshared_buffers should be 10% to 25% of available RAM (it's on 24MB and can't\ngo higher. The server has 4Gb)\n \neffective_cache_size should be 75% of available RAM => I don't now what this\nis.\n\nTest changing work_mem: increase it to 8MB, 32MB, 256MB, 1GB. Does it make a\ndifference? \"No\"\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725491.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 26 Sep 2012 06:03:49 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "On 09/26/2012 15:03, FFW_Rude wrote:\n> Here is the answer to Ray Stell who send me the wiki page of Slow Query. I\n> hope i detailed all you wanted (i basicly pasted the page and add my\n> answers).\n>\n> Full Table and Index Schema:\n>\n> schema tables_adresses\n> \"Tables\"\n> tables_adresses.adresses_XX (id (serial), X(Double precision),Y (Double\n> precision)).\n> \"Indexes\"\n> adresses_XX_pkey (Primary key, btree)\n> calcul_XX (non unique, Btree on X,Y)\n>\n> schema tables_gps\n> \"Tables\"\n> tables_gps.gps_XX (id (int),x_max(numeric(10,5)), y_max\n> (numeric(10,5)),x_min(numeric(10,5)),y_min(numeric(10,5)))\n> \"Indexes\"\n> calculs_XX (non unique Btree x_min,x_max,y_min,y_max)\n> gps_10_pkey (Primary key on id btree)\n>\n> Approximate rows 250000.\n> No large objects in it (just data)\n> No NULL\n> receives a large number of UPDATEs or DELETEs regularly\n> is growing daily\n>\n> I can't post an EXPLAIN ANALYZE because of the 6hour query time.\n>\n> Postgres version: 9.1\n>\n> History: was this query always slow, : \"YES\"\n>\n> Hardware: Ubuntu server last version 32bits\n>\n> Daily VACUUM FULL ANALYZE, REINDEX TABLE on all the tables.\n>\n> WAL Configuration: Whats a WAL ?\n>\n> GUC Settings: i didn't change anything. All is standard.\n>\n> shared_buffers should be 10% to 25% of available RAM (it's on 24MB and can't\n> go higher. The server has 4Gb)\n>\n> effective_cache_size should be 75% of available RAM => I don't now what this\n> is.\n\nbefore looking further, please configure shared_buffers and \neffective_cache_size properly, it's fundamental\nyou'll probably need to raise SHMALL/SHMMAX, take a look at: \nhttp://www.postgresql.org/docs/current/static/kernel-resources.html\nfor 4GB of RAM I start with shared_buffers to 512MB and \neffective_cache_size to 2GB\n\n> Test changing work_mem: increase it to 8MB, 32MB, 256MB, 1GB. Does it make a\n> difference? 
\"No\"\n\ndefault work_mem is very small, set it to something like 16MB\n\n>\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725491.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 26 Sep 2012 15:21:38 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "Hi,\nThank you for your answer.\nIt was already at 16MB and i upped it just this morning to 64MB. Still no change\n\nRude - Last Territory\nOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781\t (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\nOu acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\n\nDate: Wed, 26 Sep 2012 06:22:35 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Same query doing slow then quick\n\n\n\n\tOn 09/26/2012 15:03, FFW_Rude wrote:\n\n> Here is the answer to Ray Stell who send me the wiki page of Slow Query. I\n\n> hope i detailed all you wanted (i basicly pasted the page and add my\n\n> answers).\n\n>\n\n> Full Table and Index Schema:\n\n>\n\n> schema tables_adresses\n\n> \"Tables\"\n\n> tables_adresses.adresses_XX (id (serial), X(Double precision),Y (Double\n\n> precision)).\n\n> \"Indexes\"\n\n> adresses_XX_pkey (Primary key, btree)\n\n> calcul_XX (non unique, Btree on X,Y)\n\n>\n\n> schema tables_gps\n\n> \"Tables\"\n\n> tables_gps.gps_XX (id (int),x_max(numeric(10,5)), y_max\n\n> (numeric(10,5)),x_min(numeric(10,5)),y_min(numeric(10,5)))\n\n> \"Indexes\"\n\n> calculs_XX (non unique Btree x_min,x_max,y_min,y_max)\n\n> gps_10_pkey (Primary key on id btree)\n\n>\n\n> Approximate rows 250000.\n\n> No large objects in it (just data)\n\n> No NULL\n\n> receives a large number of UPDATEs or DELETEs regularly\n\n> is growing daily\n\n>\n\n> I can't post an EXPLAIN ANALYZE because of the 6hour query time.\n\n>\n\n> Postgres version: 9.1\n\n>\n\n> History: was this query always slow, : \"YES\"\n\n>\n\n> Hardware: Ubuntu server last version 32bits\n\n>\n\n> Daily VACUUM FULL ANALYZE, REINDEX TABLE on all the tables.\n\n>\n\n> WAL Configuration: Whats a WAL ?\n\n>\n\n> GUC Settings: i didn't 
change anything. All is standard.\n\n>\n\n> shared_buffers should be 10% to 25% of available RAM (it's on 24MB and can't\n\n> go higher. The server has 4Gb)\n\n>\n\n> effective_cache_size should be 75% of available RAM => I don't now what this\n\n> is.\nbefore looking further, please configure shared_buffers and \n\neffective_cache_size properly, it's fundamental\n\nyou'll probably need to raise SHMALL/SHMMAX, take a look at: \n\nhttp://www.postgresql.org/docs/current/static/kernel-resources.html\nfor 4GB of RAM I start with shared_buffers to 512MB and \n\neffective_cache_size to 2GB\n\n\n> Test changing work_mem: increase it to 8MB, 32MB, 256MB, 1GB. Does it make a\n\n> difference? \"No\"\n\n\ndefault work_mem is very small, set it to something like 16MB\n\n\n>\n\n>\n\n>\n\n> --\n\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725491.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n>\n\n>\n\n\n\n-- \n\nNo trees were killed in the creation of this message.\n\nHowever, many electrons were terribly inconvenienced.\n\n\n\n\n-- \n\nSent via pgsql-performance mailing list ([hidden email])\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n jcigar.vcf (304 bytes) Download Attachment\n\n\t\n\t\n\t\n\t\n\n\t\n\n\t\n\t\n\t\tIf you reply to this email, your message will be added to the discussion below:\n\t\thttp://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725493.html\n\t\n\t\n\t\t\n\t\tTo unsubscribe from Same query doing slow then quick, click here.\n\n\t\tNAML\n\t \t\t \t \t\t \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725495.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\nHi,Thank you for your answer.It was already at 16MB and i upped it just this morning to 64MB. 
Still no changeRude - Last TerritoryOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)Ou acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\nDate: Wed, 26 Sep 2012 06:22:35 -0700From: [hidden email]To: [hidden email]Subject: Re: Same query doing slow then quick\n\n\tOn 09/26/2012 15:03, FFW_Rude wrote:\n> Here is the answer to Ray Stell who send me the wiki page of Slow Query. I\n> hope i detailed all you wanted (i basicly pasted the page and add my\n> answers).\n>\n> Full Table and Index Schema:\n>\n> schema tables_adresses\n> \"Tables\"\n> tables_adresses.adresses_XX (id (serial), X(Double precision),Y (Double\n> precision)).\n> \"Indexes\"\n> adresses_XX_pkey (Primary key, btree)\n> calcul_XX (non unique, Btree on X,Y)\n>\n> schema tables_gps\n> \"Tables\"\n> tables_gps.gps_XX (id (int),x_max(numeric(10,5)), y_max\n> (numeric(10,5)),x_min(numeric(10,5)),y_min(numeric(10,5)))\n> \"Indexes\"\n> calculs_XX (non unique Btree x_min,x_max,y_min,y_max)\n> gps_10_pkey (Primary key on id btree)\n>\n> Approximate rows 250000.\n> No large objects in it (just data)\n> No NULL\n> receives a large number of UPDATEs or DELETEs regularly\n> is growing daily\n>\n> I can't post an EXPLAIN ANALYZE because of the 6hour query time.\n>\n> Postgres version: 9.1\n>\n> History: was this query always slow, : \"YES\"\n>\n> Hardware: Ubuntu server last version 32bits\n>\n> Daily VACUUM FULL ANALYZE, REINDEX TABLE on all the tables.\n>\n> WAL Configuration: Whats a WAL ?\n>\n> GUC Settings: i didn't change anything. All is standard.\n>\n> shared_buffers should be 10% to 25% of available RAM (it's on 24MB and can't\n> go higher. 
The server has 4Gb)\n>\n> effective_cache_size should be 75% of available RAM => I don't now what this\n> is.\nbefore looking further, please configure shared_buffers and \neffective_cache_size properly, it's fundamental\nyou'll probably need to raise SHMALL/SHMMAX, take a look at: \nhttp://www.postgresql.org/docs/current/static/kernel-resources.htmlfor 4GB of RAM I start with shared_buffers to 512MB and \neffective_cache_size to 2GB\n> Test changing work_mem: increase it to 8MB, 32MB, 256MB, 1GB. Does it make a\n> difference? \"No\"\ndefault work_mem is very small, set it to something like 16MB\n>\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725491.html> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n-- \nSent via pgsql-performance mailing list ([hidden email])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance jcigar.vcf (304 bytes) Download Attachment\n\n\n\n\nIf you reply to this email, your message will be added to the discussion below:\nhttp://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725493.html\n\n\n\t\t\n\t\tTo unsubscribe from Same query doing slow then quick, click here.\nNAML\n \n\nView this message in context: RE: Same query doing slow then quick\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 26 Sep 2012 06:36:45 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "On 09/26/2012 15:36, FFW_Rude wrote:\n> Hi,\n>\n> Thank you for your answer.\n>\n> It was already at 16MB and i upped it just this morning to 64MB. Still \n> no change\n>\n\nthat's normal, please configure shared_buffers and effective_cache_size \nproperly\n\n> Rude - Last Territory\n>\n> *Ou écouter ?*\n> http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 \n> (Post-apocalyptic Metal)\n> http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\n>\n> *Ou acheter ?*\n> /La Fnac/\n> http://recherche.fnac.com/fmia14622213/Last-Territory\n> http://recherche.fnac.com/fmia14770622/Rude-Undertaker\n>\n> /iTunes/\n> http://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4 \n>\n>\n>\n> ------------------------------------------------------------------------\n> Date: Wed, 26 Sep 2012 06:22:35 -0700\n> From: [hidden email] </user/SendEmail.jtp?type=node&node=5725495&i=0>\n> To: [hidden email] </user/SendEmail.jtp?type=node&node=5725495&i=1>\n> Subject: Re: Same query doing slow then quick\n>\n> On 09/26/2012 15:03, FFW_Rude wrote:\n>\n> > Here is the answer to Ray Stell who send me the wiki page of Slow \n> Query. 
I\n> > hope i detailed all you wanted (i basicly pasted the page and add my\n> > answers).\n> >\n> > Full Table and Index Schema:\n> >\n> > schema tables_adresses\n> > \"Tables\"\n> > tables_adresses.adresses_XX (id (serial), X(Double precision),Y (Double\n> > precision)).\n> > \"Indexes\"\n> > adresses_XX_pkey (Primary key, btree)\n> > calcul_XX (non unique, Btree on X,Y)\n> >\n> > schema tables_gps\n> > \"Tables\"\n> > tables_gps.gps_XX (id (int),x_max(numeric(10,5)), y_max\n> > (numeric(10,5)),x_min(numeric(10,5)),y_min(numeric(10,5)))\n> > \"Indexes\"\n> > calculs_XX (non unique Btree x_min,x_max,y_min,y_max)\n> > gps_10_pkey (Primary key on id btree)\n> >\n> > Approximate rows 250000.\n> > No large objects in it (just data)\n> > No NULL\n> > receives a large number of UPDATEs or DELETEs regularly\n> > is growing daily\n> >\n> > I can't post an EXPLAIN ANALYZE because of the 6hour query time.\n> >\n> > Postgres version: 9.1\n> >\n> > History: was this query always slow, : \"YES\"\n> >\n> > Hardware: Ubuntu server last version 32bits\n> >\n> > Daily VACUUM FULL ANALYZE, REINDEX TABLE on all the tables.\n> >\n> > WAL Configuration: Whats a WAL ?\n> >\n> > GUC Settings: i didn't change anything. All is standard.\n> >\n> > shared_buffers should be 10% to 25% of available RAM (it's on 24MB \n> and can't\n> > go higher. The server has 4Gb)\n> >\n> > effective_cache_size should be 75% of available RAM => I don't now \n> what this\n> > is.\n> before looking further, please configure shared_buffers and\n> effective_cache_size properly, it's fundamental\n> you'll probably need to raise SHMALL/SHMMAX, take a look at:\n> http://www.postgresql.org/docs/current/static/kernel-resources.html\n> for 4GB of RAM I start with shared_buffers to 512MB and\n> effective_cache_size to 2GB\n>\n> > Test changing work_mem: increase it to 8MB, 32MB, 256MB, 1GB. Does \n> it make a\n> > difference? 
\"No\"\n>\n> default work_mem is very small, set it to something like 16MB\n>\n> >\n> >\n> >\n> > --\n> > View this message in context: \n> http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725491.html\n> > Sent from the PostgreSQL - performance mailing list archive at \n> Nabble.com.\n> >\n> >\n>\n>\n> -- \n> No trees were killed in the creation of this message.\n> However, many electrons were terribly inconvenienced.\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([hidden email] \n> <http:///user/SendEmail.jtp?type=node&node=5725493&i=0>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> *jcigar.vcf* (304 bytes) Download Attachment \n> <http://postgresql.1045698.n5.nabble.com/attachment/5725493/0/jcigar.vcf>\n>\n>\n> ------------------------------------------------------------------------\n> If you reply to this email, your message will be added to the \n> discussion below:\n> http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725493.html \n>\n> To unsubscribe from Same query doing slow then quick, click here.\n> NAML \n> <http://postgresql.1045698.n5.nabble.com/template/NamlServlet.jtp?macro=macro_viewer&id=instant_html%21nabble:email.naml&base=nabble.naml.namespaces.BasicNamespace-nabble.view.web.template.NabbleNamespace-nabble.view.web.template.NodeNamespace&breadcrumbs=notify_subscribers%21nabble:email.naml-instant_emails%21nabble:email.naml-send_instant_email%21nabble:email.naml> \n>\n>\n> ------------------------------------------------------------------------\n> View this message in context: RE: Same query doing slow then quick \n> <http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725495.html>\n> Sent from the PostgreSQL - performance mailing list archive \n> <http://postgresql.1045698.n5.nabble.com/PostgreSQL-performance-f2050081.html> \n> at Nabble.com.\n\n\n-- \nNo trees were killed in the 
creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 26 Sep 2012 15:49:22 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "Thank for you answer.\n\nshared_buffer is at 24Mb\neffective_cache_size at 2048Mb\n\nWhat do you mean properly ? That's not really helping a novice...\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725505.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 26 Sep 2012 07:14:19 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "On 09/26/2012 16:14, FFW_Rude wrote:\n> Thank for you answer.\n>\n> shared_buffer is at 24Mb\n> effective_cache_size at 2048Mb\n>\n> What do you mean properly ? That's not really helping a novice...\n>\n\nfrom my previous mail:\n\nbefore looking further, please configure shared_buffers and \neffective_cache_size properly, it's fundamental\nyou'll probably need to raise SHMALL/SHMMAX, take a look at: \nhttp://www.postgresql.org/docs/current/static/kernel-resources.html\nfor 4GB of RAM I would start with shared_buffers to 512MB and \neffective_cache_size to 2GB\n\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725505.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 26 Sep 2012 16:17:19 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "My bad. Did not see that part.\nI tried to raise the buffers and SHMMAX was a problem. I'll give it another try and will keep you posted.\nThank you.\n\nRude - Last Territory\n\nOu écouter ?\nhttp://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)\nhttp://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\n\nOu acheter ?\nLa Fnac\nhttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTunes\nhttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\nDate: Wed, 26 Sep 2012 07:17:56 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Same query doing slow then quick\n\n> before looking further, please configure shared_buffers and\n> effective_cache_size properly, it's fundamental\n> you'll probably need to raise SHMALL/SHMMAX, take a look at:\n> http://www.postgresql.org/docs/current/static/kernel-resources.html\n> for 4GB of RAM I would start with shared_buffers to 512MB and\n> effective_cache_size to 2GB\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725508.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 26 Sep 2012 07:19:44 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "Ok, done to 512Mb and 2048Mb.\nI'm relaunching. See you in a few hours (so tomorrow).\n\nRude - Last Territory\n\nOu écouter ?\nhttp://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)\nhttp://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\n\nOu acheter ?\nLa Fnac\nhttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTunes\nhttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\nDate: Wed, 26 Sep 2012 07:17:56 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Same query doing slow then quick\n\n> before looking further, please configure shared_buffers and\n> effective_cache_size properly, it's fundamental\n> you'll probably need to raise SHMALL/SHMMAX, take a look at:\n> http://www.postgresql.org/docs/current/static/kernel-resources.html\n> for 4GB of RAM I would start with shared_buffers to 512MB and\n> effective_cache_size to 2GB\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725518.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 26 Sep 2012 07:41:34 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "On 09/26/2012 16:41, FFW_Rude wrote:\n> Ok done to 512Mb and 2048Mb\n>\n> I'm relaunching. See you in a few hours (so tommorrow)\n\nwith 250 000 rows and proper indexes it should run in less than a second.\nbe sure your indexes are set properly and that they're used (use EXPLAIN\nANALYZE for that) within your query ...\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 26 Sep 2012 16:52:33 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "It sure does not take less than a second :(\n37 minutes in and no results. I'm gonna wait until the end to see the result of the explain.\n\nRude - Last Territory\n\nOu écouter ?\nhttp://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)\nhttp://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\n\nOu acheter ?\nLa Fnac\nhttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTunes\nhttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\nDate: Wed, 26 Sep 2012 08:07:08 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Same query doing slow then quick\n\n> with 250 000 rows and proper indexes it should run in less than a\n> second.\n> be sure your indexes are set properly and that they're used (use\n> EXPLAIN ANALYZE for that) within your query ...\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725527.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 26 Sep 2012 08:18:08 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "Hi, FFW_Rude\n\n1. Benchmark the device with your PostgreSQL DB:\n\n# hdparm -tT /dev/sda\n\n/dev/sda:\n Timing cached reads: 6604 MB in 2.00 seconds = 3303.03 MB/sec\n Timing buffered disk reads: 1316 MB in 3.00 seconds = 438.18 MB/sec\n\n\n2. Benchmark your PostgreSQL with pgbench:\n\nSet \"fsync = off\" on /var/lib/pgsql/data/postgresql.conf\n# /etc/init.d/postgresql restart\n\n# su - postgres\n$ psql\n# create database pgbench;\n# \\q\n# pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\ntps = 5670.635648 (including connections establishing)\ntps = 5673.630345 (excluding connections establishing)\n\nSet \"fsync = on\" on /var/lib/pgsql/data/postgresql.conf\n# /etc/init.d/postgresql restart\n\n\n--\nWith best regards,\nNikolay",
"msg_date": "Wed, 26 Sep 2012 18:33:29 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "Hi,\n\nroot@testBI:/etc/postgresql/9.1/main# hdparm -tT /dev/sda\n/dev/sda:\n Timing cached reads: 892 MB in 2.01 seconds = 444.42 MB/sec\n Timing buffered disk reads: 190 MB in 3.02 seconds = 62.90 MB/sec\n\nIs fsync off by default ? I have #fsync = on (so it's off, right ?).\npgbench is not found on my server. Do i have to apt-get install pgbench ?\n\nRude - Last Territory\n\nOu écouter ?\nhttp://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)\nhttp://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\n\nOu acheter ?\nLa Fnac\nhttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTunes\nhttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\nDate: Wed, 26 Sep 2012 08:34:06 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Same query doing slow then quick\n\n> 1. Benchmark the device with your PostgreSQL DB:\n> # hdparm -tT /dev/sda\n>\n> 2. Benchmark your PostgreSQL with pgbench:\n> Set \"fsync = off\" on /var/lib/pgsql/data/postgresql.conf\n> # /etc/init.d/postgresql restart\n> # pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725536.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 26 Sep 2012 08:38:58 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "On 26 September 2012 18:38, FFW_Rude <[email protected]> wrote:\n> root@testBI:/etc/postgresql/9.1/main# hdparm -tT /dev/sda\n> Timing cached reads: 892 MB in 2.01 seconds = 444.42 MB/sec\n> Timing buffered disk reads: 190 MB in 3.02 seconds = 62.90 MB/sec\n\nIt's OK for single HDD.\n\n\n> Is fsync off by default ? I have\n> #fsync = on (so it's off right ?).\n\nDisable fsync for pgbench temporarily.\n\n\n> pgbench is not found on my server. Do i have to apt-get install pgbench ?\n\nInstall the postgresql-contrib deb:\nhttp://pkgs.org/download/postgresql-contrib\n\n# sudo apt-get update\n# sudo apt-get install postgresql-contrib\n\n\n--\nWith best regards,\nNikolay\n\n",
"msg_date": "Wed, 26 Sep 2012 18:52:27 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
    "msg_contents": "Ok i'm installing. Can't stop the server right now. I'm gonna have to get back to you tomorrow afternoon (have other tasks that need to run from now until tomorrow by 1pm)\n\nRude - Last Territory\n\nOu écouter ?\nhttp://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)\nhttp://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\n\nOu acheter ?\nLa Fnac\nhttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTunes\nhttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\n> Date: Wed, 26 Sep 2012 18:52:27 +0300\n> Subject: Re: [PERFORM] Same query doing slow then quick\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n>\n> On 26 September 2012 18:38, FFW_Rude <[email protected]> wrote:\n> > root@testBI:/etc/postgresql/9.1/main# hdparm -tT /dev/sda\n> > Timing cached reads: 892 MB in 2.01 seconds = 444.42 MB/sec\n> > Timing buffered disk reads: 190 MB in 3.02 seconds = 62.90 MB/sec\n>\n> It's OK for single HDD.\n>\n> > Is fsync off by default ? I have\n> > #fsync = on (so it's off right ?).\n>\n> Disable fsync for pgbench temporarily.\n>\n> > pgbench is not found on my server. Do i have to apt-get install pgbench ?\n>\n> Install the postgresql-contrib deb:\n> http://pkgs.org/download/postgresql-contrib\n>\n> # sudo apt-get update\n> # sudo apt-get install postgresql-contrib\n>\n> --\n> With best regards,\n> Nikolay",
"msg_date": "Wed, 26 Sep 2012 18:05:11 +0200",
"msg_from": "Undertaker Rude <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "so installing postgresql-contrib stopped my server and i don't have pgbench in it. It is still pgbench command not found...\nCould you explain what you are asking me to do because i don't really know what i'm doing...\n\nRude - Last Territory\nOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781\t (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\nOu acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\n\nDate: Wed, 26 Sep 2012 08:53:29 -0700\nFrom: [email protected]\nTo: [email protected]\nSubject: Re: Same query doing slow then quick\n\n\n\n\tOn 26 September 2012 18:38, FFW_Rude <[hidden email]> wrote:\n\n> root@testBI:/etc/postgresql/9.1/main# hdparm -tT /dev/sda\n\n> Timing cached reads: 892 MB in 2.01 seconds = 444.42 MB/sec\n\n> Timing buffered disk reads: 190 MB in 3.02 seconds = 62.90 MB/sec\n\n\nIt's OK for single HDD.\n\n\n\n> Is fsync off by default ? I have\n\n> #fsync = on (so it's off right ?).\n\n\nDisable fsync for pgbench temporarily.\n\n\n\n> pgbench is not found on my server. 
Do i have to apt-get install pgbench ?\n\n\nInstall the postgresql-contrib deb:\n\nhttp://pkgs.org/download/postgresql-contrib\n\n# sudo apt-get update\n\n# sudo apt-get install postgresql-contrib\n\n\n\n--\n\nWith best regards,\n\nNikolay\n\n\n\n-- \n\nSent via pgsql-performance mailing list ([hidden email])\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\t\n\t\n\t\n\t\n\n\t\n\n\t\n\t\n\t\tIf you reply to this email, your message will be added to the discussion below:\n\t\thttp://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725542.html\n\t\n\t\n\t\t\n\t\tTo unsubscribe from Same query doing slow then quick, click here.\n\n\t\tNAML\n\t \t\t \t \t\t \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725549.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\nso installing postgresql-contrib stopped my server and i don't have pgbench in it. 
It is still pgbench command not found...Could you explain what you are asking me to do because i don't really know what i'm doing...Rude - Last TerritoryOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)Ou acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\nDate: Wed, 26 Sep 2012 08:53:29 -0700From: [hidden email]To: [hidden email]Subject: Re: Same query doing slow then quick\n\n\tOn 26 September 2012 18:38, FFW_Rude <[hidden email]> wrote:\n> root@testBI:/etc/postgresql/9.1/main# hdparm -tT /dev/sda\n> Timing cached reads: 892 MB in 2.01 seconds = 444.42 MB/sec\n> Timing buffered disk reads: 190 MB in 3.02 seconds = 62.90 MB/sec\nIt's OK for single HDD.\n> Is fsync off by default ? I have\n> #fsync = on (so it's off right ?).\nDisable fsync for pgbench temporarily.\n> pgbench is not found on my server. Do i have to apt-get install pgbench ?\nInstall the postgresql-contrib deb:\nhttp://pkgs.org/download/postgresql-contrib# sudo apt-get update\n# sudo apt-get install postgresql-contrib\n--\nWith best regards,\nNikolay\n-- \nSent via pgsql-performance mailing list ([hidden email])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nIf you reply to this email, your message will be added to the discussion below:\nhttp://postgresql.1045698.n5.nabble.com/Same-query-doing-slow-then-quick-tp5725486p5725542.html\n\n\n\t\t\n\t\tTo unsubscribe from Same query doing slow then quick, click here.\nNAML\n \n\nView this message in context: RE: Same query doing slow then quick\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Wed, 26 Sep 2012 09:09:17 -0700 (PDT)",
"msg_from": "FFW_Rude <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "On 26 September 2012 19:09, FFW_Rude <[email protected]> wrote:\n> Could you explain what you are asking me to do because i don't really know\n> what i'm doing...\n\npostgresql-contrib packages contains pgbench tool on Ubuntu.\n\nFor example postgresql-contrib-9.1_9.1.3-2_i386.deb on Ubuntu 12.04 contains:\n/usr/lib/postgresql/9.1/bin/pgbench\n\n\n> i don't have pgbench in it. It is still pgbench command not found...\n\nYou need to run pgbench as postgres user.\nFor example on CentOS:\n# su - postgres\n$ pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n\n\n--\nWith best regards,\nNikolay\n\n",
"msg_date": "Wed, 26 Sep 2012 19:30:30 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "Oh ok. But what is this command doing ? i'm gonna runn it today. I'll keep you posted. Here is some EXPLAIN ANALYZE from the querys :\n\nNested Loop (cost=0.00..353722.89 rows=124893 width=16) (actual time=261158.061..10304193.501 rows=99 loops=1) Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <= (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision) AND (t2.\"Y\" <= (t1.y_max)::double precision)) -> Seq Scan on gps_22 t1 (cost=0.00..3431.80 rows=177480 width=44) (actual time=0.036..1399.621 rows=177480 loops=1) -> Materialize (cost=0.00..20572.83 rows=57 width=20) (actual time=0.012..10.274 rows=2924 loops=177480) -> Seq Scan on adresses_22 t2 (cost=0.00..20572.55 rows=57 width=20) (actual time=1570.240..1726.376 rows=2924 loops=1) Filter: ((id_maille_200m)::text = '0'::text)Total runtime: 10304211.648 ms\n\nNested Loop (cost=0.00..88186069.17 rows=33397899 width=16) (actual time=3060.373..3060.373 rows=0 loops=1) Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <= (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision) AND (t2.\"Y\" <= (t1.y_max)::double precision)) -> Seq Scan on gps_31 t1 (cost=0.00..3096.38 rows=161738 width=44) (actual time=4.612..442.935 rows=161738 loops=1) -> Materialize (cost=0.00..12562.25 rows=16726 width=20) (actual time=0.012..0.012 rows=0 loops=161738) -> Seq Scan on adresses_31 t2 (cost=0.00..12478.62 rows=16726 width=20) (actual time=1504.082..1504.082 rows=0 loops=1) Filter: ((id_maille_200m)::text = '0'::text)Total runtime: 3060.469 ms\nNested Loop (cost=0.00..84287659.70 rows=31920943 width=64) (actual time=220198.891..32665395.631 rows=21409 loops=1) Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <= (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision) AND (t2.\"Y\" <= (t1.y_max)::double precision)) -> Seq Scan on gps_67 t1 (cost=0.00..2350.55 rows=121555 width=44) (actual 
time=0.038..1570.994 rows=121555 loops=1) -> Materialize (cost=0.00..14072.09 rows=21271 width=20) (actual time=0.001..34.394 rows=22540 loops=121555) -> Seq Scan on adresses_67 t2 (cost=0.00..13965.74 rows=21271 width=20) (actual time=0.032..1283.087 rows=22540 loops=1) Filter: ((id_maille_200m)::text = '0'::text)Total runtime: 32665478.631 ms\n\nRude - Last Territory\nOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781\t (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\nOu acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\n\n> Date: Wed, 26 Sep 2012 19:30:30 +0300\n> Subject: Re: [PERFORM] Same query doing slow then quick\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> \n> On 26 September 2012 19:09, FFW_Rude <[email protected]> wrote:\n> > Could you explain what you are asking me to do because i don't really know\n> > what i'm doing...\n> \n> postgresql-contrib packages contains pgbench tool on Ubuntu.\n> \n> For example postgresql-contrib-9.1_9.1.3-2_i386.deb on Ubuntu 12.04 contains:\n> /usr/lib/postgresql/9.1/bin/pgbench\n> \n> \n> > i don't have pgbench in it. It is still pgbench command not found...\n> \n> You need to run pgbench as postgres user.\n> For example on CentOS:\n> # su - postgres\n> $ pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n> \n> \n> --\n> With best regards,\n> Nikolay\n \t\t \t \t\t \n\n\n\n\nOh ok. But what is this command doing ? i'm gonna runn it today. I'll keep you posted. 
Here is some EXPLAIN ANALYZE from the querys :Nested Loop (cost=0.00..353722.89 rows=124893 width=16) (actual time=261158.061..10304193.501 rows=99 loops=1) Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <= (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision) AND (t2.\"Y\" <= (t1.y_max)::double precision)) -> Seq Scan on gps_22 t1 (cost=0.00..3431.80 rows=177480 width=44) (actual time=0.036..1399.621 rows=177480 loops=1) -> Materialize (cost=0.00..20572.83 rows=57 width=20) (actual time=0.012..10.274 rows=2924 loops=177480) -> Seq Scan on adresses_22 t2 (cost=0.00..20572.55 rows=57 width=20) (actual time=1570.240..1726.376 rows=2924 loops=1) Filter: ((id_maille_200m)::text = '0'::text)Total runtime: 10304211.648 msNested Loop (cost=0.00..88186069.17 rows=33397899 width=16) (actual time=3060.373..3060.373 rows=0 loops=1) Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <= (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision) AND (t2.\"Y\" <= (t1.y_max)::double precision)) -> Seq Scan on gps_31 t1 (cost=0.00..3096.38 rows=161738 width=44) (actual time=4.612..442.935 rows=161738 loops=1) -> Materialize (cost=0.00..12562.25 rows=16726 width=20) (actual time=0.012..0.012 rows=0 loops=161738) -> Seq Scan on adresses_31 t2 (cost=0.00..12478.62 rows=16726 width=20) (actual time=1504.082..1504.082 rows=0 loops=1) Filter: ((id_maille_200m)::text = '0'::text)Total runtime: 3060.469 msNested Loop (cost=0.00..84287659.70 rows=31920943 width=64) (actual time=220198.891..32665395.631 rows=21409 loops=1) Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <= (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision) AND (t2.\"Y\" <= (t1.y_max)::double precision)) -> Seq Scan on gps_67 t1 (cost=0.00..2350.55 rows=121555 width=44) (actual time=0.038..1570.994 rows=121555 loops=1) -> Materialize (cost=0.00..14072.09 rows=21271 width=20) (actual 
time=0.001..34.394 rows=22540 loops=121555) -> Seq Scan on adresses_67 t2 (cost=0.00..13965.74 rows=21271 width=20) (actual time=0.032..1283.087 rows=22540 loops=1) Filter: ((id_maille_200m)::text = '0'::text)Total runtime: 32665478.631 msRude - Last TerritoryOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)Ou acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n> Date: Wed, 26 Sep 2012 19:30:30 +0300> Subject: Re: [PERFORM] Same query doing slow then quick> From: [email protected]> To: [email protected]> CC: [email protected]> > On 26 September 2012 19:09, FFW_Rude <[email protected]> wrote:> > Could you explain what you are asking me to do because i don't really know> > what i'm doing...> > postgresql-contrib packages contains pgbench tool on Ubuntu.> > For example postgresql-contrib-9.1_9.1.3-2_i386.deb on Ubuntu 12.04 contains:> /usr/lib/postgresql/9.1/bin/pgbench> > > > i don't have pgbench in it. It is still pgbench command not found...> > You need to run pgbench as postgres user.> For example on CentOS:> # su - postgres> $ pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench> > > --> With best regards,> Nikolay",
"msg_date": "Thu, 27 Sep 2012 10:33:33 +0200",
"msg_from": "Undertaker Rude <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "So i tried to run your pgbench command with the postgres user but it's stil telling me command not found\n\nRude - Last Territory\nOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781\t (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\nOu acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\n\n> Date: Wed, 26 Sep 2012 19:30:30 +0300\n> Subject: Re: [PERFORM] Same query doing slow then quick\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> \n> On 26 September 2012 19:09, FFW_Rude <[email protected]> wrote:\n> > Could you explain what you are asking me to do because i don't really know\n> > what i'm doing...\n> \n> postgresql-contrib packages contains pgbench tool on Ubuntu.\n> \n> For example postgresql-contrib-9.1_9.1.3-2_i386.deb on Ubuntu 12.04 contains:\n> /usr/lib/postgresql/9.1/bin/pgbench\n> \n> \n> > i don't have pgbench in it. 
It is still pgbench command not found...\n> \n> You need to run pgbench as postgres user.\n> For example on CentOS:\n> # su - postgres\n> $ pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench\n> \n> \n> --\n> With best regards,\n> Nikolay\n \t\t \t \t\t \n\n\n\n\nSo i tried to run your pgbench command with the postgres user but it's stil telling me command not foundRude - Last TerritoryOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)Ou acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n> Date: Wed, 26 Sep 2012 19:30:30 +0300> Subject: Re: [PERFORM] Same query doing slow then quick> From: [email protected]> To: [email protected]> CC: [email protected]> > On 26 September 2012 19:09, FFW_Rude <[email protected]> wrote:> > Could you explain what you are asking me to do because i don't really know> > what i'm doing...> > postgresql-contrib packages contains pgbench tool on Ubuntu.> > For example postgresql-contrib-9.1_9.1.3-2_i386.deb on Ubuntu 12.04 contains:> /usr/lib/postgresql/9.1/bin/pgbench> > > > i don't have pgbench in it. It is still pgbench command not found...> > You need to run pgbench as postgres user.> For example on CentOS:> # su - postgres> $ pgbench -i pgbench && pgbench -c 10 -t 10000 pgbench> > > --> With best regards,> Nikolay",
"msg_date": "Thu, 27 Sep 2012 11:01:48 +0200",
"msg_from": "Undertaker Rude <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "Sorry for the late answer, I was going through my e-mail backlog and\nnoticed that this question hadn't been answered.\n\nOn Thu, Sep 27, 2012 at 11:33 AM, Undertaker Rude <[email protected]> wrote:\n> Oh ok. But what is this command doing ? i'm gonna runn it today. I'll keep\n> you posted. Here is some EXPLAIN ANALYZE from the querys :\n>\n>\n> Nested Loop (cost=0.00..353722.89 rows=124893 width=16) (actual\n> time=261158.061..10304193.501 rows=99 loops=1)\n> Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <=\n> (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision)\n> AND (t2.\"Y\" <= (t1.y_max)::double precision))\n> -> Seq Scan on gps_22 t1 (cost=0.00..3431.80 rows=177480 width=44)\n> (actual time=0.036..1399.621 rows=177480 loops=1)\n> -> Materialize (cost=0.00..20572.83 rows=57 width=20) (actual\n> time=0.012..10.274 rows=2924 loops=177480)\n> -> Seq Scan on adresses_22 t2 (cost=0.00..20572.55 rows=57\n> width=20) (actual time=1570.240..1726.376 rows=2924 loops=1)\n> Filter: ((id_maille_200m)::text = '0'::text)\n> Total runtime: 10304211.648 ms\n\nAs you can see from the explain plan, postgresql is not using any\nindexes here. The reason is the type mismatch between the X and x_min\ncolumns. Use matching types between tables to enable index use. The\nsame goes for the id column, if the column type is integer use a\nnumeric literal 0 not a text literal '0'.\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n",
"msg_date": "Sun, 7 Oct 2012 17:27:02 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
},
{
"msg_contents": "Oh, thankx. I forgot to put the answer i got from another site. I was told to use box and point type and create an index on it and it works really well !\n\nRude - Last Territory\nOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781\t (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)\nOu acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\n\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n\n\n> Date: Sun, 7 Oct 2012 17:27:02 +0300\n> Subject: Re: [PERFORM] Same query doing slow then quick\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]\n> \n> Sorry for the late answer, I was going through my e-mail backlog and\n> noticed that this question hadn't been answered.\n> \n> On Thu, Sep 27, 2012 at 11:33 AM, Undertaker Rude <[email protected]> wrote:\n> > Oh ok. But what is this command doing ? i'm gonna runn it today. I'll keep\n> > you posted. 
Here is some EXPLAIN ANALYZE from the querys :\n> >\n> >\n> > Nested Loop (cost=0.00..353722.89 rows=124893 width=16) (actual\n> > time=261158.061..10304193.501 rows=99 loops=1)\n> > Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <=\n> > (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision)\n> > AND (t2.\"Y\" <= (t1.y_max)::double precision))\n> > -> Seq Scan on gps_22 t1 (cost=0.00..3431.80 rows=177480 width=44)\n> > (actual time=0.036..1399.621 rows=177480 loops=1)\n> > -> Materialize (cost=0.00..20572.83 rows=57 width=20) (actual\n> > time=0.012..10.274 rows=2924 loops=177480)\n> > -> Seq Scan on adresses_22 t2 (cost=0.00..20572.55 rows=57\n> > width=20) (actual time=1570.240..1726.376 rows=2924 loops=1)\n> > Filter: ((id_maille_200m)::text = '0'::text)\n> > Total runtime: 10304211.648 ms\n> \n> As you can see from the explain plan, postgresql is not using any\n> indexes here. The reason is the type mismatch between the X and x_min\n> columns. Use matching types between tables to enable index use. The\n> same goes for the id column, if the column type is integer use a\n> numeric literal 0 not a text literal '0'.\n> \n> Regards,\n> Ants Aasma\n> -- \n> Cybertec Schönig & Schönig GmbH\n> Gröhrmühlgasse 26\n> A-2700 Wiener Neustadt\n> Web: http://www.postgresql-support.de\n \t\t \t \t\t \n\n\n\n\nOh, thankx. I forgot to put the answer i got from another site. 
I was told to use box and point type and create an index on it and it works really well !Rude - Last TerritoryOu écouter ?http://www.deezer.com/fr/music/last-territory/the-last-hope-3617781 (Post-apocalyptic Metal)http://www.deezer.com/fr/music/rude-undertaker (Pop-Rock)Ou acheter ?La Fnachttp://recherche.fnac.com/fmia14622213/Last-Territory\nhttp://recherche.fnac.com/fmia14770622/Rude-Undertaker\niTuneshttp://itunes.apple.com/us/artist/last-territory/id533857009?ign-mpt=uo%3D4\n> Date: Sun, 7 Oct 2012 17:27:02 +0300> Subject: Re: [PERFORM] Same query doing slow then quick> From: [email protected]> To: [email protected]> CC: [email protected]; [email protected]> > Sorry for the late answer, I was going through my e-mail backlog and> noticed that this question hadn't been answered.> > On Thu, Sep 27, 2012 at 11:33 AM, Undertaker Rude <[email protected]> wrote:> > Oh ok. But what is this command doing ? i'm gonna runn it today. I'll keep> > you posted. Here is some EXPLAIN ANALYZE from the querys :> >> >> > Nested Loop (cost=0.00..353722.89 rows=124893 width=16) (actual> > time=261158.061..10304193.501 rows=99 loops=1)> > Join Filter: ((t2.\"X\" >= (t1.x_min)::double precision) AND (t2.\"X\" <=> > (t1.x_max)::double precision) AND (t2.\"Y\" >= (t1.y_min)::double precision)> > AND (t2.\"Y\" <= (t1.y_max)::double precision))> > -> Seq Scan on gps_22 t1 (cost=0.00..3431.80 rows=177480 width=44)> > (actual time=0.036..1399.621 rows=177480 loops=1)> > -> Materialize (cost=0.00..20572.83 rows=57 width=20) (actual> > time=0.012..10.274 rows=2924 loops=177480)> > -> Seq Scan on adresses_22 t2 (cost=0.00..20572.55 rows=57> > width=20) (actual time=1570.240..1726.376 rows=2924 loops=1)> > Filter: ((id_maille_200m)::text = '0'::text)> > Total runtime: 10304211.648 ms> > As you can see from the explain plan, postgresql is not using any> indexes here. The reason is the type mismatch between the X and x_min> columns. Use matching types between tables to enable index use. 
The> same goes for the id column, if the column type is integer use a> numeric literal 0 not a text literal '0'.> > Regards,> Ants Aasma> -- > Cybertec Schönig & Schönig GmbH> Gröhrmühlgasse 26> A-2700 Wiener Neustadt> Web: http://www.postgresql-support.de",
"msg_date": "Mon, 8 Oct 2012 09:39:26 +0200",
"msg_from": "Undertaker Rude <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Same query doing slow then quick"
}
] |
[
{
"msg_contents": "[resending because I accidentally failed to include the list]\n\nKiriakos Tsourapas wrote:\n\n> I am taking your suggestions one step at a time.\n> \n> I changed my configuration to a much more aggressive autovacuum\n> policy (0.5% for analyzing and 1% for autovacuum).\n> \n> autovacuum_naptime = 1min\n> autovacuum_vacuum_threshold = 50\n> #autovacuum_analyze_threshold = 50\n> autovacuum_vacuum_scale_factor = 0.01\n> autovacuum_analyze_scale_factor = 0.005\n> \n> I had tables with 180.000 record and another with 2M records, so\n> the default values of 0.2 for autovacuum would mean that 18.000 and\n> 200K records would have to change respectively, delaying the vacuum\n> for many days.\n\nI am concerned that your initial email said that you had this\nsetting:\n\nautovacuum_naptime = 28800\n\nThis is much too high for most purposes; small, frequently-modified\ntables won't be kept in good shape with this setting. Perhaps you're\nnot having that problem at the moment, but it's risky to assume that\nyou don't and never will. When autovacuum wakes up and there is\nnothing to do it should go back to sleep very quickly.\n\nDon't expect too much from just making autovacuum run more often\nuntil you have eliminated existing bloat (autovacuum generally just\nlimits further growth of bloat) and updated to the latest 8.4 minor\nrelease. 
The following bugs fixes are among many you are living\nwithout until you upgrade:\n\n - Prevent show_session_authorization() from crashing within\nautovacuum processes (Tom Lane)\n\n - Fix persistent slowdown of autovacuum workers when multiple\nworkers remain active for a long time (Tom Lane)\nThe effective vacuum_cost_limit for an autovacuum worker could drop\nto nearly zero if it processed enough tables, causing it to run\nextremely slowly.\n\n - Fix VACUUM so that it always updates pg_class.reltuples/relpages\n(Tom Lane)\nThis fixes some scenarios where autovacuum could make increasingly\npoor decisions about when to vacuum tables.\n\n - Fix btree index corruption from insertions concurrent with\nvacuuming (Tom Lane)\nAn index page split caused by an insertion could sometimes cause a\nconcurrently-running VACUUM to miss removing index entries that it\nshould remove. After the corresponding table rows are removed, the\ndangling index entries would cause errors (such as \"could not read\nblock N in file ...\") or worse, silently wrong query results after\nunrelated rows are re-inserted at the now-free table locations. This\nbug has been present since release 8.2, but occurs so infrequently\nthat it was not diagnosed until now. If you have reason to suspect\nthat it has happened in your database, reindexing the affected index\nwill fix things.\n\n - Ensure autovacuum worker processes perform stack depth checking\nproperly (Heikki Linnakangas)\nPreviously, infinite recursion in a function invoked by auto-ANALYZE\ncould crash worker processes.\n\n - Only allow autovacuum to be auto-canceled by a directly blocked\nprocess (Tom Lane)\nThe original coding could allow inconsistent behavior in some cases;\nin particular, an autovacuum could get canceled after less than\ndeadlock_timeout grace period.\n\n - Improve logging of autovacuum cancels (Robert Haas)\n\n-Kevin\n\n",
"msg_date": "Wed, 26 Sep 2012 08:58:41 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
}
] |
[
{
"msg_contents": "Hi Kevin,\n\nOn Sep 26, 2012, at 14:39, Kevin Grittner wrote:\n> \n> I am concerned that your initial email said that you had this\n> setting:\n> \n> autovacuum_naptime = 28800\n> \n> This is much too high for most purposes; small, frequently-modified\n> tables won't be kept in good shape with this setting. Perhaps you're\n> not having that problem at the moment, but it's risky to assume that\n> you don't and never will. When autovacuum wakes up and there is\n> nothing to do it should go back to sleep very quickly.\n> \n\nI used the 28800 (8hours) setting after I realized that the default 1min was not helping.\nI also changed other parameters when I changed it to 8 hours, to make sure tables would be auto vacuumed.\nThe problem with my setting was that autovacuum gets stopped if a lock is needed on the table. So, it was very bad choice to run it every 8 hours, because usually it got stopped and never did anything.\nSo, I turned back to the original setting of 1min but changed the autovacuum_vacuum_scale_factor to 1% instead of 20%. Hopefully tables will be more frequently vacuumed now and the problem will not appear again.\n\n> Don't expect too much from just making autovacuum run more often\n> until you have eliminated existing bloat (autovacuum generally just\n> limits further growth of bloat) and updated to the latest 8.4 minor\n> release. The following bugs fixes are among many you are living\n> without until you upgrade:\n\nCan you please suggest of a way to \n- find if there is existing bloat\n- eliminate it\n\nThank you\nHi Kevin,On Sep 26, 2012, at 14:39, Kevin Grittner wrote:I am concerned that your initial email said that you had thissetting:autovacuum_naptime = 28800This is much too high for most purposes; small, frequently-modifiedtables won't be kept in good shape with this setting. Perhaps you'renot having that problem at the moment, but it's risky to assume thatyou don't and never will. 
When autovacuum wakes up and there isnothing to do it should go back to sleep very quickly.I used the 28800 (8hours) setting after I realized that the default 1min was not helping.I also changed other parameters when I changed it to 8 hours, to make sure tables would be auto vacuumed.The problem with my setting was that autovacuum gets stopped if a lock is needed on the table. So, it was very bad choice to run it every 8 hours, because usually it got stopped and never did anything.So, I turned back to the original setting of 1min but changed the autovacuum_vacuum_scale_factor to 1% instead of 20%. Hopefully tables will be more frequently vacuumed now and the problem will not appear again.Don't expect too much from just making autovacuum run more oftenuntil you have eliminated existing bloat (autovacuum generally justlimits further growth of bloat) and updated to the latest 8.4 minorrelease. The following bugs fixes are among many you are livingwithout until you upgrade:Can you please suggest of a way to - find if there is existing bloat- eliminate itThank you",
"msg_date": "Wed, 26 Sep 2012 16:50:29 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "Dear all,\n\nJust letting you know that making the autovacuum policy more aggressive seems to have fixed the problem.\nIt's been 4 days now and everything is running smoothly.\n\nJust a reminder, what I changed was :\nautovacuum_vacuum_scale_factor = 0.01\nautovacuum_analyze_scale_factor = 0.005\nmaking autovacuum run at 1% instead of 20% (the dafault) and the analyze run at 0,5% instead of 10%.\n\nMaybe it's more aggressive than needed... I will monitor and post back.\n\n\nThank you all for your help.\n",
"msg_date": "Fri, 28 Sep 2012 09:52:10 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
},
{
"msg_contents": "I am posting back to let you know that the DB is working fine since the changes in the autovacuum settings.\n\nI am including the changes I made for later reference to anyone that may face similar issues.\n\n\nThank you all for your time and help !\n\n\nOn Sep 28, 2012, at 9:52, Kiriakos Tsourapas wrote:\n\n> Dear all,\n> \n> Just letting you know that making the autovacuum policy more aggressive seems to have fixed the problem.\n> It's been 4 days now and everything is running smoothly.\n> \n> Just a reminder, what I changed was :\n> autovacuum_vacuum_scale_factor = 0.01\n> autovacuum_analyze_scale_factor = 0.005\n> making autovacuum run at 1% instead of 20% (the dafault) and the analyze run at 0,5% instead of 10%.\n> \n> Maybe it's more aggressive than needed... I will monitor and post back.\n> \n> \n> Thank you all for your help.\n\n\n",
"msg_date": "Wed, 3 Oct 2012 16:39:04 +0300",
"msg_from": "Kiriakos Tsourapas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres becoming slow, only full vacuum fixes it"
}
] |
[
{
"msg_contents": "Hey Everyone, \n\nI seem to be getting an inaccurate cost from explain. Here are two examples for one query with two different query plans:\n\nexchange_prod=# set enable_nestloop = on;\nSET\nexchange_prod=#\nexchange_prod=# explain analyze SELECT COUNT(DISTINCT \"exchange_uploads\".\"id\") FROM \"exchange_uploads\" INNER JOIN \"upload_destinations\" ON \"upload_destinations\".\"id\" = \"exchange_uploads\".\"upload_destination_id\" LEFT OUTER JOIN \"uploads\" ON \"uploads\".\"id\" = \"exchange_uploads\".\"upload_id\" LEFT OUTER JOIN \"import_errors\" ON \"import_errors\".\"exchange_upload_id\" = \"exchange_uploads\".\"id\" LEFT OUTER JOIN \"exchanges\" ON \"exchanges\".\"id\" = \"upload_destinations\".\"exchange_id\" WHERE ((\"exchange_uploads\".\"created_at\" >= '2012-07-27 21:21:57.363944' AND \"upload_destinations\".\"office_id\" = 6));\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=190169.54..190169.55 rows=1 width=4) (actual time=199806.806..199806.807 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..190162.49 rows=2817 width=4) (actual time=163.293..199753.548 rows=43904 loops=1)\n -> Nested Loop (cost=0.00..151986.53 rows=2817 width=4) (actual time=163.275..186869.844 rows=43904 loops=1)\n -> Index Scan using upload_destinations_office_id_idx on upload_destinations (cost=0.00..29.95 rows=4 width=8) (actual time=0.060..0.093 rows=6 loops=1)\n Index Cond: (office_id = 6)\n -> Index Scan using index_exchange_uploads_on_upload_destination_id on exchange_uploads (cost=0.00..37978.21 rows=875 width=12) (actual time=27.197..31140.375 rows=7317 loops=6)\n Index Cond: (upload_destination_id = upload_destinations.id)\n Filter: (created_at >= '2012-07-27 21:21:57.363944'::timestamp without time zone)\n -> Index Scan using index_import_errors_on_exchange_upload_id on 
import_errors (cost=0.00..8.49 rows=405 width=4) (actual time=0.291..0.291 rows=0 loops=43904)\n Index Cond: (exchange_upload_id = exchange_uploads.id)\n Total runtime: 199806.951 ms\n(11 rows)\n\nexchange_prod=# \nexchange_prod=# set enable_nestloop = off;\nSET\nexchange_prod=# \nexchange_prod=# explain analyze SELECT COUNT(DISTINCT \"exchange_uploads\".\"id\") FROM \"exchange_uploads\" INNER JOIN \"upload_destinations\" ON \"upload_destinations\".\"id\" = \"exchange_uploads\".\"upload_destination_id\" LEFT OUTER JOIN \"uploads\" ON \"uploads\".\"id\" = \"exchange_uploads\".\"upload_id\" LEFT OUTER JOIN \"import_errors\" ON \"import_errors\".\"exchange_upload_id\" = \"exchange_uploads\".\"id\" LEFT OUTER JOIN \"exchanges\" ON \"exchanges\".\"id\" = \"upload_destinations\".\"exchange_id\" WHERE ((\"exchange_uploads\".\"created_at\" >= '2012-07-27 21:21:57.363944' AND \"upload_destinations\".\"office_id\" = 6));\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2535992.33..2535992.34 rows=1 width=4) (actual time=133447.507..133447.507 rows=1 loops=1)\n -> Hash Right Join (cost=1816553.69..2535985.56 rows=2708 width=4) (actual time=133405.326..133417.078 rows=43906 loops=1)\n Hash Cond: (import_errors.exchange_upload_id = exchange_uploads.id)\n -> Seq Scan on import_errors (cost=0.00..710802.71 rows=2300471 width=4) (actual time=0.006..19199.569 rows=2321888 loops=1)\n -> Hash (cost=1816519.84..1816519.84 rows=2708 width=4) (actual time=112938.606..112938.606 rows=43906 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1544kB\n -> Hash Join (cost=28.25..1816519.84 rows=2708 width=4) (actual time=42.957..112892.689 rows=43906 loops=1)\n Hash Cond: (exchange_uploads.upload_destination_id = upload_destinations.id)\n -> Index Scan using index_upload_destinations_on_created_at on 
exchange_uploads (cost=0.00..1804094.96 rows=3298545 width=12) (actual time=17.686..111649.272 rows=3303488 loops=1)\n Index Cond: (created_at >= '2012-07-27 21:21:57.363944'::timestamp without time zone)\n -> Hash (cost=28.20..28.20 rows=4 width=8) (actual time=0.043..0.043 rows=6 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Bitmap Heap Scan on upload_destinations (cost=6.28..28.20 rows=4 width=8) (actual time=0.026..0.036 rows=6 loops=1)\n Recheck Cond: (office_id = 6)\n -> Bitmap Index Scan on upload_destinations_office_id_idx (cost=0.00..6.28 rows=4 width=0) (actual time=0.020..0.020 rows=6 loops=1)\n Index Cond: (office_id = 6)\n Total runtime: 133447.790 ms\n(17 rows)\n\n\n\nThe first query shows a cost of 190,169.55 and runs in 199,806.951 ms. When I disable nested loop, I get a cost of 2,535,992.34 which runs in only 133,447.790 ms. We have run queries on our database with a cost of 200K cost before and they ran less then a few seconds, which makes me wonder if the first query plan is inaccurate. The other issue is understanding why a query plan with a much higher cost is taking less time to run.\n\nI do not think these queries are cached differently, as we have gotten the same results ran a couple of times at across a few days. We also analyzed the tables that we are querying before trying the explain analyze again, and were met with the same statistics. Any help on how Postgres comes up with a query plan like this, and why there is a difference would be very helpful.\n\nThanks! \n\n-- \nRobert Sosinski\n",
"msg_date": "Wed, 26 Sep 2012 14:38:09 -0400",
"msg_from": "Robert Sosinski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inaccurate Explain Cost"
},
{
"msg_contents": "On 09/26/2012 01:38 PM, Robert Sosinski wrote:\n\n> I seem to be getting an inaccurate cost from explain. Here are two\n> examples for one query with two different query plans:\n\nWell, there's this:\n\nNested Loop (cost=0.00..151986.53 rows=2817 width=4) (actual \ntime=163.275..186869.844 rows=43904 loops=1)\n\nIf anything's a smoking gun, that is. I could see why you'd want to turn \noff nested loops to get better execution time. But the question is: why \ndid it think it would match so few rows in the first place? The planner \nprobably would have thrown away this query plan had it known it would \nloop 20x more than it thought.\n\nI think we need to know what your default_statistics_target is set at, \nand really... all of your relevant postgresql settings.\n\nPlease see this:\n\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nBut you also may need to look a lot more into your query itself. The \ndifference between a 2 or a 3 minute query isn't going to help you \nmuch. Over here, we tend to spend more of our time turning 2 or 3 minute \nqueries into 20 or 30ms queries. But judging by your date range, getting \nthe last 2-months of data from a table that large generally won't be \nfast by any means.\n\nThat said, looking at your actual query:\n\nSELECT COUNT(DISTINCT eu.id)\n FROM exchange_uploads eu\n JOIN upload_destinations ud ON ud.id = eu.upload_destination_id\n LEFT JOIN uploads u ON u.id = eu.upload_id\n LEFT JOIN import_errors ie ON ie.exchange_upload_id = eu.id\n LEFT JOIN exchanges e ON e.id = ud.exchange_id\n WHERE eu.created_at >= '2012-07-27 21:21:57.363944'\n AND ud.office_id = 6;\n\nDoesn't need half of these joins. They're left joins, and never used in \nthe query results or where criteria. 
You could just use this:\n\nSELECT COUNT(DISTINCT eu.id)\n FROM exchange_uploads eu\n JOIN upload_destinations ud ON (ud.id = eu.upload_destination_id)\n WHERE eu.created_at >= '2012-07-27 21:21:57.363944'\n AND ud.office_id = 6;\n\nThough I presume this is just a count precursor to a query that \nfetches the actual results and does need the left join. Either way, the \nindex scan from your second example matches 3.3M rows by using the \ncreated_at index on exchange_uploads. That's not really very \nrestrictive, and so you have two problems:\n\n1. Your nested loop stats from office_id are somehow wrong. Try \nincreasing your stats on that column, or just default_statistics_target \nin general, and re-analyze.\n2. Your created_at criteria above match way too many rows, and will also \ntake a long time to process.\n\nThose are your two actual problems. We can probably get your query to \nrun faster, but those are pretty significant hurdles.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 26 Sep 2012 15:03:03 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inaccurate Explain Cost"
},
{
"msg_contents": "Em 26/09/2012 17:03, Shaun Thomas escreveu:\n> On 09/26/2012 01:38 PM, Robert Sosinski wrote:\n>\n>> I seem to be getting an inaccurate cost from explain. Here are two\n>> examples for one query with two different query plans:\n>\n> Well, there's this:\n>\n> Nested Loop (cost=0.00..151986.53 rows=2817 width=4) (actual \n> time=163.275..186869.844 rows=43904 loops=1)\n>\n> If anything's a smoking gun, that is. I could see why you'd want to \n> turn off nested loops to get better execution time. But the question \n> is: why did it think it would match so few rows in the first place? \n> The planner probably would have thrown away this query plan had it \n> known it would loop 20x more than it thought.\n>\n> I think we need to know what your default_statistics_target is set at, \n> and really... all of your relevant postgresql settings.\n>\n> Please see this:\n>\n> http://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> But you also may need to look a lot more into your query itself. The \n> difference between a 2 or a 3 minute query isn't going to help you \n> much. Over here, we tend to spend more of our time turning 2 or 3 \n> minute queries into 20 or 30ms queries. But judging by your date \n> range, getting the last 2-months of data from a table that large \n> generally won't be fast by any means.\n>\n> That said, looking at your actual query:\n>\n> SELECT COUNT(DISTINCT eu.id)\n> FROM exchange_uploads eu\n> JOIN upload_destinations ud ON ud.id = eu.upload_destination_id\n> LEFT JOIN uploads u ON u.id = eu.upload_id\n> LEFT JOIN import_errors ie ON ie.exchange_upload_id = eu.id\n> LEFT JOIN exchanges e ON e.id = ud.exchange_id\n> WHERE eu.created_at >= '2012-07-27 21:21:57.363944'\n> AND ud.office_id = 6;\n>\n> Doesn't need half of these joins. They're left joins, and never used \n> in the query results or where criteria. You could just use this:\n\nInteresting. 
I have a similar situation, where the user can choose a set of \nfilters, and then the query must have several left joins \"just in case\" \n(the user may need them in a filter).\nI know of another database that is able to remove unnecessary outer joins \nfrom queries when they are not relevant, so they run faster.\nCan't PostgreSQL do the same?\n\nRegards,\n\nEdson.\n\n>\n> SELECT COUNT(DISTINCT eu.id)\n> FROM exchange_uploads eu\n> JOIN upload_destinations ud ON (ud.id = eu.upload_destination_id)\n> WHERE eu.created_at >= '2012-07-27 21:21:57.363944'\n> AND ud.office_id = 6;\n>\n> Though I presume this is just a count precursor to a query that \n> fetches the actual results and does need the left join. Either way, the \n> index scan from your second example matches 3.3M rows by using the \n> created_at index on exchange_uploads. That's not really very \n> restrictive, and so you have two problems:\n>\n> 1. Your nested loop stats from office_id are somehow wrong. Try \n> increasing your stats on that column, or just \n> default_statistics_target in general, and re-analyze.\n> 2. Your created_at criteria above match way too many rows, and will \n> also take a long time to process.\n>\n> Those are your two actual problems. We can probably get your query to \n> run faster, but those are pretty significant hurdles.\n>\n>\n\n\n",
"msg_date": "Wed, 26 Sep 2012 17:20:01 -0300",
"msg_from": "Edson Richter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Inaccurate Explain Cost"
},
{
"msg_contents": "On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:\n> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.\n> When I disable nested loop, I get a cost of 2,535,992.34 which runs in\n> only 133,447.790 ms. We have run queries on our database with a cost\n> of 200K cost before and they ran less then a few seconds, which makes\n> me wonder if the first query plan is inaccurate. The other issue is\n> understanding why a query plan with a much higher cost is taking less\n> time to run.\n\nAre you under impression that cost should be somehow related to actual\ntime?\nIf yes - that's not true, and afaik never was.\nthe fact that you got similar time and cost is just a coincidence.\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n\n",
"msg_date": "Wed, 26 Sep 2012 22:21:40 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inaccurate Explain Cost"
},
{
"msg_contents": "On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski <\[email protected]> wrote:\n\n> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:\n> > The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.\n> > When I disable nested loop, I get a cost of 2,535,992.34 which runs in\n> > only 133,447.790 ms. We have run queries on our database with a cost\n> > of 200K cost before and they ran less then a few seconds, which makes\n> > me wonder if the first query plan is inaccurate. The other issue is\n> > understanding why a query plan with a much higher cost is taking less\n> > time to run.\n>\n> Are you under impression that cost should be somehow related to actual\n> time?\n> If yes - that's not true, and afaik never was.\n> the fact that you got similar time and cost is just a coincidence.\n>\n\nWell...only sort of. In a well-tuned db with accurate statistics, relative\ncost between 2 plans should be reflected in relative execution time between\nthose 2 queries (assuming the data in memory is similar for both runs,\nanyway), and that's what he seems to be complaining about. The plan with\nhigher cost had lower execution time, which resulted in the planner picking\nthe slower query. But the reason for the execution time discrepancy would\nappear to be, at least in part, inaccurate statistics resulting in an\nincorrect estimate of number of rows in a loop iteration. More info about\nthe db config would help to identify other things contributing to the\ninaccurate cost estimate - as mentioned earlier, please refer to\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions when asking\nperformance questions\n\nAnd yes, I know you know all of this, Hubert. 
I wrote it for the benefit\nof the original questioner.\n\n--sam\n",
"msg_date": "Wed, 26 Sep 2012 15:42:09 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inaccurate Explain Cost"
},
{
"msg_contents": "Edson Richter <[email protected]> writes:\n>> That said, looking at your actual query:\n>> \n>> SELECT COUNT(DISTINCT eu.id)\n>> FROM exchange_uploads eu\n>> JOIN upload_destinations ud ON ud.id = eu.upload_destination_id\n>> LEFT JOIN uploads u ON u.id = eu.upload_id\n>> LEFT JOIN import_errors ie ON ie.exchange_upload_id = eu.id\n>> LEFT JOIN exchanges e ON e.id = ud.exchange_id\n>> WHERE eu.created_at >= '2012-07-27 21:21:57.363944'\n>> AND ud.office_id = 6;\n>> \n>> Doesn't need half of these joins. They're left joins, and never used \n>> in the query results or where criteria. You could just use this:\n\n> Interesting. I've similar situation, where user can choose a set of \n> filters, and then the query must have several left joins \"just in case\" \n> (user need in the filer).\n> I know other database that is able to remove unnecessary outer joins \n> from queries when they are not relevant and for instance become faster.\n> Can't PostgreSQL do the same?\n\nIt does, and did - note the query plan is only scanning 3 of the 5\ntables mentioned in the query. (The other left join appears to be\nto a non-unique column, which makes it not redundant.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 26 Sep 2012 19:29:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Inaccurate Explain Cost"
},
{
"msg_contents": "On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski\n<[email protected]> wrote:\n> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:\n>> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.\n>> When I disable nested loop, I get a cost of 2,535,992.34 which runs in\n>> only 133,447.790 ms. We have run queries on our database with a cost\n>> of 200K cost before and they ran less then a few seconds, which makes\n>> me wonder if the first query plan is inaccurate. The other issue is\n>> understanding why a query plan with a much higher cost is taking less\n>> time to run.\n>\n> Are you under impression that cost should be somehow related to actual\n> time?\n\nI am certainly under that impression. If the estimated cost has\nnothing to do with run time, then what is it that the cost-based\noptimizer is trying to optimize?\n\nThe arbitrary numbers of the cost parameters do not formally have any\nunits, but they had better have some vaguely proportional relationship\nwith the dimension of time, or else there is no point in having an\noptimizer. For any given piece of hardware (including table-space, if\nyou have different table-spaces on different storage), configuration\nand cachedness, there should be some constant factor to translate cost\ninto time. To the extent that there fails to be such a constant\nfactor, it is either a misconfiguration, or room for improvement in\nthe planner.\n\nThe only exceptions I can think of are 1) when there is only one\nway to do something, the planner may not bother to cost it (i.e.\nassign it a cost of zero) because it will not help make a decision.\nHowever, the only instances of this that I know of are in DML, not in\npure selects, and 2) the costs of setting hint bits and such in\nselects are not estimated, except to the extent they are folded into\nsomething else, like the page visiting costs.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 26 Sep 2012 17:04:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Inaccurate Explain Cost"
},
{
"msg_contents": "Am 27.09.2012, 02:04 Uhr, schrieb Jeff Janes <[email protected]>:\n\n> On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski\n> <[email protected]> wrote:\n>> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:\n>>> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.\n>>> When I disable nested loop, I get a cost of 2,535,992.34 which runs in\n>>> only 133,447.790 ms. We have run queries on our database with a cost\n>>> of 200K cost before and they ran less then a few seconds, which makes\n>>> me wonder if the first query plan is inaccurate. The other issue is\n>>> understanding why a query plan with a much higher cost is taking less\n>>> time to run.\n>>\n>> Are you under impression that cost should be somehow related to actual\n>> time?\n>\n> I am certainly under that impression. If the estimated cost has\n> nothing to do with run time, then what is it that the cost-based\n> optimizer is trying to optimize?\n\nSee http://www.postgresql.org/docs/9.2/static/runtime-config-query.html \nsection \"18.7.2. Planner Cost Constants\".\n\n-Matthias\n\n",
"msg_date": "Thu, 27 Sep 2012 12:48:24 +0200",
"msg_from": "Matthias <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Inaccurate Explain Cost"
}
] |
[
{
"msg_contents": "Hello,\n\ni have a problem with relatively easy query.\n\nEXPLAIN ANALYZE SELECT content.* FROM content JOIN blog ON blog.id = content.blog_id JOIN community_prop ON blog.id = community_prop.blog_id JOIN community ON community.id = community_prop.id WHERE community.id IN (33, 55, 61, 1741, 75, 90, 106, 180, 228, 232, 256, 310, 388, 404, 504, 534, 536, 666, 700, 768, 824, 832, 855, 873, 898, 962, 1003, 1008, 1027, 1051, 1201, 1258, 1269, 1339, 1355, 1360, 1383, 1390, 1430, 1505, 1506, 1530, 1566, 1578, 1616, 1678, 1701, 1713, 1723, 1821, 1842, 1880, 1882, 1894, 1973, 2039, 2069, 2106, 2130, 2204, 2226, 2236, 2238, 2263, 2272, 2310, 2317, 2327, 2353, 2360, 2401, 2402, 2409, 2419, 2425, 2426, 2438, 2440, 2452, 2467, 2494, 2514, 2559, 2581, 2653, 2677, 2679, 2683, 2686, 2694, 2729, 2732, 2739, 2779, 2785, 2795, 2821, 2831, 2839, 2862, 2864, 2866, 2882, 2890, 2905, 2947, 2962, 2964, 2978, 2981, 3006, 3016, 3037, 3039, 3055, 3060, 3076, 3112, 3124, 3135, 3138, 3186, 3213, 3222, 3225, 3269, 3273, 3288, 3291, 3329, 3363, 3375, 3376, 3397, 3415, 3491, 3500, 2296, 3547, 129, 1039, 8, 1053, 1441, 2372, 1974, 289, 2449, 2747, 2075, 57, 3550, 3069, 89, 1603, 1570, 54, 152, 1035, 1456, 506, 1387, 43, 1805, 1851, 1843, 2587, 1908, 1790, 2630, 901, 13, 529, 705, 81, 2668, 1086, 603, 1986, 2516, 2969, 2671, 568, 4636, 1115, 864, 381, 4516, 2608, 677, 88, 1825, 3220, 3284, 947, 1190, 2233, 4489, 3320, 2957, 4146, 1841, 25, 643, 4352, 14, 4261, 3876, 1311, 1342, 4057, 3974) ORDER BY content.time_create DESC LIMIT 10;\n\nhttp://explain.depesz.com/s/ccE\n\nAs you can see, planner estimates 115 rows, but there are 259554 of them.\n\nThis query shows root of the problem\nEXPLAIN ANALYZE SELECT content.* FROM content JOIN blog ON blog.id = content.blog_id JOIN community_prop ON blog.id = community_prop.blog_id;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------\n Hash Join 
(cost=24498.17..137922.26 rows=2624 width=572) (actual time=36.028..1342.267 rows=408374 loops=1)\n Hash Cond: (content.blog_id = blog.id)\n -> Seq Scan on content (cost=0.00..102364.99 rows=1260899 width=572) (actual time=0.030..983.274 rows=1256128 loops=1)\n -> Hash (cost=24439.07..24439.07 rows=4728 width=8) (actual time=35.964..35.964 rows=4728 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 185kB\n -> Nested Loop (cost=0.00..24439.07 rows=4728 width=8) (actual time=0.064..33.092 rows=4728 loops=1)\n -> Seq Scan on community_prop (cost=0.00..463.28 rows=4728 width=4) (actual time=0.004..5.089 rows=4728 loops=1)\n -> Index Scan using blog_pkey on blog (cost=0.00..5.06 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=4728)\n Index Cond: (id = community_prop.blog_id)\n Total runtime: 1361.354 ms\n\n2624 vs 408374\n\nJoining only content with blog: 1260211 vs 1256124.\nJoining only blog with community_prop: 4728 vs 4728\nJoining only content with community_prop: 78304 vs 408376\n\nSHOW default_statistics_target ;\n default_statistics_target \n---------------------------\n 500\n\nI already altered stats on blog_id column \nALTER TABLE content ALTER COLUMN blog_id SET STATISTICS 1000;\n\nTried setting 3000 and 10000 on all join columns - did not make a difference.\nTried setting n_distinct on content(blog_id) manually to different values from 10000 to 200000 (exact distinct is 90k, vacuum sets it to 76k) - did not change the estimated result set, only the estimated index lookup.\n\nDon't know what to do with this.\n\nReady to provide any additional information.\nThank you for your time.\n",
"msg_date": "Thu, 27 Sep 2012 01:57:46 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "wrong join result set estimate"
}
] |
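The manual n_distinct experiments described in the thread above can, since PostgreSQL 9.0, be pinned directly on the column as an attribute option instead of being patched into pg_statistic by hand. A minimal sketch, using the table from the message and its reported ~90k exact distinct count (the chosen value is illustrative):

```sql
-- Pin the planner's distinct-value estimate for content.blog_id.
-- Positive values are absolute counts; values between -1 and 0 are
-- interpreted as a fraction of the table's row count.
ALTER TABLE content ALTER COLUMN blog_id SET (n_distinct = 90000);
ANALYZE content;   -- the override is only picked up at the next ANALYZE

-- To return to automatically computed statistics:
-- ALTER TABLE content ALTER COLUMN blog_id RESET (n_distinct);
```

Note the override only affects the per-column distinct estimate; as the poster observed, it does not by itself fix a bad join selectivity estimate.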
[
{
"msg_contents": "Hi everyone,\n\nI want to buy a new server, and am contemplating a Dell R710 or the \nnewer R720. The R710 has the x5600 series CPU, while the R720 has the \nnewer E5-2600 series CPU.\n\nAt this point I'm dealing with a fairly small database of 8 to 9 GB. \nThe server will be dedicated to Postgres and a C++ based middle tier. \nThe longest operations right now are loading the item list (80,000 items) \nand checking On Hand for an item. The item list does a sum for each \nitem to get OH. The database design is out of my control. The on_hand \nlookup table currently has 3 million rows after 4 years of data.\n\nMy main question is: Will an E5-2660 perform faster than an X5690? I'm \nleaning toward clock speed because I know doing the sum of those rows is \nCPU intensive, but have not done extensive research to see if the newer \nCPUs will outperform the x5690 per clock cycle. Overall the current CPU \nis hardly busy (after 1 min) - load average: 0.81, 0.46, 0.30, with % \nnever exceeding 50%, but the speed increase is something I'm ready to \npay for if it will actually be noticeably faster.\n\nI'm comparing the E5-2660 rather than the 2690 because of price.\n\nFor both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10.\n\nBest regards,\nMark\n\n\n",
"msg_date": "Thu, 27 Sep 2012 13:11:24 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 4:11 PM, M. D. <[email protected]> wrote:\n> At this point I'm dealing with a fairly small database of 8 to 9 GB.\n...\n> The on_hand lookup table\n> currently has 3 million rows after 4 years of data.\n...\n> For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10.\n\nFor a 9GB database, that amount of RAM seems like overkill to me.\nUnless you expect to grow a lot faster than you've been growing, or\nperhaps your middle tier consumes a lot of those 32GB, I don't see the\npoint there.\n\n",
"msg_date": "Thu, 27 Sep 2012 16:22:18 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 12:11 PM, M. D. <[email protected]> wrote:\n> Hi everyone,\n>\n> I want to buy a new server, and am contemplating a Dell R710 or the newer\n> R720. The R710 has the x5600 series CPU, while the R720 has the newer\n> E5-2600 series CPU.\n>\n> At this point I'm dealing with a fairly small database of 8 to 9 GB. The\n> server will be dedicated to Postgres and a C++ based middle tier. The\n> longest operations right now is loading the item list (80,000 items) and\n> checking On Hand for an item. The item list does a sum for each item to get\n> OH. The database design is out of my control. The on_hand lookup table\n> currently has 3 million rows after 4 years of data.\n>\n> My main question is: Will a E5-2660 perform faster than a X5690? I'm leaning\n> to clock speeds because I know doing the sum of those rows is CPU intensive,\n> but have not done extensive research to see if the newer CPUs will\n> outperform the x5690 per clock cycle. Overall the current CPU is hardly busy\n> (after 1 min) - load average: 0.81, 0.46, 0.30, with % never exceeding 50%,\n> but the speed increase is something I'm ready to pay for if it will actually\n> be noticeably faster.\n>\n> I'm comparing the E5-2660 rather than the 2690 because of price.\n>\n> For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10.\n\nI don't think you've supplied enough information for anyone to give\nyou a meaningful answer. What's your current configuration? Are you\nI/O bound, CPU bound, memory limited, or some other problem? You need\nto do a specific analysis of the queries that are causing you problems\n(i.e. why do you need to upgrade at all?)\n\nRegarding Dell ... we were disappointed by Dell. 
They're expensive,\nthey try to lock you in to their service contracts, and (when I bought\ntwo) they lock you in to their replacement parts, which cost 2-3x what\nyou can buy from anyone else.\n\nIf you're planning to use a RAID 10 configuration, then a BBU cache\nwill make more difference than almost anything else you can do. I've\nheard that Dell's current RAID controller is pretty good, but in the\npast they've re-branded other controllers as \"Perc XYZ\" and you\ncouldn't figure out what was really under the covers. RAID\ncontrollers are wildly different in performance, and you really want\nto get only the best.\n\nWe use a \"white box\" vendor (ASA Computers), and have been very happy\nwith the results. They build exactly what I ask for and deliver it in\nabout a week. They offer on-site service and warranties, but don't\npressure me to buy them. I'm not locked in to anything. Their prices\nare good.\n\nMy current configuration is a dual 4-core Intel Xeon 2.13 GHz system\nwith 12GB memory and 12x500GB 7200RPM SATA disks, controlled by a\n3WARE RAID controller with a BBU cache. The OS and WAL are on a RAID1\npair, and the Postgres database is on a 8-disk RAID10 array. That\nleaves two hot spare disks. I get about 7,000 TPS for pg_bench. The\nchassis has dual hot-swappable power supplies and dual networks for\nfailover. It's in the neighborhood of $5,000.\n\nCraig\n\n>\n> Best regards,\n> Mark\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 27 Sep 2012 12:37:51 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 01:22 PM, Claudio Freire wrote:\n> On Thu, Sep 27, 2012 at 4:11 PM, M. D. <[email protected]> wrote:\n>> At this point I'm dealing with a fairly small database of 8 to 9 GB.\n> ...\n>> The on_hand lookup table\n>> currently has 3 million rows after 4 years of data.\n> ...\n>> For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10.\n> For a 9GB database, that amount of RAM seems like overkill to me.\n> Unless you expect to grow a lot faster than you've been growing, or\n> perhaps your middle tier consumes a lot of those 32GB, I don't see the\n> point there.\n>\nThe middle tier does caching and can easily take up to 10GB of RAM, \ntherefore I'm buying more.\n\n\n",
"msg_date": "Thu, 27 Sep 2012 13:40:01 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/27/2012 1:11 PM, M. D. wrote:\n>\n> I want to buy a new server, and am contemplating a Dell R710 or the \n> newer R720. The R710 has the x5600 series CPU, while the R720 has the \n> newer E5-2600 series CPU.\n\nFor this the best data I've found (excepting actually running tests on \nthe physical hardware) is to use the SpecIntRate2006 numbers, which can \nbe found for both machines on the spec.org web site.\n\nI think the newer CPU is the clear winner with a specintrate performance \nof 589 vs 432.\nIt also has a significantly larger cache. Comparing single-threaded \nperformance, the older CPU is slightly faster (50 vs 48). That wouldn't \nbe a big enough difference to make me pick it.\n\nThe Sandy Bridge-based machine will likely use less power.\n\nhttp://www.spec.org/cpu2006/results/res2012q2/cpu2006-20120604-22697.html\n\nhttp://www.spec.org/cpu2006/results/res2012q1/cpu2006-20111219-19272.html\n\nTo find more results use this page : \nhttp://www.spec.org/cgi-bin/osgresults?conf=cpu2006;op=form\n(enter R710 or R720 in the \"system\" field).\n\n\n\n",
"msg_date": "Thu, 27 Sep 2012 13:40:17 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/27/2012 1:37 PM, Craig James wrote:\n> We use a \"white box\" vendor (ASA Computers), and have been very happy\n> with the results. They build exactly what I ask for and deliver it in\n> about a week. They offer on-site service and warranties, but don't\n> pressure me to buy them. I'm not locked in to anything. Their prices\n> are good.\n\nI'll second that : we build our own machines from white-label parts for \ntypically less than 1/2 the Dell list price. However, Dell does provide \nvalue to some people : for example you can point a third-party software \nvendor at a Dell box and demand they make their application work \nproperly whereas they may turn their nose up at a white label box. Same \ngoes for Operating Systems : we have spent much time debugging Linux \nkernel issues on white box hardware. On Dell hardware we would most \nlikely have not hit those bugs because Red Hat tests on Dell. So YMMV...\n\n\n\n\n\n",
"msg_date": "Thu, 27 Sep 2012 13:47:40 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 01:47 PM, David Boreham wrote:\n> On 9/27/2012 1:37 PM, Craig James wrote:\n>> We use a \"white box\" vendor (ASA Computers), and have been very happy\n>> with the results. They build exactly what I ask for and deliver it in\n>> about a week. They offer on-site service and warranties, but don't\n>> pressure me to buy them. I'm not locked in to anything. Their prices\n>> are good.\n>\n> I'll second that : we build our own machines from white-label parts \n> for typically less than 1/2 the Dell list price. However, Dell does \n> provide value to some people : for example you can point a third-party \n> software vendor at a Dell box and demand they make their application \n> work properly whereas they may turn their nose up at a white label \n> box. Same goes for Operating Systems : we have spent much time \n> debugging Linux kernel issues on white box hardware. On Dell hardware \n> we would most likely have not hit those bugs because Red Hat tests on \n> Dell. So YMMV...\n>\nI'm in Belize, so what I'm considering is from ebay, where it's unlikely \nthat I'll get the warranty. Should I consider some other brand rather? \nTo build my own or buy custom might be an option too, but I would not \nget any warranty.\n\nDell does sales directly to Belize, but the price is so much higher than \nUS prices that it's hardly worth the support/warranty.\n\n\n",
"msg_date": "Thu, 27 Sep 2012 13:56:02 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/27/2012 1:56 PM, M. D. wrote:\n> I'm in Belize, so what I'm considering is from ebay, where it's \n> unlikely that I'll get the warranty. Should I consider some other \n> brand rather? To build my own or buy custom might be an option too, \n> but I would not get any warranty. \nI don't have any recent experience with white label system vendors, but \nI suspect they are assembling machines from supermicro, asus, intel or \ntyan motherboards and enclosures, which is what we do. You can buy the \nhardware from suppliers such as newegg.com. It takes some time to read \nthe manufacturer's documentation, figure out what kind of memory to buy \nand so on, which is basically what you're paying a white label box \nseller to do for you.\n\nFor example here's a similar barebones system to the R720 I found with a \ncouple minutes searching on newegg.com : \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16816117259\nYou could order that SKU, plus the two CPU devices, however many memory \nsticks you need, and drives. If you need less RAM (the Dell box allows \nup to 24 sticks) there are probably cheaper options.\n\nThe equivalent Supermicro box looks to be somewhat less expensive : \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16816101693\n\nWhen you consider downtime and the cost to ship equipment back to the \nsupplier, a warranty doesn't have much value to me but it may be useful \nin your situation.\n\n\n\n",
"msg_date": "Thu, 27 Sep 2012 14:13:01 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thursday, September 27, 2012 02:13:01 PM David Boreham wrote:\n> The equivalent Supermicro box looks to be somewhat less expensive :\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693\n> \n> When you consider downtime and the cost to ship equipment back to the\n> supplier, a warranty doesn't have much value to me but it may be useful\n> in your situation.\n\nAnd you can probably buy 2 Supermicros for the cost of the Dell. 100% spares.\n\n\n",
"msg_date": "Thu, 27 Sep 2012 13:31:20 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 2:31 PM, Alan Hodgson <[email protected]> wrote:\n> On Thursday, September 27, 2012 02:13:01 PM David Boreham wrote:\n>> The equivalent Supermicro box looks to be somewhat less expensive :\n>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693\n>>\n>> When you consider downtime and the cost to ship equipment back to the\n>> supplier, a warranty doesn't have much value to me but it may be useful\n>> in your situation.\n>\n> And you can probably buy 2 Supermicros for the cost of the Dell. 100% spares.\n\nThis 100x this. We used to buy our boxes from aberdeeninc.com and got\na 5 year replacement parts warranty included. We spent ~$10k on a\nserver that was right around $18k from dell for the same numbers and a\n3 year warranty.\n\n",
"msg_date": "Thu, 27 Sep 2012 14:44:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 01:37 PM, Craig James wrote:\n> I don't think you've supplied enough information for anyone to give\n> you a meaningful answer. What's your current configuration? Are you\n> I/O bound, CPU bound, memory limited, or some other problem? You need\n> to do a specific analysis of the queries that are causing you problems\n> (i.e. why do you need to upgrade at all?)\nMy current configuration is a Dell PE 1900, E5335, 16GB Ram, 2 250GB Raid 0.\n\nI'm buying a new server mostly because the current one is a bit slow and \nI need a new gateway server, so to get faster database responses, I want \nto upgrade this and use the old one for gateway.\n\nThe current system is limited to 16GB Ram, so it is basically maxed out.\n\nA query that takes 89 seconds right now is run on a regular basis \n(82,000 rows):\n\nselect item.item_id,item_plu.number,item.description,\n(select number from account where asset_acct = account_id),\n(select number from account where expense_acct = account_id),\n(select number from account where income_acct = account_id),\n(select dept.name from dept where dept.dept_id = item.dept_id) as dept,\n(select subdept.name from subdept where subdept.subdept_id = \nitem.subdept_id) as subdept,\n(select sum(on_hand) from item_change where item_change.item_id = \nitem.item_id) as on_hand,\n(select sum(on_order) from item_change where item_change.item_id = \nitem.item_id) as on_order,\n(select sum(total_cost) from item_change where item_change.item_id = \nitem.item_id) as total_cost\nfrom item join item_plu on item.item_id = item_plu.item_id and \nitem_plu.seq_num = 0\nwhere item.inactive_on is null and exists (select item_num.number from \nitem_num\nwhere item_num.item_id = item.item_id)\nand exists (select stocked from item_store where stocked = 'Y'\nand inactive_on is null\nand item_store.item_id = item.item_id)\n\n\nExplain analyse: http://explain.depesz.com/s/sGq\n\n\n\n",
"msg_date": "Thu, 27 Sep 2012 14:46:22 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware advice"
},
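The correlated subselects in the query above are the usual first suspect, and they are what the later replies in this thread zero in on. A hedged sketch of what a join-based rewrite could look like: it is untested against the Quasar schema, and it assumes asset_acct, expense_acct, and income_acct are columns of item (the original scalar subselects only imply that). LEFT JOINs are used so items with missing lookups still appear, matching the NULL a scalar subselect returns when it finds no row:

```sql
SELECT item.item_id, item_plu.number, item.description,
       aa.number AS asset_acct_num,
       ea.number AS expense_acct_num,
       ia.number AS income_acct_num,
       dept.name AS dept, subdept.name AS subdept,
       ic.on_hand, ic.on_order, ic.total_cost
FROM item
JOIN item_plu ON item.item_id = item_plu.item_id AND item_plu.seq_num = 0
LEFT JOIN account aa ON aa.account_id = item.asset_acct
LEFT JOIN account ea ON ea.account_id = item.expense_acct
LEFT JOIN account ia ON ia.account_id = item.income_acct
LEFT JOIN dept    ON dept.dept_id = item.dept_id
LEFT JOIN subdept ON subdept.subdept_id = item.subdept_id
-- Aggregate item_change once, instead of three correlated
-- subselects executed per output row:
LEFT JOIN (SELECT item_id,
                  sum(on_hand)    AS on_hand,
                  sum(on_order)   AS on_order,
                  sum(total_cost) AS total_cost
           FROM item_change
           GROUP BY item_id) ic ON ic.item_id = item.item_id
WHERE item.inactive_on IS NULL
  AND EXISTS (SELECT 1 FROM item_num
              WHERE item_num.item_id = item.item_id)
  AND EXISTS (SELECT 1 FROM item_store
              WHERE stocked = 'Y' AND inactive_on IS NULL
                AND item_store.item_id = item.item_id);
```

The point of the derived table is that the 3-million-row item_change table is scanned and hash-aggregated once, rather than probed three times for each of the 82,000 result rows.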
{
"msg_contents": "On 09/27/2012 02:40 PM, David Boreham wrote:\n\n> I think the newer CPU is the clear winner with a specintrate\n> performance of 589 vs 432.\n\nThe comparisons you linked to had 24 absolute threads pitted against 32, \nsince the newer CPUs have a higher maximum cores per CPU. That said, \nyou're right that it has a fairly large cache. And from my experience, \nIntel CPU generations have been scaling incredibly well lately. \n(Opteron, we hardly knew ye!)\n\nWe went from Dunnington to Nehalem, and it was stunning how much better \nthe X5675 was compared to the E7450. Sandy Bridge isn't quite that much \nof a jump though, so if you don't need that kind of bleeding-edge, you \nmight be able to save some cash. This is especially true since the \nE5-2600 series has the same TDP profile and both use 32nm lithography.\n\nMe? I'm waiting for Haswell, the next \"tock\" in Intel's Tick-Tock strategy.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Thu, 27 Sep 2012 15:47:34 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 03:44 PM, Scott Marlowe wrote:\n\n> This 100x this. We used to buy our boxes from aberdeeninc.com and got\n> a 5 year replacement parts warranty included. We spent ~$10k on a\n> server that was right around $18k from dell for the same numbers and a\n> 3 year warranty.\n\nWhatever you do, go for the Intel ethernet adaptor option. We've had so \nmany headaches with integrated broadcom NICs. :(\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Thu, 27 Sep 2012 15:50:33 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 2:46 PM, M. D. <[email protected]> wrote:\n>\n> select item.item_id,item_plu.number,item.description,\n> (select number from account where asset_acct = account_id),\n> (select number from account where expense_acct = account_id),\n> (select number from account where income_acct = account_id),\n> (select dept.name from dept where dept.dept_id = item.dept_id) as dept,\n> (select subdept.name from subdept where subdept.subdept_id =\n> item.subdept_id) as subdept,\n> (select sum(on_hand) from item_change where item_change.item_id =\n> item.item_id) as on_hand,\n> (select sum(on_order) from item_change where item_change.item_id =\n> item.item_id) as on_order,\n> (select sum(total_cost) from item_change where item_change.item_id =\n> item.item_id) as total_cost\n> from item join item_plu on item.item_id = item_plu.item_id and\n> item_plu.seq_num = 0\n> where item.inactive_on is null and exists (select item_num.number from\n> item_num\n> where item_num.item_id = item.item_id)\n> and exists (select stocked from item_store where stocked = 'Y'\n> and inactive_on is null\n> and item_store.item_id = item.item_id)\n\nHave you tried re-writing this query first? Is there a reason to have\na bunch of subselects instead of joining the tables? What pg version\nare you running btw? A newer version of pg might help too.\n\n",
"msg_date": "Thu, 27 Sep 2012 14:55:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 2:50 PM, Shaun Thomas <[email protected]> wrote:\n> On 09/27/2012 03:44 PM, Scott Marlowe wrote:\n>\n>> This 100x this. We used to buy our boxes from aberdeeninc.com and got\n>> a 5 year replacement parts warranty included. We spent ~$10k on a\n>> server that was right around $18k from dell for the same numbers and a\n>> 3 year warranty.\n>\n>\n> Whatever you do, go for the Intel ethernet adaptor option. We've had so many\n> headaches with integrated broadcom NICs. :(\n\nI too have had problems with broadcom, as well as with nvidia nics and\nmost other built in nics on servers. The Intel PCI dual nic cards\nhave been my savior in the past.\n\n",
"msg_date": "Thu, 27 Sep 2012 14:55:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 03:55 PM, Scott Marlowe wrote:\n\n> Have you tried re-writing this query first? Is there a reason to have\n> a bunch of subselects instead of joining the tables? What pg version\n> are you running btw? A newer version of pg might help too.\n\nWow, yeah. I was just about to say something about that. I even pasted \nit into a notepad and started cutting it apart, but I wasn't sure about \nenough of the column sources in all those subqueries.\n\nIt looks like it'd be a very, very good candidate for a window function \nor two, and maybe a few CASE statements. But I'm about 80% certain it's \nnot very efficient as is.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Thu, 27 Sep 2012 16:01:26 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
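For what the "window function" remark above could look like in practice, here is a hedged sketch against the item_change table from the query earlier in the thread. For these particular totals a plain GROUP BY is equivalent and simpler; the window form mainly pays off when per-row detail must be kept alongside the aggregates:

```sql
-- One pass over item_change, keeping each change row while also
-- carrying the per-item totals alongside it (supported since 8.4).
SELECT item_id, on_hand, on_order, total_cost,
       sum(on_hand)    OVER w AS item_on_hand,
       sum(on_order)   OVER w AS item_on_order,
       sum(total_cost) OVER w AS item_total_cost
FROM item_change
WINDOW w AS (PARTITION BY item_id);
```

Either way, the three per-item sums are computed in a single scan instead of three correlated subselects per output row.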
{
"msg_contents": "On 9/27/2012 2:55 PM, Scott Marlowe wrote:\n> Whatever you do, go for the Intel ethernet adaptor option. We've had so many\n> >headaches with integrated broadcom NICs.:(\nSound advice, but not a get out of jail card unfortunately : we had a \nhorrible problem with the Intel e1000 driver in RHEL for several releases.\nFinally diagnosed it just as RH shipped a fixed driver.\n\n\n\n",
"msg_date": "Thu, 27 Sep 2012 15:04:51 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/27/2012 2:47 PM, Shaun Thomas wrote:\n> On 09/27/2012 02:40 PM, David Boreham wrote:\n>\n>> I think the newer CPU is the clear winner with a specintrate\n>> performance of 589 vs 432.\n>\n> The comparisons you linked to had 24 absolute threads pitted against \n> 32, since the newer CPUs have a higher maximum cores per CPU. That \n> said, you're right that it has a fairly large cache. And from my \n> experience, Intel CPU generations have been scaling incredibly well \n> lately. (Opteron, we hardly knew ye!)\nYes, the \"rate\" spec test uses all the available cores. I'm assuming a \nconcurrent workload, but since the single-thread performance isn't that \nmuch different between the two I think the higher number of cores, \nlarger cache, newer design CPU is the best choice.\n>\n> We went from Dunnington to Nehalem, and it was stunning how much \n> better the X5675 was compared to the E7450. Sandy Bridge isn't quite \n> that much of a jump though, so if you don't need that kind of \n> bleeding-edge, you might be able to save some cash. This is especially \n> true since the E5-2600 series has the same TDP profile and both use \n> 32nm lithography.\nWe use Opteron on a price/performance basis. Intel always seems to come \nup with some way to make their low-cost processors useless (such as \nlimiting the amount of memory they can address).\n\n\n\n\n\n",
"msg_date": "Thu, 27 Sep 2012 15:08:03 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "Hello,\n\nFrom benchmarking on my read-only in-memory database, I can tell that 9.1 on an X5650 is faster than 9.2 on an E5-2440.\nI do not have an X5690, but I do have a not-so-loaded E5-2660.\n\nIf you can give me a dump and some queries, I can bench them.\n\nNevertheless, the X5690 seems more efficient on a single-threaded workload than the 2660, unless you have many clients.\n",
"msg_date": "Fri, 28 Sep 2012 01:08:46 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <[email protected]> wrote:\n>>\n>> We went from Dunnington to Nehalem, and it was stunning how much better\n>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a\n>> jump though, so if you don't need that kind of bleeding-edge, you might be\n>> able to save some cash. This is especially true since the E5-2600 series has\n>> the same TDP profile and both use 32nm lithography.\n>\n> We use Opteron on a price/performance basis. Intel always seems to come up\n> with some way to make their low-cost processors useless (such as limiting\n> the amount of memory they can address).\n\nCareful with AMD, since many (I'm not sure about the latest ones)\ncannot saturate the memory bus when running single-threaded. So, great\nif you have a high concurrent workload, quite bad if you don't.\n\n",
"msg_date": "Thu, 27 Sep 2012 18:16:29 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thursday, September 27, 2012 03:04:51 PM David Boreham wrote:\n> On 9/27/2012 2:55 PM, Scott Marlowe wrote:\n> > Whatever you do, go for the Intel ethernet adaptor option. We've had so\n> > many> \n> > >headaches with integrated broadcom NICs.:(\n> \n> Sound advice, but not a get out of jail card unfortunately : we had a\n> horrible problem with the Intel e1000 driver in RHEL for several releases.\n> Finally diagnosed it just as RH shipped a fixed driver.\n\nYeah I've been compiling a newer one on each kernel release for a couple of \nyears. But the hardware rocks.\n\nThe Supermicro boxes also mostly have Intel network onboard, so not a problem \nthere.\n\n",
"msg_date": "Thu, 27 Sep 2012 14:17:57 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 04:08 PM, Evgeny Shishkin wrote:\n\n> from benchmarking on my r/o in memory database, i can tell that 9.1\n> on x5650 is faster than 9.2 on e2440.\n\nHow did you run those benchmarks? I find that incredibly hard to \nbelieve. Not only does 9.2 scale *much* better than 9.1, but the E5-2440 \nis a 15MB cache Sandy Bridge, as opposed to a 12MB cache Nehalem. \nDespite the slightly lower clock speed, you should have much better \nperformance with 9.2 on the 2440.\n\nI know one thing you might want to check is to make sure both servers \nhave turbo mode enabled, and power savings turned off for all CPUs. \nCheck the BIOS for the CPU settings, because some motherboards and \nvendors have different defaults. I know we got inconsistent and much \nworse performance until we made those two changes on our HP systems.\n\nWe use pgbench for benchmarking, so there's not anything I can really \nsend you. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Thu, 27 Sep 2012 16:20:33 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 02:55 PM, Scott Marlowe wrote:\n> On Thu, Sep 27, 2012 at 2:46 PM, M. D. <[email protected]> wrote:\n>> select item.item_id,item_plu.number,item.description,\n>> (select number from account where asset_acct = account_id),\n>> (select number from account where expense_acct = account_id),\n>> (select number from account where income_acct = account_id),\n>> (select dept.name from dept where dept.dept_id = item.dept_id) as dept,\n>> (select subdept.name from subdept where subdept.subdept_id =\n>> item.subdept_id) as subdept,\n>> (select sum(on_hand) from item_change where item_change.item_id =\n>> item.item_id) as on_hand,\n>> (select sum(on_order) from item_change where item_change.item_id =\n>> item.item_id) as on_order,\n>> (select sum(total_cost) from item_change where item_change.item_id =\n>> item.item_id) as total_cost\n>> from item join item_plu on item.item_id = item_plu.item_id and\n>> item_plu.seq_num = 0\n>> where item.inactive_on is null and exists (select item_num.number from\n>> item_num\n>> where item_num.item_id = item.item_id)\n>> and exists (select stocked from item_store where stocked = 'Y'\n>> and inactive_on is null\n>> and item_store.item_id = item.item_id)\n> Have you tried re-writing this query first? Is there a reason to have\n> a bunch of subselects instead of joining the tables? What pg version\n> are you running btw? A newer version of pg might help too.\n>\n>\nThis query is inside an application (Quasar Accounting) written in Qt \nand I don't have access to the source code. The query is cross \ndatabase, so it's likely that's why it's written the way it is. The form \nthis query is on also allows the user to add/remove columns, so it makes \nit a LOT easier from the application point of view to do columns as they \nare here. 
I had at one point tried to make this same query a table \njoin, but did not notice any performance difference in pg 8.x - been a \nwhile so don't remember exactly what version.\n\nI'm currently on 9.0. I will upgrade to 9.2 once I get a new server. \nAs noted above, I need to buy a new server anyway, so I'm going for this \none and using the current as a VM server for several VMs and also a \nbackup database server.\n\n\n",
"msg_date": "Thu, 27 Sep 2012 15:22:41 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/27/2012 3:16 PM, Claudio Freire wrote:\n> Careful with AMD, since many (I'm not sure about the latest ones)\n> cannot saturate the memory bus when running single-threaded. So, great\n> if you have a high concurrent workload, quite bad if you don't.\n>\nActually we test memory bandwidth with John McCalpin's STREAM program.\nUnfortunately it is hard to find STREAM results for recent machines, so comparing two boxes can be difficult unless you own examples; that's why I didn't mention it as a useful option. But if you can find results for the machines, or ask a friend to run it for you... definitely useful information.\n\n",
"msg_date": "Thu, 27 Sep 2012 15:28:38 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "\nOn Sep 28, 2012, at 1:20 AM, Shaun Thomas <[email protected]> wrote:\n\n> On 09/27/2012 04:08 PM, Evgeny Shishkin wrote:\n> \n>> from benchmarking on my r/o in memory database, i can tell that 9.1\n>> on x5650 is faster than 9.2 on e2440.\n> \n> How did you run those benchmarks? I find that incredibly hard to believe. Not only does 9.2 scale *much* better than 9.1, but the E5-2440 is a 15MB cache Sandy Bridge, as opposed to a 12MB cache Nehalem. Despite the slightly lower clock speed, you should have much better performance with 9.2 on the 2440.\n> \n> I know one thing you might want to check is to make sure both servers have turbo mode enabled, and power savings turned off for all CPUs. Check the BIOS for the CPU settings, because some motherboards and vendors have different defaults. I know we got inconsistent and much worse performance until we made those two changes on our HP systems.\n> \n> We use pgbench for benchmarking, so there's not anything I can really send you. :)\n\nYes, on pgbench utilising the CPU to 80-90%, the E5-2660 is better; it goes to 140k read-only TPS, so scalability is very good.\nBut I am talking about a real OLTP read-only query, single-threaded, and there the CPU clock was the real winner.\n",
"msg_date": "Fri, 28 Sep 2012 01:29:48 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "Please don't take responses off list, someone else may have an insight I'd miss.\n\nOn Thu, Sep 27, 2012 at 3:20 PM, M. D. <[email protected]> wrote:\n> On 09/27/2012 02:55 PM, Scott Marlowe wrote:\n>>\n>> On Thu, Sep 27, 2012 at 2:46 PM, M. D. <[email protected]> wrote:\n>>>\n>>> select item.item_id,item_plu.number,item.description,\n>>> (select number from account where asset_acct = account_id),\n>>> (select number from account where expense_acct = account_id),\n>>> (select number from account where income_acct = account_id),\n>>> (select dept.name from dept where dept.dept_id = item.dept_id) as dept,\n>>> (select subdept.name from subdept where subdept.subdept_id =\n>>> item.subdept_id) as subdept,\n>>> (select sum(on_hand) from item_change where item_change.item_id =\n>>> item.item_id) as on_hand,\n>>> (select sum(on_order) from item_change where item_change.item_id =\n>>> item.item_id) as on_order,\n>>> (select sum(total_cost) from item_change where item_change.item_id =\n>>> item.item_id) as total_cost\n>>> from item join item_plu on item.item_id = item_plu.item_id and\n>>> item_plu.seq_num = 0\n>>> where item.inactive_on is null and exists (select item_num.number from\n>>> item_num\n>>> where item_num.item_id = item.item_id)\n>>> and exists (select stocked from item_store where stocked = 'Y'\n>>> and inactive_on is null\n>>> and item_store.item_id = item.item_id)\n>>\n>> Have you tried re-writing this query first? Is there a reason to have\n>> a bunch of subselects instead of joining the tables? What pg version\n>> are you running btw? A newer version of pg might help too.\n>>\n> This query is inside an application (Quasar Accounting) written in Qt and I\n> don't have access to the source code. The query is cross database, so it's\n> likely that's why it's written the way it is. 
The form this query is on also\n> allows the user to add/remove columns, so it makes it a LOT easier from the\n> application point of view to do columns as they are here. I had at one\n> point tried to make this same query a table join, but did not notice any\n> performance difference in pg 8.x - been a while so don't remember exactly\n> what version.\n\nHave you tried cranking up work_mem and see if it helps this query at\nleast avoid a nested look on 80k rows? If they'd fit in memory and\nuse bitmap hashes it should be MUCH faster than a nested loop.\n\n>\n> I'm currently on 9.0. I will upgrade to 9.2 once I get a new server. As\n> noted above, I need to buy a new server anyway, so I'm going for this one\n> and using the current as a VM server for several VMs and also a backup\n> database server.\n\nWell being on 9.0 should make a big diff from 8.2. But again, without\nenough work_mem for the query to use a bitmap hash or something more\nefficient than a nested loop it's gonna be slow.\n\n",
"msg_date": "Thu, 27 Sep 2012 15:32:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire <[email protected]> wrote:\n> On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <[email protected]> wrote:\n>>>\n>>> We went from Dunnington to Nehalem, and it was stunning how much better\n>>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a\n>>> jump though, so if you don't need that kind of bleeding-edge, you might be\n>>> able to save some cash. This is especially true since the E5-2600 series has\n>>> the same TDP profile and both use 32nm lithography.\n>>\n>> We use Opteron on a price/performance basis. Intel always seems to come up\n>> with some way to make their low-cost processors useless (such as limiting\n>> the amount of memory they can address).\n>\n> Careful with AMD, since many (I'm not sure about the latest ones)\n> cannot saturate the memory bus when running single-threaded. So, great\n> if you have a high concurrent workload, quite bad if you don't.\n\nConversely, we often got MUCH better parallel performance from our\nquad 12 core opteron servers than I could get on a dual 8 core xeon at\nthe time. The newest quad 10 core Intels are about as fast as the\nquad 12 core opteron from 3 years ago. So for parallel operation, do\nremember to look at the opteron. It was much cheaper to get highly\nparallel operation on the opterons than the xeons at the time we got\nthe quad 12 core machine at my last job.\n\n",
"msg_date": "Thu, 27 Sep 2012 15:36:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 3:36 PM, Scott Marlowe <[email protected]> wrote:\n> Conversely, we often got MUCH better parallel performance from our\n> quad 12 core opteron servers than I could get on a dual 8 core xeon at\n> the time.\n\nClarification that the two base machines were about the same price.\n48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a\nfew years, I'm not gonna testify to the exact numbers in court. But\nthe performance to 32 to 100 threads was WAY better on the 48 core\nopteron machine, never really breaking down even to 120+ threads. The\nIntel machine hit a very real knee of performance and dropped off\nreally badly after about 40 threads (they were hyperthreaded).\n\n",
"msg_date": "Thu, 27 Sep 2012 15:39:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "\nOn Sep 28, 2012, at 1:36 AM, Scott Marlowe <[email protected]> wrote:\n\n> On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire <[email protected]> wrote:\n>> On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <[email protected]> wrote:\n>>>> \n>>>> We went from Dunnington to Nehalem, and it was stunning how much better\n>>>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a\n>>>> jump though, so if you don't need that kind of bleeding-edge, you might be\n>>>> able to save some cash. This is especially true since the E5-2600 series has\n>>>> the same TDP profile and both use 32nm lithography.\n>>> \n>>> We use Opteron on a price/performance basis. Intel always seems to come up\n>>> with some way to make their low-cost processors useless (such as limiting\n>>> the amount of memory they can address).\n>> \n>> Careful with AMD, since many (I'm not sure about the latest ones)\n>> cannot saturate the memory bus when running single-threaded. So, great\n>> if you have a high concurrent workload, quite bad if you don't.\n> \n> Conversely, we often got MUCH better parallel performance from our\n> quad 12 core opteron servers than I could get on a dual 8 core xeon at\n> the time. The newest quad 10 core Intels are about as fast as the\n> quad 12 core opteron from 3 years ago. So for parallel operation, do\n> remember to look at the opteron. It was much cheaper to get highly\n> parallel operation on the opterons than the xeons at the time we got\n> the quad 12 core machine at my last job.\n> \n\n\nBut what about latency, not throughput?\n",
"msg_date": "Fri, 28 Sep 2012 01:40:06 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 04:39 PM, Scott Marlowe wrote:\n\n> Clarification that the two base machines were about the same price.\n> 48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a\n> few years, I'm not gonna testify to the exact numbers in court.\n\nSame here. We got really good performance on Opteron \"a few years ago\" \ntoo. :)\n\nBut some more anecdotes... with the 4x8 E7450 Dunnington, our \nperformance was OK. With the 2x6x2 X5675 Nehalem, it was ridiculous. \nHalf the cores, 2.5x the speed, so far as pgbench was concerned. On \nevery workload, on every level of concurrency I tried. Like you said, \nthe 7450 dropped off at higher concurrency, but the 5675 kept on trucking.\n\nThat's why I qualified my statement about Intel CPUs as \"lately.\" They \nreally seem to have cleaned up their server architecture.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Thu, 27 Sep 2012 16:44:11 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 3:40 PM, Evgeny Shishkin <[email protected]> wrote:\n>\n> On Sep 28, 2012, at 1:36 AM, Scott Marlowe <[email protected]> wrote:\n>\n>> On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire <[email protected]> wrote:\n>>> On Thu, Sep 27, 2012 at 6:08 PM, David Boreham <[email protected]> wrote:\n>>>>>\n>>>>> We went from Dunnington to Nehalem, and it was stunning how much better\n>>>>> the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a\n>>>>> jump though, so if you don't need that kind of bleeding-edge, you might be\n>>>>> able to save some cash. This is especially true since the E5-2600 series has\n>>>>> the same TDP profile and both use 32nm lithography.\n>>>>\n>>>> We use Opteron on a price/performance basis. Intel always seems to come up\n>>>> with some way to make their low-cost processors useless (such as limiting\n>>>> the amount of memory they can address).\n>>>\n>>> Careful with AMD, since many (I'm not sure about the latest ones)\n>>> cannot saturate the memory bus when running single-threaded. So, great\n>>> if you have a high concurrent workload, quite bad if you don't.\n>>\n>> Conversely, we often got MUCH better parallel performance from our\n>> quad 12 core opteron servers than I could get on a dual 8 core xeon at\n>> the time. The newest quad 10 core Intels are about as fast as the\n>> quad 12 core opteron from 3 years ago. So for parallel operation, do\n>> remember to look at the opteron. It was much cheaper to get highly\n>> parallel operation on the opterons than the xeons at the time we got\n>> the quad 12 core machine at my last job.\n>\n> But what about latency, not throughput?\n\nIt means little when you're building a server to handle literally\nthousands of queries per seconds from hundreds of active connections.\nThe intel box would have simply fallen over under the load we were\nhandling on the 48 core opteron at the time. 
Note that under maximum\nload we saw load factors in the 20 to 100 on that opteron box and\nstill got very good response times (average latency on most queries\nwas still in the single digits of milliseconds).\n\nFor single threaded or only a few threads, yeah, the intel was\nslightly faster, but as soon as the real load of our web site hit the\nmachine it wasn't even close.\n\n",
"msg_date": "Thu, 27 Sep 2012 17:52:01 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 3:44 PM, Shaun Thomas <[email protected]> wrote:\n> On 09/27/2012 04:39 PM, Scott Marlowe wrote:\n>\n>> Clarification that the two base machines were about the same price.\n>> 48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a\n>> few years, I'm not gonna testify to the exact numbers in court.\n>\n>\n> Same here. We got really good performance on Opteron \"a few years ago\" too.\n> :)\n>\n> But some more anecdotes... with the 4x8 E7450 Dunnington, our performance\n> was OK. With the 2x6x2 X5675 Nehalem, it was ridiculous. Half the cores,\n> 2.5x the speed, so far as pgbench was concerned. On every workload, on every\n> level of concurrency I tried. Like you said, the 7450 dropped off at higher\n> concurrency, but the 5675 kept on trucking.\n>\n> That's why I qualified my statement about Intel CPUs as \"lately.\" They\n> really seem to have cleaned up their server architecture.\n\nYeah, Intel's made a lot of headway on multi-core architecture since\nthen. But the 5620 etc series of the time were still pretty meh at\nhigh concurrency compared to the opteron. The latest ones, which I've\ntested now (40 hyperthreaded cores i.e 80 virtual cores) are\ndefinitely faster than the now 4 year old 48 core opterons. But at a\nmuch higher cost for a pretty moderate (20 to 30%) increase in\nperformance. OTOH, they don't \"break down\" past 40 to 100 connections\nany more, so that's the big improvement to me.\n\nHow the curve looks like heading to 60+ threads is mildly interesting,\nbut how the server performs as you go past it was what worried me\nbefore. Now both architectures seem to behave much better in such\n\"overload\" scenarios.\n\n",
"msg_date": "Thu, 27 Sep 2012 17:56:13 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 3:28 PM, David Boreham <[email protected]> wrote:\n> On 9/27/2012 3:16 PM, Claudio Freire wrote:\n>>\n>> Careful with AMD, since many (I'm not sure about the latest ones)\n>> cannot saturate the memory bus when running single-threaded. So, great\n>> if you have a high concurrent workload, quite bad if you don't.\n>>\n> Actually we test memory bandwidth with John McCalpin's stream program.\n> Unfortunately it is hard to find stream test results for recent machines so\n> it can be hard to compare two boxes unless you own examples, so I didn't\n> mention it as a useful option. But if you can find results for the machines,\n> or ask a friend to run it for you...definitely useful information.\n\nIIRC the most recent tests from Greg Smith show the latest model\nIntels winning by a fair bit over the opterons. Before that though\nthe 48 core opteron servers were winning. It tends to go back and\nforth. Dollar for dollar, the Opterons are usually the better value\nnow, while the Intels give the absolute best performance money can\nbuy.\n\n",
"msg_date": "Thu, 27 Sep 2012 17:57:47 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/27/2012 10:22 PM, M. D. wrote:\n> On 09/27/2012 02:55 PM, Scott Marlowe wrote:\n>> On Thu, Sep 27, 2012 at 2:46 PM, M. D. <[email protected]> wrote:\n>>> select item.item_id,item_plu.number,item.description,\n>>> (select number from account where asset_acct = account_id),\n>>> (select number from account where expense_acct = account_id),\n>>> (select number from account where income_acct = account_id),\n>>> (select dept.name from dept where dept.dept_id = item.dept_id) as dept,\n>>> (select subdept.name from subdept where subdept.subdept_id =\n>>> item.subdept_id) as subdept,\n>>> (select sum(on_hand) from item_change where item_change.item_id =\n>>> item.item_id) as on_hand,\n>>> (select sum(on_order) from item_change where item_change.item_id =\n>>> item.item_id) as on_order,\n>>> (select sum(total_cost) from item_change where item_change.item_id =\n>>> item.item_id) as total_cost\n>>> from item join item_plu on item.item_id = item_plu.item_id and\n>>> item_plu.seq_num = 0\n>>> where item.inactive_on is null and exists (select item_num.number from\n>>> item_num\n>>> where item_num.item_id = item.item_id)\n>>> and exists (select stocked from item_store where stocked = 'Y'\n>>> and inactive_on is null\n>>> and item_store.item_id = item.item_id)\n\n\n>> Have you tried re-writing this query first? Is there a reason to have\n>> a bunch of subselects instead of joining the tables? What pg version\n>> are you running btw? A newer version of pg might help too.\n>>\n>>\n> This query is inside an application (Quasar Accounting) written in Qt and I don't have access to the source code.\n\nIs there any prospect of the planner/executor being taught to\nmerge each of those groups of three index scans,\nto aid this sort of poor query?\n-- \nJeremy\n\n",
"msg_date": "Fri, 28 Sep 2012 11:38:07 +0100",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Thu, Sep 27, 2012 at 03:50:33PM -0500, Shaun Thomas wrote:\n> On 09/27/2012 03:44 PM, Scott Marlowe wrote:\n> \n> >This 100x this. We used to buy our boxes from aberdeeninc.com and got\n> >a 5 year replacement parts warranty included. We spent ~$10k on a\n> >server that was right around $18k from dell for the same numbers and a\n> >3 year warranty.\n> \n> Whatever you do, go for the Intel ethernet adaptor option. We've had\n> so many headaches with integrated broadcom NICs. :(\n> \n+++1 Sigh.\n\nKen\n\n",
"msg_date": "Fri, 28 Sep 2012 10:38:23 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/27/2012 1:56 PM, M. D. wrote:\n>>\n>> I'm in Belize, so what I'm considering is from ebay, where it's unlikely\n>> that I'll get the warranty. Should I consider some other brand rather? To\n>> build my own or buy custom might be an option too, but I would not get any\n>> warranty.\n\nYour best warranty would be to have the confidence to do your own\nrepairs, and to have the parts on hand. I'd seriously consider\nputting your own system together. Maybe go to a few sites with\npre-configured machines and see what parts they use. Order those,\nscrew the thing together yourself, and put a spare of each critical\npart on your shelf.\n\nA warranty is useless if you can't use it in a timely fashion. And\nyou could easily get better reliability by spending the money on spare\nparts. I'd bet that for the price of a warranty you can buy a spare\nmotherboard, a few spare disks, a memory stick or two, a spare power\nsupply, and maybe even a spare 3WARE RAID controller.\n\nCraig\n\n",
"msg_date": "Fri, 28 Sep 2012 08:46:50 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 9/28/2012 9:46 AM, Craig James wrote:\n> Your best warranty would be to have the confidence to do your own\n> repairs, and to have the parts on hand. I'd seriously consider\n> putting your own system together. Maybe go to a few sites with\n> pre-configured machines and see what parts they use. Order those,\n> screw the thing together yourself, and put a spare of each critical\n> part on your shelf.\n>\nThis is what I did for years, but after taking my old parts collection \nto the landfill a few times, realized I may as well just buy N+1 \nmachines and keep zero spares on the shelf. That way I get a spare \nmachine available for use immediately, and I know the parts are working \n(parts on the shelf may be defective). If something breaks, I use the \nspare machine until the replacement parts arrive.\n\nNote in addition that a warranty can be extremely useful in certain \norganizations as a vehicle of blame avoidance (this may be its primary \npurpose in fact). If I buy a bunch of machines that turn out to have \nbuggy NICs, well that's my fault and I can kick myself since I own the \ncompany, stay up late into the night reading kernel code, and buy new \nNICs. If I have an evil Dilbertian boss, then well...I'd be seriously \nthinking about buying Dell boxes in order to blame Dell rather than \nmyself, and be able to say \"everything is warrantied\" if badness goes \ndown. Just saying...\n\n\n\n",
"msg_date": "Fri, 28 Sep 2012 09:57:16 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On 09/28/2012 09:57 AM, David Boreham wrote:\n> On 9/28/2012 9:46 AM, Craig James wrote:\n>> Your best warranty would be to have the confidence to do your own\n>> repairs, and to have the parts on hand. I'd seriously consider\n>> putting your own system together. Maybe go to a few sites with\n>> pre-configured machines and see what parts they use. Order those,\n>> screw the thing together yourself, and put a spare of each critical\n>> part on your shelf.\n>>\n> This is what I did for years, but after taking my old parts collection \n> to the landfill a few times, realized I may as well just buy N+1 \n> machines and keep zero spares on the shelf. That way I get a spare \n> machine available for use immediately, and I know the parts are \n> working (parts on the shelf may be defective). If something breaks, I \n> use the spare machine until the replacement parts arrive.\n>\n> Note in addition that a warranty can be extremely useful in certain \n> organizations as a vehicle of blame avoidance (this may be its primary \n> purpose in fact). If I buy a bunch of machines that turn out to have \n> buggy NICs, well that's my fault and I can kick myself since I own the \n> company, stay up late into the night reading kernel code, and buy new \n> NICs. If I have an evil Dilbertian boss, then well...I'd be seriously \n> thinking about buying Dell boxes in order to blame Dell rather than \n> myself, and be able to say \"everything is warrantied\" if badness goes \n> down. Just saying...\n>\nI'm kinda in the latter shoes. Dell is the only thing that is trusted \nin my organisation. If I would build my own, I would be fully blamed \nfor anything going wrong in the next 3 years. Thanks everyone for your \ninput. Now my final choice will be if my budget allows for the latest \nand fastest, else I'm going for the x5690. I don't have hundreds of \nusers, so I think the x5690 should do a pretty good job handling the load.\n\n\n\n",
"msg_date": "Fri, 28 Sep 2012 11:33:58 -0600",
"msg_from": "\"M. D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Fri, Sep 28, 2012 at 11:33 AM, M. D. <[email protected]> wrote:\n> On 09/28/2012 09:57 AM, David Boreham wrote:\n>>\n>> On 9/28/2012 9:46 AM, Craig James wrote:\n>>>\n>>> Your best warranty would be to have the confidence to do your own\n>>> repairs, and to have the parts on hand. I'd seriously consider\n>>> putting your own system together. Maybe go to a few sites with\n>>> pre-configured machines and see what parts they use. Order those,\n>>> screw the thing together yourself, and put a spare of each critical\n>>> part on your shelf.\n>>>\n>> This is what I did for years, but after taking my old parts collection to\n>> the landfill a few times, realized I may as well just buy N+1 machines and\n>> keep zero spares on the shelf. That way I get a spare machine available for\n>> use immediately, and I know the parts are working (parts on the shelf may be\n>> defective). If something breaks, I use the spare machine until the\n>> replacement parts arrive.\n>>\n>> Note in addition that a warranty can be extremely useful in certain\n>> organizations as a vehicle of blame avoidance (this may be its primary\n>> purpose in fact). If I buy a bunch of machines that turn out to have buggy\n>> NICs, well that's my fault and I can kick myself since I own the company,\n>> stay up late into the night reading kernel code, and buy new NICs. If I have\n>> an evil Dilbertian boss, then well...I'd be seriously thinking about buying\n>> Dell boxes in order to blame Dell rather than myself, and be able to say\n>> \"everything is warrantied\" if badness goes down. Just saying...\n>>\n> I'm kinda in the latter shoes. Dell is the only thing that is trusted in my\n> organisation. If I would build my own, I would be fully blamed for anything\n> going wrong in the next 3 years. Thanks everyone for your input. Now my\n> final choice will be if my budget allows for the latest and fastest, else\n> I'm going for the x5690. 
I don't have hundreds of users, so I think the\n> x5690 should do a pretty good job handling the load.\n\nIf people in your organization trust Dell, they just haven't dealt\nwith them enough.\n\n",
"msg_date": "Fri, 28 Sep 2012 17:48:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": ">________________________________\n\n> From: M. D. <[email protected]>\n>To: [email protected] \n>Sent: Friday, 28 September 2012, 18:33\n>Subject: Re: [PERFORM] hardware advice\n> \n>On 09/28/2012 09:57 AM, David Boreham wrote:\n>> On 9/28/2012 9:46 AM, Craig James wrote:\n>>> Your best warranty would be to have the confidence to do your own\n>>> repairs, and to have the parts on hand. I'd seriously consider\n>>> putting your own system together. Maybe go to a few sites with\n>>> pre-configured machines and see what parts they use. Order those,\n>>> screw the thing together yourself, and put a spare of each critical\n>>> part on your shelf.\n>>> \n>> This is what I did for years, but after taking my old parts collection to the landfill a few times, realized I may as well just buy N+1 machines and keep zero spares on the shelf. That way I get a spare machine available for use immediately, and I know the parts are working (parts on the shelf may be defective). If something breaks, I use the spare machine until the replacement parts arrive.\n>> \n>> Note in addition that a warranty can be extremely useful in certain organizations as a vehicle of blame avoidance (this may be its primary purpose in fact). If I buy a bunch of machines that turn out to have buggy NICs, well that's my fault and I can kick myself since I own the company, stay up late into the night reading kernel code, and buy new NICs. If I have an evil Dilbertian boss, then well...I'd be seriously thinking about buying Dell boxes in order to blame Dell rather than myself, and be able to say \"everything is warrantied\" if badness goes down. Just saying...\n>> \n>I'm kinda in the latter shoes. Dell is the only thing that is trusted in my organisation. If I would build my own, I would be fully blamed for anything going wrong in the next 3 years. Thanks everyone for your input. Now my final choice will be if my budget allows for the latest and fastest, else I'm going for the x5690. 
I don't have hundreds of users, so I think the x5690 should do a pretty good job handling the load.\n>\n>\n\nHaving\nplenty experience with Dell I'd urge you reconsider. All the Dell servers\nwe've had have arrived hideously misconfigured, and tech support gets you\nnowhere. Once we've rejigged the hardware ourselves, maybe replacing a\npart or two they've performed okay.\n \nReliability has been okay, however one of our\nnewer R910s recently all of a sudden went dead to the world; no prior symptoms\nshowing in our hardware and software monitoring, no errors in the os logs,\nnothing in the dell drac logs. After a hard reset it's back up as if\nnothing happened, and it's an issue I'm none the wiser to the cause. Not\ngood piece of mind.\n \nLook around and find another vendor, even if\nyour company has to pay more for you to have that blame avoidance.\n\n",
"msg_date": "Tue, 2 Oct 2012 09:20:51 +0100 (BST)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "> From: [email protected]\n[mailto:[email protected]] On Behalf Of Glyn Astill\n> Sent: Tuesday, October 02, 2012 4:21 AM\n> To: M. D.; [email protected]\n> Subject: Re: [PERFORM] hardware advice\n>\n>> From: M. D. <[email protected]>\n>> To: [email protected]\n>> Sent: Friday, 28 September 2012, 18:33\n>> Subject: Re: [PERFORM] hardware advice\n>>\n>> On 09/28/2012 09:57 AM, David Boreham wrote:\n>>> On 9/28/2012 9:46 AM, Craig James wrote:\n>>>> Your best warranty would be to have the confidence to do your own\n>>>> repairs, and to have the parts on hand. I'd seriously consider\n>>>> putting your own system together. Maybe go to a few sites with\n>>>> pre-configured machines and see what parts they use. Order those,\n>>>> screw the thing together yourself, and put a spare of each critical\n>>>> part on your shelf.\n>>>>\n>>> This is what I did for years, but after taking my old parts\ncollection to the landfill a few times, realized I may as well just buy\nN+1 machines and keep zero spares on the shelf. That way I get a spare\nmachine available for use immediately, and I know the parts are working\n(parts on the shelf may be defective). If something breaks, I use the\nspare machine until the replacement parts arrive.\n>>>\n>>> Note in addition that a warranty can be extremely useful in certain\norganizations as a vehicle of blame avoidance (this may be its primary\npurpose in fact). If I buy a bunch of machines that turn out to have\nbuggy NICs, well that's my fault and I can kick myself since I own the\ncompany, stay up late into the night reading kernel code, and buy new\nNICs. If I have an evil Dilbertian boss, then well...I'd be seriously\nthinking about buying Dell boxes in order to blame Dell rather than\nmyself, and be able to say \"everything is warrantied\" if badness goes\ndown. Just saying...\n>>>\n>>I'm kinda in the latter shoes. Dell is the only thing that is trusted\nin my organisation. 
If I would build my own, I would be fully blamed\nfor anything going wrong in the next 3 years. Thanks everyone for your\ninput. Now my final choice will be if my budget allows for the latest\nand fastest, else I'm going for the x5690. I don't have hundreds of\nusers, so I think the x5690 should do a pretty good job handling the\nload.\n>>\n>>\n>\n> Having plenty experience with Dell I'd urge you reconsider. All the\nDell servers\n> we've had have arrived hideously misconfigured, and tech support gets\nyou\n> nowhere. Once we've rejigged the hardware ourselves, maybe replacing\na\n> part or two they've performed okay.\n>\n> Reliability has been okay, however one of our newer R910s recently all\n> of a sudden went dead to the world; no prior symptoms showing in our\n> hardware and software monitoring, no errors in the os logs, nothing in\n> the dell drac logs. After a hard reset it's back up as if nothing\n> happened, and it's an issue I'm none the wiser to the cause. Not good\n> piece of mind.\n>\n> Look around and find another vendor, even if your company has to pay\n> more for you to have that blame avoidance.\n\nWe're currently using Dell and have had enough problems to think about\nswitching.\nWhat about HP?\n\nDan Franklin",
"msg_date": "Tue, 02 Oct 2012 10:51:46 -0400",
"msg_from": "\"Franklin, Dan (FEN)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice - opinions about HP?"
},
{
"msg_contents": "On Tue, Oct 2, 2012 at 10:51:46AM -0400, Franklin, Dan (FEN) wrote:\n> > Look around and find another vendor, even if your company has to pay\n> \n> > more for you to have that blame avoidance.\n> \n> We're currently using Dell and have had enough problems to think about\n> switching.\n> \n> What about HP?\n\nIf you need a big vendor, I think HP is a good choice.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Tue, 2 Oct 2012 11:14:09 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice - opinions about HP?"
},
{
"msg_contents": "On 10/2/2012 2:20 AM, Glyn Astill wrote:\n> newer R910s recently all of a sudden went dead to the world; no prior symptoms\n> showing in our hardware and software monitoring, no errors in the os logs,\n> nothing in the dell drac logs. After a hard reset it's back up as if\n> nothing happened, and it's an issue I'm none the wiser to the cause. Not\n> good piece of mind.\nThis could be an OS bug rather than a hardware problem.\n\n\n\n\n\n",
"msg_date": "Tue, 02 Oct 2012 09:14:18 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
},
{
"msg_contents": "On Tue, Oct 2, 2012 at 9:14 AM, Bruce Momjian <[email protected]> wrote:\n> On Tue, Oct 2, 2012 at 10:51:46AM -0400, Franklin, Dan (FEN) wrote:\n>> We're currently using Dell and have had enough problems to think about\n>> switching.\n>>\n>> What about HP?\n>\n> If you need a big vendor, I think HP is a good choice.\n\nThis brings up a point I make sometimes to folks. Big companies can\nget great treatment from big vendors. When you work somewhere that\norders servers by the truckload, you need a vendor who can fill trucks\nwith servers in a day's notice, and send you a hundred different\nreplacement parts the next.\n\nConversely, if you are a smaller company that orders a dozen or so\nservers a year, then often a big vendor is not the best match. You're\njust a drop in the ocean to them. A small vendor is often a much\nbetter match here. They can carefully test those two 48 core opteron\nservers with 100 drives over a week's time to make sure it works the\nway you need it to. It might take them four weeks to build a big\nspecialty box, but it will usually get built right and for a decent\nprice. Also the sales people will usually be more knowledgeable about\nthe machines they sell.\n\nRecent job: 20 or fewer servers ordered a year, boutique shop for them\n(aberdeeninc in this case).\nOther recent job: 20 or more servers a week. Big reseller (not at\nliberty to release the name).\n\n",
"msg_date": "Tue, 2 Oct 2012 10:01:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice - opinions about HP?"
},
{
"msg_contents": "----- Original Message -----\n\n> From: David Boreham <[email protected]>\n> To: \"[email protected]\" <[email protected]>\n> Cc: \n> Sent: Tuesday, 2 October 2012, 16:14\n> Subject: Re: [PERFORM] hardware advice\n> \n> On 10/2/2012 2:20 AM, Glyn Astill wrote:\n>> newer R910s recently all of a sudden went dead to the world; no prior \n> symptoms\n>> showing in our hardware and software monitoring, no errors in the os logs,\n>> nothing in the dell drac logs. After a hard reset it's back up as if\n>> nothing happened, and it's an issue I'm none the wiser to the \n> cause. Not\n>> good piece of mind.\n> This could be an OS bug rather than a hardware problem.\n\nYeah actually I'm leaning towards this being a specific bug in the linux kernel. Everything else I said still stands though.\n\n\n",
"msg_date": "Wed, 3 Oct 2012 12:56:44 +0100 (BST)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware advice"
}
] |
[
{
"msg_contents": "Hello everybody,\n\nWe have being doing some testing with an ISD transaction and we had\nsome problems that we posted here.\n\nThe answers we got were very kind and useful but we couldn't solve the problem.\n\nWe have doing some investigations after this and we are thinking if is\nit possible that OS has something to do with this issue. I mean, we\nhave two hosts, both of them with OS = Red Hat Enterprise Linux Server\nrelease 6.2 (Santiago)\n\nBut when doing \"select * from version()\" on the postgres shell we obtain:\n\nsessions=# select * from version();\n version\n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit\n(1 row)\n\nWe don't understand why in here it's written \"(Red Hat 4.4.6-3)\".\n\nIs it possible that we have installed a postgres' version that it's\nnot perfect for the OS?\n\nBut if this is a problem, why are we obtaining a normal perform on a\nhost and an exponential performance decrease on another?\n\nAnd how can we obtain a normal performance when launching the program\nwhich does the queries from another host (remote url) but when\nlaunching it in the same host we obtain this decrease on the\nperformance?\n\n\nAny idea would be great!\n\nThanks very much!!!!\n\n\n\n\nUseful data:\n\nname |\ncurrent_setting\n\n--------------------------+--------------------------------------------------------------------------------------------\n------------------\n version | PostgreSQL 9.1.3 on\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red\nHat\n 4.4.6-3), 64-bit\n archive_mode | off\n client_encoding | UTF8\n fsync | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_directory | pg_log\n log_filename | postgresql-%a.log\n log_rotation_age | 1d\n log_rotation_size | 0\n log_truncate_on_rotation | on\n logging_collector 
| on\n max_connections | 100\n max_stack_depth | 2MB\n port | 50008\n server_encoding | UTF8\n shared_buffers | 32MB\n synchronous_commit | on\n TimeZone | Europe/Madrid\n wal_buffers | 64kB\n wal_sync_method | fsync\n(22 rows)\n\n",
"msg_date": "Fri, 28 Sep 2012 12:43:24 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "exponential performance decrease, problem with version postgres + RHEL?"
},
{
"msg_contents": "Hi\n\nOn 28 September 2012 13:43, John Nash <[email protected]> wrote:\n\n> We don't understand why in here it's written \"(Red Hat 4.4.6-3)\".\n\nGCC version is 4.4.6-3 on RHEL 6.2 :)\n\n--\nWith best regards,\nNikolay\n\n",
"msg_date": "Fri, 28 Sep 2012 13:47:53 +0300",
"msg_from": "Nikolay Ulyanitsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] exponential performance decrease, problem with version postgres + RHEL?"
},
{
"msg_contents": "John Nash wrote:\r\n> We have being doing some testing with an ISD transaction and we had\r\n> some problems that we posted here.\r\n> \r\n> The answers we got were very kind and useful but we couldn't solve the problem.\r\n\r\nCould you refer to the threads so that you don't get the same advice again?\r\n\r\n> We have doing some investigations after this and we are thinking if is\r\n> it possible that OS has something to do with this issue. I mean, we\r\n> have two hosts, both of them with OS = Red Hat Enterprise Linux Server\r\n> release 6.2 (Santiago)\r\n> \r\n> But when doing \"select * from version()\" on the postgres shell we obtain:\r\n> \r\n> sessions=# select * from version();\r\n> version\r\n> ------------------------------------------------------------------------------------------------------\r\n> --------\r\n> PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\r\n> 4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit\r\n> (1 row)\r\n> \r\n> We don't understand why in here it's written \"(Red Hat 4.4.6-3)\".\r\n> \r\n> Is it possible that we have installed a postgres' version that it's\r\n> not perfect for the OS?\r\n\r\nIt means that the PostgreSQL you are using was compiled with a\r\ncompiler that was compiled on RHEL4. 
Shouldn't be a problem.\r\n\r\n> But if this is a problem, why are we obtaining a normal perform on a\r\n> host and an exponential performance decrease on another?\r\n> \r\n> And how can we obtain a normal performance when launching the program\r\n> which does the queries from another host (remote url) but when\r\n> launching it in the same host we obtain this decrease on the\r\n> performance?\r\n\r\nTry to identify the bottleneck.\r\nIs it disk I/O, CPU, memory or something else?\r\n\r\n> name |\r\n> current_setting\r\n> \r\n> --------------------------+---------------------------------------------------------------------------\r\n> -----------------\r\n> ------------------\r\n> version | PostgreSQL 9.1.3 on\r\n> x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red\r\n> Hat\r\n> 4.4.6-3), 64-bit\r\n> archive_mode | off\r\n> client_encoding | UTF8\r\n> fsync | on\r\n> lc_collate | en_US.UTF-8\r\n> lc_ctype | en_US.UTF-8\r\n> listen_addresses | *\r\n> log_directory | pg_log\r\n> log_filename | postgresql-%a.log\r\n> log_rotation_age | 1d\r\n> log_rotation_size | 0\r\n> log_truncate_on_rotation | on\r\n> logging_collector | on\r\n> max_connections | 100\r\n> max_stack_depth | 2MB\r\n> port | 50008\r\n> server_encoding | UTF8\r\n> shared_buffers | 32MB\r\n\r\nNow that sticks out as being pretty small.\r\nTry 1/4 of the memory available for the database, but not\r\nmore than 2 GB.\r\n\r\n> synchronous_commit | on\r\n> TimeZone | Europe/Madrid\r\n> wal_buffers | 64kB\r\n\r\nThat's also pretty small.\r\n\r\n> wal_sync_method | fsync\r\n> (22 rows)\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 28 Sep 2012 12:58:12 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PERFORM] exponential performance decrease, problem with version postgres + RHEL?"
},
{
"msg_contents": "Ah ok!\n\nThank you very much!\n\n As it's written:\n\n compiled by gcc (GCC) 4.4.6 20110731\n\nwe thought that was the gcc version and the one written between () was\nthe OS version.\n\nSo, this is not a problem!\n\nThank you!!!!\n\n\n\n2012/9/28 Nikolay Ulyanitsky <[email protected]>:\n> Hi\n>\n> On 28 September 2012 13:43, John Nash <[email protected]> wrote:\n>\n>> We don't understand why in here it's written \"(Red Hat 4.4.6-3)\".\n>\n> GCC version is 4.4.6-3 on RHEL 6.2 :)\n>\n> --\n> With best regards,\n> Nikolay\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 28 Sep 2012 12:58:42 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Re: [PERFORM] exponential performance decrease, problem with version postgres + RHEL?"
}
] |
[
{
"msg_contents": "On machine 1 - a table that contains between 12 and 18 million rows\nOn machine 2 - a Java app that calls Select * on the table, and writes it\ninto a Lucene index\n\nOriginally had a fetchSize of 10,000 and would take around 38 minutes for 12\nmillion, 50 minutes for 16ish million to read it all & write it all back out\nas the lucene index\n\nOne day it started taking 4 hours. If something changed, we dont know what\nit was\n\nWe tracked it down to, after 10 million or so rows, the Fetch to get the\nnext 10,000 rows from the DB goes from like 1 second to 30 seconds, and\nstays there\n\nAfter spending a week of two devs & DBA trying to solve this, we eventually\n\"solved\" it by upping the FetchRowSize in the JDBC call to 50,000\n\nIt was performing well enough again for a few weeks\n\nthen...one day... it started taking 4 hours again\n\nwe tried upping the shared_buffer from 16GB to 20GB\n\nAnd last night... it took 7 hours\n\nwe are using PGSQL 9.1\n\ndoes anyone have ANY ideas?!\n\nthanks much\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Select-on-12-18M-row-table-from-remote-machine-thru-JDBC-Performance-nose-dives-after-10M-ish-records-tp5725853.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Fri, 28 Sep 2012 06:52:30 -0700 (PDT)",
"msg_from": "antthelimey <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"Select * \" on 12-18M row table from remote machine thru JDBC -\n\tPerformance nose-dives after 10M-ish records"
},
{
"msg_contents": "I think the best advice I can think of is to go back to the basics. Tools\nlike sar and top and look at logs. Changing random settings on both the\nclient and server seems like guessing. I find it unlikely that the changes\nyou made (jdbc and shared buffers) had the effects you noticed. Determine\nif it is I/O, CPU, or network. Put all your settings back to the way they\nwere. If the DB did not change, then look at OS and network.\n\nDeron\nOn Sep 28, 2012 6:53 AM, \"antthelimey\" <[email protected]> wrote:\n\n> On machine 1 - a table that contains between 12 and 18 million rows\n> On machine 2 - a Java app that calls Select * on the table, and writes it\n> into a Lucene index\n>\n> Originally had a fetchSize of 10,000 and would take around 38 minutes for\n> 12\n> million, 50 minutes for 16ish million to read it all & write it all back\n> out\n> as the lucene index\n>\n> One day it started taking 4 hours. If something changed, we dont know what\n> it was\n>\n> We tracked it down to, after 10 million or so rows, the Fetch to get the\n> next 10,000 rows from the DB goes from like 1 second to 30 seconds, and\n> stays there\n>\n> After spending a week of two devs & DBA trying to solve this, we\n> eventually\n> \"solved\" it by upping the FetchRowSize in the JDBC call to 50,000\n>\n> It was performing well enough again for a few weeks\n>\n> then...one day... it started taking 4 hours again\n>\n> we tried upping the shared_buffer from 16GB to 20GB\n>\n> And last night... 
it took 7 hours\n>\n> we are using PGSQL 9.1\n>\n> does anyone have ANY ideas?!\n>\n> thanks much\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Select-on-12-18M-row-table-from-remote-machine-thru-JDBC-Performance-nose-dives-after-10M-ish-records-tp5725853.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI think the best advice I can think of is to go back to the basics. Tools like sar and top and look at logs. Changing random settings on both the client and server seems like guessing. I find it unlikely that the changes you made (jdbc and shared buffers) had the effects you noticed. Determine if it is I/O, CPU, or network. Put all your settings back to the way they were. If the DB did not change, then look at OS and network.\nDeron\nOn Sep 28, 2012 6:53 AM, \"antthelimey\" <[email protected]> wrote:\nOn machine 1 - a table that contains between 12 and 18 million rows\nOn machine 2 - a Java app that calls Select * on the table, and writes it\ninto a Lucene index\n\nOriginally had a fetchSize of 10,000 and would take around 38 minutes for 12\nmillion, 50 minutes for 16ish million to read it all & write it all back out\nas the lucene index\n\nOne day it started taking 4 hours. If something changed, we dont know what\nit was\n\nWe tracked it down to, after 10 million or so rows, the Fetch to get the\nnext 10,000 rows from the DB goes from like 1 second to 30 seconds, and\nstays there\n\nAfter spending a week of two devs & DBA trying to solve this, we eventually\n\"solved\" it by upping the FetchRowSize in the JDBC call to 50,000\n\nIt was performing well enough again for a few weeks\n\nthen...one day... it started taking 4 hours again\n\nwe tried upping the shared_buffer from 16GB to 20GB\n\nAnd last night... 
it took 7 hours\n\nwe are using PGSQL 9.1\n\ndoes anyone have ANY ideas?!\n\nthanks much\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Select-on-12-18M-row-table-from-remote-machine-thru-JDBC-Performance-nose-dives-after-10M-ish-records-tp5725853.html\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 28 Sep 2012 08:10:26 -0700",
"msg_from": "Deron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Select * \" on 12-18M row table from remote machine\n\tthru JDBC - Performance nose-dives after 10M-ish records"
}
] |
[
{
"msg_contents": "Hey guys,\n\nI ran into this while we were working on an upgrade project. We're \nmoving from 8.2 (don't ask) to 9.1, and started getting terrible \nperformance for some queries. I've managed to boil it down to a test case:\n\ncreate temp table my_foo as\nselect a.id, '2012-01-01'::date + (random()*365)::int AS created_dt\n from generate_series(1,5000) as a(id);\n\ncreate temp table my_bar as\nselect b.id, (random()*4999)::int + 1 as aid,\n '2012-01-01'::date + (random()*365)::int AS created_dt\n from generate_series(1,500000) as b(id);\n\nanalyze my_foo;\nanalyze my_bar;\n\ncreate index idx_foo_id on my_foo (id);\ncreate index idx_foo_const on my_foo (created_dt);\n\ncreate index idx_bar_id on my_bar(id);\ncreate index idx_bar_aid on my_bar(aid);\ncreate index idx_bar_const on my_bar (created_dt);\n\n\nOk, simple enough, right? Now do this:\n\n\nexplain analyze\nselect b.*\n from my_foo a, my_bar b\n where a.created_dt = '2012-05-05'\n and b.created_dt between a.created_dt\n and a.created_dt + interval '1 month';\n\nexplain analyze\nselect b.*\n from my_foo a, my_bar b\n where a.created_dt = '2012-05-05'\n and b.created_dt between '2012-05-05'\n and '2012-05-05'::date + interval '1 month';\n\n\nThese do not create the same query plan, which itself is odd. But the \nother thing, is that query 1 is about 4-8x slower than query 2, but only \nwhen I test it on PostgreSQL 9.1. When I test it on 8.2 (eww) they're \nabout equal in performance. I should note that the plan for both cases \nin 8.2, performs better than query 1 in 9.1.\n\nSo I've got two questions:\n\n1. Is it normal for trivially equal values to be non-optimal like this?\n2. What on earth happened between 8.2 and 9.1 that made performance \nworse for this test case?\n\nJust to address any questions, I've tested this in multiple \nenvironments, and it's always consistent. 
9.1 performs worse than 8.2 \nhere, so long as you rely on PostgreSQL to make the equivalence instead \nof doing it manually.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 28 Sep 2012 14:22:56 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible Performance Regression with Transitive Comparisons vs.\n\tConstants"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> I ran into this while we were working on an upgrade project. We're \n> moving from 8.2 (don't ask) to 9.1, and started getting terrible \n> performance for some queries. I've managed to boil it down to a test case:\n\n9.1.what? For me, 8.2.23 and 9.1.6 produce the same plan and just about\nthe same runtime for your query 1. For query 2, 9.1.6 prefers to stick\nin a Materialize node, which cuts the runtime 30% or so --- but if I set\nenable_material to off then I get the same plan and runtime as with 8.2.\n\nPerhaps you should show the EXPLAIN ANALYZE outputs you're actually\ngetting, rather than assuming others will get the same thing.\n\n\t\t\tregards, tom lane\n\n(PS: it does seem that HEAD has got some kind of issue here, because\nit's picking a plain not bitmap indexscan. I'll go look at that.\nBut I don't see that misbehavior in 9.1.)\n\n",
"msg_date": "Fri, 28 Sep 2012 16:35:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible Performance Regression with Transitive Comparisons vs.\n\tConstants"
},
{
"msg_contents": "On 09/28/2012 03:35 PM, Tom Lane wrote:\n\n> 9.1.what? For me, 8.2.23 and 9.1.6 produce the same plan and just\n> about the same runtime for your query 1.\n\nI withdraw that part of my question. I apparently didn't look closely \nenough at the actual output. I was basing the version assumption on the \nquery speed on the new server, when it was probably due to cache effects.\n\nThe first part of the question stands, though... Why isn't the optimizer \nsubstituting these values? a.created_date should be exactly equivalent \nto '2012-05-05', but it's clearly not being treated that way.\n\nWith the full substitutions, I'm seeing things like this:\n\nhttp://explain.depesz.com/s/3T4\n\nWith the column names, it's this:\n\nhttp://explain.depesz.com/s/Fq7\n\nThis is on 8.2, but the behavior is the same on 9.1. From 130s to 23s \nsimply by substituting the constant wherever the column name is \nencountered. For reference, the queries are, slow:\n\nselect a.id, f.ezorder_id\n from reporting.account a\n join ezorder f on f.account_id = a.account_id\n where a.process_date = '2012-09-27'\n and f.date_created between a.process_date - interval '6 months'\n and a.process_date\n and a.row_out is null\n\nAnd fast:\n\nselect a.id, f.ezorder_id\n from reporting.account a\n join ezorder f on f.account_id = a.account_id\n where a.process_date = '2012-09-27'\n and f.date_created between '2012-09-27'::date - interval '6 months'\n and '2012-09-27'\n and a.row_out is null\n\nWe discovered this during the upgrade, but it seems to equally apply to \nboth 8.2 and 9.1. I've been telling the devs to replace any of these \nthey find all day. I can't quite say why we never \"noticed\" this before, \nbut it got exposed today pretty plainly. If this were a compiler, I'd \nhave expected it to treat the values as equivalent, but that's clearly \nnot what's happening.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 28 Sep 2012 16:37:40 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possible Performance Regression with Transitive Comparisons\n\tvs. Constants"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> The first part of the question stands, though... Why isn't the optimizer \n> substituting these values? a.created_date should be exactly equivalent \n> to '2012-05-05', but it's clearly not being treated that way.\n\nNo version of Postgres has ever substituted constants in the way you're\nimagining, and I wouldn't hold my breath waiting for it to happen. The\nreason is that \"x = constant\" only creates a requirement for x to be\nbtree-equal to the constant, and btree equality doesn't guarantee\nequality for all purposes. In this example we'd have to assume that\nbtree-equality guaranteed identical results from the date + interval\naddition operator. While that happens to be true for this operator,\nthe planner can't know that.\n\nA real-world example of the kind of case I'm worried about is that in\nIEEE-spec float arithmetic, minus zero and plus zero compare equal ---\nbut there are functions that give different results for the two values.\nAnother is that the char(n) type's equality operator will say that\n'foo' and 'foo ' are equal, but those values are definitely\ndistinguishable by some operations, eg length().\n\nThere are some cases where the planner can effectively propagate\nconstants, but they rely on transitivity of btree equality operators.\nFor instance if we have x = constant and x = y, with compatible equality\noperators, we can deduce y = constant. 
But that doesn't imply that y\n*is* the constant, just that it's btree-equal to it.\n\nThere have been some discussions of inventing a stronger notion of\nequality than btree equality, so that we could know when it's safe to\nmake this type of substitution; but nothing's been done about that.\nPersonally I think it's fairly rare that any real win would come from\nthis type of constant substitution, and so it's very likely that adding\nit would just create a net drag on performance (because of the added\nplanner cycles spent looking for substitution opportunities, which would\nhappen in every query whether it got any benefit or not).\n\nAnother point here is that at least for the one side of your BETWEEN\noperator, b.created_dt >= a.created_dt, we could in fact combine that\nwith a.created_dt = '2012-05-05' to deduce b.created_dt >= '2012-05-05',\nbecause we know from the btree opclass for dates that these = and >=\noperators have compatible semantics. Again though, it seems likely that\nthe cost of looking for such opportunities would outweigh the benefits.\nIn this particular example I don't think it'd do much good --- the\nreason the planner isn't picking a plan similar to the \"fast\" one is\nthat it doesn't know that the BETWEEN with variable limits will select\nonly a relatively small part of the table. Providing a constant limit\nfor just one side wouldn't fix that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 28 Sep 2012 18:25:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible Performance Regression with Transitive Comparisons vs.\n\tConstants"
}
] |
[
{
"msg_contents": "Howdy, I've been debugging a client's slow query today and I'm curious\nabout the query plan. It's picking a plan that hashes lots of rows from the\nversions table (on v9.0.10)...\n\nEXPLAIN ANALYZE\nSELECT COUNT(*) FROM notes a WHERE\na.project_id = 114 AND\nEXISTS (\n SELECT 1 FROM note_links b\n WHERE\n b.note_id = a.id AND\n b.entity_type = 'Version' AND\n EXISTS (\n SELECT 1 FROM versions c\n WHERE\n c.id = b.entity_id AND\n c.code ILIKE '%comp%' AND\n c.retirement_date IS NULL\n ) AND\n b.retirement_date IS NULL\n)\n\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=833177.30..833177.31 rows=1 width=0) (actual\ntime=10806.416..10806.416 rows=1 loops=1)\n -> Hash Semi Join (cost=747004.15..833154.86 rows=8977 width=0)\n(actual time=10709.343..10806.344 rows=894 loops=1)\n Hash Cond: (a.id = b.note_id)\n -> Index Scan using notes_retirement_date_project on notes a\n (cost=0.00..66725.10 rows=12469 width=4) (actual time=12.213..71.199\nrows=12469 loops=1)\n Index Cond: (project_id = 114)\n -> Hash (cost=723749.35..723749.35 rows=1417424 width=4) (actual\ntime=10696.192..10696.192 rows=227261 loops=1)\n Buckets: 65536 Batches: 4 Memory Usage: 2016kB\n -> Hash Semi Join (cost=620007.75..723749.35 rows=1417424\nwidth=4) (actual time=8953.460..10645.714 rows=227261 loops=1)\n Hash Cond: (b.entity_id = c.id)\n -> Seq Scan on note_links b (cost=0.00..71849.56\nrows=1417424 width=8) (actual time=0.075..628.183 rows=1509795 loops=1)\n Filter: ((retirement_date IS NULL) AND\n((entity_type)::text = 'Version'::text))\n -> Hash (cost=616863.62..616863.62 rows=251530\nwidth=4) (actual time=8953.327..8953.327 rows=300115 loops=1)\n Buckets: 32768 Batches: 1 Memory Usage: 10551kB\n -> Seq Scan on versions c\n (cost=0.00..616863.62 rows=251530 width=4) (actual time=176.590..8873.588\nrows=300115 loops=1)\n Filter: 
((retirement_date IS NULL) AND\n((code)::text ~~* '%comp%'::text))\n Total runtime: 10810.479 ms\n(16 rows)\n\nHowever, I can trick it into a better plan by adding LIMIT 1 into the inner\nEXISTS:\n\nEXPLAIN ANALYZE\nSELECT COUNT(*) FROM notes a WHERE\na.project_id = 114 AND\nEXISTS (\n SELECT 1 FROM note_links b\n WHERE\n b.note_id = a.id AND\n b.entity_type = 'Version' AND\n EXISTS (\n SELECT 1 FROM versions c\n WHERE\n c.id = b.entity_id AND\n c.code ILIKE '%comp%' AND\n c.retirement_date IS NULL\n LIMIT 1\n ) AND\n b.retirement_date IS NULL\n)\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=372820.37..372820.38 rows=1 width=0) (actual\ntime=139.430..139.430 rows=1 loops=1)\n -> Nested Loop Semi Join (cost=0.00..372809.15 rows=4488 width=0)\n(actual time=9.735..139.333 rows=894 loops=1)\n -> Index Scan using notes_retirement_date_project on notes a\n (cost=0.00..66725.10 rows=12469 width=4) (actual time=9.699..67.263\nrows=12469 loops=1)\n Index Cond: (project_id = 114)\n -> Index Scan using note_links_note on note_links b\n (cost=0.00..24.54 rows=1 width=4) (actual time=0.006..0.006 rows=0\nloops=12469)\n Index Cond: (b.note_id = a.id)\n Filter: ((b.retirement_date IS NULL) AND\n((b.entity_type)::text = 'Version'::text) AND (SubPlan 1))\n SubPlan 1\n -> Limit (cost=0.00..9.04 rows=1 width=0) (actual\ntime=0.003..0.003 rows=0 loops=11794)\n -> Index Scan using versions_pkey on versions c\n (cost=0.00..9.04 rows=1 width=0) (actual time=0.003..0.003 rows=0\nloops=11794)\n Index Cond: (id = $0)\n Filter: ((retirement_date IS NULL) AND\n((code)::text ~~* '%comp%'::text))\n Total runtime: 139.465 ms\n(13 rows)\n\n\nUnfortunately, a couple other queries I tested got slower by adding the\nLIMIT so I don't think that's going to be a good workaround. 
It doesn't\nappear to be related to ILIKE, because I tried a straight equals against\nanother un-indexed column of versions and still get a slow plan (and adding\nthe LIMIT to this one made it fast too):\n\nEXPLAIN ANALYZE\nSELECT COUNT(*) FROM notes a WHERE\na.project_id = 114 AND\nEXISTS (\n SELECT 1 FROM note_links b\n WHERE\n b.note_id = a.id AND\n b.entity_type = 'Version' AND\n EXISTS (\n SELECT 1 FROM versions c\n WHERE\n c.id = b.entity_id AND\n c.sg_status_list = 'ip' AND\n c.retirement_date IS NULL\n ) AND\n b.retirement_date IS NULL\n)\n\n\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=821544.18..821544.19 rows=1 width=0) (actual\ntime=5046.492..5046.492 rows=1 loops=1)\n -> Hash Semi Join (cost=735371.03..821521.73 rows=8977 width=0)\n(actual time=4941.968..5045.968 rows=7116 loops=1)\n Hash Cond: (a.id = b.note_id)\n -> Index Scan using notes_retirement_date_project on notes a\n (cost=0.00..66725.10 rows=12469 width=4) (actual time=9.639..68.751\nrows=12469 loops=1)\n Index Cond: (project_id = 114)\n -> Hash (cost=712116.23..712116.23 rows=1417424 width=4) (actual\ntime=4931.956..4931.956 rows=297401 loops=1)\n Buckets: 65536 Batches: 4 Memory Usage: 2633kB\n -> Hash Join (cost=620484.32..712116.23 rows=1417424\nwidth=4) (actual time=3362.472..4864.816 rows=297401 loops=1)\n Hash Cond: (b.entity_id = c.id)\n -> Seq Scan on note_links b (cost=0.00..71849.56\nrows=1417424 width=8) (actual time=0.079..622.277 rows=1509795 loops=1)\n Filter: ((retirement_date IS NULL) AND\n((entity_type)::text = 'Version'::text))\n -> Hash (cost=618673.97..618673.97 rows=144828\nwidth=4) (actual time=3362.337..3362.337 rows=155834 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 5479kB\n -> HashAggregate (cost=617225.69..618673.97\nrows=144828 width=4) (actual time=3289.861..3335.344 rows=155834 loops=1)\n -> Seq Scan on 
versions c\n (cost=0.00..616863.62 rows=144828 width=4) (actual time=217.080..3133.870\nrows=155834 loops=1)\n Filter: ((retirement_date IS NULL)\nAND ((sg_status_list)::text = 'ip'::text))\n Total runtime: 5051.414 ms\n(17 rows)\n\n\nDoes anything come to mind that would help me debug why this plan is being\nchosen? Thanks!\n\nMatt",
"msg_date": "Fri, 28 Sep 2012 14:04:04 -0700",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan, nested EXISTS"
},
{
"msg_contents": "Matt Daw <[email protected]> writes:\n> Howdy, I've been debugging a client's slow query today and I'm curious\n> about the query plan. It's picking a plan that hashes lots of rows from the\n> versions table (on v9.0.10)...\n\n> EXPLAIN ANALYZE\n> SELECT COUNT(*) FROM notes a WHERE\n> a.project_id = 114 AND\n> EXISTS (\n> SELECT 1 FROM note_links b\n> WHERE\n> b.note_id = a.id AND\n> b.entity_type = 'Version' AND\n> EXISTS (\n> SELECT 1 FROM versions c\n> WHERE\n> c.id = b.entity_id AND\n> c.code ILIKE '%comp%' AND\n> c.retirement_date IS NULL\n> ) AND\n> b.retirement_date IS NULL\n> )\n\nI think the real problem here is that 9.0 is incapable of avoiding a\nfull table scan on \"note_links\", which means it doesn't really have any\nbetter option than to do the inner EXISTS as a full-table semijoin.\nThis is because it can't push a.id down through two levels of join, and\nbecause the semijoins don't commute, there's no way to get a.id into the\nscan of note_links to pull out only the useful rows. The hack with\nLIMIT avoids this problem by preventing the inner EXISTS from being\ntreated as a full-fledged semijoin; but of course that hack leaves you\nvulnerable to very bad plans if the statistics are such that a nestloop\njoin isn't the best bet for the inner EXISTS.\n\nThe work I did for parameterized paths in 9.2 was intended to address\nexactly this type of scenario. I would be interested to know if 9.2\ndoes this any better for you.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 28 Sep 2012 17:44:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan, nested EXISTS"
},
{
"msg_contents": "Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance and\nreport back.\n\nMatt\n\nOn Fri, Sep 28, 2012 at 2:44 PM, Tom Lane <[email protected]> wrote:\n\n> Matt Daw <[email protected]> writes:\n> > Howdy, I've been debugging a client's slow query today and I'm curious\n> > about the query plan. It's picking a plan that hashes lots of rows from\n> the\n> > versions table (on v9.0.10)...\n>\n> > EXPLAIN ANALYZE\n> > SELECT COUNT(*) FROM notes a WHERE\n> > a.project_id = 114 AND\n> > EXISTS (\n> > SELECT 1 FROM note_links b\n> > WHERE\n> > b.note_id = a.id AND\n> > b.entity_type = 'Version' AND\n> > EXISTS (\n> > SELECT 1 FROM versions c\n> > WHERE\n> > c.id = b.entity_id AND\n> > c.code ILIKE '%comp%' AND\n> > c.retirement_date IS NULL\n> > ) AND\n> > b.retirement_date IS NULL\n> > )\n>\n> I think the real problem here is that 9.0 is incapable of avoiding a\n> full table scan on \"note_links\", which means it doesn't really have any\n> better option than to do the inner EXISTS as a full-table semijoin.\n> This is because it can't push a.id down through two levels of join, and\n> because the semijoins don't commute, there's no way to get a.id into the\n> scan of note_links to pull out only the useful rows. The hack with\n> LIMIT avoids this problem by preventing the inner EXISTS from being\n> treated as a full-fledged semijoin; but of course that hack leaves you\n> vulnerable to very bad plans if the statistics are such that a nestloop\n> join isn't the best bet for the inner EXISTS.\n>\n> The work I did for parameterized paths in 9.2 was intended to address\n> exactly this type of scenario. I would be interested to know if 9.2\n> does this any better for you.\n>\n> regards, tom lane\n>\n",
"msg_date": "Fri, 28 Sep 2012 14:47:47 -0700",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan, nested EXISTS"
},
{
"msg_contents": "Hi Tom, v9.2.1 looks good!\n\n Aggregate (cost=420808.99..420809.00 rows=1 width=0) (actual\ntime=147.345..147.345 rows=1 loops=1)\n -> Nested Loop Semi Join (cost=0.00..420786.71 rows=8914 width=0)\n(actual time=13.847..147.219 rows=894 loops=1)\n -> Index Scan using notes_retirement_date_project on notes a\n (cost=0.00..67959.22 rows=12535 width=4) (actual time=13.811..71.741\nrows=12469 loops=1)\n Index Cond: (project_id = 114)\n -> Nested Loop Semi Join (cost=0.00..28.14 rows=1 width=4)\n(actual time=0.006..0.006 rows=0 loops=12469)\n -> Index Scan using note_links_note on note_links b\n (cost=0.00..12.37 rows=1 width=8) (actual time=0.002..0.002 rows=1\nloops=12469)\n Index Cond: (note_id = a.id)\n Filter: ((retirement_date IS NULL) AND\n((entity_type)::text = 'Version'::text))\n Rows Removed by Filter: 1\n -> Index Scan using versions_pkey on versions c\n (cost=0.00..15.76 rows=1 width=4) (actual time=0.003..0.003 rows=0\nloops=11794)\n Index Cond: (id = b.entity_id)\n Filter: ((retirement_date IS NULL) AND ((code)::text\n~~* '%comp%'::text))\n Rows Removed by Filter: 1\n Total runtime: 147.411 ms\n(14 rows)\n\nOn Fri, Sep 28, 2012 at 2:47 PM, Matt Daw <[email protected]> wrote:\n\n> Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance\n> and report back.\n>\n> Matt\n>\n>\n> On Fri, Sep 28, 2012 at 2:44 PM, Tom Lane <[email protected]> wrote:\n>\n>> Matt Daw <[email protected]> writes:\n>> > Howdy, I've been debugging a client's slow query today and I'm curious\n>> > about the query plan. 
It's picking a plan that hashes lots of rows from\n>> the\n>> > versions table (on v9.0.10)...\n>>\n>> > EXPLAIN ANALYZE\n>> > SELECT COUNT(*) FROM notes a WHERE\n>> > a.project_id = 114 AND\n>> > EXISTS (\n>> > SELECT 1 FROM note_links b\n>> > WHERE\n>> > b.note_id = a.id AND\n>> > b.entity_type = 'Version' AND\n>> > EXISTS (\n>> > SELECT 1 FROM versions c\n>> > WHERE\n>> > c.id = b.entity_id AND\n>> > c.code ILIKE '%comp%' AND\n>> > c.retirement_date IS NULL\n>> > ) AND\n>> > b.retirement_date IS NULL\n>> > )\n>>\n>> I think the real problem here is that 9.0 is incapable of avoiding a\n>> full table scan on \"note_links\", which means it doesn't really have any\n>> better option than to do the inner EXISTS as a full-table semijoin.\n>> This is because it can't push a.id down through two levels of join, and\n>> because the semijoins don't commute, there's no way to get a.id into the\n>> scan of note_links to pull out only the useful rows. The hack with\n>> LIMIT avoids this problem by preventing the inner EXISTS from being\n>> treated as a full-fledged semijoin; but of course that hack leaves you\n>> vulnerable to very bad plans if the statistics are such that a nestloop\n>> join isn't the best bet for the inner EXISTS.\n>>\n>> The work I did for parameterized paths in 9.2 was intended to address\n>> exactly this type of scenario. I would be interested to know if 9.2\n>> does this any better for you.\n>>\n>> regards, tom lane\n>>\n>\n>",
"msg_date": "Fri, 28 Sep 2012 15:56:10 -0700",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan, nested EXISTS"
}
] |
[
{
"msg_contents": "Greetings.\n\nI have a small monitoring query on the following tables:\nselect relname,relpages,reltuples::numeric(12) from pg_class where relname\nin ('meta_version','account') order by 1;\n relname | relpages | reltuples\n--------------+----------+-----------\n account | 3235 | 197723\n meta_version | 710068 | 32561200\n(2 rows)\n\nThe logical “body” of the query is:\nselect count(*) from meta_version where account_id in (select account_id\nfrom account where customer_id = 8608064);\n\nI know that due to the data distribution (above customer's accounts are\nused in 45% of the meta_version table) I\ncannot expect fast results. But I have another question.\n\nWith default default_statistics_target I get the following plan:\nhttp://explain.depesz.com/s/jri\n\nIn order to get better estimates, I've increased statistics targets to 200\nfor account.customer_id and meta_version.account_id.\nNow I have the following plan:\nhttp://explain.depesz.com/s/YZJ\n\nSecond query takes twice more time.\nMy questions are:\n- why with better statistics planner chooses to do a SeqScan in favor of\nBitmapIndexScan inside the NestedLoops?\n- is it possible to adjust this decision by changing other GUCs, perhaps\ncosts?\n- would it be correct to adjust seq_page_cost and random_page_cost based on\nthe IOPS of the underlying disks?\n any other metrics should be considered?\n\nI'm running on a:\n name |\n current_setting\n----------------------------+---------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu,\ncompiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit\n archive_command | test ! 
-f $PG_WAL/%f && cp %p $PG_WAL/%f\n archive_mode | on\n bgwriter_delay | 50ms\n bgwriter_lru_maxpages | 200\n checkpoint_segments | 25\n checkpoint_timeout | 30min\n client_encoding | UTF8\n effective_cache_size | 8GB\n hot_standby | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_checkpoints | on\n log_connections | on\n log_destination | csvlog\n log_directory | ../../log/CLUSTER\n log_disconnections | on\n log_file_mode | 0640\n log_filename | pg-%Y%m%d_%H%M%S.log\n log_line_prefix | %u:%d:%a:%h:%c:%x:%t>\n log_lock_waits | on\n log_min_duration_statement | 300ms\n log_rotation_age | 1d\n log_rotation_size | 0\n log_temp_files | 20MB\n logging_collector | on\n maintenance_work_mem | 512MB\n max_connections | 200\n max_prepared_transactions | 0\n max_stack_depth | 2MB\n max_wal_senders | 2\n port | 9120\n server_encoding | UTF8\n shared_buffers | 5GB\n silent_mode | on\n ssl | on\n ssl_renegotiation_limit | 0\n tcp_keepalives_idle | 0\n temp_buffers | 256MB\n TimeZone | US/Eastern\n wal_buffers | 512kB\n wal_keep_segments | 0\n wal_level | hot_standby\n wal_sender_delay | 1s\n work_mem | 32MB\n\nRegards.\n\n-- \nVictor Y. Yegorov",
"msg_date": "Sat, 29 Sep 2012 02:11:58 +0300",
"msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "NestedLoops over BitmapScan question"
},
{
"msg_contents": "Well, I've managed to track down the cause of improper plans.\n\nDue to the data distribution n_distinct had been estimated way too low.\nI've manually set it to be 195300 instead of 15500 (with stats_target=200):\nselect tablename,attname,null_frac,avg_width,n_distinct,correlation\n from pg_stats\n where (tablename,attname) IN\n (VALUES ('meta_version','account_id'),('account','customer_id'));\n tablename | attname | null_frac | avg_width | n_distinct | correlation\n--------------+-------------+-----------+-----------+------------+-------------\n account | customer_id | 0 | 4 | 57 | 0.998553\n meta_version | account_id | 0 | 4 | 195300 | 0.0262315\n(2 rows)\n\nStill, optimizer underestimates rows returned by the IndexScan heavily:\nhttp://explain.depesz.com/s/pDw\n\nIs it possible to get correct estimates for the IndexScan on the right side\nof the NestedLoops? I assume estimation is done by the B-tree AM and\nit is seems to be not affected by the STATISTICS parameter of the\ncolumn.\n\n\n2012/9/29 Виктор Егоров <[email protected]>:\n> Now I have the following plan:\n> http://explain.depesz.com/s/YZJ\n>\n> Second query takes twice more time.\n\n\n-- \nVictor Y. Yegorov\n\n",
"msg_date": "Tue, 2 Oct 2012 02:09:44 +0300",
"msg_from": "Victor Y. Yegorov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NestedLoops over BitmapScan question"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have a question about the deadlock_timeout in regards to performance.\nRight now we have this timeout set at its default of 1s.\nMy understanding of it is that this means that every 1 second the server\nwill check for deadlocks.\nWhat I am wondering is how much of a performance improvement we would\nexpect to get if this was raised to 30 seconds?\nIs it negligible or could it be a substantial performance improvement on a\nbusy system?\nWe very rarely have deadlocks and waiting 30 seconds to discover one\ndoesn't seem too bad.\n\nThank you.",
"msg_date": "Mon, 1 Oct 2012 12:49:53 -0400",
"msg_from": "pg noob <[email protected]>",
"msg_from_op": true,
"msg_subject": "deadlock_timeout affect on performance"
},
{
"msg_contents": "On 01.10.2012 19:49, pg noob wrote:\n> Hi all,\n>\n> I have a question about the deadlock_timeout in regards to performance.\n> Right now we have this timeout set at its default of 1s.\n> My understanding of it is that this means that every 1 second the server\n> will check for deadlocks.\n\nNot quite. It means that when a backend gets blocked, waiting on a lock, \nit will check for deadlocks after waiting for 1 second. When no backend \nis waiting for a lock, there are no deadlock checks regardless of \ndeadlock_timeout.\n\n> What I am wondering is how much of a performance improvement we would\n> expect to get if this was raised to 30 seconds?\n> Is it negligible or could it be a substantial performance improvement on a\n> busy system?\n> We very rarely have deadlocks and waiting 30 seconds to discover one\n> doesn't seem too bad.\n\nIt's almost certainly negligible. If you regularly have deadlocks, it \nmight even better for performance to make the timeout shorter than 1 s, \nso that deadlocks are detected earlier, and backends will spend less \ntime deadlocked, and more time doing real work. Although I doubt it will \nmake any meaningful difference either way.\n\n- Heikki\n\n",
"msg_date": "Tue, 02 Oct 2012 11:11:40 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: deadlock_timeout affect on performance"
}
] |
[
{
"msg_contents": "Hi, previously I selected categorized data for update then updated counts\nor inserted a new record if it was a new category of data.\n\nselect all categories\nupdate batches of categories\nor insert batches [intermingled as they hit batch size]\n\nProblem was the select was saturating the network (pulling back far more\ndata than needed too)\nSo I switched to doing optimistic updates where I checked for 0 row updates\nand made inserts out of them.\n\noptimistic update batches\nfollowed by insert batches\n\nNew problem massive table bloat. I'm losing gigabytes of disk an hour which\nI can only recover by clustering.\n\nNow's the bit where I lose some of my audience by saying I'm having this\nbloat problem on 8.3.7 and 8.4.4 but not 9.0. I'd love to upgrade obviously\nbut that's out of my hands and I've been told not an option in the short\nterm.\n\nMy thoughts are: surely 0-row updates dont cause this or have impact on the\nvacuum. I'm still doing the same updates after all why have things\ndegenerated so badly?\nWhile it made sense to me that the dead tuples are now more in the middle\nof the table than the end somehow and since autovacuum starts from the back\nthat might be the cause, but I've turned on full autovacuum logging and\nthere is seemingly very little vaccuming going on in either scenario (we\nhave a nightly scheduled cluster). In desperation I've also doubled the\nfreespace map settings in 8.3 to the seemingly very large max_fsm_pages =\n25000000 and max_fsm_relations = 200000 without improvement.\n\nAny suggestions? 
These are roughly 0.5 to 1TB databases with 8GB shared\nbuffers and work mem set appropriately and otherwise running fine.\n\ncheers\nColin",
"msg_date": "Tue, 2 Oct 2012 10:24:47 +1300",
"msg_from": "Colin Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "A Tale of 2 algorithms"
},
{
"msg_contents": "On 10/02/2012 05:24 AM, Colin Taylor wrote:\n> My thoughts are: surely 0-row updates dont cause this or have impact on\n> the vacuum. I'm still doing the same updates after all why have things\n> degenerated so badly?\n\nWhat exactly is bloating? Have you checked? Is it the table its self? \nOne of its indexes? Something else?\n\n--\nCraig Ringer\n\n",
"msg_date": "Tue, 02 Oct 2012 12:54:38 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A Tale of 2 algorithms"
},
{
"msg_contents": "On Tue, Oct 2, 2012 at 5:54 PM, Craig Ringer <[email protected]> wrote:\n\n> On 10/02/2012 05:24 AM, Colin Taylor wrote:\n>\n>> My thoughts are: surely 0-row updates dont cause this or have impact on\n>> the vacuum. I'm still doing the same updates after all why have things\n>> degenerated so badly?\n>>\n>\n> What exactly is bloating? Have you checked? Is it the table its self? One\n> of its indexes? Something else?\n>\n> --\n> Craig Ringer\n>\n\nThe table and its indexes, I have them in separate tablespaces so its quite\napparent.",
"msg_date": "Fri, 5 Oct 2012 13:34:43 +1300",
"msg_from": "Colin Taylor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A Tale of 2 algorithms"
}
] |
[
{
"msg_contents": "I'm struggling with a query that seems to use a suboptimal query plan.\n\nSchema: units reference a subjob reference a job. In other words: a job contains multiple subjobs. A subjob contains multiple units. (full schema below)\n\nWe're trying to select all subjobs that need to be reviewed and that contain units that aren't reviewed yet (either because validated is NULL or validated is 'N')\n\nNotice the EXISTS with subquery which will turn out to be the problem:\n\n(SELECT s0_m0_msubJobs.\"__id\"\n AS\n s0_msubJobs_mid,\n s0_m0_msubJobs.\"document_mflow\"\n AS s0_msubJobs_mdocument_mflow,\n s0_m0_msubJobs.\"status\"\n AS s0_msubJobs_mstatus,\n s0_m0_msubJobs.\"error_mmessage\"\n AS s0_msubJobs_merror_mmessage,\n s0_m0_msubJobs.\"validation_mrequired\"\n AS s0_msubJobs_mvalidation_mrequired,\n s0_m0_msubJobs.\"completion_mdate\"\n AS s0_msubJobs_mcompletion_mdate,\n s0_m0_msubJobs.\"creation_mdate\"\n AS s0_msubJobs_mcreation_mdate,\n s0_m0_msubJobs.\"file_mlocation\"\n AS s0_msubJobs_mfile_mlocation,\n s0_m1_mjob.\"__id\"\n AS s0_mjob_mid,\n s0_m1_mjob.\"xml_mname\"\n AS s0_mjob_mxml_mname,\n ( s0_m0_msubJobs.\"creation_mdate\" )\n AS e0_m4\n FROM \"subJobs\" s0_m0_msubJobs,\n \"job\" s0_m1_mjob\n WHERE ( ( ( ( s0_m0_msubJobs.\"status\" ) = ( 'IN_PROGRESS' ) )\n AND ( ( s0_m0_msubJobs.\"validation_mrequired\" ) = ( 'Y' ) ) )\n AND ( EXISTS (((SELECT s1_m1_munit.\"__id\" AS s1_munit_mid\n FROM \"subJobs\" s1_m0_msubJobs,\n \"unit\" s1_m1_munit\n WHERE ( ( ( s0_m0_msubJobs.\"__id\" ) =\n ( s1_m0_msubJobs.\"__id\" ) )\n\n AND\n ( s1_m0_msubJobs.\"__id\" = s1_m1_munit.\"subJobs_mid\" ) )\n AND ( ( NOT ( s1_m1_munit.\"validated\" IS NOT NULL ) )\n OR ( ( s1_m1_munit.\"validated\" ) = ( 'N'\n ) ) )))\n )\n ) )\n AND ( s0_m0_msubJobs.\"job_mid\" = s0_m1_mjob.\"__id\" ))\nORDER BY e0_m4 DESC,\n s0_mjob_mid nulls first,\n s0_msubjobs_mid nulls first\n\nThis generates the following query plan\n\nSort (cost=63242.75..63242.83 rows=30 width=503) (actual 
time=804.180..804.182 rows=49 loops=1)\n Sort Key: s0_m0_msubjobs.creation_mdate, s0_m1_mjob.__id, s0_m0_msubjobs.__id\n Sort Method: quicksort Memory: 31kB\n Buffers: shared hit=3855 read=13852\n -> Hash Join (cost=63087.27..63242.02 rows=30 width=503) (actual time=803.045..804.144 rows=49 loops=1)\n Hash Cond: (s0_m0_msubjobs.job_mid = s0_m1_mjob.__id)\n Buffers: shared hit=3855 read=13852\n -> Hash Join (cost=63069.02..63223.35 rows=30 width=484) (actual time=802.875..803.953 rows=49 loops=1)\n Hash Cond: (s1_m0_msubjobs.__id = s0_m0_msubjobs.__id)\n Buffers: shared hit=3848 read=13852\n -> HashAggregate (cost=63014.58..63060.13 rows=4555 width=16) (actual time=802.733..803.452 rows=4555 loops=1)\n Buffers: shared hit=3808 read=13852\n -> Hash Join (cost=149.49..59533.65 rows=1392372 width=16) (actual time=1.157..620.181 rows=1392372 loops=1)\n Hash Cond: (s1_m1_munit.\"subJobs_mid\" = s1_m0_msubjobs.__id)\n Buffers: shared hit=3808 read=13852\n -> Seq Scan on unit s1_m1_munit (cost=0.00..35017.65 rows=1392372 width=8) (actual time=0.004..211.780 rows=1392372 loops=1)\n Filter: ((validated IS NULL) OR ((validated)::text = 'N'::text))\n Buffers: shared hit=3761 read=13852\n -> Hash (cost=92.55..92.55 rows=4555 width=8) (actual time=1.140..1.140 rows=4555 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 178kB\n Buffers: shared hit=47\n -> Seq Scan on \"subJobs\" s1_m0_msubjobs (cost=0.00..92.55 rows=4555 width=8) (actual time=0.004..0.551 rows=4555 loops=1)\n Buffers: shared hit=47\n -> Hash (cost=54.07..54.07 rows=30 width=484) (actual time=0.122..0.122 rows=49 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n Buffers: shared hit=40\n -> Bitmap Heap Scan on \"subJobs\" s0_m0_msubjobs (cost=5.20..54.07 rows=30 width=484) (actual time=0.046..0.110 rows=49 loops=1)\n Recheck Cond: ((status)::text = 'IN_PROGRESS'::text)\n Filter: ((validation_mrequired)::text = 'Y'::text)\n Buffers: shared hit=40\n -> Bitmap Index Scan on subjob_status (cost=0.00..5.19 
rows=125 width=0) (actual time=0.034..0.034 rows=125 loops=1)\n Index Cond: ((status)::text = 'IN_PROGRESS'::text)\n Buffers: shared hit=2\n -> Hash (cost=12.00..12.00 rows=500 width=27) (actual time=0.165..0.165 rows=500 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 30kB\n Buffers: shared hit=7\n -> Seq Scan on job s0_m1_mjob (cost=0.00..12.00 rows=500 width=27) (actual time=0.005..0.085 rows=500 loops=1)\n Buffers: shared hit=7\nTotal runtime: 804.382 ms\n\nNow, if we add OFFSET 0 to the EXISTS subquery (which shouldn't alter the query's meaning - correct?)\n\nEXPLAIN (ANALYZE, BUFFERS) (SELECT s0_m0_msubJobs.\"__id\"\n AS\n s0_msubJobs_mid,\n s0_m0_msubJobs.\"document_mflow\"\n AS s0_msubJobs_mdocument_mflow,\n s0_m0_msubJobs.\"status\"\n AS s0_msubJobs_mstatus,\n s0_m0_msubJobs.\"error_mmessage\"\n AS s0_msubJobs_merror_mmessage,\n s0_m0_msubJobs.\"validation_mrequired\"\n AS s0_msubJobs_mvalidation_mrequired,\n s0_m0_msubJobs.\"completion_mdate\"\n AS s0_msubJobs_mcompletion_mdate,\n s0_m0_msubJobs.\"creation_mdate\"\n AS s0_msubJobs_mcreation_mdate,\n s0_m0_msubJobs.\"file_mlocation\"\n AS s0_msubJobs_mfile_mlocation,\n s0_m1_mjob.\"__id\"\n AS s0_mjob_mid,\n s0_m1_mjob.\"xml_mname\"\n AS s0_mjob_mxml_mname,\n ( s0_m0_msubJobs.\"creation_mdate\" )\n AS e0_m4\n FROM \"subJobs\" s0_m0_msubJobs,\n \"job\" s0_m1_mjob\n WHERE ( ( ( ( s0_m0_msubJobs.\"status\" ) = ( 'IN_PROGRESS' ) )\n AND ( ( s0_m0_msubJobs.\"validation_mrequired\" ) = ( 'Y' ) ) )\n AND ( EXISTS (((SELECT s1_m1_munit.\"__id\" AS s1_munit_mid\n FROM \"subJobs\" s1_m0_msubJobs,\n \"unit\" s1_m1_munit\n WHERE ( ( ( s0_m0_msubJobs.\"__id\" ) =\n ( s1_m0_msubJobs.\"__id\" ) )\n\n AND\n ( s1_m0_msubJobs.\"__id\" = s1_m1_munit.\"subJobs_mid\" ) )\n AND ( ( NOT ( s1_m1_munit.\"validated\" IS NOT NULL ) )\n OR ( ( s1_m1_munit.\"validated\" ) = ( 'N'\n ) ) )\n OFFSET 0))\n )\n ) )\n AND ( s0_m0_msubJobs.\"job_mid\" = s0_m1_mjob.\"__id\" ))\nORDER BY e0_m4 DESC,\n s0_mjob_mid nulls first,\n 
s0_msubjobs_mid nulls first\n\nwe get the following query plan\n\nSort (cost=556.27..556.30 rows=15 width=503) (actual time=0.828..0.829 rows=49 loops=1)\n Sort Key: s0_m0_msubjobs.creation_mdate, s0_m1_mjob.__id, s0_m0_msubjobs.__id\n Sort Method: quicksort Memory: 31kB\n Buffers: shared hit=390\n -> Hash Join (cost=23.44..555.97 rows=15 width=503) (actual time=0.229..0.788 rows=49 loops=1)\n Hash Cond: (s0_m0_msubjobs.job_mid = s0_m1_mjob.__id)\n Buffers: shared hit=390\n -> Bitmap Heap Scan on \"subJobs\" s0_m0_msubjobs (cost=5.19..537.52 rows=15 width=484) (actual time=0.057..0.591 rows=49 loops=1)\n Recheck Cond: ((status)::text = 'IN_PROGRESS'::text)\n Filter: (((validation_mrequired)::text = 'Y'::text) AND (SubPlan 1))\n Buffers: shared hit=383\n -> Bitmap Index Scan on subjob_status (cost=0.00..5.19 rows=125 width=0) (actual time=0.031..0.031 rows=125 loops=1)\n Index Cond: ((status)::text = 'IN_PROGRESS'::text)\n Buffers: shared hit=2\n SubPlan 1\n -> Limit (cost=0.00..1187.36 rows=307 width=8) (actual time=0.009..0.009 rows=1 loops=49)\n Buffers: shared hit=343\n -> Nested Loop (cost=0.00..1187.36 rows=307 width=8) (actual time=0.009..0.009 rows=1 loops=49)\n Buffers: shared hit=343\n -> Index Scan using \"subJobs_mid_mindex\" on \"subJobs\" s1_m0_msubjobs (cost=0.00..8.27 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=49)\n Index Cond: (__id = s0_m0_msubjobs.__id)\n Buffers: shared hit=147\n -> Index Scan using \"unit_msubJobs_mid_mindex\" on unit s1_m1_munit (cost=0.00..1176.02 rows=307 width=16) (actual time=0.006..0.006 rows=1 loops=49)\n Index Cond: (\"subJobs_mid\" = s0_m0_msubjobs.__id)\n Filter: ((validated IS NULL) OR ((validated)::text = 'N'::text))\n Buffers: shared hit=196\n -> Hash (cost=12.00..12.00 rows=500 width=27) (actual time=0.164..0.164 rows=500 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 30kB\n Buffers: shared hit=7\n -> Seq Scan on job s0_m1_mjob (cost=0.00..12.00 rows=500 width=27) (actual time=0.003..0.082 rows=500 
loops=1)\n Buffers: shared hit=7\nTotal runtime: 0.899 ms\n\nwhich is a few orders of magnitude faster.\n\nIs there a reason why the more optimal query plan isn't chosen without the OFFSET 0 clause?\nShouldn't the optimizer evaluate the option where the EXISTS query is JOINED as well as the option where the EXISTS query isn't and choose the plan with the lowest cost?\n\nAny light you could shed on this is appreciated.\n\nPotentially useful information:\nVersion: PostgreSQL 9.1.1, compiled by Visual C++ build 1500, 64-bit\n\nData: Most units have validated set to NULL, 500 jobs, 4555 subJobs, 1392372 units.\n\nSchema:\n\n-- Table: job\nCREATE TABLE job\n(\n __id bigint NOT NULL,\n parent_mjob bigint,\n status character varying(32),\n priority integer,\n creation_mdate timestamp without time zone,\n completion_mdate timestamp without time zone,\n description character varying(200),\n xml_mname character varying(200),\n __source character varying(200),\n __label character varying(200),\n error_mmessage character varying(200),\n last_mchange_mdate timestamp without time zone,\n __size numeric(19,0),\n CONSTRAINT job_pkey PRIMARY KEY (__id ),\n CONSTRAINT job_parent_mjob_fkey FOREIGN KEY (parent_mjob)\n REFERENCES job (__id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE SET NULL\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX job_mdescription_mindex\n ON job\n USING gin\n (to_tsvector('english'::regconfig, description::text) );\n\nCREATE INDEX job_mid_mindex\n ON job\n USING btree\n (__id );\n\n-- Table: \"subJobs\"\nCREATE TABLE \"subJobs\"\n(\n __id bigint NOT NULL,\n document_mflow character varying(200),\n status character varying(200),\n error_mmessage character varying(200),\n validation_mrequired character varying(200),\n completion_mdate timestamp without time zone,\n creation_mdate timestamp without time zone,\n job_mid bigint NOT NULL,\n file_mlocation character varying(200),\n CONSTRAINT \"subJobs_pkey\" PRIMARY KEY (__id ),\n CONSTRAINT \"subJobs_job_mid_fkey\" 
FOREIGN KEY (job_mid)\n REFERENCES job (__id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX \"subJobs_mid_mindex\"\n ON \"subJobs\"\n USING btree\n (__id );\n\nCREATE INDEX \"subJobs_mjob_mid_mindex\"\n ON \"subJobs\"\n USING btree\n (job_mid );\n\nCREATE INDEX subjob_status\n ON \"subJobs\"\n USING btree\n (status COLLATE pg_catalog.\"default\" );\n\n-- Table: unit\nCREATE TABLE unit\n(\n __id bigint NOT NULL,\n client_mnumber character varying(200) NOT NULL,\n source_mid character varying(200),\n delivery_mformat character varying(200),\n delivery_mtype character varying(200),\n client_memailaddress character varying(200),\n client_mcollectivity character varying(200),\n client_mcommunication_mpreference character varying(200),\n validated character varying(200),\n status character varying(200),\n error_mmessage character varying(200),\n completion_mdate timestamp without time zone,\n creation_mdate timestamp without time zone,\n file_mlocation character varying(200),\n delivery_mfeedback character varying(200),\n \"subJobs_mid\" bigint NOT NULL,\n __type character varying(200),\n CONSTRAINT unit_pkey PRIMARY KEY (__id ),\n CONSTRAINT \"unit_subJobs_mid_fkey\" FOREIGN KEY (\"subJobs_mid\")\n REFERENCES \"subJobs\" (__id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX unit_mid_mindex\n ON unit\n USING btree\n (__id );\n\nCREATE INDEX \"unit_msubJobs_mid_mindex\"\n ON unit\n USING btree\n (\"subJobs_mid\" );\n\nCREATE INDEX unit_validated\n ON unit\n USING btree\n (validated COLLATE pg_catalog.\"default\" );\n\nWith kind regards,\n\nNick Hofstede\n\n\n________________________________\n\nInventive Designers' Email Disclaimer:\nhttp://www.inventivedesigners.com/email-disclaimer\n\n",
"msg_date": "Tue, 2 Oct 2012 16:46:38 +0000",
"msg_from": "Nick Hofstede <[email protected]>",
"msg_from_op": true,
"msg_subject": "suboptimal query plan"
},
{
"msg_contents": "Nick Hofstede <[email protected]> writes:\n> I'm struggling with a query that seems to use a suboptimal query plan.\n\nTry it in 9.2 - this is the same type of join ordering restriction\ncomplained of last week here:\nhttp://archives.postgresql.org/pgsql-performance/2012-09/msg00201.php\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 02 Oct 2012 23:54:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: suboptimal query plan"
},
{
"msg_contents": "That fixed it :)\n\nThe 9.2 query plan for reference:\n\nSort (cost=439.67..439.74 rows=30 width=503) (actual time=0.754..0.756 rows=49 loops=1)\n Sort Key: s0_m0_msubjobs.creation_mdate, s0_m1_mjob.__id, s0_m0_msubjobs.__id\n Sort Method: quicksort Memory: 31kB\n -> Hash Join (cost=23.45..438.93 rows=30 width=503) (actual time=0.213..0.718 rows=49 loops=1)\n Hash Cond: (s0_m0_msubjobs.job_mid = s0_m1_mjob.__id)\n -> Nested Loop Semi Join (cost=5.20..420.27 rows=30 width=484) (actual time=0.054..0.543 rows=49 loops=1)\n -> Bitmap Heap Scan on \"subJobs\" s0_m0_msubjobs (cost=5.20..54.08 rows=30 width=484) (actual time=0.040..0.102 rows=49 loops=1)\n Recheck Cond: ((status)::text = 'IN_PROGRESS'::text)\n Filter: ((validation_mrequired)::text = 'Y'::text)\n Rows Removed by Filter: 76\n -> Bitmap Index Scan on subjob_status (cost=0.00..5.19 rows=125 width=0) (actual time=0.029..0.029 rows=125 loops=1)\n Index Cond: ((status)::text = 'IN_PROGRESS'::text)\n -> Nested Loop (cost=0.00..307.45 rows=307 width=16) (actual time=0.009..0.009 rows=1 loops=49)\n -> Index Only Scan using \"subJobs_mid_mindex\" on \"subJobs\" s1_m0_msubjobs (cost=0.00..5.34 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=49)\n Index Cond: (__id = s0_m0_msubjobs.__id)\n Heap Fetches: 49\n -> Index Scan using \"unit_msubJobs_mid_mindex\" on unit s1_m1_munit (cost=0.00..299.03 rows=307 width=8) (actual time=0.006..0.006 rows=1 loops=49)\n Index Cond: (\"subJobs_mid\" = s1_m0_msubjobs.__id)\n Filter: ((validated IS NULL) OR ((validated)::text = 'N'::text))\n -> Hash (cost=12.00..12.00 rows=500 width=27) (actual time=0.149..0.149 rows=500 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 30kB\n -> Seq Scan on job s0_m1_mjob (cost=0.00..12.00 rows=500 width=27) (actual time=0.003..0.071 rows=500 loops=1)\nTotal runtime: 0.818 ms\n\nGreat work,\n\nNick Hofstede\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: woensdag 3 oktober 2012 5:55\nTo: Nick 
Hofstede\nCc: [email protected]\nSubject: Re: [PERFORM] suboptimal query plan\n\nNick Hofstede <[email protected]> writes:\n> I'm struggling with a query that seems to use a suboptimal query plan.\n\nTry it in 9.2 - this is the same type of join ordering restriction complained of last week here:\nhttp://archives.postgresql.org/pgsql-performance/2012-09/msg00201.php\n\n regards, tom lane",
"msg_date": "Wed, 3 Oct 2012 08:02:49 +0000",
"msg_from": "Nick Hofstede <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: suboptimal query plan"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a table with about 10 millions of records, this table is update and\ninserted very often during the day (approx. 200 per second) , in the night\nthe activity is a lot less, so in the first seconds of a day (00:00:01) a\nbatch process update some columns (used like counters) of this table\nsetting his value to 0.\n\n \n\nYesterday, the first time it occurs, I got a deadlock when other process try\nto delete multiple (about 10 or 20) rows of the same table.\n\n \n\nI think that maybe the situation was:\n\n \n\nProcess A (PA) (massive update)\n\nProcess B (PB) (multiple delete)\n\n \n\nPA Block record 1, update\n\nPA Block record 2, update\n\nPA Block record 3, update\n\nPB Block record 4, delete\n\nPB Block record 5, delete\n\nPA Block record 4, waiting\n\nPB Block record 3, waiting\n\n \n\nThe other situation could be that update process while blocking rows scale\nto block page and the try to scale to lock table while the delete process as\nsome locked rows.\n\n \n\nAny ideas how to prevent this situation?\n\n \n\nThanks!",
"msg_date": "Thu, 4 Oct 2012 10:01:15 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "On Thu, Oct 4, 2012 at 7:01 AM, Anibal David Acosta <[email protected]> wrote:\n> Hi,\n>\n> I have a table with about 10 millions of records, this table is update and\n> inserted very often during the day (approx. 200 per second) , in the night\n> the activity is a lot less, so in the first seconds of a day (00:00:01) a\n> batch process update some columns (used like counters) of this table\n> setting his value to 0.\n>\n>\n>\n> Yesterday, the first time it occurs, I got a deadlock when other process try\n> to delete multiple (about 10 or 20) rows of the same table.\n...\n>\n> Any ideas how to prevent this situation?\n\nThe bulk update could take an Exclusive (not Access Exclusive) lock.\nOr the delete could perhaps be arranged to delete the records in ctid\norder (although that might still deadlock). Or you could just repeat\nthe failed transaction.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 4 Oct 2012 09:10:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "From: Anibal David Acosta [mailto:[email protected]] \nSent: Thursday, October 04, 2012 10:01 AM\nTo: [email protected]\nSubject: how to avoid deadlock on masive update with multiples delete\n\n.....\n..... \n.....\n\nThe other situation could be that update process while blocking rows scale to block page and the try to scale to lock table while the delete process as some locked rows.\n\nThanks!\n\n\nThis (lock escalation from row -> to page -> to table) is MS SQL Server \"feature\", pretty sure Postgres does not do it.\n\nRegards,\nIgor Neyman\n\n",
"msg_date": "Fri, 5 Oct 2012 14:06:41 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "On Thu, Oct 4, 2012 at 1:10 PM, Jeff Janes <[email protected]> wrote:\n> The bulk update could take an Exclusive (not Access Exclusive) lock.\n> Or the delete could perhaps be arranged to delete the records in ctid\n> order (although that might still deadlock). Or you could just repeat\n> the failed transaction.\n\nHow do you make pg update/delete records, in bulk, in some particular order?\n\n(ie, without issuing separate queries for each record)\n\n",
"msg_date": "Fri, 5 Oct 2012 11:27:13 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "Presumably something like this?:\n\nmaciek=# CREATE TABLE test AS SELECT g, random() FROM\ngenerate_series(1,1000) g;\nCREATE\nmaciek=# EXPLAIN DELETE FROM test USING (SELECT g FROM test ORDER BY\nctid) x where x.g = test.g;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Delete on test (cost=188.99..242.34 rows=1940 width=34)\n -> Hash Join (cost=188.99..242.34 rows=1940 width=34)\n Hash Cond: (x.g = public.test.g)\n -> Subquery Scan on x (cost=135.34..159.59 rows=1940 width=32)\n\t -> Sort (cost=135.34..140.19 rows=1940 width=10)\n Sort Key: public.test.ctid\n -> Seq Scan on test (cost=0.00..29.40 rows=1940 width=10)\n -> Hash (cost=29.40..29.40 rows=1940 width=10)\n\t -> Seq Scan on test (cost=0.00..29.40 rows=1940 width=10)\n(9 rows)\n\n",
"msg_date": "Fri, 5 Oct 2012 08:08:19 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "Maciek Sakrejda <[email protected]> writes:\n> Presumably something like this?:\n> maciek=# CREATE TABLE test AS SELECT g, random() FROM\n> generate_series(1,1000) g;\n> CREATE\n> maciek=# EXPLAIN DELETE FROM test USING (SELECT g FROM test ORDER BY\n> ctid) x where x.g = test.g;\n\nThere's no guarantee that the planner won't re-sort the rows coming from\nthe sub-select, unfortunately.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 05 Oct 2012 11:31:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "On Friday, October 05, 2012 05:31:43 PM Tom Lane wrote:\n> Maciek Sakrejda <[email protected]> writes:\n> > Presumably something like this?:\n> > maciek=# CREATE TABLE test AS SELECT g, random() FROM\n> > generate_series(1,1000) g;\n> > CREATE\n> > maciek=# EXPLAIN DELETE FROM test USING (SELECT g FROM test ORDER BY\n> > ctid) x where x.g = test.g;\n> \n> There's no guarantee that the planner won't re-sort the rows coming from\n> the sub-select, unfortunately.\nMore often than not you can prevent the planner from doing that by putting a \nOFFSET 0 in the query. Not 100% but better than nothing.\n\nWe really need ORDER BY for DML.\n\nAndres\n-- \nAndres Freund\t\thttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 5 Oct 2012 17:36:14 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On Friday, October 05, 2012 05:31:43 PM Tom Lane wrote:\n>> There's no guarantee that the planner won't re-sort the rows coming from\n>> the sub-select, unfortunately.\n\n> More often than not you can prevent the planner from doing that by putting a \n> OFFSET 0 in the query. Not 100% but better than nothing.\n\nNo, that will accomplish exactly nothing. The ORDER BY is already an\noptimization fence. The problem is that of the several ways the planner\nmight choose to join the subquery output to the original table, not all\nwill produce the join rows in the same order as the subquery's result\nis. For instance, when I tried his example I initially got\n\n Delete on test (cost=400.88..692.85 rows=18818 width=34)\n -> Merge Join (cost=400.88..692.85 rows=18818 width=34)\n Merge Cond: (test.g = x.g)\n -> Sort (cost=135.34..140.19 rows=1940 width=10)\n Sort Key: test.g\n -> Seq Scan on test (cost=0.00..29.40 rows=1940 width=10)\n -> Sort (cost=265.53..270.38 rows=1940 width=32)\n Sort Key: x.g\n -> Subquery Scan on x (cost=135.34..159.59 rows=1940 width=32)\n -> Sort (cost=135.34..140.19 rows=1940 width=10)\n Sort Key: test_1.ctid\n -> Seq Scan on test test_1 (cost=0.00..29.40 rows=1940 width=10)\n\nwhich is going to do the deletes in \"g\" order, not ctid order;\nand then after an ANALYZE I got\n\n Delete on test (cost=90.83..120.58 rows=1000 width=34)\n -> Hash Join (cost=90.83..120.58 rows=1000 width=34)\n Hash Cond: (test.g = x.g)\n -> Seq Scan on test (cost=0.00..16.00 rows=1000 width=10)\n -> Hash (cost=78.33..78.33 rows=1000 width=32)\n -> Subquery Scan on x (cost=65.83..78.33 rows=1000 width=32)\n -> Sort (cost=65.83..68.33 rows=1000 width=10)\n Sort Key: test_1.ctid\n -> Seq Scan on test test_1 (cost=0.00..16.00 rows=1000 width=10)\n\nwhich is going to do the deletes in ctid order, but that's an artifact\nof using a seqscan on the test table; the order of the subquery's output\nis irrelevant, since 
it got hashed.\n\n> We really need ORDER BY for DML.\n\nMeh. That's outside the SQL standard (not only outside the letter of\nthe standard, but foreign to its very conceptual model) and I don't\nthink the problem really comes up that often. Personally, if I had to\ndeal with this I'd use a plpgsql function (or DO command) that does\n\n\tFOR c IN SELECT ctid FROM table WHERE ... ORDER BY ... LOOP\n\t\tDELETE FROM table WHERE ctid = c;\n\tEND LOOP;\n\nwhich is not great but at least it avoids client-to-server traffic.\n\nHaving said all that, are we sure this is even a deletion-order\nproblem? I was wondering about deadlocks from foreign key references,\nfor instance.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 05 Oct 2012 11:46:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "On Fri, Oct 5, 2012 at 10:46 AM, Tom Lane <[email protected]> wrote:\n> Andres Freund <[email protected]> writes:\n>> On Friday, October 05, 2012 05:31:43 PM Tom Lane wrote:\n>>> There's no guarantee that the planner won't re-sort the rows coming from\n>>> the sub-select, unfortunately.\n>\n>> More often than not you can prevent the planner from doing that by putting a\n>> OFFSET 0 in the query. Not 100% but better than nothing.\n>\n> No, that will accomplish exactly nothing. The ORDER BY is already an\n> optimization fence. The problem is that of the several ways the planner\n> might choose to join the subquery output to the original table, not all\n> will produce the join rows in the same order as the subquery's result\n> is. For instance, when I tried his example I initially got\n>\n> Delete on test (cost=400.88..692.85 rows=18818 width=34)\n> -> Merge Join (cost=400.88..692.85 rows=18818 width=34)\n> Merge Cond: (test.g = x.g)\n> -> Sort (cost=135.34..140.19 rows=1940 width=10)\n> Sort Key: test.g\n> -> Seq Scan on test (cost=0.00..29.40 rows=1940 width=10)\n> -> Sort (cost=265.53..270.38 rows=1940 width=32)\n> Sort Key: x.g\n> -> Subquery Scan on x (cost=135.34..159.59 rows=1940 width=32)\n> -> Sort (cost=135.34..140.19 rows=1940 width=10)\n> Sort Key: test_1.ctid\n> -> Seq Scan on test test_1 (cost=0.00..29.40 rows=1940 width=10)\n>\n> which is going to do the deletes in \"g\" order, not ctid order;\n> and then after an ANALYZE I got\n>\n> Delete on test (cost=90.83..120.58 rows=1000 width=34)\n> -> Hash Join (cost=90.83..120.58 rows=1000 width=34)\n> Hash Cond: (test.g = x.g)\n> -> Seq Scan on test (cost=0.00..16.00 rows=1000 width=10)\n> -> Hash (cost=78.33..78.33 rows=1000 width=32)\n> -> Subquery Scan on x (cost=65.83..78.33 rows=1000 width=32)\n> -> Sort (cost=65.83..68.33 rows=1000 width=10)\n> Sort Key: test_1.ctid\n> -> Seq Scan on test test_1 (cost=0.00..16.00 rows=1000 width=10)\n>\n> which is going to do the deletes in ctid order, 
but that's an artifact\n> of using a seqscan on the test table; the order of the subquery's output\n> is irrelevant, since it got hashed.\n>\n>> We really need ORDER BY for DML.\n>\n> Meh. That's outside the SQL standard (not only outside the letter of\n> the standard, but foreign to its very conceptual model) and I don't\n> think the problem really comes up that often. Personally, if I had to\n> deal with this I'd use a plpgsql function (or DO command) that does\n\nCan't it be forced like this (assuming it is in fact a vanilla order\nby problem)?\n\nEXPLAIN DELETE FROM test USING (SELECT g FROM test ORDER BY\nctid FOR UPDATE) x where x.g = test.g;\n\n(emphasis on 'for update')\n\nmerlin\n\n",
"msg_date": "Fri, 5 Oct 2012 10:55:39 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "On Fri, Oct 5, 2012 at 12:46 PM, Tom Lane <[email protected]> wrote:\n>\n> FOR c IN SELECT ctid FROM table WHERE ... ORDER BY ... LOOP\n> DELETE FROM table WHERE ctid = c;\n> END LOOP;\n\nMaybe, in that sense, it would be better to optimize client-server\nprotocol for batch operations.\n\nPREPARE blah(c) AS DELETE FROM table WHERE ctid = $1;\n\nEXECUTE blah(c1), blah(c2), blah(c3), ...\n ^ 1 transaction, 1 roundtrip, multiple queries\n\n",
"msg_date": "Fri, 5 Oct 2012 12:58:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "Merlin Moncure <[email protected]> writes:\n> Can't it be forced like this (assuming it is in fact a vanilla order\n> by problem)?\n\n> EXPLAIN DELETE FROM test USING (SELECT g FROM test ORDER BY\n> ctid FOR UPDATE) x where x.g = test.g;\n\n> (emphasis on 'for update')\n\nHm ... yeah, that might work, once you redefine the problem as \"get the\nrow locks in a consistent order\" rather than \"do the updates in a\nconsistent order\". But I'd be inclined to phrase it as\n\nEXPLAIN DELETE FROM test USING (SELECT ctid FROM test ORDER BY\ng FOR UPDATE) x where x.ctid = test.ctid;\n\nI'm not sure that \"ORDER BY ctid\" is really very meaningful here; think\nabout FOR UPDATE switching its attention to updated versions of rows.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 05 Oct 2012 12:21:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "On Friday, October 05, 2012 05:46:05 PM Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On Friday, October 05, 2012 05:31:43 PM Tom Lane wrote:\n> >> There's no guarantee that the planner won't re-sort the rows coming from\n> >> the sub-select, unfortunately.\n> > \n> > More often than not you can prevent the planner from doing that by\n> > putting a OFFSET 0 in the query. Not 100% but better than nothing.\n> \n> No, that will accomplish exactly nothing. The ORDER BY is already an\n> optimization fence.\nYea, sorry. I was thinking of related problem/solution.\n\n> > We really need ORDER BY for DML.\n> \n> Meh. That's outside the SQL standard (not only outside the letter of\n> the standard, but foreign to its very conceptual model) and I don't\n> think the problem really comes up that often.\nBack when I mostly did consulting/development on client code it came up about \nonce a week. I might have a warped view though because thats the kind of \nthing you would ask a consultant about...\n\n> Having said all that, are we sure this is even a deletion-order\n> problem? I was wondering about deadlocks from foreign key references,\n> for instance.\nAbsolutely not sure, no.\n\nAndres\n-- \nAndres Freund\t\thttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 5 Oct 2012 18:33:53 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
},
{
"msg_contents": "Process 1 (massive update): update table A set column1=0, column2=0 \n\nProcess 2 (multiple delete): perform delete_row(user_name, column1, column2)\nfrom table A where user_name=YYY\n\nThe pgsql function delete_row deletes the row and does other business logic not\nrelated to table A.\n\n\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]] \nSent: Friday, October 5, 2012 10:27 AM\nTo: Jeff Janes\nCC: Anibal David Acosta; [email protected]\nSubject: Re: [PERFORM] how to avoid deadlock on masive update with multiples\ndelete\n\nOn Thu, Oct 4, 2012 at 1:10 PM, Jeff Janes <[email protected]> wrote:\n> The bulk update could take an Exclusive (not Access Exclusive) lock.\n> Or the delete could perhaps be arranged to delete the records in ctid \n> order (although that might still deadlock). Or you could just repeat \n> the failed transaction.\n\nHow do you make pg update/delete records, in bulk, in some particular order?\n\n(ie, without issuing separate queries for each record)\n\n\n",
"msg_date": "Fri, 5 Oct 2012 14:33:49 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to avoid deadlock on masive update with multiples delete"
}
] |
[
{
"msg_contents": "Hello!\n\nI would like to ask the following question:\nI have created a table and I updated all records.\nAnd I executed this update command again and again....\nExecution time was growing after each step.\nI cannot understand this behavior.\nFirst update command took 6 sec, 30th update (same) command took 36 \nsec (6x times greater value!!!).\nCan somebody explain to me why this update time is increasing?\n\n-- 1st update: 6175 ms\n-- 5th update: 9265 ms\n-- 10th update: 15669 ms\n-- 20th update: 26940 ms\n-- 30th update: 36198 ms\n\nPGSQL version: 9.1.5, parameters: default install used\n\nThanks for your answer in advance!\n\nSCRIPT:\n\nDROP SCHEMA IF EXISTS tempdb CASCADE;\nCREATE SCHEMA tempdb;\nSET search_path TO tempdb;\n\nDROP TABLE IF EXISTS t;\nCREATE TABLE t (\n id SERIAL ,\n num int NOT NULL,\n PRIMARY KEY (id)\n);\n\n\ninsert into t\n SELECT *,0 FROM generate_series(1,100000);\n\nupdate t set num=num+1; -- 1st update: 6175 ms\nupdate t set num=num+1;\nupdate t set num=num+1;\nupdate t set num=num+1;\nupdate t set num=num+1;\n\nupdate t set num=num+1; -- 5th update: 9265 ms\n.....\n\n\nupdate t set num=num+1; -- 10th update: 15669 ms\n.....\n\n\nupdate t set num=num+1; -- 20th update: 26940 ms\n.....\n\n\nupdate t set num=num+1; -- 30th update: 36198 ms\n.....\n\n\n----------------------------------------------------------------\nThis message was sent using IMP, the Internet Messaging Program.\n\n\n",
"msg_date": "Sun, 07 Oct 2012 15:49:23 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "UPDATE execution time is increasing"
},
{
"msg_contents": "On Sun, Oct 7, 2012 at 3:49 PM, <[email protected]> wrote:\n\n> Hello!\n>\n> I would like to ask following question:\n> I have created a table and I updated all records.\n> And I executed this update command again and again....\n> Execution time was growing after each step.\n> I cannot understand this behavior.\n> First update command took 6 sec, 30th update (same) command took 36 sec\n> (6x times greater value!!!).\n> Can somebody explain me why increasing this update time?\n>\n> -- 1st update: 6175 ms\n> -- 5th update: 9265 ms\n> -- 10th update: 15669 ms\n> -- 20th update: 26940 ms\n> -- 20th update: 36198 ms\n>\n> PGSQL version: 9.1.5, parameters: default install used\n>\n> Thanks your answer in advance!\n>\n> SCRIPT:\n>\n> DROP SCHEMA IF EXISTS tempdb CASCADE;\n> CREATE SCHEMA tempdb;\n> SET search_path TO tempdb;\n>\n> DROP TABLE IF EXISTS t;\n> CREATE TABLE t (\n> id SERIAL ,\n> num int NOT NULL,\n> PRIMARY KEY (id)\n> );\n>\n>\n> insert into t\n> SELECT *,0 FROM generate_series(1,100000);\n>\n> update t set num=num+1; -- 1st update: 6175 ms\n> update t set num=num+1;\n> update t set num=num+1;\n>\n\nHello, could you do the same putting VACUUM t; between your updates? What\nis the change in UPDATE time?\n\n-- Valentin\n\n",
"msg_date": "Mon, 8 Oct 2012 14:20:09 +0200",
"msg_from": "Valentine Gogichashvili <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATE execution time is increasing"
}
] |
[
{
"msg_contents": "I have table:\ncreate table hashcheck(id serial, name varchar, value varchar);\nand query:\nhashaggr=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=471979.50..471985.40 rows=2362 width=9) (actual time=19642.938..19643.184 rows=4001 loops=1)\n Output: name, (count(name))\n Sort Key: hashcheck.name\n Sort Method: quicksort Memory: 343kB\n -> HashAggregate (cost=471823.53..471847.15 rows=2362 width=9) (actual time=19632.256..19632.995 rows=4001 loops=1)\n Output: name, count(name)\n -> Seq Scan on public.hashcheck (cost=0.00..363494.69 rows=21665769 width=9) (actual time=49.552..11674.170 rows=23103672 loops=1)\n Output: id, name, value\n Total runtime: 19643.497 ms\n(9 rows)\n\nwithout indexes. \nIndexes don't speedup the query much\nFor hash Total runtime: 17678.225 ms\nFor btree Total runtime: 14188.484 ms\nI'm don't know how to use Gin and gist this way. \n\nSo the question is there any way to speed up the \"group by\" query? Or may be there does exists any other way to count histogram?\n\nThank you.\n\n\nPS:\nhashaggr=# select version();\n version \n-------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.6 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2), 64-bit\n\n",
"msg_date": "Sun, 07 Oct 2012 21:33:19 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "hash aggregation speedup"
}
] |
[
{
"msg_contents": "Hi!\n\nAfter upgrade (dump/restore/analyze) query (below) after some time is killed by kernel.\nPostgres creates a lot of tmp files:\n\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2046\", size 24576\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2045\", size 24576\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2044\", size 32768\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2043\", size 32768\ntemporary file: path \"base/pgsql_tmp/pgsql_tmp8949.2042\", size 24576\n\n\nI have no idea whats wrong. Looks like planer bad decision.\n\nSELECT vwc.* , c.jednzawid as bureau_id ,\nc.nazwa as borrower_name ,\nc.nrumowy as agreement_number ,\nc.kwotakred as credit_amount_gross ,\nc.nrumchar as id_ge ,\nc.zazndoanuldata as marked_to_cancel_date ,\nc.bureau_category_id ,\nl.typid as product_type_id ,\ncs.doc_receive_tmstp ,\ncs.doc_send_path_id ,\ncs.first_scan_complete_tmstp ,\ncs.verification_doc_not_complete_id ,\ncs.verification_phone_not_complete_id ,\ncaged.requested_doc_send_path_id ,\nend_of_validity(c.*,l.*) AS end_of_validity ,\nbc.creationdate as last_contact_tmstp ,\nlast_va.action_status_id as verification_status_id ,\n c.verification_assistant_id ,\n caged.client_verification_path_id\n FROM verification_waiting_credit vwc,\n kredyty c LEFT JOIN kredaged caged ON (caged.kredytid = c.id) LEFT\nJOIN bureau_contact bc ON (bc.credit_id = c.id AND NOT EXISTS\n ( SELECT 1 FROM bureau_contact bc1\n WHERE bc1.credit_id = bc.credit_id and bc1.id > bc.id)\n ) LEFT JOIN verification_action last_va ON (last_va.credit_id = c.id\nAND NOT EXISTS\n (SELECT 1 FROM verification_action va_t\n WHERE va_t.id > last_va.id\n AND va_t.credit_id = last_va.credit_id)\n ) , kredytstatus cs , linie l\n WHERE true\n AND vwc.user_id = 12949\n AND vwc.credit_id = c.id\n AND vwc.credit_id = cs.kredytid\n AND c.linia = l.id\n ORDER BY vwc.id\n\nQuery plan below (9.2).\n\n\"Nested Loop (cost=73132.54..5320352.90 rows=1 width=4681)\"\n\" -> Nested Loop 
(cost=73132.54..5320345.72 rows=1 width=4416)\"\n\" -> Nested Loop (cost=73132.54..5320334.36 rows=1 width=4392)\"\n\" Join Filter: (vwc.credit_id = c.id)\"\n\" -> Index Scan using verification_waiting_credit_pkey on verification_waiting_credit vwc (cost=0.00..12.97 rows=1 width=156)\"\n\" Filter: (user_id = 12949)\"\n\" -> Hash Left Join (cost=73132.54..5279312.99 rows=3280672 width=4236)\"\n\" Hash Cond: (c.id = bc.credit_id)\"\n\" -> Hash Left Join (cost=73019.54..5119260.82 rows=3280672 width=4228)\"\n\" Hash Cond: (c.id = last_va.credit_id)\"\n\" -> Hash Left Join (cost=24351.16..4650135.04 rows=3280672 width=4224)\"\n\" Hash Cond: (c.id = caged.kredytid)\"\n\" -> Seq Scan on kredyty c (cost=0.00..1202754.72 rows=3280672 width=4216)\"\n\" -> Hash (cost=16741.96..16741.96 rows=437696 width=12)\"\n\" -> Seq Scan on kredaged caged (cost=0.00..16741.96 rows=437696 width=12)\"\n\" -> Hash (cost=45953.74..45953.74 rows=217172 width=8)\"\n\" -> Hash Anti Join (cost=19583.56..45953.74 rows=217172 width=8)\"\n\" Hash Cond: (last_va.credit_id = va_t.credit_id)\"\n\" Join Filter: (va_t.id > last_va.id)\"\n\" -> Seq Scan on verification_action last_va (cost=0.00..15511.58 rows=325758 width=12)\"\n\" -> Hash (cost=15511.58..15511.58 rows=325758 width=8)\"\n\" -> Seq Scan on verification_action va_t (cost=0.00..15511.58 rows=325758 width=8)\"\n\" -> Hash (cost=104.99..104.99 rows=641 width=12)\"\n\" -> Hash Anti Join (cost=49.65..104.99 rows=641 width=12)\"\n\" Hash Cond: (bc.credit_id = bc1.credit_id)\"\n\" Join Filter: (bc1.id > bc.id)\"\n\" -> Seq Scan on bureau_contact bc (cost=0.00..37.62 rows=962 width=16)\"\n\" -> Hash (cost=37.62..37.62 rows=962 width=8)\"\n\" -> Seq Scan on bureau_contact bc1 (cost=0.00..37.62 rows=962 width=8)\"\n\" -> Index Scan using kredytstatus_pkey on kredytstatus cs (cost=0.00..11.35 rows=1 width=32)\"\n\" Index Cond: (kredytid = c.id)\"\n\" -> Index Scan using linie_pkey on linie l (cost=0.00..6.92 rows=1 width=273)\"\n\" Index Cond: (id = 
c.linia)\"\n\nTo compare query plan on postgresql 9.0\n\n\n\"Nested Loop (cost=0.00..28892.99 rows=1 width=357)\"\n\" -> Nested Loop Left Join (cost=0.00..28884.11 rows=1 width=333)\"\n\" Join Filter: (bc.credit_id = c.id)\"\n\" -> Nested Loop (cost=0.00..135.81 rows=1 width=325)\"\n\" -> Nested Loop Left Join (cost=0.00..133.03 rows=1 width=293)\"\n\" -> Nested Loop Left Join (cost=0.00..96.01 rows=1 width=289)\"\n\" -> Nested Loop (cost=0.00..92.56 rows=1 width=281)\"\n\" -> Index Scan using verification_waiting_credit_pkey on verification_waiting_credit vwc (cost=0.00..83.92 rows=1 width=156)\"\n\" Filter: (user_id = 12949)\"\n\" -> Index Scan using kredyty_desc_pkey on kredyty c (cost=0.00..8.63 rows=1 width=125)\"\n\" Index Cond: (c.id = vwc.credit_id)\"\n\" -> Index Scan using kredaged_pkey on kredaged caged (cost=0.00..3.44 rows=1 width=12)\"\n\" Index Cond: (caged.kredytid = c.id)\"\n\" -> Index Scan using verification_action_credit_id_idx on verification_action last_va (cost=0.00..37.00 rows=2 width=8)\"\n\" Index Cond: (last_va.credit_id = c.id)\"\n\" Filter: (NOT (SubPlan 2))\"\n\" SubPlan 2\"\n\" -> Index Scan using verification_action_credit_id_idx on verification_action va_t (cost=0.00..8.38 rows=1 width=0)\"\n\" Index Cond: (credit_id = $3)\"\n\" Filter: (id > $2)\"\n\" -> Index Scan using linie_pkey on linie l (cost=0.00..2.77 rows=1 width=40)\"\n\" Index Cond: (l.id = c.linia)\"\n\" -> Seq Scan on bureau_contact bc (cost=0.00..28742.08 rows=498 width=12)\"\n\" Filter: (NOT (SubPlan 1))\"\n\" SubPlan 1\"\n\" -> Index Scan using bureau_contact_pkey on bureau_contact bc1 (cost=0.00..28.79 rows=1 width=0)\"\n\" Index Cond: (id > $1)\"\n\" Filter: (credit_id = $0)\"\n\" -> Index Scan using kredytstatus_pkey on kredytstatus cs (cost=0.00..8.62 rows=1 width=32)\"\n\" Index Cond: (cs.kredytid = vwc.credit_id)\"\n\nQuery run time 52ms\n\nBest regards\n\n-- \nAndrzej Zawadzki\n\n\n",
"msg_date": "Mon, 08 Oct 2012 10:18:06 +0200",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange behavior after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On 10/08/2012 04:18 PM, Andrzej Zawadzki wrote:\n> Hi!\n>\n> After upgrade (dump/restore/analyze) query (below) after some time is killed by kernel.\n\nWhat's `shared_buffers`? `work_mem`?\n\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Mon, 08 Oct 2012 18:15:15 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On 08.10.2012 12:15, Craig Ringer wrote:\n> On 10/08/2012 04:18 PM, Andrzej Zawadzki wrote:\n>> Hi!\n>>\n>> After upgrade (dump/restore/analyze) query (below) after some time is\n>> killed by kernel.\n>\n> What's `shared_buffers`? `work_mem`?\nshared_buffers = 64MB\nwork_mem = 48MB\neffective_cache_size = 512MB\n\nNothing changed. Config is pretty much similar.\n\nI noticed a regularity: when the table\n\nverification_waiting_credit\n\ndoes not contain any record with user_id = 'value', the query executes very\nfast.\n\n>\n> https://wiki.postgresql.org/wiki/Server_Configuration\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\nI understand, but the server is the same - KVM image.\nOnly the Postgresql engine is different.\n\nTable kredyty is big - it contains ~3M records.\nverification_waiting_credit is small - ~40 records.\n\nDo you need schemas? I don't know if I can post here... :-(\n\n-- \nAndrzej Zawadzki\n\n",
"msg_date": "Mon, 08 Oct 2012 13:40:31 +0200",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange behavior after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "Andrzej Zawadzki <[email protected]> writes:\n> I have no idea whats wrong. Looks like planer bad decision.\n\n[ counts... ] You've got nine base relations in that query. I think\nyou need to increase from_collapse_limit and/or join_collapse_limit.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 08 Oct 2012 10:52:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On 08.10.2012 16:52, Tom Lane wrote:\n> Andrzej Zawadzki <[email protected]> writes:\n>> I have no idea whats wrong. Looks like planer bad decision.\n> [ counts... ] You've got nine base relations in that query. I think\n> you need to increase from_collapse_limit and/or join_collapse_limit.\n>\nBingo! Thank you!\nBut... looks like in 9.0 this worked differently or option was skipped?\nBecause I had default settings of that options and query has worked fine.\n\n-- \nAndrzej Zawadzki\n\n",
"msg_date": "Mon, 08 Oct 2012 17:41:17 +0200",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange behavior after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "Andrzej Zawadzki <[email protected]> writes:\n> On 08.10.2012 16:52, Tom Lane wrote:\n>> [ counts... ] You've got nine base relations in that query. I think\n>> you need to increase from_collapse_limit and/or join_collapse_limit.\n\n> Bingo! Thank you!\n\n> But... looks like in 9.0 this worked differently or option was skipped?\n> Because I had default settings of that options and query has worked fine.\n\nIt looks like 9.0 wasn't flattening the EXISTS subqueries, so those\ntables didn't count as relations of the main query.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 08 Oct 2012 11:56:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On 08.10.2012 17:56, Tom Lane wrote:\n> Andrzej Zawadzki <[email protected]> writes:\n>> On 08.10.2012 16:52, Tom Lane wrote:\n>>> [ counts... ] You've got nine base relations in that query. I think\n>>> you need to increase from_collapse_limit and/or join_collapse_limit.\n>> Bingo! Thank you!\n>> But... looks like in 9.0 this worked differently or option was skipped?\n>> Because I had default settings of that options and query has worked fine.\n> It looks like 9.0 wasn't flattening the EXISTS subqueries, so those\n> tables didn't count as relations of the main query.\nThanks for explanation.\nLooks like now I can switch to 9.2.1 on production server. :-)\n\n\n-- \nAndrzej Zawadzki\n\n",
"msg_date": "Mon, 08 Oct 2012 18:14:25 +0200",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange behavior after upgrade from 9.0 to 9.2"
}
] |
[
{
"msg_contents": "Hi all,\n\n I have 10 million records in my postgres table. I am running the database in an amazon ec2 medium instance. I need to access the last week's data from the table.\nIt takes a huge time to process this simple query, so it throws a time out exception error.\n\nquery is :\n select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n\nAfter a lot of time it responds with 1184 as the count\n\nwhat are the ways i have to follow to increase the performance of this query?\n \nThe insertions are also going on in parallel, since the daily realtime updates continue.\n\nwhat could be the reason exactly for this lack of performance?\n\n",
"msg_date": "Mon, 8 Oct 2012 08:26:19 -0700 (PDT)",
"msg_from": "Navaneethan R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 10:26 AM, Navaneethan R <[email protected]> wrote:\n> Hi all,\n>\n> I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance. I need to access the last week data from the table.\n> It takes huge time to process the simple query.So, i throws time out exception error.\n>\n> query is :\n> select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n>\n> After a lot of time it responds 1184 as count\n>\n> what are the ways i have to follow to increase the performance of this query?\n>\n> The insertion also going parallel since the daily realtime updation.\n>\n> what could be the reason exactly for this lacking performace?\n\ncan you send explain analyze? also table structure?\n\nmerlin\n\n",
"msg_date": "Mon, 8 Oct 2012 14:50:16 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On 10/08/2012 17:26, Navaneethan R wrote:\n> Hi all,\n\nHello,\n\n> I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance. I need to access the last week data from the table.\n> It takes huge time to process the simple query.So, i throws time out exception error.\n>\n> query is :\n> select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n\nplease show us an EXPLAIN ANALYZE of the query\n\n> After a lot of time it responds 1184 as count\n>\n> what are the ways i have to follow to increase the performance of this query?\n> \n> The insertion also going parallel since the daily realtime updation.\n>\n> what could be the reason exactly for this lacking performace?\n\nmissing index, wrong configuration, ...\nplease also note that, generally, all that \"cloud stuff\" has \nvery poor I/O performance ..\n\n>\n>\n\n\n",
"msg_date": "Mon, 08 Oct 2012 21:52:29 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On 2012-10-08 10:26, Navaneethan R wrote:\n> Hi all,\n>\n> I have 10 million records in my postgres table.I am running the\n> database in amazon ec2 medium instance. I need to access the last \n> week\n> data from the table.\n> It takes huge time to process the simple query.So, i throws time out\n> exception error.\n>\n> query is :\n> select count(*) from dealer_vehicle_details where modified_on\n> between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n>\n> After a lot of time it responds 1184 as count\n>\n> what are the ways i have to follow to increase the performance of \n> this query?\n>\n> The insertion also going parallel since the daily realtime updation.\n>\n> what could be the reason exactly for this lacking performace?\nWhat indexes do you have on your table?\n\nI'll bet none.\n\nWhat does an explain select count(*) from dealer_vehicle_details where \nmodified_on\n between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n\nshow?\n\nI have a 380Million row table, with LOTS of indexing, and we perform \nvery well.\n\nWithout indexes, the query had to sequential scan all 10 million rows. \nThat's going to be bad on ANY database.\n\n\n\n",
"msg_date": "Mon, 08 Oct 2012 14:53:48 -0500",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On 10/08/2012 08:26 AM, Navaneethan R wrote:\n> Hi all,\n>\n> I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance. I need to access the last week data from the table.\n> It takes huge time to process the simple query.So, i throws time out exception error.\n>\n> query is :\n> select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n>\n> After a lot of time it responds 1184 as count\n>\n> what are the ways i have to follow to increase the performance of this query?\n> \n> The insertion also going parallel since the daily realtime updation.\n>\n> what could be the reason exactly for this lacking performace?\n>\n>\nWhat version of PostgreSQL? You can use \"select version();\" and note \nthat 9.2 has index-only scans which can result in a substantial \nperformance boost for queries of this type.\n\nWhat is the structure of your table? You can use \"\\d+ \ndealer_vehicle_details\" in psql.\n\nHave you tuned PostgreSQL in any way? If so, what?\n\nCheers,\nSteve\n\n",
"msg_date": "Mon, 08 Oct 2012 13:09:59 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On Tuesday, October 9, 2012 1:40:08 AM UTC+5:30, Steve Crawford wrote:\n> On 10/08/2012 08:26 AM, Navaneethan R wrote:\n> \n> > Hi all,\n> \n> >\n> \n> > I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance. I need to access the last week data from the table.\n> \n> > It takes huge time to process the simple query.So, i throws time out exception error.\n> \n> >\n> \n> > query is :\n> \n> > select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n> \n> >\n> \n> > After a lot of time it responds 1184 as count\n> \n> >\n> \n> > what are the ways i have to follow to increase the performance of this query?\n> \n> > \n> \n> > The insertion also going parallel since the daily realtime updation.\n> \n> >\n> \n> > what could be the reason exactly for this lacking performace?\n> \n> >\n> \n> >\n> \n> What version of PostgreSQL? You can use \"select version();\" and note \n> \n> that 9.2 has index-only scans which can result in a substantial \n> \n> performance boost for queries of this type.\n> \n> \n> \n> What is the structure of your table? You can use \"\\d+ \n> \n> dealer_vehicle_details\" in psql.\n> \n> \n> \n> Have you tuned PostgreSQL in any way? 
If so, what?\n> \n> \n> \n> Cheers,\n> \n> Steve\n> \n> \n> \n> \n> \n> -- \n> \n> Sent via pgsql-performance mailing list ([email protected])\n> \n> To make changes to your subscription:\n> \n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nversion():\n\n PostgreSQL 8.4.8 on i686-pc-linux-gnu, compiled by GCC gcc-4.5.real (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2, 32-bit\n\nDesc:\n Table \"public.dealer_vehicle_details\"\n Column | Type | Modifiers | Storage | Description \n----------------+--------------------------+-------------------------------------------------------------------------+---------+-------------\n id | integer | not null default nextval('dealer_vehicle_details_new_id_seq'::regclass) | plain | \n vin_id | integer | not null | plain | \n vin_details_id | integer | | plain | \n price | integer | | plain | \n mileage | double precision | | plain | \n dealer_id | integer | not null | plain | \n created_on | timestamp with time zone | not null | plain | \n modified_on | timestamp with time zone | not null | plain | \nIndexes:\n \"dealer_vehicle_details_pkey\" PRIMARY KEY, btree (id)\n \"idx_dealer_sites_id\" UNIQUE, btree (id) WHERE dealer_id = 270001\n \"idx_dealer_sites_id_526889\" UNIQUE, btree (id) WHERE dealer_id = 526889\n \"idx_dealer_sites_id_9765\" UNIQUE, btree (id, vin_id) WHERE dealer_id = 9765\n \"idx_dealer_sites_id_9765_all\" UNIQUE, btree (id, vin_id, price, mileage, modified_on, created_on, vin_details_id) WHERE dealer_id = 9765\n \"mileage_idx\" btree (mileage)\n \"price_idx\" btree (price)\n \"vehiclecre_idx\" btree (created_on)\n \"vehicleid_idx\" btree (id)\n \"vehiclemod_idx\" btree (modified_on)\n \"vin_details_id_idx\" btree (vin_details_id)\n \"vin_id_idx\" btree (vin_id)\nForeign-key constraints:\n \"dealer_vehicle_master_dealer_id_fkey\" FOREIGN KEY (dealer_id) REFERENCES dealer_dealer_master(id) DEFERRABLE INITIALLY DEFERRED\n \"dealer_vehicle_master_vehicle_id_fkey\" FOREIGN KEY (vin_id) REFERENCES dealer_vehicle(id) 
DEFERRABLE INITIALLY DEFERRED\n \"dealer_vehicle_master_vin_details_id_fkey\" FOREIGN KEY (vin_details_id) REFERENCES vin_lookup_table(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\n\n After created the index for WHERE clause \"WHERE dealer_id = 270001\"..It is performing better.I have more dealer ids Should I do it for each dealer_id?\n\nAnd The insertion service also happening background parallel. \n\nSo, What are the important steps I should follow frequently to keep the database healthy?\n\nSince, the insertion is happening all time..It would reach millions of millions soon.What are precautions should be followed?\n\n",
"msg_date": "Mon, 8 Oct 2012 13:25:02 -0700 (PDT)",
"msg_from": "Navaneethan R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On 10/08/2012 11:26 PM, Navaneethan R wrote:\n> Hi all,\n>\n> I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance.\n\nEC2 usually means \"My I/O performance is terrible\" and \"medium instance\" \nmeans \"I don't have enough RAM for caching to make up for my terrible \nI/O\" at the database sizes you're talking.\n\nAnything that hits most of the database is likely to perform pretty \npoorly on something like EC2. It might be worth considering one of the \nhigh memory or high I/O instances, but unfortunately they only come in \n\"really big and really expensive\".\n\nIf you already have appropriate indexes and have used `explain analyze` \nto verify that the query isn't doing anything slow and expensive, it's \npossible the easiest way to improve performance is to set up async \nreplication or log shipping to a local hot standby on real physical \nhardware, then do the query there.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Tue, 09 Oct 2012 04:27:45 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 1:27 PM, Craig Ringer <[email protected]> wrote:\n\n>\n> If you already have appropriate indexes and have used `explain analyze` to\n> verify that the query isn't doing anything slow and expensive, it's\n> possible the easiest way to improve performance is to set up async\n> replication or log shipping to a local hot standby on real physical\n> hardware, then do the query there.\n>\n\nI've run postgresql on medium instances using elastic block store for the\nstorage and had no difficulty running queries like this one on tables of\ncomparable (and larger) size. It might not come back in 10ms, but such\nqueries weren't so slow that I would describe the wait as \"a lot of time\"\neither. My guess is that this is a sequential scan on a 10 million record\ntable with lots of bloat due to updates. Without more info about table\nstructure and explain analyze output, we are all just guessing, though.\n Please read the wiki page which describes how to submit performance\nproblems and restate your question.\n\n",
"msg_date": "Mon, 8 Oct 2012 15:42:39 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 1:25 PM, Navaneethan R <[email protected]> wrote:\n\n>\n> After created the index for WHERE clause \"WHERE dealer_id = 270001\"..It\n> is performing better.I have more dealer ids Should I do it for each\n> dealer_id?\n>\n>\nAll you've really done is confuse the issue. Please read the wiki page on\nhow to submit performance questions and actually follow the directions.\n Show us the table structure when the query is performing poorly ALONG WITH\nexplain analyze output, so we can see how the query is being handled by the\ndb. Adding indexes for just one particular value isn't likely a great\nsolution unless there's a reason why that value is special or performance\nfor that value needs to be particularly good. Far better to get at the\nroot problem of performance issues on that table, whether it is table\nbloat, insufficient indexes, invalid statistics, or something else.\n\n",
"msg_date": "Tue, 9 Oct 2012 14:47:07 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 1:25 PM, Navaneethan R <[email protected]> wrote:\n> On Tuesday, October 9, 2012 1:40:08 AM UTC+5:30, Steve Crawford wrote:\n>> On 10/08/2012 08:26 AM, Navaneethan R wrote:\n>>\n>> > Hi all,\n>>\n>> >\n>>\n>> > I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance. I need to access the last week data from the table.\n>>\n>> > It takes huge time to process the simple query.So, i throws time out exception error.\n>>\n>> >\n>>\n>> > query is :\n>>\n>> > select count(*) from dealer_vehicle_details where modified_on between '2012-10-01' and '2012-10-08' and dealer_id=270001;\n>>\n>> >\n>>\n>> > After a lot of time it responds 1184 as count\n>>\n>> >\n>>\n>> > what are the ways i have to follow to increase the performance of this query?\n>>\n>> >\n>>\n>> > The insertion also going parallel since the daily realtime updation.\n>>\n>> >\n>>\n>> > what could be the reason exactly for this lacking performace?\n>>\n>> >\n>>\n>> >\n>>\n>> What version of PostgreSQL? You can use \"select version();\" and note\n>>\n>> that 9.2 has index-only scans which can result in a substantial\n>>\n>> performance boost for queries of this type.\n>>\n>>\n>>\n>> What is the structure of your table? You can use \"\\d+\n>>\n>> dealer_vehicle_details\" in psql.\n>>\n>>\n>>\n>> Have you tuned PostgreSQL in any way? 
If so, what?\n>>\n>>\n>>\n>> Cheers,\n>>\n>> Steve\n>>\n>>\n>>\n>>\n>>\n>> --\n>>\n>> Sent via pgsql-performance mailing list ([email protected])\n>>\n>> To make changes to your subscription:\n>>\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> version():\n>\n> PostgreSQL 8.4.8 on i686-pc-linux-gnu, compiled by GCC gcc-4.5.real (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2, 32-bit\n>\n> Desc:\n> Table \"public.dealer_vehicle_details\"\n> Column | Type | Modifiers | Storage | Description\n> ----------------+--------------------------+-------------------------------------------------------------------------+---------+-------------\n> id | integer | not null default nextval('dealer_vehicle_details_new_id_seq'::regclass) | plain |\n> vin_id | integer | not null | plain |\n> vin_details_id | integer | | plain |\n> price | integer | | plain |\n> mileage | double precision | | plain |\n> dealer_id | integer | not null | plain |\n> created_on | timestamp with time zone | not null | plain |\n> modified_on | timestamp with time zone | not null | plain |\n> Indexes:\n> \"dealer_vehicle_details_pkey\" PRIMARY KEY, btree (id)\n> \"idx_dealer_sites_id\" UNIQUE, btree (id) WHERE dealer_id = 270001\n> \"idx_dealer_sites_id_526889\" UNIQUE, btree (id) WHERE dealer_id = 526889\n> \"idx_dealer_sites_id_9765\" UNIQUE, btree (id, vin_id) WHERE dealer_id = 9765\n> \"idx_dealer_sites_id_9765_all\" UNIQUE, btree (id, vin_id, price, mileage, modified_on, created_on, vin_details_id) WHERE dealer_id = 9765\n> \"mileage_idx\" btree (mileage)\n> \"price_idx\" btree (price)\n> \"vehiclecre_idx\" btree (created_on)\n> \"vehicleid_idx\" btree (id)\n> \"vehiclemod_idx\" btree (modified_on)\n> \"vin_details_id_idx\" btree (vin_details_id)\n> \"vin_id_idx\" btree (vin_id)\n> Foreign-key constraints:\n> \"dealer_vehicle_master_dealer_id_fkey\" FOREIGN KEY (dealer_id) REFERENCES dealer_dealer_master(id) DEFERRABLE INITIALLY DEFERRED\n> \"dealer_vehicle_master_vehicle_id_fkey\" FOREIGN KEY 
(vin_id) REFERENCES dealer_vehicle(id) DEFERRABLE INITIALLY DEFERRED\n> \"dealer_vehicle_master_vin_details_id_fkey\" FOREIGN KEY (vin_details_id) REFERENCES vin_lookup_table(id) DEFERRABLE INITIALLY DEFERRED\n> Has OIDs: no\n>\n>\n> After created the index for WHERE clause \"WHERE dealer_id = 270001\"..It is performing better.I have more dealer ids Should I do it for each dealer_id?\n\nYou seem to have created a partial index. Normally, that's not what\nyou want. You just want an index on the field \"dealer_id\", without\nthe conditional index. Conditional indexes are useful when you have a\nlot of queries with the same WHERE clause entry, such as \"WHERE\ndeleted_at IS NULL\" or whatnot where most of the table has been\nsoft-deleted.\n\nHere's a recent blog post discussing the topic that doesn't presume a\nlot of familiarity with database performance, geared towards\napplication developers writing OLTP applications, which this seems\nlike one of:\n\n http://www.craigkerstiens.com/2012/10/01/understanding-postgres-performance/\n\n\n-- \nfdr\n\n",
"msg_date": "Tue, 9 Oct 2012 14:49:04 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling 10 million records in PostgreSQL table"
}
]
[
{
"msg_contents": "This is driving me crazy. A new server, virtually identical to an old one,\nhas 50% of the performance with pgbench. I've checked everything I can\nthink of.\n\nThe setups (call the servers \"old\" and \"new\"):\n\nold: 2 x 4-core Intel Xeon E5620\nnew: 4 x 4-core Intel Xeon E5606\n\nboth:\n\n memory: 12 GB DDR EC\n Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n 2 disks, RAID1: OS (ext4) and postgres xlog (ext2)\n 8 disks, RAID10: $PGDATA\n\n 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n indicates that the battery is charged and the cache is working on both\nunits.\n\n Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n actually cloned from old server).\n\n Postgres: 8.4.4 (yes, I should update. But both are identical.)\n\nThe postgres.conf files are identical; diffs from the original are:\n\n max_connections = 500\n shared_buffers = 1000MB\n work_mem = 128MB\n synchronous_commit = off\n full_page_writes = off\n wal_buffers = 256kB\n checkpoint_segments = 30\n effective_cache_size = 4GB\n track_activities = on\n track_counts = on\n track_functions = none\n autovacuum = on\n autovacuum_naptime = 5min\n escape_string_warning = off\n\nNote that the old server is in production and was serving a light load\nwhile this test was running, so in theory it should be slower, not faster,\nthan the new server.\n\npgbench: Old server\n\n pgbench -i -s 100 -U test\n pgbench -U test -c ... -t ...\n\n -c -t TPS\n 5 20000 3777\n 10 10000 2622\n 20 5000 3759\n 30 3333 5712\n 40 2500 5953\n 50 2000 6141\n\nNew server\n -c -t TPS\n 5 20000 2733\n 10 10000 2783\n 20 5000 3241\n 30 3333 2987\n 40 2500 2739\n 50 2000 2119\n\nAs you can see, the new server is dramatically slower than the old one.\n\nI tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++.\nThe xlog disks were almost identical in performance. 
The RAID10 pg-data\ndisks looked like this:\n\nOld server:\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nxenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31\n737.6 31\nLatency 20512us 469ms 394ms 21402us 396ms\n112ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nxenon -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++\n+++\nLatency 43291us 857us 519us 1588us 37us\n178us\n1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n+,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n\n\nNew server:\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n%CP\nzinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17\n752.0 23\nLatency 15613us 598ms 597ms 2764us 398ms\n215ms\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\nzinc -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 16 20380 26 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++\n+++\nLatency 487us 627us 407us 972us 29us\n262us\n1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n\nI don't know enough about bonnie++ to know if these differences are\ninteresting.\n\nOne dramatic difference I noted via vmstat. 
On the old server, the I/O\nload during the bonnie++ run was steady, like this:\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 0 2 71800 2117612 17940 9375660 0 0 82948 81944 1992 1341 1 3\n86 10\n 0 2 71800 2113328 17948 9383896 0 0 76288 75806 1751 1167 0 2\n86 11\n 0 1 71800 2111004 17948 9386540 92 0 93324 94232 2230 1510 0 4\n86 10\n 0 1 71800 2106796 17948 9387436 114 0 67698 67588 1572 1088 0 2\n87 11\n 0 1 71800 2106724 17956 9387968 50 0 81970 85710 1918 1287 0 3\n86 10\n 1 1 71800 2103304 17956 9390700 0 0 92096 92160 1970 1194 0 4\n86 10\n 0 2 71800 2103196 17976 9389204 0 0 70722 69680 1655 1116 1 3\n86 10\n 1 1 71800 2099064 17980 9390824 0 0 57346 57348 1357 949 0 2\n87 11\n 0 1 71800 2095596 17980 9392720 0 0 57344 57348 1379 987 0 2\n86 12\n\nBut the new server varied wildly during bonnie++:\n\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 0 1 0 4518352 12004 7167000 0 0 118894 120838 2613 1539 0 2\n93 5\n 0 1 0 4517252 12004 7167824 0 0 52116 53248 1179 793 0 1\n94 5\n 0 1 0 4515864 12004 7169088 0 0 46764 49152 1104 733 0 1\n91 7\n 0 1 0 4515180 12012 7169764 0 0 32924 30724 750 542 0 1\n93 6\n 0 1 0 4514328 12016 7170780 0 0 42188 45056 1019 664 0 1\n90 9\n 0 1 0 4513072 12016 7171856 0 0 67528 65540 1487 993 0 1\n96 4\n 0 1 0 4510852 12016 7173160 0 0 56876 57344 1358 942 0 1\n94 5\n 0 1 0 4500280 12044 7179924 0 0 91564 94220 2505 2504 1 2\n91 6\n 0 1 0 4495564 12052 7183492 0 0 102660 104452 2289 1473 0 2\n92 6\n 0 1 0 4492092 12052 7187720 0 0 98498 96274 2140 1385 0 2\n93 5\n 0 1 0 4488608 12060 7190772 0 0 97628 100358 2176 1398 0 1\n94 4\n 1 0 0 4485880 12052 7192600 0 0 112406 114686 2461 1509 0 3\n90 7\n 1 0 0 4483424 12052 7195612 0 0 64678 65536 1449 948 0 1\n91 8\n 0 1 0 4480252 12052 7199404 
0 0 99608 100356 2217 1452 0 1\n96 3\n\nAny ideas where to look next\nwould be greatly appreciated.\n\nCraig",
"msg_date": "Mon, 8 Oct 2012 14:45:18 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Two identical systems, radically different performance"
},
{
"msg_contents": "On Oct 9, 2012, at 1:45 AM, Craig James <[email protected]> wrote:\n\n> This is driving me crazy. A new server, virtually identical to an old one, has 50% of the performance with pgbench. I've checked everything I can think of.\n> \n> The setups (call the servers \"old\" and \"new\"):\n> \n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n> \n> both:\n> \n> memory: 12 GB DDR EC\n> Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n> 2 disks, RAID1: OS (ext4) and postgres xlog (ext2)\n> 8 disks, RAID10: $PGDATA\n> \n> 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n> indicates that the battery is charged and the cache is working on both units.\n> \n> Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n> actually cloned from old server).\n> \n> Postgres: 8.4.4 (yes, I should update. But both are identical.)\n> \n> The postgres.conf files are identical; diffs from the original are:\n> \n> max_connections = 500\n> shared_buffers = 1000MB\n> work_mem = 128MB\n> synchronous_commit = off\n> full_page_writes = off\n> wal_buffers = 256kB\n\nwal buffers seems very small. Simon suggests to set them at least to 16MB.\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n\nYou have 12Gb RAM.\n> track_activities = on\n> track_counts = on\n> track_functions = none\n> autovacuum = on\n> autovacuum_naptime = 5min\n> escape_string_warning = off\n> \n> Note that the old server is in production and was serving a light load while this test was running, so in theory it should be slower, not faster, than the new server. \n> \n> pgbench: Old server\n> \n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... 
-t ...\n> \n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n> \n> New server\n> -c -t TPS\n> 5 20000 2733\n> 10 10000 2783\n> 20 5000 3241\n> 30 3333 2987\n> 40 2500 2739\n> 50 2000 2119\n> \n> As you can see, the new server is dramatically slower than the old one.\n> \n> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++. The xlog disks were almost identical in performance. The RAID10 pg-data disks looked like this:\n> \n> Old server:\n> Version 1.96 ------Sequential Output------ --Sequential Input- --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> xenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31 737.6 31\n> Latency 20512us 469ms 394ms 21402us 396ms 112ms\n> Version 1.96 ------Sequential Create------ --------Random Create--------\n> xenon -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> Latency 43291us 857us 519us 1588us 37us 178us\n> 1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n> +,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n> \n> \n> New server:\n> Version 1.96 ------Sequential Output------ --Sequential Input- --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> zinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17 752.0 23\n> Latency 15613us 598ms 597ms 2764us 398ms 215ms\n> Version 1.96 ------Sequential Create------ --------Random Create--------\n> zinc -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec 
%CP /sec %CP\n> 16 20380 26 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> Latency 487us 627us 407us 972us 29us 262us\n> 1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n> ,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n\nSequential Input on the new one is 279MB/s, on the old 400MB/s. \n\n> I don't know enough about bonnie++ to know if these differences are interesting.\n> \n> One dramatic difference I noted via vmstat. On the old server, the I/O load during the bonnie++ run was steady, like this:\n> \n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 2 71800 2117612 17940 9375660 0 0 82948 81944 1992 1341 1 3 86 10\n> 0 2 71800 2113328 17948 9383896 0 0 76288 75806 1751 1167 0 2 86 11\n> 0 1 71800 2111004 17948 9386540 92 0 93324 94232 2230 1510 0 4 86 10\n> 0 1 71800 2106796 17948 9387436 114 0 67698 67588 1572 1088 0 2 87 11\n> 0 1 71800 2106724 17956 9387968 50 0 81970 85710 1918 1287 0 3 86 10\n> 1 1 71800 2103304 17956 9390700 0 0 92096 92160 1970 1194 0 4 86 10\n> 0 2 71800 2103196 17976 9389204 0 0 70722 69680 1655 1116 1 3 86 10\n> 1 1 71800 2099064 17980 9390824 0 0 57346 57348 1357 949 0 2 87 11\n> 0 1 71800 2095596 17980 9392720 0 0 57344 57348 1379 987 0 2 86 12\n> \n> But the new server varied wildly during bonnie++:\n> \n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 1 0 4518352 12004 7167000 0 0 118894 120838 2613 1539 0 2 93 5\n> 0 1 0 4517252 12004 7167824 0 0 52116 53248 1179 793 0 1 94 5\n> 0 1 0 4515864 12004 7169088 0 0 46764 49152 1104 733 0 1 91 7\n> 0 1 0 4515180 12012 7169764 0 0 32924 30724 750 542 0 1 93 6\n> 0 1 0 4514328 12016 7170780 0 0 42188 45056 
1019 664 0 1 90 9\n> 0 1 0 4513072 12016 7171856 0 0 67528 65540 1487 993 0 1 96 4\n> 0 1 0 4510852 12016 7173160 0 0 56876 57344 1358 942 0 1 94 5\n> 0 1 0 4500280 12044 7179924 0 0 91564 94220 2505 2504 1 2 91 6\n> 0 1 0 4495564 12052 7183492 0 0 102660 104452 2289 1473 0 2 92 6\n> 0 1 0 4492092 12052 7187720 0 0 98498 96274 2140 1385 0 2 93 5\n> 0 1 0 4488608 12060 7190772 0 0 97628 100358 2176 1398 0 1 94 4\n> 1 0 0 4485880 12052 7192600 0 0 112406 114686 2461 1509 0 3 90 7\n> 1 0 0 4483424 12052 7195612 0 0 64678 65536 1449 948 0 1 91 8\n> 0 1 0 4480252 12052 7199404 0 0 99608 100356 2217 1452 0 1 96 3\n> \n> Any ideas where to look next would be greatly appreciated.\n> \n> Craig\n> \n",
"msg_date": "Tue, 9 Oct 2012 01:57:24 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 2:57 PM, Evgeny Shishkin <[email protected]>wrote:\n\n>\n> On Oct 9, 2012, at 1:45 AM, Craig James <[email protected]> wrote:\n>\n> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++.\n> The xlog disks were almost identical in performance. The RAID10 pg-data\n> disks looked like this:\n>\n> Old server:\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> xenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31\n> 737.6 31\n> Latency 20512us 469ms 394ms 21402us 396ms\n> 112ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> xenon -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> Latency 43291us 857us 519us 1588us 37us\n> 178us\n>\n> 1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n>\n> +,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n>\n>\n> New server:\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> zinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17\n> 752.0 23\n> Latency 15613us 598ms 597ms 2764us 398ms\n> 215ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> zinc -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 20380 26 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> Latency 487us 
627us 407us 972us 29us\n> 262us\n>\n> 1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n>\n> ,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n>\n>\n> Sequential Input on the new one is 279MB/s, on the old 400MB/s.\n>\n>\nBut why? What have I overlooked?\n\nThanks,\nCraig",
"msg_date": "Mon, 8 Oct 2012 15:06:05 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Oct 9, 2012, at 2:06 AM, Craig James <[email protected]> wrote:\n\n> \n> \n> On Mon, Oct 8, 2012 at 2:57 PM, Evgeny Shishkin <[email protected]> wrote:\n> \n> On Oct 9, 2012, at 1:45 AM, Craig James <[email protected]> wrote:\n> \n>> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++. The xlog disks were almost identical in performance. The RAID10 pg-data disks looked like this:\n>> \n>> Old server:\n>> Version 1.96 ------Sequential Output------ --Sequential Input- --Random-\n>> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n>> xenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31 737.6 31\n>> Latency 20512us 469ms 394ms 21402us 396ms 112ms\n>> Version 1.96 ------Sequential Create------ --------Random Create--------\n>> xenon -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>> 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n>> Latency 43291us 857us 519us 1588us 37us 178us\n>> 1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n>> +,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n>> \n>> \n>> New server:\n>> Version 1.96 ------Sequential Output------ --Sequential Input- --Random-\n>> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n>> zinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17 752.0 23\n>> Latency 15613us 598ms 597ms 2764us 398ms 215ms\n>> Version 1.96 ------Sequential Create------ --------Random Create--------\n>> zinc -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n>> 16 20380 26 +++++ +++ +++++ +++ +++++ 
+++ +++++ +++ +++++ +++\n>> Latency 487us 627us 407us 972us 29us 262us\n>> 1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n>> ,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n> \n> Sequential Input on the new one is 279MB/s, on the old 400MB/s. \n> \n> \n> But why? What have I overlooked?\n\nblockdev --setra 32000 ?\nAlso you benchmarked volume for pgdata? Can you provide benchmarks for wal volume?\n> \n> Thanks,\n> Craig\n> \n",
"msg_date": "Tue, 9 Oct 2012 02:08:29 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 7:06 PM, Craig James <[email protected]> wrote:\n>> Sequential Input on the new one is 279MB/s, on the old 400MB/s.\n>>\n>\n> But why? What have I overlooked?\n\nDo you have readahead properly set up on the new one?\n\n",
"msg_date": "Mon, 8 Oct 2012 19:09:45 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/08/2012 02:45 PM, Craig James wrote:\n> This is driving me crazy. A new server, virtually identical to an old \n> one, has 50% of the performance with pgbench. I've checked everything \n> I can think of.\n>\n> The setups (call the servers \"old\" and \"new\"):\n>\n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n>\n> both:\n>\n> memory: 12 GB DDR EC\n> Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n> 2 disks, RAID1: OS (ext4) and postgres xlog (ext2)\n> 8 disks, RAID10: $PGDATA\nExact same model of disk, same on-board cache, same RAID-card RAM size, \nsame RAID strip-size, etc.??\n\nCheers,\nSteve",
"msg_date": "Mon, 08 Oct 2012 15:16:05 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 3:09 PM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Oct 8, 2012 at 7:06 PM, Craig James <[email protected]> wrote:\n> >> Sequential Input on the new one is 279MB/s, on the old 400MB/s.\n> >>\n> >\n> > But why? What have I overlooked?\n>\n> Do you have readahead properly set up on the new one?\n>\n\n # blockdev --getra /dev/sdb1\n256\n\nSame on both servers.\n\nThanks,\nCraig",
"msg_date": "Mon, 8 Oct 2012 15:25:43 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": ">old: 2 x 4-core Intel Xeon E5620\n>new: 4 x 4-core Intel Xeon E5606\n\nhttp://ark.intel.com/compare/47925,52583\n\nold: Xeon E5620 : 4 cores ; 8 Threads ; *Clock Speed : 2.40 GHz\n ; Max Turbo Frequency: 2.66 GHz*\nnew: Xeon E5606 : 4 cores ; 4 Threads ; Clock Speed : 2.13 GHz ;\nMax Turbo Frequency: -\n\nthe older processor maybe faster ;\n\nImre\n\n2012/10/8 Craig James <[email protected]>\n\n> This is driving me crazy. A new server, virtually identical to an old\n> one, has 50% of the performance with pgbench. I've checked everything I\n> can think of.\n>\n> The setups (call the servers \"old\" and \"new\"):\n>\n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n>\n> both:\n>\n> memory: 12 GB DDR EC\n> Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n> 2 disks, RAID1: OS (ext4) and postgres xlog (ext2)\n> 8 disks, RAID10: $PGDATA\n>\n> 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n> indicates that the battery is charged and the cache is working on both\n> units.\n>\n> Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n> actually cloned from old server).\n>\n> Postgres: 8.4.4 (yes, I should update. But both are identical.)\n>\n> The postgres.conf files are identical; diffs from the original are:\n>\n> max_connections = 500\n> shared_buffers = 1000MB\n> work_mem = 128MB\n> synchronous_commit = off\n> full_page_writes = off\n> wal_buffers = 256kB\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n> track_activities = on\n> track_counts = on\n> track_functions = none\n> autovacuum = on\n> autovacuum_naptime = 5min\n> escape_string_warning = off\n>\n> Note that the old server is in production and was serving a light load\n> while this test was running, so in theory it should be slower, not faster,\n> than the new server.\n>\n> pgbench: Old server\n>\n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... 
-t ...\n>\n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n>\n> New server\n> -c -t TPS\n> 5 20000 2733\n> 10 10000 2783\n> 20 5000 3241\n> 30 3333 2987\n> 40 2500 2739\n> 50 2000 2119\n>\n> As you can see, the new server is dramatically slower than the old one.\n>\n> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++.\n> The xlog disks were almost identical in performance. The RAID10 pg-data\n> disks looked like this:\n>\n> Old server:\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> xenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31\n> 737.6 31\n> Latency 20512us 469ms 394ms 21402us 396ms\n> 112ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> xenon -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> Latency 43291us 857us 519us 1588us 37us\n> 178us\n>\n> 1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n>\n> +,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n>\n>\n> New server:\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> zinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17\n> 752.0 23\n> Latency 15613us 598ms 597ms 2764us 398ms\n> 215ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> zinc -Create-- --Read--- -Delete-- -Create-- --Read---\n> 
-Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 20380 26 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> Latency 487us 627us 407us 972us 29us\n> 262us\n>\n> 1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n>\n> ,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n>\n> I don't know enough about bonnie++ to know if these differences are\n> interesting.\n>\n> One dramatic difference I noted via vmstat. On the old server, the I/O\n> load during the bonnie++ run was steady, like this:\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 2 71800 2117612 17940 9375660 0 0 82948 81944 1992 1341 1 3\n> 86 10\n> 0 2 71800 2113328 17948 9383896 0 0 76288 75806 1751 1167 0 2\n> 86 11\n> 0 1 71800 2111004 17948 9386540 92 0 93324 94232 2230 1510 0 4\n> 86 10\n> 0 1 71800 2106796 17948 9387436 114 0 67698 67588 1572 1088 0 2\n> 87 11\n> 0 1 71800 2106724 17956 9387968 50 0 81970 85710 1918 1287 0 3\n> 86 10\n> 1 1 71800 2103304 17956 9390700 0 0 92096 92160 1970 1194 0 4\n> 86 10\n> 0 2 71800 2103196 17976 9389204 0 0 70722 69680 1655 1116 1 3\n> 86 10\n> 1 1 71800 2099064 17980 9390824 0 0 57346 57348 1357 949 0 2\n> 87 11\n> 0 1 71800 2095596 17980 9392720 0 0 57344 57348 1379 987 0 2\n> 86 12\n>\n> But the new server varied wildly during bonnie++:\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 1 0 4518352 12004 7167000 0 0 118894 120838 2613 1539 0\n> 2 93 5\n> 0 1 0 4517252 12004 7167824 0 0 52116 53248 1179 793 0\n> 1 94 5\n> 0 1 0 4515864 12004 7169088 0 0 46764 49152 1104 733 0\n> 1 91 7\n> 0 1 0 4515180 12012 7169764 0 0 32924 30724 750 
542 0\n> 1 93 6\n> 0 1 0 4514328 12016 7170780 0 0 42188 45056 1019 664 0\n> 1 90 9\n> 0 1 0 4513072 12016 7171856 0 0 67528 65540 1487 993 0\n> 1 96 4\n> 0 1 0 4510852 12016 7173160 0 0 56876 57344 1358 942 0\n> 1 94 5\n> 0 1 0 4500280 12044 7179924 0 0 91564 94220 2505 2504 1\n> 2 91 6\n> 0 1 0 4495564 12052 7183492 0 0 102660 104452 2289 1473 0\n> 2 92 6\n> 0 1 0 4492092 12052 7187720 0 0 98498 96274 2140 1385 0\n> 2 93 5\n> 0 1 0 4488608 12060 7190772 0 0 97628 100358 2176 1398 0\n> 1 94 4\n> 1 0 0 4485880 12052 7192600 0 0 112406 114686 2461 1509 0\n> 3 90 7\n> 1 0 0 4483424 12052 7195612 0 0 64678 65536 1449 948 0\n> 1 91 8\n> 0 1 0 4480252 12052 7199404 0 0 99608 100356 2217 1452 0\n> 1 96 3\n>\n> Any ideas where to look next would be greatly appreciated.\n>\n> Craig\n>\n>\n",
"msg_date": "Tue, 9 Oct 2012 00:28:37 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "One mistake in my descriptions...\n\nOn Mon, Oct 8, 2012 at 2:45 PM, Craig James <[email protected]> wrote:\n\n> This is driving me crazy. A new server, virtually identical to an old\n> one, has 50% of the performance with pgbench. I've checked everything I\n> can think of.\n>\n> The setups (call the servers \"old\" and \"new\"):\n>\n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n>\n\nActually it's not 16 cores. It's 8 cores, hyperthreaded. Hyperthreading\nis disabled on the old system.\n\nIs that enough to make this radical difference? (The server is at a\nco-location site, so I have to go down there to boot into the BIOS and\ndisable hyperthreading.)\n\nCraig\n\n\n>\n> both:\n>\n> memory: 12 GB DDR EC\n> Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n> 2 disks, RAID1: OS (ext4) and postgres xlog (ext2)\n> 8 disks, RAID10: $PGDATA\n>\n> 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n> indicates that the battery is charged and the cache is working on both\n> units.\n>\n> Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n> actually cloned from old server).\n>\n> Postgres: 8.4.4 (yes, I should update. But both are identical.)\n>\n> The postgres.conf files are identical; diffs from the original are:\n>\n> max_connections = 500\n> shared_buffers = 1000MB\n> work_mem = 128MB\n> synchronous_commit = off\n> full_page_writes = off\n> wal_buffers = 256kB\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n> track_activities = on\n> track_counts = on\n> track_functions = none\n> autovacuum = on\n> autovacuum_naptime = 5min\n> escape_string_warning = off\n>\n> Note that the old server is in production and was serving a light load\n> while this test was running, so in theory it should be slower, not faster,\n> than the new server.\n>\n> pgbench: Old server\n>\n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... 
-t ...\n>\n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n>\n> New server\n> -c -t TPS\n> 5 20000 2733\n> 10 10000 2783\n> 20 5000 3241\n> 30 3333 2987\n> 40 2500 2739\n> 50 2000 2119\n>\n> As you can see, the new server is dramatically slower than the old one.\n>\n> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++.\n> The xlog disks were almost identical in performance. The RAID10 pg-data\n> disks looked like this:\n>\n> Old server:\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> xenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31\n> 737.6 31\n> Latency 20512us 469ms 394ms 21402us 396ms\n> 112ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> xenon -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> Latency 43291us 857us 519us 1588us 37us\n> 178us\n>\n> 1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n>\n> +,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n>\n>\n> New server:\n> Version 1.96 ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> zinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17\n> 752.0 23\n> Latency 15613us 598ms 597ms 2764us 398ms\n> 215ms\n> Version 1.96 ------Sequential Create------ --------Random\n> Create--------\n> zinc -Create-- --Read--- -Delete-- -Create-- --Read---\n> 
-Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 20380 26 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> Latency 487us 627us 407us 972us 29us\n> 262us\n>\n> 1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n>\n> ,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n>\n> I don't know enough about bonnie++ to know if these differences are\n> interesting.\n>\n> One dramatic difference I noted via vmstat. On the old server, the I/O\n> load during the bonnie++ run was steady, like this:\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 2 71800 2117612 17940 9375660 0 0 82948 81944 1992 1341 1 3\n> 86 10\n> 0 2 71800 2113328 17948 9383896 0 0 76288 75806 1751 1167 0 2\n> 86 11\n> 0 1 71800 2111004 17948 9386540 92 0 93324 94232 2230 1510 0 4\n> 86 10\n> 0 1 71800 2106796 17948 9387436 114 0 67698 67588 1572 1088 0 2\n> 87 11\n> 0 1 71800 2106724 17956 9387968 50 0 81970 85710 1918 1287 0 3\n> 86 10\n> 1 1 71800 2103304 17956 9390700 0 0 92096 92160 1970 1194 0 4\n> 86 10\n> 0 2 71800 2103196 17976 9389204 0 0 70722 69680 1655 1116 1 3\n> 86 10\n> 1 1 71800 2099064 17980 9390824 0 0 57346 57348 1357 949 0 2\n> 87 11\n> 0 1 71800 2095596 17980 9392720 0 0 57344 57348 1379 987 0 2\n> 86 12\n>\n> But the new server varied wildly during bonnie++:\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 1 0 4518352 12004 7167000 0 0 118894 120838 2613 1539 0\n> 2 93 5\n> 0 1 0 4517252 12004 7167824 0 0 52116 53248 1179 793 0\n> 1 94 5\n> 0 1 0 4515864 12004 7169088 0 0 46764 49152 1104 733 0\n> 1 91 7\n> 0 1 0 4515180 12012 7169764 0 0 32924 30724 750 
542 0\n> 1 93 6\n> 0 1 0 4514328 12016 7170780 0 0 42188 45056 1019 664 0\n> 1 90 9\n> 0 1 0 4513072 12016 7171856 0 0 67528 65540 1487 993 0\n> 1 96 4\n> 0 1 0 4510852 12016 7173160 0 0 56876 57344 1358 942 0\n> 1 94 5\n> 0 1 0 4500280 12044 7179924 0 0 91564 94220 2505 2504 1\n> 2 91 6\n> 0 1 0 4495564 12052 7183492 0 0 102660 104452 2289 1473 0\n> 2 92 6\n> 0 1 0 4492092 12052 7187720 0 0 98498 96274 2140 1385 0\n> 2 93 5\n> 0 1 0 4488608 12060 7190772 0 0 97628 100358 2176 1398 0\n> 1 94 4\n> 1 0 0 4485880 12052 7192600 0 0 112406 114686 2461 1509 0\n> 3 90 7\n> 1 0 0 4483424 12052 7195612 0 0 64678 65536 1449 948 0\n> 1 91 8\n> 0 1 0 4480252 12052 7199404 0 0 99608 100356 2217 1452 0\n> 1 96 3\n>\n> Any ideas where to look next would be greatly appreciated.\n>\n> Craig\n>\n>\n",
"msg_date": "Mon, 8 Oct 2012 15:29:17 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Oct 9, 2012, at 1:45 AM, Craig James <[email protected]> wrote:\n\n> This is driving me crazy. A new server, virtually identical to an old one, has 50% of the performance with pgbench. I've checked everything I can think of.\n> \n> The setups (call the servers \"old\" and \"new\"):\n> \n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n> \n> both:\n> \n> memory: 12 GB DDR EC\n> Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n> 2 disks, RAID1: OS (ext4) and postgres xlog (ext2)\n> 8 disks, RAID10: $PGDATA\n> \n> 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n> indicates that the battery is charged and the cache is working on both units.\n> \n> Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n> actually cloned from old server).\n> \n> Postgres: 8.4.4 (yes, I should update. But both are identical.)\n> \n> The postgres.conf files are identical; diffs from the original are:\n> \n> max_connections = 500\n> shared_buffers = 1000MB\n> work_mem = 128MB\n> synchronous_commit = off\n> full_page_writes = off\n> wal_buffers = 256kB\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n> track_activities = on\n> track_counts = on\n> track_functions = none\n> autovacuum = on\n> autovacuum_naptime = 5min\n> escape_string_warning = off\n> \n> Note that the old server is in production and was serving a light load while this test was running, so in theory it should be slower, not faster, than the new server. \n> \n> pgbench: Old server\n> \n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... -t ...\n> \n> -c -t TPS\n> 5 20000 3777\n> 10 10000 2622\n> 20 5000 3759\n> 30 3333 5712\n> 40 2500 5953\n> 50 2000 6141\n> \n> New server\n> -c -t TPS\n> 5 20000 2733\n> 10 10000 2783\n> 20 5000 3241\n> 30 3333 2987\n> 40 2500 2739\n> 50 2000 2119\n\nOn new server postgresql do not scale at all. Looks like contention. 
\n\n> \n> As you can see, the new server is dramatically slower than the old one.\n> \n> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++. The xlog disks were almost identical in performance. The RAID10 pg-data disks looked like this:\n> \n> Old server:\n> Version 1.96 ------Sequential Output------ --Sequential Input- --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> xenon 24064M 687 99 203098 26 81904 16 3889 96 403747 31 737.6 31\n> Latency 20512us 469ms 394ms 21402us 396ms 112ms\n> Version 1.96 ------Sequential Create------ --------Random Create--------\n> xenon -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 15953 27 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> Latency 43291us 857us 519us 1588us 37us 178us\n> 1.96,1.96,xenon,1,1349726125,24064M,,687,99,203098,26,81904,16,3889,96,403747,31,737.6,31,16,,,,,15953,27,+++++,+++,+++++,++\\\n> +,+++++,+++,+++++,+++,+++++,+++,20512us,469ms,394ms,21402us,396ms,112ms,43291us,857us,519us,1588us,37us,178us\n> \n> \n> New server:\n> Version 1.96 ------Sequential Output------ --Sequential Input- --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n> zinc 24064M 862 99 212143 54 96008 14 4921 99 279239 17 752.0 23\n> Latency 15613us 598ms 597ms 2764us 398ms 215ms\n> Version 1.96 ------Sequential Create------ --------Random Create--------\n> zinc -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> 16 20380 26 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> Latency 487us 627us 407us 972us 29us 262us\n> 
1.96,1.96,zinc,1,1349722017,24064M,,862,99,212143,54,96008,14,4921,99,279239,17,752.0,23,16,,,,,20380,26,+++++,+++,+++++,+++\\\n> ,+++++,+++,+++++,+++,+++++,+++,15613us,598ms,597ms,2764us,398ms,215ms,487us,627us,407us,972us,29us,262us\n> \n> I don't know enough about bonnie++ to know if these differences are interesting.\n> \n> One dramatic difference I noted via vmstat. On the old server, the I/O load during the bonnie++ run was steady, like this:\n> \n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 2 71800 2117612 17940 9375660 0 0 82948 81944 1992 1341 1 3 86 10\n> 0 2 71800 2113328 17948 9383896 0 0 76288 75806 1751 1167 0 2 86 11\n> 0 1 71800 2111004 17948 9386540 92 0 93324 94232 2230 1510 0 4 86 10\n> 0 1 71800 2106796 17948 9387436 114 0 67698 67588 1572 1088 0 2 87 11\n> 0 1 71800 2106724 17956 9387968 50 0 81970 85710 1918 1287 0 3 86 10\n> 1 1 71800 2103304 17956 9390700 0 0 92096 92160 1970 1194 0 4 86 10\n> 0 2 71800 2103196 17976 9389204 0 0 70722 69680 1655 1116 1 3 86 10\n> 1 1 71800 2099064 17980 9390824 0 0 57346 57348 1357 949 0 2 87 11\n> 0 1 71800 2095596 17980 9392720 0 0 57344 57348 1379 987 0 2 86 12\n> \n> But the new server varied wildly during bonnie++:\n> \n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 1 0 4518352 12004 7167000 0 0 118894 120838 2613 1539 0 2 93 5\n> 0 1 0 4517252 12004 7167824 0 0 52116 53248 1179 793 0 1 94 5\n> 0 1 0 4515864 12004 7169088 0 0 46764 49152 1104 733 0 1 91 7\n> 0 1 0 4515180 12012 7169764 0 0 32924 30724 750 542 0 1 93 6\n> 0 1 0 4514328 12016 7170780 0 0 42188 45056 1019 664 0 1 90 9\n> 0 1 0 4513072 12016 7171856 0 0 67528 65540 1487 993 0 1 96 4\n> 0 1 0 4510852 12016 7173160 0 0 56876 57344 1358 942 0 1 94 5\n> 0 1 0 4500280 12044 7179924 0 0 91564 94220 
2505 2504 1 2 91 6\n> 0 1 0 4495564 12052 7183492 0 0 102660 104452 2289 1473 0 2 92 6\n> 0 1 0 4492092 12052 7187720 0 0 98498 96274 2140 1385 0 2 93 5\n> 0 1 0 4488608 12060 7190772 0 0 97628 100358 2176 1398 0 1 94 4\n> 1 0 0 4485880 12052 7192600 0 0 112406 114686 2461 1509 0 3 90 7\n> 1 0 0 4483424 12052 7195612 0 0 64678 65536 1449 948 0 1 91 8\n> 0 1 0 4480252 12052 7199404 0 0 99608 100356 2217 1452 0 1 96 3\n> \n\nAlso note the difference in free/cache distribution. Unless you took these numbers in completely different stages of bonnie++.\n\n> Any ideas where to look next would be greatly appreciated.\n> \n> Craig\n> \n",
"msg_date": "Tue, 9 Oct 2012 02:33:56 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 3:33 PM, Evgeny Shishkin <[email protected]>wrote:\n\n>\n> On Oct 9, 2012, at 1:45 AM, Craig James <[email protected]> wrote:\n>\n> One dramatic difference I noted via vmstat. On the old server, the I/O\n> load during the bonnie++ run was steady, like this:\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 2 71800 2117612 17940 9375660 0 0 82948 81944 1992 1341 1 3\n> 86 10\n> 0 2 71800 2113328 17948 9383896 0 0 76288 75806 1751 1167 0 2\n> 86 11\n> 0 1 71800 2111004 17948 9386540 92 0 93324 94232 2230 1510 0 4\n> 86 10\n> 0 1 71800 2106796 17948 9387436 114 0 67698 67588 1572 1088 0 2\n> 87 11\n> 0 1 71800 2106724 17956 9387968 50 0 81970 85710 1918 1287 0 3\n> 86 10\n> 1 1 71800 2103304 17956 9390700 0 0 92096 92160 1970 1194 0 4\n> 86 10\n> 0 2 71800 2103196 17976 9389204 0 0 70722 69680 1655 1116 1 3\n> 86 10\n> 1 1 71800 2099064 17980 9390824 0 0 57346 57348 1357 949 0 2\n> 87 11\n> 0 1 71800 2095596 17980 9392720 0 0 57344 57348 1379 987 0 2\n> 86 12\n>\n> But the new server varied wildly during bonnie++:\n>\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 0 1 0 4518352 12004 7167000 0 0 118894 120838 2613 1539 0\n> 2 93 5\n> 0 1 0 4517252 12004 7167824 0 0 52116 53248 1179 793 0\n> 1 94 5\n> 0 1 0 4515864 12004 7169088 0 0 46764 49152 1104 733 0\n> 1 91 7\n> 0 1 0 4515180 12012 7169764 0 0 32924 30724 750 542 0\n> 1 93 6\n> 0 1 0 4514328 12016 7170780 0 0 42188 45056 1019 664 0\n> 1 90 9\n> 0 1 0 4513072 12016 7171856 0 0 67528 65540 1487 993 0\n> 1 96 4\n> 0 1 0 4510852 12016 7173160 0 0 56876 57344 1358 942 0\n> 1 94 5\n> 0 1 0 4500280 12044 7179924 0 0 91564 94220 2505 2504 1\n> 2 91 6\n> 0 1 0 4495564 12052 7183492 0 0 102660 104452 2289 1473 0\n> 2 92 6\n> 
0 1 0 4492092 12052 7187720 0 0 98498 96274 2140 1385 0\n> 2 93 5\n> 0 1 0 4488608 12060 7190772 0 0 97628 100358 2176 1398 0\n> 1 94 4\n> 1 0 0 4485880 12052 7192600 0 0 112406 114686 2461 1509 0\n> 3 90 7\n> 1 0 0 4483424 12052 7195612 0 0 64678 65536 1449 948 0\n> 1 91 8\n> 0 1 0 4480252 12052 7199404 0 0 99608 100356 2217 1452 0\n> 1 96 3\n>\n>\n> Also note the difference in free/cache distribution. Unless you took these\n> numbers in completely different stages of bonnie++.\n>\n>\nThe old server is in production and is running Apache/Postgres requests.\n\nCraig",
"msg_date": "Mon, 8 Oct 2012 15:42:33 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 7:25 PM, Craig James <[email protected]> wrote:\n>> > But why? What have I overlooked?\n>>\n>> Do you have readahead properly set up on the new one?\n>\n>\n> # blockdev --getra /dev/sdb1\n> 256\n\n\nIt's probably this. 256 is way too low to saturate your I/O system.\nPump it up. I've found 8192 works nice for a system I have, 32000 I\nguess could work too.\n\n",
"msg_date": "Mon, 8 Oct 2012 19:44:04 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
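Claudio's advice above is easy to sanity-check with a little arithmetic: `blockdev` reports read-ahead in 512-byte sectors, so the reported 256 is only 128 kB, while his suggested 8192 sectors is 4 MB. A minimal sketch follows; the device name `/dev/sdb1` is taken from the thread, so substitute your own RAID10 device, and note that `--setra` needs root and does not survive a reboot:

```shell
# Convert blockdev's sector counts to kilobytes to see what 256 vs 8192 means.
ra_kb=$((256 * 512 / 1024))          # current: 256 sectors -> 128 kB
target_kb=$((8192 * 512 / 1024))     # suggested: 8192 sectors -> 4096 kB (4 MB)
echo "current read-ahead: ${ra_kb} kB, suggested: ${target_kb} kB"

# To apply on the data array (root required; re-run at boot, e.g. from rc.local):
#   blockdev --setra 8192 /dev/sdb1
```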
{
"msg_contents": "\nOn Oct 9, 2012, at 2:44 AM, Claudio Freire <[email protected]> wrote:\n\n> On Mon, Oct 8, 2012 at 7:25 PM, Craig James <[email protected]> wrote:\n>>>> But why? What have I overlooked?\n>>> \n>>> Do you have readahead properly set up on the new one?\n>> \n>> \n>> # blockdev --getra /dev/sdb1\n>> 256\n> \n> \n> It's probably this. 256 is way too low to saturate your I/O system.\n> Pump it up. I've found 8192 works nice for a system I have, 32000 I\n> guess could work too.\n\nThis. I also suggest rebenchmarking with increased wal_buffers. Maybe the falloff at higher client counts comes from WAL mutex contention.\n",
"msg_date": "Tue, 9 Oct 2012 02:46:38 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 3:44 PM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Oct 8, 2012 at 7:25 PM, Craig James <[email protected]> wrote:\n> >> > But why? What have I overlooked?\n> >>\n> >> Do you have readahead properly set up on the new one?\n> >\n> >\n> > # blockdev --getra /dev/sdb1\n> > 256\n>\n>\n> It's probably this. 256 is way too low to saturate your I/O system.\n> Pump it up. I've found 8192 works nice for a system I have, 32000 I\n> guess could work too.\n>\n\nBut again ... the two systems are identical. This can't explain it.\n\nThanks,\nCraig",
"msg_date": "Mon, 8 Oct 2012 15:48:52 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 7:48 PM, Craig James <[email protected]> wrote:\n>> > # blockdev --getra /dev/sdb1\n>> > 256\n>>\n>>\n>> It's probably this. 256 is way too low to saturate your I/O system.\n>> Pump it up. I've found 8192 works nice for a system I have, 32000 I\n>> guess could work too.\n>\n>\n> But again ... the two systems are identical. This can't explain it.\n\nIs the read-ahead the same in both systems?\n\n",
"msg_date": "Mon, 8 Oct 2012 19:50:30 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 8, 2012 at 3:50 PM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Oct 8, 2012 at 7:48 PM, Craig James <[email protected]> wrote:\n> >> > # blockdev --getra /dev/sdb1\n> >> > 256\n> >>\n> >>\n> >> It's probably this. 256 is way too low to saturate your I/O system.\n> >> Pump it up. I've found 8192 works nice for a system I have, 32000 I\n> >> guess could work too.\n> >\n> >\n> > But again ... the two systems are identical. This can't explain it.\n>\n> Is the read-ahead the same in both systems?\n>\n\nYes, as I said in the original reply (it got cut off from your reply):\n\"Same on both servers.\"\n\nCraig",
"msg_date": "Mon, 8 Oct 2012 16:03:53 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 09/10/12 11:48, Craig James wrote:\n> On Mon, Oct 8, 2012 at 3:44 PM, Claudio Freire <[email protected]>wrote:\n>\n>> On Mon, Oct 8, 2012 at 7:25 PM, Craig James <[email protected]> wrote:\n>>>>> But why? What have I overlooked?\n>>>> Do you have readahead properly set up on the new one?\n>>>\n>>> # blockdev --getra /dev/sdb1\n>>> 256\n>>\n>> It's probably this. 256 is way too low to saturate your I/O system.\n>> Pump it up. I've found 8192 works nice for a system I have, 32000 I\n>> guess could work too.\n>>\n> But again ... the two systems are identical. This can't explain it.\n>\n\nMaybe check all sysctl's are the same - in particular:\n\nvm.zone_reclaim_mode\n\nhas a tendency to set itself to 1 on newer hardware, which will reduce \nperformance of database style workloads.\n\nCheers\n\nMark\n\n",
"msg_date": "Tue, 09 Oct 2012 12:10:54 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
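Mark's `vm.zone_reclaim_mode` suspicion is cheap to verify before any other tuning. A sketch of the checks — the hostnames (xenon, zinc) come from the bonnie++ output earlier in the thread, and ssh access between the boxes is an assumption:

```shell
# Show the setting; on NUMA boxes newer kernels may auto-enable it (value 1),
# which hurts database-style workloads.
cat /proc/sys/vm/zone_reclaim_mode 2>/dev/null || echo "zone_reclaim_mode not exposed"

# Rather than eyeballing single settings, diff everything between the hosts:
#   ssh xenon 'sysctl -a' | sort > old.sysctl
#   ssh zinc  'sysctl -a' | sort > new.sysctl
#   diff old.sysctl new.sysctl
#
# To disable it immediately and persistently:
#   sysctl -w vm.zone_reclaim_mode=0
#   echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf
```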
{
"msg_contents": "On Mon, Oct 8, 2012 at 8:03 PM, Craig James <[email protected]> wrote:\n>> > But again ... the two systems are identical. This can't explain it.\n>>\n>> Is the read-ahead the same in both systems?\n>\n>\n> Yes, as I said in the original reply (it got cut off from your reply): \"Same\n> on both servers.\"\n\nOh, yes. Google collapsed it. Weird.\n\nAnyway, sequential I/O isn't the same in both servers, and usually you\ndon't get full sequential performance unless you bump up the\nread-ahead. I'm still betting on that for the difference in sequential\nperformance.\n\nAs for pgbench, I'm not sure, but I think pgbench doesn't really\nstress sequential performance. You seem to be getting bad queueing\nperformance. Did you check NCQ status on the RAID controller? Is it on\non both servers?\n\n",
"msg_date": "Mon, 8 Oct 2012 20:12:54 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 9.10.2012 01:03, Craig James wrote:\n> \n> \n> On Mon, Oct 8, 2012 at 3:50 PM, Claudio Freire <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On Mon, Oct 8, 2012 at 7:48 PM, Craig James <[email protected]\n> <mailto:[email protected]>> wrote:\n> >> > # blockdev --getra /dev/sdb1\n> >> > 256\n> >>\n> >>\n> >> It's probably this. 256 is way too low to saturate your I/O system.\n> >> Pump it up. I've found 8192 works nice for a system I have, 32000 I\n> >> guess could work too.\n> >\n> >\n> > But again ... the two systems are identical. This can't explain it.\n> \n> Is the read-ahead the same in both systems?\n> \n> \n> Yes, as I said in the original reply (it got cut off from your reply):\n> \"Same on both servers.\"\n\nAnd what about read-ahead settings on the controller? 3WARE used to have\na read-ahead settings on their own (usually there are three options -\nread-ahead, no read-ahead and adaptive). Is this set to the same value\non both machines?\n\nTomas\n\n",
"msg_date": "Tue, 09 Oct 2012 01:16:15 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 9.10.2012 00:33, Evgeny Shishkin wrote:\n>>\n>> pgbench: Old server\n>>\n>> pgbench -i -s 100 -U test\n>> pgbench -U test -c ... -t ...\n>>\n>> -c -t TPS\n>> 5 20000 3777\n>> 10 10000 2622\n>> 20 5000 3759\n>> 30 3333 5712\n>> 40 2500 5953\n>> 50 2000 6141\n>>\n>> New server\n>> -c -t TPS\n>> 5 20000 2733\n>> 10 10000 2783\n>> 20 5000 3241\n>> 30 3333 2987\n>> 40 2500 2739\n>> 50 2000 2119\n> \n> On new server postgresql do not scale at all. Looks like contention. \n\nWhy? The evidence we've seen so far IMHO suggests a poorly performing\nI/O subsystem. Post a few lines of \"vmstat 1\" / \"iostat -x -k 1\"\ncollected when the pgbench is running, that might tell us more.\n\nTry a few very basic I/O tests that are easy to understand rather than\nrunning bonnie++ which is quite complex. For example try this:\n\ntime sh -c \"dd if=/dev/zero of=myfile.tmp bs=8192 count=4194304 && sync\"\n\ndd if=myfile.tmp of=/dev/null bs=8192\n\nThe former measures sequential write speed, the latter measures\nsequential read speed in a very primitive way. Watch vmstat/iostat and\ndon't bother running pgbench until you get a reasonable performance on\nboth systems.\n\n\nTomas\n\n",
"msg_date": "Tue, 09 Oct 2012 01:24:02 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
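For reference, Tomas's dd numbers work out as follows: 4194304 blocks of 8 kB is 32 GB, deliberately larger than the servers' 12 GB of RAM so the read pass cannot be served from the page cache. A hedged sketch of the arithmetic alongside the timed runs from his post:

```shell
# File size written and then read back by the dd test, in MB.
total_mb=$((4194304 * 8192 / 1024 / 1024))
echo "test file size: ${total_mb} MB"    # 32768 MB = 32 GB, well over 12 GB RAM

# Write pass (sync included so buffered writes are charged to the timing):
#   time sh -c "dd if=/dev/zero of=myfile.tmp bs=8192 count=4194304 && sync"
# Read pass:
#   time dd if=myfile.tmp of=/dev/null bs=8192
# Throughput is total_mb divided by elapsed seconds; compare against the
# ~200 MB/s sequential block figures bonnie++ reported for both arrays.
```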
{
"msg_contents": "\nOn Oct 9, 2012, at 3:24 AM, Tomas Vondra <[email protected]> wrote:\n\n> On 9.10.2012 00:33, Evgeny Shishkin wrote:\n>>> \n>>> pgbench: Old server\n>>> \n>>> pgbench -i -s 100 -U test\n>>> pgbench -U test -c ... -t ...\n>>> \n>>> -c -t TPS\n>>> 5 20000 3777\n>>> 10 10000 2622\n>>> 20 5000 3759\n>>> 30 3333 5712\n>>> 40 2500 5953\n>>> 50 2000 6141\n>>> \n>>> New server\n>>> -c -t TPS\n>>> 5 20000 2733\n>>> 10 10000 2783\n>>> 20 5000 3241\n>>> 30 3333 2987\n>>> 40 2500 2739\n>>> 50 2000 2119\n>> \n>> On new server postgresql do not scale at all. Looks like contention. \n> \n> Why? The evidence we've seen so far IMHO suggests a poorly performing\n> I/O subsystem. Post a few lines of \"vmstat 1\" / \"iostat -x -k 1\"\n> collected when the pgbench is running, that might tell us more.\n> \n\nBecause 50 clients can push I/O even with a small read-ahead. And here we see a nice parabola. Just guessing anyway.\n\n> Try a few very basic I/O tests that are easy to understand rather than\n> running bonnie++ which is quite complex. For example try this:\n> \n> time sh -c \"dd if=/dev/zero of=myfile.tmp bs=8192 count=4194304 && sync\"\n> \n> dd if=myfile.tmp of=/dev/null bs=8192\n> \n> The former measures sequential write speed, the latter measures\n> sequential read speed in a very primitive way. Watch vmstat/iostat and\n> don't bother running pgbench until you get a reasonable performance on\n> both systems.\n> \n> \n> Tomas\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Tue, 9 Oct 2012 03:30:31 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "Nobody has commented on the hyperthreading question yet ... does it really\nmatter? The old (fast) server has hyperthreading disabled, and the new\n(slower) server has hyperthreads enabled.\n\nIf hyperthreading is definitely NOT an issue, it will save me a trip to the\nco-lo facility.\n\nThanks,\nCraig\n\nOn Mon, Oct 8, 2012 at 3:29 PM, Craig James <[email protected]> wrote:\n\n> One mistake in my descriptions...\n>\n> On Mon, Oct 8, 2012 at 2:45 PM, Craig James <[email protected]> wrote:\n>\n>> This is driving me crazy. A new server, virtually identical to an old\n>> one, has 50% of the performance with pgbench. I've checked everything I\n>> can think of.\n>>\n>> The setups (call the servers \"old\" and \"new\"):\n>>\n>> old: 2 x 4-core Intel Xeon E5620\n>> new: 4 x 4-core Intel Xeon E5606\n>>\n>\n> Actually it's not 16 cores. It's 8 cores, hyperthreaded. Hyperthreading\n> is disabled on the old system.\n>\n> Is that enough to make this radical difference? (The server is at a\n> co-location site, so I have to go down there to boot into the BIOS and\n> disable hyperthreading.)\n>\n> Craig\n>",
"msg_date": "Mon, 8 Oct 2012 16:40:31 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 09/10/12 12:40, Craig James wrote:\n> Nobody has commented on the hyperthreading question yet ... does it \n> really matter? The old (fast) server has hyperthreading disabled, and \n> the new (slower) server has hyperthreads enabled.\n>\n> If hyperthreading is definitely NOT an issue, it will save me a trip \n> to the co-lo facility.\n>\n> Thanks,\n> Craig\n>\n> On Mon, Oct 8, 2012 at 3:29 PM, Craig James <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> One mistake in my descriptions...\n>\n> On Mon, Oct 8, 2012 at 2:45 PM, Craig James <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> This is driving me crazy. A new server, virtually identical\n> to an old one, has 50% of the performance with pgbench. I've\n> checked everything I can think of.\n>\n> The setups (call the servers \"old\" and \"new\"):\n>\n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n>\n>\n> Actually it's not 16 cores. It's 8 cores, hyperthreaded.\n> Hyperthreading is disabled on the old system.\n>\n> Is that enough to make this radical difference? (The server is at\n> a co-location site, so I have to go down there to boot into the\n> BIOS and disable hyperthreading.)\n>\n> Craig\n>\n>\nMy latest development box (Intel Latest Core i7 3770K Ivy Bridge Quad \nCore with HT 3.4GHz) has hyperthreading - and it *_does_* make a \nsignificant difference.\n\nCheers,\nGavin",
"msg_date": "Tue, 09 Oct 2012 12:52:28 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Tue, Oct 9, 2012 at 2:40 AM, Craig James <[email protected]> wrote:\n> Nobody has commented on the hyperthreading question yet ... does it really\n> matter? The old (fast) server has hyperthreading disabled, and the new\n> (slower) server has hyperthreads enabled.\n>\n> If hyperthreading is definitely NOT an issue, it will save me a trip to the\n> co-lo facility.\n\nHyperthreading will make lock contention issues worse by having more\nthreads fighting. Test the new box with postgres 9.2; if the newer\nversion exhibits much better scaling behavior it strongly suggests lock\ncontention rather than IO being the root cause.\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n",
"msg_date": "Tue, 9 Oct 2012 03:00:20 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 2012-10-08 23:45, Craig James wrote:\n> This is driving me crazy. A new server, virtually identical to an old \n> one, has 50% of the performance with pgbench. I've checked everything \n> I can think of.\n>\n> The setups (call the servers \"old\" and \"new\"):\n>\n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n\nHow are the filesystems formatted and mounted (-o nobarrier?)\n\nregards\nYeb\n\n\n",
"msg_date": "Tue, 09 Oct 2012 13:20:14 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
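Yeb's mount-option question can be answered from a running system: the options for every mounted filesystem are visible in /proc/mounts, no unmount needed. A minimal sketch, assuming a Linux box; the helper name `barrier_state` and the /pgdata mount point are illustrative, not standard tools:

```shell
#!/bin/sh
# Sketch: report whether a filesystem's mount options disable write
# barriers. "nobarrier" (or "barrier=0") means the RAID controller's
# write cache had better be battery-backed.
barrier_state() {
    # $1 is a comma-separated mount option string, as found in /proc/mounts
    case ",$1," in
        *,nobarrier,*|*,barrier=0,*) echo "barriers OFF" ;;
        *)                           echo "barriers on"  ;;
    esac
}

# On a live box you would feed it the real options, e.g.:
#   barrier_state "$(awk '$2 == "/pgdata" {print $4}' /proc/mounts)"
barrier_state "rw,noatime,nobarrier"   # prints: barriers OFF
barrier_state "rw,noatime"             # prints: barriers on
```

Run against the real mount table on both boxes, this makes an -o nobarrier difference jump out immediately.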
{
"msg_contents": "On 10/08/2012 06:40 PM, Craig James wrote:\n\n> Nobody has commented on the hyperthreading question yet ... does it\n> really matter? The old (fast) server has hyperthreading disabled, and\n> the new (slower) server has hyperthreads enabled.\n\nI doubt it's this. With the newer post-Nehalem processors, \nhyperthreading is actually much better than it was before. But you also \nhave this:\n\nCPU Speed L3 Cache DDR3 Speed\nE5606 2.13Ghz 8MB 800Mhz\nE5620 2.4Ghz 12MB 1066Mhz\n\nEven with \"equal\" threads, the CPUs you have in the new server, as \nopposed to the old, are much worse. The E5606 doesn't even have \nhyper-threading, so it's not an issue here. In fact, if you enabled it \non the old server, it would likely get *much faster*.\n\nWe saw a 40% improvement by enabling hyper-threading. Sure, it's not \n100%, but it's not negative or zero, either.\n\nBasically we can see, at the very least, that your servers are not \n\"identical.\" Little things like this can make a massive difference. The \nold server has a much better CPU. Even crippled without hyperthreading, \nI could see it beating the new server.\n\nOne thing you might want to check in the BIOS of the new server, is to \nmake sure that power saving mode is disabled everywhere you can find it. \nSome servers come with that set by default, and that puts the CPU to \nsleep occasionally, and the spin-up necessary to re-engage it is \npunishing and inconsistent. We saw 20-40% drops in pgbench pretty much \nat random, when CPU power saving was enabled.\n\nThis doesn't cover why your IO subsystem is slower on the new system, \nbut I suspect it might have something to do with the memory speed. It \nsuggests a slower PCI bus, which could choke your RAID card.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Tue, 9 Oct 2012 11:02:29 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
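The power-saving setting Shaun warns about can also be inspected from a running Linux system through the cpufreq sysfs files, without a trip into the BIOS. A sketch under that assumption; the helper name `governor_ok` is ours:

```shell
#!/bin/sh
# Sketch: flag CPU frequency governors that can hurt pgbench consistency.
# "performance" keeps cores at full clock; "ondemand"/"powersave" let them
# sleep and re-spin, the behavior blamed for 20-40% random drops above.
governor_ok() {
    [ "$1" = "performance" ] && echo yes || echo no
}

# On a live box, check every core (Linux cpufreq sysfs layout assumed;
# the glob simply matches nothing on systems without cpufreq):
for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    [ -r "$f" ] || continue
    g=$(cat "$f")
    echo "$f: $g (ok=$(governor_ok "$g"))"
done
```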
{
"msg_contents": "On Mon, Oct 08, 2012 at 04:40:31PM -0700, Craig James wrote:\n> Nobody has commented on the hyperthreading question yet ... does it\n> really matter? The old (fast) server has hyperthreading disabled, and\n> the new (slower) server has hyperthreads enabled.\n> If hyperthreading is definitely NOT an issue, it will save me a trip to\n> the co-lo facility.\n\n From my reading it seems that hyperthreading hasn't been a major issue\nfor quite some time on modern kernels.\nhttp://archives.postgresql.org/pgsql-performance/2004-10/msg00052.php\n\nI doubt it would hurt much, but I wouldn't make a special trip to the\nco-lo to change it.\n-- \nDavidT\n\n",
"msg_date": "Tue, 9 Oct 2012 12:14:48 -0400",
"msg_from": "David Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different\n performance"
},
{
"msg_contents": "On Tue, Oct 9, 2012 at 9:02 AM, Shaun Thomas <[email protected]> wrote:\n\n> On 10/08/2012 06:40 PM, Craig James wrote:\n>\n>> Nobody has commented on the hyperthreading question yet ... does it\n>> really matter? The old (fast) server has hyperthreading disabled, and\n>> the new (slower) server has hyperthreads enabled.\n>>\n>\n> I doubt it's this. With the newer post-Nehalem processors, hyperthreading\n> is actually much better than it was before. But you also have this:\n>\n> CPU Speed L3 Cache DDR3 Speed\n> E5606 2.13Ghz 8MB 800Mhz\n> E5620 2.4Ghz 12MB 1066Mhz\n>\n> Even with \"equal\" threads, the CPUs you have in the new server, as\n> opposed to the old, are much worse. The E5606 doesn't even have\n> hyper-threading, so it's not an issue here. In fact, if you enabled it on\n> the old server, it would likely get *much faster*.\n>\n\nEven more mysterious, because it turns out it's backwards. I\ncopy-and-pasted the CPU information wrong. I wrote:\n\n> old: 2 x 4-core Intel Xeon E5620\n> new: 4 x 4-core Intel Xeon E5606\n\nThe correct configuration is:\n\nold: 2x4-core Intel Xeon E5606 2.133 GHz\nnew: 2x4-core Intel Xeon E5620 2.40 GHz\n\nSo that makes the poor performance of the new system even more mystifying.\n\nI'm going down there right now to disable hyperthreading and see if that's\nthe answer. So far, that's the only concrete thing that I've been able to\ndiscover that's different between the two systems.\n\n>\n> We saw a 40% improvement by enabling hyper-threading. Sure, it's not 100%,\n> but it's not negative or zero, either.\n>\n> Basically we can see, at the very least, that your servers are not\n> \"identical.\" Little things like this can make a massive difference. The old\n> server has a much better CPU. Even crippled without hyperthreading, I could\n> see it beating the new server.\n>\n> One thing you might want to check in the BIOS of the new server, is to\n> make sure that power saving mode is disabled everywhere you can find it.\n> Some servers come with that set by default, and that puts the CPU to sleep\n> occasionally, and the spin-up necessary to re-engage it is punishing and\n> inconsistent. We saw 20-40% drops in pgbench pretty much at random, when\n> CPU power saving was enabled.\n>\n\nThanks, I'll double check that too. That's a good suspect.\n\n>\n> This doesn't cover why your IO subsystem is slower on the new system, but\n> I suspect it might have something to do with the memory speed. It suggests\n> a slower PCI bus, which could choke your RAID card.\n>\n\nThe motherboards are supposed to be identical. But I'll double check that\ntoo.\n\nCraig\n\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer/ for terms and conditions\n> related to this email",
"msg_date": "Tue, 9 Oct 2012 09:41:27 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Tue, Oct 9, 2012 at 9:14 AM, David Thomas <[email protected]> wrote:\n\n> On Mon, Oct 08, 2012 at 04:40:31PM -0700, Craig James wrote:\n> > Nobody has commented on the hyperthreading question yet ... does it\n> > really matter? The old (fast) server has hyperthreading disabled, and\n> > the new (slower) server has hyperthreads enabled.\n> > If hyperthreading is definitely NOT an issue, it will save me a trip\n> to\n> > the co-lo facility.\n>\n> From my reading it seems that hyperthreading hasn't been a major issue\n> for quite sometime on modern kernels.\n> http://archives.postgresql.org/pgsql-performance/2004-10/msg00052.php\n>\n> I doubt it would hurt much, but I wouldn't make a special trip to the\n> co-lo to change it.\n>\n\nAt this point I've discovered no other options, so down to the co-lo I go.\nI'm also going to check power-save options and the RAID controller's\nbuilt-in configuration to see if I overlooked something there (readahead,\nblocksize, whatever).\n\nCraig\n\n\n> --\n> DavidT\n>",
"msg_date": "Tue, 9 Oct 2012 09:43:18 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
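The readahead/scheduler comparison Craig plans to make at the co-lo can be scripted so the two boxes' settings are easy to diff. A sketch assuming Linux sysfs and util-linux blockdev; `dump_io_settings` and `active_scheduler` are our names, and the blockdev call needs read access to the device:

```shell
#!/bin/sh
# Sketch: dump the per-device I/O settings worth diffing between the
# "old" and "new" boxes. The active scheduler is the bracketed entry in
# /sys/block/<dev>/queue/scheduler, e.g. "noop deadline [cfq]".
active_scheduler() {
    echo "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

dump_io_settings() {
    dev=$1                                   # e.g. sdb
    echo "readahead:   $(blockdev --getra "/dev/$dev")"
    echo "scheduler:   $(active_scheduler "$(cat "/sys/block/$dev/queue/scheduler")")"
    echo "nr_requests: $(cat "/sys/block/$dev/queue/nr_requests")"
}

# dump_io_settings sdb   # run on both machines and diff the output
active_scheduler "noop deadline [cfq]"   # prints: cfq
```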
{
"msg_contents": "On 10/09/2012 01:40 AM, Craig James wrote:\n> Nobody has commented on the hyperthreading question yet ... does it really matter? The old (fast) server has hyperthreading disabled, and the new (slower) server has hyperthreads enabled.\n>\n> If hyperthreading is definitely NOT an issue, it will save me a trip to the co-lo facility.\n\n\nsorry to come late to the party, but being in a similar condition\nI've googled a bit and I've found a way to disable hyperthreading without\nthe need to reboot the system and enter the bios:\n\necho 0 >/sys/devices/system/node/node0/cpuX/online\n\nwhere X belongs to 1..(#cores * 2) if hyperthreading is enabled\n(cpu0 can't be switched off).\n\nI didn't try it myself on a live system, but I definitely will\nas soon as I have a new machine to test.\n\nAndrea\n\n\n\n",
"msg_date": "Thu, 11 Oct 2012 16:14:11 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
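The sysfs trick above only helps if the right logical CPUs are offlined: for each physical core, every entry in thread_siblings_list except the lowest-numbered one is a hyperthread sibling. A sketch of that selection, assuming the usual Linux layout /sys/devices/system/cpu/cpuN/topology and pairwise sibling lists ("0,8" or "0-1"); the helper name is ours:

```shell
#!/bin/sh
# Sketch: given the contents of
# /sys/devices/system/cpu/cpuN/topology/thread_siblings_list (a
# hyperthread pair like "0,8" or "2-3"), print the sibling(s) to take
# offline, i.e. everything except the lowest-numbered (primary) thread.
ht_siblings_to_offline() {
    ids=$(echo "$1" | tr ',-' '  ')          # "0,8" / "2-3" -> "0 8" / "2 3"
    first=$(echo "$ids" | awk '{print $1}')  # keep the primary thread
    for id in $ids; do
        [ "$id" != "$first" ] && echo "$id"
    done
}

# On a live box (cpu0 itself cannot be offlined) you would then run,
# as root, for each printed id N:
#   echo 0 > /sys/devices/system/cpu/cpu$N/online
ht_siblings_to_offline "0,8"   # prints: 8
```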
{
"msg_contents": "On Thu, Oct 11, 2012 at 11:14 AM, Andrea Suisani <[email protected]> wrote:\n> sorry to come late to the party, but being in a similar condition\n> I've googled a bit and I've found a way to disable hyperthreading without\n> the need to reboot the system and entering the bios:\n>\n> echo 0 >/sys/devices/system/node/node0/cpuX/online\n>\n> where X belongs to 1..(#cores * 2) if hyperthreading is enabled\n> (cpu0 can't be switched off).\n>\n> didn't try myself on live system, but I definitely will\n> as soon as I have a new machine to test.\n\nQuestion is... will that remove the performance penalty of HyperThreading?\n\nI don't think so, because a big one is the register file split (half\nthe hardware registers go to a CPU, half to the other). If that action\ndoesn't tell the CPU to \"unsplit\", some shared components may become\nunbogged, like the decode stage probably, but I'm not sure it's the\nsame as disabling it from the BIOS.\n\n",
"msg_date": "Thu, 11 Oct 2012 11:19:33 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/11/2012 04:19 PM, Claudio Freire wrote:\n> On Thu, Oct 11, 2012 at 11:14 AM, Andrea Suisani <[email protected]> wrote:\n>> sorry to come late to the party, but being in a similar condition\n>> I've googled a bit and I've found a way to disable hyperthreading without\n>> the need to reboot the system and entering the bios:\n>>\n>> echo 0 >/sys/devices/system/node/node0/cpuX/online\n>>\n>> where X belongs to 1..(#cores * 2) if hyperthreading is enabled\n>> (cpu0 can't be switched off).\n>>\n>> didn't try myself on live system, but I definitely will\n>> as soon as I have a new machine to test.\n>\n> Question is... will that remove the performance penalty of HyperThreading?\n\nSo I've added to my todo list to perform a test to verify this claim :)\n\n> I don't think so, because a big one is the register file split (half\n> the hardware registers go to a CPU, half to the other). If that action\n> doesn't tell the CPU to \"unsplit\", some shared components may become\n> unbogged, like the decode stage probably, but I'm not sure it's the\n> same as disabling it from the BIOS.\n\nAlthough I think that you're probably right to assume that disabling HT\nthrough the syfs interface won't remove the performance penalty for real.\n\nthanks\n\nAndrea\n\n",
"msg_date": "Thu, 11 Oct 2012 16:40:14 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/11/2012 04:40 PM, Andrea Suisani wrote:\n> On 10/11/2012 04:19 PM, Claudio Freire wrote:\n>> On Thu, Oct 11, 2012 at 11:14 AM, Andrea Suisani <[email protected]> wrote:\n>>> sorry to come late to the party, but being in a similar condition\n>>> I've googled a bit and I've found a way to disable hyperthreading without\n>>> the need to reboot the system and entering the bios:\n>>>\n>>> echo 0 >/sys/devices/system/node/node0/cpuX/online\n>>>\n>>> where X belongs to 1..(#cores * 2) if hyperthreading is enabled\n>>> (cpu0 can't be switched off).\n>>>\n>>> didn't try myself on live system, but I definitely will\n>>> as soon as I have a new machine to test.\n>>\n>> Question is... will that remove the performance penalty of HyperThreading?\n>\n> So I've added to my todo list to perform a test to verify this claim :)\n\ndone.\n\nin brief: the box is a Dell PowerEdge r720 with 16GB of RAM,\nthe cpu is a Xeon 5620 with 6 cores, the OS is installed on a raid\n(sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n(sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\nof 512 MB).\n\nPostgres ver 9.2.1 (sorry for not having benchmarked 9.1,\nbut this is what we plan to deploy in production). Both the OS\n(Ubuntu 12.04.1) and Postgres had been briefly tuned according\nto the usual standards while trying to mimic Craig's configuration\n(see specific settings at the bottom).\n\nTPS including connection establishing, pgbench run in a single\nthread mode, connection made through unix socket, OS cache dropped\nand Postgres restarted for every run.\n\nthose are the results:\n\n HT HT SYSFS DIS HT BIOS DISABLE\n-c -t r1 r2 r3 r1 r2 r3 r1 r2 r3\n5 20K 1641 1831 1496 2020 1974 2033 2005 1988 1967\n10 10K 2161 2134 2136 2277 2252 2216 1854 1824 1810\n20 5k 2550 2508 2558 2417 2388 2357 1924 1928 1954\n30 3333 2216 2272 2250 2333 2493 2496 1993 2009 2008\n40 2.5K 2179 2221 2250 2568 2535 2500 2025 2048 2018\n50 2K 2217 2213 2213 2487 2449 2604 2112 2016 2023\n\nDespite the fact that the results don't match my expectation\n(I suspect that there's something wrong with the PERC\nbecause, having the controller cache enabled makes no\ndifference in terms of TPS), it seems strange that disabling\nHT from the bios gives lower TPS than HT disabled through the\nsysfs interface.\n\nOS conf:\n\nvm.swappiness=0\nvm.overcommit_memory=2\nvm.dirty_ratio=2\nvm.dirty_background_ratio=1\nkernel.shmmax=3454820352\nkernel.shmall=2048341\n/sbin/blockdev --setra 8192 /dev/sdb\n$PGDATA is on ext4 (rw,noatime)\nLinux cloud 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux\nsdb scheduler is [cfq]\n\nDB conf:\n\nmax_connections = 100\nshared_buffers = 3200MB\nwork_mem = 30MB\nmaintenance_work_mem = 800MB\nsynchronous_commit = off\nfull_page_writes = off\ncheckpoint_segments = 40\ncheckpoint_timeout = 5min\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 3.5\neffective_cache_size = 10GB\nlog_autovacuum_min_duration = 0\nautovacuum_naptime = 5min\n\n\nAndrea\n\np.s. as a last try in the process of increasing TPS\nI've changed the scheduler from cfq to deadline\nand for -c 5 -t 20K I've got r1=3007, r2=2930 and r3=2985.\n\n\n\n\n",
"msg_date": "Mon, 15 Oct 2012 10:27:10 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
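Andrea's -c/-t pairs all hold the total work constant at roughly 100K transactions, so -t follows directly from -c. A sketch that rebuilds the invocations under that reading; the exact pgbench flags (-n, the database name) were not in the post and are our guesses:

```shell
#!/bin/sh
# Sketch: rebuild Andrea's pgbench invocations. Each table row keeps the
# total at ~100K transactions, so -t is just 100000 / -c (hence the odd
# "-c 30 -t 3333" row). -n skips the vacuum step between runs.
pgbench_cmd() {
    c=$1
    t=$((100000 / c))
    echo "pgbench -n -c $c -t $t bench"
}

for c in 5 10 20 30 40 50; do
    pgbench_cmd "$c"
done
```

Keeping total transactions fixed, rather than -t, is what makes the rows comparable: every run does the same amount of work regardless of client count.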
{
"msg_contents": "On Mon, Oct 15, 2012 at 1:27 AM, Andrea Suisani <[email protected]> wrote:\n> On 10/11/2012 04:40 PM, Andrea Suisani wrote:\n>>\n>> On 10/11/2012 04:19 PM, Claudio Freire wrote:\n>>>\n>>> On Thu, Oct 11, 2012 at 11:14 AM, Andrea Suisani <[email protected]>\n>>> wrote:\n>>>>\n>>>> sorry to come late to the party, but being in a similar condition\n>>>> I've googled a bit and I've found a way to disable hyperthreading\n>>>> without\n>>>> the need to reboot the system and entering the bios:\n>>>>\n>>>> echo 0 >/sys/devices/system/node/node0/cpuX/online\n>>>>\n>>>> where X belongs to 1..(#cores * 2) if hyperthreading is enabled\n>>>> (cpu0 can't be switched off).\n>>>>\n>>>> didn't try myself on live system, but I definitely will\n>>>> as soon as I have a new machine to test.\n>>>\n>>>\n>>> Question is... will that remove the performance penalty of\n>>> HyperThreading?\n>>\n>>\n>> So I've added to my todo list to perform a test to verify this claim :)\n>\n>\n> done.\n>\n> in a brief: the box is dell a PowerEdge r720 with 16GB of RAM,\n> the cpu is a Xeon 5620 with 6 core, the OS is installed on a raid\n> (sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n> (sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\n> of 512 MB).\n>\n> Postgres ver 9.2.1 (sorry for not having benchmarked 9.1,\n> but this what we plan to deploy in production). Both the OS\n> (Ubuntu 12.04.1) and Postgres had been briefly tuned according\n> to the usal standards while trying to mimic Craig's configuration\n> (see specific settings at the bottom).\n>\n> TPS including connection establishing, pgbench run in a single\n> thread mode, connection made through unix socket, OS cache dropped\n> and Postgres restarted for every run.\n>\n> those are the results:\n>\n> HT HT SYSFS DIS HT BIOS DISABLE\n> -c -t r1 r2 r3 r1 r2 r3 r1 r2 r3\n> 5 20K 1641 1831 1496 2020 1974 2033 2005 1988 1967\n> 10 10K 2161 2134 2136 2277 2252 2216 1854 1824 1810\n> 20 5k 2550 2508 2558 2417 2388 2357 1924 1928 1954\n> 30 3333 2216 2272 2250 2333 2493 2496 1993 2009 2008\n> 40 2.5K 2179 2221 2250 2568 2535 2500 2025 2048 2018\n> 50 2K 2217 2213 2213 2487 2449 2604 2112 2016 2023\n>\n> Despite the fact the results don't match my expectation\n\nYou have a RAID1 with 15K SAS disks. I have a RAID10 with 8 7200 SATA\ndisks plus another RAID1 for the XLOG file system. Ten 7K SATA disks\non two file systems should be quite a bit faster than two 15K SAS\ndisks, right?\n\n> (I suspect that there's something wrong with the PERC\n> because, having the controller cache enabled make no\n> difference in terms of TPS), it seems strange that disabling\n> HT from the bios will give lesser TPS that HT disable through\n> sysfs interface.\n\nWell, all I can say is that I like my 3WARE controllers, and it's the\nsecondary reason why I moved away from Dell (the primary reason is\nprice).\n\nCraig\n\n>\n> OS conf:\n>\n> vm.swappiness=0\n> vm.overcommit_memory=2\n> vm.dirty_ratio=2\n> vm.dirty_background_ratio=1\n> kernel.shmmax=3454820352\n> kernel.shmall=2048341\n> /sbin/blockdev --setra 8192 /dev/sdb\n> $PGDATA is on ext4 (rw,noatime)\n> Linux cloud 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012\n> x86_64 x86_64 x86_64 GNU/Linux\n> sdb scheduler is [cfq]\n>\n> DB conf:\n>\n> max_connections = 100\n> shared_buffers = 3200MB\n> work_mem = 30MB\n> maintenance_work_mem = 800MB\n> synchronous_commit = off\n> full_page_writes = off\n> checkpoint_segments = 40\n> checkpoint_timeout = 5min\n> checkpoint_completion_target = 0.9\n> random_page_cost = 3.5\n> effective_cache_size = 10GB\n> log_autovacuum_min_duration = 0\n> autovacuum_naptime = 5min\n>\n>\n> Andrea\n>\n> p.s. as last try in the process of increasing TPS\n> I've change the scheduler from cfq to deadline\n> and for -c 5 t 20K I've got r1=3007, r2=2930 and r3=2985.\n>\n>\n>\n\n",
"msg_date": "Mon, 15 Oct 2012 08:01:08 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 5:27 AM, Andrea Suisani <[email protected]> wrote:\n> it seems strange that disabling\n> HT from the bios will give lesser TPS that HT disable through\n> sysfs interface.\n\nIt does prove they're not equivalent though.\n\n",
"msg_date": "Mon, 15 Oct 2012 12:01:37 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/15/2012 05:01 PM, Claudio Freire wrote:\n> On Mon, Oct 15, 2012 at 5:27 AM, Andrea Suisani <[email protected]> wrote:\n>> it seems strange that disabling\n>> HT from the bios will give lesser TPS that HT disable through\n>> sysfs interface.\n>\n> It does prove they're not equivalent though.\n>\n\nsure you're right.\n\nIt's just that my bet was on a higher throughput\nwhen HT was disabled from the BIOS (as you stated\npreviously in this thread).\n\nAndrea\n\n\n\n\n",
"msg_date": "Mon, 15 Oct 2012 17:24:58 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani <[email protected]> wrote:\n>> It does prove they're not equivalent though.\n>>\n>\n> sure you're right.\n>\n> It's just that my bet was on a higher throughput\n> when HT was disabled from the BIOS (as you stated\n> previously in this thread).\n\nYes, mine too. It's bizarre. If I were you, I'd look into it more\ndeeply. It may be a flaw in your test methodology (maybe you disabled\nthe wrong cores?). If not, it would be good to know why the extra TPS,\nto replicate it elsewhere.\n\n",
"msg_date": "Mon, 15 Oct 2012 12:28:45 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 9:01 AM, Craig James <[email protected]> wrote:\n> On Mon, Oct 15, 2012 at 1:27 AM, Andrea Suisani <[email protected]> wrote:\n>> (I suspect that there's something wrong with the PERC\n>> because, having the controller cache enabled make no\n>> difference in terms of TPS), it seems strange that disabling\n>> HT from the bios will give lesser TPS that HT disable through\n>> sysfs interface.\n>\n> Well, all I can say is that I like my 3WARE controllers, and it's the\n> secondary reason why I moved away from Dell (the primary reason is\n> price).\n\nMediocre performance, random lockups, and Dell's refusal to address\nsaid lockups are the reasons I abandoned Dell's PERC controllers. My\npreference is Areca 1680/1880, then 3Ware 96xx, then LSI, then\nAdaptec. Areca's web interface on a dedicated ethernet port makes them\nsuper easy to configure while the machine is running with no need for\nspecialized software for a given OS, and their performance and\nreliability are great. The 3Wares are very solid with later model\nBIOS on board. LSI gets a raspberry for MegaCLI, the 2nd klunkiest\ninterface ever, the worst being their horrible horrible BIOS boot\nsetup screen.\n\n",
"msg_date": "Mon, 15 Oct 2012 09:32:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 9:28 AM, Claudio Freire <[email protected]> wrote:\n> On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani <[email protected]> wrote:\n>> sure you're right.\n>>\n>> It's just that my bet was on a higher throughput\n>> when HT was disabled from the BIOS (as you stated\n>> previously in this thread).\n>\n> Yes, mine too. It's bizarre. If I were you, I'd look into it more\n> deeply. It may be a flaw in your test methodology (maybe you disabled\n> the wrong cores?). If not, it would be good to know why the extra TPS\n> to replicate elsewhere.\n\nI'd recommend more synthetic benchmarks when trying to compare systems\nlike this. bonnie++, the memory stream test that Greg Smith was\nworking on, and so on. Get an idea what core differences the machines\ndisplay under such testing.\n\n",
"msg_date": "Mon, 15 Oct 2012 09:34:39 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "[cut]\n\n\n>> TPS including connection establishing, pgbench run in a single\n>> thread mode, connection made through unix socket, OS cache dropped\n>> and Postgres restarted for every run.\n>>\n>> those are the results:\n>>\n>> HT HT SYSFS DIS HT BIOS DISABLE\n>> -c -t r1 r2 r3 r1 r2 r3 r1 r2 r3\n>> 5 20K 1641 1831 1496 2020 1974 2033 2005 1988 1967\n>> 10 10K 2161 2134 2136 2277 2252 2216 1854 1824 1810\n>> 20 5k 2550 2508 2558 2417 2388 2357 1924 1928 1954\n>> 30 3333 2216 2272 2250 2333 2493 2496 1993 2009 2008\n>> 40 2.5K 2179 2221 2250 2568 2535 2500 2025 2048 2018\n>> 50 2K 2217 2213 2213 2487 2449 2604 2112 2016 2023\n>>\n>> Despite the fact the results don't match my expectation\n>\n> You have a RAID1 with 15K SAS disks. I have a RAID10 with 8 7200 SATA\n> disks plus another RAID1 for the XLOG file system. Ten 7K SATA disks\n> on two file systems should be quite a bit faster than two 15K SAS\n> disks, right?\n\nI think you're right. But I've never had the chance to try such\na configuration first-hand. But, yes, spreading I/O on two\ndifferent subsystems (xlog and pgdata) and having pgdata on\na RAID10 should surely outperform my RAID1 with 15K SAS disks.\n\n>> (I suspect that there's something wrong with the PERC\n>> because, having the controller cache enabled make no\n>> difference in terms of TPS), it seems strange that disabling\n>> HT from the bios will give lesser TPS that HT disable through\n>> sysfs interface.\n>\n> Well, all I can say is that I like my 3WARE controllers, and it's the\n> secondary reason why I moved away from Dell (the primary reason is\n> price).\n\nThat's something I will surely take into account the next time\nI buy a new server.\n\nAndrea\n\n\n",
"msg_date": "Mon, 15 Oct 2012 17:45:24 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/15/2012 05:28 PM, Claudio Freire wrote:\n> On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani <[email protected]> wrote:\n>>> It does prove they're not equivalent though.\n>>>\n>>\n>> sure you're right.\n>>\n>> It's just that my bet was on a higher throughput\n>> when HT was disabled from the BIOS (as you stated\n>> previously in this thread).\n>\n> Yes, mine too. It's bizarre. If I were you, I'd look into it more\n> deeply. It may be a flaw in your test methodology (maybe you disabled\n> the wrong cores?).\n\nthis is the first thing I thought after looking at the results,\nbut I've double-checked the core topology (core_id, core_siblings_list and\nfriends under /sys/devices/system/cpu/cpu0/topology) and it seems\nto me that I've disabled the right ones.\n\nIt could be that I've messed up something else...\n\n > If not, it would be good to know why the extra TPS\n> to replicate elsewhere.\n\nI will definitely try to understand the\nprobable causes by performing other tests...\nany hints are welcome :)\n\n\n>\n\n\n",
"msg_date": "Mon, 15 Oct 2012 17:56:07 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/15/2012 05:34 PM, Scott Marlowe wrote:\n> On Mon, Oct 15, 2012 at 9:28 AM, Claudio Freire <[email protected]> wrote:\n>> On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani <[email protected]> wrote:\n>>> sure you're right.\n>>>\n>>> It's just that my bet was on a higher throughput\n>>> when HT was disabled from the BIOS (as you stated\n>>> previously in this thread).\n>>\n>> Yes, mine too. It's bizarre. If I were you, I'd look into it more\n>> deeply. It may be a flaw in your test methodology (maybe you disabled\n>> the wrong cores?). If not, it would be good to know why the extra TPS\n>> to replicate elsewhere.\n>\n> I'd recommend more synthetic benchmarks when trying to compare systems\n> like this. bonnie++, the memory stream test that Greg Smith was\n> working on, and so on. Get an idea what core differences the machines\n> display under such testing.\n>\n\nWill try tomorrow,\nthanks for the hint\n\nAndrea\n\n",
"msg_date": "Mon, 15 Oct 2012 17:56:44 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 15.10.2012 17:01, Craig James wrote:\n>>>> On Thu, Oct 11, 2012 at 11:14 AM, Andrea Suisani <[email protected]>\n>>>> wrote:\n>>>>> I've googled a bit and I've found a way to disable hyperthreading\n>>>>> without\n>>>>> the need to reboot the system and entering the bios:\n>>>>>\n>>>>> echo 0 >/sys/devices/system/node/node0/cpuX/online\n\nA safer method is probably to just add the \"noht\" kernel boot option and\nreboot.\n\nDid you set the same stride / stripe-width values on your FS when you\ninitialized them? Are both really freshly-made ext4 FS and not e.g. the\nold one an ext3 mounted as ext4? Do all the disks have the same cache,\nlink speed and NCQ settings (for their own caches, not the controller;\ntry /c0/p0 show all etc. with tw_cli)?\n\n-mjy\n\n",
"msg_date": "Tue, 16 Oct 2012 07:07:14 +0200",
"msg_from": "Marinos Yannikos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/15/2012 05:34 PM, Scott Marlowe wrote:\n> On Mon, Oct 15, 2012 at 9:28 AM, Claudio Freire <[email protected]> wrote:\n>> On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani <[email protected]> wrote:\n>>> sure you're right.\n>>>\n>>> It's just that my bet was on a higher throughput\n>>> when HT was disabled from the BIOS (as you stated\n>>> previously in this thread).\n>>\n>> Yes, mine too. It's bizarre. If I were you, I'd look into it more\n>> deeply. It may be a flaw in your test methodology (maybe you disabled\n>> the wrong cores?). If not, it would be good to know why the extra TPS\n>> to replicate elsewhere.\n>\n> I'd recommend more synthetic benchmarks when trying to compare systems\n> like this. bonnie++,\n\nyou were right. bonnie++ (-f -n 0 -c 4) shows that there's very little (if any)\ndifference in terms of sequential input whether or not the cache is enabled on the\nRAID1 (SAS 15K, sdb).\n\nI've run 2 bonnie++ tests with both cache enabled and disabled, and what I get\n(see attachments for more details) is 400MB/s sequential input (cache) vs\n390MB/s (nocache).\n\nI dunno why, but I would have expected a higher delta (due to the 512MB cache),\nnot a mere 10MB/s; this is only based on my gut feeling, though.\n\nI've also tried to test the RAID1 array where the OS is installed (2 SATA 7.2Krpm, sda)\njust to verify whether the cache effect is comparable with the one I get from the SAS disks.\n\nWell, it seems that there's no cache effect, or if there is one it's so small as to be\nconfused with the noise.\n\nBoth arrays are configured with these params:\n\nRead Policy : Adaptive Read Ahead\nWrite Policy : Write Back\nStripe Element Size : 64 KB\nDisk Cache Policy : Disabled\n\nthose tests were performed with HT disabled from the BIOS, but without\nusing the noht kernel boot param. 
the scheduler for sdb was set to deadline,\nwhile sda kept the default cfq.\n\n> the memory stream test that Greg Smith was\n> working on, and so on.\n\nthis one https://github.com/gregs1104/stream-scaling, right?\n\nI've executed the test with HT enabled, HT disabled from the BIOS,\nand HT disabled using the sysfs interface. Attached are 3 graphs and related\ntext files\n\n\n> Get an idea what core differences the machines\n> display under such testing.\n\nI'm trying... hard :)\n\nAndrea",
"msg_date": "Wed, 17 Oct 2012 17:45:23 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani <[email protected]> wrote:\n> On 10/15/2012 05:34 PM, Scott Marlowe wrote:\n>> I'd recommend more synthetic benchmarks when trying to compare systems\n>> like this. bonnie++,\n>\n>\n> you were right. bonnie++ (-f -n 0 -c 4) show that there's very little (if\n> any)\n> difference in terms of sequential input whether or not cache is enabled on\n> the\n> RAID1 (SAS 15K, sdb).\n\nI'm mainly wanting to know the difference between the two systems, so\nif you can run it on the old and new machine and compare that that's\nthe real test.\n\n> I've run 2 bonnie++ test with both cache enabled and disabled and what I get\n> (see attachments for more details) it's a 400MB/s sequential input (cache)\n> vs\n> 390MBs (nocache).\n>\n> I dunno why but I would have expected a higher delta (due to the 512MB\n> cache)\n> not a mere 10MB/s, but this is only based on my gut feeling.\n\nWell the sequential throughput doesn't really rely on caching. It's\nthe random writes that benefit from caching, and the other things\n(random reads and seq read/write) that indirectly benefit because the\nrandom writes are so much faster that they no longer get in the way.\nSo mostly compare random access between the old and new machines and\nlook for differences there.\n>> the memory stream test that Greg Smith was\n>> working on, and so on.\n>\n>\n> this one https://github.com/gregs1104/stream-scaling, right?\n\nYep.\n\n> I've executed the test with HT enabled, HT disabled from the BIOS\n> and HT disable using sys interface. Attached 3 graphs and related\n> text files\n\nWell it's pretty meh. I'd like to see the older machine compared to\nthe newer one here tho.\n\n> I'm trying... hard :)\n\nYou're doing great. These problems take effort to sort out.\n\n",
"msg_date": "Wed, 17 Oct 2012 10:35:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On 10/17/2012 06:35 PM, Scott Marlowe wrote:\n> On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani <[email protected]> wrote:\n>> On 10/15/2012 05:34 PM, Scott Marlowe wrote:\n>>> I'd recommend more synthetic benchmarks when trying to compare systems\n>>> like this. bonnie++,\n>>\n>>\n>> you were right. bonnie++ (-f -n 0 -c 4) show that there's very little (if\n>> any)\n>> difference in terms of sequential input whether or not cache is enabled on\n>> the\n>> RAID1 (SAS 15K, sdb).\n\nMaybe there's a misunderstanding here.. :) Craig (James) is the one\nwho started this thread. I joined later, suggesting a way to\ndisable HT without rebooting (using the sysfs interface), trying to spare\nCraig a trip to the data-center.\n\nAt that point Claudio Freire wondered whether disabling HT from sysfs\nwould have removed the performance penalty that Craig had experienced.\n\nSo I decided to test this on a brand new box that I've just bought.\n\nWhen performing this test I discovered by chance that\nthe raid controller (PERC H710) behaves in an unexpected way,\nbecause the hw cache has almost no effect in terms of TPS in\na pgbench session.\n\n> I'm mainly wanting to know the difference between the two systems, so\n> if you can run it on the old and new machine and compare that that's\n> the real test.\n\nThis is something that Craig can do.\n\n[cut]\n\n>> I dunno why but I would have expected a higher delta (due to the 512MB\n>> cache)\n>> not a mere 10MB/s, but this is only based on my gut feeling.\n >\n> Well the sequential throughput doesn't really rely on caching. 
It's\n> the random writes that benefit from caching, and the other things\n> (random reads and seq read/write) that indirectly benefit because the\n> random writes are so much faster that they no longer get in the way.\n> So mostly compare random access between the old and new machines and\n> look for differences there.\n\nmakes sense.\n\nI will focus on tests that measure random path access.\n\n>>> the memory stream test that Greg Smith was\n>>> working on, and so on.\n>>\n>>\n>> this one https://github.com/gregs1104/stream-scaling, right?\n>\n> Yep.\n>\n>> I've executed the test with HT enabled, HT disabled from the BIOS\n>> and HT disable using sys interface. Attached 3 graphs and related\n>> text files\n>\n> Well it's pretty meh.\n\n:/\n\ndo you think that the Xeon 5620 performs poorly?\n\n> I'd like to see the older machine compared to\n> the newer one here tho.\n\nalso this one is on Craig's side.\n\n>> I'm trying... hard :)\n>\n> You're doing great. These problems take effort to sort out.\n\nthanks\n\n\n\n",
"msg_date": "Thu, 18 Oct 2012 08:57:12 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "On Wed, Oct 17, 2012 at 11:57 PM, Andrea Suisani <[email protected]> wrote:\n> On 10/17/2012 06:35 PM, Scott Marlowe wrote:\n>>\n>> On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani <[email protected]>\n>> wrote:\n>>>\n>>> On 10/15/2012 05:34 PM, Scott Marlowe wrote:\n>>>>\n>>>> I'd recommend more synthetic benchmarks when trying to compare systems\n>>>> like this. bonnie++,\n>>>\n>>>\n>>>\n>>> you were right. bonnie++ (-f -n 0 -c 4) show that there's very little (if\n>>> any)\n>>> difference in terms of sequential input whether or not cache is enabled\n>>> on\n>>> the\n>>> RAID1 (SAS 15K, sdb).\n>\n>\n> Maybe there's a misunderstanding here.. :) Craig (James) is the one\n> the had started this thread. I've joined later suggesting a way to\n> disable HT without rebooting (using sysfs interface), trying to avoid\n> a trip to the data-center to Craig.\n>\n> At that point Claudio Freire wondering if disabling HT from sysfs\n> would have removed the performance penalty that Craig has experienced.\n>\n> So I decided to test this on a brand new box that I've just bought.\n>\n> When performing this test I've discovered by chance that\n> the raid controller (PERC H710) behave in an unexpected way,\n> cause the hw cache has almost no effect in terms of TPS in\n> a pgbench session.\n>\n>> I'm mainly wanting to know the difference between the two systems, so\n>> if you can run it on the old and new machine and compare that that's\n>> the real test.\n>\n>\n> This is something that Craig can do.\n\nToo late ... the new machine is in production.\n\nCraig\n\n>\n> [cut]\n>\n>>> I dunno why but I would have expected a higher delta (due to the 512MB\n>>> cache)\n>>> not a mere 10MB/s, but this is only based on my gut feeling.\n>\n>>\n>>\n>> Well the sequential throughput doesn't really rely on caching. 
It's\n>> the random writes that benefit from caching, and the other things\n>> (random reads and seq read/write) that indirectly benefit because the\n>> random writes are so much faster that they no longer get in the way.\n>> So mostly compare random access between the old and new machines and\n>> look for differences there.\n>\n>\n> make sense.\n>\n> I will focus on tests that measure random path access.\n>\n>>>> the memory stream test that Greg Smith was\n>>>> working on, and so on.\n>>>\n>>>\n>>>\n>>> this one https://github.com/gregs1104/stream-scaling, right?\n>>\n>>\n>> Yep.\n>>\n>>> I've executed the test with HT enabled, HT disabled from the BIOS\n>>> and HT disable using sys interface. Attached 3 graphs and related\n>>> text files\n>>\n>>\n>> Well it's pretty meh.\n>\n>\n> :/\n>\n> do you think that Xeon Xeon 5620 perform poorly ?\n>\n>> I'd like to see the older machine compared to\n>> the newer one here tho.\n>\n>\n> also this one is on Craig side.\n>\n>>> I'm trying... hard :)\n>>\n>>\n>> You're doing great. These problems take effort to sort out.\n>\n>\n> thanks\n>\n>\n\n",
"msg_date": "Thu, 18 Oct 2012 09:39:45 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Two identical systems, radically different performance"
},
{
"msg_contents": "[sorry for resuming an old thread]\n\n[cut]\n\n>>> Question is... will that remove the performance penalty of HyperThreading?\n>>\n>> So I've added to my todo list to perform a test to verify this claim :)\n>\n> done.\n\non this box:\n\n> in a brief: the box is dell a PowerEdge r720 with 16GB of RAM,\n> the cpu is a Xeon 5620 with 6 core, the OS is installed on a raid\n> (sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n> (sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\n> of 512 MB). (ubuntu 12.04)\n\nwith postgres 9.2.1 and $PGDATA on a ext4 formatted partition\ni've got:\n\n> those are the results:\n>\n> HT HT SYSFS DIS HT BIOS DISABLE\n> -c -t r1 r2 r3 r1 r2 r3 r1 r2 r3\n> 5 20K 1641 1831 1496 2020 1974 2033 2005 1988 1967\n> 10 10K 2161 2134 2136 2277 2252 2216 1854 1824 1810\n> 20 5k 2550 2508 2558 2417 2388 2357 1924 1928 1954\n> 30 3333 2216 2272 2250 2333 2493 2496 1993 2009 2008\n> 40 2.5K 2179 2221 2250 2568 2535 2500 2025 2048 2018\n> 50 2K 2217 2213 2213 2487 2449 2604 2112 2016 2023\n\non the same machine with the same configuration,\nhaving PGDATA on a xfs formatted partition gives me\na much better TPS.\n\ne.g. pgbench -c 20 -t 5000 gives me 6305 TPS\n(3 runs with \"echo 3 > /proc/sys/vm/drop_caches && /etc/init.d/postgresql-9.2 restart\"\nin between).\n\nHas anybody else experienced this kind of difference\nbetween ext4 and xfs?\n\nAndrea\n\n\n\n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 05 Dec 2012 16:34:24 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "xfs perform a lot better than ext4 [WAS: Re: Two identical\n\tsystems, radically different performance]"
},
{
"msg_contents": "On 12/05/2012 10:34 AM, Andrea Suisani wrote:\n> [sorry for resuming an old thread]\n>\n> [cut]\n>\n>>>> Question is... will that remove the performance penalty of\n>>>> HyperThreading?\n>>>\n>>> So I've added to my todo list to perform a test to verify this claim :)\n>>\n>> done.\n>\n> on this box:\n>\n>> in a brief: the box is dell a PowerEdge r720 with 16GB of RAM,\n>> the cpu is a Xeon 5620 with 6 core, the OS is installed on a raid\n>> (sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n>> (sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\n>> of 512 MB). (ubuntu 12.04)\n>\n> with postgres 9.2.1 and $PGDATA on a ext4 formatted partition\n> i've got:\n>\n>> those are the results:\n>>\n>> HT HT SYSFS DIS HT BIOS DISABLE\n>> -c -t r1 r2 r3 r1 r2 r3 r1 r2 r3\n>> 5 20K 1641 1831 1496 2020 1974 2033 2005 1988 1967\n>> 10 10K 2161 2134 2136 2277 2252 2216 1854 1824 1810\n>> 20 5k 2550 2508 2558 2417 2388 2357 1924 1928 1954\n>> 30 3333 2216 2272 2250 2333 2493 2496 1993 2009 2008\n>> 40 2.5K 2179 2221 2250 2568 2535 2500 2025 2048 2018\n>> 50 2K 2217 2213 2213 2487 2449 2604 2112 2016 2023\n>\n> on the same machine with the same configuration,\n> having PGDATA on a xfs formatted partition gives me\n> a much better TPS.\n>\n> e.g. pgbench -c 20 -t 5000 gives me 6305 TPS\n> (3 runs with \"echo 3 > /proc/sys/vm/drop_caches &&\n> /etc/init.d/postgresql-9.2 restart\"\n> in between).\n>\n> Anybody else have experienced this kind of differences\n> between etx4 and xfs?\n>\n> Andrea\n>\n>\n>\nI thought that postgreSQL did its own journalling, if that is the proper\nterm, so why not use an ext2 file system to lower overhead?",
"msg_date": "Wed, 05 Dec 2012 11:51:08 -0500",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two\n\tidentical systems, radically different performance]"
},
{
"msg_contents": "On Wed, Dec 5, 2012 at 1:51 PM, Jean-David Beyer <[email protected]> wrote:\n> I thought that postgreSQL did its own journalling, if that is the proper\n> term, so why not use an ext2 file system to lower overhead?\n\nBecause you can still have metadata-level corruption.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Dec 2012 13:56:32 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two\n\tidentical systems, radically different performance]"
},
{
"msg_contents": "\nOn 12/05/2012 11:51 AM, Jean-David Beyer wrote:\n>>\n>>\n> I thought that postgreSQL did its own journalling, if that is the \n> proper term, so why not use an ext2 file system to lower overhead?\n\n\nPostgres journalling will not save you from a corrupt file system.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 05 Dec 2012 12:00:56 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two identical\n\tsystems, radically different performance]"
},
{
"msg_contents": "\n> on this box:\n>\n>> in a brief: the box is dell a PowerEdge r720 with 16GB of RAM,\n>> the cpu is a Xeon 5620 with 6 core, the OS is installed on a raid\n>> (sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n>> (sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\n>> of 512 MB). (ubuntu 12.04)\n>\n> on the same machine with the same configuration,\n> having PGDATA on a xfs formatted partition gives me\n> a much better TPS.\n>\n> e.g. pgbench -c 20 -t 5000 gives me 6305 TPS\n> (3 runs with \"echo 3 > /proc/sys/vm/drop_caches && \n> /etc/init.d/postgresql-9.2 restart\"\n> in between).\nHi, I found this interesting as I'm trying to do some benchmarks on my \nbox, which is very similar to the above, but I don't believe the tps is \nanywhere near what it should be. Is the 6305 figure from xfs? I'm \nassuming that your main data array is just 2 15k sas drives; are you \nputting the WAL on the data array or is that stored somewhere else? Can \nI ask what scaling params, etc you used to build the pgbench tables, and \nmay I look at your postgresql.conf file to see if I missed something (offline \nif you wish)?\n\nI'm running 8x SSDs in RAID 10 for the data and pull just under 10k on an \nxfs system, which is much lower than I'd expect for that setup and isn't \nsignificantly greater than your reported results, so something must be \nvery wrong.\n\nThanks\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Dec 2012 08:29:46 +0000",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two identical\n\tsystems, radically different performance]"
},
{
"msg_contents": "Hi John,\n\nOn 12/06/2012 09:29 AM, John Lister wrote:\n>\n>> on this box:\n>>\n>>> in a brief: the box is dell a PowerEdge r720 with 16GB of RAM,\n>>> the cpu is a Xeon 5620 with 6 core, the OS is installed on a raid\n>>> (sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n>>> (sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\n>>> of 512 MB). (ubuntu 12.04)\n>>\n>> on the same machine with the same configuration,\n>> having PGDATA on a xfs formatted partition gives me\n>> a much better TPS.\n>>\n>> e.g. pgbench -c 20 -t 5000 gives me 6305 TPS\n>> (3 runs with \"echo 3 > /proc/sys/vm/drop_caches && /etc/init.d/postgresql-9.2 restart\"\n>> in between).\n\n\n> Hi, I found this interesting as I'm trying to do some benchmarks on my box which is\n > very similar to the above but I don't believe the tps is any where near what it should be.\n > Is the 6305 figure from xfs?\n\nyes, it is.\n\n> I'm assuming that your main data array is just 2 15k sas drives,\n\ncorrect\n\n> are you putting the WAL on the data array or is that stored somewhere else?\n\npg_xlog is placed in the data array.\n\n> Can I ask what scaling params,\n\nsure, I've initialized pgbench db issuing:\n\npgbench -i -s 10 pgbench\n\n> etc you used to build the pgbench tables and look at your postgresql.conf file to see if I missed something (offline if you wish)\n\nthose are non default values in postgresql.conf\n\nlisten_addresses = '*'\nmax_connections = 100\nshared_buffers = 3200MB\nwork_mem = 30MB\nmaintenance_work_mem = 800MB\nsynchronous_commit = off\nfull_page_writes = off\ncheckpoint_segments = 40\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 3.5\neffective_cache_size = 10GB\nlog_timezone = 'localtime'\nstats_temp_directory = 'pg_stat_tmp_ram'\nautovacuum_naptime = 5min\n\nand then OS tweaks:\n\nHT bios disabled\n/sbin/blockdev --setra 8192 /dev/sdb\necho deadline > 
/sys/block/sdb/queue/scheduler\nvm.swappiness=0\nvm.overcommit_memory=2\nvm.dirty_ratio=2\nvm.dirty_background_ratio=1\nkernel.shmmax=3454820352\nkernel.shmall=2048341\n$PGDATA is on xfs (rw,noatime)\ntmpfs on /db/9.2/pg_stat_tmp_ram type tmpfs (rw,size=50M,uid=1001,gid=1001)\nkernel 3.2.0-32-generic\n\n\nAndrea\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Dec 2012 09:44:32 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two identical\n\tsystems, radically different performance]"
},
{
"msg_contents": "[added performance list back]\n\nOn 12/06/2012 10:04 AM, John Lister wrote:\n> Thanks for the info, I'll have a play and see what values I get with similar settings, etc\n\nyou're welcome\n\n> Still think something is wrong with my config, but we'll see.\n\nwhich kind of ssd disks do you have?\nmaybe they are of the same type Shaun Thomas is having problems with here:\nhttp://archives.postgresql.org/pgsql-performance/2012-12/msg00030.php\n\nAndrea\n\n\n> john\n>\n> On 06/12/2012 08:44, Andrea Suisani wrote:\n>> Hi John,\n>>\n>> On 12/06/2012 09:29 AM, John Lister wrote:\n>>>\n>>>> on this box:\n>>>>\n>>>>> in a brief: the box is dell a PowerEdge r720 with 16GB of RAM,\n>>>>> the cpu is a Xeon 5620 with 6 core, the OS is installed on a raid\n>>>>> (sata disk 7.2k rpm) and the PGDATA is on separate RAID 1 array\n>>>>> (sas 15K rpm) and the controller is a PERC H710 (bbwc with a cache\n>>>>> of 512 MB). (ubuntu 12.04)\n>>>>\n>>>> on the same machine with the same configuration,\n>>>> having PGDATA on a xfs formatted partition gives me\n>>>> a much better TPS.\n>>>>\n>>>> e.g. 
pgbench -c 20 -t 5000 gives me 6305 TPS\n>>>> (3 runs with \"echo 3 > /proc/sys/vm/drop_caches && /etc/init.d/postgresql-9.2 restart\"\n>>>> in between).\n>>\n>>\n>>> Hi, I found this interesting as I'm trying to do some benchmarks on my box which is\n>> > very similar to the above but I don't believe the tps is any where near what it should be.\n>> > Is the 6305 figure from xfs?\n>>\n>> yes, it is.\n>>\n>>> I'm assuming that your main data array is just 2 15k sas drives,\n>>\n>> correct\n>>\n>>> are you putting the WAL on the data array or is that stored somewhere else?\n>>\n>> pg_xlog is placed in the data array.\n>>\n>>> Can I ask what scaling params,\n>>\n>> sure, I've initialized pgbench db issuing:\n>>\n>> pgbench -i -s 10 pgbench\n>>\n>>> etc you used to build the pgbench tables and look at your postgresql.conf file to see if I missed something (offline if you wish)\n>>\n>> those are non default values in postgresql.conf\n>>\n>> listen_addresses = '*'\n>> max_connections = 100\n>> shared_buffers = 3200MB\n>> work_mem = 30MB\n>> maintenance_work_mem = 800MB\n>> synchronous_commit = off\n>> full_page_writes = off\n>> checkpoint_segments = 40\n>> checkpoint_completion_target = 0.9\n>> random_page_cost = 3.5\n>> effective_cache_size = 10GB\n>> log_timezone = 'localtime'\n>> stats_temp_directory = 'pg_stat_tmp_ram'\n>> autovacuum_naptime = 5min\n>>\n>> and then OS tweaks:\n>>\n>> HT bios disabled\n>> /sbin/blockdev --setra 8192 /dev/sdb\n>> echo deadline > /sys/block/sdb/queue/scheduler\n>> vm.swappiness=0\n>> vm.overcommit_memory=2\n>> vm.dirty_ratio=2\n>> vm.dirty_background_ratio=1\n>> kernel.shmmax=3454820352\n>> kernel.shmall=2048341\n>> $PGDATA is on xfs (rw,noatime)\n>> tmpfs on /db/9.2/pg_stat_tmp_ram type tmpfs (rw,size=50M,uid=1001,gid=1001)\n>> kernel 3.2.0-32-generic\n>>\n>>\n>> Andrea\n>>\n>>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Dec 2012 10:33:06 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two identical\n\tsystems, radically different performance]"
},
{
"msg_contents": "On 06/12/2012 09:33, Andrea Suisani wrote:\n>\n> which kind of ssd disks do you have?\n> maybe they are of the same type Shaun Thomas is having problems with here:\n> http://archives.postgresql.org/pgsql-performance/2012-12/msg00030.php\nYeah I saw that post, I'm running the same version of ubuntu with the \n3.2 kernel, so when I get a chance to take it down I will try the new \nkernels, although ubuntu are on 3.5 now... Shaun didn't post what \nhardware he was running on, so it would be interesting to see how it \ncompares. They are intel 320s, which while not the newest should offer \nsome protection against power failure, etc\n\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Dec 2012 11:37:30 +0000",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xfs perform a lot better than ext4 [WAS: Re: Two identical\n\tsystems, radically different performance]"
},
{
"msg_contents": "On 12/06/2012 12:37 PM, John Lister wrote:\n> On 06/12/2012 09:33, Andrea Suisani wrote:\n>>\n>> which kind of ssd disks do you have?\n>> maybe they are of the same type Shaun Thomas is having problems with here:\n>> http://archives.postgresql.org/pgsql-performance/2012-12/msg00030.php\n> Yeah I saw that post, I'm running the same version of ubuntu with the 3.2 kernel, so when I get a chance to take it down I will try the new kernels, although ubuntu are on 3.5 now... Shaun didn't post what hardware he was running on, so it would be interesting to see how it compares. They are intel\n> 320s, which while not the newest should offer some protection against power failure, etc\n\nreading the thread again I realized Shaun is using\nthe fusionIO driver, and he said that the regression is due\nto \"some recent 3.2 kernel patch borks the driver in\nsome horrible way\".\n\nso maybe you're not in the same boat (since you're\nusing intel 320s), or maybe the kernel regression\nhe's referring to is related to the kernel subsystem\nthat deals with ssd disks independently of brand.\nIn the latter case testing a different kernel would be worthwhile.\n\nAndrea\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Dec 2012 13:53:23 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: xfs perform a lot better than ext4 [WAS: Re: Two\n\tidentical systems, radically different performance]"
}
]
[
{
"msg_contents": "I've confirmed that hyperthreading causes a huge drop in performance on a\n2x4-core Intel Xeon E5620 2.40GHz system. The bottom line is:\n\n ~3200 TPS max with hyperthreading\n ~9000 TPS max without hyperthreading\n\nHere are my results.\n\n\"Hyperthreads\" (Run1) is \"out of the box\" with hyperthreads enabled. Only\none column of hyperthread results is shown, but prior to today's work I\nran this a dozen or so times, and \"Run1\" is very representative of all\nthose runs. I also rebooted and confirmed that rebooting had no effect.\n\n\"+NoHyperthreads\" (Run2-5) involved rebooting into the BIOS and disabling\nhyperthreads.\n\n\"+NoAutoVacuum\" (Run6-8) means I turned off autovacuum in the\npostgresql.conf file and restarted Postgres (with hyperthreads still\ndisabled).\n\n Hyperthreads +NoHyperthreads +NoAutoVacuum\n ------------ ---------------------- ----------------\n-c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8\n5 20000 2733 2152 2352 2398 2769 2767 2777 4463\n10 10000 2783 2404 3529 3365 4397 4457 4217 4172\n20 5000 3241 3128 3728 5170 5253 5252 4832 8123\n30 3333 2987 5699 6180 8173 8259 6435 8225 8123\n40 2500 2739 7133 6507 9298 7845 9133 9298 9230\n50 2000 2119 5420 5020 8411 5670 9344 7624 8304\n\nI'll be upgrading to 8.4.14 and making more changes to postgres.conf based\non feedback. 
The server is available for a day or so for more tests if\nanyone has suggestions.\n\nHere's how I got these results:\n\nsu postgres\nunset LANG\nexport LD_LIBRARY_PATH=/usr/local/pgsql/lib\n/usr/local/pgsql/bin/initdb --pgdata=/data/postgres/main \\\n --xlogdir=/postgres_xlog/xlog --username=postgres\n\nEdit config file:\n\n max_connections = 500\n shared_buffers = 1000MB\n work_mem = 128MB\n synchronous_commit = off\n full_page_writes = off\n wal_buffers = 256kB\n checkpoint_segments = 30\n effective_cache_size = 4GB\n track_activities = on\n track_counts = on\n track_functions = none\n autovacuum = on\n autovacuum_naptime = 5min\n escape_string_warning = off\n\ncreateuser -U postgres test\ncreatedb -U postgres -O test test\n\npgbench -i -s 100 -U test\nfor p in \"5 20000\" \"10 10000\" \"20 5000\" \"30 3333\" \"40 2500\" \"50 2000\" ; do\n echo\n c=`echo $p | cut -d' ' -f1`\n t=`echo $p | cut -d' ' -f2`\n cmd=\"pgbench -U test -c $c -t $t\"\n echo \"--------- $cmd ---------\"\n $cmd\ndone\n\nThe hardware:\n\n CPU: 2x4-core Intex Xeon E5620 2.40 GHz\n\n Memory: 12 GB DDR EC\n\n Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n XLOG: 2 disks, RAID1 ext2\n PGDATA: 8 disks, RAID10\n\n 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n indicates that the battery is charged and the cache is working on both\nunits.\n\n Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n actually cloned from old server).\n\n Postgres: 8.4.4\n\nCraig",
"msg_date": "Tue, 9 Oct 2012 13:12:55 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hyperthreading (was: Two identical systems,\n\tradically different performance)"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> I've confirmed that hyperthreading causes a huge drop in performance on a\n> 2x4-core Intel Xeon E5620 2.40GHz system. The bottom line is:\n\nInteresting.\n\n> I'll be upgrading to 8.4.14 and making more changes to postgres.conf based\n> on feedback. The server is available for a day or so for more tests if\n> anyone has suggestions.\n\nIt would be nice to see similar tests done with 9.2. 8.4 is kind of old\nnews as far as server performance is concerned.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 09 Oct 2012 16:53:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems,\n\tradically different performance)"
},
{
"msg_contents": "On 10/09/2012 03:12 PM, Craig James wrote:\n\n> ~3200 TPS max with hyperthreading\n> ~9000 TPS max without hyprethreading\n\nThat's really odd. We got almost the opposite effect on our X5645's.\n\nAlso, there's no way your RAID is sustaining 9000TPS. Something here \nsounds fishy.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Tue, 9 Oct 2012 15:56:25 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n\tdifferent performance)"
},
{
"msg_contents": "On 10/9/12 1:12 PM, Craig James wrote:\n> I've confirmed that hyperthreading causes a huge drop in performance on a\n> 2x4-core Intel Xeon E5620 2.40GHz system. The bottom line is:\n> \n> ~3200 TPS max with hyperthreading\n> ~9000 TPS max without hyprethreading\n\nOh, interesting! This is the first time I've seen results like this on\na later Linux kernel and processor. I thought Intel had licked the\n\"hyperthreading penalty\".\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Tue, 09 Oct 2012 13:56:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n\tdifferent performance)"
},
{
"msg_contents": "On Tue, Oct 9, 2012 at 1:56 PM, Shaun Thomas <[email protected]> wrote:\n> On 10/09/2012 03:12 PM, Craig James wrote:\n>\n>> ~3200 TPS max with hyperthreading\n>> ~9000 TPS max without hyprethreading\n>\n>\n> That's really odd. We got almost the opposite effect on our X5645's.\n>\n> Also, there's no way your RAID is sustaining 9000TPS. Something here sounds\n> fishy.\n\nsynchronous-commit=off\n\n",
"msg_date": "Tue, 9 Oct 2012 14:24:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n\tdifferent performance)"
},
{
"msg_contents": "For your amusement ... I upgraded from 8.4.4 to 9.2.1 results. Threw away\nthe DB completely and did a new init. Same hardware, postgres.conf and\nLinux as before.\n\nra is \"blockdev --getra\" (both PGDATA and XLOG disks)\nwalb is postgres.conf \"wal_buffers\"\n\n ra:8192 walb:1M ra:256 walb:1M ra:256 walb:256kB\n ---------------- ---------------- -----------------\n-c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9\n5 20000 1963 2103 2145 2292 2312 2353 2296 2175 2294\n10 10000 2587 2749 2762 3252 3265 3276 3267 3228 3263\n20 5000 3449 3578 3438 4910 4958 4949 4949 4927 4943\n30 3333 4222 3731 3992 6929 6924 6562 6754 6995 6869\n40 2500 4261 3722 4243 9286 9240 5712 9310 8530 8872\n50 2000 4138 4399 3865 9213 9351 9578 8011 7651 8362\n\nUnfortunately, distance prevents me from going to the co-location facility\nand trying this with hyperthreading turned back on.\n\nCraig\n\n\nOn Tue, Oct 9, 2012 at 1:12 PM, Craig James <[email protected]> wrote:\n\n> I've confirmed that hyperthreading causes a huge drop in performance on a\n> 2x4-core Intel Xeon E5620 2.40GHz system. The bottom line is:\n>\n> ~3200 TPS max with hyperthreading\n> ~9000 TPS max without hyprethreading\n>\n> Here are my results.\n>\n> \"Hyprethreads\" (Run1) is \"out of the box\" with hyperthreads enabled. Only\n> one column of hyperthread results are shown, but prior to today's work I\n> ran this a dozen or so times, and \"Run1\" is very representative of all\n> those runs. 
I also rebooted and confirmed that rebooting had no effect.\n>\n> \"+NoHyperthreads\" (Run2-5) involved rebooting into the BIOS and disabling\n> hyperthreads.\n>\n> \"+NoAutoVacuum\" (Run6-8) means I turned off autovacuum in the\n> postgresql.conf file and restarted Postgres (with hyperthreads still\n> disabled).\n>\n> Hyperthreads +NoHyperthreads +NoAutoVacuum\n> ------------ ---------------------- ----------------\n> -c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8\n> 5 20000 2733 2152 2352 2398 2769 2767 2777 4463\n> 10 10000 2783 2404 3529 3365 4397 4457 4217 4172\n> 20 5000 3241 3128 3728 5170 5253 5252 4832 8123\n> 30 3333 2987 5699 6180 8173 8259 6435 8225 8123\n> 40 2500 2739 7133 6507 9298 7845 9133 9298 9230\n> 50 2000 2119 5420 5020 8411 5670 9344 7624 8304\n>\n> I'll be upgrading to 8.4.14 and making more changes to postgres.conf based\n> on feedback. The server is available for a day or so for more tests if\n> anyone has suggestions.\n>\n> Here's how I got these results:\n>\n> su postgres\n> unset LANG\n> export LD_LIBRARY_PATH=/usr/local/pgsql/lib\n> /usr/local/pgsql/bin/initdb --pgdata=/data/postgres/main \\\n> --xlogdir=/postgres_xlog/xlog --username=postgres\n>\n> Edit config file:\n>\n> max_connections = 500\n> shared_buffers = 1000MB\n> work_mem = 128MB\n> synchronous_commit = off\n> full_page_writes = off\n> wal_buffers = 256kB\n> checkpoint_segments = 30\n> effective_cache_size = 4GB\n> track_activities = on\n> track_counts = on\n> track_functions = none\n> autovacuum = on\n> autovacuum_naptime = 5min\n> escape_string_warning = off\n>\n> createuser -U postgres test\n> createdb -U postgres -O test test\n>\n> pgbench -i -s 100 -U test\n> for p in \"5 20000\" \"10 10000\" \"20 5000\" \"30 3333\" \"40 2500\" \"50 2000\" ; do\n> echo\n> c=`echo $p | cut -d' ' -f1`\n> t=`echo $p | cut -d' ' -f2`\n> cmd=\"pgbench -U test -c $c -t $t\"\n> echo \"--------- $cmd ---------\"\n> $cmd\n> done\n>\n> The hardware:\n>\n> CPU: 2x4-core Intex Xeon E5620 2.40 
GHz\n>\n> Memory: 12 GB DDR EC\n>\n> Disks: 12x500GB disks (Western Digital 7200RPM SATA)\n> XLOG: 2 disks, RAID1 ext2\n> PGDATA: 8 disks, RAID10\n>\n> 3WARE 9650SE-12ML with battery-backed cache. The admin tool (tw_cli)\n> indicates that the battery is charged and the cache is working on both\n> units.\n>\n> Linux: 2.6.32-41-server #94-Ubuntu SMP (new server's disk was\n> actually cloned from old server).\n>\n> Postgres: 8.4.4\n>\n> Craig\n>",
"msg_date": "Tue, 9 Oct 2012 16:30:11 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hyperthreading (was: Two identical systems,\n\tradically different performance)"
},
{
"msg_contents": "On 10/09/2012 06:30 PM, Craig James wrote:\n\n> ra:8192 walb:1M ra:256 walb:1M ra:256 walb:256kB\n> ---------------- ---------------- -----------------\n> -c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9\n> 40 2500 4261 3722 4243 9286 9240 5712 9310 8530 8872\n> 50 2000 4138 4399 3865 9213 9351 9578 8011 7651 8362\n\nI think I speak for more than a few people here when I say: wat.\n\nAbout the only thing I can ask, is: did you make these tests fair? And \nby fair, I mean:\n\necho 3 > /proc/sys/vm/drop_caches\npg_ctl -D /your/pg/dir restart\n\nBetween every test to make sure shared buffers and the OS inode cache \nwas empty before the start of each test? If you're using that bash-style \nfor-loop you attached earlier, probably not. Still though, I don't think \nthat would account for this much variance between having read-ahead at \n8M as opposed to 256kb.\n\nMy head hurts.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 10 Oct 2012 07:52:37 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n\tdifferent performance)"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 9:52 AM, Shaun Thomas <[email protected]> wrote:\n> On 10/09/2012 06:30 PM, Craig James wrote:\n>\n>> ra:8192 walb:1M ra:256 walb:1M ra:256 walb:256kB\n>> ---------------- ---------------- -----------------\n>> -c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9\n>> 40 2500 4261 3722 4243 9286 9240 5712 9310 8530 8872\n>> 50 2000 4138 4399 3865 9213 9351 9578 8011 7651 8362\n>\n>\n> I think I speak for more than a few people here when I say: wat.\n>\n> About the only thing I can ask, is: did you make these tests fair? And by\n> fair, I mean:\n>\n> echo 3 > /proc/sys/vm/drop_caches\n> pg_ctl -D /your/pg/dir restart\n\nYes, I was thinking the same. Especially if you check the tendency to\nperform better in higher-numbered runs. But, as you said, that doesn't\nexplain that jump to twice the TPS. I was thinking, and I'm not\npgbench expert, could it be that the database grows from run to run,\nchanging performance characteristics?\n\n> My head hurts.\n\nI'm just confused. No headache yet.\n\nBut really interesting numbers in any case. It these results are on\nthe level, then maybe the kernel's read-ahead algorithm isn't as\nfool-proof as we thought? Gotta read the source. BRB\n\n",
"msg_date": "Wed, 10 Oct 2012 11:44:50 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n different performance)"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 11:44:50AM -0300, Claudio Freire wrote:\n> On Wed, Oct 10, 2012 at 9:52 AM, Shaun Thomas <[email protected]> wrote:\n> > On 10/09/2012 06:30 PM, Craig James wrote:\n> >\n> >> ra:8192 walb:1M ra:256 walb:1M ra:256 walb:256kB\n> >> ---------------- ---------------- -----------------\n> >> -c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9\n> >> 40 2500 4261 3722 4243 9286 9240 5712 9310 8530 8872\n> >> 50 2000 4138 4399 3865 9213 9351 9578 8011 7651 8362\n> >\n> >\n> > I think I speak for more than a few people here when I say: wat.\n> >\n> > About the only thing I can ask, is: did you make these tests fair? And by\n> > fair, I mean:\n> >\n> > echo 3 > /proc/sys/vm/drop_caches\n> > pg_ctl -D /your/pg/dir restart\n> \n> Yes, I was thinking the same. Especially if you check the tendency to\n> perform better in higher-numbered runs. But, as you said, that doesn't\n> explain that jump to twice the TPS. I was thinking, and I'm not\n> pgbench expert, could it be that the database grows from run to run,\n> changing performance characteristics?\n> \n> > My head hurts.\n> \n> I'm just confused. No headache yet.\n> \n> But really interesting numbers in any case. It these results are on\n> the level, then maybe the kernel's read-ahead algorithm isn't as\n> fool-proof as we thought? Gotta read the source. BRB\n\nWell, I have exactly the same setup here:\n\n\tnew: 2x4-core Intex Xeon E5620 2.40 GHz\n\nLet me know if you want any tests run, on SSDs or magnetic disk. I do\nhave hyperthreading enabled, and Greg Smith benchmarked my server and\nsaid it was good.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 10 Oct 2012 11:46:03 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems,\n\tradically different performance)"
},
{
"msg_contents": "Sent this to Claudio rather than the whole list ... here it is.\n\nOn Wed, Oct 10, 2012 at 7:44 AM, Claudio Freire <[email protected]>wrote:\n\n> On Wed, Oct 10, 2012 at 9:52 AM, Shaun Thomas <[email protected]>\n> wrote:\n> > On 10/09/2012 06:30 PM, Craig James wrote:\n> >\n> >> ra:8192 walb:1M ra:256 walb:1M ra:256 walb:256kB\n> >> ---------------- ---------------- -----------------\n> >> -c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9\n> >> 40 2500 4261 3722 4243 9286 9240 5712 9310 8530 8872\n> >> 50 2000 4138 4399 3865 9213 9351 9578 8011 7651 8362\n> >\n> >\n> > I think I speak for more than a few people here when I say: wat.\n> >\n> > About the only thing I can ask, is: did you make these tests fair? And by\n> > fair, I mean:\n> >\n> > echo 3 > /proc/sys/vm/drop_caches\n> > pg_ctl -D /your/pg/dir restart\n>\n>\nI showed the exact commands I used -- if it's not there, I didn't do it.\nSo the answer is no, I didn't drop caches.\n\nOn the other hand, I wanted to know what happened on cold start and after\nrunning for a while. Running pgbench once isn't as interesting as running\nit three times.\n\n\n> Yes, I was thinking the same. Especially if you check the tendency to\n> perform better in higher-numbered runs. But, as you said, that doesn't\n> explain that jump to twice the TPS. I was thinking, and I'm not\n> pgbench expert, could it be that the database grows from run to run,\n> changing performance characteristics?\n>\n> > My head hurts.\n>\n> I'm just confused. No headache yet.\n>\n> But really interesting numbers in any case. It these results are on\n> the level, then maybe the kernel's read-ahead algorithm isn't as\n> fool-proof as we thought? Gotta read the source. BRB\n>\n\nBig numbers, little numbers ... I'm just reporting what pgbench tells me\nand how I got them. I'm good at chemical databases, you guys are the\nPostgres performance experts.",
"msg_date": "Wed, 10 Oct 2012 10:28:08 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n\tdifferent performance)"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 11:44 AM, Claudio Freire <[email protected]> wrote:\n> On Wed, Oct 10, 2012 at 9:52 AM, Shaun Thomas <[email protected]> wrote:\n>> On 10/09/2012 06:30 PM, Craig James wrote:\n>>\n>>> ra:8192 walb:1M ra:256 walb:1M ra:256 walb:256kB\n>>> ---------------- ---------------- -----------------\n>>> -c -t Run1 Run2 Run3 Run4 Run5 Run6 Run7 Run8 Run9\n>>> 40 2500 4261 3722 4243 9286 9240 5712 9310 8530 8872\n>>> 50 2000 4138 4399 3865 9213 9351 9578 8011 7651 8362\n>>\n...\n> But really interesting numbers in any case. It these results are on\n> the level, then maybe the kernel's read-ahead algorithm isn't as\n> fool-proof as we thought? Gotta read the source. BRB\n\nSo, I've been digging.\n\nNewer kernels (above 2.6.23) have a new read-ahead algorithm, called\nthe \"on-demand\" read ahead. Benchmarks on this new code[0] suggest\nthere is a penalty for random writes that wasn't there before. This\npenalty is small in the benchmarks (less than 10% in all cases), but I\nwould imagine the issue would be amplified in the case of a kernel\nwith a misconfigured bg_writer (this case, IIRC).\n\nThis makes sense, in fact. Back when I tested the 8MB read-ahead on my\nserver, it was using 2.6.18. Now, 2.6.23 is in fact rather old, so\nthose benchmarks may no longer be relevant. There are tons of commits\nsince then[1], though only one pertaining writes from what I can tell.\nHowever, I'm inclined to blame the bg_writer. How about tweaking the\ndirty_background_ratio and dirty_ratio and re-trying?\n\n\n[0] http://lwn.net/Articles/235181/\n[1] https://github.com/torvalds/linux/commits/f5a246eab9a268f51ba8189ea5b098a1bfff200e/mm/readahead.c?page=1\n\n",
"msg_date": "Wed, 10 Oct 2012 15:08:00 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hyperthreading (was: Two identical systems, radically\n\tdifferent performance)"
}
] |
[
{
"msg_contents": "Hello,\n\nFirst let me say thanks for a fantastic database system. The hard work \nthis community has put into Postgres really shows.\n\nA client of mine has been using Postgres quite effectively for a while \nnow, but has recently hit a performance issue with text index queries, \nspecifically when using ts_rank or ts_rank_cd functions.\n\nThe database has a text index of around 200,000 documents. \nInvestigation revealed that text queries are slow only when using \nts_rank or ts_rank_cd. Without a ts_rank function, any query is \nanswered within 200ms or so; with ts_rank function, queries take up to \n30 seconds. Deeper investigation using gprof showed that the problem is \nprobably not ts_rank or ts_rank_cd, but the fact that those functions \nretrieve thousands of TOASTed tsvectors.\n\nI estimate that the total size of the text index is 900 MB uncompressed \n(including the TOAST data). It is a table that stores only a docid, a \ntsvector, and a couple other small fields. Some queries cause ts_rank \nto retrieve 700 MB of that 900 MB. That means each time a user types a \nsimple query, Postgres retrieves and decompresses almost 700 MB of TOAST \ndata. That's the size of a CD. No wonder it sometimes takes so long!\n\nI tried using SET STORAGE to optimize access to the TOAST tuples, but \nthat only made things worse. Even with EXTERNAL storage, Postgres still \nhas to copy the TOAST data in RAM and that takes time. I also \nrecompiled Postgres with a larger TOAST chunk size. That did improve \nperformance by about 20%, but I'm looking for a much bigger win than that.\n\nThe tsvectors are large because the documents are of variable quality. \nWe can not rely on authors to provide good metadata; we really need the \ntext index to just index everything in the documents. I've seen how \nts_rank can do its job with millions of documents in milliseconds, but \nthat only works when the text index stores a small amount of data per \ndocument. 
Short metadata just won't work for my client. Also, my \nclient can not simply stop using ts_rank.\n\nI've thought of some possible solutions to speed up ts_rank:\n\n1. Cache the whole tsvectors in RAM. This would be different from the \nbuffer cache, which apparently only knows how to cache the chunks of a \nTOASTed tsvector. I have thought of 3 ways to do this:\n\n A. Write a C extension that maintains a cache of tsvectors in \nshared memory. I think I would use each document's hash as the cache key.\n\n B. Add to Postgres the ability to cache any decompressed TOAST \nvalue. This would probably involve expanding the SET STORAGE clause of \nALTER TABLE so that adminstrators can configure which TOAST values are \nworth caching.\n\n C. Alter the buffer cache to be more aware of TOAST; make it cache \nwhole TOAST values rather than chunks. This would be invasive and might \nbe a bad idea overall.\n\n2. Maintain 2 tsvectors per row: one for querying, another for ranking. \n The tsvector used for ranking would be trimmed to prevent TOASTing. \nThis would speed up queries, but it may impact the quality of ranking.\n\n3. Write a C extension that stores the tsvectors in memory-mapped files \non disk, allowing us to at least take advantage of kernel-level caching.\n\nWhat do you think? Do one of these solutions pop out to you as a good \nidea, or have I overlooked some simpler solution?\n\nShane\n\n\n",
"msg_date": "Tue, 09 Oct 2012 15:38:52 -0600",
"msg_from": "Shane Hathaway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ways to speed up ts_rank"
},
{
"msg_contents": "\nLe 2012-10-09 à 17:38, Shane Hathaway a écrit :\n\n> Hello,\n> \n> The database has a text index of around 200,000 documents. Investigation revealed that text queries are slow only when using ts_rank or ts_rank_cd. Without a ts_rank function, any query is answered within 200ms or so; with ts_rank function, queries take up to 30 seconds. Deeper investigation using gprof showed that the problem is probably not ts_rank or ts_rank_cd, but the fact that those functions retrieve thousands of TOASTed tsvectors.\n\nIs the query perhaps doing something like this:\n\nSELECT ...\nFROM table\nWHERE tsvectorcol @@ plainto_tsquery('...')\nORDER BY ts_rank(...)\n\nIf so, ts_rank() is run for every document. What you should do instead is:\n\nSELECT *\nFROM (\n SELECT ...\n FROM table\n WHERE tsvectorcol @@ plainto_tsquery('...')) AS t1\nORDER BY ts_rank(...)\n\nNotice the ts_rank() is on the outer query, which means it'll only run on the subset of documents which match the query. This is explicitly mentioned in the docs:\n\n\"\"\"Ranking can be expensive since it requires consulting the tsvector of each matching document, which can be I/O bound and therefore slow. Unfortunately, it is almost impossible to avoid since practical queries often result in large numbers of matches.\"\"\"\n\n(last paragraph of) http://www.postgresql.org/docs/current/static/textsearch-controls.html#TEXTSEARCH-RANKING\n\nHope that helps!\nFrançois Beausoleil\n",
"msg_date": "Wed, 10 Oct 2012 08:38:28 -0400",
"msg_from": "=?iso-8859-1?Q?Fran=E7ois_Beausoleil?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ways to speed up ts_rank"
},
{
"msg_contents": "We'll present in Prague some improvements in FTS. Unfortunately, we have\nonly several minutes during lighting talk. In short, we improved GIN to \nstore additional information, coordinates for fts, for example and return \nordered by rank search results, which gave us performance better than\nsphynx. It's just a prototype, but we already got median at 8 msec for \n6 mln classifieds.\n\nWe didn't tested for long documents yet.\n\nRegards,\nOleg\n\nOn Wed, 10 Oct 2012, François Beausoleil wrote:\n\n>\n> Le 2012-10-09 à 17:38, Shane Hathaway a écrit :\n>\n>> Hello,\n>>\n>> The database has a text index of around 200,000 documents. Investigation revealed that text queries are slow only when using ts_rank or ts_rank_cd. Without a ts_rank function, any query is answered within 200ms or so; with ts_rank function, queries take up to 30 seconds. Deeper investigation using gprof showed that the problem is probably not ts_rank or ts_rank_cd, but the fact that those functions retrieve thousands of TOASTed tsvectors.\n>\n> Is the query perhaps doing something like this:\n>\n> SELECT ...\n> FROM table\n> WHERE tsvectorcol @@ plainto_tsquery('...')\n> ORDER BY ts_rank(...)\n>\n> If so, ts_rank() is run for every document. What you should do instead is:\n>\n> SELECT *\n> FROM (\n> SELECT ...\n> FROM table\n> WHERE tsvectorcol @@ plainto_tsquery('...')) AS t1\n> ORDER BY ts_rank(...)\n>\n> Notice the ts_rank() is on the outer query, which means it'll only run on the subset of documents which match the query. This is explicitly mentioned in the docs:\n>\n> \"\"\"Ranking can be expensive since it requires consulting the tsvector of each matching document, which can be I/O bound and therefore slow. Unfortunately, it is almost impossible to avoid since practical queries often result in large numbers of matches.\"\"\"\n>\n> (last paragraph of) http://www.postgresql.org/docs/current/static/textsearch-controls.html#TEXTSEARCH-RANKING\n>\n> Hope that helps!\n> François Beausoleil\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n\n",
"msg_date": "Wed, 10 Oct 2012 18:59:18 +0400 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ways to speed up ts_rank"
},
{
"msg_contents": "On 10/10/2012 06:38 AM, François Beausoleil wrote:\n> Notice the ts_rank() is on the outer query, which means it'll only\n> run on the subset of documents which match the query. This is\n> explicitly mentioned in the docs:\n>\n> \"\"\"Ranking can be expensive since it requires consulting the tsvector\n> of each matching document, which can be I/O bound and therefore slow.\n> Unfortunately, it is almost impossible to avoid since practical\n> queries often result in large numbers of matches.\"\"\"\n>\n> (last paragraph of)\n> http://www.postgresql.org/docs/current/static/textsearch-controls.html#TEXTSEARCH-RANKING\n\nIndeed, I have studied that paragraph in depth, trying to gather as much \npossible meaning from it as I can. :-)\n\nHowever, the following two queries take exactly the same time, \nsuggesting to me that ts_rank_cd is really only looking at matching \nrows, not all rows:\n\nSELECT docid, coefficient * ts_rank_cd('{0.1, 0.2, 0.5, 1.0}',\n text_vector, to_tsquery('english', 'stuff')) AS rank\nFROM pgtextindex\nWHERE (text_vector @@ to_tsquery('english', 'stuff'))\nORDER BY rank DESC\nlimit 3;\n\nSELECT docid, coefficient * ts_rank_cd('{0.1, 0.2, 0.5, 1.0}',\n text_vector, to_tsquery('english', 'stuff')) AS rank\nFROM (SELECT * FROM pgtextindex\n WHERE (text_vector @@ to_tsquery('english', 'stuff'))) AS filtered\nORDER BY rank DESC\nlimit 3;\n\nThanks for the suggestion though. By the way, all the tsvectors are \nalready loaded into the kernel cache when I execute the queries, so \nranking large documents is in fact CPU bound rather than I/O bound. The \nCPU is pegged for the whole time.\n\nShane\n\n\n",
"msg_date": "Wed, 10 Oct 2012 12:25:54 -0600",
"msg_from": "Shane Hathaway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ways to speed up ts_rank"
},
{
"msg_contents": "On 10/10/2012 08:59 AM, Oleg Bartunov wrote:\n> We'll present in Prague some improvements in FTS. Unfortunately, we have\n> only several minutes during lighting talk. In short, we improved GIN to\n> store additional information, coordinates for fts, for example and\n> return ordered by rank search results, which gave us performance better\n> than\n> sphynx. It's just a prototype, but we already got median at 8 msec for 6\n> mln classifieds.\n>\n> We didn't tested for long documents yet.\n\nThat sounds like the solution I'm looking for! Storing the info for \nts_rank in an index would probably do the trick. Can I get this code \nsomewhere? I've isolated the table I need to optimize and I can easily \nrun tests on non-production code.\n\nShane\n\n\n",
"msg_date": "Wed, 10 Oct 2012 12:32:55 -0600",
"msg_from": "Shane Hathaway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ways to speed up ts_rank"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been fighting with some CTE queries recently, and in the end I've\nended up with two basic cases. In one case the CTEs work absolutely\ngreat, making the estimates much more precise, while in the other the\nresults are pretty terrible. And I'm not sure why both of these behave\nthe way they do.\n\nI'm aware of the \"optimization fence\" and I suspect it's the cause here,\nbut I wonder why in one case it improves the estimates so much while in\nthe other one it performs so bad.\n\nIf anyone could explain what's happening, it'd be great.\n\nThe application is using a 'calendar table', with one row for each day\nbetween year 1900 and 2050 (i.e. ~55000 rows) and some additional\ncolumns about the date (month, quarter, week, ...) whatever. All these\nare sequences of numbers starting at 1900-01-01. So the table looks like\nthis:\n\n id | month | week\n--------------------\n 1 | 0 | 0\n 2 | 0 | 0\n .....\n 7 | 0 | 1 <- switch to 2nd week\n .....\n 31 | 0\n 32 | 1 <- switch to February 1900\n\nand so on. Let's go with 'month' only, so the table looks like this\n\n CREATE TABLE calendar_table (\n id INT PRIMARY KEY,\n month INT\n );\n\nand let's fill it with data - for simplicity let's suppose all months\nhave 30 days:\n\n INSERT INTO calendar_table(id, month)\n SELECT i, (i/30) FROM generate_series(1,55000) s(i);\n\nThe second table is a \"fact\" table containing primary key, date (a FK to\nthe calendar table) and some additional data (facts), but we don't need\nthem so let's use just this table\n\n CREATE TABLE fact_table (\n id INT PRIMARY KEY,\n date_id INT REFERENCES calendar_table (id)\n );\n\nNow let's fill the fact table with data for a very short period of time,\ne.g. one month. 
Let's find what days are min/max for month 1000 (which\nis ~1983):\n\n SELECT min(id), max(id) FROM calendar_table WHERE month = 1000;\n\n min | max\n -------+-------\n 30000 | 30029\n\nand let's fill some data randomly (1000000 rows):\n\n INSERT INTO fact_table(id, date_id)\n SELECT i, floor(30000 + random()*30)\n FROM generate_series(1,1000000) s(i);\n\nAnalyze the tables, and we're ready for the two queries.\n\n(A) significant improvement of estimates / performance\n\nA usual query is \"do something with data for month X\" so let's select\nall data from the fact table that join to month 1000.\n\nSELECT * FROM fact_table f JOIN calendar_table c ON (date_id = c.id)\n WHERE month = 1000;\n\nWe do know it should be 1000000 rows, but the explain plan shows an\nestimate of just 527 (http://explain.depesz.com/s/Q4oy):\n\n QUERY PLAN\n---------------------------------------------------------------------\n Hash Join (cost=9.18..23110.45 rows=527 width=49)\n Hash Cond: (f.date_id = c.id)\n -> Seq Scan on fact_table f (cost=0.00..19346.00 rows=1000000 ...)\n -> Hash (cost=8.82..8.82 rows=29 width=8)\n -> Index Scan using calendar_table_idx on calendar_table c\n (cost=0.00..8.82 rows=29 width=8)\n Index Cond: (month = 1000)\n(6 rows)\n\nNow, I'm pretty sure I understand where the estimate comes from. The\nmonth is ~0.055% of the calendar table, and by applying this to the fact\ntable we do get ~550 rows. That's almost exactly the estimate.\n\nNow, let's move the calendar table to a CTE (incl. 
the condition):\n\nWITH c as (SELECT * from calendar_table WHERE month = 1000)\nSELECT * FROM fact_table f JOIN c ON (f.date_id = c.id);\n\nThis gives us this plan (http://explain.depesz.com/s/5k9)\n\n QUERY PLAN\n---------------------------------------------------------------------\n Hash Join (cost=9.76..32772.43 rows=966667 width=49)\n Hash Cond: (f.date_id = c.id)\n CTE c\n -> Index Scan using calendar_table_idx on calendar_table\n (cost=0.00..8.82 rows=29 width=8)\n Index Cond: (month = 1000)\n -> Seq Scan on fact_table f (cost=0.00..19346.00 rows=1000000 ...)\n -> Hash (cost=0.58..0.58 rows=29 width=8)\n -> CTE Scan on c (cost=0.00..0.58 rows=29 width=8)\n(8 rows)\n\nNow, this gives us much better plan (almost exactly the actual number of\nrows). The queries are usually more complex (more joins, aggregations\netc.) and this precise estimate significantly improves the performance.\n\nI don't get it - how could it get so much more precise estimate? I'd\nexpect such behavior e.g. from temporary tables (because they may be\nanaklyzed), but that's not how CTE work internally - the materialization\ndoes not allow gathering stats prior to planning AFAIK.\n\nNow let's see the opposite direction.\n\n(B) significant deviation of estimates / performance\n\nLet's work with the fact table only - we've seen queries where narrow\n\"noodles\" of the fact table (basically a PK+column) were selected and\nthen joined using the PK. Yes, it's a dumb thing to do but that's not\nthe point. 
Let's see how this performs with CTEs\n\nWITH a AS (SELECT * from fact_table), b AS (SELECT * from fact_table)\n SELECT * from a JOIN b on (a.id = b.id);\n\nThe explain plan is here (http://explain.depesz.com/s/zGy) - notice the\nestimate which is ~5000x overestimating the actual number 1000000\n(because we're joining over a PK).\n\n QUERY PLAN\n-----------------------------------------------------------------------\n Merge Join (cost=278007.69..75283007.69 rows=5000000000 width=80)\n Merge Cond: (a.id = b.id)\n CTE a\n -> Seq Scan on fact_table (cost=0.00..19346.00 rows=1000000 ...)\n CTE b\n -> Seq Scan on fact_table (cost=0.00..19346.00 rows=1000000 ...)\n -> Sort (cost=119657.84..122157.84 rows=1000000 width=40)\n Sort Key: a.id\n -> CTE Scan on a (cost=0.00..20000.00 rows=1000000 width=40)\n -> Sort (cost=119657.84..122157.84 rows=1000000 width=40)\n Sort Key: b.id\n -> CTE Scan on b (cost=0.00..20000.00 rows=1000000 width=40)\n(12 rows)\n\nNow let's try without the CTEs:\n\n SELECT * from (SELECT * from fact_table) a\n JOIN (SELECT * from fact_table) b on (a.id = b.id);\n\nThat gives us this plan (http://explain.depesz.com/s/Fmv)\n\n QUERY PLAN\n-----------------------------------------------------------------------\n Merge Join (cost=0.00..85660.70 rows=1000000 width=82)\n Merge Cond: (public.fact_table.id = public.fact_table.id)\n -> Index Scan using fact_table_pkey on fact_table\n (cost=0.00..35330.35 rows=1000000 width=41)\n -> Index Scan using fact_table_pkey on fact_table\n (cost=0.00..35330.35 rows=1000000 width=41)\n(4 rows)\n\nNow we're getting much better estimates - exactly the right number.\n\nBoth queries might be improved (especially the second one, which may be\nrewritten as a plain select without a join), but that's not the point\nhere. Why does the CTE improve the estimates so much in the first\nexample and hurts in the second one?\n\nthanks\nTomas\n\n",
"msg_date": "Wed, 10 Oct 2012 00:21:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why am I getting great/terrible estimates with these CTE queries?"
},
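Tomas works out the 527-row estimate above by hand; the scaling is worth making explicit, since it explains the first plan exactly. A minimal sketch (Python is used only for the arithmetic; all row counts are the ones quoted in the message):

```python
# Reproduce the planner's 527-row join estimate from the plan above:
# the fact table's row count scaled by the calendar-side selectivity.
fact_rows = 1_000_000
calendar_rows = 55_000
month_rows = 29  # planner's estimate for WHERE month = 1000

selectivity = month_rows / calendar_rows  # ~0.000527
estimate = int(fact_rows * selectivity)   # truncated, as in the plan
print(estimate)  # -> 527
```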
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> I've been fighting with some CTE queries recently, and in the end I've\n> ended up with two basic cases. In one case the CTEs work absolutely\n> great, making the estimates much more precise, while in the other the\n> results are pretty terrible. And I'm not sure why both of these behave\n> the way they do.\n\nYou're assuming the case where the estimate is better is better for a\nreason ... but it's only better as a result of blind dumb luck. The\nouter-level query planner doesn't know anything about the CTE's output\nexcept the estimated number of rows --- in particular, it doesn't drill\ndown to find any statistics about the join column. So what you're\ngetting there is a default selectivity estimate that just happens to\nmatch reality in this case. (If you work through the math in\neqjoinsel_inner for the case where the relation sizes are grossly\ndifferent and we don't have MCV stats, you'll find that it comes out to\nbe assuming that each row in the larger relation has one join partner in\nthe smaller one, which indeed is your situation here.) In the other\nexample, you're likewise getting a default selectivity estimate, only it\ndoesn't match so well, because the default does not include assuming\nthat the join keys are unique on both sides. Without the CTEs, the\noptimizer can see the keys are unique so it makes the right selectivity\nestimate.\n\nIn principle we could make the optimizer try to drill down for stats,\nwhich would make these examples work the same with or without the CTE\nlayers. I'm not sure it's worth the trouble though --- I'm dubious that\npeople would use a CTE for cases that are simple enough for the stats\nestimates to be worth anything.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 09 Oct 2012 19:09:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why am I getting great/terrible estimates with these CTE queries?"
},
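Tom's description of the no-stats fallback can be checked numerically against both plans. The sketch below assumes the eqjoinsel fallback formula `rows1 * rows2 / max(nd1, nd2)` and the constant DEFAULT_NUM_DISTINCT = 200 for a side with no statistics; both are taken from the PostgreSQL source rather than stated in this thread, so treat them as assumptions:

```python
# eqjoinsel's no-MCV fallback (assumed): N1 * N2 / max(nd1, nd2),
# where a side without statistics defaults to 200 distinct values,
# clamped down to its estimated row count when that is smaller.
def join_estimate(n1, n2, nd1, nd2):
    return n1 * n2 / max(nd1, nd2)

# CTE join: the fact side has stats (30 distinct date_id); the 29-row
# CTE side has none, so its ndistinct is clamped to 29.
cte_case = join_estimate(1_000_000, 29, 30, 29)

# Two CTEs joined on the PK: no stats on either side, both default to 200.
self_join = join_estimate(1_000_000, 1_000_000, 200, 200)

print(round(cte_case), int(self_join))
```

Both numbers match the quoted plans (966667 and 5000000000), which is consistent with Tom's explanation of where the default selectivities come from.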
{
"msg_contents": "On 10.10.2012 01:09, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> I've been fighting with some CTE queries recently, and in the end I've\n>> ended up with two basic cases. In one case the CTEs work absolutely\n>> great, making the estimates much more precise, while in the other the\n>> results are pretty terrible. And I'm not sure why both of these behave\n>> the way they do.\n> \n> You're assuming the case where the estimate is better is better for a\n> reason ... but it's only better as a result of blind dumb luck. The\n> outer-level query planner doesn't know anything about the CTE's output\n> except the estimated number of rows --- in particular, it doesn't drill\n> down to find any statistics about the join column. So what you're\n> getting there is a default selectivity estimate that just happens to\n> match reality in this case. (If you work through the math in\n> eqjoinsel_inner for the case where the relation sizes are grossly\n> different and we don't have MCV stats, you'll find that it comes out to\n> be assuming that each row in the larger relation has one join partner in\n> the smaller one, which indeed is your situation here.) In the other\n> example, you're likewise getting a default selectivity estimate, only it\n> doesn't match so well, because the default does not include assuming\n> that the join keys are unique on both sides. Without the CTEs, the\n> optimizer can see the keys are unique so it makes the right selectivity\n> estimate.\n\nThanks for explaining, now it finally makes some sense.\n\n\n> In principle we could make the optimizer try to drill down for stats,\n> which would make these examples work the same with or without the CTE\n> layers. 
I'm not sure it's worth the trouble though --- I'm dubious that\n> people would use a CTE for cases that are simple enough for the stats\n> estimates to be worth anything.\n\nI don't think we need this to be improved with CTEs, we've used them\nmostly as an attempt to make the queries faster (and it worked by luck,\nas it turned out). If we could get better estimates with plain CTE-free\nqueries, that'd definitely be the preferred solution.\n\nActually we need to improve only the first query, as the second one\n(joining over PK) is rather crazy.\n\nI'll check the eqjoinsel_inner and the other join estimates, but I'd bet\nthis all boils down to estimating selectivity of two correlated columns\n(because we're querying one and joining over another).\n\nthanks\nTomas\n\n",
"msg_date": "Wed, 10 Oct 2012 02:19:32 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why am I getting great/terrible estimates with these CTE queries?"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10\n15K SCSI drives which is runing Centos 6.2 x64. This server is mainly used\nfor inserting/updating large amounts of data via copy/insert/update\ncommands, and seldom for running select queries.\n\nHere are the relevant configuration parameters I changed:\n\nshared_buffers = 10GB\neffective_cache_size = 90GB\nwork_mem = 32MB\nmaintenance_work_mem = 512MB\ncheckpoint_segments = 64\ncheckpoint_completion_target = 0.8\n\nMy biggest concern are shared_buffers and effective_cache_size, should I\nincrease shared_buffers and decrease effective_cache_size? I read that\nvalues above 10GB for shared_buffers give lower performance, than smaller\namounts?\n\nfree is currently reporting (during the loading of data):\n\n$ free -m\n total used free shared buffers cached\nMem: 96730 96418 311 0 71 93120\n-/+ buffers/cache: 3227 93502\nSwap: 21000 51 20949\n\nSo it did a little swapping, but only minor, still I should probably\ndecrease shared_buffers so there is no swapping at all.\n\nThanks in advance,\nStrahinja",
"msg_date": "Wed, 10 Oct 2012 09:12:20 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffers/effective_cache_size on 96GB server"
},
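The `free -m` output quoted above is easier to interpret once buffers and page cache are folded back in: almost all of the "used" 96GB is reclaimable cache, not process memory. A small sketch of the arithmetic behind free's `-/+ buffers/cache` line, using the figures from the question:

```python
# Figures (MB) from the quoted `free -m` output.
total, used, free_mb = 96730, 96418, 311
buffers, cached = 71, 93120

# Memory actually held by processes, and memory available if the
# kernel dropped its caches -- the two numbers on free's second line.
app_used = used - buffers - cached      # 96418 - 71 - 93120
available = free_mb + buffers + cached  # 311 + 71 + 93120
print(app_used, available)  # -> 3227 93502
```

So only about 3.2GB is genuinely held by processes, which supports the later point in the thread that the minor swapping is hardly caused by shared_buffers.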
{
"msg_contents": "Hm, I just notices that shared_buffers + effective_cache_size = 100 > 96GB,\nwhich can't be right. effective_cache_size should probably be 80GB.\n\nStrahinja Kustudić | System Engineer | Nordeus\n\n\n\nOn Wed, Oct 10, 2012 at 9:12 AM, Strahinja Kustudić\n<[email protected]>wrote:\n\n> Hi everyone,\n>\n> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and\n> RAID10 15K SCSI drives which is runing Centos 6.2 x64. This server is\n> mainly used for inserting/updating large amounts of data via\n> copy/insert/update commands, and seldom for running select queries.\n>\n> Here are the relevant configuration parameters I changed:\n>\n> shared_buffers = 10GB\n> effective_cache_size = 90GB\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.8\n>\n> My biggest concern are shared_buffers and effective_cache_size, should I\n> increase shared_buffers and decrease effective_cache_size? I read that\n> values above 10GB for shared_buffers give lower performance, than smaller\n> amounts?\n>\n> free is currently reporting (during the loading of data):\n>\n> $ free -m\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n> -/+ buffers/cache: 3227 93502\n> Swap: 21000 51 20949\n>\n> So it did a little swapping, but only minor, still I should probably\n> decrease shared_buffers so there is no swapping at all.\n>\n> Thanks in advance,\n> Strahinja\n>",
"msg_date": "Wed, 10 Oct 2012 09:18:50 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On 10.10.2012 09:12, Strahinja Kustudić wrote:\n> Hi everyone,\n> \n> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and\n> RAID10 15K SCSI drives which is runing Centos 6.2 x64. This server is\n> mainly used for inserting/updating large amounts of data via\n> copy/insert/update commands, and seldom for running select queries.\n> \n> Here are the relevant configuration parameters I changed:\n> \n> shared_buffers = 10GB\n> effective_cache_size = 90GB\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.8\n> \n> My biggest concern are shared_buffers and effective_cache_size, should I\n> increase shared_buffers and decrease effective_cache_size? I read that\n> values above 10GB for shared_buffers give lower performance, than\n> smaller amounts?\n> \n> free is currently reporting (during the loading of data):\n> \n> $ free -m\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n> -/+ buffers/cache: 3227 93502\n> Swap: 21000 51 20949\n> \n> So it did a little swapping, but only minor, still I should probably\n> decrease shared_buffers so there is no swapping at all.\n\nThat's hardly caused by shared buffers. The main point is that\neffective_cache_size is just a hint to the optimizer how much cache\n(shared buffers + page cache) to expect. So it's unlikely PostgreSQL is\ngoing to allocate 100GB of RAM or something.\n\nWhat have you set to the main /proc/sys/vm/ parameters? Mainly these three:\n\n/proc/sys/vm/swappiness\n/proc/sys/vm/overcommit_memory\n/proc/sys/vm/overcommit_ratio\n\n\nTomas\n\n",
"msg_date": "Wed, 10 Oct 2012 09:32:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "Strahinja Kustudic wrote:\r\n>> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10 15K SCSI drives\r\n>> which is runing Centos 6.2 x64. This server is mainly used for inserting/updating large amounts of\r\n>> data via copy/insert/update commands, and seldom for running select queries.\r\n>> \r\n>> Here are the relevant configuration parameters I changed:\r\n>> \r\n>> shared_buffers = 10GB\r\n>> effective_cache_size = 90GB\r\n>> work_mem = 32MB\r\n>> maintenance_work_mem = 512MB\r\n>> checkpoint_segments = 64\r\n>> checkpoint_completion_target = 0.8\r\n>> \r\n>> My biggest concern are shared_buffers and effective_cache_size, should I increase shared_buffers\r\n>> and decrease effective_cache_size? I read that values above 10GB for shared_buffers give lower\r\n>> performance, than smaller amounts?\r\n>> \r\n>> free is currently reporting (during the loading of data):\r\n>> \r\n>> $ free -m\r\n>> total used free shared buffers cached\r\n>> Mem: 96730 96418 311 0 71 93120\r\n>> -/+ buffers/cache: 3227 93502\r\n>> Swap: 21000 51 20949\r\n>> \r\n>> So it did a little swapping, but only minor, still I should probably decrease shared_buffers so\r\n>> there is no swapping at all.\r\n\r\n> Hm, I just notices that shared_buffers + effective_cache_size = 100 > 96GB, which can't be right.\r\n> effective_cache_size should probably be 80GB.\r\n\r\nI think you misunderstood effective_cache_size.\r\nIt does not influence memory usage, but query planning.\r\nIt gives the planner an idea of how much memory there is for caching\r\ndata, including the filesystem cache.\r\n\r\nSo a good value for effective_cache_size would be\r\ntotal memory minus what the OS and others need minus what private\r\nmemory the PostgreSQL backends need.\r\nThe latter can be estimated as work_mem times max_connections.\r\n\r\nTo avoid swapping, consider setting vm.swappiness to 0 in\r\n/etc/sysctl.conf.\r\n\r\n10GB of shared_buffers is quite a lot.\r\nIf you can run 
realistic performance tests, start with a lower value\r\nand increase until you cannot see a notable improvement.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Wed, 10 Oct 2012 09:39:47 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
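Laurenz's sizing rule can be made concrete. In the sketch below, max_connections = 100 and 4GB reserved for the OS are illustrative assumptions (neither value appears in the thread); only the 96GB total and 32MB work_mem come from the question:

```python
# effective_cache_size ~= total RAM - OS/other - work_mem * max_connections,
# per the advice above. max_connections=100 and a 4GB OS reserve are
# hypothetical inputs, not values from the thread.
GB = 1024  # MB per GB

total_mb = 96 * GB
os_reserved_mb = 4 * GB       # assumed headroom for OS and other processes
work_mem_mb = 32
max_connections = 100         # assumed; not given in the thread

effective_cache_mb = total_mb - os_reserved_mb - work_mem_mb * max_connections
print(effective_cache_mb, effective_cache_mb / GB)  # -> 91008 MB, ~88.9 GB
```

With these assumptions the rule lands a little below the 90GB originally configured, and well below the 100GB sum of shared_buffers and effective_cache_size that Strahinja flags as impossible.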
{
"msg_contents": "On 10/10/2012 09:12, Strahinja Kustudić wrote:\n> Hi everyone,\n\nHello,\n\n>\n> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and \n> RAID10 15K SCSI drives which is runing Centos 6.2 x64. This server is \n> mainly used for inserting/updating large amounts of data via \n> copy/insert/update commands, and seldom for running select queries.\n>\n> Here are the relevant configuration parameters I changed:\n>\n> shared_buffers = 10GB\n\nGenerally going over 4GB for shared_buffers doesn't help.. some of the \noverhead of bgwriter and checkpoints is more or less linear in the size \nof shared_buffers ..\n\n> effective_cache_size = 90GB\n\neffective_cache_size should be ~75% of the RAM (if it's a dedicated server)\n\n> work_mem = 32MB\n\nwith 96GB of RAM I would raise default work_mem to something like 128MB\n\n> maintenance_work_mem = 512MB\n\nagain, with 96GB of ram you can raise maintenance_work_mem to something \nlike 4GB\n\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.8\n>\n> My biggest concern are shared_buffers and effective_cache_size, should \n> I increase shared_buffers and decrease effective_cache_size? I read \n> that values above 10GB for shared_buffers give lower performance, than \n> smaller amounts?\n>\n> free is currently reporting (during the loading of data):\n>\n> $ free -m\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n> -/+ buffers/cache: 3227 93502\n> Swap: 21000 51 20949\n>\n> So it did a little swapping, but only minor, still I should probably \n> decrease shared_buffers so there is no swapping at all.\n>\n> Thanks in advance,\n> Strahinja\n\nJulien\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 10 Oct 2012 10:11:30 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "Thanks for very fast replies everyone :)\n\n@Laurenz I know that effective cache size is only used for the query\nplanner, what I was saying is that if I tell it that it can have 90GB\ncached items, that is not trues, since the OS and Postgres process itself\ncan take more than 6GB, which would mean 90GB is not the correct value, but\nif effective_cache size should be shared_buffers+page cache as Tomas said,\nthan 90GB, won't be a problem.\n\n\n@Tomas here are the values:\n\n# cat /proc/sys/vm/swappiness\n60\n# cat /proc/sys/vm/overcommit_memory\n0\n# cat /proc/sys/vm/overcommit_ratio\n50\n\nI will turn of swappiness, I was meaning to do that, but I don't know much\nabout the overcommit settings, I will read what they do.\n\n\n@Julien thanks for the suggestions, I will tweak them like you suggested.\n\nStrahinja Kustudić | System Engineer | Nordeus\n\n\n\nOn Wed, Oct 10, 2012 at 10:11 AM, Julien Cigar <[email protected]> wrote:\n\n> On 10/10/2012 09:12, Strahinja Kustudić wrote:\n>\n>> Hi everyone,\n>>\n>\n> Hello,\n>\n>\n>\n>> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and\n>> RAID10 15K SCSI drives which is runing Centos 6.2 x64. This server is\n>> mainly used for inserting/updating large amounts of data via\n>> copy/insert/update commands, and seldom for running select queries.\n>>\n>> Here are the relevant configuration parameters I changed:\n>>\n>> shared_buffers = 10GB\n>>\n>\n> Generally going over 4GB for shared_buffers doesn't help.. some of the\n> overhead of bgwriter and checkpoints is more or less linear in the size of\n> shared_buffers ..\n>\n> effective_cache_size = 90GB\n>>\n>\n> effective_cache_size should be ~75% of the RAM (if it's a dedicated server)\n>\n> work_mem = 32MB\n>>\n>\n> with 96GB of RAM I would raise default work_mem to something like 128MB\n>\n> maintenance_work_mem = 512MB\n>>\n>\n> again, with 96GB of ram you can raise maintenance_work_mem to something\n> like 4GB\n>\n>\n> checkpoint_segments = 64\n>> checkpoint_completion_target = 0.8\n>>\n>> My biggest concern are shared_buffers and effective_cache_size, should I\n>> increase shared_buffers and decrease effective_cache_size? I read that\n>> values above 10GB for shared_buffers give lower performance, than smaller\n>> amounts?\n>>\n>> free is currently reporting (during the loading of data):\n>>\n>> $ free -m\n>> total used free shared buffers cached\n>> Mem: 96730 96418 311 0 71 93120\n>> -/+ buffers/cache: 3227 93502\n>> Swap: 21000 51 20949\n>>\n>> So it did a little swapping, but only minor, still I should probably\n>> decrease shared_buffers so there is no swapping at all.\n>>\n>> Thanks in advance,\n>> Strahinja\n>>\n>\n> Julien\n>\n>\n> --\n> No trees were killed in the creation of this message.\n> However, many electrons were terribly inconvenienced.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>",
"msg_date": "Wed, 10 Oct 2012 10:30:03 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On 10/10/2012 10:30, Strahinja Kustudić wrote:\n> Thanks for very fast replies everyone :)\n>\n> @Laurenz I know that effective cache size is only used for the query \n> planner, what I was saying is that if I tell it that it can have 90GB \n> cached items, that is not trues, since the OS and Postgres process \n> itself can take more than 6GB, which would mean 90GB is not the \n> correct value, but if effective_cache size should be \n> shared_buffers+page cache as Tomas said, than 90GB, won't be a problem.\n>\n>\n> @Tomas here are the values:\n>\n> # cat /proc/sys/vm/swappiness\n> 60\n> # cat /proc/sys/vm/overcommit_memory\n> 0\n> # cat /proc/sys/vm/overcommit_ratio\n> 50\n>\n> I will turn of swappiness, I was meaning to do that, but I don't know \n> much about the overcommit settings, I will read what they do.\n>\n>\n> @Julien thanks for the suggestions, I will tweak them like you suggested.\n>\n\nalso with 15k SCSI you can reduce random_page_cost to 3.5 (instead of 4.0)\nI also recommend to raise cpu_tuple_cost to 0.05 (instead of 0.01), set \nvm.swappiness to 0, vm.overcommit_memory to 2, and finally raise the \nread-ahead (something like 8192)\n\n> Strahinja Kustudić| System Engineer | Nordeus\n>\n>\n>\n> On Wed, Oct 10, 2012 at 10:11 AM, Julien Cigar <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On 10/10/2012 09:12, Strahinja Kustudić wrote:\n>\n> Hi everyone,\n>\n>\n> Hello,\n>\n>\n>\n> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB\n> RAM and RAID10 15K SCSI drives which is runing Centos 6.2 x64.\n> This server is mainly used for inserting/updating large\n> amounts of data via copy/insert/update commands, and seldom\n> for running select queries.\n>\n> Here are the relevant configuration parameters I changed:\n>\n> shared_buffers = 10GB\n>\n>\n> Generally going over 4GB for shared_buffers doesn't help.. 
some of\n> the overhead of bgwriter and checkpoints is more or less linear in\n> the size of shared_buffers ..\n>\n> effective_cache_size = 90GB\n>\n>\n> effective_cache_size should be ~75% of the RAM (if it's a\n> dedicated server)\n>\n> work_mem = 32MB\n>\n>\n> with 96GB of RAM I would raise default work_mem to something like\n> 128MB\n>\n> maintenance_work_mem = 512MB\n>\n>\n> again, with 96GB of ram you can raise maintenance_work_mem to\n> something like 4GB\n>\n>\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.8\n>\n> My biggest concern are shared_buffers and\n> effective_cache_size, should I increase shared_buffers and\n> decrease effective_cache_size? I read that values above 10GB\n> for shared_buffers give lower performance, than smaller amounts?\n>\n> free is currently reporting (during the loading of data):\n>\n> $ free -m\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n> -/+ buffers/cache: 3227 93502\n> Swap: 21000 51 20949\n>\n> So it did a little swapping, but only minor, still I should\n> probably decrease shared_buffers so there is no swapping at all.\n>\n> Thanks in advance,\n> Strahinja\n>\n>\n> Julien\n>\n>\n> -- \n> No trees were killed in the creation of this message.\n> However, many electrons were terribly inconvenienced.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Wed, 10 Oct 2012 10:52:34 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
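{
"msg_contents": "Editor's note: Julien's kernel-side suggestions (vm.swappiness, vm.overcommit_memory, read-ahead) are usually made persistent rather than set by hand. A minimal sketch — the file is written to /tmp only so it is harmless to run as-is; on a real CentOS 6 box the lines would typically be appended to /etc/sysctl.conf and loaded with `sysctl -p`, and read-ahead is a per-device blockdev setting, not a sysctl:\n\n```shell\n# Stage the VM settings recommended in this thread (values from the thread,\n# not universal defaults). /tmp keeps the sketch harmless to run.\ncat > /tmp/10-dbserver.conf <<'EOF'\nvm.swappiness = 0\nvm.overcommit_memory = 2\nEOF\n\n# On the real server (as root) this would be appended to /etc/sysctl.conf\n# and applied with:  sysctl -p\n# Read-ahead of 8192 sectors (4MB) is set per block device, e.g.:\n#   blockdev --setra 8192 /dev/sda\n\ngrep -c '^vm\\.' /tmp/10-dbserver.conf   # sanity check: 2 settings staged\n```\n\nThe device path and file name above are illustrative, not from the thread."
},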
{
"msg_contents": "Thanks for your help everyone.\n\nI set:\nshared_buffers = 4GB\neffective_cache_size = 72GB\nwork_mem = 128MB\nmaintenance_work_mem = 4GB\ncheckpoint_segments = 64\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 3.5\ncpu_tuple_cost = 0.05\n\nWhere can I get the values for random_page_cost and for cpu_tuple_cost\nwhere they depend on hardware? I know that for SSDs random_page_cost should\nbe 1.0, but I have no idea what value this should be for different types of\ndrives.\n\nI also set:\nvm.swappiness = 0\nvm.overcommit_memory = 2\nvm.overcommit_ratio = 50\n\nBut I don't understand why do I need to set overcommit_memory, since I only\nhave postgres running, nothing else would allocate memory anyway?\n\nI will set readahead later, first I want to see how is this working.\n\nStrahinja Kustudić | System Engineer | Nordeus\n\n\n\nOn Wed, Oct 10, 2012 at 10:52 AM, Julien Cigar <[email protected]> wrote:\n\n> On 10/10/2012 10:30, Strahinja Kustudić wrote:\n>\n> Thanks for very fast replies everyone :)\n>\n> @Laurenz I know that effective cache size is only used for the query\n> planner, what I was saying is that if I tell it that it can have 90GB\n> cached items, that is not trues, since the OS and Postgres process itself\n> can take more than 6GB, which would mean 90GB is not the correct value, but\n> if effective_cache size should be shared_buffers+page cache as Tomas said,\n> than 90GB, won't be a problem.\n>\n>\n> @Tomas here are the values:\n>\n> # cat /proc/sys/vm/swappiness\n> 60\n> # cat /proc/sys/vm/overcommit_memory\n> 0\n> # cat /proc/sys/vm/overcommit_ratio\n> 50\n>\n> I will turn of swappiness, I was meaning to do that, but I don't know much\n> about the overcommit settings, I will read what they do.\n>\n>\n> @Julien thanks for the suggestions, I will tweak them like you suggested.\n>\n>\n> also with 15k SCSI you can reduce random_page_cost to 3.5 (instead of 4.0)\n> I also recommend to raise cpu_tuple_cost to 0.05 (instead of 0.01), 
set\n> vm.swappiness to 0, vm.overcommit_memory to 2, and finally raise the\n> read-ahead (something like 8192)\n>\n>\n> Strahinja Kustudić | System Engineer | Nordeus\n>\n>\n>\n> On Wed, Oct 10, 2012 at 10:11 AM, Julien Cigar <[email protected]> wrote:\n>\n>> On 10/10/2012 09:12, Strahinja Kustudić wrote:\n>>\n>>> Hi everyone,\n>>>\n>>\n>> Hello,\n>>\n>>\n>>\n>>> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and\n>>> RAID10 15K SCSI drives which is runing Centos 6.2 x64. This server is\n>>> mainly used for inserting/updating large amounts of data via\n>>> copy/insert/update commands, and seldom for running select queries.\n>>>\n>>> Here are the relevant configuration parameters I changed:\n>>>\n>>> shared_buffers = 10GB\n>>>\n>>\n>> Generally going over 4GB for shared_buffers doesn't help.. some of the\n>> overhead of bgwriter and checkpoints is more or less linear in the size of\n>> shared_buffers ..\n>>\n>> effective_cache_size = 90GB\n>>>\n>>\n>> effective_cache_size should be ~75% of the RAM (if it's a dedicated\n>> server)\n>>\n>> work_mem = 32MB\n>>>\n>>\n>> with 96GB of RAM I would raise default work_mem to something like 128MB\n>>\n>> maintenance_work_mem = 512MB\n>>>\n>>\n>> again, with 96GB of ram you can raise maintenance_work_mem to something\n>> like 4GB\n>>\n>>\n>> checkpoint_segments = 64\n>>> checkpoint_completion_target = 0.8\n>>>\n>>> My biggest concern are shared_buffers and effective_cache_size, should I\n>>> increase shared_buffers and decrease effective_cache_size? 
I read that\n>>> values above 10GB for shared_buffers give lower performance, than smaller\n>>> amounts?\n>>>\n>>> free is currently reporting (during the loading of data):\n>>>\n>>> $ free -m\n>>> total used free shared buffers cached\n>>> Mem: 96730 96418 311 0 71 93120\n>>> -/+ buffers/cache: 3227 93502\n>>> Swap: 21000 51 20949\n>>>\n>>> So it did a little swapping, but only minor, still I should probably\n>>> decrease shared_buffers so there is no swapping at all.\n>>>\n>>> Thanks in advance,\n>>> Strahinja\n>>>\n>>\n>> Julien\n>>\n>>\n>> --\n>> No trees were killed in the creation of this message.\n>> However, many electrons were terribly inconvenienced.\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>\n>\n> --\n> No trees were killed in the creation of this message.\n> However, many electrons were terribly inconvenienced.\n>\n",
"msg_date": "Wed, 10 Oct 2012 13:33:45 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On 10/10/2012 02:12 AM, Strahinja Kustudić wrote:\n\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n\nWow, look at all that RAM. Something nobody has mentioned yet, you'll \nwant to set some additional kernel parameters for this, to avoid getting \noccasional IO storms caused by dirty memory flushes.\n\nvm.dirty_background_ratio = 1\nvm.dirty_ratio = 5\n\nAgain, these would go in sysctl.conf, or /etc/sysctl.d/10-dbserver.conf \nor something. If you have a newer kernel, look into \nvm.dirty_background_bytes, and vm.dirty_bytes.\n\nThe why of this is brought up occasionally here, but it comes down to \nyour vast amount of memory. The defaults for even late Linux kernels is \n5% for dirty_background_ratio, and 10% for dirty_ratio. So if you \nmultiply it out, the kernel will allow about 4.8GB of dirty memory \nbefore attempting to flush it to disk. If that number reaches 9.6, the \nsystem goes synchronous, and no other disk writes can take place until \n*all 9.6GB* is flushed. Even with a fast disk subsystem, that's a pretty \nbig gulp.\n\nThe idea here is to keep it writing in the background by setting a low \nlimit, so it never reaches a critical mass that causes it to snowball \ninto the more dangerous upper limit. If you have a newer kernel, the \nability to set \"bytes\" is a much more granular knob that can be used to \nmatch RAID buffer sizes. You'll probably want to experiment with this a \nbit before committing to a setting.\n\n> So it did a little swapping, but only minor, still I should probably\n> decrease shared_buffers so there is no swapping at all.\n\nDon't worry about that amount of swapping. As others have said here, you \ncan reduce that to 0, and even then, the OS will still swap something \noccasionally. 
It's really just a hint to the kernel how much swapping \nyou want to go on, and it's free to ignore it in cases where it knows \nsome data won't be accessed after initialization or something, so it \nswaps it out anyway.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 10 Oct 2012 08:09:56 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
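{
"msg_contents": "Editor's note: Shaun's 4.8GB and 9.6GB figures follow directly from the 5%/10% ratio defaults he cites, applied to the ~96GB of RAM reported in this thread. The multiplication, spelled out:\n\n```shell\nmem_mb=96730                      # total RAM from the `free -m` output in this thread\nbg_mb=$(( mem_mb * 5 / 100 ))     # dirty_background_ratio default of 5%\nhard_mb=$(( mem_mb * 10 / 100 ))  # dirty_ratio default of 10%\necho \"background writeback kicks in at ${bg_mb} MB (~4.8GB)\"\necho \"synchronous writeback kicks in at ${hard_mb} MB (~9.6GB)\"\n```\n\nWith the lower settings Shaun suggests (1%/5%), the same arithmetic puts the thresholds near 1GB and 4.8GB instead."
},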
{
"msg_contents": "Shaun,\n\nrunning these commands:\n\n#sysctl vm.dirty_ratio\nvm.dirty_ratio = 40\n# sysctl vm.dirty_background_ratio\nvm.dirty_background_ratio = 10\n\nshows that these values are even higher by default. When you said RAID\nbuffer size, you meant the controllers cache memory size?\n\nRegards,\nStrahinja\n\n\nOn Wed, Oct 10, 2012 at 3:09 PM, Shaun Thomas <[email protected]>wrote:\n\n> On 10/10/2012 02:12 AM, Strahinja Kustudić wrote:\n>\n> total used free shared buffers cached\n>> Mem: 96730 96418 311 0 71 93120\n>>\n>\n> Wow, look at all that RAM. Something nobody has mentioned yet, you'll want\n> to set some additional kernel parameters for this, to avoid getting\n> occasional IO storms caused by dirty memory flushes.\n>\n> vm.dirty_background_ratio = 1\n> vm.dirty_ratio = 5\n>\n> Again, these would go in sysctl.conf, or /etc/sysctl.d/10-dbserver.conf or\n> something. If you have a newer kernel, look into vm.dirty_background_bytes,\n> and vm.dirty_bytes.\n>\n> The why of this is brought up occasionally here, but it comes down to your\n> vast amount of memory. The defaults for even late Linux kernels is 5% for\n> dirty_background_ratio, and 10% for dirty_ratio. So if you multiply it out,\n> the kernel will allow about 4.8GB of dirty memory before attempting to\n> flush it to disk. If that number reaches 9.6, the system goes synchronous,\n> and no other disk writes can take place until *all 9.6GB* is flushed. Even\n> with a fast disk subsystem, that's a pretty big gulp.\n>\n> The idea here is to keep it writing in the background by setting a low\n> limit, so it never reaches a critical mass that causes it to snowball into\n> the more dangerous upper limit. If you have a newer kernel, the ability to\n> set \"bytes\" is a much more granular knob that can be used to match RAID\n> buffer sizes. 
You'll probably want to experiment with this a bit before\n> committing to a setting.\n>\n>\n> So it did a little swapping, but only minor, still I should probably\n>> decrease shared_buffers so there is no swapping at all.\n>>\n>\n> Don't worry about that amount of swapping. As others have said here, you\n> can reduce that to 0, and even then, the OS will still swap something\n> occasionally. It's really just a hint to the kernel how much swapping you\n> want to go on, and it's free to ignore it in cases where it knows some data\n> won't be accessed after initialization or something, so it swaps it out\n> anyway.\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for terms and conditions related to this email\n>\n",
"msg_date": "Wed, 10 Oct 2012 16:35:39 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On 10/10/2012 09:35 AM, Strahinja Kustudić wrote:\n\n> #sysctl vm.dirty_ratio\n> vm.dirty_ratio = 40\n> # sysctl vm.dirty_background_ratio\n> vm.dirty_background_ratio = 10\n\nOuuuuch. That looks a lot like an old RHEL or CentOS system. Change \nthose ASAP. Currently your system won't start writing dirty buffers \nuntil it hits 9.6GB. :(\n\n> shows that these values are even higher by default. When you said\n> RAID buffer size, you meant the controllers cache memory size?\n\nYeah, that. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 10 Oct 2012 09:38:15 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "I will change those, but I don't think this is that big of an issue if most\nof the IO is done by Postgres, since Postgres has it's own mechanism to\ntell the OS to sync the data to disk. For example when it's writing a wal\nfile, or when it's writing a check point, those do not get cached.\n\nRegards,\nStrahinja\n\n\nOn Wed, Oct 10, 2012 at 4:38 PM, Shaun Thomas <[email protected]>wrote:\n\n> On 10/10/2012 09:35 AM, Strahinja Kustudić wrote:\n>\n> #sysctl vm.dirty_ratio\n>> vm.dirty_ratio = 40\n>> # sysctl vm.dirty_background_ratio\n>> vm.dirty_background_ratio = 10\n>>\n>\n> Ouuuuch. That looks a lot like an old RHEL or CentOS system. Change those\n> ASAP. Currently your system won't start writing dirty buffers until it hits\n> 9.6GB. :(\n>\n>\n> shows that these values are even higher by default. When you said\n>> RAID buffer size, you meant the controllers cache memory size?\n>>\n>\n> Yeah, that. :)\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for terms and conditions related to this email\n>\n",
"msg_date": "Wed, 10 Oct 2012 16:49:47 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On 10/10/2012 09:49 AM, Strahinja Kustudić wrote:\n\n> I will change those, but I don't think this is that big of an issue if\n> most of the IO is done by Postgres, since Postgres has it's own\n> mechanism to tell the OS to sync the data to disk. For example when it's\n> writing a wal file, or when it's writing a check point, those do not get\n> cached.\n\nYou'd be surprised. Greg Smith did a bunch of work a couple years back \nthat supported these changes. Most DBAs with heavily utilized systems \ncould even see this in action by turning on checkpoint logging, and \nthere's an occasional period where the sync time lags into the minutes \ndue to a synchronous IO switch.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 10 Oct 2012 09:54:19 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
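{
"msg_contents": "Editor's note: the checkpoint logging Shaun refers to is a one-line postgresql.conf change (the parameter exists in the 9.1 era discussed here). Each checkpoint then logs its write/sync/total timings, and a sync time stretching into minutes is the synchronous-IO symptom he describes:\n\n```\n# postgresql.conf — log one line per checkpoint, including timing breakdown\nlog_checkpoints = on\n```\n\nA reload (pg_ctl reload) is enough to pick this up; no restart is needed."
},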
{
"msg_contents": "On Wed, Oct 10, 2012 at 09:12:20AM +0200, Strahinja Kustudić wrote:\n> Hi everyone,\n> \n> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10 15K\n> SCSI drives which is runing Centos 6.2 x64. This server is mainly used for\n> inserting/updating large amounts of data via copy/insert/update commands, and\n> seldom for running select queries.\n> \n> Here are the relevant configuration parameters I changed:\n> \n> shared_buffers = 10GB\n> effective_cache_size = 90GB\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.8\n> \n> My biggest concern are shared_buffers and effective_cache_size, should I\n> increase shared_buffers and decrease effective_cache_size? I read that values\n> above 10GB for shared_buffers give lower performance, than smaller amounts?\n> \n> free is currently reporting (during the loading of data):\n> \n> $ free -m\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n> -/+ buffers/cache: 3227 93502\n> Swap: 21000 51 20949\n> \n> So it did a little swapping, but only minor, still I should probably decrease\n> shared_buffers so there is no swapping at all.\n\nYou might want to read my blog entry about swap space:\n\n\thttp://momjian.us/main/blogs/pgblog/2012.html#July_25_2012\n\nIt is probably swapping unused memory _out_ to make more use of RAM for\ncache.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 10 Oct 2012 12:05:34 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 10:11:30AM +0200, Julien Cigar wrote:\n> On 10/10/2012 09:12, Strahinja Kustudić wrote:\n> >Hi everyone,\n> \n> Hello,\n> \n> >\n> >I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM\n> >and RAID10 15K SCSI drives which is runing Centos 6.2 x64. This\n> >server is mainly used for inserting/updating large amounts of data\n> >via copy/insert/update commands, and seldom for running select\n> >queries.\n> >\n> >Here are the relevant configuration parameters I changed:\n> >\n> >shared_buffers = 10GB\n> \n> Generally going over 4GB for shared_buffers doesn't help.. some of\n> the overhead of bgwriter and checkpoints is more or less linear in\n> the size of shared_buffers ..\n> \n> >effective_cache_size = 90GB\n> \n> effective_cache_size should be ~75% of the RAM (if it's a dedicated server)\n\nWhy guess? Use 'free' to tell you the kernel cache size:\n\n\thttp://momjian.us/main/blogs/pgblog/2012.html#May_4_2012\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 10 Oct 2012 12:10:12 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
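{
"msg_contents": "Editor's note: Bruce's point — read the kernel cache size off `free` rather than guessing a percentage — can be sketched with awk. The field position below assumes the older procps layout quoted in this thread, where `cached` is the 7th field of the `Mem:` row; newer versions of `free` merge buffers and cache into a single column, so the extraction would differ there:\n\n```shell\n# The `free -m` output quoted earlier in the thread\nsample='             total       used       free     shared    buffers     cached\nMem:         96730      96418        311          0         71      93120\n-/+ buffers/cache:       3227      93502\nSwap:        21000         51      20949'\n\n# On a live box this would be:  free -m | awk '/^Mem:/ { print $7 }'\necho \"$sample\" | awk '/^Mem:/ { printf \"kernel cache: %d MB\\n\", $7 }'\n```\n\nThat 93120 MB figure is the starting point Bruce suggests for effective_cache_size on this server."
},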
{
"msg_contents": "On Wed, Oct 10, 2012 at 1:10 PM, Bruce Momjian <[email protected]> wrote:\n>> >shared_buffers = 10GB\n>>\n>> Generally going over 4GB for shared_buffers doesn't help.. some of\n>> the overhead of bgwriter and checkpoints is more or less linear in\n>> the size of shared_buffers ..\n>>\n>> >effective_cache_size = 90GB\n>>\n>> effective_cache_size should be ~75% of the RAM (if it's a dedicated server)\n>\n> Why guess? Use 'free' to tell you the kernel cache size:\n>\n> http://momjian.us/main/blogs/pgblog/2012.html#May_4_2012\n\nWhy does nobody every mention that concurrent access has to be taken\ninto account?\n\nIe: if I expect concurrent access to 10 really big indices, I'll set\neffective_cache_size = free ram / 10\n\n",
"msg_date": "Wed, 10 Oct 2012 14:05:20 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 02:05:20PM -0300, Claudio Freire wrote:\n> On Wed, Oct 10, 2012 at 1:10 PM, Bruce Momjian <[email protected]> wrote:\n> >> >shared_buffers = 10GB\n> >>\n> >> Generally going over 4GB for shared_buffers doesn't help.. some of\n> >> the overhead of bgwriter and checkpoints is more or less linear in\n> >> the size of shared_buffers ..\n> >>\n> >> >effective_cache_size = 90GB\n> >>\n> >> effective_cache_size should be ~75% of the RAM (if it's a dedicated server)\n> >\n> > Why guess? Use 'free' to tell you the kernel cache size:\n> >\n> > http://momjian.us/main/blogs/pgblog/2012.html#May_4_2012\n> \n> Why does nobody every mention that concurrent access has to be taken\n> into account?\n> \n> Ie: if I expect concurrent access to 10 really big indices, I'll set\n> effective_cache_size = free ram / 10\n\nIt is true that the estimate assumes a single session is using all the\ncache, but I think that is based on the assumion is that there is a\nmajor overlap between the cache needs of multiple sessions.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 10 Oct 2012 13:56:39 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On 10/10/2012 12:05 PM, Claudio Freire wrote:\n\n> Why does nobody every mention that concurrent access has to be taken\n> into account?\n\nThat's actually a good point. But if you have one giant database, the \noverlap of which tables are being accessed by various sessions is going \nto be immense.\n\nThere probably should be a point about this in the docs, though. There \nare more and more shared-hosting setups or places that spread their data \nhorizontally across separate databases for various clients, and in those \ncases, parallel usage does not imply overlap.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 10 Oct 2012 13:18:49 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 3:18 PM, Shaun Thomas <[email protected]> wrote:\n>> Why does nobody every mention that concurrent access has to be taken\n>> into account?\n>\n>\n> That's actually a good point. But if you have one giant database, the\n> overlap of which tables are being accessed by various sessions is going to\n> be immense.\n\nThat's why I said \"several huge indices\". If regularly accessed\nindices are separate, and big, it means they don't overlap nor do they\nfit in any cache.\n\n",
"msg_date": "Wed, 10 Oct 2012 15:24:42 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 12:12 AM, Strahinja Kustudić\n<[email protected]> wrote:\n> Hi everyone,\n>\n> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10\n> 15K SCSI drives which is runing Centos 6.2 x64.\n\nHow many drives in the RAID?\n\n> This server is mainly used\n> for inserting/updating large amounts of data via copy/insert/update\n> commands, and seldom for running select queries.\n\nAre there a lot of indexes?\n\n>\n> Here are the relevant configuration parameters I changed:\n>\n> shared_buffers = 10GB\n> effective_cache_size = 90GB\n> work_mem = 32MB\n> maintenance_work_mem = 512MB\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.8\n>\n> My biggest concern are shared_buffers and effective_cache_size, should I\n> increase shared_buffers and decrease effective_cache_size?\n\nAre you experiencing performance problems? If so, what are they?\n\n> I read that\n> values above 10GB for shared_buffers give lower performance, than smaller\n> amounts?\n\nThere are reports that large shared_buffers can lead to latency\nspikes. I don't know how sensitive your work load is to latency,\nthough. Nor how much those reports apply to 9.1.\n\n>\n> free is currently reporting (during the loading of data):\n>\n> $ free -m\n> total used free shared buffers cached\n> Mem: 96730 96418 311 0 71 93120\n> -/+ buffers/cache: 3227 93502\n> Swap: 21000 51 20949\n>\n> So it did a little swapping, but only minor,\n\nThe kernel has, over the entire time the server has been up, found 51\nMB of process memory to swap. That doesn't really mean anything. Do\nyou see active swapping going on, like with vmstat?\n\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 10 Oct 2012 12:30:46 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "@Bruce Thanks for your articles, after reading them all I don't think\ndisabling swap is a good idea now. Also you said to see the\neffective_cache_size I should check it with free. My question is should I\nuse the value that free is showing as cached, or a little lower one, since\nnot everything in the cache is because of Postgres.\n\n@Claudio So you are basically saying that if I have set\neffective_cache_size to 10GB and I have 10 concurrent processes which are\nusing 10 different indices which are for example 2GB, it would be better to\nset the effective_cache_size to 1GB? Since if I leave it at 10GB each\nrunning process query planner will think the whole index is in cache and\nthat won't be true? Did I get that right?\n\n@Jeff I have 4 drives in RAID10. The database has around 80GB of indices.\nI'm not experiencing any slow downs, I would just like to increase the\nperformance of update/insert, since it needs to insert a lot of data and to\nmake the select queries faster since they are done on a lot of big tables.\nI am experiencing a lot of performance problems when autovacuum kicks in\nfor a few big tables, since it slows things down a lot. I didn't notice\nany swapping and I know those 51MB which were swapped were just staying\nthere, so swap isn't an issue at all.\n\nStrahinja Kustudić | System Engineer | Nordeus\n\n\n\nOn Wed, Oct 10, 2012 at 9:30 PM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Oct 10, 2012 at 12:12 AM, Strahinja Kustudić\n> <[email protected]> wrote:\n> > Hi everyone,\n> >\n> > I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and\n> RAID10\n> > 15K SCSI drives which is runing Centos 6.2 x64.\n>\n> How many drives in the RAID?\n>\n> > This server is mainly used\n> > for inserting/updating large amounts of data via copy/insert/update\n> > commands, and seldom for running select queries.\n>\n> Are there a lot of indexes?\n>\n> >\n> > Here are the relevant configuration parameters I changed:\n> >\n> > shared_buffers = 10GB\n> > effective_cache_size = 90GB\n> > work_mem = 32MB\n> > maintenance_work_mem = 512MB\n> > checkpoint_segments = 64\n> > checkpoint_completion_target = 0.8\n> >\n> > My biggest concern are shared_buffers and effective_cache_size, should I\n> > increase shared_buffers and decrease effective_cache_size?\n>\n> Are you experiencing performance problems? If so, what are they?\n>\n> > I read that\n> > values above 10GB for shared_buffers give lower performance, than smaller\n> > amounts?\n>\n> There are reports that large shared_buffers can lead to latency\n> spikes. I don't know how sensitive your work load is to latency,\n> though. Nor how much those reports apply to 9.1.\n>\n> >\n> > free is currently reporting (during the loading of data):\n> >\n> > $ free -m\n> > total used free shared buffers cached\n> > Mem: 96730 96418 311 0 71 93120\n> > -/+ buffers/cache: 3227 93502\n> > Swap: 21000 51 20949\n> >\n> > So it did a little swapping, but only minor,\n>\n> The kernel has, over the entire time the server has been up, found 51\n> MB of process memory to swap. That doesn't really mean anything. Do\n> you see active swapping going on, like with vmstat?\n>\n>\n> Cheers,\n>\n> Jeff\n>\n",
"msg_date": "Wed, 10 Oct 2012 22:12:51 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 5:12 PM, Strahinja Kustudić\n<[email protected]> wrote:\n> @Claudio So you are basically saying that if I have set effective_cache_size\n> to 10GB and I have 10 concurrent processes which are using 10 different\n> indices which are for example 2GB, it would be better to set the\n> effective_cache size to 1GB? Since if I leave it at 10GB each running\n> process query planner will think the whole index is in cache and that won't\n> be true? Did I get that right?\n\nYep. You might get away with setting 2GB, if you're willing to bet\nthere won't be 100% concurrency. But the safest setting would be 1G.\n\n",
"msg_date": "Wed, 10 Oct 2012 17:48:18 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 10:12:51PM +0200, Strahinja Kustudić wrote:\n> @Bruce Thanks for your articles, after reading them all I don't think disabling\n> swap is a good idea now. Also you said to see the effective_cache_size I should\n> check it with free. My question is should I use the value that free is showing\n> as cached, or a little lower one, since not everything in the cache is because\n> of Postgres.\n\nWell, you are right that some of that might not be Postgres, so yeah,\nyou can lower it somewhat.\n\n> @Claudio So you are basically saying that if I have set effective_cache_size to\n> 10GB and I have 10 concurrent processes which are using 10 different indices\n> which are for example 2GB, it would be better to set the effective_cache size\n> to 1GB? Since if I leave it at 10GB each running process query planner will\n> think the whole index is in cache and that won't be true? Did I get that right?\n\nWell, the real question is whether, while traversing the index, some\nof the pages are going to be removed from the cache by other processes'\ncache usage. effective_cache_size does not assume the cache will remain\nbetween queries.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 10 Oct 2012 17:03:15 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "Hi,\n\nOn 10 October 2012 19:11, Julien Cigar <[email protected]> wrote:\n>> shared_buffers = 10GB\n>\n>\n> Generally going over 4GB for shared_buffers doesn't help.. some of the\n> overhead of bgwriter and checkpoints is more or less linear in the size of\n> shared_buffers ..\n\nNothing is black or white; It's all shades of Grey :) It depends on\nworkload. In my case external consultants recommended 8GB and I was\nable to increase it up to 10GB. This was mostly read-only workload.\nFrom my experience large buffer cache acts as handbrake for\nwrite-heavy workloads.\n\n-- \nOndrej Ivanic\n([email protected])\n(http://www.linkedin.com/in/ondrejivanic)\n\n",
"msg_date": "Thu, 11 Oct 2012 09:06:09 +1100",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 7:06 PM, Ondrej Ivanič <[email protected]> wrote:\n>> Generally going over 4GB for shared_buffers doesn't help.. some of the\n>> overhead of bgwriter and checkpoints is more or less linear in the size of\n>> shared_buffers ..\n>\n> Nothing is black or white; It's all shades of Grey :) It depends on\n> workload. In my case external consultants recommended 8GB and I was\n> able to increase it up to 10GB. This was mostly read-only workload.\n> From my experience large buffer cache acts as handbrake for\n> write-heavy workloads.\n\nWhich makes me ask...\n\n...why can't checkpoint_timeout be set above 1h? Mostly for the\ncheckpoint target thing.\n\nI know, you'd need an unholy amount of WAL and recovery time, but\nmodern systems I think can handle that (especially if you don't care\nmuch about recovery time).\n\nI usually set checkpoint_timeout to approach the time between periodic\nmass updates, and it works rather nice. Except when those updates are\nspaced more than 1h, my hands are tied.\n\n",
"msg_date": "Wed, 10 Oct 2012 19:14:45 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 2:03 PM, Bruce Momjian <[email protected]> wrote:\n> On Wed, Oct 10, 2012 at 10:12:51PM +0200, Strahinja Kustudić wrote:\n\n>> @Claudio So you are basically saying that if I have set effective_cache_size to\n>> 10GB and I have 10 concurrent processes which are using 10 different indices\n>> which are for example 2GB, it would be better to set the effective_cache size\n>> to 1GB? Since if I leave it at 10GB each running process query planner will\n>> think the whole index is in cache and that won't be true? Did I get that right?\n>\n> Well, the real question is whether, while traversing the index, if some\n> of the pages are going to be removed from the cache by other process\n> cache usage. effective_cache_size is not figuring the cache will remain\n> between queries.\n\nDoes anyone see effective_cache_size make a difference anyway? If so,\nin what circumstances?\n\nIn my hands, queries for which effective_cache_size might come into\nplay (for deciding between seq scan and index scan) are instead\nplanned as bitmap scans.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 10 Oct 2012 15:33:11 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 7:33 PM, Jeff Janes <[email protected]> wrote:\n>> Well, the real question is whether, while traversing the index, if some\n>> of the pages are going to be removed from the cache by other process\n>> cache usage. effective_cache_size is not figuring the cache will remain\n>> between queries.\n>\n> Does anyone see effective_cache_size make a difference anyway? If so,\n> in what circumstances?\n\nIn my case, if I set it too high, I get impossibly suboptimal plans\nwhen an index scan over millions of rows hits the disk way too often\nway too randomly. The difference is minutes for a seqscan vs hours for\nthe index scan. In fact, I prefer setting it too low than too high.\n\n",
"msg_date": "Wed, 10 Oct 2012 19:37:10 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire <[email protected]> wrote:\n> On Wed, Oct 10, 2012 at 7:33 PM, Jeff Janes <[email protected]> wrote:\n>>> Well, the real question is whether, while traversing the index, if some\n>>> of the pages are going to be removed from the cache by other process\n>>> cache usage. effective_cache_size is not figuring the cache will remain\n>>> between queries.\n>>\n>> Does anyone see effective_cache_size make a difference anyway? If so,\n>> in what circumstances?\n>\n> In my case, if I set it too high, I get impossibly suboptimal plans\n> when an index scan over millions of rows hits the disk way too often\n> way too randomly. The difference is minutes for a seqscan vs hours for\n> the index scan. In fact, I prefer setting it too low than too high.\n\nThere's a corollary for very fast disk subsystems. If you've got say\n40 15krpm disks in a RAID-10 you can get sequential read speeds into\nthe gigabytes per second, so that sequential page access costs MUCH\nlower than random page access, to the point that if seq page access is\nrated a 1, random page access should be much higher, sometimes on the\norder of 100 or so. I.e. sequential accesses are almost always\npreferred, especially if you're getting more than a tiny portion of\nthe table at one time.\n\nAs for the arguments for / against having a swap space, no one has\nmentioned the one I've run into on many older kernels, and that is\nBUGs. I have had to turn off swap on very large mem machines with\n2.6.xx series kernels in the past. These machines all had properly\nset vm.* settings for dirty buffers and percent etc. Didn't matter,\nas after 2 to 4 weeks of hard working uptimes, I'd get an alert on the\ndb server for high load, log in, and see kswapd working its butt off\ndoing... NOTHING. Load would be in the 50 to 150 range. iostat showed\nNOTHING in terms of si/so/bi/bo and so on. kswapd wasn't in a D\n(iowait) state, but rather R, pegging a CPU core at 100% while\nrunning, and apparently blocking a lot of other processes that wanted\nto access memory, all of which were S(leeping) or R(unning). Two\nseconds after a sudo swapoff -a completed, the machine went back to\na load of 2 to 5, as was normal for it. Honestly if you're running out\nof memory on a machine with 256G and needing swap, you've got other\nvery real memory usage issues you've been ignoring to get to that\npoint.\n\nAre all those bugs fixed in the 3.0.latest kernels? Not sure, but I\nhaven't had this issue on any big memory servers lately and they've\nall had swap turned on.\n\n",
"msg_date": "Wed, 10 Oct 2012 23:36:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "Jeff,\n\n> Does anyone see effective_cache_size make a difference anyway? If so,\n> in what circumstances?\n\nE_C_S, together with random_page_cost, the table and index sizes, the\nrow estimates and the cpu_* costs, form an equation which estimates the\ncost of doing various kinds of scans, particularly index scan vs. table\nscan. If you have an extremely small database (< shared_buffers) or a\nvery large database ( > 50X RAM ), the setting for E_C_S probably\ndoesn't matter, but in the fairly common case where some tables and\nindexes fit in RAM and some don't, it matters.\n\n> In my hands, queries for which effective_cache_size might come into\n> play (for deciding between seq scan and index scan) are instead\n> planned as bitmap scans.\n\nYou have a very unusual workload, or a very small database.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Thu, 11 Oct 2012 11:17:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Thu, Oct 11, 2012 at 11:17 AM, Josh Berkus <[email protected]> wrote:\n>\n>> Does anyone see effective_cache_size make a difference anyway? If so,\n>> in what circumstances?\n>\n> E_C_S, together with random_page_cost, the table and index sizes, the\n> row estimates and the cpu_* costs, form an equation which estimates the\n> cost of doing various kinds of scans, particularly index scan vs. table\n> scan.\n\nE_C_S only comes into play when the same table pages are (predicted to\nbe) visited repeatedly during the index scan, but this is the same\nsituation in which a bitmap scan is generally preferred anyway. In\nfact the two seem to be conceptually very similar (either avoid\nactually visiting the block repeatedly, or avoid the IO cost of\nvisiting the block repeatedly), and I'm not sure why bitmap scans\ncome out on top--there doesn't seem to be a CPU cost estimate of\nvisiting a block which is assumed to already be in memory, nor is\nbitmap scan given credit for the use of effective_io_concurrency.\n\nBut I found a simple case (over in \"Unused index influencing\nsequential scan plan\") which is very sensitive to E_C_S. When the\nindex scan is being done to avoid a costly sort or aggregation, then\nit can't be usefully replaced with a bitmap scan since it won't\nproduce index-order sorted output.\n\n>> In my hands, queries for which effective_cache_size might come into\n>> play (for deciding between seq scan and index scan) are instead\n>> planned as bitmap scans.\n>\n> You have a very unusual workload, or a very small database.\n\nI think all real workloads are unusual, otherwise benchmarking would\nbe easy...but since complex queries are intractable to figure out what\nthe planner is thinking, I'm biased to using simple ones when trying\nto figure out general principles. I can make the database look as big\nor small as I want (relative to RAM), by feeding effective_cache_size\nfalse information.\n\nAnyway, it seems like the consequences of overestimating E_C_S (by\nunderestimating the number of processes that might expect to benefit\nfrom it concurrently) are worse than the consequences of\nunderestimating it--assuming you have the types of queries for which\nit makes much of a difference.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 18 Oct 2012 10:54:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 1:12 PM, Strahinja Kustudić\n<[email protected]> wrote:\n\n> @Claudio So you are basically saying that if I have set effective_cache_size\n> to 10GB and I have 10 concurrent processes which are using 10 different\n> indices which are for example 2GB,\n\nIt is the size of the table, not the index, which is primarily of\nconcern. However, that mostly factors into how postgres uses\neffective_cache_size, not how you set it.\n\n> it would be better to set the\n> effective_cache size to 1GB?\n\nIf 10GB were the correct setting for a system with only one process\ntrying to run that type of query at a time, then 1 GB would be the\ncorrect setting for 10 concurrent processes running that type of\nquery concurrently.\n\nBut, I think there is little reason to think that 10GB actually would\nbe the correct setting for the first case, so little reason to think\n1GB is the correct setting in the 2nd case.\n\nSince you have 96GB of RAM, I would think that 10GB is an appropriate\nsetting *already taking concurrency into account*, and would be too\nlow if you were not expecting any concurrency.\n\nIn any case, the setting of effective_cache_size shouldn't affect\nsimple inserts or copies at all, since those operations don't use\nlarge index range scans.\n\n\n> Since if I leave it at 10GB each running\n> process query planner will think the whole index is in cache and that won't\n> be true? Did I get that right?\n\nIt isn't mostly about how much of the index is in cache, but rather\nhow much of the table is in cache.\n\n>\n> @Jeff I have 4 drives in RADI10. The database has around 80GB of indices.\n\nThat seems like a pretty small disk set for a server of this size.\n\nDo you know what percentage of that 80GB of indices gets dirtied\nduring any given round of batch loading/updating? I think that that\ncould easily be your bottleneck, how fast you can write out dirtied\nindex pages, which are likely being written randomly rather than\nsequentially.\n\n\n> I'm not experiencing any slow downs, I would just like to increase the\n> performance of update/insert, since it needs to insert a lot of data and to\n> make the select queries faster since they are done on a lot of big tables.\n\nI think these two things are in tension. The faster the inserts and\nupdates run, the more resources they will take away from the selects\nduring those periods. If you are doing batch copies, then as long as\none batch has finished before the next one needs to start, isn't that\nfast enough? Maybe the goal should be to throttle the inserts so that\nthe selects see a more steady competition for IO.\n\n> I\n> am experiencing a lot of performance problems when autovacuum kicks in for a\n> few big tables, since it slows downs things a lot.\n\nYou can tune the autovacuum to make them slower. But it sounds like\nmaybe you should have put more money into spindles and less into CPU\ncores. (I think that is a very common situation to be in).\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 18 Oct 2012 12:23:56 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 10:36 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire <[email protected]> wrote:\n>>\n>> In my case, if I set it too high, I get impossibly suboptimal plans\n>> when an index scan over millions of rows hits the disk way too often\n>> way too randomly. The difference is minutes for a seqscan vs hours for\n>> the index scan. In fact, I prefer setting it too low than too high.\n>\n> There's a corollary for very fast disk subsystems. If you've got say\n> 40 15krpm disks in a RAID-10 you can get sequential read speeds into\n> the gigabytes per second, so that sequential page access costs MUCH\n> lower than random page access, to the point that if seq page access is\n> rated a 1, random page access should be much higher, sometimes on the\n> order of 100 or so.\n\nOn the other hand, if you have 40 very busy connections, then if they\nare all doing sequential scans on different tables they will interfere\nwith each other and will have to divide up the RAID throughput, while\nif they are doing random fetches they will get along nicely on that\nRAID. So you have to know how much concurrency of the relevant type\nyou expect to see.\n\nThe default page cost settings already assume that random fetches are\nfar more likely to be cache hits than sequential fetches are. If that\nis not true, then the default random page cost is way too low,\nregardless of the number of spindles or the concurrency.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 18 Oct 2012 12:50:32 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Thu, Oct 18, 2012 at 4:23 PM, Jeff Janes <[email protected]> wrote:\n>> @Claudio So you are basically saying that if I have set effective_cache_size\n>> to 10GB and I have 10 concurrent processes which are using 10 different\n>> indices which are for example 2GB,\n>\n> It is the size of the table, not the index, which is primarily of\n> concern. However, that mostly factors into how postgres uses\n> effective_cache_size, not how you set it.\n\nYou're right, I just noticed that a few minutes ago (talk about telepathy).\n\n",
"msg_date": "Thu, 18 Oct 2012 17:43:08 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
},
{
"msg_contents": "On Thu, Oct 18, 2012 at 1:50 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, Oct 10, 2012 at 10:36 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Oct 10, 2012 at 4:37 PM, Claudio Freire <[email protected]> wrote:\n>>>\n>>> In my case, if I set it too high, I get impossibly suboptimal plans\n>>> when an index scan over millions of rows hits the disk way too often\n>>> way too randomly. The difference is minutes for a seqscan vs hours for\n>>> the index scan. In fact, I prefer setting it too low than too high.\n>>\n>> There's a corollary for very fast disk subsystems. If you've got say\n>> 40 15krpm disks in a RAID-10 you can get sequential read speeds into\n>> the gigabytes per second, so that sequential page access costs MUCH\n>> lower than random page access, to the point that if seq page access is\n>> rated a 1, random page access should be much higher, sometimes on the\n>> order of 100 or so.\n>\n> On the other hand, if you have 40 very busy connections, then if they\n> are all doing sequential scans on different tables they will interfere\n> with each other and will have to divide up the RAID throughput, while\n> if they are doing random fetches they will get along nicely on that\n> RAID. So you have to know how much concurrency of the relevant type\n> you expect to see.\n\nMy experience is that both read ahead and caching will make it more\nresilient than that, to the point that it can take several times the\nnumber of read clients than the number of spindles before things get\nas slow as random access. While it's a performance knee to be aware\nof, often by the time you have enough clients for it to matter you've\nalready maxed out either memory bw or all the CPU cores. But again,\nthis is with dozens to hundreds of disk drives. Not four or eight\netc.\n\nAnd it's very access pattern dependent. On the last machine I had\nwhere we cranked up random versus sequential costs, it was a reporting\nserver with 5 or 6TB of data that regularly got trundled through\nfor aggregation. Daily slices of data, in partitions, were\n1GB to 10GB. For this machine the access patterns were almost never\nrandom. Just a handful of queries running in a random access pattern\ncould interfere with a 5 minute long sequential reporting query and\nmake it suddenly take hours. It had 16x7200rpm 2TB drives and would\nregularly handle three or four queries at a time, all running plenty\nfast. But crank up one thread of pgbench and it would slow to a\ncrawl.\n\n",
"msg_date": "Thu, 18 Oct 2012 16:06:24 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers/effective_cache_size on 96GB server"
}
] |
[
{
"msg_contents": "Hello! Is it possible to speed up the plan? \n\nhashes=# \\d hashcheck\n Table \"public.hashcheck\"\n Column | Type | Modifiers \n--------+-------------------+--------------------------------------------------------\n id | integer | not null default nextval('hashcheck_id_seq'::regclass)\n name | character varying | \n value | character varying | \nIndexes:\n \"btr\" btree (name)\n\nhashes=# select version();\n version \n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2), 64-bit\n(1 row)\n\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)\n Output: name, (count(name))\n Sort Key: hashcheck.name\n Sort Method: quicksort Memory: 315kB\n -> HashAggregate (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)\n Output: name, count(name)\n -> Seq Scan on public.hashcheck (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)\n Output: id, name, value\n Total runtime: 10351.989 ms\n(9 rows)\n\nhashes=# \n\nThank you.\n\n",
"msg_date": "Wed, 10 Oct 2012 20:09:02 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "hash aggregation"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 9:09 AM, Korisk <[email protected]> wrote:\n> Hello! Is it possible to speed up the plan?\n> Sort (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)\n> Output: name, (count(name))\n> Sort Key: hashcheck.name\n> Sort Method: quicksort Memory: 315kB\n> -> HashAggregate (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)\n> Output: name, count(name)\n> -> Seq Scan on public.hashcheck (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)\n> Output: id, name, value\n> Total runtime: 10351.989 ms\n\nAFAIU there are no query optimization solution for this.\n\nIt may be worth to create a table hashcheck_stat (name, cnt) and\nincrement/decrement the cnt values with triggers if you need to get\ncounts fast.\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Wed, 10 Oct 2012 14:30:09 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "Thanx for the advice, but increment table is not acceptable because it should be a plenty of them. \nNevertheless in the investigations was achieved some progress (7.4 sec vs 19.6 sec).\nBut using IOS scan you can see that there is an abnormal cost calculations it make me suspicious of little bugs.\n\nThanks for your answer.\n\n\nhashes=# \\d hashcheck;\n Table \"public.hashcheck\"\n Column | Type | Modifiers \n--------+-------------------+--------------------------------------------------------\n id | integer | not null default nextval('hashcheck_id_seq'::regclass)\n name | character varying | \n value | character varying | \nIndexes:\n \"hashcheck_name_idx\" btree (name)\n\nhashes=# vacuum hashcheck;\nVACUUM\nhashes=# set random_page_cost=0.1;\nSET\nhashes=# set seq_page_cost=0.1;\nSET\n\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=407366.72..407367.22 rows=200 width=32) (actual time=10712.505..10712.765 rows=4001 loops=1)\n Output: name, (count(name))\n Sort Key: hashcheck.name\n Sort Method: quicksort Memory: 315kB\n -> HashAggregate (cost=407357.08..407359.08 rows=200 width=32) (actual time=10702.285..10703.054 rows=4001 loops=1)\n Output: name, count(name)\n -> Seq Scan on public.hashcheck (cost=0.00..277423.12 rows=25986792 width=32) (actual time=0.054..2877.100 rows=25990002 loops=1)\n Output: id, name, value\n Total runtime: 10712.989 ms\n(9 rows)\n\nhashes=# set enable_seqscan = off;\nSET\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN 
\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=10000000000.00..10000528610.88 rows=200 width=32) (actual time=0.116..7452.005 rows=4001 loops=1)\n Output: name, count(name)\n -> Index Only Scan Backward using hashcheck_name_idx on public.hashcheck (cost=10000000000.00..10000398674.92 rows=25986792 width=32)\n (actual time=0.104..3785.767 rows=25990002 loops=1)\n Output: name\n Heap Fetches: 0\n Total runtime: 7452.509 ms\n(6 rows)\n\nThanks to the wizardry described at:\nhttp://www.sql.ru/forum/actualthread.aspx?tid=974484\n\n11.10.2012, 01:30, \"Sergey Konoplev\" <[email protected]>:\n> On Wed, Oct 10, 2012 at 9:09 AM, Korisk <[email protected]> wrote:\n>\n>> Hello! Is it possible to speed up the plan?\n>> Sort (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)\n>> Output: name, (count(name))\n>> Sort Key: hashcheck.name\n>> Sort Method: quicksort Memory: 315kB\n>> -> HashAggregate (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)\n>> Output: name, count(name)\n>> -> Seq Scan on public.hashcheck (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)\n>> Output: id, name, value\n>> Total runtime: 10351.989 ms\n>\n> AFAIU there are no query optimization solution for this.\n>\n> It may be worth to create a table hashcheck_stat (name, cnt) and\n> increment/decrement the cnt values with triggers if you need to get\n> counts fast.\n>\n> --\n> Sergey Konoplev\n>\n> a database and software architect\n> http://www.linkedin.com/in/grayhemp\n>\n> Jabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Thu, 11 Oct 2012 08:13:28 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On 10/11/2012 12:13 PM, Korisk wrote:\n> Thanx for the advice, but increment table is not acceptable because it should be a plenty of them.\n> Nevertheless in the investigations was achieved some progress (7.4 sec vs 19.6 sec).\n> But using IOS scan\n\n\"IOS scan\" ?\n\nDo you mean some kind of I/O monitoring tool?\n\n> you can see that there is an abnormal cost calculations it make me suspicious of little bugs.\n\nAbnormal how?\n\nThe cost estimates aren't times, I/Os, or anything you know, they're a \npurely relative figure for comparing plan costs.\n\n> hashes=# set enable_seqscan = off;\n> SET\n\nWhat's your seq_page_cost and random_page_cost?\n\n\n> hashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=10000000000.00..10000528610.88 rows=200 width=32) (actual time=0.116..7452.005 rows=4001 loops=1)\n> Output: name, count(name)\n> -> Index Only Scan Backward using hashcheck_name_idx on public.hashcheck\n ^^^^^^^^^^^^^^^^^^^^^^^^\nIf you don't mind the increased cost of insert/update/delete try:\n\n CREATE INDEX hashcheck_name_rev_idx\n ON public.hashcheck (name DESC);\n\nie create the index in descending order.\n\n--\nCraig Ringer\n\n",
"msg_date": "Thu, 11 Oct 2012 12:38:45 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 9:13 PM, Korisk <[email protected]> wrote:\n> -> Index Only Scan Backward using hashcheck_name_idx on public.hashcheck (cost=10000000000.00..10000398674.92 rows=25986792 width=32)\n\nIt seems odd.\n\nIs it possible to look at the non default configuration?\n\nSELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n\n> (actual time=0.104..3785.767 rows=25990002 loops=1)\n> Output: name\n> Heap Fetches: 0\n> Total runtime: 7452.509 ms\n> (6 rows)\n>\n> Thanks to the wizardry described at:\n> http://www.sql.ru/forum/actualthread.aspx?tid=974484\n>\n> 11.10.2012, 01:30, \"Sergey Konoplev\" <[email protected]>:\n>> On Wed, Oct 10, 2012 at 9:09 AM, Korisk <[email protected]> wrote:\n>>\n>>> Hello! Is it possible to speed up the plan?\n>>> Sort (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)\n>>> Output: name, (count(name))\n>>> Sort Key: hashcheck.name\n>>> Sort Method: quicksort Memory: 315kB\n>>> -> HashAggregate (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)\n>>> Output: name, count(name)\n>>> -> Seq Scan on public.hashcheck (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)\n>>> Output: id, name, value\n>>> Total runtime: 10351.989 ms\n>>\n>> AFAIU there are no query optimization solution for this.\n>>\n>> It may be worth to create a table hashcheck_stat (name, cnt) and\n>> increment/decrement the cnt values with triggers if you need to get\n>> counts fast.\n>>\n>> --\n>> Sergey Konoplev\n>>\n>> a database and software architect\n>> http://www.linkedin.com/in/grayhemp\n>>\n>> Jabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Wed, 10 Oct 2012 23:15:03 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "\"IOS scan\" ?\nIndex Only Scan\n\nWhat's your seq_page_cost and random_page_cost?\n\nhashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n name | setting | reset_val \n-------------------------+----------------+-----------\n archive_command | (disabled) | \n enable_bitmapscan | off | on\n enable_indexscan | off | on\n enable_seqscan | off | on\n log_file_mode | 0600 | 384\n random_page_cost | 0.1 | 4\n seq_page_cost | 0.1 | 1\n transaction_isolation | read committed | default\n unix_socket_permissions | 0777 | 511\n(9 rows)\n\nPostgresql 9.2.1 was configured and built with default settings.\n\nThank you.\n\n",
"msg_date": "Thu, 11 Oct 2012 19:15:13 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On Thu, Oct 11, 2012 at 8:15 AM, Korisk <[email protected]> wrote:\n> What's your seq_page_cost and random_page_cost?\n> hashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n> name | setting | reset_val\n> -------------------------+----------------+-----------\n> archive_command | (disabled) |\n> enable_bitmapscan | off | on\n> enable_indexscan | off | on\n> enable_seqscan | off | on\n> log_file_mode | 0600 | 384\n> random_page_cost | 0.1 | 4\n> seq_page_cost | 0.1 | 1\n> transaction_isolation | read committed | default\n> unix_socket_permissions | 0777 | 511\n\nCould you please try to set *_page_cost to 1 and then EXPLAIN ANALYZE it again?\n\n> -> Index Only Scan Backward using hashcheck_name_idx on public.hashcheck\n> (cost=10000000000.00..10000398674.92 rows=25986792 width=32)\n> (actual time=0.104..3785.767 rows=25990002 loops=1)\n\nI am just guessing but it might probably be some kind of a precision\nbug, and I would like to check this.\n\n> (9 rows)\n>\n> Postgresql 9.2.1 was configured and built with default settings.\n>\n> Thank you.\n\n\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Thu, 11 Oct 2012 10:55:09 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "Again the same cost.\n\n\nhashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n name | setting | reset_val \n-------------------------+----------------+-----------\n archive_command | (disabled) | \n enable_bitmapscan | off | on\n enable_indexscan | off | on\n enable_seqscan | off | on\n log_file_mode | 0600 | 384\n random_page_cost | 1 | 4\n transaction_isolation | read committed | default\n unix_socket_permissions | 0777 | 511\n(8 rows)\n\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------\n GroupAggregate (cost=10000000000.00..10000596612.97 rows=200 width=32) (actual time=0.136..7272.240 rows=4001 loops=1)\n Output: name, count(name)\n -> Index Only Scan using hashcheck_name_rev_idx on public.hashcheck (cost=10000000000.00..10000466660.96 rows=25990002 width=32) (act\nual time=0.121..3624.624 rows=25990002 loops=1)\n Output: name\n Heap Fetches: 0\n Total runtime: 7272.735 ms\n(6 rows)\n\n\n\n\n\n\n11.10.2012, 21:55, \"Sergey Konoplev\" <[email protected]>:\n> On Thu, Oct 11, 2012 at 8:15 AM, Korisk <[email protected]> wrote:\n>\n>> What's your seq_page_cost and random_page_cost?\n>> hashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n>> name | setting | reset_val\n>> -------------------------+----------------+-----------\n>> archive_command | (disabled) |\n>> enable_bitmapscan | off | on\n>> enable_indexscan | off | on\n>> enable_seqscan | off | on\n>> log_file_mode | 0600 | 384\n>> random_page_cost | 0.1 | 4\n>> seq_page_cost | 0.1 | 1\n>> transaction_isolation | read committed | default\n>> unix_socket_permissions | 0777 | 511\n>\n> Could you please try to set *_page_cost to 1 and then EXPLAIN ANALYZE it 
again?\n>\n>> -> Index Only Scan Backward using hashcheck_name_idx on public.hashcheck\n>> (cost=10000000000.00..10000398674.92 rows=25986792 width=32)\n>> (actual time=0.104..3785.767 rows=25990002 loops=1)\n>\n> I am just guessing but it might probably be some kind of a precision\n> bug, and I would like to check this.\n>\n>> (9 rows)\n>>\n>> Postgresql 9.2.1 was configured and built with default settings.\n>>\n>> Thank you.\n>\n> --\n> Sergey Konoplev\n>\n> a database and software architect\n> http://www.linkedin.com/in/grayhemp\n>\n> Jabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Fri, 12 Oct 2012 07:55:51 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On Thu, Oct 11, 2012 at 8:55 PM, Korisk <[email protected]> wrote:\n> hashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n\nNow set enable_bitmapscan and enable_indexscan to on and try it again.\n\nThen set enable_seqscan to on and run it one more time.\n\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> ------------------------------------------------\n> GroupAggregate (cost=10000000000.00..10000596612.97 rows=200 width=32) (actual time=0.136..7272.240 rows=4001 loops=1)\n> Output: name, count(name)\n> -> Index Only Scan using hashcheck_name_rev_idx on public.hashcheck (cost=10000000000.00..10000466660.96 rows=25990002 width=32) (act\n> ual time=0.121..3624.624 rows=25990002 loops=1)\n> Output: name\n> Heap Fetches: 0\n> Total runtime: 7272.735 ms\n> (6 rows)\n>\n>\n>\n>\n>\n>\n> 11.10.2012, 21:55, \"Sergey Konoplev\" <[email protected]>:\n>> On Thu, Oct 11, 2012 at 8:15 AM, Korisk <[email protected]> wrote:\n>>\n>>> What's your seq_page_cost and random_page_cost?\n>>> hashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n>>> name | setting | reset_val\n>>> -------------------------+----------------+-----------\n>>> archive_command | (disabled) |\n>>> enable_bitmapscan | off | on\n>>> enable_indexscan | off | on\n>>> enable_seqscan | off | on\n>>> log_file_mode | 0600 | 384\n>>> random_page_cost | 0.1 | 4\n>>> seq_page_cost | 0.1 | 1\n>>> transaction_isolation | read committed | default\n>>> unix_socket_permissions | 0777 | 511\n>>\n>> Could you please try to set *_page_cost to 1 and then EXPLAIN ANALYZE it again?\n>>\n>>> -> Index Only Scan Backward using hashcheck_name_idx on public.hashcheck\n>>> (cost=10000000000.00..10000398674.92 rows=25986792 width=32)\n>>> (actual time=0.104..3785.767 rows=25990002 loops=1)\n>>\n>> I am just guessing 
but it might probably be some kind of a precision\n>> bug, and I would like to check this.\n>>\n>>> (9 rows)\n>>>\n>>> Postgresql 9.2.1 was configured and built with default settings.\n>>>\n>>> Thank you.\n>>\n>> --\n>> Sergey Konoplev\n>>\n>> a database and software architect\n>> http://www.linkedin.com/in/grayhemp\n>>\n>> Jabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Thu, 11 Oct 2012 21:01:21 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "Strange situation.\nAfter indexscan enabling the cost is seriously decreased.\n\nhashes=# set enable_bitmapscan=on;\nSET\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------\n GroupAggregate (cost=10000000000.00..10000596612.97 rows=200 width=32) (actual time=0.187..7424.799 rows=4001 loops=1)\n Output: name, count(name)\n -> Index Only Scan using hashcheck_name_rev_idx on public.hashcheck (cost=10000000000.00..10000466660.96 rows=25990002 width=32) (act\nual time=0.166..3698.776 rows=25990002 loops=1)\n Output: name\n Heap Fetches: 0\n Total runtime: 7425.403 ms\n(6 rows)\n\nhashes=# set enable_indexscan=on;\nSET\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------\n GroupAggregate (cost=0.00..596612.97 rows=200 width=32) (actual time=0.148..7339.115 rows=4001 loops=1)\n Output: name, count(name)\n -> Index Only Scan using hashcheck_name_rev_idx on public.hashcheck (cost=0.00..466660.96 rows=25990002 width=32) (actual time=0.129.\n.3653.848 rows=25990002 loops=1)\n Output: name\n Heap Fetches: 0\n Total runtime: 7339.592 ms\n(6 rows)\n\nhashes=# set enable_seqscan=on;\nSET\nhashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n QUERY PLAN \n \n------------------------------------------------------------------------------------------------------------------------------------------\n-----\n Sort (cost=565411.67..565412.17 rows=200 width=32) (actual time=21746.799..21747.026 
rows=4001 loops=1)\n Output: name, (count(name))\n Sort Key: hashcheck.name\n Sort Method: quicksort Memory: 315kB\n -> HashAggregate (cost=565402.03..565404.03 rows=200 width=32) (actual time=21731.551..21733.277 rows=4001 loops=1)\n Output: name, count(name)\n -> Seq Scan on public.hashcheck (cost=0.00..435452.02 rows=25990002 width=32) (actual time=29.431..13383.812 rows=25990002 loop\ns=1)\n Output: id, name, value\n Total runtime: 21747.356 ms\n(9 rows)\n\n\n\n\n\n\n",
"msg_date": "Fri, 12 Oct 2012 08:14:38 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "Hi,\n\nOn 12 October 2012 15:14, Korisk <[email protected]> wrote:\n> Strange situation.\n> After indexscan enabling the cost is seriously decreased.\n\nYou can not really disable any scan method. enable_xxx = off just sets\nvery high cost (=10000000000) for that operation.\n\n-- \nOndrej Ivanic\n([email protected])\n(http://www.linkedin.com/in/ondrejivanic)\n\n",
"msg_date": "Fri, 12 Oct 2012 15:32:53 +1100",
"msg_from": "Ondrej Ivanič <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On Thu, Oct 11, 2012 at 9:14 PM, Korisk <[email protected]> wrote:\n> Strange situation.\n> After indexscan enabling the cost is seriously decreased.\n\nAFAIK when the planner has to choose between index scans and seq scans\nand both of this options are off it uses one of this strategies anyway\nbut puts 10000000000.00 as a lower cost for this (thanks Maxim Boguk\nfor the explanation in chat).\n\n> -> Index Only Scan using hashcheck_name_rev_idx on public.hashcheck (cost=10000000000.00..10000466660.96 rows=25990002 width=32) (act\n> ual time=0.166..3698.776 rows=25990002 loops=1)\n\nSo when you enabled one of these options it started using it as usual.\n\n> hashes=# set enable_indexscan=on;\n> SET\n> hashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n\n[cut]\n\n> -> Index Only Scan using hashcheck_name_rev_idx on public.hashcheck (cost=0.00..466660.96 rows=25990002 width=32) (actual time=0.129.\n> .3653.848 rows=25990002 loops=1)\n\nWhat I can not understand is why the seq scan's estimated cost is\nbetter the index scan's one. It depends on the number of pages in\nindex/relation. 
May be the index is heavily bloated?\n\nLet's see the sizes:\n\nselect pg_total_relation_size('hashcheck')\nselect pg_total_relation_size('hashcheck_name_rev_idx');\n\n\n> hashes=# set enable_seqscan=on;\n> SET\n> hashes=# explain analyse verbose select name, count(name) as cnt from hashcheck group by name order by name desc;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> -----\n> Sort (cost=565411.67..565412.17 rows=200 width=32) (actual time=21746.799..21747.026 rows=4001 loops=1)\n> Output: name, (count(name))\n> Sort Key: hashcheck.name\n> Sort Method: quicksort Memory: 315kB\n> -> HashAggregate (cost=565402.03..565404.03 rows=200 width=32) (actual time=21731.551..21733.277 rows=4001 loops=1)\n> Output: name, count(name)\n> -> Seq Scan on public.hashcheck (cost=0.00..435452.02 rows=25990002 width=32) (actual time=29.431..13383.812 rows=25990002 loop\n> s=1)\n> Output: id, name, value\n> Total runtime: 21747.356 ms\n> (9 rows)\n>\n>\n>\n>\n>\n\n\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Fri, 12 Oct 2012 00:10:06 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "> What I can not understand is why the seq scan's estimated cost is\n> better the index scan's one. It depends on the number of pages in\n> index/relation. May be the index is heavily bloated?\nMm i don't know how to see bloating level. But the index was created by \ncreate index on hashcheck using btree (name)\nafter the table population.\n\nSizes:\nhashes=# select pg_total_relation_size('hashcheck');\n pg_total_relation_size \n------------------------\n 2067701760\n(1 row)\n\nhashes=# select pg_total_relation_size('hashcheck_name_rev_idx');\n pg_total_relation_size \n------------------------\n 629170176\n(1 row)\n\n\n",
"msg_date": "Fri, 12 Oct 2012 18:37:48 +0400",
"msg_from": "Korisk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On 12.10.2012 09:10, Sergey Konoplev wrote:\n> What I can not understand is why the seq scan's estimated cost is\n> better the index scan's one. It depends on the number of pages in\n> index/relation. May be the index is heavily bloated?\n\nThe IOS cost depends on other things too. The index can't be read simply\nas a sequence of pages, the scan needs to jump around the tree to read\nthe tuples in the right order.\n\nWith the index size being close to the size of the table, the cost of\nthese operations may easily outweigh the benefits. And I suspect this\nis the case here, because the table has only 3 columns (INT and two text\nones), and each row has some overhead (header), that may further\ndecrease the difference between index and table size.\n\nNevertheless, the cost estimate here is wrong - either it's estimating\nsomething wrong, or maybe everything is in the cache and the planner does\nnot know about that.\n\nTomas\n\n",
"msg_date": "Fri, 12 Oct 2012 23:14:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
},
{
"msg_contents": "On 11.10.2012 17:15, Korisk wrote:\n> \"IOS scan\" ?\n> Index Only Scan\n> \n> What's your seq_page_cost and random_page_cost?\n> \n> hashes=# SELECT name, setting, reset_val FROM pg_settings WHERE setting <> reset_val;\n> name | setting | reset_val \n> -------------------------+----------------+-----------\n> archive_command | (disabled) | \n> enable_bitmapscan | off | on\n> enable_indexscan | off | on\n> enable_seqscan | off | on\n> log_file_mode | 0600 | 384\n> random_page_cost | 0.1 | 4\n> seq_page_cost | 0.1 | 1\n> transaction_isolation | read committed | default\n> unix_socket_permissions | 0777 | 511\n> (9 rows)\n> \n> Postgresql 9.2.1 was configured and built with default settings.\n> \n> Thank you.\n\nHi,\n\nso how much RAM does the system have? Because if you're using the\ndefault shared buffers size (32MB IIRC), that's the first thing you\nshould bump up. It's usually recommended to set it to ~25% of RAM, but\nnot more than ~10GB. Set also the work_mem and maintenance_work_mem,\ndepending on the amount of RAM you have.\n\nThen set effective_cache_size to 75% of RAM (this is just a hint to the\nplanner, it won't really allocate memory).\n\nRestart the database and try the queries again. Don't run them with\nEXPLAIN ANALYZE because that adds overhead that may easily make some of\nthe queries much slower.\n\nIt's great to see the estimates and actual row counts, but for timing\nqueries it's a poor choice (even the TIMING OFF added in 9.2 is not\nexactly overhead-free). Maybe this is what made the seqscan look much\nslower?\n\nI usually run them from psql like this\n\n\\o /dev/null\n\\timing on\nSELECT ...\n\nwhich gives me more reliable timing results (especially when executed\nright on the server).\n\nOnly if all this tuning fails, it's time to fine-tune the knobs, i.e.\nthe cost variables. 
Please, don't change the seq_page_cost, always keep\nit at 1.0 and change only the other values.\n\nFor example if everything fits into the RAM, you may change the\nrandom_page_cost to 1.5 or lower (I'd never recommend to set it lower\nthan seq_page_cost), and then you may start tuning the cpu_* costs.\n\nBut please, this is the last thing you should do - tune the server\nproperly first. There's even a very nice wiki page about tuning\nPostgreSQL servers:\n\n http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nregards\nTomas\n\n",
"msg_date": "Fri, 12 Oct 2012 23:28:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash aggregation"
}
] |
[
{
"msg_contents": "Hi,\n\nI have pretty large tables, with columns that might never receive any \ndata, or always receive data, based on the customer needs.\nThe index on these columns are really big, even if the column is never \nused, so I tend to add a \"where col is not null\" clause on those indexes.\n\nWhat are the drawbacks of defining my index with a \"where col is not \nnull\" clause ?\n\nFranck",
"msg_date": "Wed, 10 Oct 2012 19:06:23 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Drawbacks of create index where is not null ?"
},
{
"msg_contents": "On 10/11/2012 01:06 AM, Franck Routier wrote:\n> Hi,\n>\n> I have pretty large tables, with columns that might never receive any \n> data, or always receive data, based on the customer needs.\n> The index on these columns are really big, even if the column is never \n> used, so I tend to add a \"where col is not null\" clause on those indexes.\n>\n> What are the drawbacks of defining my index with a \"where col is not \n> null\" clause ?\n\n* You can't CLUSTER on a partial index; and\n\n* The partial index will only be used for queries that use the condition \n\"WHERE col IS NOT NULL\" themselves. The planner isn't super-smart about \nhow it matches index WHERE conditions to query WHERE conditions, so \nyou'll want to use exactly the same condition text where possible.\n\n--\nCraig Ringer\n\n",
"msg_date": "Thu, 11 Oct 2012 13:26:03 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drawbacks of create index where is not null ?"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 11:26 PM, Craig Ringer <[email protected]> wrote:\n> On 10/11/2012 01:06 AM, Franck Routier wrote:\n>>\n>> Hi,\n>>\n>> I have pretty large tables, with columns that might never receive any\n>> data, or always receive data, based on the customer needs.\n>> The index on these columns are really big, even if the column is never\n>> used, so I tend to add a \"where col is not null\" clause on those indexes.\n>>\n>> What are the drawbacks of defining my index with a \"where col is not null\"\n>> clause ?\n>\n>\n> * You can't CLUSTER on a partial index; and\n>\n> * The partial index will only be used for queries that use the condition\n> \"WHERE col IS NOT NULL\" themselves. The planner isn't super-smart about how\n> it matches index WHERE conditions to query WHERE conditions, so you'll want\n> to use exactly the same condition text where possible.\n\nI think the query planner has gotten a little smarter of late:\n\nsmarlowe=# select version();\n version\n----------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.6 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real\n(Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit\n(1 row)\n\nsmarlowe=# drop table a;\nDROP TABLE\nsmarlowe=# create table a (i int);\nCREATE TABLE\nsmarlowe=# insert into a select null from generate_series(1,10000);\nINSERT 0 10000\nsmarlowe=# insert into a values (10);\nINSERT 0 1\nsmarlowe=# insert into a select null from generate_series(1,10000);\nINSERT 0 10000\nsmarlowe=# create index on a (i) where i is not null;\nCREATE INDEX\nsmarlowe=# explain select * from a where i =10;\n QUERY PLAN\n------------------------------------------------------------------------\n Bitmap Heap Scan on a (cost=4.28..78.00 rows=100 width=4)\n Recheck Cond: (i = 10)\n -> Bitmap Index Scan on a_i_idx (cost=0.00..4.26 rows=100 width=0)\n Index Cond: (i = 10)\n(4 rows)\n\n",
"msg_date": "Wed, 10 Oct 2012 23:42:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drawbacks of create index where is not null ?"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 11:42 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Oct 10, 2012 at 11:26 PM, Craig Ringer <[email protected]> wrote:\n>> On 10/11/2012 01:06 AM, Franck Routier wrote:\n>>>\n>>> Hi,\n>>>\n>>> I have pretty large tables, with columns that might never receive any\n>>> data, or always receive data, based on the customer needs.\n>>> The index on these columns are really big, even if the column is never\n>>> used, so I tend to add a \"where col is not null\" clause on those indexes.\n>>>\n>>> What are the drawbacks of defining my index with a \"where col is not null\"\n>>> clause ?\n>>\n>>\n>> * You can't CLUSTER on a partial index; and\n>>\n>> * The partial index will only be used for queries that use the condition\n>> \"WHERE col IS NOT NULL\" themselves. The planner isn't super-smart about how\n>> it matches index WHERE conditions to query WHERE conditions, so you'll want\n>> to use exactly the same condition text where possible.\n>\n> I think the query planner has gotten a little smarter of late:\n>\n> smarlowe=# select version();\n> version\n> ----------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.1.6 on x86_64-pc-linux-gnu, compiled by gcc-4.6.real\n> (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit\n> (1 row)\n>\n> smarlowe=# drop table a;\n> DROP TABLE\n> smarlowe=# create table a (i int);\n> CREATE TABLE\n> smarlowe=# insert into a select null from generate_series(1,10000);\n> INSERT 0 10000\n> smarlowe=# insert into a values (10);\n> INSERT 0 1\n> smarlowe=# insert into a select null from generate_series(1,10000);\n> INSERT 0 10000\n> smarlowe=# create index on a (i) where i is not null;\n> CREATE INDEX\n> smarlowe=# explain select * from a where i =10;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Bitmap Heap Scan on a (cost=4.28..78.00 rows=100 width=4)\n> Recheck Cond: (i = 10)\n> -> Bitmap 
Index Scan on a_i_idx (cost=0.00..4.26 rows=100 width=0)\n> Index Cond: (i = 10)\n> (4 rows)\n\n\nActually after an analyze it just uses the plain index no bitmap scan.\n So I get the same explain output with or without the \"and i is not\nnull\" clause added in.\n\n",
"msg_date": "Wed, 10 Oct 2012 23:44:55 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drawbacks of create index where is not null ?"
},
{
"msg_contents": "Le 11/10/2012 07:26, Craig Ringer a écrit :\n> * The partial index will only be used for queries that use the \n> condition \"WHERE col IS NOT NULL\" themselves. The planner isn't \n> super-smart about how it matches index WHERE conditions to query WHERE \n> conditions, so you'll want to use exactly the same condition text \n> where possible.\n>\n\n From my experiments, the planner seems to be smart enougth to tell that \n\"where col = 'myvalue' \" will match with partial index \"where col is not \nnull\".\nSo it will use the index and not do a full tablescan. (this is on 8.4).\nThis is also what Scott says in his reply.\nI'm not thinking of using more complex where predicat for my indexes, \njust \"is not null\". So I think I should not be hit by this...\n\nThanks,\nFranck",
"msg_date": "Thu, 11 Oct 2012 10:22:53 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Drawbacks of create index where is not null ?"
},
{
"msg_contents": "On Wed, Oct 10, 2012 at 10:42 PM, Scott Marlowe <[email protected]> wrote:\n> I think the query planner has gotten a little smarter of late:\n>\n> smarlowe=# create index on a (i) where i is not null;\n> CREATE INDEX\n> smarlowe=# explain select * from a where i =10;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Bitmap Heap Scan on a (cost=4.28..78.00 rows=100 width=4)\n> Recheck Cond: (i = 10)\n> -> Bitmap Index Scan on a_i_idx (cost=0.00..4.26 rows=100 width=0)\n> Index Cond: (i = 10)\n> (4 rows)\n\nIt is even smarter a little bit more:\n\n[local]:5432 grayhemp@grayhemp=# create index h_idx1 on h (n) where v\nis not null;\nCREATE INDEX\n\n[local]:5432 grayhemp@grayhemp=# explain analyze select * from h where\nv = '0.5';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on h (cost=1616.10..8494.68 rows=1 width=30)\n(actual time=111.735..111.735 rows=0 loops=1)\n Recheck Cond: (v IS NOT NULL)\n Filter: (v = '0.5'::text)\n -> Bitmap Index Scan on h_idx1 (cost=0.00..1616.10 rows=102367\nwidth=0) (actual time=19.027..19.027 rows=100271 loops=1)\n(5 rows)\n\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +14158679984\n\n",
"msg_date": "Thu, 11 Oct 2012 15:31:13 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drawbacks of create index where is not null ?"
}
] |
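A side note not drawn from the thread: SQLite also supports partial indexes, so Scott's psql experiment can be replayed with nothing but Python's stdlib sqlite3 module. This is a sketch under that assumption; the table and index names simply mirror his session.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (i INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(None,)] * 10000)
cur.execute("INSERT INTO a VALUES (10)")
cur.executemany("INSERT INTO a VALUES (?)", [(None,)] * 10000)
# Partial index, exactly as in the thread: only non-NULL values are indexed.
cur.execute("CREATE INDEX a_i_idx ON a (i) WHERE i IS NOT NULL")
conn.commit()

# The query only says "i = 10"; the planner has to infer that this
# implies "i IS NOT NULL" before it may use the partial index.
plan = cur.execute("EXPLAIN QUERY PLAN SELECT * FROM a WHERE i = 10").fetchall()
for row in plan:
    print(row)
```

On any SQLite new enough for partial indexes (3.8.0+), the plan should report a search on a_i_idx: the same equality-implies-not-null inference Scott and Sergey observed in PostgreSQL.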
[
{
"msg_contents": "I have a table with a column of type timestamp with time zone, this column\nhas an index\n\n \n\nIf I do a select like this\n\n \n\nselect * from mytable where cast(my_date as timestamp without time zone) >\n'2012-10-12 20:00:00'\n\n \n\nthis query will use the index over the my_date column?\n\n \n\nThanks\n\n \n\n\nI have a table with a column of type timestamp with time zone, this column has an index If I do a select like this select * from mytable where cast(my_date as timestamp without time zone) > '2012-10-12 20:00:00' this query will use the index over the my_date column? Thanks",
"msg_date": "Fri, 12 Oct 2012 17:05:52 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do cast affects index usage?"
},
{
"msg_contents": "On Fri, Oct 12, 2012 at 2:05 PM, Anibal David Acosta <[email protected]> wrote:\n> I have a table with a column of type timestamp with time zone, this column\n> has an index\n>\n> If I do a select like this\n>\n> select * from mytable where cast(my_date as timestamp without time zone) >\n> '2012-10-12 20:00:00'\n>\n> this query will use the index over the my_date column?\n\nNo. but it will use a functional index:\n\ncreate index yada on blah (cast(my_date as timestaqmp without time zone));\n\n",
"msg_date": "Fri, 12 Oct 2012 14:37:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do cast affects index usage?"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> writes:\n> I have a table with a column of type timestamp with time zone, this column\n> has an index\n\n> If I do a select like this\n> select * from mytable where cast(my_date as timestamp without time zone) >\n> '2012-10-12 20:00:00'\n> this query will use the index over the my_date column?\n\nNo. The cast seems rather pointless though ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 12 Oct 2012 16:38:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do cast affects index usage?"
},
{
"msg_contents": "Because I need to get all rows where datetime is greater than (for example)\n'2012-10-10 00:00:00' but ignoring timezone, so basically I need to truncate\ntimezone\nThis can be done by converting to timestamp without timezone.\n\n\n\n-----Mensaje original-----\nDe: Tom Lane [mailto:[email protected]] \nEnviado el: viernes, 12 de octubre de 2012 05:39 p.m.\nPara: Anibal David Acosta\nCC: [email protected]\nAsunto: Re: [PERFORM] Do cast affects index usage?\n\n\"Anibal David Acosta\" <[email protected]> writes:\n> I have a table with a column of type timestamp with time zone, this \n> column has an index\n\n> If I do a select like this\n> select * from mytable where cast(my_date as timestamp without time \n> zone) >\n> '2012-10-12 20:00:00'\n> this query will use the index over the my_date column?\n\nNo. The cast seems rather pointless though ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Oct 2012 21:27:42 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do cast affects index usage?"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> writes:\n> Because I need to get all rows where datetime is greater than (for example)\n> '2012-10-10 00:00:00' but ignoring timezone, so basically I need to truncate\n> timezone\n> This can be done by converting to timestamp without timezone.\n\n[ shrug... ] It can also be done without that. Whatever your cutoff\ntime is can be expressed as a timestamp *with* tz.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 12 Oct 2012 21:34:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do cast affects index usage?"
}
] |
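Tom's and Scott's answers boil down to this: an expression in the WHERE clause hides the column from a plain index, but a functional (expression) index on that exact expression matches again. A hedged sketch of the same mechanic using stdlib sqlite3 (SQLite has expression indexes since 3.9; substr stands in for the cast, and all names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (my_date TEXT)")
cur.executemany("INSERT INTO mytable VALUES (?)",
                [(f"2012-10-{d:02d} 12:00:00",) for d in range(1, 29)])
cur.execute("CREATE INDEX d_idx ON mytable (my_date)")
conn.commit()

# A bare column comparison can use the plain index...
plan1 = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mytable WHERE my_date > '2012-10-12'"
).fetchall()
# ...but wrapping the column in an expression hides it from that index.
plan2 = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mytable "
    "WHERE substr(my_date, 1, 10) > '2012-10-12'"
).fetchall()
# An expression ("functional") index on the same expression matches again.
cur.execute("CREATE INDEX d_expr_idx ON mytable (substr(my_date, 1, 10))")
plan3 = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM mytable "
    "WHERE substr(my_date, 1, 10) > '2012-10-12'"
).fetchall()
print(plan1, plan2, plan3, sep="\n")
```

plan1 can use the plain index, plan2 degenerates to a scan because substr(my_date, 1, 10) is not the indexed column, and plan3 is served by the expression index, which is the analogue of Scott's advice to index the cast expression itself.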
[
{
"msg_contents": "Hi,\n\nGiven I have a large table implemented with partitions and need fast\naccess to a (primary) key value in a scenario where every minute\nupdates (inserts/updates/deletes) are coming in.\n\nNow since PG does not allow any index (nor constraint) on \"master\"\ntable, I have a performance issue (and a possible parallelization\nopportunity).\n\nSay, there is a table with 250 mio. rows split into 250 tables with 1\nmio. rows each. And say the the index behavior is O(log n). Then a\nsearch for a key takes O(log(250*n)) or 8.4 time units. What PG (9.1)\ncurrently probably does is a iterative call to all 250 partitioned\ntables, which will take O(250*log(n)) - or 1500 time units in this\ncase. This is about 180 times slower.\n\nWhat do you think about introducing a \"global index\" over all\npartitions (like Ora :->)? This would be a (logically) single index\nwhich can be even be parallelized given the partitioned tables are\noptimally distributed like in different tablespaces.\n\nWhat do you think about this?\n\n-S.\n\n",
"msg_date": "Sun, 14 Oct 2012 02:43:23 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index over all partitions (aka global index)?"
},
{
"msg_contents": "On Sat, Oct 13, 2012 at 5:43 PM, Stefan Keller <[email protected]> wrote:\n>\n> Say, there is a table with 250 mio. rows split into 250 tables with 1\n> mio. rows each. And say the the index behavior is O(log n). Then a\n> search for a key takes O(log(250*n)) or 8.4 time units. What PG (9.1)\n> currently probably does is a iterative call to all 250 partitioned\n> tables, which will take O(250*log(n)) - or 1500 time units in this\n> case. This is about 180 times slower.\n>\n> What do you think about introducing a \"global index\" over all\n> partitions (like Ora :->)? This would be a (logically) single index\n> which can be even be parallelized given the partitioned tables are\n> optimally distributed like in different tablespaces.\n>\n> What do you think about this?\n\nWhat you already have is a logically single index. What you want is\nphysically single index. But wouldn't that remove most of the\nbenefits of partitioning? You could no longer add or remove\npartitions instantaneously, for example.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sat, 13 Oct 2012 22:39:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index over all partitions (aka global index)?"
},
{
"msg_contents": "Yes a physical index would be one solution - but it's not the only one.\n\nThe indexes could be treated in parallel in their physical places\nwhere they are. That's why I called it still logical.\n\nI don't think so that I would loose all benefits of partition since an\nindex could adapt itself when partitions are attached or removed.\nThat's probably how Oracle resolves it which knows global indexes\nprobably since version 8(!) [1]\n\nYours, S.\n\n[1] http://www.oracle-base.com/articles/8i/partitioned-tables-and-indexes.php\n\n\n2012/10/14 Jeff Janes <[email protected]>:\n> On Sat, Oct 13, 2012 at 5:43 PM, Stefan Keller <[email protected]> wrote:\n>>\n>> Say, there is a table with 250 mio. rows split into 250 tables with 1\n>> mio. rows each. And say the the index behavior is O(log n). Then a\n>> search for a key takes O(log(250*n)) or 8.4 time units. What PG (9.1)\n>> currently probably does is a iterative call to all 250 partitioned\n>> tables, which will take O(250*log(n)) - or 1500 time units in this\n>> case. This is about 180 times slower.\n>>\n>> What do you think about introducing a \"global index\" over all\n>> partitions (like Ora :->)? This would be a (logically) single index\n>> which can be even be parallelized given the partitioned tables are\n>> optimally distributed like in different tablespaces.\n>>\n>> What do you think about this?\n>\n> What you already have is a logically single index. What you want is\n> physically single index. But wouldn't that remove most of the\n> benefits of partitioning? You could no longer add or remove\n> partitions instantaneously, for example.\n>\n> Cheers,\n>\n> Jeff\n\n",
"msg_date": "Sun, 14 Oct 2012 14:22:41 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index over all partitions (aka global index)?"
}
] |
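The arithmetic in Stefan's post is easy to sanity-check: the claimed ~180x penalty is just the ratio of 250 separate log(n) probes to one log(250*n) probe, and that ratio is independent of the logarithm's base. A quick sketch:

```python
import math

partitions = 250
rows_per_partition = 1_000_000

# One big B-tree over all rows vs. probing each partition's own B-tree,
# counting each comparison level as one "time unit".
one_global_index = math.log2(partitions * rows_per_partition)
probe_all_partitions = partitions * math.log2(rows_per_partition)

ratio = probe_all_partitions / one_global_index
print(f"global index:   {one_global_index:7.1f} units")
print(f"250 partitions: {probe_all_partitions:7.1f} units")
print(f"slowdown:       {ratio:7.1f}x")
```

This reproduces the post's "about 180 times slower" (the absolute 8.4 and 1500 figures depend on how a "time unit" is normalized, but the ratio does not). In practice each probe is far cheaper than log(n) disk reads, since the upper B-tree levels are cached, so the real-world gap is smaller than this model suggests.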
[
{
"msg_contents": "On PG 9.1 and 9.2 I'm running the following query:\nSELECT *FROM stream_store JOIN ( SELECT UNNEST(stream_store_ids) AS id FROM stream_store_version_index WHERE stream_id = 607106 AND version = 11 ) AS records USING (id)ORDER BY id DESC\nThis takes several (10 to 20) milliseconds at most.\nWhen I add a LIMIT 1 to the end of the query, the query time goes to several hours(!).\nThe full version String of PG 9.1 is \"PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". The 9.1 machine is a socket 771 dual quad core at 3.16Ghz with 64GB memory and 10 Intel x25M SSDs in a RAID5 setup on 2 ARECA 1680 RAID controllers. The \"stream_store\" table has 122 million rows and is partitioned. The array that's being unnested for the join has 27 entries.\nAny idea? \t\t \t \t\t \n\n\n\n\nOn PG 9.1 and 9.2 I'm running the following query:SELECT *FROM stream_store JOIN ( SELECT UNNEST(stream_store_ids) AS id FROM stream_store_version_index WHERE stream_id = 607106 AND version = 11 ) AS records USING (id)ORDER BY id DESCThis takes several (10 to 20) milliseconds at most.When I add a LIMIT 1 to the end of the query, the query time goes to several hours(!).The full version String of PG 9.1 is \"PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\". The 9.1 machine is a socket 771 dual quad core at 3.16Ghz with 64GB memory and 10 Intel x25M SSDs in a RAID5 setup on 2 ARECA 1680 RAID controllers. The \"stream_store\" table has 122 million rows and is partitioned. The array that's being unnested for the join has 27 entries.Any idea?",
"msg_date": "Sun, 14 Oct 2012 08:55:34 +0200",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query with limit goes from few ms to hours"
},
{
"msg_contents": "Hi,\n\nFor some reason the mailinglist software seems to block the email as soon as the planner details are in it, so I pasted those on pastebin.com: http://pastebin.com/T5JTwh5T\nKind regards \t\t \t \t\t \t\t \t \t\t \n\n\n\n\nHi,For some reason the mailinglist software seems to block the email as soon as the planner details are in it, so I pasted those on pastebin.com: http://pastebin.com/T5JTwh5TKind regards",
"msg_date": "Sun, 14 Oct 2012 09:04:35 +0200",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with limit goes from few ms to hours"
},
{
"msg_contents": "henk de wit <[email protected]> writes:\n> For some reason the mailinglist software seems to block the email as soon as the planner details are in it, so I pasted those on pastebin.com: http://pastebin.com/T5JTwh5T\n\nYou need a less horrid estimate for the join size. Possibly an ANALYZE\non the parent table (stream_store) would help.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 14 Oct 2012 12:15:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with limit goes from few ms to hours"
},
{
"msg_contents": "Hi Henk,\n\nOn Sun, Oct 14, 2012 at 9:04 AM, henk de wit <[email protected]> wrote:\n> Hi,\n>\n> For some reason the mailinglist software seems to block the email as soon as\n> the planner details are in it, so I pasted those on pastebin.com:\n> http://pastebin.com/T5JTwh5T\n\nJust an additional data point: for whatever reason your email was\nplaced in my GMail spam folder.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\n",
"msg_date": "Mon, 15 Oct 2012 14:31:47 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with limit goes from few ms to hours"
},
{
"msg_contents": "Hi,\n\n> henk de wit <[email protected]> writes:\n> > For some reason the mailinglist software seems to block the email as soon as the planner details are in it, so I pasted those on pastebin.com: http://pastebin.com/T5JTwh5T\n> \n> You need a less horrid estimate for the join size. Possibly an ANALYZE\n> on the parent table (stream_store) would help.\n\nWell, what do you know! That did work indeed. Immediately after the ANALYZE on that parent table (taking only a few seconds) a fast plan was created and the query executed in ms again. Silly me, I should have tried that earlier.\nThanks!\nKind regards \t\t \t \t\t \n\n\n\n\nHi,> henk de wit <[email protected]> writes:> > For some reason the mailinglist software seems to block the email as soon as the planner details are in it, so I pasted those on pastebin.com: http://pastebin.com/T5JTwh5T> > You need a less horrid estimate for the join size. Possibly an ANALYZE> on the parent table (stream_store) would help.Well, what do you know! That did work indeed. Immediately after the ANALYZE on that parent table (taking only a few seconds) a fast plan was created and the query executed in ms again. Silly me, I should have tried that earlier.Thanks!Kind regards",
"msg_date": "Mon, 15 Oct 2012 19:50:18 +0200",
"msg_from": "henk de wit <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with limit goes from few ms to hours"
}
] |
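Tom's one-line fix works because the planner derives its join-size estimates from per-table statistics that ANALYZE refreshes; with stale statistics the LIMIT 1 plan looked cheap on paper and was disastrous in practice. As a loose analogue (SQLite's statistics machinery is far simpler than PostgreSQL's, and this schema is invented rather than henk's real one), here is ANALYZE materializing the statistics a cost-based planner reads:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE stream_store (id INTEGER PRIMARY KEY, payload TEXT)")
cur.executemany("INSERT INTO stream_store (payload) VALUES (?)",
                [("x",)] * 1000)
cur.execute("CREATE INDEX ss_payload_idx ON stream_store (payload)")
conn.commit()

# Before ANALYZE, the statistics table does not even exist.
before = cur.execute(
    "SELECT count(*) FROM sqlite_master WHERE name = 'sqlite_stat1'"
).fetchone()[0]

cur.execute("ANALYZE")
rows = cur.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(before, rows)
```

Before ANALYZE there is nothing for the planner to go on; afterwards sqlite_stat1 holds a row per index whose first stat figure is the table's row count, exactly the kind of number a planner needs to size a join sensibly.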
[
{
"msg_contents": "Hi,\n\nOur IT Company systems architecture is based on IBM Websphere\nApplication Server, we would like to migrate our databases to\nPostgres, the main problem which stops us from doing that is Postgres\nis not supported by IBM Websphere Application Server.\nThere is a Request for Enhancement that has been opened in IBM Web in\norder to solve this issue, if you are interested in this enhancement\nto be done, please vote for the Enhancement in the following link:\n\nhttp://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=27313\n\nPlease distribute the link in any other forum who might be interested\nin this enhancement.\n\nThanks in advance and regards,\n\n",
"msg_date": "Mon, 15 Oct 2012 10:22:00 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "WebSphere Application Server support for postgres"
},
{
"msg_contents": "[Removing all lists except -hackers. Please do not cross-post to every\nlist again!]\n\nOn Mon, Oct 15, 2012 at 9:22 AM, John Nash\n<[email protected]> wrote:\n> Hi,\n>\n> Our IT Company systems architecture is based on IBM Websphere\n> Application Server, we would like to migrate our databases to\n> Postgres, the main problem which stops us from doing that is Postgres\n> is not supported by IBM Websphere Application Server.\n> There is a Request for Enhancement that has been opened in IBM Web in\n> order to solve this issue, if you are interested in this enhancement\n> to be done, please vote for the Enhancement in the following link:\n>\n> http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=27313\n\nA login is required to access that site. Can you provide a link or\ninfo that doesn't require login please?\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 15 Oct 2012 09:28:13 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WebSphere Application Server support for postgres"
},
{
"msg_contents": "Zitat von John Nash <[email protected]>:\n\n> Hi,\n>\n> Our IT Company systems architecture is based on IBM Websphere\n> Application Server, we would like to migrate our databases to\n> Postgres, the main problem which stops us from doing that is Postgres\n> is not supported by IBM Websphere Application Server.\n> There is a Request for Enhancement that has been opened in IBM Web in\n> order to solve this issue, if you are interested in this enhancement\n> to be done, please vote for the Enhancement in the following link:\n>\n> http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=27313\n>\n\nFor sure i vote for it. Our Java Application Server based Software is \nsupported with DB2,Oracle,MS-SQL and PostgreSQL, but i doubt IBM will \nconsider it, because they sell a competitor Database called DB2 ;-)\n\nWe will see...\n\nRegards\n\nAndreas",
"msg_date": "Mon, 15 Oct 2012 13:30:09 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: WebSphere Application Server support for postgres"
},
{
"msg_contents": "\nOn 10/15/2012 01:22 AM, John Nash wrote:\n>\n> Hi,\n\nThere is no reason to email all the lists. Removing ones that aren't \nrequired. You are just going to irritate everyone.\n\n>\n> Our IT Company systems architecture is based on IBM Websphere\n> Application Server, we would like to migrate our databases to\n> Postgres, the main problem which stops us from doing that is Postgres\n> is not supported by IBM Websphere Application Server.\n\nSure it is. I have clients that use it. IBM just doesn't want you to use \nit, they would rather charge you. All you need is a JDBC driver and we \nhave that.\n\nNow if you are saying IBM doesn't officially support Postgres, that is \nan entirely different argument.\n\nJD\n\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC\n@cmdpromptinc - 509-416-6579\n\n",
"msg_date": "Mon, 15 Oct 2012 10:18:55 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WebSphere Application Server support for postgres"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 10:28 AM, Dave Page <[email protected]> wrote:\n> [Removing all lists except -hackers. Please do not cross-post to every\n> list again!]\n>\n> On Mon, Oct 15, 2012 at 9:22 AM, John Nash\n> <[email protected]> wrote:\n>> Hi,\n>>\n>> Our IT Company systems architecture is based on IBM Websphere\n>> Application Server, we would like to migrate our databases to\n>> Postgres, the main problem which stops us from doing that is Postgres\n>> is not supported by IBM Websphere Application Server.\n>> There is a Request for Enhancement that has been opened in IBM Web in\n>> order to solve this issue, if you are interested in this enhancement\n>> to be done, please vote for the Enhancement in the following link:\n>>\n>> http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=27313\n>\n> A login is required to access that site. Can you provide a link or\n> info that doesn't require login please?\n\n\nThe relevant content of the ticket is basically just:\n\nDescription: WebSphere Application Server support for JDBC access to\nPosgreSQL databases.\nUse case: In WAS, be able to define and use JDBC providers and\ndatasources pointing to PosgreSQL databases.\n\nFlorent\n\n-- \nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source, Java EE based, Enterprise Content Management (ECM)\nhttp://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87\n\n",
"msg_date": "Wed, 17 Oct 2012 06:38:01 +0200",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WebSphere Application Server support for postgres"
}
] |
[
{
"msg_contents": "Hello,\n I'm trying to do a simple SQL query over Postgresl 9.0 running on Ubuntu.\n\nI have a large table (over 100 million records) with three fields, \nid_signal (bigint), time_stamp (timestamp) and var_value (float).\n\nMy query looks like this:\n\nselect var_value from ism_floatvalues where id_signal = 29660 order by \ntime_stamp desc limit 1;\n\nSo I want to select the last value from a determinated ID (is_signal).\n\nThis query runs FOREVER, while if I delete \"limit 1\" it runs instantly....\n\nAny help?\n\nRegards.\n\n\n\n",
"msg_date": "Mon, 15 Oct 2012 19:44:46 +0200",
"msg_from": "=?ISO-8859-1?Q?Pedro_Jim=E9nez?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "limit order by performance issue"
},
{
"msg_contents": "2012/10/15 Pedro Jiménez <[email protected]>:\n> Hello,\n> I'm trying to do a simple SQL query over Postgresl 9.0 running on Ubuntu.\n>\n> I have a large table (over 100 million records) with three fields, id_signal\n> (bigint), time_stamp (timestamp) and var_value (float).\n>\n> My query looks like this:\n>\n> select var_value from ism_floatvalues where id_signal = 29660 order by\n> time_stamp desc limit 1;\n>\n> So I want to select the last value from a determinated ID (is_signal).\n>\n> This query runs FOREVER, while if I delete \"limit 1\" it runs instantly....\n\ndid you ANALYZE your tables?\n\nCan you send EXPLAIN ANALYZE result of both queries?\n\nRegards\n\nPavel Stehule\n\n\n>\n> Any help?\n>\n> Regards.\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 16 Oct 2012 21:23:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit order by performance issue"
},
{
"msg_contents": "On 10/15/2012 12:44 PM, Pedro Jiménez wrote:\n\n> select var_value from ism_floatvalues where id_signal = 29660 order by\n> time_stamp desc limit 1;\n\nWell, we'd have to see an EXPLAIN plan to really know what's going on \nhere, but it often boils down to the planner being overly optimistic \nwhen low limits are specified. I bet you have an index on time_stamp, \ndon't you?\n\nIn that case, the planner would reverse index-scan that index, \nestimating that the chances of it finding ID 29660 are less expensive \nthan fetching all of the rows that match the ID directly, and throwing \naway all but 1 row. Remember, it would have to read all of those values \nto know which is the most recent.\n\nYou can fix this a couple of ways:\n\n1. Put a two-column index on these values:\n\nCREATE INDEX idx_ordered_signal\n ON ism_floatvalues (id_signal, time_stamp DESC);\n\nWhich turns any request for that particular combo into a single index fetch.\n\n2. You can trick the planner by introducing an optimization fence:\n\nSELECT var_value\n FROM (\n SELECT var_value, time_stamp\n FROM ism_floatvalues\n WHERE id_signal = 29660\n OFFSET 0\n )\n ORDER BY time_stamp DESC\n LIMIT 1;\n\nQuite a few people will probably grouse at me for giving you that as an \noption, but it does work better than LIMIT 1 more often than it probably \nshould.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Tue, 16 Oct 2012 14:28:15 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit order by performance issue"
},
{
"msg_contents": "Put an index on time_stamp (I assume there is one on id_signal already)\n\nOn 10/15/2012 12:44 PM, Pedro Jiménez wrote:\n> Hello,\n> I'm trying to do a simple SQL query over Postgresl 9.0 running on\n> Ubuntu.\n>\n> I have a large table (over 100 million records) with three fields,\n> id_signal (bigint), time_stamp (timestamp) and var_value (float).\n>\n> My query looks like this:\n>\n> select var_value from ism_floatvalues where id_signal = 29660 order by\n> time_stamp desc limit 1;\n>\n> So I want to select the last value from a determinated ID (is_signal).\n>\n> This query runs FOREVER, while if I delete \"limit 1\" it runs\n> instantly....\n>\n> Any help?\n>\n> Regards.\n>\n>\n>\n>\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC\n\n\n\n\n\n\n Put an index on time_stamp (I assume there is one on id_signal\n already)\n\nOn 10/15/2012 12:44 PM, Pedro Jiménez\n wrote:\n\nHello,\n \n I'm trying to do a simple SQL query over Postgresl 9.0 running\n on Ubuntu.\n \n\n I have a large table (over 100 million records) with three fields,\n id_signal (bigint), time_stamp (timestamp) and var_value (float).\n \n\n My query looks like this:\n \n\n select var_value from ism_floatvalues where id_signal = 29660\n order by time_stamp desc limit 1;\n \n\n So I want to select the last value from a determinated ID\n (is_signal).\n \n\n This query runs FOREVER, while if I delete \"limit 1\" it runs\n instantly....\n \n\n Any help?\n \n\n Regards.\n \n\n\n\n\n\n\n-- \n -- Karl Denninger\nThe Market Ticker ®\n Cuda Systems LLC",
"msg_date": "Tue, 16 Oct 2012 14:47:10 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit order by performance issue"
},
{
"msg_contents": "On Tue, Oct 16, 2012 at 10:47 PM, Karl Denninger <[email protected]> wrote:\n> Put an index on time_stamp (I assume there is one on id_signal already)\n\nWell the optimal index for this particular query would include both columns:\n(id_signal, time_stamp) -- in this order.\n\nAdditionally, if you want to take advantage of the index-only scans\nfeature, add the SELECTed column too:\n(id_signal, time_stamp, var_value)\n\nRegards,\nMarti\n\n",
"msg_date": "Wed, 17 Oct 2012 01:05:39 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit order by performance issue"
},
{
"msg_contents": "For this query:\n\nselect var_value from ism_floatvalues where id_signal = 29660 order by \ntime_stamp desc limit 1;\n\nThis is what EXPLAIN returns (can't make EXPLAIN ANALYZE because it \n\"never\" ends):\n\n\"Limit (cost=0.00..258.58 rows=1 width=16)\"\n\" -> Index Scan Backward using ism_floatvalues_index_time_stamp on \nism_floatvalues (cost=0.00..8912076.82 rows=34466 width=16)\"\n\" Filter: (id_signal = 29660)\"\n\nThis is EXPLAIN ANALYZE without \"limit 1\":\n\n\"Sort (cost=93683.39..93769.56 rows=34466 width=16) (actual \ntime=188.643..188.650 rows=1 loops=1)\"\n\" Sort Key: time_stamp\"\n\" Sort Method: quicksort Memory: 17kB\"\n\" -> Index Scan using ism_floatvalues_index on ism_floatvalues \n(cost=0.00..90494.38 rows=34466 width=16) (actual time=188.019..188.030 \nrows=1 loops=1)\"\n\" Index Cond: (id_signal = 29660)\"\n\"Total runtime: 189.033 ms\"\n\nNote that I have created two indexes, the first on id_signal and the \nsecond on time_stamp.\nRegards.\n\nEl 16/10/2012 21:23, Pavel Stehule escribió:\n> 2012/10/15 Pedro Jiménez <[email protected]>:\n>> Hello,\n>> I'm trying to do a simple SQL query over Postgresl 9.0 running on Ubuntu.\n>>\n>> I have a large table (over 100 million records) with three fields, id_signal\n>> (bigint), time_stamp (timestamp) and var_value (float).\n>>\n>> My query looks like this:\n>>\n>> select var_value from ism_floatvalues where id_signal = 29660 order by\n>> time_stamp desc limit 1;\n>>\n>> So I want to select the last value from a determinated ID (is_signal).\n>>\n>> This query runs FOREVER, while if I delete \"limit 1\" it runs instantly....\n> did you ANALYZE your tables?\n>\n> Can you send EXPLAIN ANALYZE result of both queries?\n>\n> Regards\n>\n> Pavel Stehule\n>\n>\n>> Any help?\n>>\n>> Regards.\n>>\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nDocumento 
sin título\n\n**Pedro Jiménez Pérez\n**[email protected]\n\n****\n\t\n\n**Innovación en Sistemas de Monitorización, S.L.**\nEdificio Hevimar\nC/ Iván Pavlov 2 y 4 - Parcela 4 2ª Planta Local 9\nParque Tecnológico de Andalucía\n29590 Campanillas (Málaga)\nTlfno. 952 02 07 13\[email protected]\n\nfirma_gpt.jpg, 1 kB\n\n\t\n\nAntes de imprimir, piensa en tu responsabilidad y compromiso con el \nMEDIO AMBIENTE!\n\nBefore printing, think about your responsibility and commitment with the \nENVIRONMENT!\n\nCLÁUSULA DE CONFIDENCIALIDAD.- Este mensaje, y en su caso, cualquier \nfichero anexo al mismo, puede contener información confidencial o \nlegalmente protegida (LOPD 15/1999 de 13 de Diciembre), siendo para uso \nexclusivo del destinatario. No hay renuncia a la confidencialidad o \nsecreto profesional por cualquier transmisión defectuosa o errónea, y \nqueda expresamente prohibida su divulgación, copia o distribución a \nterceros sin la autorización expresa del remitente. Si ha recibido este \nmensaje por error, se ruega lo notifique al remitente enviando un \nmensaje al correo electrónico [email protected] y proceda \ninmediatamente al borrado del mensaje original y de todas sus copias. \nGracias por su colaboración.",
"msg_date": "Wed, 17 Oct 2012 11:14:05 +0200",
"msg_from": "=?UTF-8?B?UGVkcm8gSmltw6luZXogUMOpcmV6?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit order by performance issue"
},
{
"msg_contents": "On Wed, Oct 17, 2012 at 6:14 AM, Pedro Jiménez Pérez <[email protected]\n> wrote:\n\n> select var_value from ism_floatvalues where id_signal = 29660 order by\n> time_stamp desc limit 1;\n>\n> This is what EXPLAIN returns (can't make EXPLAIN ANALYZE because it\n> \"never\" ends):\n>\n> \"Limit (cost=0.00..258.58 rows=1 width=16)\"\n> \" -> Index Scan Backward using ism_floatvalues_index_time_stamp on\n> ism_floatvalues (cost=0.00..8912076.82 rows=34466 width=16)\"\n> \" Filter: (id_signal = 29660)\"\n>\n> This is EXPLAIN ANALYZE without \"limit 1\":\n\n\nAdd (or modify the existing) an index on id_signal, time_stamp desc, and\nyou're done.\n\nIt must be a case of descending time stamps not hitting the filter\ncondition (id_signal) soon enough.\n\nOn Wed, Oct 17, 2012 at 6:14 AM, Pedro Jiménez Pérez <[email protected]> wrote:\nselect var_value from ism_floatvalues where id_signal = 29660 order\n by\n time_stamp desc limit 1;\n\n This is what EXPLAIN returns (can't make EXPLAIN ANALYZE because it\n \"never\" ends):\n\n \"Limit (cost=0.00..258.58 rows=1 width=16)\"\n \" -> Index Scan Backward using ism_floatvalues_index_time_stamp\n on ism_floatvalues (cost=0.00..8912076.82 rows=34466 width=16)\"\n \" Filter: (id_signal = 29660)\"\n\n This is EXPLAIN ANALYZE without \"limit 1\":Add (or modify the existing) an index on id_signal, time_stamp desc, and you're done.It must be a case of descending time stamps not hitting the filter condition (id_signal) soon enough.",
"msg_date": "Fri, 19 Oct 2012 13:19:17 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit order by performance issue"
}
] |
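The fix proposed at the end of the thread above can be sketched as follows. The table and column names (`ism_floatvalues`, `id_signal`, `time_stamp`, `var_value`) are taken from the quoted query; the index name is made up for illustration:

```sql
-- Composite index: equality column first, then the ORDER BY column.
-- With (id_signal, time_stamp DESC) the planner can descend straight to
-- the newest entry for the given signal, instead of walking backward
-- through a timestamp-only index until it happens to hit id_signal = 29660.
CREATE INDEX ism_floatvalues_signal_ts_idx
    ON ism_floatvalues (id_signal, time_stamp DESC);

-- The query from the thread should now resolve with a single index descent:
SELECT var_value
FROM ism_floatvalues
WHERE id_signal = 29660
ORDER BY time_stamp DESC
LIMIT 1;
```

Once the index exists, EXPLAIN should show an index scan with `Index Cond: (id_signal = 29660)` rather than a `Filter` applied during a backward scan, which is what made the original plan effectively unbounded.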
[
{
"msg_contents": "Dear all,\nWe have a DB containing transactional data. \nThere are about *50* to *100 x 10^6* rows in one *huge* table.\nWe are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing us\na constant seeking time.\n\nA typical select (see below) takes about 200 secs. As the database is the\nbackend for a web-based reporting facility 200 to 500 or even more secs\nresponse times are not acceptable for the customer.\n\nIs there any way to speed up select statements like this:\n\nSELECT\n SUM(T.x),\n SUM(T.y),\n SUM(T.z),\n AVG(T.a),\n AVG(T.b)\nFROM T\nGROUP BY \n T.c\nWHERE \n T.creation_date=$SOME_DATE;\n\nThere is an Index on T.c. But would it help to partition the table by T.c?\nIt should be mentioned, that T.c is actually a foreign key to a Table\ncontaining a \ntiny number of rows (15 rows representing different companies).\nmy postgres.conf is actually the default one, despite the fact that we\nincreased the value for work_mem=128MB\n\nThanks in advance\nHouman\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/SELECT-AND-AGG-huge-tables-tp5728306.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Mon, 15 Oct 2012 13:59:16 -0700 (PDT)",
"msg_from": "houmanb <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT AND AGG huge tables"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 3:59 PM, houmanb <[email protected]> wrote:\n> Dear all,\n> We have a DB containing transactional data.\n> There are about *50* to *100 x 10^6* rows in one *huge* table.\n> We are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing us\n> a constant seeking time.\n>\n> A typical select (see below) takes about 200 secs. As the database is the\n> backend for a web-based reporting facility 200 to 500 or even more secs\n> response times are not acceptable for the customer.\n>\n> Is there any way to speed up select statements like this:\n>\n> SELECT\n> SUM(T.x),\n> SUM(T.y),\n> SUM(T.z),\n> AVG(T.a),\n> AVG(T.b)\n> FROM T\n> GROUP BY\n> T.c\n> WHERE\n> T.creation_date=$SOME_DATE;\n>\n> There is an Index on T.c. But would it help to partition the table by T.c?\n> It should be mentioned, that T.c is actually a foreign key to a Table\n> containing a\n> tiny number of rows (15 rows representing different companies).\n> my postgres.conf is actually the default one, despite the fact that we\n> increased the value for work_mem=128MB\n\nit might help a little bit or a lot -- it depends on the plan. I'd\nalso advise raising shared buffers to around 25% of ram for queries\nlike this.\n\nwhat's your server load look like while aggregating -- are you storage\nor cpu bound? which ssd? how much data churn do you have?\n\nmerlin\n\n",
"msg_date": "Mon, 15 Oct 2012 16:09:51 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
    "msg_contents": "On Mon, Oct 15, 2012 at 5:59 PM, houmanb <[email protected]> wrote:\n\n> Dear all,\n> We have a DB containing transactional data.\n> There are about *50* to *100 x 10^6* rows in one *huge* table.\n> We are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing\n> us\n> a constant seeking time.\n>\n> A typical select (see below) takes about 200 secs. As the database is the\n> backend for a web-based reporting facility 200 to 500 or even more secs\n> response times are not acceptable for the customer.\n>\n> Is there any way to speed up select statements like this:\n>\n> SELECT\n>   SUM(T.x),\n>   SUM(T.y),\n>   SUM(T.z),\n>   AVG(T.a),\n>   AVG(T.b)\n> FROM T\n> GROUP BY\n>   T.c\n> WHERE\n>   T.creation_date=$SOME_DATE;\n>\n> There is an Index on T.c. But would it help to partition the table by T.c?\n> It should be mentioned, that T.c is actually a foreign key to a Table\n> containing a\n> tiny number of rows (15 rows representing different companies).\n>\n\nHow selective is T.creation_date? Looks like an index on this column would\nbe better than T.c (could use also, of course), which would be also true\nfor the partitioning - something like per month or per year partitioning.\n\n\n> my postgres.conf is actually the default one, despite the fact that we\n> increased the value for work_mem=128MB\n>\n\nHow much memory do you have? Could you increase shared_buffers?\n\nAlso with a SSD you could decrease random_page_cost a little bit.\n\nSee [1].\n\n[1] http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nRegards.\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados PostgreSQL\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Mon, 15 Oct 2012 18:42:02 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
"msg_contents": "Hi,\n\nOn 16 October 2012 07:59, houmanb <[email protected]> wrote:\n> Dear all,\n> We have a DB containing transactional data.\n> There are about *50* to *100 x 10^6* rows in one *huge* table.\n> We are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing us\n> a constant seeking time.\n\nHow many columns? What's the average row size?\n\n> Is there any way to speed up select statements like this:\n>\n> SELECT\n> SUM(T.x),\n> SUM(T.y),\n> SUM(T.z),\n> AVG(T.a),\n> AVG(T.b)\n> FROM T\n> GROUP BY\n> T.c\n> WHERE\n> T.creation_date=$SOME_DATE;\n>\n> There is an Index on T.c. But would it help to partition the table by T.c?\n> It should be mentioned, that T.c is actually a foreign key to a Table\n> containing a\n> tiny number of rows (15 rows representing different companies).\n> my postgres.conf is actually the default one, despite the fact that we\n> increased the value for work_mem=128MB\n\nPartitioning by T.c is not going to help. You should partition by\nT.creation_date. The question is if all queries have T.creation_date\nin where clause. Moreover, you need to choose partition size base on\nquery range so majority of queries can operate on one or two\npartitions.\n\nYou can try vertical partitioning ie. split table based on column usage:\n- group by frequency of use\n- group by number of NULLs (null_frac in pg_stats)\n\nHaving \"SSD card on PCIex\" joining tables should be the problem.\n\nIn my case table has > 200 columns and monthly partitions (> 30 mil\nrows on average) and aggregation queries performed better than 200sec.\n\n-- \nOndrej Ivanic\n([email protected])\n(http://www.linkedin.com/in/ondrejivanic)\n\n",
"msg_date": "Tue, 16 Oct 2012 08:54:47 +1100",
    "msg_from": "Ondrej Ivanič <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
"msg_contents": "Houman,\n\nPartition by date and revise your processes to create and load a new child table every day. Since you already know the date append it to the table base name and go straight to the data you need. Also, the index on T.c won't help for this query, you're looking at a full table scan every time. \n\nBob\n\nSent from my iPhone\n\nOn Oct 15, 2012, at 3:59 PM, houmanb <[email protected]> wrote:\n\n> Dear all,\n> We have a DB containing transactional data. \n> There are about *50* to *100 x 10^6* rows in one *huge* table.\n> We are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing us\n> a constant seeking time.\n> \n> A typical select (see below) takes about 200 secs. As the database is the\n> backend for a web-based reporting facility 200 to 500 or even more secs\n> response times are not acceptable for the customer.\n> \n> Is there any way to speed up select statements like this:\n> \n> SELECT\n> SUM(T.x),\n> SUM(T.y),\n> SUM(T.z),\n> AVG(T.a),\n> AVG(T.b)\n> FROM T\n> GROUP BY \n> T.c\n> WHERE \n> T.creation_date=$SOME_DATE;\n> \n> There is an Index on T.c. But would it help to partition the table by T.c?\n> It should be mentioned, that T.c is actually a foreign key to a Table\n> containing a \n> tiny number of rows (15 rows representing different companies).\n> my postgres.conf is actually the default one, despite the fact that we\n> increased the value for work_mem=128MB\n> \n> Thanks in advance\n> Houman\n> \n> \n> \n> \n> \n> \n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/SELECT-AND-AGG-huge-tables-tp5728306.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 15 Oct 2012 18:44:59 -0500",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
"msg_contents": "On Mon, Oct 15, 2012 at 1:59 PM, houmanb <[email protected]> wrote:\n> Dear all,\n> We have a DB containing transactional data.\n> There are about *50* to *100 x 10^6* rows in one *huge* table.\n> We are using postgres 9.1.6 on linux with a *SSD card on PCIex* providing us\n> a constant seeking time.\n>\n> A typical select (see below) takes about 200 secs. As the database is the\n> backend for a web-based reporting facility 200 to 500 or even more secs\n> response times are not acceptable for the customer.\n>\n> Is there any way to speed up select statements like this:\n>\n> SELECT\n> SUM(T.x),\n> SUM(T.y),\n> SUM(T.z),\n> AVG(T.a),\n> AVG(T.b)\n> FROM T\n> GROUP BY\n> T.c\n> WHERE\n> T.creation_date=$SOME_DATE;\n>\n> There is an Index on T.c. But would it help to partition the table by T.c?\n\nProbably not.\n\nBut an index on creation_date, or on (creation_date, c) might. How\nmany records are there per day? If you add a count(*) to your select,\nwhat would typical values be?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 15 Oct 2012 17:04:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
"msg_contents": "On 10/16/2012 04:59 AM, houmanb wrote:\n\n> There is an Index on T.c. But would it help to partition the table by T.c?\n\nYou should really post EXPLAIN ANALYZE for questions like this. See\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n--\nCraig Ringer\n\n\n\n",
"msg_date": "Tue, 16 Oct 2012 10:42:36 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
"msg_contents": "Hi all,\nThanks for your advice and the link about posting my question in an\nappropriate form.\nHere are the info. I thank all of you in advance.\n\nBest regards\nHouman\n\n\n\nPostgres version: 9.1.4\n=================================================\nPostgres.conf\nmax_connections = 100\nshared_buffers = 8192MB\nwork_mem = 500MB\nlog_statement = 'none'\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\nmax_locks_per_transaction = 256\n\n=================================================\nHardware: \nCPU Quad Core Intel CPU\nprocessor\t: 0-7\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 45\nmodel name\t: Intel(R) Core(TM) i7-3820 CPU @ 3.60GHz\n\nMemory:\nMemTotal: 32927920 kB\n\nHDD:\nOCZ VeloDrive - Solid-State-Disk - 600 GB - intern - PCI Express 2.0 x8\nMulti-Level-Cell (MLC)\nPCI Express 2.0 x8\n========================IO/stat===================\niostat sdb1 1\nLinux 3.2.0-23-generic (regula2) \t10/17/2012 \t_x86_64_\t(8 CPU)\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsdb1 6.44 217.91 240.45 1956400373 2158777589\nsdb1 0.00 0.00 0.00 0 0\nsdb1 0.00 0.00 0.00 0 0\nsdb1 0.00 0.00 0.00 0 0\nsdb1 0.00 0.00 0.00 0 0\nsdb1 0.00 0.00 0.00 0 0\n=========================vmstat==========================\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 1 0 44376 2417096 210784 28664024 0 0 30 35 0 0 0 0\n100 0\n 0 0 44376 2416964 210784 28664024 0 0 0 0 80 138 0 0\n100 0\n 1 0 44376 2416592 210784 28664024 0 0 0 0 278 228 7 0\n93 0\n 1 0 44376 2416592 210784 28664280 0 0 0 0 457 305 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 472 303 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 462 296 13 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 478 293 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 
0 0 0 0 470 317 12 0\n87 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 455 299 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 459 301 12 0\n87 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 370 291 7 5\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 29 459 319 12 1\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 453 295 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 449 284 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 8 462 304 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 459 307 12 0\n88 0\n 2 0 44376 2416716 210784 28664280 0 0 0 0 461 300 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 457 299 12 0\n87 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 439 295 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 439 306 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 448 305 12 0\n88 0\n 1 0 44376 2416716 210784 28664280 0 0 0 0 457 289 12 0\n88 0\n 0 0 44376 2416716 210784 28664280 0 0 0 0 174 179 3 0\n97 0\n 0 0 44376 2416716 210784 28664280 0 0 0 0 73 133 0 0\n100 0\n 0 0 44376 2416716 210784 28664280 0 0 0 0 75 133 0 0\n100 0\n 0 0 44376 2416716 210784 28664280 0 0 0 0 70 127 0 0\n100 0\n\n\n\n Column | Type | \nModifiers \n-----------------------+-----------------------------+-------------------------------------------------------\n modifying_action | integer | \n modifying_client | integer | \n modification_time | timestamp without time zone | \n instance_entity | integer | \n id | integer | not null default\nnextval('enigma.fact_seq'::regclass)\n successor | integer | \n reporting_date | integer | \n legal_entity | integer | \n client_system | integer | \n customer | integer | \n customer_type | integer | \n borrower | integer | \n nace | integer | \n lsk | integer | \n review_date | integer | \n uci_status | integer | \n rating | integer | \n rating_date | integer | \n asset_class_sta_flags | integer | \n asset_class_flags | integer | \n balance_indicator | integer | \n quantity | integer | \n credit_line | 
numeric | \n outstanding | numeric | \n ead | numeric | \n ead_collateralized | numeric | \n ead_uncollateralized | numeric | \n el | numeric | \n rwa | numeric | \n lgd | numeric | \n pd | numeric | \n economic_capital | numeric | \n unit | integer | \n========================================================================\nIndexes:\n \"fact_pkey\" PRIMARY KEY, btree (id)\n \"enigma_fact_id_present\" UNIQUE CONSTRAINT, btree (id)\n \"indx_enigma_fact_legal_entity\" btree (legal_entity)\n \"indx_enigma_fact_reporting_date\" btree (reporting_date)\nTriggers:\n fact_before_update_referrers_trigger BEFORE DELETE ON enigma.fact FOR\nEACH ROW EXECUTE PROCEDURE enigma.fact_update_referrers_function()\n========================================================================\ngenesis=# SELECT count(*) FROM enigma.fact;\n count \n---------\n 7493958\n========================================================================\nEXPLAIN analyze SELECT \nSUM(T.quantity) AS T__quantity, \nSUM(T.credit_line) AS T__credit_line, \nSUM(T.outstanding) AS T__outstanding, \nSUM(T.ead) AS T__ead, \nSUM(T.ead_collateralized) AS T__ead_collateralized, \nSUM(T.ead_uncollateralized) AS T__ead_uncollateralized, \nSUM(T.el) AS T__el, \nSUM(T.rwa) AS T__rwa, \nAVG(T.lgd) AS T__lgd, \nAVG(T.pd) AS T__pd\nFROM enigma.fact T \nGROUP BY T.legal_entity \nORDER BY T.legal_entity;\n----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1819018.32..1819018.36 rows=15 width=48) (actual\ntime=20436.264..20436.264 rows=15 loops=1)\n Sort Key: legal_entity\n Sort Method: quicksort Memory: 27kB\n -> HashAggregate (cost=1819017.80..1819018.02 rows=15 width=48) (actual\ntime=20436.221..20436.242 rows=15 loops=1)\n -> Seq Scan on fact t (cost=0.00..959291.68 rows=31262768\nwidth=48) (actual time=2.619..1349.523 rows=7493958 loops=1)\n Total runtime: 20436.410 
ms\n\n========================================================================\n\nEXPLAIN (BUFFERS true, ANALYZE) SELECT SUM(T.quantity) AS T__quantity, \nSUM(T.credit_line) AS T__credit_line, \nSUM(T.outstanding) AS T__outstanding, \nSUM(T.ead) AS T__ead, \nSUM(T.ead_collateralized) AS T__ead_collateralized, \nSUM(T.ead_uncollateralized) AS T__ead_uncollateralized, \nSUM(T.el) AS T__el, \nSUM(T.rwa) AS T__rwa, \nAVG(T.lgd) AS T__lgd, \nAVG(T.pd) AS T__pd\nFROM enigma.fact T \nGROUP BY T.legal_entity \nORDER BY T.legal_entity;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1819018.32..1819018.36 rows=15 width=48) (actual\ntime=20514.976..20514.977 rows=15 loops=1)\n Sort Key: legal_entity\n Sort Method: quicksort Memory: 27kB\n Buffers: shared hit=2315 read=644351\n -> HashAggregate (cost=1819017.80..1819018.02 rows=15 width=48) (actual\ntime=20514.895..20514.917 rows=15 loops=1)\n Buffers: shared hit=2313 read=644351\n -> Seq Scan on fact t (cost=0.00..959291.68 rows=31262768\nwidth=48) (actual time=2.580..1385.491 rows=7493958 loops=1)\n Buffers: shared hit=2313 read=644351\n Total runtime: 20515.369 ms\n\n\n QUERY PLAN \n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/SELECT-AND-AGG-huge-tables-tp5728306p5728572.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 17 Oct 2012 04:24:06 -0700 (PDT)",
"msg_from": "houmanb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT AND AGG huge tables"
},
{
"msg_contents": "On Wed, Oct 17, 2012 at 2:24 PM, houmanb <[email protected]> wrote:\n> Hi all,\n> Thanks for your advice and the link about posting my question in an\n> appropriate form.\n> Here are the info. I thank all of you in advance.\n\nCan you run the EXPLAIN once more with EXPLAIN (ANALYZE, BUFFERS,\nTIMING OFF). Given the number of rows processed by the query, the\ndetailed per node timing overhead might be a considerable factor here.\n\nWhat happened to the \"WHERE T.creation_date=$SOME_DATE\" part of the\nquery. These examples go through the whole table. The plans shown are\nabout as fast as it gets. Summarizing 5GB of data will never be fast.\nIf you need that information quickly, you'll need to actively maintain\nthe aggregate values via triggers.\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n",
"msg_date": "Fri, 19 Oct 2012 15:24:46 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT AND AGG huge tables"
}
] |
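Following the suggestions in the thread above (an index led by `creation_date`, optionally with the grouping column appended), a minimal sketch might look like this. The table name `T` and its columns come from the quoted query; the index name and sample date are placeholders, and note that `WHERE` must precede `GROUP BY` for the posted query to be valid SQL:

```sql
-- Index matching the access path: filter by date first, then group by the
-- company key (only 15 distinct values, per the thread).
CREATE INDEX t_creation_date_c_idx ON t (creation_date, c);

-- The query from the thread, with the clauses in valid order:
SELECT sum(t.x), sum(t.y), sum(t.z), avg(t.a), avg(t.b)
FROM t
WHERE t.creation_date = 20121015   -- placeholder for $SOME_DATE
GROUP BY t.c;
```

If most queries filter on `creation_date`, range partitioning on that column (per month or per year, as discussed above) keeps each scan to one or two partitions instead of the full 50-100 million rows.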
[
{
    "msg_contents": "Hi to all, \n\n\nI've got a trouble with some delete statements. My db contains a little more than 10000 tables and runs on a dedicated server (Debian 6 - bi quad - 16Gb - SAS disks raid 0). Most of the tables contains between 2 and 3 million rows and no foreign keys exist between them. Each is indexed (btree) on start_date / end_date fields (bigint). The Postgresql server has been tuned (I can give modified values if needed). \n\n\nI perform recurrent DELETE upon a table subset (~1900 tables) and each time, I delete a few lines (between 0 and 1200). Usually it takes between 10s and more than 2mn. It seems to me to be a huge amount of time ! An EXPLAIN ANALYZE on a DELETE shows me that the planner uses a Seq Scan instead of an Index Scan. Autovaccum is on and I expect the db stats to be updated in real time (pg_stats file is stored in /dev/shm RAM disk for quick access). \n\n\nDo you have any idea about this trouble ? \n\n\n\nSylvain Caillet \nBureau : + 33 5 59 41 51 10 \[email protected] \n\nALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart \nwww.alaloop.com",
"msg_date": "Tue, 16 Oct 2012 09:50:12 +0200 (CEST)",
"msg_from": "Sylvain CAILLET <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Delete : Seq scan instead of index scan"
},
{
    "msg_contents": "Hi Sylvain,\n\nMight sound like a nasty question, and gurus will correct me if I'm wrong,\nbut first thing to investigate is why the index is not used :\n- You have 2/3 million rows per table so the planner should use the index.\nSeqscan is prefered for small tables.\n- Maybe the WHERE clause of your DELETE statement doesn't make use of your\nstart and end date columns ? If so, in which order ?\n\nPlease, provide with your Pg version and the table setup with the index.\n\nRegards,\n\nSekine\n\n2012/10/16 Sylvain CAILLET <[email protected]>\n\n> Hi to all,\n>\n> I've got a trouble with some delete statements. My db contains a little\n> more than 10000 tables and runs on a dedicated server (Debian 6 - bi quad\n> - 16Gb - SAS disks raid 0). Most of the tables contains between 2 and 3\n> million rows and no foreign keys exist between them. Each is indexed\n> (btree) on start_date / end_date fields (bigint). The Postgresql server has\n> been tuned (I can give modified values if needed).\n>\n> I perform recurrent DELETE upon a table subset (~1900 tables) and each\n> time, I delete a few lines (between 0 and 1200). Usually it takes between\n> 10s and more than 2mn. It seems to me to be a huge amount of time ! An\n> EXPLAIN ANALYZE on a DELETE shows me that the planner uses a Seq Scan\n> instead of an Index Scan. Autovaccum is on and I expect the db stats to be\n> updated in real time (pg_stats file is stored in /dev/shm RAM disk for\n> quick access).\n>\n> Do you have any idea about this trouble ?\n>\n> Sylvain Caillet\n> Bureau : + 33 5 59 41 51 10\n> [email protected]\n>\n> ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart\n> www.alaloop.com\n>\n>\n",
"msg_date": "Tue, 16 Oct 2012 10:01:01 +0200",
"msg_from": "=?UTF-8?Q?S=C3=A9kine_Coulibaly?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Delete : Seq scan instead of index scan"
},
{
"msg_contents": "On 10/16/2012 03:50 PM, Sylvain CAILLET wrote:\n> Hi to all,\n>\n> I've got a trouble with some delete statements. My db contains a little\n> more than 10000 tables and runs on a dedicated server (Debian 6 - bi\n> quad - 16Gb - SAS disks raid 0). Most of the tables contains between 2\n> and 3 million rows and no foreign keys exist between them. Each is\n> indexed (btree) on start_date / end_date fields (bigint). The Postgresql\n> server has been tuned (I can give modified values if needed).\n>\n> I perform recurrent DELETE upon a table subset (~1900 tables) and each\n> time, I delete a few lines (between 0 and 1200). Usually it takes\n> between 10s and more than 2mn. It seems to me to be a huge amount of\n> time ! An EXPLAIN ANALYZE on a DELETE shows me that the planner uses a\n> Seq Scan instead of an Index Scan.\n\nCan you post that (or paste to explain.depesz.com and link to it here) \nalong with a \"\\d tablename\" from psql?\n\n--\nCraig Ringer\n\n",
"msg_date": "Tue, 16 Oct 2012 16:09:21 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Delete : Seq scan instead of index scan"
},
{
"msg_contents": "Hi Sékine, \n\nYou're right : my question is why the planner doesn't use the index ! My DELETE statements have WHERE clause like : start_date<1346486100000. They are executed to delete too old rows. \nMy postgresql version is 8.4. Below is an example of a table (they all have the same structure) : \n\nCREATE TABLE agg_t100_outgoing_a39_src_net_f5 \n( \ntotal_pkts bigint, \nend_date bigint, \nsrc_network inet, \nstart_date bigint, \ntotal_flows bigint, \ntotal_bytes bigint \n) \nWITH ( \nOIDS=FALSE \n); \n\nCREATE INDEX agg_t100_outgoing_a39_src_net_f5_end_date \nON agg_t100_outgoing_a39_src_net_f5 \nUSING btree \n(end_date); \n\nCREATE INDEX agg_t100_outgoing_a39_src_net_f5_start_date \nON agg_t100_outgoing_a39_src_net_f5 \nUSING btree \n(start_date); \n\nI have investigated in the pg_stat_all_tables table and it seems the autovaccum / autoanalyze don't do their job. Many tables have no last_autovacuum / last_autoanalyze dates ! So the planner doesn't have fresh stats to decide. Don't you think it could be a good reason for slow DELETE ? In this case, the trouble could come from the autovaccum configuration. \n\nRegards, \n\nSylvain \n----- Mail original -----\n\n> Hi Sylvain,\n\n> Might sound like a nasty question, and gurus will correct me if I'm\n> wrong, but first thing to investigate is why the index is not used :\n> - You have 2/3 million rows per table so the planner should use the\n> index. Seqscan is prefered for small tables.\n> - Maybe the WHERE clause of your DELETE statement doesn't make use of\n> your start and end date columns ? If so, in which order ?\n\n> Please, provide with your Pg version and the table setup with the\n> index.\n\n> Regards,\n\n> Sekine\n\n> 2012/10/16 Sylvain CAILLET < [email protected] >\n\n> > Hi to all,\n> \n\n> > I've got a trouble with some delete statements. My db contains a\n> > little more than 10000 tables and runs on a dedicated server\n> > (Debian\n> > 6 - bi quad - 16Gb - SAS disks raid 0). 
Most of the tables contains\n> > between 2 and 3 million rows and no foreign keys exist between\n> > them.\n> > Each is indexed (btree) on start_date / end_date fields (bigint).\n> > The Postgresql server has been tuned (I can give modified values if\n> > needed).\n> \n\n> > I perform recurrent DELETE upon a table subset (~1900 tables) and\n> > each time, I delete a few lines (between 0 and 1200). Usually it\n> > takes between 10s and more than 2mn. It seems to me to be a huge\n> > amount of time ! An EXPLAIN ANALYZE on a DELETE shows me that the\n> > planner uses a Seq Scan instead of an Index Scan. Autovaccum is on\n> > and I expect the db stats to be updated in real time (pg_stats file\n> > is stored in /dev/shm RAM disk for quick access).\n> \n\n> > Do you have any idea about this trouble ?\n> \n\n> > Sylvain Caillet\n> \n> > Bureau : + 33 5 59 41 51 10\n> \n> > [email protected]\n> \n\n> > ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart\n> \n> > www.alaloop.com\n> \n",
"msg_date": "Tue, 16 Oct 2012 10:13:10 +0200 (CEST)",
"msg_from": "Sylvain CAILLET <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Delete : Seq scan instead of index scan"
},
{
"msg_contents": "Hi Craig, \n\nHere are the outputs : \n\nflows=# explain analyze delete from agg_t377_incoming_a40_dst_net_f5 where start_date < 1346487911000; \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------- \nSeq Scan on agg_t377_incoming_a40_dst_net_f5 (cost=0.00..34448.96 rows=657622 width=6) (actual time=3429.058..7135.901 rows=143 loops=1) \nFilter: (start_date < 1346487911000::bigint) \nTotal runtime: 7136.191 ms \n(3 rows) \n\nflows=# \\d agg_t377_incoming_a40_dst_net_f5 \nTable \"public.agg_t377_incoming_a40_dst_net_f5\" \nColumn | Type | Modifiers \n-------------+--------+----------- \nend_date | bigint | \ndst_network | inet | \ntotal_pkts | bigint | \ntotal_bytes | bigint | \nstart_date | bigint | \ntotal_flows | bigint | \nIndexes: \n\"agg_t377_incoming_a40_dst_net_f5_end_date\" btree (end_date) \n\"agg_t377_incoming_a40_dst_net_f5_start_date\" btree (start_date) \n\nThanks for your help, \n\nSylvain \n----- Mail original -----\n\n> On 10/16/2012 03:50 PM, Sylvain CAILLET wrote:\n> > Hi to all,\n> >\n> > I've got a trouble with some delete statements. My db contains a\n> > little\n> > more than 10000 tables and runs on a dedicated server (Debian 6 -\n> > bi\n> > quad - 16Gb - SAS disks raid 0). Most of the tables contains\n> > between 2\n> > and 3 million rows and no foreign keys exist between them. Each is\n> > indexed (btree) on start_date / end_date fields (bigint). The\n> > Postgresql\n> > server has been tuned (I can give modified values if needed).\n> >\n> > I perform recurrent DELETE upon a table subset (~1900 tables) and\n> > each\n> > time, I delete a few lines (between 0 and 1200). Usually it takes\n> > between 10s and more than 2mn. It seems to me to be a huge amount\n> > of\n> > time ! 
An EXPLAIN ANALYZE on a DELETE shows me that the planner\n> > uses a\n> > Seq Scan instead of an Index Scan.\n\n> Can you post that (or paste to explain.depesz.com and link to it\n> here)\n> along with a \"\\d tablename\" from psql?\n\n> --\n> Craig Ringer",
"msg_date": "Tue, 16 Oct 2012 10:24:42 +0200 (CEST)",
"msg_from": "Sylvain CAILLET <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Delete : Seq scan instead of index scan"
},
{
"msg_contents": "the first thing you should probably do is run an 'analyze' on one of these\ntables and then run again the delete statement. if there are no stats for\nthese tables, it's normal not to have very good plans.\n\n\n\nOn Tue, Oct 16, 2012 at 11:24 AM, Sylvain CAILLET <[email protected]>wrote:\n\n> Hi Craig,\n>\n> Here are the outputs :\n>\n> flows=# explain analyze delete from agg_t377_incoming_a40_dst_net_f5 where\n> start_date < 1346487911000;\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on agg_t377_incoming_a40_dst_net_f5 (cost=0.00..34448.96\n> rows=657622 width=6) (actual time=3429.058..7135.901 rows=143 loops=1)\n> Filter: (start_date < 1346487911000::bigint)\n> Total runtime: 7136.191 ms\n> (3 rows)\n> flows=# \\d agg_t377_incoming_a40_dst_net_f5\n> Table \"public.agg_t377_incoming_a40_dst_net_f5\"\n> Column | Type | Modifiers\n> -------------+--------+-----------\n> end_date | bigint |\n> dst_network | inet |\n> total_pkts | bigint |\n> total_bytes | bigint |\n> start_date | bigint |\n> total_flows | bigint |\n> Indexes:\n> \"agg_t377_incoming_a40_dst_net_f5_end_date\" btree (end_date)\n> \"agg_t377_incoming_a40_dst_net_f5_start_date\" btree (start_date)\n>\n> Thanks for your help,\n>\n> Sylvain\n>\n> ------------------------------\n>\n> On 10/16/2012 03:50 PM, Sylvain CAILLET wrote:\n> > Hi to all,\n> >\n> > I've got a trouble with some delete statements. My db contains a little\n> > more than 10000 tables and runs on a dedicated server (Debian 6 - bi\n> > quad - 16Gb - SAS disks raid 0). Most of the tables contains between 2\n> > and 3 million rows and no foreign keys exist between them. Each is\n> > indexed (btree) on start_date / end_date fields (bigint). 
The Postgresql\n> > server has been tuned (I can give modified values if needed).\n> >\n> > I perform recurrent DELETE upon a table subset (~1900 tables) and each\n> > time, I delete a few lines (between 0 and 1200). Usually it takes\n> > between 10s and more than 2mn. It seems to me to be a huge amount of\n> > time ! An EXPLAIN ANALYZE on a DELETE shows me that the planner uses a\n> > Seq Scan instead of an Index Scan.\n>\n> Can you post that (or paste to explain.depesz.com and link to it here)\n> along with a \"\\d tablename\" from psql?\n>\n> --\n> Craig Ringer\n>\n>",
"msg_date": "Tue, 16 Oct 2012 11:41:02 +0300",
"msg_from": "Filippos Kalamidas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Delete : Seq scan instead of index scan"
},
{
"msg_contents": "On 10/16/2012 04:41 PM, Filippos Kalamidas wrote:\n> the first thing you should probably do is run an 'analyze' on one of\n> these tables and then run again the delete statement. if there are no\n> stats for these tables, it's normal not to have very good plans.\n\nYep, and the fact that the stats are that bad suggests that autovaccum \nprobably isn't running, or isn't running often enough.\n\nIf you have a high INSERT/UPDATE/DELETE load, then turn autovacuum up on \nthat table. See:\n\n http://www.postgresql.org/docs/current/static/routine-vacuuming.html\n\n \nhttp://www.postgresql.org/docs/current/static/runtime-config-autovacuum.html\n\n\nIf the table is badly bloated it might be worth running \"VACUUM FULL\" on \nit or (if you're on PostgreSQL 8.4 or below) instead CLUSTER the table \non an index, as \"VACUUM FULL\" is very inefficient in 8.4 and older (I \nthink; I might be misremembering the versions).\n\n\nPlease specify your PostgreSQL version in all questions. See \nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Wed, 17 Oct 2012 14:19:49 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Delete : Seq scan instead of index scan"
}
] |
[
{
"msg_contents": "Hi communities,\n\nI am investigating a performance issue involved with LIKE 'xxxx%' on an\nindex in a complex query with joins. \n\nThe problem boils down into this simple scenario---:\n====Scenario====\nMy database locale is C, using UTF-8 encoding. I tested this on 9.1.6 and 9.\n2.1.\n\nQ1.\nSELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n \nQ2.\nSELECT * FROM shipments WHERE shipment_id >= '12345678' AND shipment_id <\n'12345679'\n\nshipments is a table with million rows and 20 columns. Shipment_id is the\nprimary key with text and non-null field.\n\nCREATE TABLE cod.shipments\n(\n shipment_id text NOT NULL,\n -- other columns omitted\n CONSTRAINT shipments_pkey PRIMARY KEY (shipment_id)\n)\n\nAnalyze Q1 gives this:\nIndex Scan using shipments_pkey on shipments (cost=0.00..39.84 rows=1450\nwidth=294) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n'12345679'::text))\n Filter: (shipment_id ~~ '12345678%'::text)\n Buffers: shared hit=4\n\nAnalyze Q2 gives this:\nIndex Scan using shipments_pkey on shipments (cost=0.00..39.83 rows=1\nwidth=294) (actual time=0.027..0.027 rows=1 loops=1)\n Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n'12345679'::text))\n Buffers: shared hit=4\n\n====Problem Description====\nIn Q1, the planner thought there will be 1450 rows, and Q2 gave a much\nbetter estimate of 1.\nThe problem is when I combine such condition with a join to other table,\npostgres will prefer a merge join (or hash) rather than a nested loop.\n\n====Question====\nIs Q1 and Q2 equivalent? From what I see and the result they seems to be the\nsame, or did I miss something? (Charset: C, Encoding: UTF-8)\nIf they are equivalent, is that a bug of the planner?\n\nMany Thanks,\nSam\n\n\n",
"msg_date": "Tue, 16 Oct 2012 16:15:21 +0800",
"msg_from": "\"Sam Wong\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE op with B-Tree Index?"
},
{
"msg_contents": "On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong <[email protected]> wrote:\n> Hi communities,\n>\n> I am investigating a performance issue involved with LIKE 'xxxx%' on an\n> index in a complex query with joins.\n>\n> The problem boils down into this simple scenario---:\n> ====Scenario====\n> My database locale is C, using UTF-8 encoding. I tested this on 9.1.6 and 9.\n> 2.1.\n>\n> Q1.\n> SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n>\n> Q2.\n> SELECT * FROM shipments WHERE shipment_id >= '12345678' AND shipment_id <\n> '12345679'\n>\n> shipments is a table with million rows and 20 columns. Shipment_id is the\n> primary key with text and non-null field.\n>\n> CREATE TABLE cod.shipments\n> (\n> shipment_id text NOT NULL,\n> -- other columns omitted\n> CONSTRAINT shipments_pkey PRIMARY KEY (shipment_id)\n> )\n>\n> Analyze Q1 gives this:\n> Index Scan using shipments_pkey on shipments (cost=0.00..39.84 rows=1450\n> width=294) (actual time=0.018..0.018 rows=1 loops=1)\n> Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n> '12345679'::text))\n> Filter: (shipment_id ~~ '12345678%'::text)\n> Buffers: shared hit=4\n>\n> Analyze Q2 gives this:\n> Index Scan using shipments_pkey on shipments (cost=0.00..39.83 rows=1\n> width=294) (actual time=0.027..0.027 rows=1 loops=1)\n> Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n> '12345679'::text))\n> Buffers: shared hit=4\n>\n> ====Problem Description====\n> In Q1, the planner thought there will be 1450 rows, and Q2 gave a much\n> better estimate of 1.\n> The problem is when I combine such condition with a join to other table,\n> postgres will prefer a merge join (or hash) rather than a nested loop.\n>\n> ====Question====\n> Is Q1 and Q2 equivalent? From what I see and the result they seems to be the\n> same, or did I miss something? (Charset: C, Encoding: UTF-8)\n> If they are equivalent, is that a bug of the planner?\n\nThey are most certainly not equivalent. 
What if the shipping_id is 12345678Z?\n\nmerlin\n\n",
"msg_date": "Tue, 16 Oct 2012 15:29:44 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE op with B-Tree Index?"
},
{
"msg_contents": "> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote,\n> \n> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong <[email protected]> wrote:\n> > Hi communities,\n> >\n> > I am investigating a performance issue involved with LIKE 'xxxx%' on\n> > an index in a complex query with joins.\n> >\n> > The problem boils down into this simple scenario---:\n> > ====Scenario====\n> > My database locale is C, using UTF-8 encoding. I tested this on 9.1.6\nand 9.\n> > 2.1.\n> >\n> > Q1.\n> > SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n> >\n> > Q2.\n> > SELECT * FROM shipments WHERE shipment_id >= '12345678' AND\n> > shipment_id < '12345679'\n> >\n> > ...snip...\n> >\n> > ====Question====\n> > Is Q1 and Q2 equivalent? From what I see and the result they seems to\n> > be the same, or did I miss something? (Charset: C, Encoding: UTF-8) If\n> > they are equivalent, is that a bug of the planner?\n> \n> They are most certainly not equivalent. What if the shipping_id is\n> 12345678Z?\n> \n> merlin\n>\nBut '12345678Z' is indeed >= '12345678' AND < '12345679'. Just like 'apple'\n< 'apples' < 'apply' in a dictionary.\n\nA quick test:\nvitalink=# select * from ss;\n id\n-----------\n 12345678\n 12345678Z\n 12345679\n(3 rows)\n\nvitalink=# select * from ss WHERE id >= '12345678' AND id < '12345679';\n id\n-----------\n 12345678\n 12345678Z\n(2 rows)\n\nSam\n\n\n",
"msg_date": "Wed, 17 Oct 2012 09:01:09 +0800",
"msg_from": "\"Sam Wong\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIKE op with B-Tree Index?"
},
{
"msg_contents": "On Tue, Oct 16, 2012 at 8:01 PM, Sam Wong <[email protected]> wrote:\n>> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote,\n>>\n>> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong <[email protected]> wrote:\n>> > Hi communities,\n>> >\n>> > I am investigating a performance issue involved with LIKE 'xxxx%' on\n>> > an index in a complex query with joins.\n>> >\n>> > The problem boils down into this simple scenario---:\n>> > ====Scenario====\n>> > My database locale is C, using UTF-8 encoding. I tested this on 9.1.6\n> and 9.\n>> > 2.1.\n>> >\n>> > Q1.\n>> > SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n>> >\n>> > Q2.\n>> > SELECT * FROM shipments WHERE shipment_id >= '12345678' AND\n>> > shipment_id < '12345679'\n>> >\n>> > ...snip...\n>> >\n>> > ====Question====\n>> > Is Q1 and Q2 equivalent? From what I see and the result they seems to\n>> > be the same, or did I miss something? (Charset: C, Encoding: UTF-8) If\n>> > they are equivalent, is that a bug of the planner?\n>>\n>> They are most certainly not equivalent. What if the shipping_id is\n>> 12345678Z?\n>>\n>> merlin\n>>\n> But '12345678Z' is indeed >= '12345678' AND < '12345679'. Just like 'apple'\n> < 'apples' < 'apply' in a dictionary.\n\nRight -- I didn't visualize it properly. Still, you're asking the\nserver to infer that since you're looking between to adjacent textual\ncharacters range bounded [) it convert the 'between' to a partial\nstring search. That hold up logically but probably isn't worth\nspending cycles to do, particularly in cases of non-ascii mappable\nunicode characters.\n\nmerlin\n\n",
"msg_date": "Wed, 17 Oct 2012 12:45:25 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE op with B-Tree Index?"
},
{
"msg_contents": "> Moncure wrote on Thursday, October 18, 2012 1:45 \n> On Tue, Oct 16, 2012 at 8:01 PM, Sam Wong <[email protected]> wrote:\n> >> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote,\n> >>\n> >> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong <[email protected]> wrote:\n> >> > Hi communities,\n> >> >\n> >> > I am investigating a performance issue involved with LIKE 'xxxx%'\n> >> > on an index in a complex query with joins.\n> >> >\n> >> > The problem boils down into this simple scenario---:\n> >> > ====Scenario====\n> >> > My database locale is C, using UTF-8 encoding. I tested this on\n> >> > 9.1.6\n> > and 9.\n> >> > 2.1.\n> >> >\n> >> > Q1.\n> >> > SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n> >> >\n> >> > Q2.\n> >> > SELECT * FROM shipments WHERE shipment_id >= '12345678' AND\n> >> > shipment_id < '12345679'\n> >> >\n> >> > ...snip...\n> >> >\n> >> > ====Question====\n> >> > Is Q1 and Q2 equivalent? From what I see and the result they seems\n> >> > to be the same, or did I miss something? (Charset: C, Encoding:\n> >> > UTF-8) If they are equivalent, is that a bug of the planner?\n> >>\n> >> They are most certainly not equivalent. What if the shipping_id is\n> >> 12345678Z?\n> >>\n> >> merlin\n> >>\n> > But '12345678Z' is indeed >= '12345678' AND < '12345679'. Just like\n'apple'\n> > < 'apples' < 'apply' in a dictionary.\n> \n> Right -- I didn't visualize it properly. Still, you're asking the server\nto infer that\n> since you're looking between to adjacent textual characters range bounded\n[) it\n> convert the 'between' to a partial\n> string search. That hold up logically but probably isn't worth\n> spending cycles to do, particularly in cases of non-ascii mappable unicode\n> characters.\n> merlin\n\nPostgresql did that already. 
Refer to the analyze result of Q1 and Q2; it\ngives\n\"Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n'12345679'::text))\"\n(I also just realized they did it just now.)\n\nYet, with the additional Filter (ref Q1 analyze), it's surprising that it\nestimates Q1 will have more rows than Q2.\n\nFYI, I made a self-contained test case and submitted it as bug #7610.\n\n\n",
"msg_date": "Thu, 18 Oct 2012 13:58:40 +0800",
"msg_from": "\"Sam Wong\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIKE op with B-Tree Index?"
},
{
"msg_contents": "Sam Wong wrote:\n>>>>> I am investigating a performance issue involved with LIKE 'xxxx%'\n>>>>> on an index in a complex query with joins.\n\n>>>>> Q1.\n>>>>> SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n>>>>>\n>>>>> Q2.\n>>>>> SELECT * FROM shipments WHERE shipment_id >= '12345678' AND\n>>>>> shipment_id < '12345679'\n\n[Q1 and Q2 have different row estimates]\n\nMerlin wrote:\n>> Right -- I didn't visualize it properly. Still, you're asking\n>> the server to infer that\n>> since you're looking between to adjacent textual characters range\nbounded\n>> [) it convert the 'between' to a partial\n>> string search. That hold up logically but probably isn't worth\n>> spending cycles to do, particularly in cases of non-ascii mappable\nunicode\n>> characters.\n\n> Postgresql did that already. Refer to the analyze result of Q1 and Q2,\nit\n> gives\n> \"Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n> '12345679'::text))\"\n> (I also just realized they did it just now)\n> \n> Yet, with additional Filter (ref Q1 analyze), it's surprisingly that\nit\n> estimates Q1 will have more rows that Q2.\n> \n> FYI, I made a self-contained test case and submitted a bug #7610.\n\nDid you try to increase the statistics for column \"shipment_id\"?\n\nThis will probably not make the difference go away, but\nif the estimate gets better, it might be good enough for\nthe planner to pick the correct plan.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Thu, 18 Oct 2012 09:03:13 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE op with B-Tree Index?"
}
] |
[
{
"msg_contents": "Hi communities,\n\n \n\nI am investigating a performance issue involved with LIKE 'xxxx%' on an\nindex in a complex query with joins. \n\n \n\nThe problem boils down into this simple scenario---:\n\n====Scenario====\n\nMy database locale is C, using UTF-8 encoding. I tested this on 9.1.6 and 9.\n2.1.\n\n \n\nQ1.\n\nSELECT * FROM shipments WHERE shipment_id LIKE '12345678%'\n\n \n\nQ2.\n\nSELECT * FROM shipments WHERE shipment_id >= '12345678' AND shipment_id <\n'12345679'\n\n \n\nshipments is a table with million rows and 20 columns. Shipment_id is the\nprimary key with text and non-null field.\n\n \n\nCREATE TABLE cod.shipments\n\n(\n\n shipment_id text NOT NULL,\n\n -- other columns omitted\n\n CONSTRAINT shipments_pkey PRIMARY KEY (shipment_id)\n\n)\n\n \n\nAnalyze Q1 gives this:\n\nIndex Scan using shipments_pkey on shipments (cost=0.00..39.84 rows=1450\nwidth=294) (actual time=0.018..0.018 rows=1 loops=1)\n\n Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n'12345679'::text))\n\n Filter: (shipment_id ~~ '12345678%'::text)\n\n Buffers: shared hit=4\n\n \n\nAnalyze Q2 gives this:\n\nIndex Scan using shipments_pkey on shipments (cost=0.00..39.83 rows=1\nwidth=294) (actual time=0.027..0.027 rows=1 loops=1)\n\n Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <\n'12345679'::text))\n\n Buffers: shared hit=4\n\n \n\n====Problem Description====\n\nIn Q1, the planner thought there will be 1450 rows, and Q2 gave a much\nbetter estimate of 1.\n\nThe problem is when I combine such condition with a join to other table,\npostgres will prefer a merge join (or hash) rather than a nested loop.\n\n \n\n====Question====\n\nIs Q1 and Q2 equivalent? From what I see and the result they seems to be the\nsame, or did I miss something? (Charset: C, Encoding: UTF-8) If they are\nequivalent, is that a bug of the planner?\n\n \n\nMany Thanks,\n\nSam\n\n \n\n(The email didn’t seems to go through without subscription. 
Resending)",
"msg_date": "Tue, 16 Oct 2012 16:46:42 +0800",
"msg_from": "\"Sam Wong\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE op with B-Tree Index?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhy PostgreSQL, the EnterpriseBD supports create/alter/drop package and the opensource doesn't?\nIs a project or never will have support?\n\n\nThanks\n\nHi,Why PostgreSQL, the EnterpriseBD supports create/alter/drop package and the opensource doesn't?Is a project or never will have support?Thanks",
"msg_date": "Tue, 16 Oct 2012 13:26:37 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support Create package"
},
{
"msg_contents": "2012/10/16 Alejandro Carrillo <[email protected]>:\n> Hi,\n>\n> Why PostgreSQL, the EnterpriseBD supports create/alter/drop package and the\n> opensource doesn't?\n> Is a project or never will have support?\n\nPackages are part of EnterpriseDB Oracle compatibility layer.\nPostgreSQL doesn't support this functionality. Packages are in our\nToDo, but probably nobody working on it and I don't expect it in next\nfew years.\n\nRegards\n\nPavel Stehule\n\n>\n> Thanks\n\n",
"msg_date": "Tue, 16 Oct 2012 14:47:16 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support Create package"
},
{
"msg_contents": "On Tue, Oct 16, 2012 at 01:26:37PM +0100, Alejandro Carrillo wrote:\n> Hi,\n> \n> Why PostgreSQL, the EnterpriseBD supports create/alter/drop package and the opensource doesn't?\n> Is a project or never will have support?\n> \nHi Alejandro,\n\nIsn't that part of their Oracle compatibility secret sauce? For the opensource\nversion, it has never been important enough to anyone invest in the development\neffort.\n\nCheers,\nKen\n\n",
"msg_date": "Tue, 16 Oct 2012 07:52:49 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support Create package"
}
] |
[
{
"msg_contents": "Hi guys,\n\nPG = 9.1.5\nOS = winDOS 2008R8\n\nI have a table that currently has 207 million rows.\nthere is a timestamp field that contains data.\nmore data gets copied from another database into this database.\nHow do I make this do an index scan instead?\nI did an \"analyze audittrailclinical\" to no avail.\nI tested different indexes - no same behavior.\n\nThe query does this:\n\nSELECT \naudittrailclinical.pgid, \naudittrailclinical.timestamp, \nmmuser.logon, \naudittrailclinical.entityname, \naudittrailclinical.clinicalactivity, \naudittrailclinical.audittraileventcode, \naccount.accountnumber, \npatient.dnsortpersonnumber \nFROM \npublic.account, \npublic.audittrailclinical, \npublic.encounter, \npublic.entity, \npublic.mmuser, \npublic.patient, \npublic.patientaccount \nWHERE \n audittrailclinical.encountersid = encounter.encountersid \nand audittrailclinical.timestamp >= '2008-01-01'::timestamp without time zone \nand audittrailclinical.timestamp <= '2012-10-05'::timestamp without time zone\nAND encounter.practiceid = patient.practiceid \nAND encounter.patientid = patient.patientid \nAND encounter.staffid = patient.staffid \nAND entity.entitysid = audittrailclinical.entitysid \nAND mmuser.mmusersid = audittrailclinical.mmusersid \nAND patient.practiceid = patientaccount.practiceid \nAND patient.patientid = patientaccount.patientid \nAND patientaccount.accountsid = account.accountsid \nAND patientaccount.defaultaccount = 'Y' \nAND patient.dnsortpersonnumber = '347450' ;\n\nThe query plan says:\n\n\" -> Seq Scan on audittrailclinical (cost=0.00..8637598.76 rows=203856829 width=62)\"\n\" Filter: ((\"timestamp\" >= '2008-01-01 00:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2012-10-05 00:00:00'::timestamp without time zone))\"\n\nwhich takes forever.\n\nHow do I make this do an index scan instead?\nI did an \"analyze audittrailclinical\" to no avail.\n\nthe table definitions are (the createstamp field is empty - I know, bad 
data):\n\nCREATE TABLE audittrailclinical\n(\n audittrailid text,\n audittraileventcode text,\n clinicalactivity text,\n eventsuccessful text,\n externalunique text,\n recordstamp timestamp without time zone,\n recorddescription text,\n encountersid integer,\n eventuserlogon text,\n computername text,\n applicationcode text,\n practiceid integer,\n mmusersid integer,\n entitysid integer,\n entityname text,\n \"timestamp\" timestamp without time zone,\n lastuser integer,\n createstamp timestamp without time zone,\n pgid bigint DEFAULT nextval(('\"bravepoint_seq\"'::text)::regclass)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE audittrailclinical\n OWNER TO intergy;\nGRANT ALL ON TABLE audittrailclinical TO intergy;\nGRANT SELECT ON TABLE audittrailclinical TO rb;\n\n-- Index: atc_en_time\n\nCREATE INDEX atc_en_time\n ON audittrailclinical\n USING btree\n (entitysid , \"timestamp\" );\n\n-- Index: atc_id\n\n-- DROP INDEX atc_id;\n\nCREATE INDEX atc_id\n ON audittrailclinical\n USING btree\n (audittrailid COLLATE pg_catalog.\"default\" );\n\n\n\n\n\n",
"msg_date": "Tue, 16 Oct 2012 19:45:48 -0400",
"msg_from": "Chris Ruprecht <[email protected]>",
"msg_from_op": true,
"msg_subject": "have: seq scan - want: index scan"
},
{
"msg_contents": "On Tue, Oct 16, 2012 at 4:45 PM, Chris Ruprecht <[email protected]> wrote:\n\n> Hi guys,\n>\n> PG = 9.1.5\n> OS = winDOS 2008R8\n>\n> I have a table that currently has 207 million rows.\n> there is a timestamp field that contains data.\n> more data gets copied from another database into this database.\n> How do I make this do an index scan instead?\n> I did an \"analyze audittrailclinical\" to no avail.\n> I tested different indexes - no same behavior.\n>\n>\n> The query plan says:\n>\n> \" -> Seq Scan on audittrailclinical (cost=0.00..8637598.76\n> rows=203856829 width=62)\"\n> \" Filter: ((\"timestamp\" >= '2008-01-01\n> 00:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2012-10-05\n> 00:00:00'::timestamp without time zone))\"\n>\n> which takes forever.\n>\n> How do I make this do an index scan instead?\n> I did an \"analyze audittrailclinical\" to no avail.\n>\n\nanalyze says 203 million out of 207 million rows are matched by your\ntimestamp filter, so it is definitely going to favour a sequential scan,\nsince an index scan that matches that many rows will inevitably be slower\nthan simply scanning the table, since it will have to both do the lookups\nand load the actual records from the table (all of them, basically) in\norder to determine their visibility to you, so your index scan will just\nturn sequential access of the table pages into random access and require\nindex lookups as well. You can possibly verify this by setting\nenable_seqscan to false and running your analyze again and see how the plan\nchanges, though I don't believe that will necessarily remove all sequential\nscans, it just reduces their likelihood, so you may see that nothing\nchanges. 
If the estimate for the number of matching rows is incorrect,\nyou'll want to increase the statistics gathering for that table or just\nthat column.\n\nALTER TABLE <table> ALTER COLUMN <column> SET STATISTICS <number>\n\nwhere number is between 10 and 1000 and I think the default is 100. Then\nre-analyze the table and see if the query plan shows better estimates. I\nthink 9.2 also supports \"index only scans\" which eliminate the need to load\nthe matched records in certain circumstances. However, all of the columns\nused by the query would need to be in the index, and you are using an awful\nlot of columns between the select clause and the table joins.\n\nAre you lacking indexes on the columns used for joins that would allow more\nselective index scans on those columns which could then just filter by\ntimestamp? I'm not much of an expert on the query planner, so I'm not sure\nwhat exactly will cause that behaviour, but I'd think that good statistics\nand useful indexes should allow the rest of the where clause to be more\nselective of the rows from audittrailclinical unless\npatientaccount.defaultaccount\n= 'Y' and patient.dnsortpersonnumber = '347450' are similarly\nnon-selective, though patient.dnsortpersonnumber would seem like it is\nprobably the strong filter, so make sure you've got indexes and accurate\nstats on all of the foreign keys that connect patient table and\naudittrailclinical table. It'd be useful to see the rest of the explain\nanalyze output so we could see how it is handling the joins and why. Note\nthat because you have multiple composite foreign keys joining tables in\nyour query, you almost certainly won't have those composite keys in a single\nindex. If you have indexes on those columns but they are single-column\nindexes, that may be what is causing the planner to try to filter the atc\ntable on the timestamp rather than via the joins. 
I'm sure someone more\nknowledgable than I will be along eventually to correct any misinformation\nI may have passed along. Without knowing anything about your schema or the\nrest of the explain analyze output, I'm mostly just guessing. There is an\nentire page devoted to formulating useful mailing list questions,\nincidentally. Yours really isn't. Or if the atc table definition is\ncomplete, you are definitely missing potentially useful indexes, since you\nare joining to that table via encountersid and you don't show an index on\nthat column - yet that is the column that eventually joins out to the\npatient and patientaccount tables, which have the stronger filters on them.\n\nIncidentally, why the join to the entity table via entitysid? No columns\nfrom that table appear to be used anywhere else in the query.\n\n--sam",
"msg_date": "Wed, 17 Oct 2012 02:08:15 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: have: seq scan - want: index scan"
}
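The statistics and indexing suggestions above can be sketched concretely. This is a sketch, not a prescription: the column comes from the schema earlier in the thread, 500 is an arbitrary bump over the default target of 100, and the index name is illustrative:

```sql
-- Raise the per-column statistics target, then refresh the stats so the
-- planner's row estimates for the timestamp range improve.
ALTER TABLE audittrailclinical
  ALTER COLUMN "timestamp" SET STATISTICS 500;
ANALYZE audittrailclinical;

-- An index on the join column could let the planner reach the audit rows
-- through the far more selective patient/encounter joins instead of
-- filtering 200M+ rows by timestamp (hypothetical index name):
CREATE INDEX atc_encountersid
  ON audittrailclinical (encountersid);
```

After the ANALYZE, re-run EXPLAIN on the original query and check whether the estimated row counts move closer to reality and whether the join order changes.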
] |
[
{
"msg_contents": "Hi guys,\n\nPG = 9.1.5\nOS = winDOS 2008R8\n\nI have a table that currently has 207 million rows.\nthere is a timestamp field that contains data.\nmore data gets copied from another database into this database.\nHow do I make this do an index scan instead?\nI did an \"analyze audittrailclinical\" to no avail.\nI tested different indexes - no same behavior.\n\nThe query does this:\n\nSELECT \naudittrailclinical.pgid, \naudittrailclinical.timestamp, \nmmuser.logon, \naudittrailclinical.entityname, \naudittrailclinical.clinicalactivity, \naudittrailclinical.audittraileventcode, \naccount.accountnumber, \npatient.dnsortpersonnumber \nFROM \npublic.account, \npublic.audittrailclinical, \npublic.encounter, \npublic.entity, \npublic.mmuser, \npublic.patient, \npublic.patientaccount \nWHERE \n audittrailclinical.encountersid = encounter.encountersid \nand audittrailclinical.timestamp >= '2008-01-01'::timestamp without time zone \nand audittrailclinical.timestamp <= '2012-10-05'::timestamp without time zone\nAND encounter.practiceid = patient.practiceid \nAND encounter.patientid = patient.patientid \nAND encounter.staffid = patient.staffid \nAND entity.entitysid = audittrailclinical.entitysid \nAND mmuser.mmusersid = audittrailclinical.mmusersid \nAND patient.practiceid = patientaccount.practiceid \nAND patient.patientid = patientaccount.patientid \nAND patientaccount.accountsid = account.accountsid \nAND patientaccount.defaultaccount = 'Y' \nAND patient.dnsortpersonnumber = '347450' ;\n\nThe query plan says:\n\n\" -> Seq Scan on audittrailclinical (cost=0.00..8637598.76 rows=203856829 width=62)\"\n\" Filter: ((\"timestamp\" >= '2008-01-01 00:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2012-10-05 00:00:00'::timestamp without time zone))\"\n\nwhich takes forever.\n\nHow do I make this do an index scan instead?\nI did an \"analyze audittrailclinical\" to no avail.\n\nthe table definitions are (the createstamp field is empty - I know, bad 
data):\n\nCREATE TABLE audittrailclinical\n(\n audittrailid text,\n audittraileventcode text,\n clinicalactivity text,\n eventsuccessful text,\n externalunique text,\n recordstamp timestamp without time zone,\n recorddescription text,\n encountersid integer,\n eventuserlogon text,\n computername text,\n applicationcode text,\n practiceid integer,\n mmusersid integer,\n entitysid integer,\n entityname text,\n \"timestamp\" timestamp without time zone,\n lastuser integer,\n createstamp timestamp without time zone,\n pgid bigint DEFAULT nextval(('\"bravepoint_seq\"'::text)::regclass)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE audittrailclinical\n OWNER TO intergy;\nGRANT ALL ON TABLE audittrailclinical TO intergy;\nGRANT SELECT ON TABLE audittrailclinical TO rb;\n\n-- Index: atc_en_time\n\nCREATE INDEX atc_en_time\n ON audittrailclinical\n USING btree\n (entitysid , \"timestamp\" );\n\n-- Index: atc_id\n\n-- DROP INDEX atc_id;\n\nCREATE INDEX atc_id\n ON audittrailclinical\n USING btree\n (audittrailid COLLATE pg_catalog.\"default\" );\n\n",
"msg_date": "Tue, 16 Oct 2012 19:52:33 -0400",
"msg_from": "Chris Ruprecht <[email protected]>",
"msg_from_op": true,
"msg_subject": "Have: Seq Scan - Want: Index Scan - what am I doing wrong?"
},
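When posting a plan like this to the list, the timed plan is far more useful than plain EXPLAIN. On 9.1 that can be gathered as follows (a sketch; substitute the full query from the post for the simplified one shown here):

```sql
-- EXPLAIN ANALYZE runs the query and reports actual row counts and timings
-- next to the estimates; BUFFERS (available since 9.0) adds shared-buffer
-- hit/read counts, which show how much real I/O the scan did.
EXPLAIN (ANALYZE, BUFFERS)
SELECT pgid, "timestamp"
FROM audittrailclinical
WHERE "timestamp" >= '2008-01-01'
  AND "timestamp" <= '2012-10-05';
```

Note that EXPLAIN ANALYZE really executes the statement, so on a 207-million-row scan it will take as long as the query itself.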
{
"msg_contents": "\nOn Oct 17, 2012, at 3:52 AM, Chris Ruprecht <[email protected]> wrote:\n\n> Hi guys,\n> \n> PG = 9.1.5\n> OS = winDOS 2008R8\n> \n> I have a table that currently has 207 million rows.\n> there is a timestamp field that contains data.\n> more data gets copied from another database into this database.\n> How do I make this do an index scan instead?\n> I did an \"analyze audittrailclinical\" to no avail.\n> I tested different indexes - no same behavior.\n> \n> The query does this:\n> \n> SELECT \n> audittrailclinical.pgid, \n> audittrailclinical.timestamp, \n> mmuser.logon, \n> audittrailclinical.entityname, \n> audittrailclinical.clinicalactivity, \n> audittrailclinical.audittraileventcode, \n> account.accountnumber, \n> patient.dnsortpersonnumber \n> FROM \n> public.account, \n> public.audittrailclinical, \n> public.encounter, \n> public.entity, \n> public.mmuser, \n> public.patient, \n> public.patientaccount \n> WHERE \n> audittrailclinical.encountersid = encounter.encountersid \n> and audittrailclinical.timestamp >= '2008-01-01'::timestamp without time zone \n> and audittrailclinical.timestamp <= '2012-10-05'::timestamp without time zone\n> AND encounter.practiceid = patient.practiceid \n> AND encounter.patientid = patient.patientid \n> AND encounter.staffid = patient.staffid \n> AND entity.entitysid = audittrailclinical.entitysid \n> AND mmuser.mmusersid = audittrailclinical.mmusersid \n> AND patient.practiceid = patientaccount.practiceid \n> AND patient.patientid = patientaccount.patientid \n> AND patientaccount.accountsid = account.accountsid \n> AND patientaccount.defaultaccount = 'Y' \n> AND patient.dnsortpersonnumber = '347450' ;\n> \n> The query plan says:\n> \n> \" -> Seq Scan on audittrailclinical (cost=0.00..8637598.76 rows=203856829 width=62)\"\n> \" Filter: ((\"timestamp\" >= '2008-01-01 00:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2012-10-05 00:00:00'::timestamp without time zone))\"\n> \n> which takes forever.\n> 
\n\nSelecting 5 years of data is not selective at all, so postgres decides it is cheaper to do seqscan. \n\nDo you have an index on patient.dnsortpersonnumber? Can you post a result from \nselect count(*) from patient where dnsortpersonnumber = '347450'; ?\n\n\n> How do I make this do an index scan instead?\n> I did an \"analyze audittrailclinical\" to no avail.\n> \n> the table definitions are (the createstamp field is empty - I know, bad data):\n> \n> CREATE TABLE audittrailclinical\n> (\n> audittrailid text,\n> audittraileventcode text,\n> clinicalactivity text,\n> eventsuccessful text,\n> externalunique text,\n> recordstamp timestamp without time zone,\n> recorddescription text,\n> encountersid integer,\n> eventuserlogon text,\n> computername text,\n> applicationcode text,\n> practiceid integer,\n> mmusersid integer,\n> entitysid integer,\n> entityname text,\n> \"timestamp\" timestamp without time zone,\n> lastuser integer,\n> createstamp timestamp without time zone,\n> pgid bigint DEFAULT nextval(('\"bravepoint_seq\"'::text)::regclass)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> ALTER TABLE audittrailclinical\n> OWNER TO intergy;\n> GRANT ALL ON TABLE audittrailclinical TO intergy;\n> GRANT SELECT ON TABLE audittrailclinical TO rb;\n> \n> -- Index: atc_en_time\n> \n> CREATE INDEX atc_en_time\n> ON audittrailclinical\n> USING btree\n> (entitysid , \"timestamp\" );\n> \n> -- Index: atc_id\n> \n> -- DROP INDEX atc_id;\n> \n> CREATE INDEX atc_id\n> ON audittrailclinical\n> USING btree\n> (audittrailid COLLATE pg_catalog.\"default\" );\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Wed, 17 Oct 2012 04:01:19 +0400",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have: Seq Scan - Want: Index Scan - what am I doing wrong?"
},
{
"msg_contents": "\nOn Oct 16, 2012, at 20:01 , Evgeny Shishkin <[email protected]> wrote:\n\n> Selecting 5 years of data is not selective at all, so postgres decides it is cheaper to do seqscan. \n> \n> Do you have an index on patient.dnsortpersonnumber? Can you post a result from \n> select count(*) from patient where dnsortpersonnumber = '347450'; ?\n> \n\nYes, there is an index:\n\n\"Aggregate (cost=6427.06..6427.07 rows=1 width=0)\"\n\" -> Index Scan using patient_pracsortpatientnumber on patient (cost=0.00..6427.06 rows=1 width=0)\"\n\" Index Cond: (dnsortpersonnumber = '347450'::text)\"\n\n\nIn fact, all the other criteria are picked using an index. I fear that the >= and <= on the timestamp are causing the issue. If I do a \"=\" of just one of them, I get an index scan. But I need to scan the entire range. I get queries like \"give me everything that was entered into the system for this patient between these two dates\". A single date wouldn't work.\n",
"msg_date": "Tue, 16 Oct 2012 20:19:43 -0400",
"msg_from": "Chris Ruprecht <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Have: Seq Scan - Want: Index Scan - what am I doing wrong?"
},
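One point worth noting about the range predicate: the existing atc_en_time index leads with entitysid, so a b-tree descent can only use the timestamp range cheaply when entitysid is also constrained to a value. For the "everything for this patient between these two dates" pattern, the usual index shape puts the equality/join column first and the range column second. A sketch (the index name is illustrative):

```sql
-- Equality column first, range column second: the planner can descend to
-- one encountersid and then walk the contiguous timestamp range within it,
-- instead of scanning the whole date range across all encounters.
CREATE INDEX atc_enc_time
  ON audittrailclinical (encountersid, "timestamp");
```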
{
"msg_contents": "On Tue, Oct 16, 2012 at 08:19:43PM -0400, Chris Ruprecht wrote:\n> \n> On Oct 16, 2012, at 20:01 , Evgeny Shishkin <[email protected]> wrote:\n> \n> > Selecting 5 yours of data is not selective at all, so postgres decides it is cheaper to do seqscan. \n> > \n> > Do you have an index on patient.dnsortpersonnumber? Can you post a result from \n> > select count(*) from patient where dnsortpersonnumber = '347450'; ?\n> > \n> \n> Yes, there is an index:\n> \n> \"Aggregate (cost=6427.06..6427.07 rows=1 width=0)\"\n> \" -> Index Scan using patient_pracsortpatientnumber on patient (cost=0.00..6427.06 rows=1 width=0)\"\n> \" Index Cond: (dnsortpersonnumber = '347450'::text)\"\n> \n> \n> In fact, all the other criteria is picked using an index. I fear that the >= and <= on the timestamp is causing the issue. If I do a \"=\" of just one of them, I get an index scan. But I need to scan the entire range. I get queries like \"give me everything that was entered into the system for this patient between these two dates\". A single date wouldn't work.\n\nHave you read our FAQ on this matter?\n\n\thttp://wiki.postgresql.org/wiki/FAQ#Why_are_my_queries_slow.3F_Why_don.27t_they_use_my_indexes.3F\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Tue, 16 Oct 2012 20:31:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have: Seq Scan - Want: Index Scan - what am I doing\n wrong?"
},
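The FAQ's standard experiment for "why isn't my index used" can be run safely inside a transaction, so nothing leaks into other sessions. A sketch, using a simplified version of the query from this thread:

```sql
BEGIN;
-- Strongly discourage seq scans for this transaction only; the planner
-- will then pick an index plan if one is possible, letting you compare
-- its actual cost and runtime against the seq-scan plan.
SET LOCAL enable_seqscan = off;
EXPLAIN ANALYZE
SELECT count(*)
FROM audittrailclinical
WHERE "timestamp" BETWEEN '2008-01-01' AND '2012-10-05';
ROLLBACK;
```

If the forced index plan comes out slower, the seq scan was the right choice all along and the real fix is a more selective predicate or a different index, not planner settings.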
{
"msg_contents": "Thanks Bruce, \n\nI have, and I even thought, I understood it :). \n\nI just ran an explain analyze on another table - and ever since the query plan changed. It's now using the index as expected. I guess, I have some more reading to do.\n\nOn Oct 16, 2012, at 20:31 , Bruce Momjian <[email protected]> wrote:\n\n> \n> Have you read our FAQ on this matter?\n> \n\n\n",
"msg_date": "Tue, 16 Oct 2012 20:43:00 -0400",
"msg_from": "Chris Ruprecht <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Have: Seq Scan - Want: Index Scan - what am I doing wrong?"
}
] |
[
{
"msg_contents": "What is the adequate *pgbouncer* *max_client_conn, default_pool_size* values\nfor a postgres config which has *max_connections = 400*?\n\nWe want to move to pgbouncer to let postgres do the only db job, but it\nconfused us. We have over 120 databases in a single postgres engine with,\nas I said, max_connections = 400.\n\n120+ databases are driven from 3 separate web servers, and we are planning to\ninstall pgbouncer on all three of them.\n\nPS: in the source code of pgbouncer it looks like it opens separate fd's\nfor max_client_conn * default_pool_size.\n\nRegards,\n\nYetkin Öztürk",
"msg_date": "Wed, 17 Oct 2012 10:05:05 +0300",
"msg_from": "=?ISO-8859-1?Q?Yetkin_=D6zt=FCrk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbounce max_client_conn and default_pool_size"
}
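For sizing, the key point is that max_client_conn caps client-side connections per pgbouncer instance, while the server-side connection count is bounded by the pool sizes, and pools are created per database/user pair. With 120+ databases behind three pgbouncers and max_connections = 400 on the server, the per-pool size has to stay very small (roughly 400 / (3 × 120) ≈ 1). A sketch of the relevant pgbouncer.ini settings under those assumptions (values are illustrative, not recommendations):

```ini
[pgbouncer]
; transaction pooling lets many clients share few server connections
pool_mode = transaction
; client-side cap per pgbouncer instance; may far exceed max_connections,
; since clients queue for a pooled server connection
max_client_conn = 1000
; server connections per database/user pair; with 3 bouncers x 120 dbs
; this must stay near 1 to respect the server's max_connections = 400
default_pool_size = 1
; hard cap on server connections per database (available in newer pgbouncer)
max_db_connections = 2
```

Each client connection and each server connection consumes a file descriptor, so the process fd limit needs to cover max_client_conn plus the sum of the pool sizes, not their product.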
] |
[
{
"msg_contents": "We've run into a perplexing issue with a customer database. He moved\nfrom a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB\nRAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is\nnow regularly getting constant errors regarding running out of shared\nmemory (there were none on the old system in the recent couple of\ndays' logs from before the upgrade):\n\nERROR: out of shared memory\nHINT: You might need to increase max_pred_locks_per_transaction.\n\nThe query causing this has structurally identical plans on both systems:\n\nold: http://explain.depesz.com/s/Epzq\nnew: http://explain.depesz.com/s/WZo\n\nThe settings ( \"select name, setting from pg_settings where source <>\n'default' and name not like 'log%' and name not like 'ssl%' and name\nnot like 'syslog%'\" ) are almost identical\n(max_pred_locks_per_transaction itself is at the default):\n\n17c17\n< effective_cache_size | 1530000\n---\n> effective_cache_size | 337500\n38c38\n< shared_buffers | 424960\n---\n> shared_buffers | 93696\n\nThe kernels are both 2.6.32. The workload has not changed\nsignificantly. Could something in 9.1.6 be to blame here? Looking at\nthe changelog, this seems vanishingly unlikely. Any ideas?\n\n",
"msg_date": "Wed, 17 Oct 2012 01:26:31 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6"
},
{
"msg_contents": "> \n> We've run into a perplexing issue with a customer database. He moved\n> from a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB\n> RAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is\n> now regularly getting constant errors regarding running out of shared\n> memory (there were none on the old system in the recent couple of\n> days' logs from before the upgrade):\n> \n> ERROR: out of shared memory\n> HINT: You might need to increase max_pred_locks_per_transaction.\n> \n> The query causing this has structurally identical plans on both systems:\n> \n> old: http://explain.depesz.com/s/Epzq\n> new: http://explain.depesz.com/s/WZo\n> \n> The settings ( \"select name, setting from pg_settings where source <>\n> 'default' and name not like 'log%' and name not like 'ssl%' and name\n> not like 'syslog%'\" ) are almost identical\n> (max_pred_locks_per_transaction itself is at the default):\n> \n> 17c17\n> < effective_cache_size | 1530000\n> ---\n> > effective_cache_size | 337500\n> 38c38\n> < shared_buffers | 424960\n> ---\n> > shared_buffers | 93696\n> \n> The kernels are both 2.6.32. The workload has not changed\n> significantly. Could something in 9.1.6 be to blame here? Looking at\n> the changelog, this seems vanishingly unlikely. 
Any ideas?\n> \n\n\nWhat are the settings for:\n\nwork_mem\nmaintenance_work_mem\n\nHow many concurrent connections are there?\n\nHave you run explain analyze on the query that doesn't crash (i.e. the old \nbox) to get the exact execution plan?\n\nHas the DB been vacuum analyzed?\n\nCheers\n\n=============================================\n\nRomax Technology Limited\nRutherford House\nNottingham Science & Technology Park\nNottingham, \nNG7 2PZ\nEngland\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that \nis confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf \nof the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your \nsystem and contact the sender. Thank you for your cooperation.\n=================================================",
"msg_date": "Wed, 17 Oct 2012 09:53:18 +0100",
"msg_from": "Martin French <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6"
},
{
"msg_contents": "On Wed, Oct 17, 2012 at 1:53 AM, Martin French\n<[email protected]> wrote:\n\nThanks for your response.\n\n> What are the settings for:\n> work_mem\n 100MB\n\n> maintenance_work_mem\n 64MB\n\n> How many concurrent connections are there?\n~20\n\n> Have you ran explain analyze on the query that doesn't crash (i.e the old\n> box) to get the exact execution plan?\n\nI can try that in the morning, but I didn't think this was relevant. I\nknow cost estimates can be off, but can the plan actually change\nbetween a vanilla explain and an explain analyze?\n\n> Has the DB been vacuum analyzed?\n\nNot outside of autovacuum, no, but it's actually a former replica of\nthe first database (sorry I neglected to mention this earlier).\n\n",
"msg_date": "Wed, 17 Oct 2012 02:13:43 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6"
},
{
"msg_contents": "> On Wed, Oct 17, 2012 at 1:53 AM, Martin French\n> <[email protected]> wrote:\n> \n> Thanks for your response.\n> \n> > What are the settings for:\n> > work_mem\n> 100MB\nThis is a little higher than I would ordinarily set. I tend to cap at \nabout 64MB\n\n> \n> > maintenance_work_mem\n> 64MB\nIn contrast, this is a little low for me, but I guess that table size is a \nbig factor here.\n\n> \n> > How many concurrent connections are there?\n> ~20\n> \n> > Have you run explain analyze on the query that doesn't crash (i.e. the \nold\n> > box) to get the exact execution plan?\n> \n> I can try that in the morning, but I didn't think this was relevant. I\n> know cost estimates can be off, but can the plan actually change\n> between a vanilla explain and an explain analyze?\n> \nThe explain analyze gives a more detailed output.\n\n> \n> > Has the DB been vacuum analyzed?\n> \n> Not outside of autovacuum, no, but it's actually a former replica of\n> the first database (sorry I neglected to mention this earlier).\n> \n\nThis may be worthwhile. Even with autovacuum on, I still Vac Analyze \nmanually during quiet periods. Whether it's actually necessary or not, I \nfigure it's belt and braces.\n\nLooking at the explain, it'd suggest the tables aren't very large, so I \ncan't really see why there'd be a problem. Notwithstanding the fact that \nyou only have relatively small shared_buffers.\n\nAre there no other messages in the log files re: out of memory? There \nshould be a dump which will show you where the memory usage is occurring. \n\nOther than that, you may want to consider increasing the shared buffers \nand see if that has any effect. 
Alternately, you may want to increase \nmax_pred_locks_per_transaction beyond the default of 64, although this is \nnot a parameter I've had to adjust yet.\n\nCheers\n\n=============================================\n\nRomax Technology Limited\nRutherford House\nNottingham Science & Technology Park\nNottingham, \nNG7 2PZ\nEngland\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that \nis confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf \nof the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your \nsystem and contact the sender. Thank you for your cooperation.\n=================================================",
"msg_date": "Wed, 17 Oct 2012 10:28:55 +0100",
"msg_from": "Martin French <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6"
},
{
"msg_contents": "Maciek Sakrejda <[email protected]> writes:\n> We've run into a perplexing issue with a customer database. He moved\n> from a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB\n> RAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is\n> now regularly getting constant errors regarding running out of shared\n> memory (there were none on the old system in the recent couple of\n> days' logs from before the upgrade):\n\n> ERROR: out of shared memory\n> HINT: You might need to increase max_pred_locks_per_transaction.\n\nThis has nothing to do with work_mem nor maintenance_work_mem; rather,\nit means you're running out of space in the database-wide lock table.\nYou need to take the hint's advice.\n\n> The query causing this has structurally identical plans on both systems:\n\n> old: http://explain.depesz.com/s/Epzq\n> new: http://explain.depesz.com/s/WZo\n\nThe query in itself doesn't seem very exceptional. I wonder whether\nyou recently switched your application to use serializable mode? But\nanyway, a query's demand for predicate locks can depend on a lot of\nnot-very-visible factors, such as how many physical pages the tuples\nit accesses are spread across. I don't find it too hard to credit\nthat yesterday you were just under the limit and today you're just\nover even though \"nothing changed\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 17 Oct 2012 10:18:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6"
},
{
"msg_contents": "On Wed, Oct 17, 2012 at 7:18 AM, Tom Lane <[email protected]> wrote:\n>> ERROR: out of shared memory\n>> HINT: You might need to increase max_pred_locks_per_transaction.\n>\n> This has nothing to do with work_mem nor maintenance_work_mem; rather,\n> it means you're running out of space in the database-wide lock table.\n> You need to take the hint's advice.\n\nSure, just trying to understand why this happened in the first place.\n\n> The query in itself doesn't seem very exceptional. I wonder whether\n> you recently switched your application to use serializable mode?\n\nThe change (for some transactions) was relatively recent, but predated\nthe switch to the replica by several days. Before the switch,\neverything was running fine.\n\n> But\n> anyway, a query's demand for predicate locks can depend on a lot of\n> not-very-visible factors, such as how many physical pages the tuples\n> it accesses are spread across. I don't find it too hard to credit\n> that yesterday you were just under the limit and today you're just\n> over even though \"nothing changed\".\n\nInteresting, thanks for the input. So it could be just a coincidence\nthat the errors occurred in lock-step with the promotion? Or does a\nreplica have a different (or different enough) physical layout that\nthis could have been a factor (my understanding of replication is\nrelatively high level--read: vague)?\n\n",
"msg_date": "Wed, 17 Oct 2012 08:09:11 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6"
}
] |
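Tom's hint in the thread above has concrete sizing arithmetic behind it: per the PostgreSQL documentation, the shared predicate-lock table can track roughly max_pred_locks_per_transaction * (max_connections + max_prepared_transactions) locked objects, so raising the parameter grows the table linearly. A minimal sketch of that formula (the parameter values below are illustrative defaults, not the poster's actual configuration):

```python
def predicate_lock_capacity(max_pred_locks_per_transaction: int,
                            max_connections: int,
                            max_prepared_transactions: int = 0) -> int:
    """Approximate number of objects the shared predicate-lock table can
    track, per the sizing formula in the PostgreSQL docs."""
    return max_pred_locks_per_transaction * (
        max_connections + max_prepared_transactions)

# Default-ish settings: 64 predicate locks per transaction, 100 connections.
default_capacity = predicate_lock_capacity(64, 100)

# Doubling max_pred_locks_per_transaction doubles the table's capacity.
doubled_capacity = predicate_lock_capacity(128, 100)

print(default_capacity, doubled_capacity)  # 6400 12800
```

Actual predicate-lock usage can be watched in pg_locks (rows with mode 'SIReadLock'), which helps pick a value with some headroom before the error recurs.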
[
{
"msg_contents": "Hello\n\nI am working on a potentially large database table, let's call it \"observation\", that has a foreign key to table \"measurement\". Each measurement is associated with either none or around five observations. In this kind of situation, it is well known that the statistics on the foreign key column in observation table can get arbitrarily bad as the row count increases. Especially, the estimate of the number of distinct values in the foreign key column can be completely off.\n\nTo combat this issue I have set n_distinct=-0.2 on the foreign key column. With this, the query planner gets good estimates of row counts, but it would appear that this setting does not affect the cost estimate of an index scan. With this, I get odd cost estimates, as if fetching these approximately five rows using index scan would take hundreds of random disk reads. Due to this high cost estimate, when joining these two tables, PostgreSQL changes from using multiple small index scans to scanning the whole table a lot earlier than would be beneficial.\n\nSo, in more detail:\n\nI am using PostgreSQL 9.2.1 on Windows 7 SP 1 installed with EnterpriseDB one-click installer and with default settings.\nSELECT version();\nPostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n\nI have set up a simple testing database as follows (running these commands takes around an hour on my DB. 
It's slow, sorry, but I need lots of rows to show the issue):\n\nCREATE TABLE observation\n(\n id bigserial NOT NULL,\n measurement_id bigint NOT NULL,\n CONSTRAINT observation_pkey PRIMARY KEY (id)\n);\n\nCREATE INDEX observation_measurement_id_idx\n ON observation\n USING btree\n (measurement_id);\n\nINSERT INTO observation\n SELECT x as id,\n CASE WHEN (x - 1) % 15 < 3 THEN ((x - 1) / 15) * 4\n WHEN (x - 1) % 15 < 8 THEN ((x - 1) / 15) * 4 + 1\n ELSE ((x - 1) / 15) * 4 + 2\n END AS measurement_id\n FROM (SELECT generate_series(1, 100000000) AS x) AS series;\n \nANALYZE observation;\n\nHere the measurement_id stands for the foreign key to table \"measurement\". Actually having that table is not required to show the issue, though. Each number from range 1...26666666 (1e8 * 4 / 15) appears 0, 3, 5 or 7 times in measurement_id column.\n\nAfter this, the statistics on measurement_id column look something like this:\nDistinct Values 1.23203e+006\nMost Common Values {6895590, 8496970, 23294094, 75266, 128877, 150786, 175001, 192645, 216918, 262742, ...\nMost Common Frequencies\t{0.0001, 0.0001, 0.0001, 6.66667e-005, 6.66667e-005, 6.66667e-005, 6.66667e-005, 6.66667e-005, 6.66667e-005, 6.66667e-005, ...\nHistogram Bounds {14, 208364, 511400, 840725, 1091642, 1392585, 1713998, 1945482, 2204897, 2476654, ...\nCorrelation 1\n\nAs there are 20e6 distinct values, the 1.2e6 estimate is already somewhat off and will keep on going worse if more rows are added.\n\nLet's have a look on the query plan of fetching all observations of a single measurement:\nEXPLAIN (ANALYZE on, VERBOSE on, COSTS on, BUFFERS on, TIMING on )\nSELECT id, measurement_id\n FROM observation\n WHERE measurement_id = 200001;\n\nIndex Scan using observation_measurement_id_idx on public.observation (cost=0.00..119.36 rows=82 width=16) (actual time=0.060..0.062 rows=5 loops=1)\n Output: id, measurement_id\n Index Cond: (observation.measurement_id = 200001)\n Buffers: shared read=5\nTotal runtime: 0.081 
ms\n\nThe row estimate is off by a factor of 10, so let's help the planner and tell how many rows it is likely to find:\nALTER TABLE observation\n ALTER COLUMN measurement_id\n SET (n_distinct=-0.2);\nANALYZE observation;\n\nThis doesn't change the row statistics noticeably, except for changing the number of distinct values.\n\nEXPLAIN (ANALYZE on, VERBOSE on, COSTS on, BUFFERS on, TIMING on )\nSELECT id, measurement_id\n FROM observation\n WHERE measurement_id = 200001;\n\nIndex Scan using observation_measurement_id_idx on public.observation (cost=0.00..118.01 rows=5 width=16) (actual time=0.060..0.061 rows=5 loops=1)\n Output: id, measurement_id\n Index Cond: (observation.measurement_id = 200001)\n Buffers: shared read=5\nTotal runtime: 0.073 ms\n\nSo, the row count estimate is now good, but the cost estimate has not changed much. If I halve the random_page_cost (from 4 to 2), it nearly exactly halves the estimated cost, so it would appear this cost is almost completely from random page accesses, approximately 29 of them. In the original plan this made sense, as it expected to fetch 81 rows, but for five rows it would seem quite excessive. \n\nEnlarging the table from 100 million to 200 million rows almost doubles the estimated cost (from 118.01 to 227.69). Since the estimated and actual count of returned rows stay the same and the query is using a B tree, I'd expect the cost to rise only slightly.\n\nI have a distinct feeling that this is either a bug in the cost estimator or there's a quite valid reason which I have missed. Regardless, any insights you might have to this issue would be appreciated.\n\n-- \nNiko Kiirala\n\n\n",
"msg_date": "Wed, 17 Oct 2012 13:52:15 +0000",
"msg_from": "Niko Kiirala <[email protected]>",
"msg_from_op": true,
"msg_subject": "High cost estimates when n_distinct is set"
}
] |
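One detail worth spelling out from the report above: a negative n_distinct is interpreted as a fraction of the table's row count (this is the documented semantics of the per-column n_distinct setting), which is why -0.2 keeps the distinct-value estimate in step with table growth and produces the rows=5 estimate. A simplified sketch of that arithmetic (an illustration, not the planner's actual selectivity code):

```python
def effective_n_distinct(n_distinct: float, reltuples: float) -> float:
    """Positive values are taken literally; negative values mean a
    fraction of the total row count (as in pg_stats.n_distinct)."""
    return n_distinct if n_distinct >= 0 else -n_distinct * reltuples

reltuples = 100_000_000  # the 100-million-row test table

# With n_distinct = -0.2, the planner assumes 20M distinct keys...
ndistinct = effective_n_distinct(-0.2, reltuples)

# ...so an equality lookup is expected to match reltuples/ndistinct rows.
expected_rows = reltuples / ndistinct

print(int(ndistinct), int(expected_rows))  # 20000000 5
```

Doubling the table to 200M rows doubles the assumed distinct count too, so the per-key row estimate stays at 5 — which is exactly why the near-doubling of the index-scan *cost* estimate in the report looks suspicious.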
[
{
"msg_contents": "Hi all,\n\nI've created a test table containing 21 million random dates and\ntimes, but I get wildly different results when I introduce a\nfunctional index then ANALYSE again, even though it doesn't use the\nindex:\n\npostgres=# CREATE TABLE test (id serial, sampledate timestamp);\nCREATE TABLE\npostgres=# INSERT INTO test (sampledate) SELECT '1970-01-01\n00:00:00'::timestamp + (random()*1350561945 || ' seconds')::interval\nFROM generate_series(1,21000000);\nINSERT 0 21000000\npostgres=# VACUUM;\nVACUUM\npostgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\nFROM test GROUP BY extract(month FROM sampledate);\n QUERY PLAN\n----------------------------------------------------------------------\n HashAggregate (cost=481014.00..481016.50 rows=200 width=8)\n -> Seq Scan on test (cost=0.00..376014.00 rows=21000000 width=8)\n(2 rows)\n\npostgres=# ANALYSE;\nANALYZE\npostgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\nFROM test GROUP BY extract(month FROM sampledate);\n QUERY PLAN\n----------------------------------------------------------------------------\n GroupAggregate (cost=4078473.42..4498473.90 rows=21000024 width=8)\n -> Sort (cost=4078473.42..4130973.48 rows=21000024 width=8)\n Sort Key: (date_part('month'::text, sampledate))\n -> Seq Scan on test (cost=0.00..376014.30 rows=21000024 width=8)\n(4 rows)\n\npostgres=# CREATE INDEX idx_test_sampledate_month ON test\n(extract(month FROM sampledate));\nCREATE INDEX\npostgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\nFROM test GROUP BY extract(month FROM sampledate);\n QUERY PLAN\n----------------------------------------------------------------------------\n GroupAggregate (cost=4078470.03..4498470.03 rows=21000000 width=8)\n -> Sort (cost=4078470.03..4130970.03 rows=21000000 width=8)\n Sort Key: (date_part('month'::text, sampledate))\n -> Seq Scan on test (cost=0.00..376014.00 rows=21000000 width=8)\n(4 rows)\n\npostgres=# ANALYSE;\nANALYZE\npostgres=# 
EXPLAIN SELECT extract(month FROM sampledate), count(*)\nFROM test GROUP BY extract(month FROM sampledate);\n QUERY PLAN\n----------------------------------------------------------------------\n HashAggregate (cost=481012.85..481013.00 rows=12 width=8)\n -> Seq Scan on test (cost=0.00..376013.17 rows=20999934 width=8)\n(2 rows)\n\n\nThe estimate is down to almost a 10th of what it was before. What's going on?\n\nAnd as a side note, how come it's impossible to get the planner to use\nan index-only scan to satisfy the query (disabling sequential and\nregular index scans)?\n\n-- \nThom\n\n",
"msg_date": "Thu, 18 Oct 2012 17:11:51 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 17:11, Thom Brown <[email protected]> wrote:\n> Hi all,\n>\n> I've created a test table containing 21 million random dates and\n> times, but I get wildly different results when I introduce a\n> functional index then ANALYSE again, even though it doesn't use the\n> index:\n>\n> postgres=# CREATE TABLE test (id serial, sampledate timestamp);\n> CREATE TABLE\n> postgres=# INSERT INTO test (sampledate) SELECT '1970-01-01\n> 00:00:00'::timestamp + (random()*1350561945 || ' seconds')::interval\n> FROM generate_series(1,21000000);\n> INSERT 0 21000000\n> postgres=# VACUUM;\n> VACUUM\n> postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\n> FROM test GROUP BY extract(month FROM sampledate);\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> HashAggregate (cost=481014.00..481016.50 rows=200 width=8)\n> -> Seq Scan on test (cost=0.00..376014.00 rows=21000000 width=8)\n> (2 rows)\n>\n> postgres=# ANALYSE;\n> ANALYZE\n> postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\n> FROM test GROUP BY extract(month FROM sampledate);\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> GroupAggregate (cost=4078473.42..4498473.90 rows=21000024 width=8)\n> -> Sort (cost=4078473.42..4130973.48 rows=21000024 width=8)\n> Sort Key: (date_part('month'::text, sampledate))\n> -> Seq Scan on test (cost=0.00..376014.30 rows=21000024 width=8)\n> (4 rows)\n>\n> postgres=# CREATE INDEX idx_test_sampledate_month ON test\n> (extract(month FROM sampledate));\n> CREATE INDEX\n> postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\n> FROM test GROUP BY extract(month FROM sampledate);\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> GroupAggregate (cost=4078470.03..4498470.03 rows=21000000 width=8)\n> -> Sort (cost=4078470.03..4130970.03 rows=21000000 width=8)\n> Sort Key: (date_part('month'::text, 
sampledate))\n> -> Seq Scan on test (cost=0.00..376014.00 rows=21000000 width=8)\n> (4 rows)\n>\n> postgres=# ANALYSE;\n> ANALYZE\n> postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)\n> FROM test GROUP BY extract(month FROM sampledate);\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> HashAggregate (cost=481012.85..481013.00 rows=12 width=8)\n> -> Seq Scan on test (cost=0.00..376013.17 rows=20999934 width=8)\n> (2 rows)\n>\n>\n> The estimate is down to almost a 10th of what it was before. What's going on?\n>\n> And as a side note, how come it's impossible to get the planner to use\n> an index-only scan to satisfy the query (disabling sequential and\n> regular index scans)?\n\nI should perhaps mention this is on 9.3devel as of today.\n\n-- \nThom\n\n",
"msg_date": "Thu, 18 Oct 2012 17:13:53 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 17:11, Thom Brown <[email protected]> wrote:\n> The estimate is down to almost a 10th of what it was before. What's going on?\n\nEven though the index isn't used, the pg_statistic entries that the\nexpression index would have made available are. It's as if you\nmaterialised the expression into a column, analyzed and grouped by\nthat.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n",
"msg_date": "Thu, 18 Oct 2012 17:24:42 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "Thom Brown <[email protected]> writes:\n> I've created a test table containing 21 million random dates and\n> times, but I get wildly different results when I introduce a\n> functional index then ANALYSE again, even though it doesn't use the\n> index:\n\nAs Peter said, the existence of the index causes ANALYZE to gather stats\nabout the expression, which will affect rowcount estimates whether or\nnot the planner chooses to use the index.\n\n> And as a side note, how come it's impossible to get the planner to use\n> an index-only scan to satisfy the query (disabling sequential and\n> regular index scans)?\n\nImplementation restriction - we don't yet have a way to match index-only\nscans to expressions.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 18 Oct 2012 12:44:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 17:24, Peter Geoghegan <[email protected]> wrote:\n> On 18 October 2012 17:11, Thom Brown <[email protected]> wrote:\n>> The estimate is down to almost a 10th of what it was before. What's going on?\n>\n> Even though the index isn't used, the pg_statistic entries that the\n> expression index would have made available are. It's as if you\n> materialised the expression into a column, analyzed and grouped by\n> that.\n\nD'oh, of course! Thanks Peter.\n\n-- \nThom\n\n",
"msg_date": "Thu, 18 Oct 2012 17:46:12 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 17:44, Tom Lane <[email protected]> wrote:\n> Thom Brown <[email protected]> writes:\n>> And as a side note, how come it's impossible to get the planner to use\n>> an index-only scan to satisfy the query (disabling sequential and\n>> regular index scans)?\n>\n> Implementation restriction - we don't yet have a way to match index-only\n> scans to expressions.\n\nAh, I suspected it might be, but couldn't find notes on what scenarios\nit's yet to be able to work in. Thanks.\n\n-- \nThom\n\n",
"msg_date": "Thu, 18 Oct 2012 17:47:42 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "Thom Brown <[email protected]> writes:\n> On 18 October 2012 17:44, Tom Lane <[email protected]> wrote:\n>> Thom Brown <[email protected]> writes:\n>>> And as a side note, how come it's impossible to get the planner to use\n>>> an index-only scan to satisfy the query (disabling sequential and\n>>> regular index scans)?\n\n>> Implementation restriction - we don't yet have a way to match index-only\n>> scans to expressions.\n\n> Ah, I suspected it might be, but couldn't find notes on what scenarios\n> it's yet to be able to work in. Thanks.\n\nI forgot to mention that there is a klugy workaround: add the required\nvariable(s) as extra index columns. That is,\n\n\tcreate index i on t (foo(x), x);\n\nThe planner isn't terribly bright about this, but it will use that index\nfor a query that only requires foo(x), and it won't re-evaluate foo()\n(though I think it will cost the plan on the assumption it does :-().\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 18 Oct 2012 12:52:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 17:52, Tom Lane <[email protected]> wrote:\n> I forgot to mention that there is a klugy workaround: add the required\n> variable(s) as extra index columns. That is,\n>\n> create index i on t (foo(x), x);\n\nIs there a case to be made for an index access method whose\npseudo-indexes cost essentially nothing to maintain, and simply\nrepresent an ongoing obligation for ANALYZE to provide statistics for\nan expression?\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n",
"msg_date": "Thu, 18 Oct 2012 18:00:43 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 17:52, Tom Lane <[email protected]> wrote:\n> Thom Brown <[email protected]> writes:\n>> On 18 October 2012 17:44, Tom Lane <[email protected]> wrote:\n>>> Thom Brown <[email protected]> writes:\n>>>> And as a side note, how come it's impossible to get the planner to use\n>>>> an index-only scan to satisfy the query (disabling sequential and\n>>>> regular index scans)?\n>\n>>> Implementation restriction - we don't yet have a way to match index-only\n>>> scans to expressions.\n>\n>> Ah, I suspected it might be, but couldn't find notes on what scenarios\n>> it's yet to be able to work in. Thanks.\n>\n> I forgot to mention that there is a klugy workaround: add the required\n> variable(s) as extra index columns. That is,\n>\n> create index i on t (foo(x), x);\n>\n> The planner isn't terribly bright about this, but it will use that index\n> for a query that only requires foo(x), and it won't re-evaluate foo()\n> (though I think it will cost the plan on the assumption it does :-().\n\nAh, yes, I've tested this and got it using an index-only scan, and it\nwas faster than the sequential scan (index only scan 5024.545 ms\nvs seq scan 6627.072 ms).\n\nSo this is probably a dumb question, but is it possible to achieve the\noptimisation provided by index statistics but without the index, and\nwithout a messy workaround using a supplementary column which stores\nfunction-derived values? If not, is that something which can be\nintroduced?\n-- \nThom\n\n",
"msg_date": "Thu, 18 Oct 2012 18:01:05 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "On 18 October 2012 18:00, Peter Geoghegan <[email protected]> wrote:\n> On 18 October 2012 17:52, Tom Lane <[email protected]> wrote:\n>> I forgot to mention that there is a klugy workaround: add the required\n>> variable(s) as extra index columns. That is,\n>>\n>> create index i on t (foo(x), x);\n>\n> Is there a case to be made for a index access method whose\n> pseudo-indexes costs essentially nothing to maintain, and simply\n> represent an ongoing obligation for ANALYZE to provide statistics for\n> an expression?\n\nHeh, that's pretty much the question I posted just a few seconds ago.\n-- \nThom\n\n",
"msg_date": "Thu, 18 Oct 2012 18:01:51 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unused index influencing sequential scan plan"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> Is there a case to be made for a index access method whose\n> pseudo-indexes costs essentially nothing to maintain, and simply\n> represent an ongoing obligation for ANALYZE to provide statistics for\n> an expression?\n\nIf we were going to support it, I think we'd be better off exposing such\na feature as DDL having nothing to do with indexes. Not sure it's worth\nthe trouble though. The ANALYZE wart to compute stats for index\nexpressions has been there a long time, and there's been essentially\nzero field demand for another way to do it. What people really seem to\ncare about is more intelligence about making use of expression indexes\nto avoid recalculation of the expression --- something you'd not get\nfrom a stats-only feature.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 18 Oct 2012 13:06:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unused index influencing sequential scan plan"
}
] |
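As a sanity check on the rows=12 estimate in the thread above: extract(month FROM ...) over ~42 years of uniformly random timestamps can only take 12 distinct values, which is exactly what the expression statistics captured once the index existed. A quick simulation of the same expression outside SQL (Python stands in for the generate_series INSERT, with a much smaller sample):

```python
import random
from datetime import datetime, timedelta

random.seed(42)
epoch = datetime(1970, 1, 1)

# Mimic the INSERT: '1970-01-01'::timestamp + random()*1350561945 seconds.
months = {(epoch + timedelta(seconds=random.random() * 1350561945)).month
          for _ in range(100_000)}

# extract(month FROM sampledate) yields only 12 distinct groups,
# matching the post-ANALYZE group-count estimate of rows=12.
print(len(months))  # 12
```

Without those expression statistics the planner had nothing better than the raw row count (21M groups) or the hard-coded default (200 groups), which explains the wildly different pre-index plans.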
[
{
"msg_contents": "I have replication set up on servers with 9.1 and want to upgrade to 9.2\nI was hoping I could just bring them both down, upgrade them both and bring\nthem both up and continue replication, but that doesn't seem to work, the\nreplication server won't come up.\nIs there any way to do this upgrade without taking a new base backup and\nrebuilding the replication drive?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-upgrade-from-9-1-to-9-2-with-replication-tp5728941.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Thu, 18 Oct 2012 15:21:22 -0700 (PDT)",
"msg_from": "delongboy <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On 10/18/2012 5:21 PM, delongboy wrote:\n> I have replication set up on servers with 9.1 and want to upgrade to 9.2\n> I was hoping I could just bring them both down, upgrade them both and bring\n> them both up and continue replication, but that doesn't seem to work, the\n> replication server won't come up.\n> Is there anyway to do this upgrade with out taking a new base backup and\n> rebuilding the replication drive?\nNot that I know of.\n\nI tried this as well when the development branches were out in a\n\"sandbox\" and it failed as it did for you.\n\nFor 9.1 -> 9.2 what I did was bring down the cluster, upgrade the\nmaster, then initdb the slave and run the script that brings over a new\nbasebackup with the WAL archives (\"-x\" switch), and when complete just\nstarted the slave back up in slave mode.\n\nThis unfortunately does require a new data copy to be pulled across to\nthe slave. For the local copies this isn't so bad as wire speed is fast\nenough to make it reasonable; for the actual backup units at a remove it\ntakes a while as the copy has to go across a WAN link. I cheat on that\nby using a SSH tunnel with compression turned on (which, incidentally,\nit would be really nice if Postgres supported internally, and it could\nquite easily -- I've considered working up a patch set for this and\nsubmitting it.)\n\nFor really BIG databases (as opposed to moderately-big) this could be a\nmuch-more material problem than it is for me.\n\n--\n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC",
"msg_date": "Fri, 19 Oct 2012 09:44:25 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On 10/19/2012 09:44 AM, Karl Denninger wrote:\n\n> For really BIG databases (as opposed to moderately-big) this could be a\n> much-more material problem than it is for me.\n\nWhich reminds me. I really wish pg_basebackup let you specify an \nalternative compression handler. We've been using pigz on our systems \nbecause our database is so large. It cuts backup time drastically, from \nabout 2.5 hours to 28 minutes.\n\nUntil a CPU can compress at the same speed it can read data from disk \ndevices, that's going to continue to be a problem. Parallel compression \nis great.\n\nSo even after our recent upgrade, we've kept using our home-grown backup \nsystem. :(\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 19 Oct 2012 09:51:17 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
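Shaun's home-grown approach above is easy to picture: a filesystem-level base backup piped through pigz, so compression uses all cores instead of the single gzip thread that can't keep up with disk read speed. A sketch along those lines (paths, backup label, and core count are hypothetical, and this omits the WAL-archiving side a real base backup needs — an outline, not a runnable script):

```shell
# Put the cluster into backup mode (second argument true = fast checkpoint).
psql -c "SELECT pg_start_backup('nightly', true);"

# Stream the data directory through parallel gzip on 8 cores.
tar -C /pgdata/9.2 -cf - . | pigz -p 8 > /backups/base-$(date +%F).tar.gz

# Leave backup mode.
psql -c "SELECT pg_stop_backup();"
```

The 2.5 hours → 28 minutes figure in the message is what you would expect once compression stops being the bottleneck and the disks are again the limiting factor.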
{
"msg_contents": "On Fri, Oct 19, 2012 at 11:44 AM, Karl Denninger <[email protected]> wrote:\n> On 10/18/2012 5:21 PM, delongboy wrote:\n>\n> I have replication set up on servers with 9.1 and want to upgrade to 9.2\n> I was hoping I could just bring them both down, upgrade them both and bring\n> them both up and continue replication, but that doesn't seem to work, the\n> replication server won't come up.\n> Is there anyway to do this upgrade with out taking a new base backup and\n> rebuilding the replication drive?\n>\n> Not that I know of.\n>\n> I tried this as well when the development branches were out in a \"sandbox\"\n> and it failed as it did for you.\n>\n> For 9.1 -> 9.2 what I did was bring down the cluster, upgrade the master,\n> then initdb the slave and run the script that brings over a new basebackup\n> with the WAL archives (\"-x\" switch), and when complete just started the\n> slave back up in slave mode.\n>\n> This unfortunately does require a new data copy to be pulled across to the\n> slave. For the local copies this isn't so bad as wire speed is fast enough\n> to make it reasonable; for the actual backup units at a remove it takes a\n> while as the copy has to go across a WAN link. I cheat on that by using a\n> SSH tunnel with compression turned on (which, incidentally, it would be\n> really nice if Postgres supported internally, and it could quite easily --\n> I've considered working up a patch set for this and submitting it.)\n>\n> For really BIG databases (as opposed to moderately-big) this could be a\n> much-more material problem than it is for me.\n\nDid you try?\n\nBring both down.\npg_upgrade master\nBring master up\npg_upgrade slave\nrsync master->slave (differential update, much faster than basebackup)\nBring slave up\n\n",
"msg_date": "Fri, 19 Oct 2012 12:02:49 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
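Claudio's recipe, written out as a shell sketch (binary and data-directory paths are hypothetical, and whether the rsync step is safe against the upgraded master is exactly what the follow-up messages question — treat this as the outline of the idea, not a runnable script):

```shell
# Both clusters are stopped at this point.

# 1. Upgrade the master in place (old vs. new bindir and datadir).
pg_upgrade -b /usr/pgsql-9.1/bin -B /usr/pgsql-9.2/bin \
           -d /pgdata/9.1 -D /pgdata/9.2

# 2. Bring the master up; pg_upgrade the slave's data directory the same way.
pg_ctl -D /pgdata/9.2 start

# 3. Differential copy master -> slave: only changed files move, which is
#    what makes this faster than a fresh base backup over a WAN link.
rsync -a --delete \
      --exclude postgresql.conf --exclude pg_hba.conf --exclude postmaster.pid \
      /pgdata/9.2/ slave:/pgdata/9.2/

# 4. On the slave, restore recovery.conf / streaming settings and start it
#    back up in slave mode.
```

The appeal is that rsync's delta transfer skips unchanged relation files; the open question, raised immediately below, is whether master and slave data directories are byte-identical enough for that to be trustworthy.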
{
"msg_contents": "On 10/19/2012 10:02 AM, Claudio Freire wrote:\n> On Fri, Oct 19, 2012 at 11:44 AM, Karl Denninger <[email protected]> wrote:\n>> On 10/18/2012 5:21 PM, delongboy wrote:\n>>\n>> I have replication set up on servers with 9.1 and want to upgrade to 9.2\n>> I was hoping I could just bring them both down, upgrade them both and bring\n>> them both up and continue replication, but that doesn't seem to work, the\n>> replication server won't come up.\n>> Is there anyway to do this upgrade with out taking a new base backup and\n>> rebuilding the replication drive?\n>>\n>> Not that I know of.\n>>\n>> I tried this as well when the development branches were out in a \"sandbox\"\n>> and it failed as it did for you.\n>>\n>> For 9.1 -> 9.2 what I did was bring down the cluster, upgrade the master,\n>> then initdb the slave and run the script that brings over a new basebackup\n>> with the WAL archives (\"-x\" switch), and when complete just started the\n>> slave back up in slave mode.\n>>\n>> This unfortunately does require a new data copy to be pulled across to the\n>> slave. For the local copies this isn't so bad as wire speed is fast enough\n>> to make it reasonable; for the actual backup units at a remove it takes a\n>> while as the copy has to go across a WAN link. 
I cheat on that by using a\n>> SSH tunnel with compression turned on (which, incidentally, it would be\n>> really nice if Postgres supported internally, and it could quite easily --\n>> I've considered working up a patch set for this and submitting it.)\n>>\n>> For really BIG databases (as opposed to moderately-big) this could be a\n>> much-more material problem than it is for me.\n> Did you try?\n>\n> Bring both down.\n> pg_upgrade master\n> Bring master up\n> pg_upgrade slave\n> rsync master->slave (differential update, much faster than basebackup)\n> Bring slave up\nThat's an interesting idea that might work; are replicated servers in a\nconsistent state guaranteed to have byte-identical filespaces? (other\nthan the config file(s), of course) I have not checked that assumption.\n\nSurprises in that regard could manifest in very unfortunate results that\nonly become apparent a significant distance down the road.\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC",
"msg_date": "Fri, 19 Oct 2012 10:49:25 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On 10/19/2012 10:49 AM, Karl Denninger wrote:\n\n> That's an interesting idea that might work; are replicated servers in a\n> consistent state guaranteed to have byte-identical filespaces? (other\n> than the config file(s), of course) I have not checked that assumption.\n\nWell, if they didn't before, they will after the rsync is finished. \nUpdate the config and start as a slave, and it's the same as a basebackup.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 19 Oct 2012 11:03:26 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "I brought down the master then the slave and upgraded both. Then I did the\nrsync and brought both up.. This worked. However with the database being\nvery large it took quite a while. It seemed rsync had to make a lot of\nchanges.. this surprised me. I thought they would be almost identical.\nBut in the end it did work. just took longer than I had hoped. \nWe will soon be tripling the size of our database as we move oracle data\nin.. so this process may not be so feasible next time.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-upgrade-from-9-1-to-9-2-with-replication-tp5728941p5729618.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Thu, 25 Oct 2012 07:12:04 -0700 (PDT)",
"msg_from": "delongboy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On 10/25/2012 9:12 AM, delongboy wrote:\n> I brought down the master then the slave and upgraded both. Then I did the\n> rsync and brought both up.. This worked. However with the database being\n> very large it took quite a while. It seemed rsync had to make a lot of\n> changes.. this surprised me. I thought they would be almost identical.\n> But in the end it did work. just took longer than I had hoped. \n> We will soon be tripling the size of our database as we move oracle data\n> in.. so this process may not be so feasible next time.\nWhat I have done successfully is this.\n\n1. Set up a SECOND instance of the slave with the NEW software version,\nbut do not populate it.\n\n2. Turn off the original slave.\n\n3. Upgrade the master. This is your \"hard\" downtime you cannot avoid. \nRestart the master on the new version and resume operations.\n\n3. At this point the slave cannot connect as it has a version mismatch,\nso do NOT restart it.\n\n4. pg_start_backup('Upgrading') and rsync the master to the NEW slave\ndirectory ex config files (postgresql.conf, recovery.conf and\npg_hba.conf, plus the SSL keys if you're using it). Do NOT rsync\npg_xlog's contents or the WAL archive logs from the master. Then\npg_stop_backup(). Copy in the config files from your slave repository\n(very important as you must NOT start the slave server without the\ncorrect slave config or it will immediately destroy the context that\nallows it come up as a slave and you get to start over with #4.)\n\n5. Bring up the NEW slave instance. It will immediately connect back to\nthe new master and catch up. This will not take very long as the only\ndata it needs to fetch is that which changed during #4 above.\n\nIf you have multiple slaves you can do multiple rsync's (in parallel if\nyou wish) to them between the pg_start_backup and pg_stop_backup\ncalls. The only \"gotcha\" doing it this way is that you must be keeping\nenough WAL records on the master to cover the time between the\npg_start_backup call and when you bring the slaves back up in\nreplication mode so they're able to retrieve the WAL data and come back\ninto sync. If you come up short the restart will fail.\n\nWhen the slaves restart they will come into consistency almost\nimmediately but will be materially behind until the replication protocol\ncatches up.\n\nBTW this is /*much*/ faster than using pg_basebackup (by a factor of\nfour or more at my installation!) -- it appears that the latter does not\neffectively use compression of the data stream even if your SSL config\nis in use and would normally use it; rsync used with the \"z\" option does\nuse it and very effectively so.\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC",
"msg_date": "Sun, 28 Oct 2012 10:15:45 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Sun, Oct 28, 2012 at 12:15 PM, Karl Denninger <[email protected]> wrote:\n> 4. pg_start_backup('Upgrading') and rsync the master to the NEW slave\n> directory ex config files (postgresql.conf, recovery.conf and pg_hba.conf,\n> plus the SSL keys if you're using it). Do NOT rsync pg_xlog's contents or\n> the WAL archive logs from the master. Then pg_stop_backup(). Copy in the\n> config files from your slave repository (very important as you must NOT\n> start the slave server without the correct slave config or it will\n> immediately destroy the context that allows it come up as a slave and you\n> get to start over with #4.)\n>\n> 5. Bring up the NEW slave instance. It will immediately connect back to the\n> new master and catch up. This will not take very long as the only data it\n> needs to fetch is that which changed during #4 above.\n>\n> If you have multiple slaves you can do multiple rsync's (in parallel if you\n> wish) to them between the pg_start_backup and pg_stop_backup calls. The\n> only \"gotcha\" doing it this way is that you must be keeping enough WAL\n> records on the master to cover the time between the pg_start_backup call and\n> when you bring the slaves back up in replication mode so they're able to\n> retrieve the WAL data and come back into sync. If you come up short the\n> restart will fail.\n>\n> When the slaves restart they will come into consistency almost immediately\n> but will be materially behind until the replication protocol catches up.\n\nThat's why I perform two rsyncs, one without pg_start_backup, and one\nwith. Without, you get no guarantees, but it helps rsync be faster\nnext time. 
So you cut down on the amount of changes that second rsync\nwill have to transfer, you may even skip whole segments, if your\nupdate patterns aren't too random.\n\nI still have a considerable amount of time between the start_backup\nand end_backup, but I have minimal downtimes and it never failed.\n\nJust for the record, we do this quite frequently in our pre-production\nservers, since the network there is a lot slower and replication falls\nirreparably out of sync quite often. And nobody notices when we\nre-sync the slave. (ie: downtime at the master is nonexistent).\n\n",
"msg_date": "Sun, 28 Oct 2012 20:40:02 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Sun, Oct 28, 2012 at 9:40 PM, Claudio Freire <[email protected]>wrote:\n\n> On Sun, Oct 28, 2012 at 12:15 PM, Karl Denninger <[email protected]>\n> wrote:\n> > 4. pg_start_backup('Upgrading') and rsync the master to the NEW slave\n> > directory ex config files (postgresql.conf, recovery.conf and\n> pg_hba.conf,\n> > plus the SSL keys if you're using it). Do NOT rsync pg_xlog's contents\n> or\n> > the WAL archive logs from the master. Then pg_stop_backup(). Copy in\n> the\n> > config files from your slave repository (very important as you must NOT\n> > start the slave server without the correct slave config or it will\n> > immediately destroy the context that allows it come up as a slave and you\n> > get to start over with #4.)\n> >\n> > 5. Bring up the NEW slave instance. It will immediately connect back to\n> the\n> > new master and catch up. This will not take very long as the only data\n> it\n> > needs to fetch is that which changed during #4 above.\n> >\n> > If you have multiple slaves you can do multiple rsync's (in parallel if\n> you\n> > wish) to them between the pg_start_backup and pg_stop_backup calls. The\n> > only \"gotcha\" doing it this way is that you must be keeping enough WAL\n> > records on the master to cover the time between the pg_start_backup call\n> and\n> > when you bring the slaves back up in replication mode so they're able to\n> > retrieve the WAL data and come back into sync. If you come up short the\n> > restart will fail.\n> >\n> > When the slaves restart they will come into consistency almost\n> immediately\n> > but will be materially behind until the replication protocol catches up.\n>\n> That's why I perform two rsyncs, one without pg_start_backup, and one\n> with. Without, you get no guarantees, but it helps rsync be faster\n> next time. So you cut down on the amount of changes that second rsync\n> will have to transfer, you may even skip whole segments, if your\n> update patterns aren't too random.\n>\n> I still have a considerable amount of time between the start_backup\n> and end_backup, but I have minimal downtimes and it never failed.\n>\n\nI also think that's a good option for most case, but not because it is\nfaster, in fact if you count the whole process, it is slower. But the\nmaster will be on backup state (between pg_start_backup and pg_stop_backup)\nfor a small period of time which make things go faster on the master\n(nothing different on slave though).\n\n\n> Just for the record, we do this quite frequently in our pre-production\n> servers, since the network there is a lot slower and replication falls\n> irreparably out of sync quite often. And nobody notices when we\n> re-sync the slave. (ie: downtime at the master is nonexistent).\n>\n>\nIf you have incremental backup, a restore_command on recovery.conf seems\nbetter than running rsync again when the slave get out of sync. Doesn't it?\n\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados PostgreSQL\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Mon, 29 Oct 2012 08:41:46 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 7:41 AM, Matheus de Oliveira\n<[email protected]> wrote:\n> I also think that's a good option for most case, but not because it is\n> faster, in fact if you count the whole process, it is slower. But the master\n> will be on backup state (between pg_start_backup and pg_stop_backup) for a\n> small period of time which make things go faster on the master (nothing\n> different on slave though).\n\nExactly the point.\n\n>>\n>> Just for the record, we do this quite frequently in our pre-production\n>> servers, since the network there is a lot slower and replication falls\n>> irreparably out of sync quite often. And nobody notices when we\n>> re-sync the slave. (ie: downtime at the master is nonexistent).\n>>\n>\n> If you have incremental backup, a restore_command on recovery.conf seems\n> better than running rsync again when the slave get out of sync. Doesn't it?\n\nWhat do you mean?\n\nUsually, when it falls out of sync like that, it's because the\ndatabase is undergoing structural changes, and the link between master\nand slave (both streaming and WAL shipping) isn't strong enough to\nhandle the massive rewrites. A backup is of no use there either. We\ncould make the rsync part of a recovery command, but we don't want to\nbe left out of the loop so we prefer to do it manually. As noted, it\nalways happens when someone's doing structural changes so it's not\nentirely unexpected.\n\nOr am I missing some point?\n\n",
"msg_date": "Mon, 29 Oct 2012 08:53:06 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 9:53 AM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Oct 29, 2012 at 7:41 AM, Matheus de Oliveira\n> <[email protected]> wrote:\n>\n> >>\n> >> Just for the record, we do this quite frequently in our pre-production\n> >> servers, since the network there is a lot slower and replication falls\n> >> irreparably out of sync quite often. And nobody notices when we\n> >> re-sync the slave. (ie: downtime at the master is nonexistent).\n> >>\n> >\n> > If you have incremental backup, a restore_command on recovery.conf seems\n> > better than running rsync again when the slave get out of sync. Doesn't\n> it?\n>\n> What do you mean?\n>\n> Usually, when it falls out of sync like that, it's because the\n> database is undergoing structural changes, and the link between master\n> and slave (both streaming and WAL shipping) isn't strong enough to\n> handle the massive rewrites. A backup is of no use there either. We\n> could make the rsync part of a recovery command, but we don't want to\n> be left out of the loop so we prefer to do it manually. As noted, it\n> always happens when someone's doing structural changes so it's not\n> entirely unexpected.\n>\n> Or am I missing some point?\n>\n\nWhat I meant is that *if* you save you log segments somewhere (with\narchive_command), you can always use the restore_command on the slave side\nto catch-up with the master, even if streaming replication failed and you\ngot out of sync. Of course if you structural changes is *really big*,\nperhaps recovering from WAL archives could even be slower than rsync (I\nreally think it's hard to happen though).\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados PostgreSQL\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Mon, 29 Oct 2012 10:09:51 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 9:09 AM, Matheus de Oliveira\n<[email protected]> wrote:\n>> > If you have incremental backup, a restore_command on recovery.conf seems\n>> > better than running rsync again when the slave get out of sync. Doesn't\n>> > it?\n>>\n>> What do you mean?\n>>\n>> Usually, when it falls out of sync like that, it's because the\n>> database is undergoing structural changes, and the link between master\n>> and slave (both streaming and WAL shipping) isn't strong enough to\n>> handle the massive rewrites. A backup is of no use there either. We\n>> could make the rsync part of a recovery command, but we don't want to\n>> be left out of the loop so we prefer to do it manually. As noted, it\n>> always happens when someone's doing structural changes so it's not\n>> entirely unexpected.\n>>\n>> Or am I missing some point?\n>\n>\n> What I meant is that *if* you save you log segments somewhere (with\n> archive_command), you can always use the restore_command on the slave side\n> to catch-up with the master, even if streaming replication failed and you\n> got out of sync. Of course if you structural changes is *really big*,\n> perhaps recovering from WAL archives could even be slower than rsync (I\n> really think it's hard to happen though).\n\nI imagine it's automatic. We have WAL shipping in place, but even that\ngets out of sync (more segments generated than our quota on the\narchive allows - we can't really keep more since we lack the space on\nthe server we put them).\n\n",
"msg_date": "Mon, 29 Oct 2012 09:23:37 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 10:23 AM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Oct 29, 2012 at 9:09 AM, Matheus de Oliveira\n> <[email protected]> wrote:\n> >> > If you have incremental backup, a restore_command on recovery.conf\n> seems\n> >> > better than running rsync again when the slave get out of sync.\n> Doesn't\n> >> > it?\n> >>\n> >> What do you mean?\n> >>\n> >> Usually, when it falls out of sync like that, it's because the\n> >> database is undergoing structural changes, and the link between master\n> >> and slave (both streaming and WAL shipping) isn't strong enough to\n> >> handle the massive rewrites. A backup is of no use there either. We\n> >> could make the rsync part of a recovery command, but we don't want to\n> >> be left out of the loop so we prefer to do it manually. As noted, it\n> >> always happens when someone's doing structural changes so it's not\n> >> entirely unexpected.\n> >>\n> >> Or am I missing some point?\n> >\n> >\n> > What I meant is that *if* you save you log segments somewhere (with\n> > archive_command), you can always use the restore_command on the slave\n> side\n> > to catch-up with the master, even if streaming replication failed and you\n> > got out of sync. Of course if you structural changes is *really big*,\n> > perhaps recovering from WAL archives could even be slower than rsync (I\n> > really think it's hard to happen though).\n>\n> I imagine it's automatic.\n\n\nIf you don't set restore_command *and* get more segments than\nmax_wal_keep_segments, PostgreSQL will not read the archived segments (it\ndoes not even know where it is actually).\n\n\n> We have WAL shipping in place, but even that\n> gets out of sync (more segments generated than our quota on the\n> archive allows - we can't really keep more since we lack the space on\n> the server we put them).\n>\n\nYeah, in that case there is no way. If you cannot keep *all* segments\nduring your \"structural changes\" you will have to go with a rsync (or\nsomething similar).\nBut that's an option for you to know, *if* you have enough segments, than\nit is possible to restore from them. In some customers of mine (with little\ndisk space) I even don't set max_wal_keep_segments too high, and prefer to\n\"keep\" the segments with archive_command, but that's not the better\nscenario.\n\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados PostgreSQL\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Mon, 29 Oct 2012 10:30:26 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Fri, Oct 19, 2012 at 12:02:49PM -0300, Claudio Freire wrote:\n> > This unfortunately does require a new data copy to be pulled across to the\n> > slave. For the local copies this isn't so bad as wire speed is fast enough\n> > to make it reasonable; for the actual backup units at a remove it takes a\n> > while as the copy has to go across a WAN link. I cheat on that by using a\n> > SSH tunnel with compression turned on (which, incidentally, it would be\n> > really nice if Postgres supported internally, and it could quite easily --\n> > I've considered working up a patch set for this and submitting it.)\n> >\n> > For really BIG databases (as opposed to moderately-big) this could be a\n> > much-more material problem than it is for me.\n> \n> Did you try?\n> \n> Bring both down.\n> pg_upgrade master\n> Bring master up\n> pg_upgrade slave\n\nIs there any reason to upgrade the slave when you are going to do rsync\nanyway? Of course you need to install the new binaries and libs, but it\nseems running pg_upgrade on the standby is unnecessary.\n\n> rsync master->slave (differential update, much faster than basebackup)\n> Bring slave up\n\nGood ideas. I have applied the attached doc patch to pg_upgrade head\nand 9.2 docs to suggest using rsync as part of base backup.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +",
"msg_date": "Wed, 7 Nov 2012 13:36:33 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Wed, Nov 7, 2012 at 3:36 PM, Bruce Momjian <[email protected]> wrote:\n>> Bring both down.\n>> pg_upgrade master\n>> Bring master up\n>> pg_upgrade slave\n>\n> Is there any reason to upgrade the slave when you are going to do rsync\n> anyway? Of course you need to install the new binaries and libs, but it\n> seems running pg_upgrade on the standby is unnecessary.\n\nJust to speed up the rsync\n\n",
"msg_date": "Wed, 7 Nov 2012 15:44:13 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Wed, Nov 7, 2012 at 03:44:13PM -0300, Claudio Freire wrote:\n> On Wed, Nov 7, 2012 at 3:36 PM, Bruce Momjian <[email protected]> wrote:\n> >> Bring both down.\n> >> pg_upgrade master\n> >> Bring master up\n> >> pg_upgrade slave\n> >\n> > Is there any reason to upgrade the slave when you are going to do rsync\n> > anyway? Of course you need to install the new binaries and libs, but it\n> > seems running pg_upgrade on the standby is unnecessary.\n> \n> Just to speed up the rsync\n\npg_upgrade is mostly modifying the system tables --- not sure if that is\nfaster than just having rsync copy those. The file modification times\nwould be different after pg_upgrade, so rsync might copy the file anyway\nwhen you run pg_upgrade. It would be good for you to test if it really\nis a win --- I would be surprised if pg_upgrade was in this case on the\nstandby.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Wed, 7 Nov 2012 15:59:18 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Wed, Nov 7, 2012 at 5:59 PM, Bruce Momjian <[email protected]> wrote:\n>> > Is there any reason to upgrade the slave when you are going to do rsync\n>> > anyway? Of course you need to install the new binaries and libs, but it\n>> > seems running pg_upgrade on the standby is unnecessary.\n>>\n>> Just to speed up the rsync\n>\n> pg_upgrade is mostly modifying the system tables --- not sure if that is\n> faster than just having rsync copy those. The file modification times\n> would be different after pg_upgrade, so rsync might copy the file anyway\n> when you run pg_upgrade. It would be good for you to test if it really\n> is a win --- I would be surprised if pg_upgrade was in this case on the\n> standby.\n\nI guess it depends on the release (ie: whether a table rewrite is necessary).\n\nI'll check next time I upgrade a database, but I don't expect it to be\nanytime soon.\n\n",
"msg_date": "Wed, 7 Nov 2012 18:18:06 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a self-referencing table that defines a hierarchy of projects and sub-projects.\n\nThis is the table definition:\n\nCREATE TABLE project\n(\n project_id integer primary key,\n project_name text,\n pl_name text,\n parent_id integer\n);\n\nALTER TABLE project\n ADD CONSTRAINT project_parent_id_fkey FOREIGN KEY (parent_id)\n REFERENCES project (project_id)\n ON UPDATE NO ACTION\n ON DELETE NO ACTION;\n\n\nThe table contains ~11000 rows\n\nThe following statement:\n\nwith recursive project_tree as (\n select project_id,\n parent_id,\n pl_name as root_pl,\n pl_name as sub_pl,\n 1 as lvl\n from project\n where parent_id is null\n union all\n select c.project_id,\n c.parent_id,\n coalesce(p.root_pl, c.pl_name) as root_pl,\n coalesce(c.pl_name, p.sub_pl) as sub_pl,\n p.lvl + 1\n from project c\n join project_tree p on p.project_id = c.parent_id\n)\nselect count(*), max(lvl)\n from project_tree\n where root_pl <> sub_pl;\n\nusually runs in something like 60-80ms when the parent_id column is *not* indexed.\n\nThis is the execution plan without index: http://explain.depesz.com/s/ecCT\n\nWhen I create an index on parent_id execution time increases to something between 110ms and 130ms\n\nThis is the execution plan with index: http://explain.depesz.com/s/xiL\n\nAs far as I can tell, the choice for the nested loop is the reason for the (slightly) slower execution.\nI increased the statistics for the parent_id column to 10000 (and did an analyze of course) but that didn't change anything.\n\nI have no problem with that performance, so this is more a \"I'm curious on why this happens\" type of question.\n(And I thought you might be interested in this behaviour as well)\n\nMy environment:\n\n *Windows 7 Professional 64bit\n * PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n\n\nRegards\nThomas\n\n\n\n\n",
"msg_date": "Fri, 19 Oct 2012 12:47:08 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recursive query gets slower when adding an index"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> This is the execution plan without index: http://explain.depesz.com/s/ecCT\n> When I create an index on parent_id execution time increases to something between 110ms and 130ms\n> This is the execution plan with index: http://explain.depesz.com/s/xiL\n\nThe reason you get a bad plan choice here is the severe underestimate of\nthe average number of rows coming out of the worktable scan (ie, the\nsize of the \"recursive\" result carried forward in each iteration).\n\nUnfortunately, it's really hard to see how we might make that number\nbetter. The current rule of thumb is \"10 times the size of the\nnonrecursive term\", which is why you get 10 here. We could choose\nanother multiplier but it'd be just as bogus as the current one\n(unless somebody has some evidence about typical expansion factors?)\n\nI suppose though that there's some argument for discouraging the planner\nfrom assuming that the carried-forward result is small; so maybe we\nshould use something larger than 10.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 19 Oct 2012 10:20:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recursive query gets slower when adding an index"
},
{
"msg_contents": "Tom Lane wrote on 19.10.2012 16:20:\n> Thomas Kellerer <[email protected]> writes:\n>> This is the execution plan without index: http://explain.depesz.com/s/ecCT\n>> When I create an index on parent_id execution time increases to something between 110ms and 130ms\n>> This is the execution plan with index: http://explain.depesz.com/s/xiL\n>\n> The reason you get a bad plan choice here is the severe underestimate of\n> the average number of rows coming out of the worktable scan (ie, the\n> size of the \"recursive\" result carried forward in each iteration).\n>\n> Unfortunately, it's really hard to see how we might make that number\n> better.  The current rule of thumb is \"10 times the size of the\n> nonrecursive term\", which is why you get 10 here.  We could choose\n> another multiplier but it'd be just as bogus as the current one\n> (unless somebody has some evidence about typical expansion factors?)\n>\n> I suppose though that there's some argument for discouraging the planner\n> from assuming that the carried-forward result is small; so maybe we\n> should use something larger than 10.\n>\n\nThanks for the feedback.\n\nI just noticed this behaviour because we ran the same query on SQL Server 2008 and that took well over 30 seconds without the index.\nSQL Server *really* improved with the index and returned the result in 0.5 seconds with the index in place.\n\nSo I was curious how much faster Postgres would be *with* the index ;)\n\nRegards\nThomas\n\n\n\n\n",
"msg_date": "Fri, 19 Oct 2012 19:22:05 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Recursive query gets slower when adding an index"
}
] |
[
{
"msg_contents": "Hello Perf,\n\nLately I've been pondering. As systems get more complex, it's not \nuncommon for tiered storage to enter the picture. Say for instance, a \nuser has some really fast tables on a NVRAM-based device, and \nslower-access stuff on a RAID, even slower stuff on an EDB, and variants \nlike local disk or a RAM drive.\n\nYet there's only one global setting for random_page_cost, and \nseq_page_cost, and so on.\n\nWould there be any benefit at all to adding these as parameters to the \ntablespaces themselves? I can imagine the planner could override the \ndefault with the tablespace setting on a per-table basis when \ncalculating the cost of retrieving rows from tables/indexes on faster or \nslower storage.\n\nThis is especially true since each of the storage engines I listed have \ndrastically different performance profiles, but no way to hint to the \nplanner. There was a talk at the last PG Open about his EDB tests vastly \npreferring partitioning and sequential access because random access was \nso terrible. But NVRAM has the opposite metric. Currently, tuning for \none necessarily works against the other.\n\nI didn't see anything in the Todo Wiki, so I figured I'd ask. :)\n\nThanks!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 19 Oct 2012 09:29:09 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tablespace-derived stats?"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> Yet there's only one global setting for random_page_cost, and \n> seq_page_cost, and so on.\n\nWe've had tablespace-specific settings for those for some time.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 19 Oct 2012 10:51:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespace-derived stats?"
},
{
"msg_contents": "On 10/19/2012 09:51 AM, Tom Lane wrote:\n\n> We've had tablespace-specific settings for those for some time.\n\nAh, my apologies. I didn't see any in the CREATE TABLESPACE page, and \ndidn't think to check ALTER TABLESPACE.\n\nI withdraw my question. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Fri, 19 Oct 2012 09:54:14 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tablespace-derived stats?"
},
{
"msg_contents": "On Fri, Oct 19, 2012 at 7:29 AM, Shaun Thomas <[email protected]> wrote:\n> Hello Perf,\n>\n> Lately I've been pondering. As systems get more complex, it's not uncommon\n> for tiered storage to enter the picture. Say for instance, a user has some\n> really fast tables on a NVRAM-based device, and slower-access stuff on a\n> RAID, even slower stuff on an EDB, and variants like local disk or a RAM\n> drive.\n>\n> Yet there's only one global setting for random_page_cost, and seq_page_cost,\n> and so on.\n>\n> Would there be any benefit at all to adding these as parameters to the\n> tablespaces themselves?\n\nBeen done already:\n\nhttp://www.postgresql.org/docs/9.0/static/sql-altertablespace.html\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 19 Oct 2012 08:05:10 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespace-derived stats?"
},
{
"msg_contents": "On 10/19/2012 10:05 AM, Jeff Janes wrote:\n\n> http://www.postgresql.org/docs/9.0/static/sql-altertablespace.html\n\nYep. I realized my error was not checking the ALTER page after going \nthrough CREATE. I swore I remembered seeing it in the past, but was \nsurprised it wasn't there.\n\nI keep forgetting Postgres prefers a CREATE + ALTER style rather than \noverloading every CREATE with all ALTER options. Though in my opinion \nthat just adds extra unnecessary steps.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Fri, 19 Oct 2012 10:07:03 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tablespace-derived stats?"
},
{
"msg_contents": "On Fri, Oct 19, 2012 at 8:07 AM, Shaun Thomas <[email protected]> wrote:\n> On 10/19/2012 10:05 AM, Jeff Janes wrote:\n>\n>> http://www.postgresql.org/docs/9.0/static/sql-altertablespace.html\n>\n>\n> Yep. I realized my error was not checking the ALTER page after going through\n> CREATE. I swore I remembered seeing it in the past, but was surprised it\n> wasn't there.\n\nWhen I didn't see it under CREATE, I went to the docs page for the\nglobal page_cost settings, and that page directed me to the ALTER for\nthe tablespace specific ones. It does seem like a statement in the\nCREATION page indicating that more options are available only via the\nALTER of an already existing tablespace would be beneficial.\n\n\n\n> I keep forgetting Postgres prefers a CREATE + ALTER style than overloading\n> every CREATE with all ALTER options. Though in my opinion that just adds\n> extra unnecessary steps.\n\nI was surprised by the absence of the option in CREATE. I didn't\nrecognize that as a general pgsql pattern though, I just thought it\nwas peculiar to tablespaces. But I haven't surveyed the universe of\ncreate and alter commands.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 19 Oct 2012 08:26:24 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tablespace-derived stats?"
}
] |
[
{
"msg_contents": "Am I reading this correctly -- it appears that if SSL negotiation is\nenabled for a connection (say, when using pg_basebackup over a WAN) that\ncompression /*is automatically used*/ (provided it is supported on both\nends)?\n\nIs there a way to check and see if it _*is*_ on for a given connection?\n\nI was looking to hack in zlib support and saw that appears to be\nalready-present support, provided SSL connection security is enabled.\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC",
"msg_date": "Fri, 19 Oct 2012 20:10:24 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Connection Options -- SSL already uses compression?"
},
{
"msg_contents": "On Sat, Oct 20, 2012 at 3:10 AM, Karl Denninger <[email protected]> wrote:\n> Am I reading this correctly -- it appears that if SSL negotiation is enabled\n> for a connection (say, when using pg_basebackup over a WAN) that compression\n> is automatically used (provided it is supported on both ends)?\n\nThat would depend on the OpenSSL defaults, I believe. Pretty sure you\ncan configure that system wide.\n\n\n> Is there a way to check and see if it is on for a given connection?\n\nYou can use PQgetssl() and then use OpenSSL functions to get the info\nfrom there. I'm not sure exactly how, but it would surprise me if it's\nnot possible.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\n",
"msg_date": "Sun, 21 Oct 2012 09:51:03 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Connection Options -- SSL already uses compression?"
}
] |
[
{
"msg_contents": "Hey everyone!\n\nThis is pretty embarrassing, but I've never seen this before. This is \nour system's current memory allocation from 'free -m':\n\n             total       used       free     buffers     cached\nMem:         72485      58473      14012          3       34020\n-/+ buffers/cache:      24449      48036\n\nSo, I've got 14GB of RAM that the OS is just refusing to use for disk or \npage cache. Does anyone know what might cause that?\n\nOur uname -sir, for reference:\n\nLinux 3.2.0-31-generic x86_64\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Mon, 22 Oct 2012 12:35:32 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On Mon, Oct 22, 2012 at 2:35 PM, Shaun Thomas <[email protected]> wrote:\n> So, I've got 14GB of RAM that the OS is just refusing to use for disk or\n> page cache. Does anyone know what might cause that?\n\nMaybe there's just nothing to put inside?\n\nHow big is your database? How much of it gets accessed?\n\n",
"msg_date": "Mon, 22 Oct 2012 14:44:51 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On 10/22/2012 12:44 PM, Claudio Freire wrote:\n\n\n> Maybe there's just nothing to put inside?\n> How big is your database? How much of it gets accessed?\n\nTrust me, there's plenty. We have a DB that's 6x larger than RAM that's \ncurrently experiencing 6000TPS, and according to iostat, anywhere from \n20-60% disk utilization that's mostly reads.\n\nIt's pretty aggressively keeping that 14GB free, and it's driving me \nnuts. :)\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Mon, 22 Oct 2012 12:49:49 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On Mon, 22 Oct 2012 12:35:32 -0500\nShaun Thomas <[email protected]> wrote:\n\n> Hey everyone!\n> \n> This is pretty embarrassing, but I've never seen this before. This is \n> our system's current memory allocation from 'free -m':\n> \n>              total       used       free     buffers     cached\n> Mem:         72485      58473      14012          3       34020\n> -/+ buffers/cache:      24449      48036\n> \n> So, I've got 14GB of RAM that the OS is just refusing to use for disk\n> or page cache. Does anyone know what might cause that?\n\nMaybe it's not needed? What makes you think the OS should allocate all the\nmemory? \n-- \nFrank Lanitz <[email protected]>",
"msg_date": "Mon, 22 Oct 2012 19:49:57 +0200",
"msg_from": "Frank Lanitz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On Mon, Oct 22, 2012 at 2:49 PM, Shaun Thomas <[email protected]> wrote:\n>> Maybe there's just nothing to put inside?\n>> How big is your database? How much of it gets accessed?\n>\n>\n> Trust me, there's plenty. We have a DB that's 6x larger than RAM that's\n> currently experiencing 6000TPS, and according to iostat, anywhere from\n> 20-60% disk utilization that's mostly reads.\n>\n> It's pretty aggressively keeping that 14GB free, and it's driving me nuts.\n> :)\n\nDid you check the kernel's zone_reclaim_mode ?\n\n",
"msg_date": "Mon, 22 Oct 2012 14:53:32 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On Mon, Oct 22, 2012 at 12:49:49PM -0500, Shaun Thomas wrote:\n \n> Trust me, there's plenty. We have a DB that's 6x larger than RAM\n> that's currently experiencing 6000TPS, and according to iostat,\n> anywhere from 20-60% disk utilization that's mostly reads.\n\nCould it be related to zone_reclaim_mode? What is vm.zone_reclaim_mode set to?\n\n/marcus\n\n\n",
"msg_date": "Mon, 22 Oct 2012 19:56:17 +0200",
"msg_from": "Marcus Larsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On 10/22/2012 12:53 PM, Claudio Freire wrote:\n\n> Did you check the kernel's zone_reclaim_mode ?\n\nIt's currently set to 0, which, as I'm led to believe, is the setting I \nwant there. But here's something interesting:\n\nnumactl --hardware\n\navailable: 2 nodes (0-1)\nnode 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22\nnode 0 size: 36853 MB\nnode 0 free: 13816 MB\nnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23\nnode 1 size: 36863 MB\nnode 1 free: 751 MB\nnode distances:\nnode   0   1\n  0:  10  20\n  1:  20  10\n\n\nLooks like CPU 0 is hoarding memory. :(\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Mon, 22 Oct 2012 13:01:28 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On Mon, Oct 22, 2012 at 3:01 PM, Shaun Thomas <[email protected]> wrote:\n>\n>> Did you check the kernel's zone_reclaim_mode ?\n>\n>\n> It's currently set to 0, which as I'm led to believe, is the setting I want\n> there.\n\nYep\n\n> But here's something interesting:\n>\n> numactl --hardware\n>\n> available: 2 nodes (0-1)\n> node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22\n> node 0 size: 36853 MB\n> node 0 free: 13816 MB\n> node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23\n> node 1 size: 36863 MB\n> node 1 free: 751 MB\n> node distances:\n> node 0 1\n> 0: 10 20\n> 1: 20 10\n>\n>\n> Looks like CPU 0 is hoarding memory. :(\n\nYou may want to try setting the numa policy before launching postgres:\n\nnumactl --interleave=all pg_ctl start\n\nor\n\nnumactl --preferred=+0 pg_ctl start\n\n",
"msg_date": "Mon, 22 Oct 2012 15:14:03 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "This is a good general discussion of the problem - looks like you could\nreplace \"MySQL\" with \"PostgreSQL\" everywhere without loss of generality:\n\nhttp://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-archite\ncture/\n\n\nDan\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Claudio\nFreire\nSent: Monday, October 22, 2012 2:14 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Tons of free RAM. Can't make it go away.\n\nOn Mon, Oct 22, 2012 at 3:01 PM, Shaun Thomas <[email protected]>\nwrote:\n>\n>> Did you check the kernel's zone_reclaim_mode ?\n>\n>\n> It's currently set to 0, which as I'm led to believe, is the setting I\nwant\n> there.\n\nYep\n\n> But here's something interesting:\n>\n> numactl --hardware\n>\n> available: 2 nodes (0-1)\n> node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22\n> node 0 size: 36853 MB\n> node 0 free: 13816 MB\n> node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23\n> node 1 size: 36863 MB\n> node 1 free: 751 MB\n> node distances:\n> node 0 1\n> 0: 10 20\n> 1: 20 10\n>\n>\n> Looks like CPU 0 is hoarding memory. :(\n\nYou may want to try setting the numa policy before launching postgres:\n\nnumactl --interleave=all pg_ctl start\n\nor\n\nnumactl --preferred=+0 pg_ctl start\n\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 22 Oct 2012 14:20:09 -0400",
"msg_from": "\"Franklin, Dan (FEN)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On 10/22/2012 01:20 PM, Franklin, Dan (FEN) wrote:\n\n> http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-archite\n> cture/\n\nYeah, I remember reading that a while back. While interesting, it \ndoesn't really apply to PG, in that unlike MySQL, we don't allocate any \nlarge memory segments directly to any large block. With MySQL, it's not \nuncommon to dedicate over 50% of RAM to the MySQL process itself, but I \ndon't often see PG systems with more than 8GB in shared_buffers.\n\nAll the rest should be available for random allocation in general. At \nleast, in theory.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Mon, 22 Oct 2012 13:24:59 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On 10/22/2012 01:14 PM, Claudio Freire wrote:\n\n> You may want to try setting the numa policy before launching postgres:\n>\n> numactl --interleave=all pg_ctl start\n\nI thought about that. I'd try it on one of our stage nodes, but both of \nthem show an even memory split. I'm not sure why our prod node is acting \nthis way. We've used bcfg2 so every server has the exact same \nconfiguration, including kernel parameters, startup settings, and so on. \nI can only conclude that there's something about the activity itself \nthat's causing it.\n\nI'll have to take another look after the market closes to see if the \nunallocated chunk shrinks.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Mon, 22 Oct 2012 13:28:19 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On Mon, Oct 22, 2012 at 3:24 PM, Shaun Thomas <[email protected]> wrote:\n>> http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-archite\n>> cture/\n>\n>\n> Yeah, I remember reading that a while back. While interesting, it doesn't\n> really apply to PG, in that unlike MySQL, we don't allocate any large memory\n> segments directly to any large block. With MySQL, it's not uncommon to\n> dedicate over 50% of RAM to the MySQL process itself, but I don't often see\n> PG systems with more than 8GB in shared_buffers.\n\nActually, one problem that creeps up in PG is that shared buffers\ntends to be allocated all within one node (the postmaster's), stealing\na lot from workers.\n\nI had written a patch that sets the policy to interleave in the\nmaster, while launching (and setting up shared buffers), and then back\nto preferring local when forking a worker.\n\nI never had a chance to test it. I only have one numa system, and it's\nin production so I can't really test much there.\n\nI think, unless it gives you trouble with the page cache, numactl\n--prefer=+0 should work nicely for postgres overall. Failing that,\nnumactl --interleave=all would, IMO, be better than the system\ndefault.\n\n",
"msg_date": "Mon, 22 Oct 2012 15:44:50 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On 10/22/2012 01:44 PM, Claudio Freire wrote:\n\n> I think, unless it gives you trouble with the page cache, numactl\n> --prefer=+0 should work nicely for postgres overall. Failing that,\n> numactl --interleave=all would, IMO, be better than the system\n> default.\n\nThanks, I'll consider that.\n\nFWIW, our current stage cluster node is *not* doing this at all. In \nfact, here's a numastat from stage:\n\n                           node0           node1\nnuma_hit              1623243097      1558610594\nnuma_miss              257459057       310098727\nnuma_foreign           310098727       257459057\ninterleave_hit          25822175        26010606\nlocal_node            1616379287      1545600377\nother_node             264322867       323108944\n\nThen from prod:\n\n                           node0           node1\nnuma_hit              4987625178      3695967931\nnuma_miss             1678204346       418284176\nnuma_foreign           418284176      1678204370\ninterleave_hit             27578           27720\nlocal_node            4988131216      3696305260\nother_node            1677698308       417946847\n\n\nNote how ridiculously uneven node0 and node1 are in comparison to what \nwe're seeing in stage. I'm willing to bet something is just plain wrong \nwith our current production node. So I'm working with our NOC team to \nschedule a failover to the alternate node. If that resolves it, I'll see \nif I can't get some kind of answer from our infrastructure guys to share \nin case someone else encounters this.\n\nYes, even if that answer is \"reboot.\" :)\n\nThanks again!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Tue, 23 Oct 2012 11:49:00 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "Sorry for the late response, but maybe you are still struggling.\n\nIt can be that some query(s) use a lot of work mem, either because of high\nwork_mem setting or because of planner error. In this case the moment query\nruns it will need memory that will later be returned and become free.\nUsually this can be seen as active memory spike with a lot of free memory\nafter.\n\n2012/10/22 Shaun Thomas <[email protected]>\n\n> Hey everyone!\n>\n> This is pretty embarrassing, but I've never seen this before. This is our\n> system's current memory allocation from 'free -m':\n>\n>              total       used       free     buffers     cached\n> Mem:         72485      58473      14012          3       34020\n> -/+ buffers/cache:      24449      48036\n>\n> So, I've got 14GB of RAM that the OS is just refusing to use for disk or\n> page cache. Does anyone know what might cause that?\n>\n> Our uname -sir, for reference:\n>\n> Linux 3.2.0-31-generic x86_64\n>\n> --\n\n-- \nBest regards,\n Vitalii Tymchyshyn",
"msg_date": "Sat, 27 Oct 2012 23:49:25 -0400",
    "msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
},
{
"msg_contents": "On 10/27/2012 10:49 PM, Віталій Тимчишин wrote:\n\n> It can be that some query(s) use a lot of work mem, either because of\n> high work_mem setting or because of planner error. In this case the\n> moment query runs it will need memory that will later be returned and\n> become free. Usually this can be seen as active memory spike with a lot\n> of free memory after.\n\nYeah, I had briefly considered that. But our work-mem is only 16MB, and \neven a giant query would have trouble allocating 10+GB with that size of \nwork-mem buckets.\n\nThat's why I later listed the numa info. In our case, processor 0 is \nheavily unbalanced with its memory accesses compared to processor 1. I \nthink the theory that we didn't start with interleave put an 8GB (our \nshared_buffers) segment all on processor 0, which unbalanced a lot of \nother stuff.\n\nOf course, that leaves 4-6GB unaccounted for. And numactl still shows a \nheavy preference for freeing memory from proc 0. It seems to only do it \non this node, so we're going to switch nodes soon and see if the problem \nreappears. We may have to perform a node hardware audit if this persists.\n\nThanks for your input, though. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n",
"msg_date": "Mon, 29 Oct 2012 11:17:15 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tons of free RAM. Can't make it go away."
}
] |
[
{
"msg_contents": "I have a problem with prepared statements choosing a bad query plan - I was hoping that 9.2 would have eradicated the problem :(\n\nTaken from the postgresql log:\n\n <2012-10-23 15:21:03 UTC acme_metastore 13798 5086b49e.35e6> LOG: duration: 20513.809 ms execute S_6: SELECT S.Subj, S.Prop, S.Obj\n FROM jena_g1t1_stmt S WHERE S.Obj = $1 AND S.Subj = $2 AND S.Prop = $3 AND S.GraphID = $4\n\n <2012-10-23 15:21:03 UTC acme_metastore 13798 5086b49e.35e6> DETAIL: parameters: $1 = 'Uv::http://www.w3.org/2006/vcard/ns#Organization', $2 = 'Uv::http://acme.metastore.acmeemca.com/content/journals/10.1049/acme-ipr.2010.0367-af2-org', $3 = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type', $4 = '1'\n <2012-10-23 15:21:03 UTC acme_metastore 13798 5086b49e.35e6> LOG: duration: 20513.790 ms plan:\n Query Text: SELECT S.Subj, S.Prop, S.Obj\n FROM jena_g1t1_stmt S WHERE S.Obj = $1 AND S.Subj = $2 AND S.Prop = $3 AND S.GraphID = $4\n\n Index Scan using jena_g1t1_stmt_ixpo on jena_g1t1_stmt s (cost=0.00..134.32 rows=1 width=183)\n Index Cond: (((prop)::text = ($3)::text) AND ((obj)::text = ($1)::text))\n Filter: (((subj)::text = ($2)::text) AND (graphid = $4))\n\n\nThe same query written in line: as you can see its using a different index and is therefore orders of magnitude quicker.\n\n\n SELECT S.Subj, S.Prop, S.Obj\n FROM jena_g1t1_stmt S WHERE S.Obj = 'Uv::http://www.w3.org/2006/vcard/ns#Organization' AND S.Subj = 'Uv::http://acme.metastore.acmeemca.com/content/journals/10.1049/acme-ipr.2010.0367-af2-org' AND S.Prop = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND S.GraphID = '1';\n\n Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt s (cost=0.00..168.64 rows=1 width=183) (actual time=0.181..0.183 rows=1 loops=1)\n Index Cond: (((subj)::text = 'Uv::http://acme.metastore.acmeemca.com/content/journals/10.1049/acme-ipr.2010.0367-af2-org'::text) AND ((prop)::text = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text))\n Filter: 
(((obj)::text = 'Uv::http://www.w3.org/2006/vcard/ns#Organization'::text) AND (graphid = 1))\n Total runtime: 0.268 ms\n (4 rows)\n\n\nIf I write it as a prepared statement in psql it also now chooses the correct index (in v9.1 it would pick the wrong one)\n\n\n prepare testplan as SELECT S.Subj, S.Prop, S.Obj\n FROM jena_g1t1_stmt S WHERE S.Obj = $1 AND S.Subj = $2 AND S.Prop = $3 AND S.GraphID = $4;\n\n explain analyze execute testplan ('Uv::http://www.w3.org/2006/vcard/ns#Organization','Uv::http://acme.metastore.acmeemca.com/content/journals/10.1049/acme-ipr.2010.0367-af2-org','Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type','1')\n\n Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt s (cost=0.00..168.64 rows=1 width=183) (actual time=0.276..0.278 rows=1 loops=1)\n Index Cond: (((subj)::text = 'Uv::http://acme.metastore.acmeemca.com/content/journals/10.1049/acme-ipr.2010.0367-af2-org'::text) AND ((prop)::text = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text))\n Filter: (((obj)::text = 'Uv::http://www.w3.org/2006/vcard/ns#Organization'::text) AND (graphid = 1))\n Total runtime: 0.310 ms\n (4 rows)\n\nThe queries are generated by Apache Jena / sparql. I have tried adding ?protocolVersion=2 to the jdbc connection string - but I still see the queries as prepared statements.\n\nFrom the wiki:\n\n\"Prepared statements used to be optimized once, without any knowledge of the parameters' values. With 9.2, the planner will use specific plans regarding to the parameters sent (the query will be planned at execution), except if the query is executed several times and the planner decides that the generic plan is not too much more expensive than the specific plans.\"\n\nIs there a way to force the planner to use the specific rather than generic plans?\n\nDan\n\n\nThe information in this message is intended solely for the addressee and should be considered confidential. 
Publishing Technology does not accept legal responsibility for the contents of this message and any statements contained herein which do not relate to the official business of Publishing Technology are neither given nor endorsed by Publishing Technology and are those of the individual and not of Publishing Technology. This message has been scanned for viruses using the most current and reliable tools available and Publishing Technology excludes all liability related to any viruses that might exist in any attachment or which may have been acquired in transit.",
"msg_date": "Tue, 23 Oct 2012 15:42:10 +0000",
"msg_from": "Daniel Burbridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prepared statements slow in 9.2 still (bad query plan)"
},
{
"msg_contents": "Daniel Burbridge <[email protected]> writes:\n> I have a problem with prepared statements choosing a bad query plan - I was hoping that 9.2 would have eradicated the problem :(\n\n9.2 will only pick the \"right\" plan if that plan's estimated cost is a\ngood bit cheaper than the \"wrong\" parameterized plan. In this case,\nnot only is there not a lot of difference, but the difference is in the\nwrong direction. You need to fix that --- perhaps increasing stats\ntargets would help?\n\nA more radical question is whether you have a well-chosen set of indexes\nin the first place. These two seem a bit odd, and certainly not\nterribly well matched to this query.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 28 Oct 2012 11:06:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements slow in 9.2 still (bad query plan)"
},
{
"msg_contents": "On 10/28/2012 10:06 AM, Tom Lane wrote:\n\n> 9.2 will only pick the \"right\" plan if that plan's estimated cost is a\n> good bit cheaper than the \"wrong\" parameterized plan.\n\nIs it also possible that the planner differences between extended and \nsimple query mode caused this? That really bit us in the ass until \nEnterpriseDB sent us a patch. From browsing the threads, didn't someone \nsay a similar problem existed in PG core?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Mon, 29 Oct 2012 08:25:15 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prepared statements slow in 9.2 still (bad query plan)"
},
{
"msg_contents": "Thanks,\n\ndefault_statistics_target is currently at 500 (I have tried from 100-5000 without any success)\n\nWould upping the stats for one specific column help? If so, I presume I should up the stats on the subj column...\n\nYou may well be onto something wrt the indexes and their usage - this is not a system that I have built but as is often the case been asked to look at the performance of....\n\nIt is an RDB triplestore for Apache-Jena with approx 17 million triples/rows.\nThere are only 4 columns - subj,prop,obj and graphid (which in our case is always 1)\nAccording to the stats that have been collected subj has approx 350,000 distinct values, prop 88 and obj around 150,000\n\nDan\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: 28 October 2012 15:06\nTo: Daniel Burbridge\nCc: [email protected]\nSubject: Re: [PERFORM] Prepared statements slow in 9.2 still (bad query plan)\n\nDaniel Burbridge writes:\n> I have a problem with prepared statements choosing a bad query plan - \n> I was hoping that 9.2 would have eradicated the problem :(\n\n9.2 will only pick the \"right\" plan if that plan's estimated cost is a good bit cheaper than the \"wrong\" parameterized plan. In this case, not only is there not a lot of difference, but the difference is in the wrong direction. You need to fix that --- perhaps increasing stats targets would help?\n\nA more radical question is whether you have a well-chosen set of indexes in the first place. These two seem a bit odd, and certainly not terribly well matched to this query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2012 16:41:49 +0000",
"msg_from": "Daniel Burbridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prepared statements slow in 9.2 still (bad query plan)"
}
] |
[
{
"msg_contents": "henk de wit wrote:\n\n> Well, what do you know! That did work indeed. Immediately after the\n> ANALYZE on that parent table (taking only a few seconds) a fast\n> plan was created and the query executed in ms again. Silly me, I\n> should have tried that earlier.\n\nOf course, if your autovacuum settings are aggressive enough, you\nshould generally not need to run ANALYZE explicitly. You should\ndouble-check that autovacuum is turned on and configured at least as\naggressively as the default settings, or you will probably get little\nsurprises like this when you least expect them.\n\n-Kevin\n\n",
"msg_date": "Tue, 23 Oct 2012 14:33:16 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with limit goes from few ms to hours"
},
{
"msg_contents": "On 10/23/2012 11:33 AM, Kevin Grittner wrote:\n> henk de wit wrote:\n>\n>> Well, what do you know! That did work indeed. Immediately after the\n>> ANALYZE on that parent table (taking only a few seconds) a fast\n>> plan was created and the query executed in ms again. Silly me, I\n>> should have tried that earlier.\n> Of course, if your autovacuum settings are aggressive enough, you\n> should generally not need to run ANALYZE explicitly. You should\n> double-check that autovacuum is turned on and configured at least as\n> aggressively as the default settings, or you will probably get little\n> surprises like this when you least expect them.\n>\n>\nThe exception I'd make to Kevin's good advice is for cases when a \nprocess makes substantial statistics-altering changes to your data (bulk \ninsert/delete/update) immediately followed by a query against the \nupdated table(s). In those cases there is a good possibility that the \nstatistics will not have been automatically updated before the \nsubsequent query is planned so an explicit ANALYZE between the update \nand the query can be of value.\n\nCheers,\nSteve\n\n\n",
"msg_date": "Tue, 23 Oct 2012 13:08:15 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with limit goes from few ms to hours"
}
] |
[
{
"msg_contents": "Hi,\n\ni've got a very strange problem on PostgreSQL 8.4, where the queryplaner goes absolutely havoc, when slightly changing one parameter.\n\nFirst the Tables which are involved:\n1. Table \"public.spsdata\"\n Column | Type | Modifiers \n-----------------------------+-----------------------------+---------------------------------------------------------------\n data_id | bigint | not null default nextval('spsdata_data_id_seq'::regclass)\n machine_id | integer | \n timestamp | timestamp with time zone | \n value1 | ….\n value2 | ….\n errorcode | integer\n...\n\nThis table is partitioned (per month) and holds about 3.86203 * 10^9 records (the machines are generating data every 5 seconds)\nEvery partition (=month) has about 36 * 10^6 records and has following indexes/constraints:\nIndexes:\n \"spsdata_2012m09_machine_id_key\" UNIQUE, btree (machine_id, \"timestamp\")\nCheck constraints:\n \"spsdata_2012m09_timestamp_check\" CHECK (\"timestamp\" >= '2012-09-01 00:00:00+02'::timestamp with time zone AND \"timestamp\" < '2012-10-01 00:00:00+02'::timestamp with time zone)\nInherits: spsdata\n\nconstraint_exclusion is set to 'partition'\n\n2. Table \"public.events\"\n Column | Type | Modifiers \n-----------------------+-----------------------------+----------------------------------------------------------------\n event_id | bigint | not null default nextval('events_event_id_seq'::regclass)\n machine_id | integer | \n timestamp | timestamp without time zone | \n code | integer | \nIndexes:\n \"events_pkey\" PRIMARY KEY, btree (event_id)\n \"events_unique_key\" UNIQUE, btree (machine_id, \"timestamp\", code)\n \"events_code\" btree (code)\n \"events_timestamp\" btree (\"timestamp\");\n\nTHE PROBLEM:\nWe're trying to select certain rows from the spsdata-table which happened before the event. The event is filtered By code. Because the timestamp of event and data is not in sync, we look into the last 30 seconds. 
Here is the select:\ndb=# SELECT m.machine_id, s.timestamp, s.errorcode\nFROM events m INNER JOIN spsdata as s ON (m.machine_id= m.machine_id AND s.timestamp BETWEEN m.timestamp - interval '30 seconds' AND m.timestamp)\nWHERE m.code IN 2024 AND m.timestamp BETWEEN '2012-08-14' AND '2012-08-29' AND s.errorcode in '2024';\n machine_id | timestamp | errorcode \n------------+------------------------+-----------\n 183 | 2012-08-18 18:21:29+02 | 2024\n 216 | 2012-08-20 15:40:39+02 | 2024\n 183 | 2012-08-21 12:56:49+02 | 2024\n 183 | 2012-08-27 17:04:34+02 | 2024\n 214 | 2012-08-27 23:33:44+02 | 2024\n(5 rows)\n\nTime: 6087.911 ms\n\nWhen I'm changing \"m.timestamp BETWEEN '2012-08-14' AND '2012-08-29'\" to \"m.timestamp BETWEEN '2012-08-13' AND '2012-08-29'\" the query takes HOURS. \nHere are some statistics for different ranges\n2012-08-14' AND '2012-08-29' -> ca 4sec\n2012-08-14' AND '2012-09-30' -> ca 4sec\n2012-08-13' AND '2012-08-15' -> ca 4sec\n2012-08-13' AND '2012-08-22' -> ca 4sec\n2012-08-13' AND '2012-08-25' -> ca 4sec\n2012-08-13' AND '2012-08-26' -> FOREVER\n2012-08-14' AND '2012-08-26' -> ca 4sec\n2012-08-13' AND ( >'2012-08-26' ) -> FOREVER\n\nThe problem is the change of the query plan.\nFAST:\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..144979241.24 rows=42662 width=14)\n Join Filter: ((s.\"timestamp\" <= m.\"timestamp\") AND (m.machine_id = s.machine_id) AND (s.\"timestamp\" >= (m.\"timestamp\" - '00:00:30'::interval)))\n -> Index Scan using events_code on events m (cost=0.00..4911.18 rows=25 width=12)\n Index Cond: (code = 2024)\n Filter: ((\"timestamp\" >= '2012-08-14 00:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2012-08-26 00:00:00'::timestamp without time zone))\n -> Append (cost=0.00..5770958.44 rows=1400738 width=14)\n -> Index Scan using spsdata_machine_id on spsdata s 
(cost=0.00..4.11 rows=1 width=14)\n Index Cond: (s.machine_id = m.machine_id)\n\nSLOW:\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=631.37..158275670.34 rows=47782 width=14)\n Hash Cond: (s.machine_id = m.machine_id)\n Join Filter: ((s.\"timestamp\" <= m.\"timestamp\") AND (s.\"timestamp\" >= (m.\"timestamp\" - '00:00:30'::interval)))\n -> Append (cost=0.00..158152325.56 rows=3071675 width=14)\n -> Seq Scan on spsdata s (cost=0.00..10.75 rows=1 width=14)\n Filter: (errorcode = 2024::smallint)\n -> Seq Scan on spsdata_2009m11 s (cost=0.00..10.75 rows=1 width=14)\n Filter: (errorcode = 2024::smallint)\n -> Seq Scan on spsdata_2009m12 s (cost=0.00..24897.60 rows=32231 width=14)\n Filter: (errorcode = 2024::smallint)\n -> Seq Scan on spsdata_2010m01 s (cost=0.00..113650.43 rows=153779 width=14)\n Filter: (errorcode = 2024::smallint)\n -> Seq Scan on spsdata_2010m02 s (cost=0.00..451577.41 rows=9952 width=14)\n Filter: (errorcode = 2024::smallint)\n -> Seq Scan on spsdata_2010m03 s (cost=0.00..732979.41 rows=16001 width=14)\n Filter: (errorcode = 2024::smallint)\n -> Seq Scan on spsdata_2010m04 s (cost=0.00..940208.95 rows=17699 width=14)\n\nAs you can imagine, Seq Scanning a Table(s) with 3.86203 * 10^9 records is not a good idea.\nWhat can I do to prevent that behavior ?\n\nThanks\n\nAndy\n\n-- \nAndreas Böckler\[email protected]\n\n\n",
"msg_date": "Wed, 24 Oct 2012 17:41:07 +0200",
"msg_from": "=?iso-8859-1?Q?B=F6ckler_Andreas?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "Hi Jeff,\n\nthanks for your answer!\n\nAm 24.10.2012 um 19:00 schrieb Jeff Janes:\n\n> On Wed, Oct 24, 2012 at 8:41 AM, Böckler Andreas <[email protected]> wrote:\n> \n>> SELECT m.machine_id, s.timestamp, s.errorcode\n>> FROM events m INNER JOIN spsdata as s ON (m.machine_id= s.machine_id\n> \n> m.machine_id is equal to itself? you must be retyping the query by hand…\nYes I did … i changed the vars from german to english .. \nThat should be m.machine_id=s.machine_id\n> \n> You should report the results of \"EXPLAIN ANALYZE\" rather than merely\n> EXPLAIN, as that would make it much easier to verify where the\n> selectivity estimates are off.\n> \nOK .. \ni can do that for the FAST query. \nBut the other one would take days. (see below )\n\n> \n>> FAST:\n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Nested Loop (cost=0.00..144979241.24 rows=42662 width=14)\n>> Join Filter: ((s.\"timestamp\" <= m.\"timestamp\") AND (m.machine_id = s.machine_id) AND (s.\"timestamp\" >= (m.\"timestamp\" - '00:00:30'::interval)))\n>> -> Index Scan using events_code on events m (cost=0.00..4911.18 rows=25 width=12)\n>> Index Cond: (code = 2024)\n>> Filter: ((\"timestamp\" >= '2012-08-14 00:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2012-08-26 00:00:00'::timestamp without time zone))\n>> -> Append (cost=0.00..5770958.44 rows=1400738 width=14)\n>> -> Index Scan using spsdata_machine_id on spsdata s (cost=0.00..4.11 rows=1 width=14)\n>> Index Cond: (s.machine_id = m.machine_id)\n> \n> Was there more to the plan that you snipped? If not, why isn't it\n> checking all the other partitions?\n\nYour right. It's checking all partitions!. 
So the constraint exclusion doesn't kick in.\nThis can be fixed with\nSELECT \n\tm.machine_id, s.timestamp, s.errorcode\nFROM \n\tevents m \n\tINNER JOIN spsdata as s ON (m.machine_id=s.machine_id AND s.timestamp BETWEEN m.timestamp - interval '30 seconds' AND m.timestamp)\nWHERE \n\tm.code IN (2024) \n\tAND m.timestamp BETWEEN '2012-08-01' AND '2012-08-29' \n\tAND s.timestamp BETWEEN '2012-08-01' AND '2012-08-29' \n\tAND s.errorcode in ('2024');\n\nIt doesn't take hours to end, but it's not the performance gain you would expect.\n\nI'v changed the query to one partition spsdata_2012m08 and attached the slow and fast cases with EXPLAIN ANALYZE.\n\nThe difference is one day in the WHERE-Clause\n290.581 ms VS 687887.674 ms !\nThats 2372 times slower.\n\nHow can i force the fast query plan in a select?\n\nAt least I know that spsdata_2012m08 has way more records than events\nspsdata_2012m08: reltuples -> 5.74082 * 10^7\nevents: count(1) for that time range -> 51383\n\n> \n> If you can't fix the selectivity estimates, one thing you could do to\n> drive it to the faster query is to decrease random_page_cost to be the\n> same seq_page_cost. That should push the cross-over point to the\n> sequential scan out to a region you might not care about. However, it\n> could also drive other queries in your system to use worse plans than\n> they currently are.\n> Or, you could \"set enable_seqscan = off\" before running this\n> particular query, then reset it afterwards.\n> \n> Cheers,\n> \n> Jeff\nI've played with seq_page_cost and enable_seqscan already, but you have to know the right values before SELECT to get good results ;)\n\nCheers,\n\nAndy\n\n\n-- \nAndreas Böckler\[email protected]",
"msg_date": "Wed, 24 Oct 2012 20:51:33 +0200",
"msg_from": "=?iso-8859-1?Q?B=F6ckler_Andreas?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "On Wed, Oct 24, 2012 at 11:51 AM, Böckler Andreas <[email protected]> wrote:\n>>\n>> Was there more to the plan that you snipped? If not, why isn't it\n>> checking all the other partitions?\n>\n> Your right. It's checking all partitions!. So the constraint exclusion doesn't kick in.\n> This can be fixed with\n> SELECT\n> m.machine_id, s.timestamp, s.errorcode\n> FROM\n> events m\n> INNER JOIN spsdata as s ON (m.machine_id=s.machine_id AND s.timestamp BETWEEN m.timestamp - interval '30 seconds' AND m.timestamp)\n> WHERE\n> m.code IN (2024)\n> AND m.timestamp BETWEEN '2012-08-01' AND '2012-08-29'\n> AND s.timestamp BETWEEN '2012-08-01' AND '2012-08-29'\n> AND s.errorcode in ('2024');\n\nEven checking all the partitions it seemed to be pretty fast (78 ms).\nIs it worth adding all of that spinach (which could easily get out of\ndate) just to improve a query that is already fast?\n\n\n\n>\n> It doesn't take hours to end, but it's not the performance gain you would expect.\n>\n> I'v changed the query to one partition spsdata_2012m08 and attached the slow and fast cases with EXPLAIN ANALYZE.\n>\n> The difference is one day in the WHERE-Clause\n> 290.581 ms VS 687887.674 ms !\n> Thats 2372 times slower.\n\n From the fast case:\n\n -> Bitmap Index Scan on spsdata_2012m08_machine_id_key\n(cost=0.00..2338.28 rows=56026 width=0) (actual time=0.262..0.262\nrows=6 loops=186)\n Index Cond: ((s.machine_id = m.machine_id) AND\n(s.\"timestamp\" > (m.\"timestamp\" - '00:00:30'::interval)) AND\n(s.\"timestamp\" <= m.\"timestamp\"))\n\nThe difference in predicted rows to actual rows, 56026 to 6, is pretty\nimpressive. 
That is why the cost of the fast method is vastly\noverestimated, and making it just slightly bigger yet pushes it over\nthe edge to looking more expensive than the slower sequential scan.\nIt does seem to be the case of the range selectivity not being\nestimate correctly.\n\n> How can i force the fast query plan in a select?\n\nI'd probably punt and do it in the application code. Do the select on\nthe event table, then loop over the results issues the queries on the\nspsdata table. That way the range endpoints would be constants rather\nthan coming from joins, and the planner should do a better job.\n\nCan you load the data into 9.2 and see if it does better? (I'm not\noptimistic that it will be.)\n\n\n> I've played with seq_page_cost and enable_seqscan already, but you have to know the right values before SELECT to get good results ;)\n\nNot sure what you mean here. If you change the settings just for the\nquery, it should be safe because when the query is already fast it is\nnot using the seq scan, so discouraging it from using one even further\nis not going to do any harm.\n\nOr do you mean you have lots of queries which are slow other than the\none shown, and you can't track all of them down?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 25 Oct 2012 09:20:56 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "\nAm 25.10.2012 um 18:20 schrieb Jeff Janes:\n\n> Can you load the data into 9.2 and see if it does better? (I'm not\n> optimistic that it will be.)\n\nThis takes months, the customer has to pay us for that ;)\nThere are already talks about moving it to a new server, but this is for next year.\n\nAnd it will be no child's play to migrate about 1.6TB of data from 8.4 to 9.2.\n\nCheers,\n\nAndy\n-- \nAndreas Böckler\[email protected]\n\n\n",
"msg_date": "Fri, 26 Oct 2012 17:30:47 +0200",
"msg_from": "=?iso-8859-1?Q?B=F6ckler_Andreas?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 8:30 AM, Böckler Andreas <[email protected]> wrote:\n>\n> Am 25.10.2012 um 18:20 schrieb Jeff Janes:\n>\n>> Can you load the data into 9.2 and see if it does better? (I'm not\n>> optimistic that it will be.)\n>\n> This takes months, the customer has to pay us for that ;)\n\nYou probably only need to load one partition to figure out if does a\nbetter job there.\n\nOnce you know if it solves the problem, then you can make an informed\ndecision on whether migration might be worthwhile.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 26 Oct 2012 11:00:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "\nAm 26.10.2012 um 20:00 schrieb Jeff Janes:\n\n> You probably only need to load one partition to figure out if does a\n> better job there.\n> \n> Once you know if it solves the problem, then you can make an informed\n> decision on whether migration might be worthwhile.\n> \n> Cheers,\n> \n> Jeff\nok .. i'll give it a try ...\n-- \nAndreas Böckler\[email protected]\n\n\n",
"msg_date": "Fri, 26 Oct 2012 20:33:45 +0200",
"msg_from": "=?iso-8859-1?Q?B=F6ckler_Andreas?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
}
] |
[
{
"msg_contents": "Hey everyone,\n\nSo recently we upgraded to 9.1 and have noticed a ton of our queries got \nmuch worse. It turns out that 9.1 is *way* more optimistic about our \nfunctional indexes, even when they're entirely the wrong path. So after \ngoing through the docs, I see that the normal way to increase stats is \nto alter columns directly on a table, or change the \ndefault_statistics_target itself.\n\nBut there doesn't seem to be any way to increase stats for a functional \nindex unless you modify default_statistics_target. I did some testing, \nand for a particularly bad plan, we don't get a good result until the \nstats are at 5000 or higher. As you can imagine, this drastically \nincreases our analyze time, and there's no way we're setting that \nsystem-wide.\n\nI tested this by:\n\nSET default_statistics_target = 5000;\n\nANALYZE my_table;\n\nEXPLAIN SELECT [ugly query];\n\nI only tested 1000, 2000, 3000, 4000, and 5000 before it switched plans. \nThis is a 30M row table, and the \"good\" plan is 100x faster than the bad \none. You can see this behavior yourself with this test case:\n\nCREATE TABLE date_test (\n id SERIAL,\n col1 varchar,\n col2 numeric,\n action_date TIMESTAMP WITHOUT TIME ZONE\n);\n\ninsert into date_test (col1, col2, action_date)\nselect 'S:' || ((random()*a.num)::int % 10000),\n (random()*a.num)::int % 15000,\n current_date - (random()*a.num)::int % 1000\n from generate_series(1,10000000) a(num);\n\ncreate index idx_date_test_action_date_trunc\n on date_test (date_trunc('day', action_date));\n\ncreate index idx_date_test_col1_col2\n on date_test (col1, col2);\n\nexplain analyze\nselect *\n from date_test\n where col1 IN ('S:96')\n and col2 = 657\n and date_trunc('day', action_date) >= '2012-10-24'\n order by id desc, action_date\n\n\nThis seems to cause the problem more consistently when using a value \nwhere col1 and col2 have no matches. 
In this particular example, I \ndidn't get the good plan until using 1000 as the default stats target. \nIt can't be a coincidence that there are 1000 distinct values in the \ntable for that column, and we get a terrible plan until a statistic is \nrecorded for each and every one in the functional index so it can \nexclude itself. This seems counter-intuitive to pg_stats with default \nstats at 500:\n\nSELECT attname,n_distinct FROM pg_stats WHERE tablename='date_test';\n\n attname | n_distinct\n-------------+------------\n id | -1\n action_date | 1000\n col2 | 14999\n col1 | 10000\n\nSELECT stadistinct FROM pg_statistic\n WHERE starelid='idx_date_test_col1_col2'::regclass\n\n stadistinct\n-------------\n 1000\n\nJust on pure selectivity, it should prefer the index on col1 and col2. \nAnyway, we're getting all the devs to search out that particular \nfunctional index and eradicate it, but that will take a while to get \nthrough testing and deployment. The overriding problem seems to be two-fold:\n\n1. Is there any way to specifically set stats on a functional index?\n2. Why is the planner so ridiculously optimistic with functional \nindexes, even in the case of much higher selectivity as reported by \npg_stats on the named columns?\n\nThanks!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 24 Oct 2012 11:55:15 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting Statistics on Functional Indexes"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> 1. Is there any way to specifically set stats on a functional index?\n\nSure, the same way you would for a table.\n\nregression=# create table foo (f1 int, f2 int);\nCREATE TABLE\nregression=# create index fooi on foo ((f1 + f2));\nCREATE INDEX\nregression=# \\d fooi\n Index \"public.fooi\"\n Column | Type | Definition \n--------+---------+------------\n expr | integer | (f1 + f2)\nbtree, for table \"public.foo\"\n\nregression=# alter index fooi alter column expr set statistics 5000;\nALTER INDEX\n\nThe weak spot in this, and the reason this isn't \"officially\" supported,\nis that the column name for an index expression isn't set in stone.\nBut as long as you check what it's called you can set its target.\n\n> 2. Why is the planner so ridiculously optimistic with functional \n> indexes, even in the case of much higher selectivity as reported by \n> pg_stats on the named columns?\n\nIt's not particularly (not that you've even defined what you think\n\"optimistic\" is, much less mentioned what baseline you're comparing to).\nI tried your example on HEAD and I got what seemed pretty decent\nrowcount estimates ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 24 Oct 2012 15:11:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
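Tom's "check what it's called" caveat above can be scripted rather than eyeballed. A minimal sketch against his `fooi` example, assuming only that expression-index columns get generated names (`expr`, `expr1`, ...) recorded in `pg_attribute`:

```sql
-- Look up the generated column name(s) of the expression index ...
SELECT attname
  FROM pg_attribute
 WHERE attrelid = 'fooi'::regclass;

-- ... and use whatever name that reports (here: "expr"):
ALTER INDEX fooi ALTER COLUMN expr SET STATISTICS 5000;
ANALYZE foo;  -- the new target only takes effect once the table is re-analyzed
```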
{
"msg_contents": "On 10/24/2012 02:11 PM, Tom Lane wrote:\n\n> It's not particularly (not that you've even defined what you think\n> \"optimistic\" is, much less mentioned what baseline you're comparing\n> to).\n\nThe main flaw with my example is that it's random. But I swear I'm not \nmaking it up! :)\n\nThere seems to be a particularly nasty edge case we're triggering, then. \nLike I said, it's worse when col1+col2 don't match anything. In that \ncase, it's using the trunc index on the date column, which has \ndemonstrably worse performance. Here are the two analyzes I got \nbefore/after front-loading statistics.\n\nBefore stats increase:\n\n Sort (cost=9.38..9.39 rows=1 width=23) (actual time=78.282..78.282 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_action_date_trunc on date_test \n(cost=0.00..9.37 rows=1 width=23) (actual time=78.274..78.274 rows=0 \nloops=1)\n Index Cond: (date_trunc('day'::text, action_date) >= \n'2012-10-24 00:00:00'::timestamp without time zone)\n Filter: (((col1)::text = 'S:96'::text) AND (col2 = 657::numeric))\n Total runtime: 78.317 ms\n\n\nAnd then after. I used your unofficial trick to set it to 1000:\n\nalter index idx_date_test_action_date_trunc\n alter column date_trunc set statistics 1000;\nanalyze date_test;\n\n\n Sort (cost=9.83..9.83 rows=1 width=23) (actual time=0.038..0.038 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_col1_col2 on date_test \n(cost=0.00..9.82 rows=1 width=23) (actual time=0.030..0.030 rows=0 loops=1)\n Index Cond: (((col1)::text = 'S:96'::text) AND (col2 = \n657::numeric))\n Filter: (date_trunc('day'::text, action_date) >= '2012-10-24 \n00:00:00'::timestamp without time zone)\n Total runtime: 0.066 ms\n\n\nThis is on a bone-stock PG 9.1.6 from Ubuntu 12.04 LTS, with \ndefault_statistics increased to 500. 
The only thing I bumped up was the \nfunctional index between those two query plans.\n\nBut then I noticed something else. I reverted back to the old 500 \ndefault for everything, and added an index:\n\ncreate index idx_date_test_action_date_trunc_col1\n on date_test (date_trunc('day', action_date), col1);\n\nI think we can agree that this index would be more selective than the \none on date_trunc by itself. Yet:\n\n Sort (cost=9.38..9.39 rows=1 width=23) (actual time=77.055..77.055 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_action_date_trunc on date_test \n(cost=0.00..9.37 rows=1 width=23) (actual time=77.046..77.046 rows=0 \nloops=1)\n Index Cond: (date_trunc('day'::text, action_date) >= \n'2012-10-24 00:00:00'::timestamp without time zone)\n Filter: (((col1)::text = 'S:96'::text) AND (col2 = 657::numeric))\n Total runtime: 77.091 ms\n\n\nAll I have to say about that is: wat.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 24 Oct 2012 14:31:11 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On 10/24/2012 02:31 PM, Shaun Thomas wrote:\n\n> The main flaw with my example is that it's random. But I swear I'm not\n> making it up! :)\n\nAnd then I find a way to make it non-random. Hooray:\n\nCREATE TABLE date_test (\n id SERIAL,\n col1 varchar,\n col2 numeric,\n action_date TIMESTAMP WITHOUT TIME ZONE\n);\n\ninsert into date_test (col1, col2, action_date)\nselect 'S:' || (a.num % 10000), a.num % 15000,\n current_date - a.num % 1000\n from generate_series(1,10000000) a(num);\n\ncreate index idx_date_test_action_date_trunc\n on date_test (date_trunc('day', action_date));\n\ncreate index idx_date_test_col1_col2\n on date_test (col1, col2);\n\nset default_statistics_target = 500;\nvacuum analyze date_test;\n\nexplain analyze\nselect *\n from date_test\n where col1 IN ('S:96')\n and col2 = 657\n and date_trunc('day', action_date) >= '2012-10-24'\n order by id desc, action_date;\n\n\n Sort (cost=9.38..9.39 rows=1 width=23) (actual time=83.418..83.418 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_action_date_trunc on date_test \n(cost=0.00..9.37 rows=1 width=23) (actual time=83.409..83.409 rows=0 \nloops=1)\n Index Cond: (date_trunc('day'::text, action_date) >= \n'2012-10-24 00:00:00'::timestamp without time zone)\n Filter: (((col1)::text = 'S:96'::text) AND (col2 = 657::numeric))\n Total runtime: 83.451 ms\n\n\nalter index idx_date_test_action_date_trunc\n alter column date_trunc set statistics 1000;\nanalyze date_test;\n\n\n Sort (cost=9.83..9.83 rows=1 width=23) (actual time=0.077..0.077 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_col1_col2 on date_test \n(cost=0.00..9.82 rows=1 width=23) (actual time=0.069..0.069 rows=0 loops=1)\n Index Cond: (((col1)::text = 'S:96'::text) AND (col2 = \n657::numeric))\n Filter: (date_trunc('day'::text, action_date) >= '2012-10-24 \n00:00:00'::timestamp without 
time zone)\n Total runtime: 0.105 ms\n\n\nThen for fun:\n\n\ncreate index idx_date_test_action_date_trunc_col1\n on date_test (date_trunc('day', action_date), col1);\nalter index idx_date_test_action_date_trunc\n alter column date_trunc set statistics -1;\nanalyze date_test;\n\n\n Sort (cost=9.38..9.39 rows=1 width=23) (actual time=84.375..84.375 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_action_date_trunc on date_test \n(cost=0.00..9.37 rows=1 width=23) (actual time=84.366..84.366 rows=0 \nloops=1)\n Index Cond: (date_trunc('day'::text, action_date) >= \n'2012-10-24 00:00:00'::timestamp without time zone)\n Filter: (((col1)::text = 'S:96'::text) AND (col2 = 657::numeric))\n Total runtime: 84.410 ms\n\n\no_O\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Wed, 24 Oct 2012 14:54:52 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 10/24/2012 02:31 PM, Shaun Thomas wrote:\n>> The main flaw with my example is that it's random. But I swear I'm not\n>> making it up! :)\n\n> And then I find a way to make it non-random. Hooray:\n\nI can't reproduce this. In 9.1 for instance, I get\n\n Sort (cost=9.83..9.83 rows=1 width=23) (actual time=0.029..0.029 rows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_col1_col2 on date_test (cost=0.00..9.82 rows=1 width=23) (actual time=0.021..0.021 rows=0 loops=1)\n Index Cond: (((col1)::text = 'S:96'::text) AND (col2 = 657::numeric))\n Filter: (date_trunc('day'::text, action_date) >= '2012-10-24 00:00:00'::timestamp without time zone)\n Total runtime: 0.086 ms\n\nand those estimates don't change materially with the stats adjustments.\nIf I drop that index altogether, it goes over to this:\n\n Sort (cost=521.83..521.83 rows=1 width=23) (actual time=2.544..2.544 rows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_action_date_trunc_col1 on date_test (cost=0.00..521.82 rows=1 width=23) (actual time=2.536..2.536 rows=0 loops=1)\n Index Cond: ((date_trunc('day'::text, action_date) >= '2012-10-24 00:00:00'::timestamp without time zone) AND ((col1)::text = 'S:96'::text))\n Filter: (col2 = 657::numeric)\n Total runtime: 2.600 ms\n\nSo the planner's conclusions look fairly sane from here. I get about\nthe same results from HEAD, 9.2 branch tip, or 9.1 branch tip.\n\nSo I'm wondering exactly what \"9.1\" version you're using, and also\nwhether you've got any nondefault planner cost parameters.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 26 Oct 2012 15:35:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On 10/26/2012 02:35 PM, Tom Lane wrote:\n\n> So I'm wondering exactly what \"9.1\" version you're using, and also\n> whether you've got any nondefault planner cost parameters.\n\nJust a plain old 9.1.6 from Ubuntu 12.04. Only thing I personally \nchanged was the default_statistics_target. Later, I bumped up shared \nbuffers and work mem, but that just reduced the run time. Still uses the \nbad index.\n\nBut I just noticed the lag in your response. :) It turns out, even \nthough I was substituting 2012-10-24 or 2012-10-25, what I really meant \nwas current_date. That does make all the difference, actually. If the \ndate in the where clause isn't the current date, it comes up with the \nright plan. Even a single day in the past makes it work right. It only \nseems to break on the very edge. This should work:\n\n\nDROP TABLE IF EXISTS date_test;\n\nCREATE TABLE date_test (\n id SERIAL,\n col1 varchar,\n col2 numeric,\n action_date TIMESTAMP WITHOUT TIME ZONE\n);\n\ninsert into date_test (col1, col2, action_date)\nselect 'S:' || (a.num % 10000), a.num % 15000,\n current_date - a.num % 1000\n from generate_series(1,10000000) a(num);\n\ncreate index idx_date_test_action_date_trunc\n on date_test (date_trunc('day', action_date));\n\ncreate index idx_date_test_col1_col2\n on date_test (col1, col2);\n\nset default_statistics_target = 500;\nvacuum analyze date_test;\n\nexplain analyze\nselect *\n from date_test\n where col1 IN ('S:96')\n and col2 = 657\n and date_trunc('day', action_date) >= current_date\n order by id desc, action_date;\n\n\n Sort (cost=9.39..9.39 rows=1 width=23) (actual time=10.679..10.679 \nrows=0 loops=1)\n Sort Key: id, action_date\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_date_test_action_date_trunc on date_test \n(cost=0.01..9.38 rows=1 width=23) (actual time=10.670..10.670 rows=0 \nloops=1)\n Index Cond: (date_trunc('day'::text, action_date) >= \n('now'::text)::date)\n Filter: (((col1)::text = 'S:96'::text) AND 
(col2 = 657::numeric))\n Total runtime: 10.713 ms\n\n\nAnd if this helps:\n\n\nfoo=# select name,setting from pg_settings where setting != boot_val;\n name | setting\n----------------------------+---------------------\n application_name | psql\n archive_command | (disabled)\n client_encoding | UTF8\n default_statistics_target | 500\n default_text_search_config | pg_catalog.english\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n lc_messages | en_US.UTF-8\n lc_monetary | en_US.UTF-8\n lc_numeric | en_US.UTF-8\n lc_time | en_US.UTF-8\n log_file_mode | 0600\n log_line_prefix | %t\n max_stack_depth | 2048\n server_encoding | UTF8\n shared_buffers | 3072\n ssl | on\n transaction_isolation | read committed\n unix_socket_directory | /var/run/postgresql\n unix_socket_permissions | 0777\n wal_buffers | 96\n\nThat's every single setting that's not a default from the compiled PG. \nSome of these were obviously modified by Ubuntu, but I didn't touch \nanything else. I was trying to produce a clean-room to showcase this. \nBut I'm seeing it everywhere I test, even with sane settings.\n\nOur EDB server is doing the same thing on much beefier hardware and \ncorrespondingly increased settings, which is what prompted me to test it \nin plain PG.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 26 Oct 2012 14:57:14 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> But I just noticed the lag in your response. :) It turns out, even \n> though I was substituting 2012-10-24 or 2012-10-25, what I really meant \n> was current_date. That does make all the difference, actually.\n\nAh. [ pokes at that for awhile... ] OK, this has nothing to do with\nfunctional indexes, and everything to do with the edge-case behavior of\nscalarltsel. What you've got is a histogram whose last entry\n(corresponding to the highest observed value of the date) is\ncurrent_date, and the question is what we should assume when estimating\nhow many rows have a value >= that. The answer of course is \"one, plus\nany duplicates\" ... but we don't know how many duplicates there are,\nand what we do know is it's not a particularly large number because the\nvalue isn't present in the most-common-values stats. So the code there\nassumes there aren't any dups.\n\nOnce you have enough histogram resolution for current_date to show up\nas the next-to-last as well as the last histogram entry, then of course\nthe estimate gets a lot better, since we can now tell that there's at\nleast one histogram bin's worth of duplicates.\n\nInterestingly, this is a case where the get_actual_variable_range patch\n(commit 40608e7f, which appeared in 9.0) makes the results worse.\nBefore that, there was a (very arbitrary) lower bound on what we'd\nbelieve as the selectivity of a >= condition, but now, when we know the\nactual upper limit of the variable, we don't clamp the result that way.\nI think the clamp must have been saving you in your previous version,\nbecause it more-or-less-accidentally accounted for the end value not\nbeing unique.\n\nSo the bottom line is that this is a case where you need a lot of\nresolution in the histogram. I'm not sure there's anything good\nwe can do to avoid that. 
I spent a bit of time thinking about whether\nwe could use n_distinct to get some idea of how many duplicates there\nmight be for the endpoint value, but n_distinct is unreliable enough\nthat I can't develop a lot of faith in such a thing. Or we could just\narbitrarily assume some fraction-of-a-histogram-bin's worth of\nduplicates, but that would make the results worse for some people.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 26 Oct 2012 17:08:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 6:08 PM, Tom Lane <[email protected]> wrote:\n>\n> Interestingly, this is a case where the get_actual_variable_range patch\n> (commit 40608e7f, which appeared in 9.0) makes the results worse.\n> Before that, there was a (very arbitrary) lower bound on what we'd\n> believe as the selectivity of a >= condition, but now, when we know the\n> actual upper limit of the variable, we don't clamp the result that way.\n> I think the clamp must have been saving you in your previous version,\n> because it more-or-less-accidentally accounted for the end value not\n> being unique.\n\nIIRC, that patch was performing an index query (index_last) to get the\nreal largest value, right?\n\nHow many duplicates would you think the planner would require to\nchoose another (better) plan?\n\nBecause once you've accessed that last index page, it would be rather\ntrivial finding out how many duplicate tids are in that page and, with\na small CPU cost (no disk access if you don't query other index pages)\nyou could verify the assumption of near-uniqueness.\n\n",
"msg_date": "Fri, 26 Oct 2012 18:19:05 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> Because once you've accessed that last index page, it would be rather\n> trivial finding out how many duplicate tids are in that page and, with\n> a small CPU cost (no disk access if you don't query other index pages)\n> you could verify the assumption of near-uniqueness.\n\nI thought about that too, but I'm not sure how promising the idea is.\nIn the first place, it's not clear when to stop counting duplicates, and\nin the second, I'm not sure we could get away with not visiting the heap\nto check for tuple liveness. There might be a lot of apparent\nduplicates in the index that just represent unreaped old versions of a\nfrequently-updated endpoint tuple. (The existing code is capable of\nreturning a \"wrong\" answer if the endpoint tuple is dead, but I don't\nthink it matters much in most cases. I'm less sure such an argument\ncould be made for dup-counting.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 26 Oct 2012 18:01:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 7:01 PM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> Because once you've accessed that last index page, it would be rather\n>> trivial finding out how many duplicate tids are in that page and, with\n>> a small CPU cost (no disk access if you don't query other index pages)\n>> you could verify the assumption of near-uniqueness.\n>\n> I thought about that too, but I'm not sure how promising the idea is.\n> In the first place, it's not clear when to stop counting duplicates, and\n> in the second, I'm not sure we could get away with not visiting the heap\n> to check for tuple liveness. There might be a lot of apparent\n> duplicates in the index that just represent unreaped old versions of a\n> frequently-updated endpoint tuple. (The existing code is capable of\n> returning a \"wrong\" answer if the endpoint tuple is dead, but I don't\n> think it matters much in most cases. I'm less sure such an argument\n> could be made for dup-counting.)\n\nWould checking the visibility map be too bad? An index page worth of\ntuples should also fit within a page in the visibility map.\n\n",
"msg_date": "Fri, 26 Oct 2012 19:04:56 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 7:04 PM, Claudio Freire <[email protected]> wrote:\n> On Fri, Oct 26, 2012 at 7:01 PM, Tom Lane <[email protected]> wrote:\n>> Claudio Freire <[email protected]> writes:\n>>> Because once you've accessed that last index page, it would be rather\n>>> trivial finding out how many duplicate tids are in that page and, with\n>>> a small CPU cost (no disk access if you don't query other index pages)\n>>> you could verify the assumption of near-uniqueness.\n>>\n>> I thought about that too, but I'm not sure how promising the idea is.\n>> In the first place, it's not clear when to stop counting duplicates, and\n>> in the second, I'm not sure we could get away with not visiting the heap\n>> to check for tuple liveness. There might be a lot of apparent\n>> duplicates in the index that just represent unreaped old versions of a\n>> frequently-updated endpoint tuple. (The existing code is capable of\n>> returning a \"wrong\" answer if the endpoint tuple is dead, but I don't\n>> think it matters much in most cases. I'm less sure such an argument\n>> could be made for dup-counting.)\n>\n> Would checking the visibility map be too bad? An index page worth of\n> tuples should also fit within a page in the visibility map.\n\nScratch that, they're sorted by tid. So it could be lots of pages in\nrandom order.\n\n",
"msg_date": "Fri, 26 Oct 2012 19:05:27 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On 10/26/2012 04:08 PM, Tom Lane wrote:\n\n> So the bottom line is that this is a case where you need a lot of\n> resolution in the histogram. I'm not sure there's anything good\n> we can do to avoid that.\n\nI kinda hoped it wouldn't be something like that. For the particularly \npainful instance, it was easy to replace the index with a better (if \nlarger) dual index and drop the bad old one. But in some cases, I'm \nhaving to maintain two indexes that make me sad:\n\nCREATE TABLE activity (\n activity_id SERIAL NOT NULL PRIMARY KEY,\n account_id BIGINT NOT NULL,\n action_date TIMESTAMP WITHOUT TIME ZONE\n);\n\nCREATE INDEX idx_activity_action_date_account_id\n ON activity (action_date, activity_id);\n\nCREATE INDEX idx_activity_account_id_action_date\n ON activity (activity_id, action_date);\n\nBecause in the first case, we needed the action_date to be first for \nanalytics that *don't* supply account_id. But in the second case, we \nneed the account_id first, so we can get the most recent action(s) for \nthat account without a very expensive backwards index scan on the first \nindex.\n\nI know that current_date seems like an edge case, but I can't see how \ngetting the most recent activity for something is an uncommon activity. \nTip tracking is actually the most frequent pattern in the systems I've \nseen. Admittedly, those are almost always high TPS trading systems.\n\nAt this point, I'm almost willing to start putting in optimization \nfences to force it along the right path. Which is gross, because that's \neffectively no better than Oracle hints. But I also don't like setting \nmy statistics to 5000+ on problematic column/index combos to get the \nright heuristics, or having semi-duplicate multi-column indexes to \nexploit sorting performance.\n\nI mean, I get it. I just wonder if this particular tweak isn't more of a \nregression than initially thought.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Mon, 29 Oct 2012 10:26:57 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 5:08 PM, Tom Lane <[email protected]> wrote:\n> So the bottom line is that this is a case where you need a lot of\n> resolution in the histogram. I'm not sure there's anything good\n> we can do to avoid that. I spent a bit of time thinking about whether\n> we could use n_distinct to get some idea of how many duplicates there\n> might be for the endpoint value, but n_distinct is unreliable enough\n> that I can't develop a lot of faith in such a thing. Or we could just\n> arbitarily assume some fraction-of-a-histogram-bin's worth of\n> duplicates, but that would make the results worse for some people.\n\nI looked at this a bit. It seems to me that the root of this issue is\nthat we aren't distinguishing (at least, not as far as I can see)\nbetween > and >=. ISTM that if the operator is >, we're doing exactly\nthe right thing, but if it's >=, we're giving exactly the same\nestimate that we would give for >. That doesn't seem right.\n\nWorse, I suspect that in this case we're actually giving a smaller\nestimate for >= than we would for =, because = would estimate as if we\nwere searching for an arbitrary non-MCV, while >= acts like > and\nsays, hey, there's nothing beyond the end.\n\nShouldn't there be a separate estimator for scalarlesel? Or should\nthe existing estimator be adjusted to handle the two cases\ndifferently?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 14 Nov 2012 15:36:19 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "On Wed, Nov 14, 2012 at 5:36 PM, Robert Haas <[email protected]> wrote:\n> Shouldn't there be a separate estimator for scalarlesel? Or should\n> the existing estimator be adjusted to handle the two cases\n> differently?\n\nWoulnd't adding eqsel to scalar(lt|gt)sel work? (saving duplication\nwith mvc_selectivity)\n\n",
"msg_date": "Wed, 14 Nov 2012 17:55:39 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Shouldn't there be a separate estimator for scalarlesel? Or should\n> the existing estimator be adjusted to handle the two cases\n> differently?\n\nWell, it does handle it differently to some extent, in that the operator\nitself is invoked when checking the MCV values, so we get the right\nanswer for those.\n\nThe fact that there's not separate estimators for < and <= is something\nwe inherited from Berkeley, so I can't give the original rationale for\ncertain, but I think the notion was that the difference is imperceptible\nwhen dealing with a continuous distribution. The question is whether\nyou think that the \"=\" case contributes any significant amount to the\nprobability given that the bound is not one of the MCV values. (If it\nis, the MCV check will have accounted for it, so adding anything would\nbe wrong.) I guess we could add 1/ndistinct or something like that,\nbut I'm not convinced that will really make the estimates better, mainly\nbecause ndistinct is none too reliable itself.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 14 Nov 2012 16:00:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
}
] |
[
{
"msg_contents": "Maciek Sakrejda wrote:\n\n> Before the switch, everything was running fine.\n\nOne thing to look for is a connection stuck in \"idle in transaction\"\nor old prepared transactions in pg_prepared_xacts. Either will cause\nall sorts of problems, but if you are using serializable transactions\nthe error you are seeing is often the first symptom. I'm starting to\nthink we should add something about that to the hint.\n\nOn the other hand, it could just be that you need to increase the\nsetting the hint currently references. For complex databases it is\ndefinitely on the low side. It is really low if you have tables with\nhundreds of partitions which might get referenced by a single query.\n\n-Kevin\n\n",
"msg_date": "Wed, 24 Oct 2012 16:29:57 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Out of shared mem on new box with more mem, 9.1.5 ->\n 9.1.6"
}
] |
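The two checks Kevin recommends above, written out as queries. This is a sketch against the 9.1-era catalogs the thread concerns; on 9.2 and later, `procpid` and `current_query` in `pg_stat_activity` became `pid` plus `state`/`query`:

```sql
-- Sessions stuck "idle in transaction", oldest first:
SELECT procpid, usename, xact_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
 ORDER BY xact_start;

-- Prepared transactions that were never committed or rolled back:
SELECT gid, prepared, owner, database
  FROM pg_prepared_xacts
 ORDER BY prepared;
```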
[
{
"msg_contents": "Shaun Thomas wrote:\n\n> Update the config and start as a slave, and it's the same as a\n> basebackup.\n\n... as long as the rsync was bracketed by calls to pg_start_backup()\nand pg_stop_backup().\n\n-Kevin\n\n",
"msg_date": "Thu, 25 Oct 2012 08:10:03 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On 10/25/2012 07:10 AM, Kevin Grittner wrote:\n\n> ... as long as the rsync was bracketed by calls to pg_start_backup()\n> and pg_stop_backup().\n\nOr they took it during a filesystem snapshot, or shut the database down.\n\nI thought that the only thing start/stop backup did was mark the \nbeginning and end transaction logs for the duration of the backup so \nthey could be backed up separately for a minimal replay.\n\nAn rsync doesn't need that, because it's binary compatible. You get two \nexact copies of the database, provided data wasn't changing. That's easy \nenough to accomplish, really.\n\nOr is there some embedded magic in streaming replication that requires \nstart/stop backup? I've never had problems starting slaves built from an \nrsync before.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Thu, 25 Oct 2012 07:47:58 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
},
{
"msg_contents": "On Thu, Oct 25, 2012 at 9:47 AM, Shaun Thomas <[email protected]> wrote:\n>> ... as long as the rsync was bracketed by calls to pg_start_backup()\n>> and pg_stop_backup().\n>\n>\n> Or they took it during a filesystem snapshot, or shut the database down.\n>\n> I thought that the only thing start/stop backup did was mark the beginning\n> and end transaction logs for the duration of the backup so they could be\n> backed up separately for a minimal replay.\n>\n> An rsync doesn't need that, because it's binary compatible. You get two\n> exact copies of the database, provided data wasn't changing. That's easy\n> enough to accomplish, really.\n ... as long as the rsync was bracketed by calls to pg_start_backup()\n and pg_stop_backup().\n\n\nOr they took it during a filesystem snapshot, or shut the database down.\n\nI thought that the only thing start/stop backup did was mark the\nbeginning and end transaction logs for the duration of the backup so\nthey could be backed up separately for a minimal replay.\n\nAn rsync doesn't need that, because it's binary compatible. You get\ntwo exact copies of the database, provided data wasn't changing.\nThat's easy enough to accomplish, really.\nWell, that's the thing. Without pg_start_backup, the database is\nchanging and rsync will not make a perfect copy. With pg_start_backup,\nthe replica will replay the WAL from the start_backup point, and any\ndifference rsync left will be ironed out.\n\nThat's why I say:\n\nrsync - the first one takes a long time\nstart backup\nrsync - this one will take a lot less\nstop backup\n\n",
"msg_date": "Thu, 25 Oct 2012 11:27:51 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to upgrade from 9.1 to 9.2 with replication?"
}
] |
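Claudio's two-pass procedure, with the server-side bracketing spelled out. A sketch rather than a tested recipe; the rsync steps are placeholders for whatever copy command fits the environment:

```sql
-- First rsync pass runs before this, while the master is live (slow).

-- On the master, just before the second (much smaller) rsync pass:
SELECT pg_start_backup('clone', true);  -- true = take a fast checkpoint

-- ... second rsync of the data directory runs here ...

-- After it finishes:
SELECT pg_stop_backup();
-- WAL replay from the backup-label point irons out anything the copy
-- caught mid-change, which is Kevin's and Claudio's point above.
```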
[
{
"msg_contents": "Böckler Andreas wrote:\n\n> I've played with seq_page_cost and enable_seqscan already, but you\n> have to know the right values before SELECT to get good results ;)\n\nThe idea is to model actual costs on your system. You don't show\nyour configuration or describe your hardware, but you show an\nestimate of retrieving over 4000 rows through an index and describe a\nresponse time of 4 seconds, so you must have some significant part of\nthe data cached.\n\nI would see how the workload behaves with the following settings:\n\neffective_cache_size = <your shared_buffers setting plus what the OS\n shows as cached pages>\nseq_page_cost = 1\nrandom_page_cost = 2\ncpu_tuple_cost = 0.05\n\nYou can set these in a session and check the plan with EXPLAIN. Try\nvarious other important important queries with these settings and\nvariations on them. Once you hit the right factors to model your\nactual costs, the optimizaer will make better choices without needing\nto tinker with it each time.\n\n-Kevin\n\n",
"msg_date": "Thu, 25 Oct 2012 14:22:26 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
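Kevin's "set these in a session and check the plan" loop, spelled out. The values are the ones he proposes; `'10GB'` is used only because Andreas reports that effective_cache_size later in the thread, and nothing here persists past the session:

```sql
SET effective_cache_size = '10GB';  -- shared_buffers + OS-cached pages
SET seq_page_cost = 1;
SET random_page_cost = 2;
SET cpu_tuple_cost = 0.05;

EXPLAIN SELECT ...;  -- re-run the problem query and compare the plan
```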
{
    "msg_contents": "Hi,\n\n\nAm 25.10.2012 um 20:22 schrieb Kevin Grittner:\n\n> \n> The idea is to model actual costs on your system. You don't show\n> your configuration or describe your hardware, but you show an\n> estimate of retrieving over 4000 rows through an index and describe a\n> response time of 4 seconds, so you must have some significant part of\n> the data cached.\nSure my effective_cache_size 10 GB\nBut my right Table has the size of 1.2 TB (yeah Terra) at the moment (partitioned a 40GB slices) and has 3 * 10^9 records\n\nMy left table has only the size of 227MB and 1million records. Peanuts.\n> I would see how the workload behaves with the following settings:\n> \n> effective_cache_size = <your shared_buffers setting plus what the OS\n> shows as cached pages>\n> seq_page_cost = 1\n> random_page_cost = 2\n> cpu_tuple_cost = 0.05\n> \n> You can set these in a session and check the plan with EXPLAIN. Try\n> various other important important queries with these settings and\n> variations on them. Once you hit the right factors to model your\n> actual costs, the optimizaer will make better choices without needing\n> to tinker with it each time.\n\n i've played with that already ….\n\nNESTED LOOP -> GOOD\nSEQSCAN -> VERY BAD\n\nSET random_page_cost = 4;\n2012-08-14' AND '2012-08-30' -> NESTED LOOP\n2012-08-13' AND '2012-08-30' -> SEQSCAN\nSET random_page_cost = 2;\n2012-08-14' AND '2012-08-30' -> NESTED LOOP\n2012-08-07' AND '2012-08-30' -> NESTED LOOP\n2012-08-06' AND '2012-08-30' -> SEQSCAN\nSET random_page_cost = 1;\n2012-08-14' AND '2012-08-30' -> NESTED LOOP\n2012-08-07' AND '2012-08-30' -> NESTED LOOP\n2012-07-07' AND '2012-08-30' -> NESTED LOOP\n2012-07-06' AND '2012-08-30' -> SEQSCAN\n\nThe thing is ..\n- You can alter what you want. The planner will switch at a certain time range.\n- There is not one case, where the SEQSCAN-Method will be better .. It's not possible.\n\nSo the only way to tell the planner that he's doomed is \nSET enable_seqscan=0\nwhich is not very elegant. (Query Hints would be BTW jehovah!)\n\nYou would be forced to write something like this:\nvar lastValueEnable_seqscan = \"SHOw enable_seqscan\"\nSET enable_seqscan=0;\nSELECT ...\nSET enable_seqscan=lastValueEnable_seqscan;\n\nKind regards\n\nAndy\n\n-- \nAndreas Böckler\[email protected]\n\n\n",
"msg_date": "Fri, 26 Oct 2012 16:37:33 +0200",
"msg_from": "=?iso-8859-1?Q?B=F6ckler_Andreas?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
    "msg_contents": "On Fri, Oct 26, 2012 at 04:37:33PM +0200, Böckler Andreas wrote:\n> Hi,\n> \n> \n> Am 25.10.2012 um 20:22 schrieb Kevin Grittner:\n> \n> > \n> > The idea is to model actual costs on your system. You don't show\n> > your configuration or describe your hardware, but you show an\n> > estimate of retrieving over 4000 rows through an index and describe a\n> > response time of 4 seconds, so you must have some significant part of\n> > the data cached.\n> Sure my effective_cache_size 10 GB\n> But my right Table has the size of 1.2 TB (yeah Terra) at the moment (partitioned a 40GB slices) and has 3 * 10^9 records\n> \n> My left table has only the size of 227MB and 1million records. Peanuts.\n> > I would see how the workload behaves with the following settings:\n> > \n> > effective_cache_size = <your shared_buffers setting plus what the OS\n> > shows as cached pages>\n> > seq_page_cost = 1\n> > random_page_cost = 2\n> > cpu_tuple_cost = 0.05\n> > \n> > You can set these in a session and check the plan with EXPLAIN. Try\n> > various other important important queries with these settings and\n> > variations on them. Once you hit the right factors to model your\n> > actual costs, the optimizaer will make better choices without needing\n> > to tinker with it each time.\n> \n> i've played with that already ….\n> \n> NESTED LOOP -> GOOD\n> SEQSCAN -> VERY BAD\n> \n> SET random_page_cost = 4;\n> 2012-08-14' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-13' AND '2012-08-30' -> SEQSCAN\n> SET random_page_cost = 2;\n> 2012-08-14' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-07' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-06' AND '2012-08-30' -> SEQSCAN\n> SET random_page_cost = 1;\n> 2012-08-14' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-07' AND '2012-08-30' -> NESTED LOOP\n> 2012-07-07' AND '2012-08-30' -> NESTED LOOP\n> 2012-07-06' AND '2012-08-30' -> SEQSCAN\n> \n> The thing is ..\n> - You can alter what you want. The planner will switch at a certain time range.\n> - There is not one case, where the SEQSCAN-Method will be better .. It's not possible.\n> \n> So the only way to tell the planner that he's doomed is \n> SET enable_seqscan=0\n> which is not very elegant. (Query Hints would be BTW jehovah!)\n> \n> You would be forced to write something like this:\n> var lastValueEnable_seqscan = \"SHOw enable_seqscan\"\n> SET enable_seqscan=0;\n> SELECT ...\n> SET enable_seqscan=lastValueEnable_seqscan;\n> \n> Kind regards\n> \n> Andy\n> \n\nHi Andy,\n\nYou have the sequential_page_cost = 1 which is better than or equal to\nthe random_page_cost in all of your examples. It sounds like you need\na sequential_page_cost of 5, 10, 20 or more.\n\nRegards,\nKen\n\n",
"msg_date": "Fri, 26 Oct 2012 09:55:13 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "Hi Ken,\n\nAm 26.10.2012 um 16:55 schrieb [email protected]:\n\n> Hi Andy,\n> \n> You have the sequential_page_cost = 1 which is better than or equal to\n> the random_page_cost in all of your examples.\n> It sounds like you need\n> a sequential_page_cost of 5, 10, 20 or more.\n\nYou're right it was sequential_page_cost = 1 because it's really irrelevant what I do here:\nset random_page_cost=2;\nset seq_page_cost=5;\n'2012-05-01' AND '2012-08-30' -> NESTEDLOOP\n'2012-04-01' AND '2012-08-30' -> SEQSCAN\n\na) there will be a point, where things will go bad \n this is like patching up a roof 'till you find the next hole instead of making it right at the beginning of construction process\nb) they high seq costs might be true for that table (partition at 40gb), but not for the rest of the database \n Seqscan-Costs per table would be great.\n\nRegards,\n\nAndy\n\n\n-- \nAndreas Böckler\[email protected]\n\n",
"msg_date": "Fri, 26 Oct 2012 17:15:05 +0200",
"msg_from": "=?iso-8859-1?Q?B=F6ckler_Andreas?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 05:15:05PM +0200, Böckler Andreas wrote:\n> Hi Ken,\n> \n> Am 26.10.2012 um 16:55 schrieb [email protected]:\n> \n> > Hi Andy,\n> > \n> > You have the sequential_page_cost = 1 which is better than or equal to\n> > the random_page_cost in all of your examples.\n> > It sounds like you need\n> > a sequential_page_cost of 5, 10, 20 or more.\n> \n> You're right it was sequential_page_cost = 1 because it's really irrelevant what I do here:\n> set random_page_cost=2;\n> set seq_page_cost=5;\n> '2012-05-01' AND '2012-08-30' -> NESTEDLOOP\n> '2012-04-01' AND '2012-08-30' -> SEQSCAN\n> \n> a) there will be a point, where things will go bad \n> this is like patching up a roof 'till you find the next hole instead of making it right at the beginning of construction process\n> b) they high seq costs might be true for that table (partition at 40gb), but not for the rest of the database \n> Seqscan-Costs per table would be great.\n> \n> Regards,\n> \n> Andy\n> \n\nHi Andy,\n\nYou can set them per tablespace. Maybe you could put the appropriate tables\nthat need the higher costing on the same one.\n\nRegards,\nKen\n\n",
"msg_date": "Fri, 26 Oct 2012 10:30:49 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 8:15 AM, Böckler Andreas <[email protected]> wrote:\n> Hi Ken,\n>\n> Am 26.10.2012 um 16:55 schrieb [email protected]:\n>\n>> Hi Andy,\n>>\n>> You have the sequential_page_cost = 1 which is better than or equal to\n>> the random_page_cost in all of your examples.\n>> It sounds like you need\n>> a sequential_page_cost of 5, 10, 20 or more.\n>\n> You're right it was sequential_page_cost = 1 because it's really irrelevant what I do here:\n> set random_page_cost=2;\n> set seq_page_cost=5;\n> '2012-05-01' AND '2012-08-30' -> NESTEDLOOP\n> '2012-04-01' AND '2012-08-30' -> SEQSCAN\n>\n> a) there will be a point, where things will go bad\n\nSure. And there truly is some point at which the sequential scan\nactually will become faster.\n\n> this is like patching up a roof 'till you find the next hole instead of making it right at the beginning of construction process\n\nWe are not at the beginning of the construction process. You are\nalready living in the house.\n\nVersion 9.3 is currently under construction. Maybe this will be a fix\nfor this problem in that release. The hackers mailing list would be\nthe place to discuss that.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 26 Oct 2012 10:58:19 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
}
] |
[
{
    "msg_contents": "Böckler Andreas wrote:\n> Am 25.10.2012 um 20:22 schrieb Kevin Grittner:\n\n>> The idea is to model actual costs on your system. You don't show\n>> your configuration or describe your hardware, but you show an\n>> estimate of retrieving over 4000 rows through an index and\n>> describe a response time of 4 seconds, so you must have some\n>> significant part of the data cached.\n> Sure my effective_cache_size 10 GB\n> But my right Table has the size of 1.2 TB (yeah Terra) at the\n> moment (partitioned a 40GB slices) and has 3 * 10^9 records\n\nYou're getting up to a third of the size of what I've managed, so\nwe're in the same ballpark. I've gone to hundreds of millions of rows\nin a table without partitioning with good performance. I realize in\nraw rowcount for one table you're at almost ten times what I've run\nthat way, but I have no reason to think that it falls over between\nthose points. There are situations where partitioning helps, but I\nhave not found raw rowcount to be a very good basis for making the\ncall. What are your reasons for going that way?\n\n> My left table has only the size of 227MB and 1million records.\n> Peanuts.\n\nAbsolutely.\n\n>> I would see how the workload behaves with the following settings:\n>> \n>> effective_cache_size = <your shared_buffers setting plus what the\n>> OS shows as cached pages>\n>> seq_page_cost = 1\n>> random_page_cost = 2\n>> cpu_tuple_cost = 0.05\n>> \n>> You can set these in a session and check the plan with EXPLAIN.\n>> Try various other important important queries with these settings\n>> and variations on them. Once you hit the right factors to model\n>> your actual costs, the optimizaer will make better choices without\n>> needing to tinker with it each time.\n> \n> i've played with that already ….\n> \n> NESTED LOOP -> GOOD\n> SEQSCAN -> VERY BAD\n> \n> SET random_page_cost = 4;\n> 2012-08-14' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-13' AND '2012-08-30' -> SEQSCAN\n> SET random_page_cost = 2;\n> 2012-08-14' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-07' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-06' AND '2012-08-30' -> SEQSCAN\n> SET random_page_cost = 1;\n> 2012-08-14' AND '2012-08-30' -> NESTED LOOP\n> 2012-08-07' AND '2012-08-30' -> NESTED LOOP\n> 2012-07-07' AND '2012-08-30' -> NESTED LOOP\n> 2012-07-06' AND '2012-08-30' -> SEQSCAN\n\nWhat impact did setting cpu_tuple_cost have?\n\n> The thing is ..\n> - You can alter what you want. The planner will switch at a certain\n> time range.\n> - There is not one case, where the SEQSCAN-Method will be better ..\n> It's not possible.\n\nI would be interested to see what you consider to be the proof of\nthat. In most benchmarks where people have actually measured it, the\nseqscan becomes faster when you are selecting more than about 10% of\na table, since the index scan will be jumping all over the disk to\nread the index pages and the actual data in the heap, which a seqscan\ncan take advantage of the OS's readahead. (Perhaps you need to tweak\nthat OS setting?) Of course, the more the data is cached, the less\npenalty there is for random access and the less attractive seqscans\nbecome.\n\nAny attempt to force plans using sequential scans to always be\nignored is sure to make some types of queries slower -- sometimes\nmuch slower. Hints or other ways to force a plan are far inferior to\nmodelling costs better. You might want to give that a try.\n\n-Kevin\n\n",
"msg_date": "Fri, 26 Oct 2012 10:59:28 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
}
] |
[
{
"msg_contents": "[email protected] wrote:\n\n> You have the sequential_page_cost = 1 which is better than or equal\n> to the random_page_cost in all of your examples. It sounds like you\n> need a sequential_page_cost of 5, 10, 20 or more.\n\nThe goal should be to set the cost factors so that they model actual\ncosts for you workload in your environment. In what cases have you\nseen the sequential scan of a large number of adjacent pages from\ndisk take longer than randomly reading the same number of pages from\ndisk? (I would love to see the bonnie++ number for that, if you have\nthem.)\n\n-Kevin\n\n",
"msg_date": "Fri, 26 Oct 2012 11:30:05 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
},
{
"msg_contents": "On Fri, Oct 26, 2012 at 8:30 AM, Kevin Grittner <[email protected]> wrote:\n> [email protected] wrote:\n>\n>> You have the sequential_page_cost = 1 which is better than or equal\n>> to the random_page_cost in all of your examples. It sounds like you\n>> need a sequential_page_cost of 5, 10, 20 or more.\n>\n> The goal should be to set the cost factors so that they model actual\n> costs for you workload in your environment.\n\nUnfortunately the random_page_cost is getting multiplied by an\nestimated page count which is 4 orders of magnitude too high.\nrandom_page_cost and seq_page_cost (and enable_seqscan) might not be\nthe right knobs, but they are the knobs that currently exist.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 26 Oct 2012 13:14:14 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
}
] |
[
{
"msg_contents": "Hi,\nI have a tree-structure managed with ltree and gist index.\nSimplified schema is\n\nCREATE TABLE crt (\n\tidcrt INT NOT NULL,\n\t...\n\tpathname LTREE\n)\nidcrt primary key and other index ix_crt_pathname on pathname with gist\n\nCREATE TABLE doc (\n\tiddoc INT NOT NULL, ...)\niddoc primary key\n\nCREATE TABLE folder_document (\n\tid_folder int not null,\n\tid_document int not null,\n\t...\n\tpath_folder ltree not null\n);\nid_folder , id_document are primary key\nix_folder_document_path_folder on path_folder with gist\n\nwhen enable_bitmapscan is set on query go on 1000 seconds, when I turned\noff bitmapscan query go on 36 seconds.\n\nI've noticed query use all buffer with ix_folder_document_path_folder,\nusing contrib pg_buffercache.\n\nTable crt have about 1.3 milion row folder_document 15 milion row and doc\nabout 8 milion row.\n\nQuery plan with enable_bitmapscan = ON is http://explain.depesz.com/s/d97\nQuery plan with enable_bitmapscan = OFF is http://explain.depesz.com/s/wgp\n\nAll query are execute after reboot machine.\n\nother parameter set\nshared_buffer = 1GB\nwork_mem = 128MB\nmaintenance_work_mem = 512MB\neffective_cache_size = 1GB\n\nI've test same query on PostgreSQL 9.1.5 and query go ok with\nenable_bitmapscan = on.\n\nI see in release note 9.0.5\n\"Fix performance problem when constructing a large, lossy bitmap\", is same\nproblem with 9.0.10?\n\n\nMy enviroment\nLinux OpenSUSE 12.2 x64\nPostgreSQL release 9.0.10 compiled from source\nPostgreSQL release 9.1.5 from official repository\n\nProcessor Intel Core i5 2.8GHz and 8GB RAM\n\n\n\n",
"msg_date": "Fri, 26 Oct 2012 17:30:23 +0200",
"msg_from": "Alberto Marchesini <[email protected]>",
"msg_from_op": true,
"msg_subject": "BAD performance with enable_bitmapscan = on with Postgresql 9.0.X\n\t(X = 3 and 10)"
}
] |
[
{
"msg_contents": "Böckler Andreas wrote:\n\n> b) they high seq costs might be true for that table (partition at\n> 40gb), but not for the rest of the database Seqscan-Costs per\n> table would be great.\n\nYou can set those per tablespace. Again, with about 40 spindles in\nour RAID, we got about ten times the speed with a sequential scan as\nrandom access, and an index scan has to hit more pages (index and\nheap rather than just the heap), so you can easily shoot yourself in\nthe foot by assuming that accessing a large portion of the table by\nindex is faster.\n\nReally, if you stop focusing on what you think the solution is, and\nprovide a more clear statement of your problem, with sufficient\ndatail, you are likely to get a real solution.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n-Kevin\n\n",
"msg_date": "Fri, 26 Oct 2012 11:41:12 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query-Planer from 6seconds TO DAYS"
}
] |
[
{
    "msg_contents": "Hey guys,\n\nI have a pretty nasty heads-up. If you have hardware using an Intel XEON \nand a newer Linux kernel, you may be experiencing very high CPU latency. \nYou can check yourself:\n\ncat /sys/devices/system/cpu/cpuidle/current_driver\n\nIf it says intel_idle, the Linux kernel will *aggressively* put your CPU \nto sleep. We definitely noticed this, and it's pretty darn painful. But \nit's *more* painful in your asynchronous, standby, or otherwise less \nbusy nodes. Why?\n\nAs you can imagine, the secondary nodes don't get much activity, so \nspend most of their time sleeping. Now the CPU has a lot more sleep \ntime, and wake latency while trying to copy data or process new WAL traffic.\n\nTo fix this, you must actually hint to, or outright disable, the driver \nby picking your own C-state, probably the one you wanted in the BIOS in \nthe first place. We did this by adding the following options to \nGRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, but your distro may differ.\n\nintel_idle.max_cstate=0 processor.max_cstate=0 idle=mwait\n\nThen reboot. Here are the benefits we got:\n\n* %util difference between backing device and DRBD went down by 30-40% \non our replicating nodes.\n* TCP RTT is almost 10x faster.\n\nI'm totally not kidding about that last one. Due to the time necessary \nto wake a CPU to handle the network traffic, latency was massively \nincreased using the intel_idle driver. Our RTT average was 0.375ms on a \n10G link before. Now it's 0.04ms after using the settings above.\n\nConsider this a PSA. DRBD is unfairly being blamed for bad performance \nwith the intel_idle cpuidle driver in newer kernels! If you have DRBD on \na newer Intel system, I highly recommend you make the above changes, \nespecially since it directly affects your replication speed.\n\nIt took us days to figure this out, so I figured I'd share.\n\nThanks, everyone!\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 26 Oct 2012 12:58:57 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "PSA: New Kernels and intel_idle cpuidle Driver!"
}
] |
[
{
    "msg_contents": "All... first let me say thank you for this forum.... I am new to it and\nrelatively new to postgres, more of a sysadmin than a DBA, but let me\nexplain my issue. I'll try also to post relevant information as well.\n\nOur IT group took over an app that we have running using postgres and it has\nbeen on version 8.2.11 since we acquired it. It is time to get current, so\nI have created instances of our production database that mirror exact\nhardware for our existing implementation on version 8.2.11 (running Fedora\nCore 8 - wow I know) and also version 9.1.6 on Fedora 17. I am able to\nmimic the production 8.2 environment exactly without any of the load of\nproduction and the same for the new 9.1 environment so there is no\nperverting of numbers based on load that I can't control\n\nMachines are Cloud based images running 4 (dual Core) Processors, with 15GB\nof memory... AMAZON m1.Xlarge boxes - 64 bit OS.\n\nI'm running my query using PSQL from the server\n\nHere is what I discovered. I have this Query:\n\n SELECT s.customernumber AS \"Cust Num\",\n s.name AS \"Site\",\n UPPER( p.name ) AS \"Product\",\n UPPER( ii.lotnumber ) AS \"Lot Number\",\n SUM( ii.quantityremaining ) AS \"On Hand\"\n FROM inventoryitems ii\n INNER JOIN inventories i ON i.inventoryid = ii.inventoryid\n INNER JOIN sites s ON s.siteid = i.siteid\n INNER JOIN accounts a ON a.accountid = s.accountid\n INNER JOIN products p ON p.productid = ii.productid \n WHERE a.customernumber = 'DS-1007'\n GROUP BY s.customernumber, s.name, UPPER( p.name ), UPPER( ii.lotnumber )\n HAVING SUM( ii.quantityremaining ) > 0\n ORDER BY s.name, UPPER( p.name );\n\nEXPLAIN ANALYZE OUTPUT on 8.2.11 is as follows:\n http://explain.depesz.com/s/JdW\n-or-\n\n(20 rows)\n\nEXPLAIN ANALYZE OUTPUT on 9.1.6 is as follows:\n http://explain.depesz.com/s/QZVF\n\nI KNOW, I KNOW the difference is VERY small in terms of actual time, but\npercentage wise this is statistically relevant and we are under a crunch to\nmake our application perform better.\n\nIn looking at the explain analyze output, it appears that in every case, 9.1\nout performed the 8.2.11 in actually getting the data, but the NESTED LOOP\ntime is slow enough to make the Total Runtime but as much as a 10th of a\nsecond slower on average...\n\nI have tried tweaking every parameter I can think of and here are some of\nthe relevant Parameters from my POSTGRESQL.CONF file (and both machines are\nrunning with KERNEL value \" sysctl -w kernel.shmmax=665544320\" ) \n\n9.1.6 values\nmax_connections = 250\nshared_buffers = 800MB\ntemp_buffers = 8MB\nwork_mem = 10MB\nmaintenance_work_mem = 100MB\nwal_buffers = 16MB\neffective_cache_size = 8GB\n\n8.2.11 values\nmax_connections = 250\nshared_buffers = 600MB\ntemp_buffers = 1024\nwork_mem = 6MB\nmaintenance_work_mem = 100MB\nwal_buffers = 64kB\neffective_cache_size = 8GB\n\nIn my first attempt at migrating to 9.1 I had a different lc_collate value\nat the default and the 9.1 query was running at around 2500 to 2600 ms and\nthat was huge... When I re-init'd my DB with the proper lc_locale set, I\nexpected my issue to be gone, and while it was to the extent of performance\nbefore, it is still slower consistently. \n\nAGAIN, the time difference is in the nested loop nodes themselves, not in\nthe Index Scan's. I don't understand this...\n\nAny help will be greatly appreciated.\n\nRob Cron\[email protected]\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slower-Performance-on-Postgres-9-1-6-vs-8-2-11-tp5729749.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Fri, 26 Oct 2012 11:30:00 -0700 (PDT)",
"msg_from": "robcron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slower Performance on Postgres 9.1.6 vs 8.2.11"
},
{
"msg_contents": "robcron <[email protected]> writes:\n> Our IT group took over an app that we have running using postgres and it has\n> been on version 8.2.11 since we acquired it. It is time to get current, so\n> I have created instances of our production database that mirror exact\n> hardware for our existing implementation on version 8.2.11 (running Fedora\n> Core 8 - wow I know) and also version 9.1.6 on Fedora 17. I am able to\n> mimic the production 8.2 environment exactly without any of the load of\n> production and the same for the new 9.1 environment so there is no\n> perverting of numbers based on load that I can't control\n\n> Machines are Cloud based images running 4 (dual Core) Processors, with 15GB\n> of memory... AMAZON m1.Xlarge boxes - 64 bit OS.\n\nHm ... Amazon cloud is not exactly known for providing rock-stable\nperformance environment, but anyway the first thing I would have guessed\nat, seeing that the plans are basically the same, was a non-C locale\nsetting. Another thing to check is whether the new machine has higher\ntiming overhead --- is the speed difference the same when you just run\nthe query, rather than EXPLAIN ANALYZE'ing it? (If not,\ncontrib/pg_test_timing from 9.2 or later might yield useful data.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 26 Oct 2012 15:58:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slower Performance on Postgres 9.1.6 vs 8.2.11"
},
{
"msg_contents": "Sorry,\n\nAgain, I'm really new and so don't know how I would go about getting results\nfrom \"contrib/pg_test_timing\"\n\nIs this something that can be done from psql prompt, or will I need my\ndevelopers to get involved and write me something...?\n\nSorry for being such a newbie....:)\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slower-Performance-on-Postgres-9-1-6-vs-8-2-11-tp5729749p5729764.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Fri, 26 Oct 2012 13:26:26 -0700 (PDT)",
"msg_from": "robcron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slower Performance on Postgres 9.1.6 vs 8.2.11"
},
{
"msg_contents": "Okay, so I took EXPLAIN ANALYZE off and made sure that timing is on \"psql\" \ncommand \\timing shows \n\nTiming = on\n\nRun the query several times..\n\n9.1.6 runs this query an average of 354 ms\n8.2.11 runs this query an average of 437 ms\n\nSo 9.1 IS FASTER\n\nWhy is EXPLAIN ANALYZE showing the reverse...of that...?\n\nEvidently, since I fixed the database Collation ( set to a value of \"C\") it\nhas been faster but I got locked into looking at the EXPLAIN ANALYZE\nresults...\n\nMMMM very curious.\n\nRob\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slower-Performance-on-Postgres-9-1-6-vs-8-2-11-tp5729749p5729768.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Fri, 26 Oct 2012 13:56:56 -0700 (PDT)",
"msg_from": "robcron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slower Performance on Postgres 9.1.6 vs 8.2.11"
},
{
"msg_contents": "Hello\n\n2012/10/26 robcron <[email protected]>:\n> Okay, so I took EXPLAIN ANALYZE off and made sure that timing is on \"psql\"\n> command \\timing shows\n>\n> Timing = on\n>\n> Run the query several times..\n>\n> 9.1.6 runs this query an average of 354 ms\n> 8.2.11 runs this query an average of 437 ms\n>\n> So 9.1 IS FASTER\n>\n> Why is EXPLAIN ANALYZE showing the reverse...of that...?\n>\n> Evidently, since I fixed the database Collation ( set to a value of \"C\") it\n> has been faster but I got locked into looking at the EXPLAIN ANALYZE\n> results...\n>\n> MMMM very curious.\n\n9.1 EXPLAIN ANALYZE collect significantly more information about\nexecution - so there can be higher overhead\n\nRegards\n\nPavel\n\n>\n> Rob\n>\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Slower-Performance-on-Postgres-9-1-6-vs-8-2-11-tp5729749p5729768.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sat, 27 Oct 2012 06:49:36 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slower Performance on Postgres 9.1.6 vs 8.2.11"
},
{
"msg_contents": "Thank you all for your replies.\n\nI did figure out what is going on.\n\n9.1 is indeed faster than 8.2.11 so we are good to go forward.\n\nThank you again\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slower-Performance-on-Postgres-9-1-6-vs-8-2-11-tp5729749p5729991.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Mon, 29 Oct 2012 14:40:08 -0700 (PDT)",
"msg_from": "robcron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slower Performance on Postgres 9.1.6 vs 8.2.11"
}
] |
[
{
    "msg_contents": "I am configuring streaming replication with hot standby\nwith PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\nPostgreSQL was compiled from source.\n\nIt works fine, except that starting the standby took for ever:\nit took the system more than 80 minutes to replay 48 WAL files\nand connect to the primary.\n\nCan anybody think of an explanation why it takes that long?\n\nThis is decent hardware: 24 cores of AMD Opteron 6174, 128 GB RAM,\nNetApp SAN attached with 8 GBit Fibrechannel (ext4 file system).\nAn identical system performed fine in performance tests.\n\nHere is the log; I have edited it for readability:\n\n2012-10-29 09:22:22.945 database system was interrupted; last known up\nat 2012-10-26 01:11:59 CEST\n2012-10-29 09:22:22.945 creating missing WAL directory\n\"pg_xlog/archive_status\"\n2012-10-29 09:22:22.947 entering standby mode\n2012-10-29 09:22:23.434 restored log file \"00000001000001D1000000C4\"\nfrom archive\n2012-10-29 09:22:23.453 redo starts at 1D1/C4000020\n2012-10-29 09:22:25.847 restored log file \"00000001000001D1000000C5\"\nfrom archive\n2012-10-29 09:22:27.457 restored log file \"00000001000001D1000000C6\"\nfrom archive\n2012-10-29 09:22:28.946 restored log file \"00000001000001D1000000C7\"\nfrom archive\n2012-10-29 09:22:30.421 restored log file \"00000001000001D1000000C8\"\nfrom archive\n2012-10-29 09:22:31.243 restored log file \"00000001000001D1000000C9\"\nfrom archive\n2012-10-29 09:22:32.194 restored log file \"00000001000001D1000000CA\"\nfrom archive\n2012-10-29 09:22:33.169 restored log file \"00000001000001D1000000CB\"\nfrom archive\n2012-10-29 09:22:33.565 restored log file \"00000001000001D1000000CC\"\nfrom archive\n2012-10-29 09:23:35.451 restored log file \"00000001000001D1000000CD\"\nfrom archive\n\nEverything is nice until here.\nReplaying this WAL file suddenly takes 1.5 minutes instead\nof mere seconds as before.\n\n2012-10-29 09:24:54.761 restored log file \"00000001000001D1000000CE\"\nfrom archive\n2012-10-29 09:27:23.013 restartpoint starting: time\n2012-10-29 09:28:12.200 restartpoint complete: wrote 242 buffers\n(0.0%);\n 0 transaction log file(s) added, 0 removed, 0\nrecycled;\n write=48.987 s, sync=0.185 s, total=49.184 s;\n sync files=1096, longest=0.016 s, average=0.000\ns\n2012-10-29 09:28:12.206 recovery restart point at 1D1/CC618278\n2012-10-29 09:28:31.226 restored log file \"00000001000001D1000000CF\"\nfrom archive\n\nAgain there is a difference of 2.5 minutes\nbetween these WAL files, only 50 seconds of\nwhich were spent in the restartpoint.\n\n From here on it continues in quite the same vein.\nSome WAL files are restored in seconds, but some take\nmore than 4 minutes.\n\nI'll skip to the end of the log:\n\n2012-10-29 10:37:53.809 restored log file \"00000001000001D1000000EF\"\nfrom archive\n2012-10-29 10:38:53.194 restartpoint starting: time\n2012-10-29 10:39:25.929 restartpoint complete: wrote 161 buffers\n(0.0%);\n 0 transaction log file(s) added, 0 removed, 0\nrecycled;\n write=32.661 s, sync=0.066 s, total=32.734 s;\n sync files=251, longest=0.003 s, average=0.000\ns\n2012-10-29 10:39:25.929 recovery restart point at 1D1/ED95C728\n2012-10-29 10:42:56.153 restored log file \"00000001000001D1000000F0\"\nfrom archive\n2012-10-29 10:43:53.062 restartpoint starting: time\n2012-10-29 10:45:36.871 restored log file \"00000001000001D1000000F1\"\nfrom archive\n2012-10-29 10:45:39.832 restartpoint complete: wrote 594 buffers\n(0.0%);\n 0 transaction log file(s) added, 0 removed, 0\nrecycled;\n write=106.666 s, sync=0.093 s, total=106.769 s;\n sync files=729, longest=0.004 s, average=0.000\ns\n2012-10-29 10:45:39.832 recovery restart point at 1D1/EF5D4340\n2012-10-29 10:46:13.602 restored log file \"00000001000001D1000000F2\"\nfrom archive\n2012-10-29 10:47:38.396 restored log file \"00000001000001D1000000F3\"\nfrom archive\n2012-10-29 10:47:38.962 streaming replication successfully connected to\nprimary\n\nI'd be happy if somebody could shed light on this.\n\nYours,\nLaurenz Albe\n\nPS: Here is the configuration:\n\n name | current_setting \n------------------------------+---------------------------\n version | PostgreSQL 9.1.3 on\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat\n4.4.6-3), 64-bit\n archive_command | gzip -1 <\"%p\" | tee\n/POSTGRES/data/exchange/\"%f\".gz >/POSTGRES/data/backups/ELAK/\"%f\".gz\n archive_mode | on\n checkpoint_completion_target | 0.9\n checkpoint_segments | 30\n client_encoding | UTF8\n constraint_exclusion | off\n cursor_tuple_fraction | 1\n custom_variable_classes | pg_stat_statements\n default_statistics_target | 1000\n effective_cache_size | 64GB\n hot_standby | on\n lc_collate | de_DE.UTF8\n lc_ctype | de_DE.UTF8\n listen_addresses | *\n log_checkpoints | on\n log_connections | on\n log_destination | csvlog\n log_directory | /POSTGRES/data/logs/ELAK\n log_disconnections | on\n log_filename | ELAK-%Y-%m-%d.log\n log_lock_waits | on\n log_min_duration_statement | 3s\n log_min_error_statement | log\n log_min_messages | log\n log_rotation_size | 0\n log_statement | ddl\n log_temp_files | 0\n logging_collector | on\n maintenance_work_mem | 1GB\n max_connections | 800\n max_prepared_transactions | 800\n max_stack_depth | 9MB\n max_standby_archive_delay | 0\n max_standby_streaming_delay | 0\n max_wal_senders | 2\n pg_stat_statements.max | 5000\n pg_stat_statements.track | all\n port | 55503\n server_encoding | UTF8\n shared_buffers | 16GB\n shared_preload_libraries | pg_stat_statements,passwordcheck\n ssl | on\n tcp_keepalives_count | 0\n tcp_keepalives_idle | 0\n TimeZone | Europe/Vienna\n wal_buffers | 16MB\n wal_level | hot_standby\n work_mem | 8MB\n(49 rows)\n\n",
"msg_date": "Mon, 29 Oct 2012 14:05:24 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 02:05:24PM +0100, Albe Laurenz wrote:\n> I am configuring streaming replication with hot standby\n> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n> PostgreSQL was compiled from source.\n> \n> It works fine, except that starting the standby took for ever:\n> it took the system more than 80 minutes to replay 48 WAL files\n> and connect to the primary.\n> \n> Can anybody think of an explanation why it takes that long?\n> \n> This is decent hardware: 24 cores of AMD Opteron 6174, 128 GB RAM,\n> NetApp SAN attached with 8 GBit Fibrechannel (ext4 file system).\n> An identical system performed fine in performance tests.\n> \n> Here is the log; I have edited it for readability:\n> \n> 2012-10-29 09:22:22.945 database system was interrupted; last known up\n> at 2012-10-26 01:11:59 CEST\n> 2012-10-29 09:22:22.945 creating missing WAL directory\n> \"pg_xlog/archive_status\"\n> 2012-10-29 09:22:22.947 entering standby mode\n> 2012-10-29 09:22:23.434 restored log file \"00000001000001D1000000C4\"\n> from archive\n> 2012-10-29 09:22:23.453 redo starts at 1D1/C4000020\n> 2012-10-29 09:22:25.847 restored log file \"00000001000001D1000000C5\"\n> from archive\n> 2012-10-29 09:22:27.457 restored log file \"00000001000001D1000000C6\"\n> from archive\n> 2012-10-29 09:22:28.946 restored log file \"00000001000001D1000000C7\"\n> from archive\n> 2012-10-29 09:22:30.421 restored log file \"00000001000001D1000000C8\"\n> from archive\n> 2012-10-29 09:22:31.243 restored log file \"00000001000001D1000000C9\"\n> from archive\n> 2012-10-29 09:22:32.194 restored log file \"00000001000001D1000000CA\"\n> from archive\n> 2012-10-29 09:22:33.169 restored log file \"00000001000001D1000000CB\"\n> from archive\n> 2012-10-29 09:22:33.565 restored log file \"00000001000001D1000000CC\"\n> from archive\n> 2012-10-29 09:23:35.451 restored log file \"00000001000001D1000000CD\"\n> from archive\n> \n> Everything is nice until here.\n> Replaying this WAL 
file suddenly takes 1.5 minutes instead\n> of mere seconds as before.\n> \n> 2012-10-29 09:24:54.761 restored log file \"00000001000001D1000000CE\"\n> from archive\n> 2012-10-29 09:27:23.013 restartpoint starting: time\n> 2012-10-29 09:28:12.200 restartpoint complete: wrote 242 buffers\n> (0.0%);\n> 0 transaction log file(s) added, 0 removed, 0\n> recycled;\n> write=48.987 s, sync=0.185 s, total=49.184 s;\n> sync files=1096, longest=0.016 s, average=0.000\n> s\n> 2012-10-29 09:28:12.206 recovery restart point at 1D1/CC618278\n> 2012-10-29 09:28:31.226 restored log file \"00000001000001D1000000CF\"\n> from archive\n> \n> Again there is a difference of 2.5 minutes\n> between these WAL files, only 50 seconds of\n> which were spent in the restartpoint.\n> \n> From here on it continues in quite the same vein.\n> Some WAL files are restored in seconds, but some take\n> more than 4 minutes.\n> \n> I'll skip to the end of the log:\n> \n> 2012-10-29 10:37:53.809 restored log file \"00000001000001D1000000EF\"\n> from archive\n> 2012-10-29 10:38:53.194 restartpoint starting: time\n> 2012-10-29 10:39:25.929 restartpoint complete: wrote 161 buffers\n> (0.0%);\n> 0 transaction log file(s) added, 0 removed, 0\n> recycled;\n> write=32.661 s, sync=0.066 s, total=32.734 s;\n> sync files=251, longest=0.003 s, average=0.000\n> s\n> 2012-10-29 10:39:25.929 recovery restart point at 1D1/ED95C728\n> 2012-10-29 10:42:56.153 restored log file \"00000001000001D1000000F0\"\n> from archive\n> 2012-10-29 10:43:53.062 restartpoint starting: time\n> 2012-10-29 10:45:36.871 restored log file \"00000001000001D1000000F1\"\n> from archive\n> 2012-10-29 10:45:39.832 restartpoint complete: wrote 594 buffers\n> (0.0%);\n> 0 transaction log file(s) added, 0 removed, 0\n> recycled;\n> write=106.666 s, sync=0.093 s, total=106.769 s;\n> sync files=729, longest=0.004 s, average=0.000\n> s\n> 2012-10-29 10:45:39.832 recovery restart point at 1D1/EF5D4340\n> 2012-10-29 10:46:13.602 restored log file 
\"00000001000001D1000000F2\"\n> from archive\n> 2012-10-29 10:47:38.396 restored log file \"00000001000001D1000000F3\"\n> from archive\n> 2012-10-29 10:47:38.962 streaming replication successfully connected to\n> primary\n> \n> I'd be happy if somebody could shed light on this.\n> \n> Yours,\n> Laurenz Albe\n> \n> PS: Here is the configuration:\n> \n> name | current_setting \n> ------------------------------+---------------------------\n> version | PostgreSQL 9.1.3 on\n> x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat\n> 4.4.6-3), 64-bit\n> archive_command | gzip -1 <\"%p\" | tee\n> /POSTGRES/data/exchange/\"%f\".gz >/POSTGRES/data/backups/ELAK/\"%f\".gz\n> archive_mode | on\n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 30\n> client_encoding | UTF8\n> constraint_exclusion | off\n> cursor_tuple_fraction | 1\n> custom_variable_classes | pg_stat_statements\n> default_statistics_target | 1000\n> effective_cache_size | 64GB\n> hot_standby | on\n> lc_collate | de_DE.UTF8\n> lc_ctype | de_DE.UTF8\n> listen_addresses | *\n> log_checkpoints | on\n> log_connections | on\n> log_destination | csvlog\n> log_directory | /POSTGRES/data/logs/ELAK\n> log_disconnections | on\n> log_filename | ELAK-%Y-%m-%d.log\n> log_lock_waits | on\n> log_min_duration_statement | 3s\n> log_min_error_statement | log\n> log_min_messages | log\n> log_rotation_size | 0\n> log_statement | ddl\n> log_temp_files | 0\n> logging_collector | on\n> maintenance_work_mem | 1GB\n> max_connections | 800\n> max_prepared_transactions | 800\n> max_stack_depth | 9MB\n> max_standby_archive_delay | 0\n> max_standby_streaming_delay | 0\n> max_wal_senders | 2\n> pg_stat_statements.max | 5000\n> pg_stat_statements.track | all\n> port | 55503\n> server_encoding | UTF8\n> shared_buffers | 16GB\n> shared_preload_libraries | pg_stat_statements,passwordcheck\n> ssl | on\n> tcp_keepalives_count | 0\n> tcp_keepalives_idle | 0\n> TimeZone | Europe/Vienna\n> wal_buffers | 16MB\n> wal_level 
| hot_standby\n> work_mem | 8MB\n> (49 rows)\n> \n> \nHi Albe,\n\nMy first guess would be that there was something using I/O resources on your\nNetApp. That is the behavior you would expect once the I/O cache on the NetApp\nhas been filled and you actually have to perform writes to the underlying\ndisks. Is this a dedicated box? Can you get I/O stats from the box during the\nrecovery?\n\nRegards,\nKen\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n",
"msg_date": "Mon, 29 Oct 2012 08:19:17 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
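Ken asks above for I/O stats from the box during recovery; the usual tools are `iostat -x 1` or `sar -d`, and the "%util / busy" figure they report is just the delta of the cumulative "milliseconds spent doing I/Os" counter (the 13th field of each `/proc/diskstats` line) over the sampling interval. A minimal, hedged sketch of that computation — the snapshot lines below are made-up numbers for illustration, not real output:

```python
def io_time_ms(diskstats_text, device):
    """Cumulative 'time spent doing I/Os' (ms) for one device, taken from
    the text of /proc/diskstats (13th whitespace-separated field)."""
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 13 and fields[2] == device:
            return int(fields[12])
    raise KeyError(device)

# Two snapshots taken 1 second apart (fabricated values for the demo):
before = "   8       0 sda 100 0 800 40 50 0 400 30 0 1000 70"
after = "   8       0 sda 220 0 960 48 60 0 480 36 0 1900 84"

# Utilisation = busy-time delta / elapsed time: 900 ms busy in a
# 1000 ms window is a disk that is ~90% busy.
util = (io_time_ms(after, "sda") - io_time_ms(before, "sda")) / 1000.0
print(util)  # 0.9
```

`iostat` and `sar` do exactly this bookkeeping for you; the sketch only shows where the number comes from.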
{
"msg_contents": "Albe Laurenz wrote:\n> I am configuring streaming replication with hot standby\n> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n> PostgreSQL was compiled from source.\n> \n> It works fine, except that starting the standby took for ever:\n> it took the system more than 80 minutes to replay 48 WAL files\n> and connect to the primary.\n> \n> Can anybody think of an explanation why it takes that long?\n\nCan you do a quick xlogdump of those files? Maybe there is something\nunusual (say particular types of GIN/GiST index updates) on the files\nthat take longer.\n\n-- \nÁlvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 29 Oct 2012 11:29:01 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "Alvaro Herrera wrote:\n>> I am configuring streaming replication with hot standby\n>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n>> PostgreSQL was compiled from source.\n>>\n>> It works fine, except that starting the standby took for ever:\n>> it took the system more than 80 minutes to replay 48 WAL files\n>> and connect to the primary.\n>>\n>> Can anybody think of an explanation why it takes that long?\n> \n> Can you do a quick xlogdump of those files? Maybe there is something\n> unusual (say particular types of GIN/GiST index updates) on the files\n> that take longer.\n\nThere are no GIN and GiST indexes in this cluster.\n\nHere's the output of \"xlogdump -S\" on one of the WAL files\nthat took over 4 minutes:\n\n00000001000001D1000000EF:\n\nUnable to read continuation page?\n ** maybe continues to next segment **\n---------------------------------------------------------------\nTimeLineId: 1, LogId: 465, LogSegment: 239\n\nResource manager stats:\n [0]XLOG : 2 records, 112 bytes (avg 56.0 bytes)\n checkpoint: 2, switch: 0, backup end: 0\n [1]Transaction: 427 records, 96512 bytes (avg 226.0 bytes)\n commit: 427, abort: 0\n [2]Storage : 0 record, 0 byte (avg 0.0 byte)\n [3]CLOG : 0 record, 0 byte (avg 0.0 byte)\n [4]Database : 0 record, 0 byte (avg 0.0 byte)\n [5]Tablespace: 0 record, 0 byte (avg 0.0 byte)\n [6]MultiXact : 0 record, 0 byte (avg 0.0 byte)\n [7]RelMap : 0 record, 0 byte (avg 0.0 byte)\n [8]Standby : 84 records, 1352 bytes (avg 16.1 bytes)\n [9]Heap2 : 325 records, 9340 bytes (avg 28.7 bytes)\n [10]Heap : 7611 records, 4118483 bytes (avg 541.1 bytes)\n ins: 2498, upd/hot_upd: 409/2178, del: 2494\n [11]Btree : 3648 records, 120814 bytes (avg 33.1 bytes)\n [12]Hash : 0 record, 0 byte (avg 0.0 byte)\n [13]Gin : 0 record, 0 byte (avg 0.0 byte)\n [14]Gist : 0 record, 0 byte (avg 0.0 byte)\n [15]Sequence : 0 record, 0 byte (avg 0.0 byte)\n\nBackup block stats: 2600 blocks, 11885880 bytes (avg 4571.5 
bytes)\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Mon, 29 Oct 2012 16:04:00 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz <[email protected]> wrote:\n> I am configuring streaming replication with hot standby\n> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n> PostgreSQL was compiled from source.\n>\n> It works fine, except that starting the standby took for ever:\n> it took the system more than 80 minutes to replay 48 WAL files\n> and connect to the primary.\n>\n> Can anybody think of an explanation why it takes that long?\n\nCould the slow log files be replaying into randomly scattered pages\nwhich are not yet in RAM?\n\nDo you have sar or vmstat reports?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 29 Oct 2012 09:42:24 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "\nOn Oct 29, 2012, at 12:42 PM, Jeff Janes wrote:\n\n> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz <[email protected]> wrote:\n>> I am configuring streaming replication with hot standby\n>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n>> PostgreSQL was compiled from source.\n>> \n>> It works fine, except that starting the standby took for ever:\n>> it took the system more than 80 minutes to replay 48 WAL files\n>> and connect to the primary.\n>> \n>> Can anybody think of an explanation why it takes that long?\n> \n> Could the slow log files be replaying into randomly scattered pages\n> which are not yet in RAM?\n> \n> Do you have sar or vmstat reports?\n> \n> Cheers,\n\n\nIf you do not have good random io performance log replay is nearly unbearable.\n\nalso, what io scheduler are you using? if it is cfq change that to deadline or noop. \nthat can make a huge difference.\n\n\n--\nJeff Trout <[email protected]>\n\n\n\n\n\n\n",
"msg_date": "Mon, 29 Oct 2012 14:27:06 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "\nOn Oct 29, 2012, at 12:42 PM, Jeff Janes wrote:\n\n> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz <[email protected]> wrote:\n>> I am configuring streaming replication with hot standby\n>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n>> PostgreSQL was compiled from source.\n>> \n>> It works fine, except that starting the standby took for ever:\n>> it took the system more than 80 minutes to replay 48 WAL files\n>> and connect to the primary.\n>> \n>> Can anybody think of an explanation why it takes that long?\n> \n> Could the slow log files be replaying into randomly scattered pages\n> which are not yet in RAM?\n> \n> Do you have sar or vmstat reports?\n> \n\n\nIf you do not have good random io performance log replay is nearly unbearable. (I've run into this before many times)\n\nAlso, what io scheduler are you using? if it is cfq change that to deadline or noop. \nthat can make a huge difference.\n\n--\nJeff Trout <[email protected]\n\n\n",
"msg_date": "Mon, 29 Oct 2012 14:36:36 -0400",
"msg_from": "Jeff Trout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": ">> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz\n<[email protected]> wrote:\n>>> I am configuring streaming replication with hot standby\n>>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n>>> PostgreSQL was compiled from source.\n>>>\n>>> It works fine, except that starting the standby took for ever:\n>>> it took the system more than 80 minutes to replay 48 WAL files\n>>> and connect to the primary.\n>>>\n>>> Can anybody think of an explanation why it takes that long?\n\nJeff Janes wrote:\n>> Could the slow log files be replaying into randomly scattered pages\n>> which are not yet in RAM?\n>>\n>> Do you have sar or vmstat reports?\n\nThe sar reports from the time in question tell me that I read\nabout 350 MB/s and wrote less than 0.2 MB/s. The disks were\nfairly busy (around 90%).\n\nJeff Trout wrote:\n> If you do not have good random io performance log replay is nearly\nunbearable.\n> \n> also, what io scheduler are you using? if it is cfq change that to\ndeadline or noop.\n> that can make a huge difference.\n\nWe use the noop scheduler.\nAs I said, an identical system performed well in load tests.\n\nThe sar reports lend credence to Jeff Janes' theory.\nWhy does WAL replay read much more than it writes?\nI thought that pretty much every block read during WAL\nreplay would also get dirtied and hence written out.\n\nI wonder why the performance is good in the first few seconds.\nWhy should exactly the pages that I need in the beginning\nhappen to be in cache?\n\nAnd finally: are the numbers I observe (replay 48 files in 80\nminutes) ok or is this terribly slow as it seems to me?\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 30 Oct 2012 09:50:44 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "On 30.10.2012 10:50, Albe Laurenz wrote:\n> Why does WAL replay read much more than it writes?\n> I thought that pretty much every block read during WAL\n> replay would also get dirtied and hence written out.\n\nNot necessarily. If a block is modified and written out of the buffer \ncache before next checkpoint, the latest version of the block is already \non disk. On replay, the redo routine reads the block, sees that the \nchange was applied, and does nothing.\n\n> I wonder why the performance is good in the first few seconds.\n> Why should exactly the pages that I need in the beginning\n> happen to be in cache?\n\nThis is probably because of full_page_writes=on. When replay has a full \npage image of a block, it doesn't need to read the old contents from \ndisk. It can just blindly write the image to disk. Writing a block to \ndisk also puts that block in the OS cache, so this also efficiently \nwarms the cache from the WAL. Hence in the beginning of replay, you just \nwrite a lot of full page images to the OS cache, which is fast, and you \nonly start reading from disk after you've filled up the OS cache. If \nthis theory is true, you should see a pattern in the I/O stats, where in \nthe first seconds there is no I/O, but the CPU is 100% busy while it \nreads from WAL and writes out the pages to the OS cache. After the OS \ncache fills up with the dirty pages (up to dirty_ratio, on Linux), you \nwill start to see a lot of writes. As the replay progresses, you will \nsee more and more reads, as you start to get cache misses.\n\n- Heikki\n\n",
"msg_date": "Tue, 30 Oct 2012 12:07:48 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
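Heikki's scenario — full-page images are written blindly and also warm the cache, while later incremental records must read the old block on a cache miss — can be illustrated with a toy model. This is only a sketch of the argument (an LRU dict standing in for the OS cache), not PostgreSQL's actual replay code:

```python
from collections import OrderedDict

def replay(records, cache_size):
    """Toy WAL replay through a fixed-size LRU 'OS cache'.

    Each record touches one block.  A full-page image (fpi=True) can be
    written without looking at the old contents, so it never triggers a
    disk read; an incremental record must first read the block unless it
    is already cached.  Returns the number of disk reads incurred.
    """
    cache = OrderedDict()
    reads = 0
    for block, fpi in records:
        if block in cache:
            cache.move_to_end(block)
            continue
        if not fpi:
            reads += 1              # cache miss on an incremental record
        cache[block] = True         # writing the block also caches it
        if len(cache) > cache_size:
            cache.popitem(last=False)
    return reads

# Right after a checkpoint: mostly full-page images -> replay is read-free.
early = [(b, True) for b in range(1000)]
# Later segments: incremental records against blocks never seen before.
late = [(b, False) for b in range(2000, 3000)]

print(replay(early, cache_size=1500))          # 0
print(replay(early + late, cache_size=1500))   # 1000
```

The pattern matches the log: the first WAL files replay in seconds (all writes into cache), and reads pile up once the replay moves past the blocks covered by full-page images.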
{
"msg_contents": "On Tue, Oct 30, 2012 at 09:50:44AM +0100, Albe Laurenz wrote:\n> >> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz\n> <[email protected]> wrote:\n> >>> I am configuring streaming replication with hot standby\n> >>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).\n> >>> PostgreSQL was compiled from source.\n> >>>\n> >>> It works fine, except that starting the standby took for ever:\n> >>> it took the system more than 80 minutes to replay 48 WAL files\n> >>> and connect to the primary.\n> >>>\n> >>> Can anybody think of an explanation why it takes that long?\n> \n> Jeff Janes wrote:\n> >> Could the slow log files be replaying into randomly scattered pages\n> >> which are not yet in RAM?\n> >>\n> >> Do you have sar or vmstat reports?\n> \n> The sar reports from the time in question tell me that I read\n> about 350 MB/s and wrote less than 0.2 MB/s. The disks were\n> fairly busy (around 90%).\n> \n> Jeff Trout wrote:\n> > If you do not have good random io performance log replay is nearly\n> unbearable.\n> > \n> > also, what io scheduler are you using? if it is cfq change that to\n> deadline or noop.\n> > that can make a huge difference.\n> \n> We use the noop scheduler.\n> As I said, an identical system performed well in load tests.\n> \n> The sar reports give credit to Jeff Janes' theory.\n> Why does WAL replay read much more than it writes?\n> I thought that pretty much every block read during WAL\n> replay would also get dirtied and hence written out.\n> \n> I wonder why the performance is good in the first few seconds.\n> Why should exactly the pages that I need in the beginning\n> happen to be in cache?\n> \n> And finally: are the numbers I observe (replay 48 files in 80\n> minutes) ok or is this terribly slow as it seems to me?\n> \n> Yours,\n> Laurenz Albe\n> \n\nHi,\n\nThe load tests probably had the \"important\" data already cached. 
Processing\na WAL file would involve bringing all the data back into memory using a\nrandom I/O pattern. Perhaps priming the file cache using some sequential\nreads would allow the random I/O to hit memory instead of disk. I may be\nmisremembering, but wasn't there an associated project/program that would\nparse the WAL files and generate cache priming reads?\n\nRegards,\nKen\n\n",
"msg_date": "Tue, 30 Oct 2012 08:05:33 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
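The sequential priming Ken recalls can be sketched in a few lines. This is a hypothetical helper, not the WAL-parsing tool he is thinking of (such tools prefetch only the blocks a WAL segment actually references); it simply drags a whole relation file through the OS page cache with cheap sequential reads so the scattered block accesses during replay hit memory:

```python
import os
import tempfile

def prefault(path, chunk=1 << 20):
    """Read a file sequentially to warm the OS page cache.
    Returns the number of bytes read."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        # Hint the kernel that access is sequential (no-op where unsupported).
        if hasattr(os, "posix_fadvise"):
            os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_SEQUENTIAL)
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    return total

# Demo on a throwaway 1 MiB file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\0" * (1 << 20))
print(prefault(tmp.name))  # 1048576
```

Whether this helps depends on the database fitting in RAM; with Laurenz's 1 TB cluster and 128 GB of memory, only the hot subset could be primed this way.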
{
"msg_contents": "Heikki Linnakangas wrote:\n>> Why does WAL replay read much more than it writes?\n>> I thought that pretty much every block read during WAL\n>> replay would also get dirtied and hence written out.\n> \n> Not necessarily. If a block is modified and written out of the buffer\n> cache before next checkpoint, the latest version of the block is\nalready\n> on disk. On replay, the redo routine reads the block, sees that the\n> change was applied, and does nothing.\n\nTrue. Could that account for 1000 times more reads than writes?\n\n>> I wonder why the performance is good in the first few seconds.\n>> Why should exactly the pages that I need in the beginning\n>> happen to be in cache?\n> \n> This is probably because of full_page_writes=on. When replay has a\nfull\n> page image of a block, it doesn't need to read the old contents from\n> disk. It can just blindly write the image to disk. Writing a block to\n> disk also puts that block in the OS cache, so this also efficiently\n> warms the cache from the WAL. Hence in the beginning of replay, you\njust\n> write a lot of full page images to the OS cache, which is fast, and\nyou\n> only start reading from disk after you've filled up the OS cache. If\n> this theory is true, you should see a pattern in the I/O stats, where\nin\n> the first seconds there is no I/O, but the CPU is 100% busy while it\n> reads from WAL and writes out the pages to the OS cache. After the OS\n> cache fills up with the dirty pages (up to dirty_ratio, on Linux), you\n> will start to see a lot of writes. As the replay progresses, you will\n> see more and more reads, as you start to get cache misses.\n\nThat makes sense to me.\nUnfortunately I don't have statistics in the required resolution\nto verify that.\n\nThanks for the explanations.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 30 Oct 2012 14:10:24 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "[email protected] wrote:\n>>> If you do not have good random io performance log replay is nearly\n>>> unbearable.\n>>>\n>>> also, what io scheduler are you using? if it is cfq change that to\n>>> deadline or noop.\n>>> that can make a huge difference.\n>>\n>> We use the noop scheduler.\n>> As I said, an identical system performed well in load tests.\n\n> The load tests probably had the \"important\" data already cached.\nProcessing\n> a WAL file would involve bringing all the data back into memory using\na\n> random I/O pattern.\n\nThe database is way too big (1 TB) to fit into cache.\n\nWhat are \"all the data\" that have to be brought back?\nSurely only the database blocks that are modified by the WAL,\nright?\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 30 Oct 2012 14:16:57 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 02:16:57PM +0100, Albe Laurenz wrote:\n> [email protected] wrote:\n> >>> If you do not have good random io performance log replay is nearly\n> >>> unbearable.\n> >>>\n> >>> also, what io scheduler are you using? if it is cfq change that to\n> >>> deadline or noop.\n> >>> that can make a huge difference.\n> >>\n> >> We use the noop scheduler.\n> >> As I said, an identical system performed well in load tests.\n> \n> > The load tests probably had the \"important\" data already cached.\n> Processing\n> > a WAL file would involve bringing all the data back into memory using\n> a\n> > random I/O pattern.\n> \n> The database is way too big (1 TB) to fit into cache.\n> \n> What are \"all the data\" that have to be brought back?\n> Surely only the database blocks that are modified by the WAL,\n> right?\n> \n> Yours,\n> Laurenz Albe\n> \n\nRight, it would only read the blocks that are modified.\n\nRegards,\nKen\n\n",
"msg_date": "Tue, 30 Oct 2012 08:41:24 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replaying 48 WAL files takes 80 minutes"
}
] |
[
{
"msg_contents": "Hi, thanks for any help. I've tried to be thorough, but let me know if I should\nprovide more information.\n\nA description of what you are trying to achieve and what results you expect:\n I have a large (3 million row) table called \"tape\" that represents files,\n which I join to a small (100 row) table called \"filesystem\" that represents\n filesystems. I have a web interface that allows you to sort by a number of\n fields in the tape table and view the results 100 at a time (using LIMIT\n and OFFSET).\n\n The data only changes hourly and I do a \"vacuum analyze\" after all changes.\n\n The tables are defined as:\n\n create table filesystem (\n id serial primary key,\n host varchar(256),\n storage_path varchar(2048) not null check (storage_path != ''),\n mounted_on varchar(2048) not null check (mounted_on != ''),\n constraint unique_fs unique(host, storage_path)\n );\n create table tape (\n id serial primary key,\n volser char(255) not null check (volser != ''),\n path varchar(2048) not null check (path != ''),\n scratched boolean not null default FALSE,\n last_write_date timestamp not null default current_timestamp,\n last_access_date timestamp not null default current_timestamp,\n filesystem_id integer references filesystem not null,\n size bigint not null check (size >= 0),\n worm_status char,\n encryption char,\n job_name char(8),\n job_step char(8),\n dsname char(17),\n recfm char(3),\n block_size int,\n lrecl int,\n constraint filesystem_already_has_that_volser unique(filesystem_id, volser)\n );\n\n An example query that's running slowly for me is:\n\n select tape.volser,\n tape.path,\n tape.scratched,\n tape.size,\n extract(epoch from tape.last_write_date) as last_write_date,\n extract(epoch from tape.last_access_date) as last_access_date\n from tape\n inner join filesystem\n on (tape.filesystem_id = filesystem.id)\n order by last_write_date desc\n limit 100\n offset 100;\n\n On Postgres 8.1.17 this takes about 60 seconds. 
I would like it to be faster.\n\n Here's the explain output:\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3226201.13..3226201.38 rows=100 width=308) (actual time=66311.929..66312.053 rows=100 loops=1)\n -> Sort (cost=3226200.88..3234250.28 rows=3219757 width=308) (actual time=66311.826..66311.965 rows=200 loops=1)\n Sort Key: date_part('epoch'::text, tape.last_write_date)\n -> Hash Join (cost=3.26..242948.97 rows=3219757 width=308) (actual time=3.165..31680.830 rows=3219757 loops=1)\n Hash Cond: (\"outer\".filesystem_id = \"inner\".id)\n -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312) (actual time=2.824..18175.863 rows=3219757 loops=1)\n -> Hash (cost=3.01..3.01 rows=101 width=4) (actual time=0.204..0.204 rows=101 loops=1)\n -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4) (actual time=0.004..0.116 rows=101 loops=1)\n Total runtime: 66553.643 ms\n\n Here's a depesz link with that output: http://explain.depesz.com/s/AUR\n\n\nThings I've tried:\n\n 1. I added an index on last_write_date with:\n\n create index tape_last_write_date_idx on tape(last_write_date);\n\n and there was no improvement in query time.\n\n 2. I bumped:\n effective_cache_size to 1/2 system RAM (1GB)\n shared_buffers to 1/4 system RAM (512MB)\n work_mem to 10MB\n and there was no improvement in query time.\n\n 3. I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n using the same hardware and it was about 5 times faster (nice work,\n whoever did that!). Unfortunately upgrading is not an option, so this\n is more of an anecdote. 
I would think the query could go much faster\n in either environment with some optimization.\n\n\n\nThe EXACT PostgreSQL version you are running:\n PostgreSQL 8.1.17 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070115 (SUSE Linux)\n\nHow you installed PostgreSQL:\n Standard SuSE SLES 10-SP3 RPMs:\n postgresql-devel-8.1.17-0.3\n postgresql-pl-8.1.17-0.4\n postgresql-libs-8.1.17-0.3\n postgresql-8.1.17-0.3\n postgresql-server-8.1.17-0.3\n postgresql-contrib-8.1.17-0.3\n\nChanges made to the settings in the postgresql.conf file:\n Only the memory changes mentioned above.\n\nOperating system and version:\n Linux acp1 2.6.16.60-0.54.5-default #1 Fri Sep 4 01:28:03 UTC 2009 i686 i686 i386 GNU/Linux\n\n SLES 10-SP3\n\nWhat program you're using to connect to PostgreSQL:\n Perl DBI\n Perl v5.8.8\n\nWhat version of the ODBC/JDBC/ADO/etc driver you're using, if any:\n perl-DBD-Pg 1.43\n\nIf you're using a connection pool, load balancer or application server, which one you're using and its version:\n None.\n\nIs there anything remotely unusual in the PostgreSQL server logs?\n No, they're empty.\n\n\nCPU manufacturer and model:\n Intel Celeron CPU 440 @ 2.00GHz\n\nAmount and size of RAM installed:\n 2GB RAM\n\nStorage details (important for performance and corruption questions):\n\n Do you use a RAID controller?\n No.\n How many hard disks are connected to the system and what types are they?\n We use a single Hitachi HDT72102 SATA drive (250GB) 7200 RPM.\n How are your disks arranged for storage?\n Postgres lives on the same 100GB ext3 partition as the OS.\n\n\nThanks,\nSean\n\n",
"msg_date": "Mon, 29 Oct 2012 13:41:23 -0400",
"msg_from": "\"Woolcock, Sean\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Request for help with slow query"
},
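One thing worth checking in the query above: the sort key is `extract(epoch from tape.last_write_date)`, an expression, so the plain index on `last_write_date` cannot supply the ordering and the plan sorts all 3.2M rows before the LIMIT. Since `extract(epoch ...)` is monotonic, `order by tape.last_write_date desc` (ordering by the raw column while keeping the epoch value in the select list under a different alias, e.g. `last_write_epoch`) returns rows in the same order and gives the planner a chance to walk the index instead. The effect is easy to demonstrate — SQLite is used here only because it is handy to run in-memory; the principle (index matches a bare column, not an expression over it) is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tape (id INTEGER PRIMARY KEY, last_write_date REAL)")
cur.execute("CREATE INDEX tape_lwd_idx ON tape(last_write_date)")

# Ordering by the raw indexed column: the index can be walked backwards,
# so no sort step appears in the plan.
plan_col = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM tape ORDER BY last_write_date DESC LIMIT 100"
).fetchall()

# Ordering by an expression over the column: the index no longer matches
# the sort key, so an explicit sort (temp B-tree) is required.
plan_expr = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM tape ORDER BY last_write_date + 0 DESC LIMIT 100"
).fetchall()

print(any("INDEX" in row[3] for row in plan_col))         # index supplies the order
print(any("TEMP B-TREE" in row[3] for row in plan_col))   # no sort step
print(any("TEMP B-TREE" in row[3] for row in plan_expr))  # expression forces a sort
```

On the 8.1 planner the rewrite is worth verifying with EXPLAIN ANALYZE, but avoiding a 3.2M-row sort for a 100-row page is exactly the kind of change that turns this from a sort-everything plan into a bounded index scan.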
{
"msg_contents": "Did you try to add an index on filesystem_id \n\n\n\n\n\n________________________________\n From: \"Woolcock, Sean\" <[email protected]>\nTo: \"[email protected]\" <[email protected]> \nSent: Monday, October 29, 2012 6:41 PM\nSubject: [PERFORM] Request for help with slow query\n \nHi, thanks for any help. I've tried to be thorough, but let me know if I should\nprovide more information.\n\nA description of what you are trying to achieve and what results you expect:\n I have a large (3 million row) table called \"tape\" that represents files,\n which I join to a small (100 row) table called \"filesystem\" that represents\n filesystems. I have a web interface that allows you to sort by a number of\n fields in the tape table and view the results 100 at a time (using LIMIT\n and OFFSET).\n\n The data only changes hourly and I do a \"vacuum analyze\" after all changes.\n\n The tables are defined as:\n\n create table filesystem (\n id serial primary key,\n host varchar(256),\n storage_path varchar(2048) not null check (storage_path != ''),\n mounted_on varchar(2048) not null check (mounted_on != ''),\n constraint unique_fs unique(host, storage_path)\n );\n create table tape (\n id serial primary key,\n volser char(255) not null check (volser != ''),\n path varchar(2048) not null check (path != ''),\n scratched boolean not null default FALSE,\n last_write_date timestamp not null default current_timestamp,\n last_access_date timestamp not null default current_timestamp,\n filesystem_id integer references filesystem not null,\n size bigint not null check (size >= 0),\n worm_status char,\n encryption char,\n job_name char(8),\n job_step char(8),\n dsname char(17),\n recfm char(3),\n block_size int,\n lrecl int,\n constraint filesystem_already_has_that_volser unique(filesystem_id, volser)\n );\n\n An example query that's running slowly for me is:\n\n select tape.volser,\n tape.path,\n tape.scratched,\n tape.size,\n extract(epoch from tape.last_write_date) as 
last_write_date,\n extract(epoch from tape.last_access_date) as last_access_date\n from tape\n inner join filesystem\n on (tape.filesystem_id = filesystem.id)\n order by last_write_date desc\n limit 100\n offset 100;\n\n On Postgres 8.1.17 this takes about 60 seconds. I would like it to be faster.\n\n Here's the explain output:\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3226201.13..3226201.38 rows=100 width=308) (actual time=66311.929..66312.053 rows=100 loops=1)\n -> Sort (cost=3226200.88..3234250.28 rows=3219757 width=308) (actual time=66311.826..66311.965 rows=200 loops=1)\n Sort Key: date_part('epoch'::text, tape.last_write_date)\n -> Hash Join (cost=3.26..242948.97 rows=3219757 width=308) (actual time=3.165..31680.830 rows=3219757 loops=1)\n Hash Cond: (\"outer\".filesystem_id = \"inner\".id)\n -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312) (actual time=2.824..18175.863 rows=3219757 loops=1)\n -> Hash (cost=3.01..3.01 rows=101 width=4) (actual time=0.204..0.204 rows=101 loops=1)\n -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4) (actual time=0.004..0.116 rows=101 loops=1)\n Total runtime: 66553.643 ms\n\n Here's a depesz link with that output: http://explain.depesz.com/s/AUR\n\n\nThings I've tried:\n\n 1. I added an index on last_write_date with:\n\n create index tape_last_write_date_idx on tape(last_write_date);\n\n and there was no improvement in query time.\n\n 2. I bumped:\n effective_cache_size to 1/2 system RAM (1GB)\n shared_buffers to 1/4 system RAM (512MB)\n work_mem to 10MB\n and there was no improvement in query time.\n\n 3. I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n using the same hardware and it was about 5 times faster (nice work,\n whoever did that!). Unfortunately upgrading is not an option, so this\n is more of an anecdote. 
"msg_contents": "Did you try to add an index on filesystem_id \n\n\n\n\n\n________________________________\n From: \"Woolcock, Sean\" <[email protected]>\nTo: \"[email protected]\" <[email protected]> \nSent: Monday, October 29, 2012 6:41 PM\nSubject: [PERFORM] Request for help with slow query\n \nHi, thanks for any help. I've tried to be thorough, but let me know if I should\nprovide more information.\n\nA description of what you are trying to achieve and what results you expect:\n I have a large (3 million row) table called \"tape\" that represents files,\n which I join to a small (100 row) table called \"filesystem\" that represents\n filesystems. I have a web interface that allows you to sort by a number of\n fields in the tape table and view the results 100 at a time (using LIMIT\n and OFFSET).\n\n The data only changes hourly and I do a \"vacuum analyze\" after all changes.\n\n The tables are defined as:\n\n create table filesystem (\n id serial primary key,\n host varchar(256),\n storage_path varchar(2048) not null check (storage_path != ''),\n mounted_on varchar(2048) not null check (mounted_on != ''),\n constraint unique_fs unique(host, storage_path)\n );\n create table tape (\n id serial primary key,\n volser char(255) not null check (volser != ''),\n path varchar(2048) not null check (path != ''),\n scratched boolean not null default FALSE,\n last_write_date timestamp not null default current_timestamp,\n last_access_date timestamp not null default current_timestamp,\n filesystem_id integer references filesystem not null,\n size bigint not null check (size >= 0),\n worm_status char,\n encryption char,\n job_name char(8),\n job_step char(8),\n dsname char(17),\n recfm char(3),\n block_size int,\n lrecl int,\n constraint filesystem_already_has_that_volser unique(filesystem_id, volser)\n );\n\n An example query that's running slowly for me is:\n\n select tape.volser,\n tape.path,\n tape.scratched,\n tape.size,\n extract(epoch from tape.last_write_date) as last_write_date,\n extract(epoch from tape.last_access_date) as last_access_date\n from tape\n inner join filesystem\n on (tape.filesystem_id = filesystem.id)\n order by last_write_date desc\n limit 100\n offset 100;\n\n On Postgres 8.1.17 this takes about 60 seconds. I would like it to be faster.\n\n Here's the explain output:\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3226201.13..3226201.38 rows=100 width=308) (actual time=66311.929..66312.053 rows=100 loops=1)\n -> Sort (cost=3226200.88..3234250.28 rows=3219757 width=308) (actual time=66311.826..66311.965 rows=200 loops=1)\n Sort Key: date_part('epoch'::text, tape.last_write_date)\n -> Hash Join (cost=3.26..242948.97 rows=3219757 width=308) (actual time=3.165..31680.830 rows=3219757 loops=1)\n Hash Cond: (\"outer\".filesystem_id = \"inner\".id)\n -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312) (actual time=2.824..18175.863 rows=3219757 loops=1)\n -> Hash (cost=3.01..3.01 rows=101 width=4) (actual time=0.204..0.204 rows=101 loops=1)\n -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4) (actual time=0.004..0.116 rows=101 loops=1)\n Total runtime: 66553.643 ms\n\n Here's a depesz link with that output: http://explain.depesz.com/s/AUR\n\n\nThings I've tried:\n\n 1. I added an index on last_write_date with:\n\n create index tape_last_write_date_idx on tape(last_write_date);\n\n and there was no improvement in query time.\n\n 2. I bumped:\n effective_cache_size to 1/2 system RAM (1GB)\n shared_buffers to 1/4 system RAM (512MB)\n work_mem to 10MB\n and there was no improvement in query time.\n\n 3. I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n using the same hardware and it was about 5 times faster (nice work,\n whoever did that!). Unfortunately upgrading is not an option, so this\n is more of an anecdote. I would think the query could go much faster\n in either environment with some optimization.\n\n\n\nThe EXACT PostgreSQL version you are running:\n PostgreSQL 8.1.17 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070115 (SUSE Linux)\n\nHow you installed PostgreSQL:\n Standard SuSE SLES 10-SP3 RPMs:\n postgresql-devel-8.1.17-0.3\n postgresql-pl-8.1.17-0.4\n postgresql-libs-8.1.17-0.3\n postgresql-8.1.17-0.3\n postgresql-server-8.1.17-0.3\n postgresql-contrib-8.1.17-0.3\n\nChanges made to the settings in the postgresql.conf file:\n Only the memory changes mentioned above.\n\nOperating system and version:\n Linux acp1 2.6.16.60-0.54.5-default #1 Fri Sep 4 01:28:03 UTC 2009 i686 i686 i386 GNU/Linux\n\n SLES 10-SP3\n\nWhat program you're using to connect to PostgreSQL:\n Perl DBI\n Perl v5.8.8\n\nWhat version of the ODBC/JDBC/ADO/etc driver you're using, if any:\n perl-DBD-Pg 1.43\n\nIf you're using a connection pool, load balancer or application server, which one you're using and its version:\n None.\n\nIs there anything remotely unusual in the PostgreSQL server logs?\n No, they're empty.\n\n\nCPU manufacturer and model:\n Intel Celeron CPU 440 @ 2.00GHz\n\nAmount and size of RAM installed:\n 2GB RAM\n\nStorage details (important for performance and corruption questions):\n\n Do you use a RAID controller?\n No.\n How many hard disks are connected to the system and what types are they?\n We use a single Hitachi HDT72102 SATA drive (250GB) 7200 RPM.\n How are your disks arranged for storage?\n Postgres lives on the same 100GB ext3 partition as the OS.\n\n\nThanks,\nSean\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 29 Oct 2012 12:18:19 -0700 (PDT)",
"msg_from": "salah jubeh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for help with slow query"
},
{
"msg_contents": "I thought that an index was implicitly created for foreign keys, but I see that that's not true. I've just created one now and re-ran the query but it did not change the query plan or run time.\n \nThanks,\nSean\n\n________________________________________\nFrom: salah jubeh [[email protected]]\nSent: Monday, October 29, 2012 3:18 PM\nTo: Woolcock, Sean; [email protected]\nSubject: Re: [PERFORM] Request for help with slow query\n\nDid you try to add an index on filesystem_id\n\n\n________________________________\nFrom: \"Woolcock, Sean\" <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSent: Monday, October 29, 2012 6:41 PM\nSubject: [PERFORM] Request for help with slow query\n\nHi, thanks for any help. I've tried to be thorough, but let me know if I should\nprovide more information.\n\nA description of what you are trying to achieve and what results you expect:\n I have a large (3 million row) table called \"tape\" that represents files,\n which I join to a small (100 row) table called \"filesystem\" that represents\n filesystems. 
I have a web interface that allows you to sort by a number of\n fields in the tape table and view the results 100 at a time (using LIMIT\n and OFFSET).\n\n The data only changes hourly and I do a \"vacuum analyze\" after all changes.\n\n The tables are defined as:\n\n create table filesystem (\n id serial primary key,\n host varchar(256),\n storage_path varchar(2048) not null check (storage_path != ''),\n mounted_on varchar(2048) not null check (mounted_on != ''),\n constraint unique_fs unique(host, storage_path)\n );\n create table tape (\n id serial primary key,\n volser char(255) not null check (volser != ''),\n path varchar(2048) not null check (path != ''),\n scratched boolean not null default FALSE,\n last_write_date timestamp not null default current_timestamp,\n last_access_date timestamp not null default current_timestamp,\n filesystem_id integer references filesystem not null,\n size bigint not null check (size >= 0),\n worm_status char,\n encryption char,\n job_name char(8),\n job_step char(8),\n dsname char(17),\n recfm char(3),\n block_size int,\n lrecl int,\n constraint filesystem_already_has_that_volser unique(filesystem_id, volser)\n );\n\n An example query that's running slowly for me is:\n\n select tape.volser,\n tape.path,\n tape.scratched,\n tape.size,\n extract(epoch from tape.last_write_date) as last_write_date,\n extract(epoch from tape.last_access_date) as last_access_date\n from tape\n inner join filesystem\n on (tape.filesystem_id = filesystem.id<http://filesystem.id/>)\n order by last_write_date desc\n limit 100\n offset 100;\n\n On Postgres 8.1.17 this takes about 60 seconds. 
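An aside on the paging pattern above: with LIMIT/OFFSET every page re-reads and discards all of the skipped rows, so page times grow with the offset. A tiny self-contained sketch of the keyset ("seek") alternative — SQLite is used purely as a stand-in for illustration, it is not the 8.1 planner, and only the `tape`/`last_write_date` names come from the schema above; everything else is invented for the demo:

```python
# Keyset ("seek") pagination demo: instead of OFFSET, remember the last
# (last_write_date, id) pair shown on the previous page and seek past it.
# SQLite in-memory stand-in; names loosely follow the "tape" schema above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table tape (id integer primary key, last_write_date integer)")
conn.executemany("insert into tape (last_write_date) values (?)",
                 [(i % 1000,) for i in range(5000)])
# Illustrative index matching the sort; id breaks ties deterministically.
conn.execute("create index tape_lwd_idx on tape (last_write_date desc, id desc)")

# Page 2 via OFFSET: the engine must read and discard the first 100 rows.
offset_page = conn.execute(
    "select id, last_write_date from tape "
    "order by last_write_date desc, id desc limit 100 offset 100").fetchall()

# Page 2 via keyset: fetch page 1, then seek past its last row.
page1 = conn.execute(
    "select id, last_write_date from tape "
    "order by last_write_date desc, id desc limit 100").fetchall()
last_id, last_date = page1[-1]
keyset_page = conn.execute(
    "select id, last_write_date from tape "
    "where last_write_date < ? or (last_write_date = ? and id < ?) "
    "order by last_write_date desc, id desc limit 100",
    (last_date, last_date, last_id)).fetchall()

assert keyset_page == offset_page  # same rows, without the OFFSET scan
```

The same idea in Postgres would have the web interface remember the (last_write_date, id) pair of the last row displayed and seek past it, instead of passing an ever-growing OFFSET.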
I would like it to be faster.\n\n Here's the explain output:\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3226201.13..3226201.38 rows=100 width=308) (actual time=66311.929..66312.053 rows=100 loops=1)\n -> Sort (cost=3226200.88..3234250.28 rows=3219757 width=308) (actual time=66311.826..66311.965 rows=200 loops=1)\n Sort Key: date_part('epoch'::text, tape.last_write_date)\n -> Hash Join (cost=3.26..242948.97 rows=3219757 width=308) (actual time=3.165..31680.830 rows=3219757 loops=1)\n Hash Cond: (\"outer\".filesystem_id = \"inner\".id)\n -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312) (actual time=2.824..18175.863 rows=3219757 loops=1)\n -> Hash (cost=3.01..3.01 rows=101 width=4) (actual time=0.204..0.204 rows=101 loops=1)\n -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4) (actual time=0.004..0.116 rows=101 loops=1)\n Total runtime: 66553.643 ms\n\n Here's a depesz link with that output: http://explain.depesz.com/s/AUR\n\n\nThings I've tried:\n\n 1. I added an index on last_write_date with:\n\n create index tape_last_write_date_idx on tape(last_write_date);\n\n and there was no improvement in query time.\n\n 2. I bumped:\n effective_cache_size to 1/2 system RAM (1GB)\n shared_buffers to 1/4 system RAM (512MB)\n work_mem to 10MB\n and there was no improvement in query time.\n\n 3. I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n using the same hardware and it was about 5 times faster (nice work,\n whoever did that!). Unfortunately upgrading is not an option, so this\n is more of an anecdote. 
I would think the query could go much faster\n in either environment with some optimization.\n\n\n\nThe EXACT PostgreSQL version you are running:\n PostgreSQL 8.1.17 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070115 (SUSE Linux)\n\nHow you installed PostgreSQL:\n Standard SuSE SLES 10-SP3 RPMs:\n postgresql-devel-8.1.17-0.3\n postgresql-pl-8.1.17-0.4\n postgresql-libs-8.1.17-0.3\n postgresql-8.1.17-0.3\n postgresql-server-8.1.17-0.3\n postgresql-contrib-8.1.17-0.3\n\nChanges made to the settings in the postgresql.conf file:\n Only the memory changes mentioned above.\n\nOperating system and version:\n Linux acp1 2.6.16.60-0.54.5-default #1 Fri Sep 4 01:28:03 UTC 2009 i686 i686 i386 GNU/Linux\n\n SLES 10-SP3\n\nWhat program you're using to connect to PostgreSQL:\n Perl DBI\n Perl v5.8.8\n\nWhat version of the ODBC/JDBC/ADO/etc driver you're using, if any:\n perl-DBD-Pg 1.43\n\nIf you're using a connection pool, load balancer or application server, which one you're using and its version:\n None.\n\nIs there anything remotely unusual in the PostgreSQL server logs?\n No, they're empty.\n\n\nCPU manufacturer and model:\n Intel Celeron CPU 440 @ 2.00GHz\n\nAmount and size of RAM installed:\n 2GB RAM\n\nStorage details (important for performance and corruption questions):\n\n Do you use a RAID controller?\n No.\n How many hard disks are connected to the system and what types are they?\n We use a single Hitachi HDT72102 SATA drive (250GB) 7200 RPM.\n How are your disks arranged for storage?\n Postgres lives on the same 100GB ext3 partition as the OS.\n\n\nThanks,\nSean\n\n\n--\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n",
"msg_date": "Mon, 29 Oct 2012 15:25:56 -0400",
"msg_from": "\"Woolcock, Sean\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Request for help with slow query"
},
{
"msg_contents": "\nOn 10/29/2012 12:25 PM, Woolcock, Sean wrote:\n>\n> I thought that an index was implicitly created for foreign keys, but I see that that's not true. I've just created one now and re-ran the query but it did not change the query plan or run time.\n\n1. Explain analyze, not explain please\n\nCheck to see if estimated rows differs wildly from actual.\n\n2. Seriously... 8.1? That is not supported. Please upgrade to a \nsupported version of PostgreSQL.\n\nhttp://www.postgresql.org/support/versioning/\n\n3. Simple things:\n\n A. Have you run analyze on the two tables?\n B. What is your default_statistics_target?\n\nJoshua D. Drake\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC\n@cmdpromptinc - 509-416-6579\n\n",
"msg_date": "Mon, 29 Oct 2012 12:32:43 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for help with slow query"
},
{
"msg_contents": "On 10/29/2012 12:41 PM, Woolcock, Sean wrote:\n\n> An example query that's running slowly for me is:\n>\n> select tape.volser,\n> tape.path,\n> tape.scratched,\n> tape.size,\n> extract(epoch from tape.last_write_date) as last_write_date,\n> extract(epoch from tape.last_access_date) as last_access_date\n> from tape\n> inner join filesystem\n> on (tape.filesystem_id = filesystem.id)\n> order by last_write_date desc\n> limit 100\n> offset 100;\n\nIs this a representative example? From the looks of this, you could \nentirely drop the join against the filesystems table, because you're not \nusing it in the SELECT or WHERE sections at all. You don't need that \njoin in this example.\n\n> -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312)\n> (actual time=2.824..18175.863 rows=3219757 loops=1)\n> -> Hash (cost=3.01..3.01 rows=101 width=4) (actual\n> time=0.204..0.204 rows=101 loops=1)\n> -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4)\n> (actual time=0.004..0.116 rows=101 loops=1)\n> Total runtime: 66553.643 ms\n\nI think we can stop looking at this point. Because of the ORDER clause, \nit has to read the entire tape table because you have no information on \nlast_write_date it can use. Then, it has to read the entire filesystem \ntable because you asked it to do a join, even if you threw away the results.\n\n> 1. I added an index on last_write_date with:\n> and there was no improvement in query time.\n\nI'm not sure 8.1 knows what to do with that. But I can guarantee newer \nversions would do a reverse index scan on this index to find the top 100 \nrows, even with the offset. You can also do this with newer versions, \nsince it's the most common query you run:\n\ncreate index tape_last_write_date_idx on tape (last_write_date DESC);\n\nWhich would at least give you forward read order when addressing this index.\n\n> 3. 
I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n> using the same hardware and it was about 5 times faster (nice work,\n\nIt would be an order of magnitude faster than that if you add the index \nalso.\n\n> Unfortunately upgrading is not an option, so this is more of an\n> anecdote. I would think the query could go much faster in either\n> environment with some optimization.\n\nYou desperately need to reconsider this. PostgreSQL 8.1 is no longer \nsupported, and was last updated in late 2010. Any bug fixes, including \nknown corruption and security bugs, are no longer being backported. \nEvery day you run on an 8.1 install is a risk. The story is similar with \n8.2. Even 8.3 is on the way to retirement. You're *six* major versions \nbehind the main release.\n\nAt the very least, you need to upgrade PostgreSQL from 8.1.17 to 8.1.23. \nYou're still on a version of PG that's almost 7-years old, but at least \nyou'd have the most recent patch level.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Mon, 29 Oct 2012 14:36:21 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for help with slow query"
},
{
"msg_contents": "As shaun has indicated, there is no need for join, also as Joshua suggested, it is good to upgrade your server. also add indexes for your predicates and foreign keys and you will get a desired result.\n\nRegards\n\n\n\n\n\n________________________________\n From: Shaun Thomas <[email protected]>\nTo: \"Woolcock, Sean\" <[email protected]> \nCc: \"[email protected]\" <[email protected]> \nSent: Monday, October 29, 2012 8:36 PM\nSubject: Re: [PERFORM] Request for help with slow query\n \nOn 10/29/2012 12:41 PM, Woolcock, Sean wrote:\n\n> An example query that's running slowly for me is:\n> \n> select tape.volser,\n> tape.path,\n> tape.scratched,\n> tape.size,\n> extract(epoch from tape.last_write_date) as last_write_date,\n> extract(epoch from tape.last_access_date) as last_access_date\n> from tape\n> inner join filesystem\n> on (tape.filesystem_id = filesystem.id)\n> order by last_write_date desc\n> limit 100\n> offset 100;\n\nIs this a representative example? From the looks of this, you could entirely drop the join against the filesystems table, because you're not using it in the SELECT or WHERE sections at all. You don't need that join in this example.\n\n> -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312)\n> (actual time=2.824..18175.863 rows=3219757 loops=1)\n> -> Hash (cost=3.01..3.01 rows=101 width=4) (actual\n> time=0.204..0.204 rows=101 loops=1)\n> -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4)\n> (actual time=0.004..0.116 rows=101 loops=1)\n> Total runtime: 66553.643 ms\n\nI think we can stop looking at this point. Because of the ORDER clause, it has to read the entire tape table because you have no information on last_write_date it can use. Then, it has to read the entire filesystem table because you asked it to do a join, even if you threw away the results.\n\n> 1. I added an index on last_write_date with:\n> and there was no improvement in query time.\n\nI'm not sure 8.1 knows what to do with that. 
But I can guarantee newer versions would do a reverse index scan on this index to find the top 100 rows, even with the offset. You can also do this with newer versions, since it's the most common query you run:\n\ncreate index tape_last_write_date_idx on tape (last_write_date DESC);\n\nWhich would at least give you forward read order when addressing this index.\n\n> 3. I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n> using the same hardware and it was about 5 times faster (nice work,\n\nIt would be an order of magnitude faster than that if you add the index also.\n\n> Unfortunately upgrading is not an option, so this is more of an\n> anecdote. I would think the query could go much faster in either\n> environment with some optimization.\n\nYou desperately need to reconsider this. PostgreSQL 8.1 is no longer supported, and was last updated in late 2010. Any bug fixes, including known corruption and security bugs, are no longer being backported. Every day you run on an 8.1 install is a risk. The story is similar with 8.2. Even 8.3 is on the way to retirement. You're *six* major versions behind the main release.\n\nAt the very least, you need to upgrade PostgreSQL from 8.1.17 to 8.1.23. You're still on a version of PG that's almost 7-years old, but at least you'd have the most recent patch level.\n\n\n-- Shaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 29 Oct 2012 12:49:52 -0700 (PDT)",
"msg_from": "salah jubeh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for help with slow query"
},
{
"msg_contents": "I'm going to push for the upgrade and make the other suggested changes.\n\nThanks to all for the help,\nSean\n\n________________________________________\nFrom: salah jubeh [[email protected]]\nSent: Monday, October 29, 2012 3:49 PM\nTo: [email protected]; Woolcock, Sean\nCc: [email protected]\nSubject: Re: [PERFORM] Request for help with slow query\n\nAs shaun has indicated, there is no need for join, also as Joshua suggested, it is good to upgrade your server. also add indexes for your predicates and foreign keys and you will get a desired result.\n\nRegards\n\n\n________________________________\nFrom: Shaun Thomas <[email protected]>\nTo: \"Woolcock, Sean\" <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSent: Monday, October 29, 2012 8:36 PM\nSubject: Re: [PERFORM] Request for help with slow query\n\nOn 10/29/2012 12:41 PM, Woolcock, Sean wrote:\n\n> An example query that's running slowly for me is:\n>\n> select tape.volser,\n> tape.path,\n> tape.scratched,\n> tape.size,\n> extract(epoch from tape.last_write_date) as last_write_date,\n> extract(epoch from tape.last_access_date) as last_access_date\n> from tape\n> inner join filesystem\n> on (tape.filesystem_id = filesystem.id<http://filesystem.id/>)\n> order by last_write_date desc\n> limit 100\n> offset 100;\n\nIs this a representative example? From the looks of this, you could entirely drop the join against the filesystems table, because you're not using it in the SELECT or WHERE sections at all. You don't need that join in this example.\n\n> -> Seq Scan on tape (cost=0.00..178550.57 rows=3219757 width=312)\n> (actual time=2.824..18175.863 rows=3219757 loops=1)\n> -> Hash (cost=3.01..3.01 rows=101 width=4) (actual\n> time=0.204..0.204 rows=101 loops=1)\n> -> Seq Scan on filesystem (cost=0.00..3.01 rows=101 width=4)\n> (actual time=0.004..0.116 rows=101 loops=1)\n> Total runtime: 66553.643 ms\n\nI think we can stop looking at this point. 
Because of the ORDER clause, it has to read the entire tape table because you have no information on last_write_date it can use. Then, it has to read the entire filesystem table because you asked it to do a join, even if you threw away the results.\n\n> 1. I added an index on last_write_date with:\n> and there was no improvement in query time.\n\nI'm not sure 8.1 knows what to do with that. But I can guarantee newer versions would do a reverse index scan on this index to find the top 100 rows, even with the offset. You can also do this with newer versions, since it's the most common query you run:\n\ncreate index tape_last_write_date_idx on tape (last_write_date DESC);\n\nWhich would at least give you forward read order when addressing this index.\n\n> 3. I ran the query against the same data in Postgres 9.1.6 rather than 8.1.17\n> using the same hardware and it was about 5 times faster (nice work,\n\nIt would be an order of magnitude faster than that if you add the index also.\n\n> Unfortunately upgrading is not an option, so this is more of an\n> anecdote. I would think the query could go much faster in either\n> environment with some optimization.\n\nYou desperately need to reconsider this. PostgreSQL 8.1 is no longer supported, and was last updated in late 2010. Any bug fixes, including known corruption and security bugs, are no longer being backported. Every day you run on an 8.1 install is a risk. The story is similar with 8.2. Even 8.3 is on the way to retirement. You're *six* major versions behind the main release.\n\nAt the very least, you need to upgrade PostgreSQL from 8.1.17 to 8.1.23. You're still on a version of PG that's almost 7-years old, but at least you'd have the most recent patch level.\n\n\n-- Shaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]<mailto:[email protected]>\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- Sent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n",
"msg_date": "Mon, 29 Oct 2012 15:53:41 -0400",
"msg_from": "\"Woolcock, Sean\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Request for help with slow query"
},
{
"msg_contents": "Sean Woolcock wrote:\n> I have a large (3 million row) table called \"tape\" that represents\nfiles,\n> which I join to a small (100 row) table called \"filesystem\" that\nrepresents\n> filesystems. I have a web interface that allows you to sort by a\nnumber of\n> fields in the tape table and view the results 100 at a time (using\nLIMIT\n> and OFFSET).\n> \n> The data only changes hourly and I do a \"vacuum analyze\" after all\nchanges.\n\n> An example query that's running slowly for me is:\n> \n> select tape.volser,\n> tape.path,\n> tape.scratched,\n> tape.size,\n> extract(epoch from tape.last_write_date) as\nlast_write_date,\n> extract(epoch from tape.last_access_date) as\nlast_access_date\n> from tape\n> inner join filesystem\n> on (tape.filesystem_id = filesystem.id)\n> order by last_write_date desc\n> limit 100\n> offset 100;\n> \n> On Postgres 8.1.17 this takes about 60 seconds. I would like it to\nbe faster.\n\n> Here's a depesz link with that output:\nhttp://explain.depesz.com/s/AUR\n\nI don't see anything obviously wrong there.\n\nAt least the sequential scan on \"tape\" is necessary.\n\n> Things I've tried:\n[...]\n> 3. I ran the query against the same data in Postgres 9.1.6 rather\nthan 8.1.17\n> using the same hardware and it was about 5 times faster (nice\nwork,\n> whoever did that!). Unfortunately upgrading is not an option,\nso this\n> is more of an anecdote. 
I would think the query could go much\nfaster\n> in either environment with some optimization.\n\nCan you post EXPLAIN ANALYZE for the query on 9.1.6?\n\nStaying on 8.1 is not a good idea, but I guess you know that.\n\n> Storage details (important for performance and corruption questions):\n> Do you use a RAID controller?\n> No.\n> How many hard disks are connected to the system and what types are\nthey?\n> We use a single Hitachi HDT72102 SATA drive (250GB) 7200 RPM.\n> How are your disks arranged for storage?\n> Postgres lives on the same 100GB ext3 partition as the OS.\n\nI'd say that a query like this will always be disk bound.\nGetting faster storage should help.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 30 Oct 2012 10:02:19 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for help with slow query"
}
] |
[
{
"msg_contents": "Shaun Thomas wrote:\n\n> I know that current_date seems like an edge case, but I can't see\n> how getting the most recent activity for something is an uncommon\n> activity. Tip tracking is actually the most frequent pattern in the\n> systems I've seen.\n\nYeah, this has been a recurring problem with database statistics\nwith various products for at least 20 years. For a while I was using\na product whose optimizer engineers referred to it as \"data skew\" and\nrecommended adding a \"dummy\" entry to get a single value out past the\nmaximum end of the range. If you couldn't stomach the dummy data,\nthey had detailed instructions for dumping your statistics, tinkering\nwith the end of it to allow for the issue, and writing it back over\nthe actual statistics gathered. We need a better answer than that.\n\n> I just wonder if this particular tweak isn't more of a regression\n> than initially thought.\n\nIt does seem like we have a serious regression in terms of this\nparticular issue.\n\n-Kevin\n\n",
"msg_date": "Mon, 29 Oct 2012 14:51:14 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting Statistics on Functional Indexes"
}
] |
[
{
"msg_contents": "Woolcock, Sean wrote:\n\n> A description of what you are trying to achieve and what results\n> you expect:\n> I have a large (3 million row) table called \"tape\" that represents\n> files, which I join to a small (100 row) table called \"filesystem\"\n> that represents filesystems. I have a web interface that allows\n> you to sort by a number of fields in the tape table and view the\n> results 100 at a time (using LIMIT and OFFSET).\n\nHigher OFFSET settings may be slow because it has to read through\nOFFSET result rows before returning anything. There are other ways\nthis problem can be solved, like saving key values at both ends of\nthe displayed range.\n\n> On Postgres 8.1.17 this takes about 60 seconds. I would like it to\n> be faster.\n\nThere was a major overall speed improvement in 8.2. And another in\n8.3. Etc. 8.1 has been out of support for about two years now.\n\nhttp://www.postgresql.org/support/versioning/\n\n> 1. I added an index on last_write_date with:\n> \n> create index tape_last_write_date_idx on tape(last_write_date);\n> \n> and there was no improvement in query time.\n\nI was going to ask whether you tried an index on tape\n(last_write_date DESC) -- but that feature was added in 8.3. Never\nmind.\n\n> 2. I bumped:\n> effective_cache_size to 1/2 system RAM (1GB)\n> shared_buffers to 1/4 system RAM (512MB)\n> work_mem to 10MB\n> and there was no improvement in query time.\n\nNot bad adjustments probably, anyway.\n\n> 3. I ran the query against the same data in Postgres 9.1.6 rather\n> than 8.1.17 using the same hardware and it was about 5 times\n> faster (nice work, whoever did that!). 
Unfortunately upgrading is\n> not an option,\n\nThat is unfortunate.\n\n> CPU manufacturer and model:\n> Intel Celeron CPU 440 @ 2.00GHz\n> \n> Amount and size of RAM installed:\n> 2GB RAM\n> \n> Storage details (important for performance and corruption\n> questions):\n> \n> Do you use a RAID controller?\n> No.\n> How many hard disks are connected to the system and what types are\n> they?\n> We use a single Hitachi HDT72102 SATA drive (250GB) 7200 RPM.\n> How are your disks arranged for storage?\n> Postgres lives on the same 100GB ext3 partition as the OS.\n\nThat's not exactly blazingly fast hardware. If you value that data at\nall, I hope you have paid a lot of attention to backups, because that\nsounds like a machine likely to have a drive over 5 years old, which\nmakes it highly likely to fail hard without a lot of advance warning.\n\nYou seem to be heavily cached. Have you tried these settings?:\n\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\ncpu_tuple_cost = 0.03\n\nThat might encourage it to use that index you added. Well, if a\nversion of PostgreSQL that old did reverse index scans. If not you\nmight be able to add a functional index and coax it into use.\n\n-Kevin\n\n",
"msg_date": "Mon, 29 Oct 2012 15:52:29 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Request for help with slow query"
}
] |
[
{
"msg_contents": "As I increase concurrency I'm experiencing what I believe are too slow\nqueries given the minuscule amount of data in my tables.\n\nI have 20 Django worker processes and use ab to generate 3000 requests\nto a particular URL which is doing some read only queries. I ran this\nwith ab concurrency level set to 4, 12 and 20. With some aggregation\nusing pgbadger here are the results:\n\nconcurrency 4\nNumber of queries: 39,046\nTotal query duration: 4.255s\nSlowest query: 33ms\nTotal taken to execute slowest query 6000 times: 1.633s\nNumber of queries taking over 100ms: 0\nNumber of queries taking over 50ms: 0\nNumber of queries taking over 25ms: 1\nNumber of queries taking over 10ms: 7\n\nconcurrency 12\nNumber of queries: 39,035\nTotal query duration: 7.435s\nSlowest query: 174ms\nTotal taken to execute slowest query 6000 times: 2.617s\nNumber of queries taking over 100ms: 2\nNumber of queries taking over 50ms: 4\nNumber of queries taking over 25ms: 17\nNumber of queries taking over 10ms: 99\n\nconcurrency 20\nNumber of queries: 39,043\nTotal query duration: 11.614s\nSlowest query: 198ms\nTotal taken to execute slowest query 6000 times: 4.286s\nNumber of queries taking over 100ms: 5\nNumber of queries taking over 50ms: 19\nNumber of queries taking over 25ms: 52\nNumber of queries taking over 10ms: 255\n\nAll tests have 0 INSERTs, 0 UPDATEs, 0 DELETEs, aprox. 
18000 SELECTs\nand 21000 OTHERs (Django's ORM sends a lot of SET TIME ZONE, SET\ndefault_transaction_isolation TO 'READ committed'; etc)\n\nThe 3 queries that take longest in total are:\nSELECT \"django_site\".\"id\", \"django_site\".\"domain\",\n\"django_site\".\"name\", \"vwf_customsite\".\"site_ptr_id\",\n\"vwf_customsite\".\"geo_reference_id\",\n\"vwf_customsite\".\"friendly_domain\", \"vwf_customsite\".\"ws_machine\",\n\"vwf_customsite\".\"public\", \"vwf_customsite\".\"user_limit\",\n\"vwf_customsite\".\"hidden_login_and_registration\",\n\"vwf_customsite\".\"logo\", \"vwf_customsite\".\"LANGUAGE\",\n\"vwf_customsite\".\"ga_tracker_id\", \"vwf_customsite\".\"always_running\",\n\"vwf_customsite\".\"deleted\", \"vwf_customsite\".\"version\",\n\"vwf_customsite\".\"contact_email\" FROM \"vwf_customsite\" INNER JOIN\n\"django_site\" ON ( \"vwf_customsite\".\"site_ptr_id\" = \"django_site\".\"id\"\n) WHERE \"vwf_customsite\".\"site_ptr_id\" = 0;\n\nSELECT \"vwf_plugin\".\"id\", \"vwf_plugin\".\"name\", \"vwf_plugin\".\"site_id\",\n\"vwf_plugin\".\"enabled\" FROM \"vwf_plugin\" WHERE (\n\"vwf_plugin\".\"site_id\" = 0 AND \"vwf_plugin\".\"name\" = '' ) ;\n\nSELECT \"django_site\".\"id\", \"django_site\".\"domain\",\n\"django_site\".\"name\" FROM \"django_site\" WHERE \"django_site\".\"domain\" =\n'';\n\n\nThe tables are extremely small: django_site has 8 rows, vwf_customsite\nhas 7 and vwf_plugin 43. My intuition would say that for these read\nonly queries on tables this small no query should take more than 5 ms\neven for a concurrency level of 20 and that performance shouldn't\ndegrade at all when going from 4 to 20 concurrent ab requests. The\nCPUs are also used only about 10% so there should be plenty of\ncapacity for more concurrency.\n\nThe numbers above show a different situation though. 
The average for\nthe slowest query stays under 1ms but it grows when increasing\nconcurrency and there are spikes that really take too long IMO.\n\nAm I right that it should be possible to do better and if so how?\nThanks a lot for any ideas or insights!\n\nMore details about my setup:\n\nThe schemas:\n Table \"public.django_site\"\n Column | Type | Modifiers\n--------+------------------------+----------------------------------------------------------\n id | integer | not null default\nnextval('django_site_id_seq'::regclass)\n domain | character varying(100) | not null\n name | character varying(50) | not null\nIndexes:\n \"django_site_pkey\" PRIMARY KEY, btree (id)\nReferenced by:\n<snip list of 25 tables>\n\n Table \"public.vwf_customsite\"\n Column | Type | Modifiers\n-------------------------------+------------------------+-----------\n site_ptr_id | integer | not null\n geo_reference_id | integer |\n friendly_domain | character varying(100) | not null\n public | boolean | not null\n logo | character varying(100) |\n language | character varying(2) | not null\n ga_tracker_id | character varying(16) | not null\n version | character varying(100) | not null\n contact_email | character varying(254) | not null\n always_running | boolean | not null\n deleted | boolean | not null\n ws_machine | character varying(100) | not null\n user_limit | integer | not null\n hidden_login_and_registration | boolean | not null\nIndexes:\n \"vwf_customsite_pkey\" PRIMARY KEY, btree (site_ptr_id)\n \"vwf_customsite_geo_reference_id\" btree (geo_reference_id)\nForeign-key constraints:\n \"geo_reference_id_refs_id_488579c58f2d1a89\" FOREIGN KEY\n(geo_reference_id) REFERENCES geo_reference_georeference(id)\nDEFERRABLE INITIALLY DEFERRED\n \"site_ptr_id_refs_id_712ff223c9517f55\" FOREIGN KEY (site_ptr_id)\nREFERENCES django_site(id) DEFERRABLE INITIALLY DEFERRED\nReferenced by:\n<snip list of 1 table>\n\n Table \"public.vwf_plugin\"\n Column | Type | 
Modifiers\n---------+------------------------+---------------------------------------------------------\n id | integer | not null default\nnextval('vwf_plugin_id_seq'::regclass)\n name | character varying(255) | not null\n site_id | integer | not null\n enabled | boolean | not null default false\nIndexes:\n \"vwf_plugin_pkey\" PRIMARY KEY, btree (id)\n \"vwf_plugin_site_id\" btree (site_id)\nForeign-key constraints:\n \"site_id_refs_id_4ac2846d79527bae\" FOREIGN KEY (site_id)\nREFERENCES django_site(id) DEFERRABLE INITIALLY DEFERRED\n\nHardware:\nVirtual machine running on top of VMWare\n4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz\n4GB of RAM\n\nDisk that is virtual enough that I have no idea what it is, I know\nthat there's some big storage shared between multiple virtual\nmachines. Filesystem is ext4 with default mount options. I can imagine\nIO performance is not great for this machine, however, for the\nreadonly queries and the very small tables above I would expect\neverything to be cached in memory and the disk not to matter.\n\nUbuntu 12.04 with Postgres installed from Ubuntu's packages\n\npgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\nas Postgres. Django connects via TCP/IP to pgbouncer (it does one\nconnection and one transaction per request) and pgbouncer keeps\nconnections open to Postgres via Unix socket. 
The Python client is\nself compiled psycopg2-2.4.5.\n\nuname -a\nLinux wcea014.virtuocity.eu 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26\n21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux\n\nNon default settings\n name |\n current_setting\n----------------------------+------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.1.6 on\nx86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro\n4.6.3-1ubuntu5) 4.6.3, 64-bit\n client_encoding | UTF8\n effective_cache_size | 1000MB\n external_pid_file | /var/run/postgresql/9.1-main.pid\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n log_checkpoints | on\n log_connections | on\n log_destination | stderr\n log_directory | /var/log/postgresql\n log_disconnections | on\n log_filename | postgresql-%Y-%m-%d-concTODO.log\n log_line_prefix | %t [%p]: [%l-1]\n log_lock_waits | on\n log_min_duration_statement | 0\n log_rotation_size | 0\n log_temp_files | 0\n logging_collector | on\n maintenance_work_mem | 400MB\n max_connections | 100\n max_stack_depth | 2MB\n port | 2345\n random_page_cost | 2\n server_encoding | UTF8\n shared_buffers | 800MB\n ssl | on\n TimeZone | localtime\n unix_socket_directory | /var/run/postgresql\n wal_buffers | 16MB\n work_mem | 10MB\n\n",
"msg_date": "Tue, 30 Oct 2012 01:11:31 +0100",
"msg_from": "Catalin Iacob <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n\n> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n> connection and one transaction per request) and pgbouncer keeps\n> connections open to Postgres via Unix socket.\n\nIsn't pgbouncer single-threaded?\n\nIf you hitting it with tiny queries as fast as possible from 20\nconnections, I would think that it would become the bottleneck.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Tue, 30 Oct 2012 14:58:33 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "Jeff / Catalin --\n\nJeff Janes wrote:\n\n>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n>\n>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n>> connection and one transaction per request) and pgbouncer keeps\n>> connections open to Postgres via Unix socket.\n>\n>Isn't pgbouncer single-threaded?\n>\n>If you hitting it with tiny queries as fast as possible from 20\n>connections, I would think that it would become the bottleneck.\n>\n>Cheers,\n>\n\n\nI'm sure pgbouncer has some threshold where it breaks down, but we have servers (postgres 8.4 and 9.1) with connections from runtime (fed via haproxy) to pgbouncer that routinely have tens of thousands of connections in but only 40-70 postgres connections to the postgres cluster itself. Mix of queries but most are simple. Typically a few thousand queries a second to the readonly boxes, about the same to a beefier read / write master.\n\nThis is a slightly old pgbouncer at that ... used is a fairly basic mode.\n\nGreg Williamson\n\n\n",
"msg_date": "Tue, 30 Oct 2012 15:11:54 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson\n<[email protected]> wrote:\n> Jeff / Catalin --\n>\n> Jeff Janes wrote:\n>\n>>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n>>\n>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n>>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n>>> connection and one transaction per request) and pgbouncer keeps\n>>> connections open to Postgres via Unix socket.\n>>\n>>Isn't pgbouncer single-threaded?\n>>\n>>If you hitting it with tiny queries as fast as possible from 20\n>>connections, I would think that it would become the bottleneck.\n>>\n>>Cheers,\n>>\n>\n>\n> I'm sure pgbouncer has some threshold where it breaks down, but we have servers (postgres 8.4 and 9.1) with connections from runtime (fed via haproxy) to pgbouncer that routinely have tens of thousands of connections in but only 40-70 postgres connections to the postgres cluster itself. Mix of queries but most are simple. Typically a few thousand queries a second to the readonly boxes, about the same to a beefier read / write master.\n>\n> This is a slightly old pgbouncer at that ... used is a fairly basic mode.\n\nI've used pgbouncer in two different environments now with thousands\nof connections and hundreds upon hundreds of queries per second and it\nhas yet to be a bottleneck in either place as well.\n\n",
"msg_date": "Tue, 30 Oct 2012 16:16:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 3:16 PM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson\n> <[email protected]> wrote:\n>> Jeff / Catalin --\n>>\n>> Jeff Janes wrote:\n>>\n>>>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n>>>\n>>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n>>>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n>>>> connection and one transaction per request) and pgbouncer keeps\n>>>> connections open to Postgres via Unix socket.\n>>>\n>>>Isn't pgbouncer single-threaded?\n>>>\n>>>If you hitting it with tiny queries as fast as possible from 20\n>>>connections, I would think that it would become the bottleneck.\n>>>\n>>>Cheers,\n>>>\n>>\n>>\n>> I'm sure pgbouncer has some threshold where it breaks down, but we have servers (postgres 8.4 and 9.1) with connections from runtime (fed via haproxy) to pgbouncer that routinely have tens of thousands of connections in but only 40-70 postgres connections to the postgres cluster itself. Mix of queries but most are simple. Typically a few thousand queries a second to the readonly boxes, about the same to a beefier read / write master.\n>>\n>> This is a slightly old pgbouncer at that ... used is a fairly basic mode.\n>\n> I've used pgbouncer in two different environments now with thousands\n> of connections and hundreds upon hundreds of queries per second and it\n> has yet to be a bottleneck in either place as well.\n\nThe original poster has over 9000 queries per second in his best case,\nso I think that that is at the upper range of your experience. Using\n\"pgbench -S\" type workload, pgbouncer is definitely a bottleneck (1.7\nfold slower at -c4 -j4 on a 4 CPU machine, and using -f with a dummy\nstatement of \"select 1;\" it is 3 fold slower than going directly to\nthe server. 
As -c increases, pgbouncer actually falls off faster than\ndirect connections do up through at least -c20 -j20).\n\nOf course with your thousands of connections, direct connections are\nprobably not feasible (and with that many connections, most of them\nare probably idle most of the time, pgbouncer's strength)\n\nAnyway, opening and closing connections to pgbouncer is far less\ncostly than opening them directly to psql, but still very expensive\ncompared to not doing so. The original poster should see if he can\navoid that.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 31 Oct 2012 09:45:25 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 4:58 PM, Jeff Janes <[email protected]> wrote:\n> On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n>\n>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n>> connection and one transaction per request) and pgbouncer keeps\n>> connections open to Postgres via Unix socket.\n>\n> Isn't pgbouncer single-threaded?\n>\n> If you hitting it with tiny queries as fast as possible from 20\n> connections, I would think that it would become the bottleneck.\n\nSingle threaded asynchronous servers are known to scale better for\nthis type of workload than multi-threaded systems because you don't\nhave to do locking and context switching. By 'for this type of\nworkload', I mean workloads where most of the real work done is i/o --\npgbouncer as it's just routing data between network sockets is\nbasically a textbook case for single threaded server.\n\nstunnel, by comparison, which has non-triival amounts of non i/o work\ngoing on, is more suited for threads. It also has severe scaling\nlimits relative to pgbouncer.\n\npgbouncer is an absolute marvel and should be standard kit in any case\nyou're concerned about server scaling in terms of number of active\nconnections to the database. I'm in the camp that application side\nconnection pools are junk and should be avoided when possible.\n\nmerlin\n\n",
"msg_date": "Wed, 31 Oct 2012 13:39:53 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Wed, Oct 31, 2012 at 11:39 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Oct 30, 2012 at 4:58 PM, Jeff Janes <[email protected]> wrote:\n>> On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n>>\n>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n>>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n>>> connection and one transaction per request) and pgbouncer keeps\n>>> connections open to Postgres via Unix socket.\n>>\n>> Isn't pgbouncer single-threaded?\n>>\n>> If you hitting it with tiny queries as fast as possible from 20\n>> connections, I would think that it would become the bottleneck.\n>\n> Single threaded asynchronous servers are known to scale better for\n> this type of workload than multi-threaded systems because you don't\n> have to do locking and context switching.\n\nHow much locking would there be in what pgbouncer does?\n\nOn a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries\n(like \"select 1;\" or \"set timezone...\") against 2 instances of\npgbouncer, I get nearly twice the throughput as if I use only one\ninstance.\n\nA rather odd workload, maybe, but it does seem to be similar to the\none that started this thread.\n\n\n> pgbouncer is an absolute marvel and should be standard kit in any case\n> you're concerned about server scaling in terms of number of active\n> connections to the database. I'm in the camp that application side\n> connection pools are junk and should be avoided when possible.\n\nI have nothing against pgbouncer, but it is not without consequences.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sat, 3 Nov 2012 16:53:38 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Sat, Nov 3, 2012 at 6:53 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, Oct 31, 2012 at 11:39 AM, Merlin Moncure <[email protected]> wrote:\n>> On Tue, Oct 30, 2012 at 4:58 PM, Jeff Janes <[email protected]> wrote:\n>>> On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob <[email protected]> wrote:\n>>>\n>>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine\n>>>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one\n>>>> connection and one transaction per request) and pgbouncer keeps\n>>>> connections open to Postgres via Unix socket.\n>>>\n>>> Isn't pgbouncer single-threaded?\n>>>\n>>> If you hitting it with tiny queries as fast as possible from 20\n>>> connections, I would think that it would become the bottleneck.\n>>\n>> Single threaded asynchronous servers are known to scale better for\n>> this type of workload than multi-threaded systems because you don't\n>> have to do locking and context switching.\n>\n> How much locking would there be in what pgbouncer does?\n>\n> On a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries\n> (like \"select 1;\" or \"set timezone...\") against 2 instances of\n> pgbouncer, I get nearly twice the throughput as if I use only one\n> instance.\n>\n> A rather odd workload, maybe, but it does seem to be similar to the\n> one that started this thread.\n>\n>\n>> pgbouncer is an absolute marvel and should be standard kit in any case\n>> you're concerned about server scaling in terms of number of active\n>> connections to the database. 
I'm in the camp that application side\n>> connection pools are junk and should be avoided when possible.\n>\n> I have nothing against pgbouncer, but it is not without consequences.\n\nagreed -- also, I was curious and independently verified you results.\npgbouncer doesn't lock -- if you strace it, it just goes epoll_wait,\nrecv_from, send_to endlessly while under heavy load from pgbench.\nThis suggests that the bottleneck *is* pgbouncer, at least in some\ncases. It's hard to believe all the userland copying is causing that,\nbut I guess that must be the case.\n\nmerlin\n\n",
"msg_date": "Mon, 5 Nov 2012 16:51:44 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Sun, Nov 4, 2012 at 1:53 AM, Jeff Janes <[email protected]> wrote:\n> On a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries\n> (like \"select 1;\" or \"set timezone...\") against 2 instances of\n> pgbouncer, I get nearly twice the throughput as if I use only one\n> instance.\n>\n> A rather odd workload, maybe, but it does seem to be similar to the\n> one that started this thread.\n\nEvery-connection-is-busy is pessimal workload for pgbouncer,\nas it has nothing useful to contribute to setup, just overhead.\n\n-- \nmarko\n\n",
"msg_date": "Tue, 6 Nov 2012 00:58:34 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Mon, Nov 5, 2012 at 2:58 PM, Marko Kreen <[email protected]> wrote:\n> On Sun, Nov 4, 2012 at 1:53 AM, Jeff Janes <[email protected]> wrote:\n>> On a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries\n>> (like \"select 1;\" or \"set timezone...\") against 2 instances of\n>> pgbouncer, I get nearly twice the throughput as if I use only one\n>> instance.\n>>\n>> A rather odd workload, maybe, but it does seem to be similar to the\n>> one that started this thread.\n>\n> Every-connection-is-busy is pessimal workload for pgbouncer,\n> as it has nothing useful to contribute to setup, just overhead.\n\nIt still has something to contribute if connections are made and\nbroken too often (pgbench -C type workload), as seems to be the case\nhere.\n\nIf he can get an application-side pooler (or perhaps just a change in\nconfiguration) such that the connections are not made and broken so\noften, then removing pgbouncer from the loop would probably be a win.\n\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 5 Nov 2012 15:31:52 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Tue, Nov 6, 2012 at 1:31 AM, Jeff Janes <[email protected]> wrote:\n> On Mon, Nov 5, 2012 at 2:58 PM, Marko Kreen <[email protected]> wrote:\n>> On Sun, Nov 4, 2012 at 1:53 AM, Jeff Janes <[email protected]> wrote:\n>>> On a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries\n>>> (like \"select 1;\" or \"set timezone...\") against 2 instances of\n>>> pgbouncer, I get nearly twice the throughput as if I use only one\n>>> instance.\n>>>\n>>> A rather odd workload, maybe, but it does seem to be similar to the\n>>> one that started this thread.\n>>\n>> Every-connection-is-busy is pessimal workload for pgbouncer,\n>> as it has nothing useful to contribute to setup, just overhead.\n>\n> It still has something to contribute if connections are made and\n> broken too often (pgbench -C type workload), as seems to be the case\n> here.\n\nI did not notice -C in your message above.\n\nIn such case, in a practical, non-pgbench workload, you should\nmove pgbouncer to same machine as app, so any overhead\nis just CPU, spread over all app instances, and does not\ninclude network latency.\n\n> If he can get an application-side pooler (or perhaps just a change in\n> configuration) such that the connections are not made and broken so\n> often, then removing pgbouncer from the loop would probably be a win.\n\nYes, if app has good pooling, there is less use for pgbouncer.\n\nIn any case, only long connections should go over network.\n\n-- \nmarko\n\n",
"msg_date": "Tue, 6 Nov 2012 01:58:07 +0200",
"msg_from": "Marko Kreen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Mon, Nov 5, 2012 at 3:58 PM, Marko Kreen <[email protected]> wrote:\n> On Tue, Nov 6, 2012 at 1:31 AM, Jeff Janes <[email protected]> wrote:\n>> On Mon, Nov 5, 2012 at 2:58 PM, Marko Kreen <[email protected]> wrote:\n>>> On Sun, Nov 4, 2012 at 1:53 AM, Jeff Janes <[email protected]> wrote:\n>>>> On a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries\n>>>> (like \"select 1;\" or \"set timezone...\") against 2 instances of\n>>>> pgbouncer, I get nearly twice the throughput as if I use only one\n>>>> instance.\n>>>>\n>>>> A rather odd workload, maybe, but it does seem to be similar to the\n>>>> one that started this thread.\n>>>\n>>> Every-connection-is-busy is pessimal workload for pgbouncer,\n>>> as it has nothing useful to contribute to setup, just overhead.\n>>\n>> It still has something to contribute if connections are made and\n>> broken too often (pgbench -C type workload), as seems to be the case\n>> here.\n>\n> I did not notice -C in your message above.\n\nRight, I was assuming he would somehow solve that problem and was\nlooking ahead to the next one.\n\nI had also tested the -C case, and pgbouncer can be the bottleneck\nthere as well, but bypassing it will not solve the bottleneck because\nit will be even worse with direct connections. Running multiple\ninstances of pgbouncer can, but only if you can make the application\ndo some kind of load balancing between them.\n\nI think there are three different uses of pgbouncer.\n\n1) connections made and closed too often, even if there are never very\nmany at a time (e.g. stateless CGI)\n2) hundreds or thousands of connections, with most idle at any given time.\n3) hundreds or thousands, all of which want to be active at once but\nwhich need to be forced not to be so the server doesn't fall over due\nto contention.\n\nI'm not sure 2 and 3 are really fundamentally different.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 5 Nov 2012 17:30:36 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "Thanks to everybody for their help, sorry for not getting back earlier\nbut available time shrunk very quickly as the deadline approached and\nafterwards this kind of slipped off my mind.\n\nOn Tue, Nov 6, 2012 at 12:31 AM, Jeff Janes <[email protected]> wrote:\n> It still has something to contribute if connections are made and\n> broken too often (pgbench -C type workload), as seems to be the case\n> here.\n\nDjango opens a connection for every request and closes it at the end\nof the request. As far as I know you can't override this, they tell\nyou that if connection overhead is too big you should use a connection\npool like pgbouncer. You still get latency by doing the connection and\nsome overhead in pgbouncer but you skip creating a Postgres process to\nhandle the new connection. And indeed, after starting to use pgbouncer\nwe could handle more concurrent users.\n\n> If he can get an application-side pooler (or perhaps just a change in\n> configuration) such that the connections are not made and broken so\n> often, then removing pgbouncer from the loop would probably be a win.\n\nDjango doesn't offer application-side poolers, they tell you to use\npgbouncer (see above). So pgbouncer is a net gain since it avoids\nPostgres process spawning overhead.\n\nFollowing recommendations in this thread, I replaced the global\npgbouncer on the DB machine by one pgbouncer for each webserver\nmachine and that helped. I didn't run the synthetic ab test in my\ninitial message on the new configuration but for our more realistic\ntests, page response times did shorten. The system is in production\nnow so it's harder to run the tests again to see exactly how much it\nhelped but it definitely did.\n\nSo it seems we're just doing too many connections and too many\nqueries. 
Each page view from a user translates to multiple requests to\nthe application server and each of those translates to a connection\nand at least a few queries (which are done in middleware and therefore\nhappen for each and every query). One pgbouncer can handle lots of\nconcurrent idle connections and lots of queries/second but our 9000\nqueries/second seem to push it too much. The longer term solution for\nus would probably be to do less connections (by doing less Django\nrequests for a page) and less queries, before our deadline we were\njust searching for a short term solution to handle an expected traffic\nspike.\n\nCheers,\nCatalin Iacob\n\n",
"msg_date": "Sun, 25 Nov 2012 17:30:04 +0100",
"msg_from": "Catalin Iacob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On 25.11.2012 18:30, Catalin Iacob wrote:\n> So it seems we're just doing too many connections and too many\n> queries. Each page view from a user translates to multiple requests to\n> the application server and each of those translates to a connection\n> and at least a few queries (which are done in middleware and therefore\n> happen for each and every query). One pgbouncer can handle lots of\n> concurrent idle connections and lots of queries/second but our 9000\n> queries/second to seem push it too much. The longer term solution for\n> us would probably be to do less connections (by doing less Django\n> requests for a page) and less queries, before our deadline we were\n> just searching for a short term solution to handle an expected traffic\n> spike.\n\nThe typical solution to that is caching, see \nhttps://docs.djangoproject.com/en/1.4/topics/cache/.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Nov 2012 09:46:33 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On Mon, Nov 26, 2012 at 12:46 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 25.11.2012 18:30, Catalin Iacob wrote:\n>>\n>> So it seems we're just doing too many connections and too many\n>> queries. Each page view from a user translates to multiple requests to\n>> the application server and each of those translates to a connection\n>> and at least a few queries (which are done in middleware and therefore\n>> happen for each and every query). One pgbouncer can handle lots of\n>> concurrent idle connections and lots of queries/second but our 9000\n>> queries/second to seem push it too much. The longer term solution for\n>> us would probably be to do less connections (by doing less Django\n>> requests for a page) and less queries, before our deadline we were\n>> just searching for a short term solution to handle an expected traffic\n>> spike.\n>\n>\n> The typical solution to that is caching, see\n> https://docs.djangoproject.com/en/1.4/topics/cache/.\n\nThe first caching solution they recommend is memcached, which I too\nhighly recommend. Put a single instance on each server in your farm\ngive it 1G in each place and go to town. You can get MASSIVE\nperformance boosts from memcache.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Nov 2012 16:17:55 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
}
] |
[
{
"msg_contents": "Hello all,\n\nI have been pulling my hair out over the last few days trying to get any useful performance out of the following \npainfully slow query.\nThe query is JPA created, I've just cleaned the aliases to make it more readable.\nUsing 'distinct' or 'group by' deliver about the same results, but 'distinct' is marginally better.\nHardware is pretty low end (a test box), but is mostly dedicated to PostgreSQL.\nThe box spec and configuration is included at the end of this post - Some of the values have been changed just to see if \nthings get better.\nInserts have also become extremely slow. I was expecting a drop off when the database grew out of memory, but not this much.\n\nAm I really missing the target somewhere?\nAny help and or suggestions will be very much appreciated.\n\nBest regards,\n\nAndy.\n\nhttp://explain.depesz.com/s/cfb\n\nselect distinct tr.nr as tnr\n, tr.time_end as tend\n, c.id_board as cb\n, c.id_board_mini as cbm\n, ti.id_test_result as itr\nfrom test_item ti\n, test_result tr\n, component c\n, recipe_version rv\nwhere ti.id_test_result = tr.id\nand ti.id_component = c.id\nand tr.id_recipe_version = rv.id\nand (rv.id_recipe in ('6229bf04-ae38-11e1-a955-0021974df2b2'))\nand tr.time_end <> cast('1970-01-01 01:00:00.000' as timestamp)\nand tr.time_begin >= cast('2012-10-22 00:00:14.383' as timestamp)\nand ti.type = 'Component'\n--group by tr.nr , tr.time_end , c.id_board , c.id_board_mini , ti.id_test_result\norder by tr.time_end asc limit 10000\n\n-- ########################\n\n-- Table: test_item\n\n-- Table Size 2119 MB\n-- Indexes Size 1845 MB\n-- Live Tuples 6606871\n\n-- DROP TABLE test_item;\n\nCREATE TABLE test_item\n(\n id character varying(36) NOT NULL,\n angle double precision NOT NULL,\n description character varying(1000),\n designation character varying(128) NOT NULL,\n failed boolean NOT NULL,\n node integer NOT NULL,\n nr integer NOT NULL,\n nr_verified integer,\n occurred timestamp without time zone NOT 
NULL,\n ocr character varying(384),\n pack_industry_name character varying(255),\n passed boolean NOT NULL,\n pin character varying(8),\n pos_valid boolean NOT NULL,\n pos_x double precision NOT NULL,\n pos_y double precision NOT NULL,\n pos_z double precision NOT NULL,\n qref character varying(255) NOT NULL,\n reference_id character varying(128) NOT NULL,\n repaired boolean NOT NULL,\n size_x double precision NOT NULL,\n size_y double precision NOT NULL,\n sort integer NOT NULL,\n subtype character varying(20) NOT NULL,\n type character varying(20) NOT NULL,\n valid boolean NOT NULL,\n version integer,\n id_component character varying(36),\n id_pack character varying(36),\n id_test_item character varying(36),\n id_test_result character varying(36) NOT NULL,\n CONSTRAINT test_item_pkey PRIMARY KEY (id),\n CONSTRAINT fk_test_item_component FOREIGN KEY (id_component)\n REFERENCES component (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_test_item_pack FOREIGN KEY (id_pack)\n REFERENCES pack (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_test_item_test_item FOREIGN KEY (id_test_item)\n REFERENCES test_item (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_test_item_test_result FOREIGN KEY (id_test_result)\n REFERENCES test_result (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\n-- Index: ix_test_item_c\n\n-- DROP INDEX ix_test_item_c;\n\nCREATE INDEX ix_test_item_c\n ON test_item\n USING btree\n (type COLLATE pg_catalog.\"default\")\n WHERE type::text = 'Component'::text;\n\n-- Index: ix_test_item_id_component\n\n-- DROP INDEX ix_test_item_id_component;\n\nCREATE INDEX ix_test_item_id_component\n ON test_item\n USING btree\n (id_component COLLATE pg_catalog.\"default\");\n\n-- Index: ix_test_item_id_test_item\n\n-- DROP INDEX ix_test_item_id_test_item;\n\nCREATE INDEX ix_test_item_id_test_item\n ON test_item\n USING btree\n (id_test_item 
COLLATE pg_catalog.\"default\");\n\n-- Index: ix_test_item_id_test_result\n\n-- DROP INDEX ix_test_item_id_test_result;\n\nCREATE INDEX ix_test_item_id_test_result\n ON test_item\n USING btree\n (id_test_result COLLATE pg_catalog.\"default\");\n\n-- Index: ix_test_item_type\n\n-- DROP INDEX ix_test_item_type;\n\nCREATE INDEX ix_test_item_type\n ON test_item\n USING btree\n (type COLLATE pg_catalog.\"default\");\n\n-- Table: test_result\n\n-- DROP TABLE test_result;\n\nCREATE TABLE test_result\n(\n id character varying(36) NOT NULL,\n description character varying(255) NOT NULL,\n name character varying(100) NOT NULL,\n nr integer NOT NULL,\n state integer NOT NULL,\n time_begin timestamp without time zone NOT NULL,\n time_end timestamp without time zone NOT NULL,\n version integer,\n id_machine character varying(36) NOT NULL,\n id_recipe_version character varying(36) NOT NULL,\n CONSTRAINT test_result_pkey PRIMARY KEY (id),\n CONSTRAINT fk_test_result_machine FOREIGN KEY (id_machine)\n REFERENCES machine (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_test_result_recipe_version FOREIGN KEY (id_recipe_version)\n REFERENCES recipe_version (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\n-- Index: ix_test_result_id_id_recipe_version\n\n-- DROP INDEX ix_test_result_id_id_recipe_version;\n\nCREATE INDEX ix_test_result_id_id_recipe_version\n ON test_result\n USING btree\n (id COLLATE pg_catalog.\"default\", id_recipe_version COLLATE pg_catalog.\"default\");\n\n-- Index: ix_test_result_id_id_recipe_version_time_end_time_begin\n\n-- DROP INDEX ix_test_result_id_id_recipe_version_time_end_time_begin;\n\nCREATE INDEX ix_test_result_id_id_recipe_version_time_end_time_begin\n ON test_result\n USING btree\n (id COLLATE pg_catalog.\"default\", id_recipe_version COLLATE pg_catalog.\"default\", time_end, time_begin);\n\n-- Index: ix_test_result_id_machine\n\n-- DROP INDEX ix_test_result_id_machine;\n\nCREATE 
INDEX ix_test_result_id_machine\n ON test_result\n USING btree\n (id_machine COLLATE pg_catalog.\"default\");\n\n-- Index: ix_test_result_id_recipe_version\n\n-- DROP INDEX ix_test_result_id_recipe_version;\n\nCREATE INDEX ix_test_result_id_recipe_version\n ON test_result\n USING btree\n (id_recipe_version COLLATE pg_catalog.\"default\");\n\n-- Index: ix_test_result_id_recipe_version_time\n\n-- DROP INDEX ix_test_result_id_recipe_version_time;\n\nCREATE INDEX ix_test_result_id_recipe_version_time\n ON test_result\n USING btree\n (id_recipe_version COLLATE pg_catalog.\"default\", time_end, time_begin);\n\n-- Index: ix_test_result_time_begin\n\n-- DROP INDEX ix_test_result_time_begin;\n\nCREATE INDEX ix_test_result_time_begin\n ON test_result\n USING btree\n (time_begin);\n\n-- Index: ix_test_result_time_end\n\n-- DROP INDEX ix_test_result_time_end;\n\nCREATE INDEX ix_test_result_time_end\n ON test_result\n USING btree\n (time_end);\n\n-- Index: ix_test_result_time_id_recipe_version\n\n-- DROP INDEX ix_test_result_time_id_recipe_version;\n\nCREATE INDEX ix_test_result_time_id_recipe_version\n ON test_result\n USING btree\n (time_end, id_recipe_version COLLATE pg_catalog.\"default\");\n\n-- Table: component\n\n-- DROP TABLE component;\n\nCREATE TABLE component\n(\n id character varying(36) NOT NULL,\n cad_angle double precision NOT NULL,\n cad_part character varying(100),\n cad_type character varying(100),\n cad_x double precision NOT NULL,\n cad_y double precision NOT NULL,\n cad_z double precision NOT NULL,\n cid integer NOT NULL,\n comment character varying(255),\n name character varying(80) NOT NULL,\n ocr character varying(384),\n pin_count integer NOT NULL,\n pos_angle double precision NOT NULL,\n pos_height double precision NOT NULL,\n pos_width double precision NOT NULL,\n pos_x double precision NOT NULL,\n pos_y double precision NOT NULL,\n pos_z double precision NOT NULL,\n ref_des character varying(100),\n ref_id character varying(100),\n type character 
varying(25) NOT NULL,\n version integer,\n id_board character varying(36),\n id_board_mini character varying(36),\n id_frame character varying(36),\n id_pack character varying(36),\n id_recipe_version character varying(36) NOT NULL,\n global_x double precision NOT NULL,\n global_y double precision NOT NULL,\n global_angle double precision NOT NULL,\n CONSTRAINT component_pkey PRIMARY KEY (id),\n CONSTRAINT fk_component_board FOREIGN KEY (id_board)\n REFERENCES board (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_component_board_mini FOREIGN KEY (id_board_mini)\n REFERENCES board_mini (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_component_frame FOREIGN KEY (id_frame)\n REFERENCES frame (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_component_pack FOREIGN KEY (id_pack)\n REFERENCES pack (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_component_recipe_version FOREIGN KEY (id_recipe_version)\n REFERENCES recipe_version (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\n-- Index: ix_component_cid\n\n-- DROP INDEX ix_component_cid;\n\nCREATE INDEX ix_component_cid\n ON component\n USING btree\n (cid);\n\n-- Index: ix_component_id_board\n\n-- DROP INDEX ix_component_id_board;\n\nCREATE INDEX ix_component_id_board\n ON component\n USING btree\n (id_board COLLATE pg_catalog.\"default\");\n\n-- Index: ix_component_id_board_mini\n\n-- DROP INDEX ix_component_id_board_mini;\n\nCREATE INDEX ix_component_id_board_mini\n ON component\n USING btree\n (id_board_mini COLLATE pg_catalog.\"default\");\n\n-- Index: ix_component_id_frame\n\n-- DROP INDEX ix_component_id_frame;\n\nCREATE INDEX ix_component_id_frame\n ON component\n USING btree\n (id_frame COLLATE pg_catalog.\"default\");\n\n-- Index: ix_component_id_pack\n\n-- DROP INDEX ix_component_id_pack;\n\nCREATE INDEX ix_component_id_pack\n ON component\n USING btree\n 
(id_pack COLLATE pg_catalog.\"default\");\n\n-- Index: ix_component_id_recipe_version\n\n-- DROP INDEX ix_component_id_recipe_version;\n\nCREATE INDEX ix_component_id_recipe_version\n ON component\n USING btree\n (id_recipe_version COLLATE pg_catalog.\"default\");\n\n-- Table: recipe_version\n\n-- DROP TABLE recipe_version;\n\nCREATE TABLE recipe_version\n(\n id character varying(36) NOT NULL,\n certified smallint NOT NULL,\n deprecated boolean NOT NULL,\n edit timestamp without time zone NOT NULL,\n name character varying(255),\n qpc_identifier integer NOT NULL,\n recipe_version integer NOT NULL,\n revision character varying(150),\n version integer,\n id_comment character varying(36),\n id_recipe character varying(36) NOT NULL,\n id_recipe_version character varying(36),\n intention smallint NOT NULL DEFAULT 0,\n CONSTRAINT recipe_version_pkey PRIMARY KEY (id),\n CONSTRAINT fk_recipe_version_comment FOREIGN KEY (id_comment)\n REFERENCES comment (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_recipe_version_recipe FOREIGN KEY (id_recipe)\n REFERENCES recipe (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_recipe_version_recipe_version FOREIGN KEY (id_recipe_version)\n REFERENCES recipe_version (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\n-- Index: ix_recipe_version_certified\n\n-- DROP INDEX ix_recipe_version_certified;\n\nCREATE INDEX ix_recipe_version_certified\n ON recipe_version\n USING btree\n (certified);\n\n-- Index: ix_recipe_version_id_id_recipe\n\n-- DROP INDEX ix_recipe_version_id_id_recipe;\n\nCREATE INDEX ix_recipe_version_id_id_recipe\n ON recipe_version\n USING btree\n (id COLLATE pg_catalog.\"default\", id_recipe COLLATE pg_catalog.\"default\");\n\n-- Index: ix_recipe_version_id_recipe\n\n-- DROP INDEX ix_recipe_version_id_recipe;\n\nCREATE INDEX ix_recipe_version_id_recipe\n ON recipe_version\n USING btree\n (id_recipe COLLATE 
pg_catalog.\"default\");\n\n-- Index: ix_recipe_version_name\n\n-- DROP INDEX ix_recipe_version_name;\n\nCREATE INDEX ix_recipe_version_name\n ON recipe_version\n USING btree\n (name COLLATE pg_catalog.\"default\");\n\n-- Index: ix_recipe_version_recipe_version\n\n-- DROP INDEX ix_recipe_version_recipe_version;\n\nCREATE INDEX ix_recipe_version_recipe_version\n ON recipe_version\n USING btree\n (recipe_version);\n\n-- Index: ix_recipe_version_recipe_version_test\n\n-- DROP INDEX ix_recipe_version_recipe_version_test;\n\nCREATE INDEX ix_recipe_version_recipe_version_test\n ON recipe_version\n USING btree\n (id_recipe COLLATE pg_catalog.\"default\", certified, id COLLATE pg_catalog.\"default\");\n\n\n-- ########################\n\n\"version\";\"PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 32-bit\"\n\"autovacuum\";\"on\"\n\"autovacuum_analyze_scale_factor\";\"0.01\"\n\"autovacuum_analyze_threshold\";\"20\"\n\"autovacuum_max_workers\";\"5\"\n\"autovacuum_naptime\";\"15s\"\n\"autovacuum_vacuum_cost_delay\";\"20ms\"\n\"autovacuum_vacuum_scale_factor\";\"0.01\"\n\"autovacuum_vacuum_threshold\";\"20\"\n\"bytea_output\";\"escape\"\n\"checkpoint_completion_target\";\"0.9\"\n\"checkpoint_segments\";\"64\"\n\"client_encoding\";\"UNICODE\"\n\"client_min_messages\";\"notice\"\n\"cpu_index_tuple_cost\";\"0.001\"\n\"cpu_operator_cost\";\"0.0005\"\n\"cpu_tuple_cost\";\"0.003\"\n\"deadlock_timeout\";\"30s\"\n\"default_statistics_target\";\"200\"\n\"effective_cache_size\";\"3GB\"\n\"escape_string_warning\";\"off\"\n\"external_pid_file\";\"orprovision.pid\"\n\"from_collapse_limit\";\"12\"\n\"fsync\";\"off\"\n\"geqo_threshold\";\"14\"\n\"join_collapse_limit\";\"12\"\n\"lc_collate\";\"German_Germany.1252\"\n\"lc_ctype\";\"German_Germany.1252\"\n\"listen_addresses\";\"*\"\n\"log_autovacuum_min_duration\";\"10s\"\n\"log_checkpoints\";\"on\"\n\"log_destination\";\"stderr\"\n\"log_filename\";\"day-%d.log\"\n\"log_line_prefix\";\"%t:%r:%u@%d:[%p]: 
\"\n\"log_lock_waits\";\"on\"\n\"log_min_duration_statement\";\"3s\"\n\"log_min_error_statement\";\"log\"\n\"log_min_messages\";\"log\"\n\"log_rotation_size\";\"1MB\"\n\"log_statement\";\"none\"\n\"log_truncate_on_rotation\";\"on\"\n\"logging_collector\";\"on\"\n\"maintenance_work_mem\";\"256MB\"\n\"max_connections\";\"50\"\n\"max_locks_per_transaction\";\"500\"\n\"max_prepared_transactions\";\"250\"\n\"max_stack_depth\";\"2MB\"\n\"port\";\"6464\"\n\"random_page_cost\";\"5\"\n\"seq_page_cost\";\"2\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"256MB\"\n\"statement_timeout\";\"40min\"\n\"synchronous_commit\";\"off\"\n\"wal_buffers\";\"16MB\"\n\"work_mem\";\"16MB\"\n\nOperating System: Windows 7 Home Premium 32-bit (6.1, Build 7601) Service Pack 1 (7601.win7sp1_gdr.120830-0333)\nLanguage: German (Regional Setting: German)\nSystem Manufacturer: Acer\nSystem Model: Aspire X1700\nBIOS: Default System BIOS\nProcessor: Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz (4 CPUs), ~2.3GHz\nMemory: 4096MB RAM\nAvailable OS Memory: 3072MB RAM\nPage File: 3328MB used, 2811MB available\n\n\n\n",
"msg_date": "Tue, 30 Oct 2012 08:33:53 +0100",
"msg_from": "Andy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query, where am I going wrong?"
},
{
"msg_contents": "Andy wrote:\n> I have been pulling my hair out over the last few days trying to get\nany useful performance out of the\n> following\n> painfully slow query.\n> The query is JPA created, I've just cleaned the aliases to make it\nmore readable.\n> Using 'distinct' or 'group by' deliver about the same results, but\n'distinct' is marginally better.\n> Hardware is pretty low end (a test box), but is mostly dedicated to\nPostgreSQL.\n> The box spec and configuration is included at the end of this post -\nSome of the values have been\n> changed just to see if\n> things get better.\n> Inserts have also become extremely slow. I was expecting a drop off\nwhen the database grew out of\n> memory, but not this much.\n> \n> Am I really missing the target somewhere?\n> Any help and or suggestions will be very much appreciated.\n> \n> Best regards,\n> \n> Andy.\n> \n> http://explain.depesz.com/s/cfb\n\nThe estimate on the join between recipe_version and test_result is not\ngood.\n\nMaybe things will improve if you increase the statistics on\ntest_result.id_recipe_version.\n\nIf that does not help, maybe the nested loop join that takes\nall your time can be sped up with the following index:\n\nCREATE INDEX any_name ON test_item (id_test_result, type);\n\nBut I would not expect much improvement there.\n\nBTW, you seem to have an awful lot of indexes defined, some\nof which seem redundant.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 30 Oct 2012 09:25:27 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "Thanks very much Laurenz.\n\nI'll put your suggestions into motion right away and let you know the\nresults.\n\n\nAlbe Laurenz *EXTERN* wrote\n> BTW, you seem to have an awful lot of indexes defined, some\n> of which seem redundant.\n\nI am in the process of pruning unused/useless indexes on this database - So\nmany of them will be dropped. Most of them are not in production and are\npast play things on this test system.\n\nThe actual production test_item table gets about 140k inserts a day (avg).\nHaving this test system slow, dirty and bloated is quite good as it helps us\nidentify potential bottlenecks before they hit production. Partitioning is\nalso on the cards, but solving this current issue is only going to help.\n\nThanks again.\n\nAndy\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730025.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Tue, 30 Oct 2012 02:47:51 -0700 (PDT)",
"msg_from": "AndyG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "A marginal improvement.\n\nhttp://explain.depesz.com/s/y63\n\nI am going to normalize the table some more before partitioning.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730059.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Tue, 30 Oct 2012 07:54:33 -0700 (PDT)",
"msg_from": "AndyG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "AndyG wrote:\n> A marginal improvement.\n> \n> http://explain.depesz.com/s/y63\n\nThat's what I thought.\n\nIncreasing the statistics for test_result.id_recipe_version\nhad no effect?\n\n> I am going to normalize the table some more before partitioning.\n\nHow do you think that partitioning will help?\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Tue, 30 Oct 2012 16:48:56 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "Albe Laurenz *EXTERN* wrote\n> Increasing the statistics for test_result.id_recipe_version\n> had no effect?\n> \n>> I am going to normalize the table some more before partitioning.\n> \n> How do you think that partitioning will help?\n\nI increased the statistics in steps up to 5000 (with vacuum analyse) - Seems\nto be as good as it gets.\n\nhttp://explain.depesz.com/s/z2a\n\nThe simulated data is about a months worth. Partitioning is only really\nexpected to help on insert, but that's pretty critical for us.\n\nAt the moment test_item contains way too much repeated data IMHO, and I will\naddress that asap (is going to hurt ).\n\nI will also look into creating an aggregate table to hold the 'distinct'\nvalues.\n\nAndy.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730140.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 31 Oct 2012 02:18:23 -0700 (PDT)",
"msg_from": "AndyG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "AndyG wrote:\n>> Increasing the statistics for test_result.id_recipe_version\n>> had no effect?\n\n> I increased the statistics in steps up to 5000 (with vacuum analyse) -\nSeems\n> to be as good as it gets.\n> \n> http://explain.depesz.com/s/z2a\n\nJust out of curiosity, do you get a better plan with\nenable_nestloop=off?\nNot that I think it would be a good idea to change that\nsetting in general.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Wed, 31 Oct 2012 10:35:54 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "Much better...\n\nhttp://explain.depesz.com/s/uFi\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730145.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 31 Oct 2012 03:18:11 -0700 (PDT)",
"msg_from": "AndyG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "But why? Is there a way to force the planner into this?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730151.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 31 Oct 2012 04:28:12 -0700 (PDT)",
"msg_from": "AndyG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "> But why? Is there a way to force the planner into this?\n\nI don't know enough about the planner to answer the \"why\",\nbut the root of the problem seems to be the mis-estimate\nfor the join between test_result and recipe_version\n(1348 instead of 21983 rows).\n\nThat makes the planner think that a nested loop join\nwould be cheaper, but it really is not.\n\nI had hoped that improving statistics would improve that\nestimate.\n\nThe only way to force the planner to do it that way is\nto set enable_nestloop=off, but only for that one query.\nAnd even that is a bad idea, because for different\nconstant values or when the table data change, a nested\nloop join might actually be the best choice.\n\nI don't know how to solve that problem.\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Wed, 31 Oct 2012 14:29:44 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
},
{
"msg_contents": "Externalizing the limit has improved the speed a lot. Distinct is half a\nsecond faster than group by.\n\nhttp://explain.depesz.com/s/vP1\n\nwith tmp as (\nselect distinct tr.nr as tnr\n, tr.time_end as tend\n, c.id_board as cb\n, c.id_board_mini as cbm\n, ti.id_test_result as itr \nfrom test_item ti\n, test_result tr\n, component c \n, recipe_version rv \nwhere ti.id_test_result = tr.id \nand ti.id_component = c.id \nand tr.id_recipe_version = rv.id \nand (rv.id_recipe in ('6229bf04-ae38-11e1-a955-0021974df2b2')) \nand tr.time_end <> cast('1970-01-01 01:00:00.000' as timestamp) \nand tr.time_begin >= cast('2012-10-27 08:00:17.045' as timestamp)\nand ti.type = 'Component' \n--group by tr.nr , tr.time_end , c.id_board , c.id_board_mini ,\nti.id_test_result \norder by tr.time_end asc) \nselect * from tmp\nlimit 10000\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-where-am-I-going-wrong-tp5730015p5730185.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Wed, 31 Oct 2012 07:59:12 -0700 (PDT)",
"msg_from": "AndyG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query, where am I going wrong?"
}
] |
[
{
"msg_contents": "hi\n\n\ni have a sql file (its size is 1GB)\nwhen i execute it then the String is 987098801 byte too long for encoding\nconversion error occurred.\npls give me solution about\n\ni have XP 64-bit with 8 GB RAM shared_buffer 1GB check point = 34\n\n\nwith thanks\nmahavir\n\n",
"msg_date": "Tue, 30 Oct 2012 14:44:49 +0530",
"msg_from": "Mahavir Trivedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "out of memory"
},
{
"msg_contents": "> i have sql file (it's size are 1GB )\n> when i execute it then the String is 987098801 bytr too long for encoding\n> conversion error occured .\n> pls give me solution about\n\nYou hit the upper limit of internal memory allocation limit in\nPostgreSQL. IMO, there's no way to avoid the error except you use\nclient encoding identical to backend.\n\nHackers:\nThe particular limit seem to be set considering TOAST(from\ninclude/utils/memutils.h):\n\n * XXX This is deliberately chosen to correspond to the limiting size\n * of varlena objects under TOAST.\tSee VARSIZE_4B() and related macros\n * in postgres.h. Many datatypes assume that any allocatable size can\n * be represented in a varlena header.\n\nIMO the SQL string size limit is totally different from\nTOAST. Shouldn't we have different limit for SQL string?\n(MAX_CONVERSION_GROWTH is different story, of course)\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n",
"msg_date": "Tue, 30 Oct 2012 19:08:59 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: out of memory"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 6:08 AM, Tatsuo Ishii <[email protected]> wrote:\n>> i have sql file (it's size are 1GB )\n>> when i execute it then the String is 987098801 bytr too long for encoding\n>> conversion error occured .\n>> pls give me solution about\n>\n> You hit the upper limit of internal memory allocation limit in\n> PostgreSQL. IMO, there's no way to avoid the error except you use\n> client encoding identical to backend.\n\nWe recently had a customer who suffered a failure in pg_dump because\nthe quadruple-allocation required by COPY OUT for an encoding\nconversion exceeded allocatable memory. I wonder whether it would be\npossible to rearrange things so that we can do a \"streaming\" encoding\nconversion. That is, if we have a large datum that we're trying to\nsend back to the client, could we perhaps chop off the first 50MB or\nso, do the encoding on that amount of data, send the data to the\nclient, lather, rinse, repeat?\n\nYour recent work to increase the maximum possible size of large\nobjects (for which I thank you) seems like it could make these sorts\nof issues more common. As objects get larger, I don't think we can go\non assuming that it's OK for peak memory utilization to keep hitting\n5x or more.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 5 Nov 2012 12:27:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] out of memory"
},
{
"msg_contents": "On 11/05/12 9:27 AM, Robert Haas wrote:\n> That is, if we have a large datum that we're trying to\n> send back to the client, could we perhaps chop off the first 50MB or\n> so, do the encoding on that amount of data, send the data to the\n> client, lather, rinse, repeat?\n\nI'd suggest work_mem sized chunks for this?\n\n\n\n-- \njohn r pierce N 37, W 122\nsanta cruz ca mid-left coast\n\n\n",
"msg_date": "Mon, 05 Nov 2012 09:30:30 -0800",
"msg_from": "John R Pierce <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] out of memory"
}
] |
[
{
"msg_contents": "Hi,\n When I start my postgres, I am getting this error. I had \ninstalled 8.4 and 9.1.\nIt was working fine yesterday but not now.\nservice postgresql restart\n * Restarting PostgreSQL 8.4 database server\n* The PostgreSQL server failed to start. Please check the log output.\n\nIf I see the log, it shows yesterday's log report. Please give me a suggestion.\nThanks for reply.\n\n-- \nRegards,\nVignesh.T\n\n\n",
"msg_date": "Tue, 30 Oct 2012 14:54:00 +0530",
"msg_from": "vignesh <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL server failed to start"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 2:24 AM, vignesh <[email protected]> wrote:\n> Hi,\n> When i start my postgres. Iam getting this error.\n\nYou may want to ask on the pgsql-general mailing list [1]. This list\nis just for Postgres performance questions. While, technically,\nfailing to start outright could be considered a performance problem,\nthe general list may be better able to help you.\n\nAlso, please provide more details when you ask there (e.g., what\noperating system, how did you install Postgres, what changed between\nyesterday and now, etc.).\n\n[1]: http://archives.postgresql.org/pgsql-general/\n\n",
"msg_date": "Tue, 30 Oct 2012 08:43:27 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL server failed to start"
}
]
[
{
"msg_contents": "Catalin Iacob wrote:\n\n> Hardware:\n> Virtual machine running on top of VMWare\n> 4 cores, Intel(R) Xeon(R) CPU E5645 @ 2.40GHz\n> 4GB of RAM\n\nYou should carefully test transaction-based pools limited to around 8\nDB connections. Experiment with different size limits.\n\nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\n> Disk that is virtual enough that I have no idea what it is, I know\n> that there's some big storage shared between multiple virtual\n> machines. Filesystem is ext4 with default mount options.\n\nCan you change to noatime?\n\n> pgbouncer 1.4.2 installed from Ubuntu's packages on the same\n> machine as Postgres. Django connects via TCP/IP to pgbouncer (it\n> does one connection and one transaction per request) and pgbouncer\n> keeps connections open to Postgres via Unix socket. The Python\n> client is self compiled psycopg2-2.4.5.\n\nIs there a good transaction-based connection pooler in Python? You're\nbetter off with a good pool built in to the client application than\nwith a good pool running as a separate process between the client and\nthe database, IMO.\n\n> random_page_cost | 2\n\nFor fully cached databases I recommend random_page_cost = 1, and I\nalways recommend cpu_tuple_cost = 0.03.\n\n-Kevin\n\n",
"msg_date": "Tue, 30 Oct 2012 07:55:54 -0400",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
},
{
"msg_contents": "On 10/30/2012 06:55 AM, Kevin Grittner wrote:\n\n> Is there a good transaction-based connection pooler in Python?\n> You're better off with a good pool built in to the client application\n> than with a good pool running as a separate process between the\n> client and the database, IMO.\n\nCould you explain this a little more? My experience is almost always the \nexact opposite, especially in large clusters that may have dozens of \nservers all hitting the same database. A centralized pool has much less \nduplication and can serve from a smaller pool than having 12 servers \neach have 25 connections reserved in their own private pool or something.\n\nI mean... a pool is basically a proxy server. I don't have 12 individual \nproxy servers for 12 webservers.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Tue, 30 Oct 2012 09:02:28 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to keep queries low latency as concurrency increases"
}
]
[
{
"msg_contents": "Hi all\n\nI have a problem with a data import procedure that involve the following query:\n\nselect a,b,c,d\nfrom big_table b\njoin data_sequences_table ds\non b.key1 = ds.key1 and b.key2 = ds.key2\nwhere ds.import_id=xxxxxxxxxx\n\nThe \"big table\" has something like 10.000.000 records ore more\n(depending on the table, there are more than one of them).\nThe data are uploaded in 20k record blocks, and the keys are written\non \"data_sequences_table\". The keys are composite (key1,key2), and\nevery 5-10 sequences (depending on the size of the upload) the\ndata_sequences_table records are deleted.\nI have indexes on both the key on the big table and the import_id on\nthe sequence table.\n\nthe query plan evualuate like this:\n\nMerge Join (cost=2604203.98..2774528.51 rows=129904 width=20)\n Merge Cond: (((( big_table.key1)::numeric) =\ndata_sequences_table.key1) AND ((( big_table.key2)::numeric) =\ndata_sequences_table.key2))\n -> Sort (cost=2602495.47..2635975.81 rows=13392135 width=20)\n Sort Key: ((big_table.key1)::numeric), ((big_table.key2)::numeric)\n -> Seq Scan on big_table (cost=0.00..467919.35 rows=13392135 width=20)\n -> Sort (cost=1708.51..1709.48 rows=388 width=32)\n Sort Key: data_sequences_table.key1, data_sequences_table.key2\n -> Seq Scan on data_sequences_table (cost=0.00..1691.83\nrows=388 width=32)\n Filter: (import_id = 1351592072::numeric)\n\nIt executes in something like 80 seconds. The import procedure has\nmore than 500 occurrences of this situation. :(\nWhy is the big table evaluated with a seq scan? The result is 0 to\n20.000 records (the query returns the records that already exists and\nshould be updated, not inserted).. Can I do something to speed this\nup?\n\n-- \nVincenzo.\nImola Informatica\n\nAi sensi del D.Lgs. 196/2003 si precisa che le informazioni contenute\nin questo messaggio sono riservate ed a uso esclusivo del\ndestinatario.\nPursuant to Legislative Decree No. 
196/2003, you are hereby informed\nthat this message contains confidential information intended only for\nthe use of the addressee.\n\n",
"msg_date": "Tue, 30 Oct 2012 13:15:10 +0100",
"msg_from": "Vincenzo Melandri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seq scan on 10million record table.. why?"
},
{
"msg_contents": " Hi Vincenzo,\n\n On Tue, 30 Oct 2012 13:15:10 +0100, Vincenzo Melandri \n <[email protected]> wrote:\n> I have indexes on both the key on the big table and the import_id on\n> the sequence table.\n\n Forgive my quick answer, but it might be that the data you are \n retrieving is scattered throughout the whole table, and the index scan \n does not kick in (as it is more expensive to perform lots of random \n fetches rather than a single scan).\n\n To be able to help you though, I'd need to deeply look at the ETL \n process - I am afraid you need to use a different approach, involving \n either queues or partitioning.\n\n Sorry for not being able to help you more in this case.\n\n Cheers,\n Gabriele\n-- \n Gabriele Bartolini - 2ndQuadrant Italia\n PostgreSQL Training, Services and Support\n [email protected] - www.2ndQuadrant.it\n\n",
"msg_date": "Tue, 30 Oct 2012 14:45:11 +0100",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan on 10million record table..\n =?UTF-8?Q?why=3F?="
},
{
"msg_contents": "On 10/30/2012 07:15 AM, Vincenzo Melandri wrote:\n\n> Merge Join (cost=2604203.98..2774528.51 rows=129904 width=20)\n> Merge Cond: (((( big_table.key1)::numeric) =\n> data_sequences_table.key1) AND ((( big_table.key2)::numeric) =\n> data_sequences_table.key2))\n> -> Sort (cost=2602495.47..2635975.81 rows=13392135 width=20)\n> Sort Key: ((big_table.key1)::numeric), ((big_table.key2)::numeric)\n> -> Seq Scan on big_table (cost=0.00..467919.35 rows=13392135 width=20)\n> -> Sort (cost=1708.51..1709.48 rows=388 width=32)\n> Sort Key: data_sequences_table.key1, data_sequences_table.key2\n> -> Seq Scan on data_sequences_table (cost=0.00..1691.83\n> rows=388 width=32)\n> Filter: (import_id = 1351592072::numeric)\n\nAs always, we need to see an EXPLAIN ANALYZE, not just an EXPLAIN. We \nalso need to know the version of PostgreSQL and your server settings. \nPlease refer to this:\n\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nI see a lot of NUMERIC conversions in there, which suggests you're using \nNUMERIC for your keys. That's not really recommended practice, but also \nsuggests the possibility that all your types are not the same. So it \nwould be very helpful to also see the actual CREATE TABLE, and CREATE \nINDEX statements for those tables.\n\nWe can't help you with this limited information. Sorry.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Tue, 30 Oct 2012 08:53:00 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan on 10million record table.. why?"
},
{
"msg_contents": "It seems that your tables has different types for columns, see \"::numeric\"\nin \"((( big_table.key2)::numeric) =data_sequences_table.key2))\"\nPostgresql always uses widening conversion, so it can't use index. There\nare next options to fix:\n1) Make all types the same\n2) If you are using some narrow type for big_table (say, int2) to save\nspace, you can force narrowing conversion, e.g. \"b.key1=ds.key1::int2\".\nNote that if ds.key1 has any values that don't fit into int2, you will have\nproblems. And of course, use your type used instead of int2.\n3) Create an index on (key1::numeric),(key2::numeric) . This is last\noptions this this index will be very specific to the query (or similar\nones).\n\nBest regards, Vitalii Tymchyshyn\n\n2012/10/30 Vincenzo Melandri <[email protected]>\n\n> Hi all\n>\n> I have a problem with a data import procedure that involve the following\n> query:\n>\n> select a,b,c,d\n> from big_table b\n> join data_sequences_table ds\n> on b.key1 = ds.key1 and b.key2 = ds.key2\n> where ds.import_id=xxxxxxxxxx\n>\n> The \"big table\" has something like 10.000.000 records ore more\n> (depending on the table, there are more than one of them).\n> The data are uploaded in 20k record blocks, and the keys are written\n> on \"data_sequences_table\". 
The keys are composite (key1,key2), and\n> every 5-10 sequences (depending on the size of the upload) the\n> data_sequences_table records are deleted.\n> I have indexes on both the key on the big table and the import_id on\n> the sequence table.\n>\n> the query plan evualuate like this:\n>\n> Merge Join (cost=2604203.98..2774528.51 rows=129904 width=20)\n> Merge Cond: (((( big_table.key1)::numeric) =\n> data_sequences_table.key1) AND ((( big_table.key2)::numeric) =\n> data_sequences_table.key2))\n> -> Sort (cost=2602495.47..2635975.81 rows=13392135 width=20)\n> Sort Key: ((big_table.key1)::numeric), ((big_table.key2)::numeric)\n> -> Seq Scan on big_table (cost=0.00..467919.35 rows=13392135\n> width=20)\n> -> Sort (cost=1708.51..1709.48 rows=388 width=32)\n> Sort Key: data_sequences_table.key1, data_sequences_table.key2\n> -> Seq Scan on data_sequences_table (cost=0.00..1691.83\n> rows=388 width=32)\n> Filter: (import_id = 1351592072::numeric)\n>\n> It executes in something like 80 seconds. The import procedure has\n> more than 500 occurrences of this situation. :(\n> Why is the big table evaluated with a seq scan? The result is 0 to\n> 20.000 records (the query returns the records that already exists and\n> should be updated, not inserted).. Can I do something to speed this\n> up?\n>\n> --\n> Vincenzo.\n> Imola Informatica\n>\n> Ai sensi del D.Lgs. 196/2003 si precisa che le informazioni contenute\n> in questo messaggio sono riservate ed a uso esclusivo del\n> destinatario.\n> Pursuant to Legislative Decree No. 
196/2003, you are hereby informed\n> that this message contains confidential information intended only for\n> the use of the addressee.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn",
"msg_date": "Tue, 30 Oct 2012 10:03:54 -0400",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan on 10million record table.. why?"
},
{
"msg_contents": "> 1) Make all types the same\n> 2) If you are using some narrow type for big_table (say, int2) to save\n> space, you can force narrowing conversion, e.g. \"b.key1=ds.key1::int2\". Note\n> that if ds.key1 has any values that don't fit into int2, you will have\n> problems. And of course, use your type used instead of int2.\n>\n> Best regards, Vitalii Tymchyshyn\n>\n\nThis fixed my problem :)\nThanks Vitalii!\n\nFor the other suggestions made from Gabriele, unfortunately I can't\nmake an accurate data-partitioning 'cause (obviously) it will be quite\na big work and the customer finished the budget for this year, so\nunless I choose to work for free... ;)\n\n\n-- \nVincenzo.\n\n",
"msg_date": "Tue, 30 Oct 2012 19:18:51 +0100",
"msg_from": "Vincenzo Melandri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seq scan on 10million record table.. why?"
}
]
[
{
"msg_contents": "Hello there,\n\nI have PostgreSQL 8.3.18 server running on Centos 6.2 (2.6.32-220.7.1) with\nthis specs:\n\n2x CPU AMD Opteron 6282\n128GB RAM\nRaid 10 (12HD 15k rpm 1GB cache) with data\nRaid 10 (4HD 15k rpm 1GB cache) with xlog\nRaid 1 (15k rpm 1GB cache shared with xlog) with system\n\nOn this server I have only one database with 312GB of data. The database\nhad run fine during 4 months, but from two months ago, during high work\nload periods, the server is collapsed by \"%sys\" type load.\n\nFor example \"dstat -ar --socket --tcp\" during %sys load problem:\nhttp://pastebin.com/7zfDNvPh\n\nReboot the server mitigates the problem during few days, but always\nreappear.\nServer not is swapping, don't have excessive I/O, don't have %IRQ load.\n\nI don't have any ideas...\n\nThank you very much for your help.\n\nMy sysctl and postgres.conf:\n\nsysclt -a:\nhttp://pastebin.com/EEVnNxsZ\n\nMy Postgres.conf:\nmax_connections = 500 # (change requires restart)\nunix_socket_directory = '/var/run/postgres' # (change requires restart)\nshared_buffers = 18GB # min 128kB or max_connections*16kB\nwork_mem = 30MB # min 64kB\nmaintenance_work_mem = 1GB # min 1MB\nmax_fsm_pages = 8553600 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 409000 # min 100, ~70 bytes each\nfsync = on # turns forced synchronization on or off\nsynchronous_commit = off # immediate fsync at commit\nwal_buffers = 8MB # min 32kB\ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB each\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\narchive_mode = on # allows archiving to be done\narchive_command = 'exit 0'\neffective_cache_size = 100GB\nconstraint_exclusion = on\ndefault_text_search_config = 'pg_catalog.spanish'\nmax_locks_per_transaction = 100\n\n-- \nCésar Martín Pérez\[email protected]\n\nHello there,I have PostgreSQL 8.3.18 server running on Centos 6.2 (2.6.32-220.7.1) with this specs:2x CPU AMD Opteron 6282128GB RAMRaid 10 (12HD 15k 
rpm 1GB cache) with data\nRaid 10 (4HD 15k rpm 1GB cache) with xlogRaid 1 (15k rpm 1GB cache shared with xlog) with systemOn this server I have only one database with 312GB of data. The database had run fine during 4 months, but from two months ago, during high work load periods, the server is collapsed by \"%sys\" type load.\nFor example \"dstat -ar --socket --tcp\" during %sys load problem:http://pastebin.com/7zfDNvPhReboot the server mitigates the problem during few days, but always reappear.\nServer not is swapping, don't have excessive I/O, don't have %IRQ load.I don't have any ideas...Thank you very much for your help.\nMy sysctl and postgres.conf:sysclt -a:http://pastebin.com/EEVnNxsZMy Postgres.conf:max_connections = 500 # (change requires restart)\nunix_socket_directory = '/var/run/postgres' # (change requires restart)shared_buffers = 18GB # min 128kB or max_connections*16kB\nwork_mem = 30MB # min 64kBmaintenance_work_mem = 1GB # min 1MBmax_fsm_pages = 8553600 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 409000 # min 100, ~70 bytes eachfsync = on # turns forced synchronization on or off\nsynchronous_commit = off # immediate fsync at commitwal_buffers = 8MB # min 32kB\ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB eachcheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\narchive_mode = on # allows archiving to be donearchive_command = 'exit 0'effective_cache_size = 100GBconstraint_exclusion = on\ndefault_text_search_config = 'pg_catalog.spanish'max_locks_per_transaction = 100-- César Martín Pé[email protected]",
"msg_date": "Tue, 30 Oct 2012 13:54:23 +0100",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "High %SYS CPU usage"
},
{
"msg_contents": "Cesar,\n\n> On this server I have only one database with 312GB of data. The database\n> had run fine during 4 months, but from two months ago, during high work\n> load periods, the server is collapsed by \"%sys\" type load.\n\nHmmm. Have you updated Linux any time recently? I'm wondering if this\nis a PostgreSQL problem at all. It sounds like an OS issue.\n\nCan you give us the results of mpstat -P ALL 3 ? The dstat output\ndoesn't tell me much.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Tue, 30 Oct 2012 12:07:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High %SYS CPU usage"
},
{
"msg_contents": "Hi Josh,\n\nToday is not the worse day for this issue, because Fridays DB have little\nworkload... But I have this output:\n\nhttp://pastebin.com/bKg8tfKC\n\nI don't have updated OS server from the install.\nI also begin to think that the problem is OS misconfiguration (Kernel\nparameters??) or Hardware problem...\n\nThanks for your help.\n\n\n2012/10/30 Josh Berkus <[email protected]>\n\n> Cesar,\n>\n> > On this server I have only one database with 312GB of data. The database\n> > had run fine during 4 months, but from two months ago, during high work\n> > load periods, the server is collapsed by \"%sys\" type load.\n>\n> Hmmm. Have you updated Linux any time recently? I'm wondering if this\n> is a PostgreSQL problem at all. It sounds like an OS issue.\n>\n> Can you give us the results of mpstat -P ALL 3 ? The dstat output\n> doesn't tell me much.\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCésar Martín Pérez\[email protected]\n\nHi Josh,Today is not the worse day for this issue, because Fridays DB have little workload... But I have this output:http://pastebin.com/bKg8tfKC\nI don't have updated OS server from the install.I also begin to think that the problem is OS misconfiguration (Kernel parameters??) or Hardware problem...Thanks for your help.\n2012/10/30 Josh Berkus <[email protected]>\nCesar,\n\n> On this server I have only one database with 312GB of data. The database\n> had run fine during 4 months, but from two months ago, during high work\n> load periods, the server is collapsed by \"%sys\" type load.\n\nHmmm. Have you updated Linux any time recently? I'm wondering if this\nis a PostgreSQL problem at all. It sounds like an OS issue.\n\nCan you give us the results of mpstat -P ALL 3 ? 
The dstat output\ndoesn't tell me much.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- César Martín Pé[email protected]",
"msg_date": "Fri, 2 Nov 2012 08:47:42 +0100",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High %SYS CPU usage"
},
{
"msg_contents": "Finally I resolv the problem\nsetting /sys/kernel/mm/redhat_transparent_hugepage/enabled to \"never\"\n\n\n2012/11/2 Cesar Martin <[email protected]>\n\n> Hi Josh,\n>\n> Today is not the worse day for this issue, because Fridays DB have little\n> workload... But I have this output:\n>\n> http://pastebin.com/bKg8tfKC\n>\n> I don't have updated OS server from the install.\n> I also begin to think that the problem is OS misconfiguration (Kernel\n> parameters??) or Hardware problem...\n>\n> Thanks for your help.\n>\n>\n> 2012/10/30 Josh Berkus <[email protected]>\n>\n>> Cesar,\n>>\n>> > On this server I have only one database with 312GB of data. The database\n>> > had run fine during 4 months, but from two months ago, during high work\n>> > load periods, the server is collapsed by \"%sys\" type load.\n>>\n>> Hmmm. Have you updated Linux any time recently? I'm wondering if this\n>> is a PostgreSQL problem at all. It sounds like an OS issue.\n>>\n>> Can you give us the results of mpstat -P ALL 3 ? The dstat output\n>> doesn't tell me much.\n>>\n>> --\n>> Josh Berkus\n>> PostgreSQL Experts Inc.\n>> http://pgexperts.com\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n> --\n> César Martín Pérez\n> [email protected]\n>\n>\n\n\n-- \nCésar Martín Pérez\[email protected]\n\nFinally I resolv the problem setting /sys/kernel/mm/redhat_transparent_hugepage/enabled to \"never\"2012/11/2 Cesar Martin <[email protected]>\nHi Josh,Today is not the worse day for this issue, because Fridays DB have little workload... But I have this output:\nhttp://pastebin.com/bKg8tfKC\nI don't have updated OS server from the install.I also begin to think that the problem is OS misconfiguration (Kernel parameters??) 
or Hardware problem...Thanks for your help.\n2012/10/30 Josh Berkus <[email protected]>\n\nCesar,\n\n> On this server I have only one database with 312GB of data. The database\n> had run fine during 4 months, but from two months ago, during high work\n> load periods, the server is collapsed by \"%sys\" type load.\n\nHmmm. Have you updated Linux any time recently? I'm wondering if this\nis a PostgreSQL problem at all. It sounds like an OS issue.\n\nCan you give us the results of mpstat -P ALL 3 ? The dstat output\ndoesn't tell me much.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- César Martín Pé[email protected]\n\n-- César Martín Pé[email protected]",
"msg_date": "Mon, 31 Dec 2012 14:08:10 +0100",
"msg_from": "Cesar Martin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High %SYS CPU usage"
}
]
[
{
"msg_contents": "Hi all,\n\nI was wondering if it is safe to install pg_buffercache on production\nsystems?\n\nThank you.\n\nHi all,I was wondering if it is safe to install pg_buffercache on production systems?Thank you.",
"msg_date": "Tue, 30 Oct 2012 14:34:31 -0400",
"msg_from": "pg noob <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_buffercache"
},
{
"msg_contents": "On Tue, Oct 30, 2012 at 1:34 PM, pg noob <[email protected]> wrote:\n>\n> Hi all,\n>\n> I was wondering if it is safe to install pg_buffercache on production\n> systems?\n\nWell, why wouldn't you expect it to be safe? Core extensions should\nbe mostly assumed safe unless there is a good reasons to believe\notherwise. That said, there may be some performance impacts, In\nparticular, take note:\n\n\"When the pg_buffercache view is accessed, internal buffer manager\nlocks are taken for long enough to copy all the buffer state data that\nthe view will display. This ensures that the view produces a\nconsistent set of results, while not blocking normal buffer activity\nlonger than necessary. Nonetheless there could be some impact on\ndatabase performance if this view is read often.\"\n\nmerlin\n\n",
"msg_date": "Thu, 1 Nov 2012 16:26:21 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_buffercache"
}
]