[ { "msg_contents": "Hello,\n\nI have a PostgreSQL 9.2 instance running on RHEL 6.3, 8-core machine with\n16GB of RAM. The server is dedicated to this database, the disks are local\nRAID10. Given that the default postgresql.conf is quite conservative\nregarding memory settings, I thought it might be a good idea to allow\nPostgres to use more memory. To my surprise, following advice in the\nperformance tuning guide on Postgres wiki[2] significantly slowed down\npractically every query I run but it's more noticeable on the more complex\nqueries.\n\nI also tried running pgtune[1] which gave the following recommendation with\nmore parameters tuned, but that didn't change anything. It suggests\nshared_buffers of 1/4 of RAM size which seems to in line with advice\nelsewhere (and on PG wiki in particular).\n\n default_statistics_target = 50\n maintenance_work_mem = 960MB\n constraint_exclusion = on\n checkpoint_completion_target = 0.9\n effective_cache_size = 11GB\n work_mem = 96MB\n wal_buffers = 8MB\n checkpoint_segments = 16\n shared_buffers = 3840MB\n max_connections = 80\n\nI tried reindexing the whole database after changing the settings (using\nREINDEX DATABASE), but that didn't help either. I played around with\nshared_buffers and work_mem. Gradually changing them from the very\nconservative default values (128k / 1MB) also gradually decreased\nperformance.\n\nI ran EXPLAIN (ANALYZE,BUFFERS) on a few queries and the culprit seems to\nbe that Hash Join is significantly slower. It's not clear to me why.\n\nTo give some specific example, I have the following query. It runs in\n~2100ms on the default configuration and ~3300ms on the configuration with\nincreased buffer sizes:\n\n select count(*) from contest c\n left outer join contestparticipant cp on c.id=cp.contestId\n left outer join teammember tm on tm.contestparticipantid=cp.id\n left outer join staffmember sm on cp.id=sm.contestparticipantid\n left outer join person p on p.id=cp.personid\n left outer join personinfo pi on pi.id=cp.personinfoid\n where pi.lastname like '%b%' or pi.firstname like '%a%';\n\nEXPLAIN (ANALYZE,BUFFERS) for the query above:\n\n - Default buffers: http://explain.depesz.com/s/xaHJ\n - Bigger buffers: http://explain.depesz.com/s/Plk\n\nThe tables don't have anything special in them\n\nThe question is why am I observing decreased performance when I increase\nbuffer sizes? The machine is definitely not running out of memory.\nAllocation if shared memory in OS is (`shmmax` and `shmall`) is set to very\nlarge values, that should not be a problem. I'm not getting any errors in\nthe Postgres log either. I'm running autovacuum in the default\nconfiguration but I don't expect that has anything to do with it. All\nqueries were run on the same machine few seconds apart, just with changed\nconfiguration (and restarted PG).\n\nI also found a blog post [3] which experiments with various work_mem values\nthat run into similar behavior I'm experiencing but it doesn't really\nexplain it.\n\n [1]: http://pgfoundry.org/projects/pgtune/\n [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n [3]:\nhttp://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\n\nThanks,\nPetr Praus\n\nPS:\nI also posted the question here:\nhttp://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-downbut\na few people suggested\n\nHello,I have a PostgreSQL 9.2 instance running on RHEL 6.3, 8-core machine with 16GB of RAM. 
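A minimal sketch of how the 1MB-versus-96MB comparison from the post above can be reproduced in a single psql session, without touching postgresql.conf; the database name "mydb" is only a placeholder, and the query is the one quoted in the post:

    psql -d mydb <<'SQL'
    \timing on
    SET work_mem = '1MB';
    EXPLAIN (ANALYZE, BUFFERS)
    select count(*) from contest c
    left outer join contestparticipant cp on c.id=cp.contestId
    left outer join teammember tm on tm.contestparticipantid=cp.id
    left outer join staffmember sm on cp.id=sm.contestparticipantid
    left outer join person p on p.id=cp.personid
    left outer join personinfo pi on pi.id=cp.personinfoid
    where pi.lastname like '%b%' or pi.firstname like '%a%';
    SET work_mem = '96MB';
    -- re-run the same EXPLAIN here and compare the two timings and plans
    SQL

Because SET only affects the current session, this sidesteps server restarts entirely and leaves the rest of the configuration untouched while comparing plans.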
[1]: http://pgfoundry.org/projects/pgtune/\n  [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server  [3]: http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\nThanks,Petr PrausPS:I also posted the question here: http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-down but a few people suggested", "msg_date": "Tue, 30 Oct 2012 14:08:56 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing work_mem and shared_buffers on Postgres 9.2 significantly\n\tslows down queries" }, { "msg_contents": "I just found one particularly interesting fact: when I perform the same\ntest on my mid-2010 iMac (OSX 10.7.5) also with Postgres 9.2.1 and 16GB\nRAM, I don't experience the slow down.\nSpecifically:\nset work_mem='1MB';\nselect ...; // running time is ~1800 ms\nset work_mem='96MB';\nselect ...' // running time is ~1500 ms\n\nWhen I do exactly the same query (the one from my previous post) with\nexactly the same data on the server:\nI get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n\nThe Mac has SSD so it's understandably faster, but it exhibits a behavior I\nwould expect. What am I doing wrong here?\n\nThanks.\n\nOn 30 October 2012 14:08, Petr Praus <[email protected]> wrote:\n\n> Hello,\n>\n> I have a PostgreSQL 9.2 instance running on RHEL 6.3, 8-core machine with\n> 16GB of RAM. The server is dedicated to this database, the disks are local\n> RAID10. Given that the default postgresql.conf is quite conservative\n> regarding memory settings, I thought it might be a good idea to allow\n> Postgres to use more memory. To my surprise, following advice in the\n> performance tuning guide on Postgres wiki[2] significantly slowed down\n> practically every query I run but it's more noticeable on the more complex\n> queries.\n>\n> I also tried running pgtune[1] which gave the following recommendation\n> with more parameters tuned, but that didn't change anything. It suggests\n> shared_buffers of 1/4 of RAM size which seems to in line with advice\n> elsewhere (and on PG wiki in particular).\n>\n> default_statistics_target = 50\n> maintenance_work_mem = 960MB\n> constraint_exclusion = on\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 11GB\n> work_mem = 96MB\n> wal_buffers = 8MB\n> checkpoint_segments = 16\n> shared_buffers = 3840MB\n> max_connections = 80\n>\n> I tried reindexing the whole database after changing the settings (using\n> REINDEX DATABASE), but that didn't help either. I played around with\n> shared_buffers and work_mem. Gradually changing them from the very\n> conservative default values (128k / 1MB) also gradually decreased\n> performance.\n>\n> I ran EXPLAIN (ANALYZE,BUFFERS) on a few queries and the culprit seems to\n> be that Hash Join is significantly slower. It's not clear to me why.\n>\n> To give some specific example, I have the following query. 
It runs in\n> ~2100ms on the default configuration and ~3300ms on the configuration with\n> increased buffer sizes:\n>\n> select count(*) from contest c\n> left outer join contestparticipant cp on c.id=cp.contestId\n> left outer join teammember tm on tm.contestparticipantid=cp.id\n> left outer join staffmember sm on cp.id=sm.contestparticipantid\n> left outer join person p on p.id=cp.personid\n> left outer join personinfo pi on pi.id=cp.personinfoid\n> where pi.lastname like '%b%' or pi.firstname like '%a%';\n>\n> EXPLAIN (ANALYZE,BUFFERS) for the query above:\n>\n> - Default buffers: http://explain.depesz.com/s/xaHJ\n> - Bigger buffers: http://explain.depesz.com/s/Plk\n>\n> The tables don't have anything special in them\n>\n> The question is why am I observing decreased performance when I increase\n> buffer sizes? The machine is definitely not running out of memory.\n> Allocation if shared memory in OS is (`shmmax` and `shmall`) is set to very\n> large values, that should not be a problem. I'm not getting any errors in\n> the Postgres log either. I'm running autovacuum in the default\n> configuration but I don't expect that has anything to do with it. All\n> queries were run on the same machine few seconds apart, just with changed\n> configuration (and restarted PG).\n>\n> I also found a blog post [3] which experiments with various work_mem\n> values that run into similar behavior I'm experiencing but it doesn't\n> really explain it.\n>\n> [1]: http://pgfoundry.org/projects/pgtune/\n> [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> [3]:\n> http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\n>\n> Thanks,\n> Petr Praus\n>\n> PS:\n> I also posted the question here:\n> http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-downbut a few people suggested\n>\n\nI just found one particularly interesting fact: when I perform the same test on my mid-2010 iMac (OSX 10.7.5) also with Postgres 9.2.1 and 16GB RAM, I don't experience the slow down.Specifically:set work_mem='1MB';\nselect ...; // running time is ~1800 msset work_mem='96MB';select ...' // running time is ~1500 msWhen I do exactly the same query (the one from my previous post) with exactly the same data on the server:\nI get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.The Mac has SSD so it's understandably faster, but it exhibits a behavior I would expect. What am I doing wrong here?\nThanks.On 30 October 2012 14:08, Petr Praus <[email protected]> wrote:\nHello,I have a PostgreSQL 9.2 instance running on RHEL 6.3, 8-core machine with 16GB of RAM. The server is dedicated to this database, the disks are local RAID10. Given that the default postgresql.conf is quite conservative regarding memory settings, I thought it might be a good idea to allow Postgres to use more memory. To my surprise, following advice in the performance tuning guide on Postgres wiki[2] significantly slowed down practically every query I run but it's more noticeable on the more complex queries.\nI also tried running pgtune[1] which gave the following recommendation with more parameters tuned, but that didn't change anything. 
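For what it's worth, a quick way to confirm which of those values the server is actually using before and after a change (a sketch; "mydb" is again just a placeholder database name):

    psql -d mydb -c "
        SELECT name, setting, unit, source
        FROM pg_settings
        WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
                       'wal_buffers', 'checkpoint_segments', 'max_connections');"

The source column distinguishes values coming from postgresql.conf from per-session SETs and compiled-in defaults, which helps rule out a forgotten override when a tuning change appears to have no effect.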
[1]: http://pgfoundry.org/projects/pgtune/\n  [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server  [3]: http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\nThanks,Petr PrausPS:I also posted the question here: http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-down but a few people suggested", "msg_date": "Tue, 30 Oct 2012 14:44:53 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing work_mem and shared_buffers on Postgres 9.2\n\tsignificantly slows down queries" }, { "msg_contents": "- I'm using ext4\n- Kernel: Linux 2.6.32-279.9.1.el6.x86_64 #1 SMP Fri Aug 31 09:04:24 EDT\n2012 x86_64 x86_64 x86_64 GNU/Linux\n- I haven't tuned kernel in any way except setting kernel.shmmax and\nkernel.shmall to:\nkernel.shmmax = 68719476736\nkernel.shmall = 4294967296\n- We are using 15k drives (magnetic) connected through SAS in RAID10 setup,\nI don't know precise model numbers (I can find out),\n\n\n\nOn 1 November 2012 15:40, Marcos Ortiz <[email protected]> wrote:\n\n> Regards, Petr.\n> Tuning PostgreSQL is not just change the postgresql.conf, it includes more\n> things like:\n> - the filesystem that you are using\n> - the kernel version that you using (particularly in Linux systems)\n> - the tuning to kernel variables\n> - the type of discs that you are using (SSDs are very fast, like you saw\n> in your iMac system)\n>\n>\n> On 10/30/2012 02:44 PM, Petr Praus wrote:\n>\n> I just found one particularly interesting fact: when I perform the same\n> test on my mid-2010 iMac (OSX 10.7.5) also with Postgres 9.2.1 and 16GB\n> RAM, I don't experience the slow down.\n> Specifically:\n> set work_mem='1MB';\n> select ...; // running time is ~1800 ms\n> set work_mem='96MB';\n> select ...' // running time is ~1500 ms\n>\n> When I do exactly the same query (the one from my previous post) with\n> exactly the same data on the server:\n> I get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n>\n> The Mac has SSD so it's understandably faster, but it exhibits a\n> behavior I would expect. What am I doing wrong here?\n>\n> Thanks.\n>\n> On 30 October 2012 14:08, Petr Praus <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> I have a PostgreSQL 9.2 instance running on RHEL 6.3, 8-core machine\n>> with 16GB of RAM. The server is dedicated to this database, the disks are\n>> local RAID10. Given that the default postgresql.conf is quite conservative\n>> regarding memory settings, I thought it might be a good idea to allow\n>> Postgres to use more memory. To my surprise, following advice in the\n>> performance tuning guide on Postgres wiki[2] significantly slowed down\n>> practically every query I run but it's more noticeable on the more complex\n>> queries.\n>>\n>> I also tried running pgtune[1] which gave the following recommendation\n>> with more parameters tuned, but that didn't change anything. It suggests\n>> shared_buffers of 1/4 of RAM size which seems to in line with advice\n>> elsewhere (and on PG wiki in particular).\n>>\n>> default_statistics_target = 50\n>> maintenance_work_mem = 960MB\n>> constraint_exclusion = on\n>> checkpoint_completion_target = 0.9\n>> effective_cache_size = 11GB\n>> work_mem = 96MB\n>> wal_buffers = 8MB\n>> checkpoint_segments = 16\n>> shared_buffers = 3840MB\n>> max_connections = 80\n>>\n>> I tried reindexing the whole database after changing the settings\n>> (using REINDEX DATABASE), but that didn't help either. 
I played around with\n>> shared_buffers and work_mem. Gradually changing them from the very\n>> conservative default values (128k / 1MB) also gradually decreased\n>> performance.\n>>\n>> I ran EXPLAIN (ANALYZE,BUFFERS) on a few queries and the culprit seems\n>> to be that Hash Join is significantly slower. It's not clear to me why.\n>>\n>> To give some specific example, I have the following query. It runs in\n>> ~2100ms on the default configuration and ~3300ms on the configuration with\n>> increased buffer sizes:\n>>\n>> select count(*) from contest c\n>> left outer join contestparticipant cp on c.id=cp.contestId\n>> left outer join teammember tm on tm.contestparticipantid=cp.id\n>> left outer join staffmember sm on cp.id=sm.contestparticipantid\n>> left outer join person p on p.id=cp.personid\n>> left outer join personinfo pi on pi.id=cp.personinfoid\n>> where pi.lastname like '%b%' or pi.firstname like '%a%';\n>>\n>> EXPLAIN (ANALYZE,BUFFERS) for the query above:\n>>\n>> - Default buffers: http://explain.depesz.com/s/xaHJ\n>> - Bigger buffers: http://explain.depesz.com/s/Plk\n>>\n>> The tables don't have anything special in them\n>>\n>> The question is why am I observing decreased performance when I\n>> increase buffer sizes? The machine is definitely not running out of memory.\n>> Allocation if shared memory in OS is (`shmmax` and `shmall`) is set to very\n>> large values, that should not be a problem. I'm not getting any errors in\n>> the Postgres log either. I'm running autovacuum in the default\n>> configuration but I don't expect that has anything to do with it. All\n>> queries were run on the same machine few seconds apart, just with changed\n>> configuration (and restarted PG).\n>>\n>> I also found a blog post [3] which experiments with various work_mem\n>> values that run into similar behavior I'm experiencing but it doesn't\n>> really explain it.\n>>\n>> [1]: http://pgfoundry.org/projects/pgtune/\n>> [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>> [3]:\n>> http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\n>>\n>> Thanks,\n>> Petr Praus\n>>\n>> PS:\n>> I also posted the question here:\n>> http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-downbut a few people suggested\n>>\n>\n>\n> --\n> **\n>\n> Marcos Luis Ortíz Valmaseda\n> about.me/marcosortiz\n> @marcosluis2186 <http://twitter.com/marcosluis2186>\n> **\n>\n> <http://www.uci.cu/>\n>\n>\n\n- I'm using ext4- Kernel: Linux 2.6.32-279.9.1.el6.x86_64 #1 SMP Fri Aug 31 09:04:24 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux- I haven't tuned kernel in any way except setting kernel.shmmax and kernel.shmall to:\nkernel.shmmax = 68719476736kernel.shmall = 4294967296- We are using 15k drives (magnetic) connected through SAS in RAID10 setup, I don't know precise model numbers (I can find out),\nOn 1 November 2012 15:40, Marcos Ortiz <[email protected]> wrote:\n\n\n Regards, Petr.\n Tuning PostgreSQL is not just change the postgresql.conf, it\n includes more things like:\n - the filesystem that you are using\n - the kernel version that you using (particularly in Linux systems)\n - the tuning to kernel variables \n - the type of discs that you are using (SSDs are very fast, like you\n saw in your iMac system)\n\nOn 10/30/2012 02:44 PM, Petr Praus\n wrote:\n\nI just found one particularly interesting fact: when I\n perform the same test on my mid-2010 iMac (OSX 10.7.5) also with\n Postgres 9.2.1 and 16GB RAM, I don't experience the 
All queries were run on\n the same machine few seconds apart, just with changed\n configuration (and restarted PG).\n\n\nI also found a blog post [3] which experiments with\n various work_mem values that run into similar behavior I'm\n experiencing but it doesn't really explain it.\n\n\n  [1]: http://pgfoundry.org/projects/pgtune/\n  [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n  [3]: http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\n\n\nThanks,\nPetr Praus\n\n\nPS:\nI also posted the question here: http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-down\n but a few people suggested \n\n\n\n\n\n\n-- \n\n\n Marcos Luis Ortíz Valmaseda\nabout.me/marcosortiz\n@marcosluis2186", "msg_date": "Thu, 1 Nov 2012 14:53:13 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Regards, Petr.\nTuning PostgreSQL is not just change the postgresql.conf, it includes \nmore things like:\n- the filesystem that you are using\n- the kernel version that you using (particularly in Linux systems)\n- the tuning to kernel variables\n- the type of discs that you are using (SSDs are very fast, like you saw \nin your iMac system)\n\nOn 10/30/2012 02:44 PM, Petr Praus wrote:\n> I just found one particularly interesting fact: when I perform the \n> same test on my mid-2010 iMac (OSX 10.7.5) also with Postgres 9.2.1 \n> and 16GB RAM, I don't experience the slow down.\n> Specifically:\n> set work_mem='1MB';\n> select ...; // running time is ~1800 ms\n> set work_mem='96MB';\n> select ...' // running time is ~1500 ms\n>\n> When I do exactly the same query (the one from my previous post) with \n> exactly the same data on the server:\n> I get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n>\n> The Mac has SSD so it's understandably faster, but it exhibits a \n> behavior I would expect. What am I doing wrong here?\n>\n> Thanks.\n>\n> On 30 October 2012 14:08, Petr Praus <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hello,\n>\n> I have a PostgreSQL 9.2 instance running on RHEL 6.3, 8-core\n> machine with 16GB of RAM. The server is dedicated to this\n> database, the disks are local RAID10. Given that the default\n> postgresql.conf is quite conservative regarding memory settings, I\n> thought it might be a good idea to allow Postgres to use more\n> memory. To my surprise, following advice in the performance tuning\n> guide on Postgres wiki[2] significantly slowed down practically\n> every query I run but it's more noticeable on the more complex\n> queries.\n>\n> I also tried running pgtune[1] which gave the following\n> recommendation with more parameters tuned, but that didn't change\n> anything. It suggests shared_buffers of 1/4 of RAM size which\n> seems to in line with advice elsewhere (and on PG wiki in particular).\n>\n> default_statistics_target = 50\n> maintenance_work_mem = 960MB\n> constraint_exclusion = on\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 11GB\n> work_mem = 96MB\n> wal_buffers = 8MB\n> checkpoint_segments = 16\n> shared_buffers = 3840MB\n> max_connections = 80\n>\n> I tried reindexing the whole database after changing the settings\n> (using REINDEX DATABASE), but that didn't help either. I played\n> around with shared_buffers and work_mem. 
Gradually changing them\n> from the very conservative default values (128k / 1MB) also\n> gradually decreased performance.\n>\n> I ran EXPLAIN (ANALYZE,BUFFERS) on a few queries and the culprit\n> seems to be that Hash Join is significantly slower. It's not clear\n> to me why.\n>\n> To give some specific example, I have the following query. It runs\n> in ~2100ms on the default configuration and ~3300ms on the\n> configuration with increased buffer sizes:\n>\n> select count(*) from contest c\n> left outer join contestparticipant cp on c.id\n> <http://c.id>=cp.contestId\n> left outer join teammember tm on tm.contestparticipantid=cp.id\n> <http://cp.id>\n> left outer join staffmember sm on cp.id\n> <http://cp.id>=sm.contestparticipantid\n> left outer join person p on p.id <http://p.id>=cp.personid\n> left outer join personinfo pi on pi.id\n> <http://pi.id>=cp.personinfoid\n> where pi.lastname like '%b%' or pi.firstname like '%a%';\n>\n> EXPLAIN (ANALYZE,BUFFERS) for the query above:\n>\n> - Default buffers: http://explain.depesz.com/s/xaHJ\n> - Bigger buffers: http://explain.depesz.com/s/Plk\n>\n> The tables don't have anything special in them\n>\n> The question is why am I observing decreased performance when I\n> increase buffer sizes? The machine is definitely not running out\n> of memory. Allocation if shared memory in OS is (`shmmax` and\n> `shmall`) is set to very large values, that should not be a\n> problem. I'm not getting any errors in the Postgres log either.\n> I'm running autovacuum in the default configuration but I don't\n> expect that has anything to do with it. All queries were run on\n> the same machine few seconds apart, just with changed\n> configuration (and restarted PG).\n>\n> I also found a blog post [3] which experiments with various\n> work_mem values that run into similar behavior I'm experiencing\n> but it doesn't really explain it.\n>\n> [1]: http://pgfoundry.org/projects/pgtune/\n> [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> [3]:\n> http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\n>\n> Thanks,\n> Petr Praus\n>\n> PS:\n> I also posted the question here:\n> http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-down\n> but a few people suggested\n>\n>\n\n-- \n\nMarcos Luis Ortíz Valmaseda\nabout.me/marcosortiz <http://about.me/marcosortiz>\n@marcosluis2186 <http://twitter.com/marcosluis2186>\n\n\n\n10mo. ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS INFORMATICAS...\nCONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION\n\nhttp://www.uci.cu\nhttp://www.facebook.com/universidad.uci\nhttp://www.flickr.com/photos/universidad_uci\n\n\n\n\n\n Regards, Petr.\n Tuning PostgreSQL is not just change the postgresql.conf, it\n includes more things like:\n - the filesystem that you are using\n - the kernel version that you using (particularly in Linux systems)\n - the tuning to kernel variables \n - the type of discs that you are using (SSDs are very fast, like you\n saw in your iMac system)\n\nOn 10/30/2012 02:44 PM, Petr Praus\n wrote:\n\nI just found one particularly interesting fact: when I\n perform the same test on my mid-2010 iMac (OSX 10.7.5) also with\n Postgres 9.2.1 and 16GB RAM, I don't experience the slow down.\n Specifically:\nset work_mem='1MB';\nselect ...; // running time is ~1800 ms\nset work_mem='96MB';\nselect ...' 
All queries were run on\n the same machine few seconds apart, just with changed\n configuration (and restarted PG).\n\n\nI also found a blog post [3] which experiments with\n various work_mem values that run into similar behavior I'm\n experiencing but it doesn't really explain it.\n\n\n  [1]: http://pgfoundry.org/projects/pgtune/\n  [2]: http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n  [3]: http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/\n\n\nThanks,\nPetr Praus\n\n\nPS:\nI also posted the question here: http://dba.stackexchange.com/questions/27893/increasing-work-mem-and-shared-buffers-on-postgres-9-2-significantly-slows-down\n but a few people suggested \n\n\n\n\n\n\n-- \n\n\n\n\n Marcos Luis Ortíz Valmaseda\nabout.me/marcosortiz\n@marcosluis2186", "msg_date": "Thu, 01 Nov 2012 15:40:34 -0500", "msg_from": "Marcos Ortiz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 01.11.2012 21:40, schrieb Marcos Ortiz:\n> Regards, Petr.\n> Tuning PostgreSQL is not just change the postgresql.conf, it includes \n> more things like:\n> - the filesystem that you are using\n> - the kernel version that you using (particularly in Linux systems)\n> - the tuning to kernel variables\n> - the type of discs that you are using (SSDs are very fast, like you \n> saw in your iMac system)\n>\n> On 10/30/2012 02:44 PM, Petr Praus wrote:\n>> I just found one particularly interesting fact: when I perform the \n>> same test on my mid-2010 iMac (OSX 10.7.5) also with Postgres 9.2.1 \n>> and 16GB RAM, I don't experience the slow down.\n>> Specifically:\n>> set work_mem='1MB';\n>> select ...; // running time is ~1800 ms\n>> set work_mem='96MB';\n>> select ...' // running time is ~1500 ms\n>>\n>> When I do exactly the same query (the one from my previous post) with \n>> exactly the same data on the server:\n>> I get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n>>\nJust some thoughts (interested in this, once seen a Sybase ASE come \nclose to a halt when we threw a huge lot of SHM at it...).\n\n8 cores, so probably on 2 sockets? What CPU generation?\n\nBoth explain outputs show an amount of \"read\" buffers. Did you warm the \ncaches before testing?\n\nMaybe you're hitting a NUMA issue there? If those reads come from the \nOS' cache, the scheduler might decide to move your process to a \ndifferent core (that can access the cache better), then moves it back \nwhen you access the SHM segment more (the ~4GB get allocated at startup, \nso probably \"close\" to the CPU the postmaster ist running on). A \nmigration to a different cacheline is very expensive.\n\nThe temp reads/writes (i.e., the OS cache for the temp files) would \nprobably be allocated close to the CPU requesting the temp file.\n\nJust groping about in the dark though... but the iMac is obviously not \naffected by this, with one socket/memory channel/cache line.\n\nMight be worth to\n- manually pin (with taskset) the session you test this in to a \nparticular CPU (once on each socket) to see if the times change\n- try reducing work_mem in the session you're testing in (so you have \nlarge SHM, but small work mem)\n\nCheers,\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. 
Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n\n\n\n\n\nAm 01.11.2012 21:40, schrieb Marcos\n Ortiz:\n\n\n\n Regards, Petr.\n Tuning PostgreSQL is not just change the postgresql.conf, it\n includes more things like:\n - the filesystem that you are using\n - the kernel version that you using (particularly in Linux\n systems)\n - the tuning to kernel variables \n - the type of discs that you are using (SSDs are very fast, like\n you saw in your iMac system)\n\nOn 10/30/2012 02:44 PM, Petr Praus\n wrote:\n\nI just found one particularly interesting fact: when\n I perform the same test on my mid-2010 iMac (OSX 10.7.5) also\n with Postgres 9.2.1 and 16GB RAM, I don't experience the slow\n down.\n Specifically:\nset work_mem='1MB';\nselect ...; // running time is ~1800 ms\nset work_mem='96MB';\nselect ...' // running time is ~1500 ms\n\n\nWhen I do exactly the same query (the one from my previous\n post) with exactly the same data on the server:\nI get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n\n\n\n Just some thoughts (interested in this, once seen a Sybase ASE come\n close to a halt when we threw a huge lot of SHM at it...).\n\n 8 cores, so probably on 2 sockets? What CPU generation?\n\n Both explain outputs show an amount of \"read\" buffers. Did you warm\n the caches before testing?\n\n Maybe you're hitting a NUMA issue there? If those reads come from\n the OS' cache, the scheduler might decide to move your process to a\n different core (that can access the cache better), then moves it\n back when you access the SHM segment more (the ~4GB get allocated at\n startup, so probably \"close\" to the CPU the postmaster ist running\n on). A migration to a different cacheline is very expensive.\n\n The temp reads/writes (i.e., the OS cache for the temp files) would\n probably be allocated close to the CPU requesting the temp file.\n\n Just groping about in the dark though... but the iMac is obviously\n not affected by this, with one socket/memory channel/cache line.\n\n Might be worth to\n - manually pin (with taskset) the session you test this in to a\n particular CPU (once on each socket) to see if the times change\n - try reducing work_mem in the session you're testing in (so you\n have large SHM, but small work mem)\n\n Cheers,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Fri, 02 Nov 2012 00:25:16 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Two possibilities:\n\ncaching. make sure to run each query several times in a row.\n\nzone reclaim mode. If this has gotten turned on turn it back off.\n\nHow to tell:\n\nsysctl -n vm.zone_reclaim_mode\n\nOutput should be 0. 
If it's not, then add this to /etc/sysctl.conf:\n\nvm.zone_reclaim_mode=0\n\nand run: sudo sysctl -p\n\nand see if that helps.\n\n", "msg_date": "Thu, 1 Nov 2012 23:39:59 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem and shared_buffers on Postgres 9.2\n\tsignificantly slows down queries" }, { "msg_contents": "I did run each query several times, the results I posted are for ~10th run\nof the query.\n\nThe zone reclaim mode is 0.\n\n\nOn 2 November 2012 00:39, Scott Marlowe <[email protected]> wrote:\n\n> Two possibilities:\n>\n> caching. make sure to run each query several times in a row.\n>\n> zone reclaim mode. If this has gotten turned on turn it back off.\n>\n> How to tell:\n>\n> sysctl -n vm.zone_reclaim_mode\n>\n> Output should be 0. If it's not, then add this to /etc/sysctl.conf:\n>\n> vm.zone_reclaim_mode=0\n>\n> and run: sudo sysctl -p\n>\n> and see if that helps.\n>\n\nI did run each query several times, the results I posted are for ~10th run of the query.The zone reclaim mode is 0.On 2 November 2012 00:39, Scott Marlowe <[email protected]> wrote:\nTwo possibilities:\n\ncaching.  make sure to run each query several times in a row.\n\nzone reclaim mode. If this has gotten turned on turn it back off.\n\nHow to tell:\n\nsysctl -n vm.zone_reclaim_mode\n\nOutput should be 0.  If it's not, then add this to /etc/sysctl.conf:\n\nvm.zone_reclaim_mode=0\n\nand run: sudo sysctl -p\n\nand see if that helps.", "msg_date": "Fri, 2 Nov 2012 09:09:52 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing work_mem and shared_buffers on Postgres 9.2\n\tsignificantly slows down queries" }, { "msg_contents": "On 1 November 2012 18:25, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 01.11.2012 21:40, schrieb Marcos Ortiz:\n>\n> Regards, Petr.\n> Tuning PostgreSQL is not just change the postgresql.conf, it includes more\n> things like:\n> - the filesystem that you are using\n> - the kernel version that you using (particularly in Linux systems)\n> - the tuning to kernel variables\n> - the type of discs that you are using (SSDs are very fast, like you saw\n> in your iMac system)\n>\n> On 10/30/2012 02:44 PM, Petr Praus wrote:\n>\n> I just found one particularly interesting fact: when I perform the same\n> test on my mid-2010 iMac (OSX 10.7.5) also with Postgres 9.2.1 and 16GB\n> RAM, I don't experience the slow down.\n> Specifically:\n> set work_mem='1MB';\n> select ...; // running time is ~1800 ms\n> set work_mem='96MB';\n> select ...' // running time is ~1500 ms\n>\n> When I do exactly the same query (the one from my previous post) with\n> exactly the same data on the server:\n> I get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n>\n> Just some thoughts (interested in this, once seen a Sybase ASE come\n> close to a halt when we threw a huge lot of SHM at it...).\n>\n> 8 cores, so probably on 2 sockets? What CPU generation?\n>\n\nThe processors are two quad core Intel x7350 Xeon at 2.93Ghz. It's somewhat\nolder (released late 2007) but it's not absolute speed I'm after - it's the\ndifference in speed when increasing work_mem.\n\n\n> Both explain outputs show an amount of \"read\" buffers. Did you warm the\n> caches before testing?\n>\n\nI did warm the caches before testing.\n\n\n>\n> Maybe you're hitting a NUMA issue there? 
If those reads come from the OS'\n> cache, the scheduler might decide to move your process to a different core\n> (that can access the cache better), then moves it back when you access the\n> SHM segment more (the ~4GB get allocated at startup, so probably \"close\" to\n> the CPU the postmaster ist running on). A migration to a different\n> cacheline is very expensive.\n>\n> The temp reads/writes (i.e., the OS cache for the temp files) would\n> probably be allocated close to the CPU requesting the temp file.\n>\n> Just groping about in the dark though... but the iMac is obviously not\n> affected by this, with one socket/memory channel/cache line.\n>\n\nI made a test with Ubuntu 12.04 VM machine (vmware workstation 4.1.3 on the\nsame iMac) with 4GB memory and shared_buffers=1GB. To my slight surprise,\nthe query is faster on Ubuntu VM machine then on the OSX (~1050ms vs.\n~1500ms with work_mem=1MB). This might be caused\nby effective_io_concurrency which is enabled on Ubuntu but can't be enabled\non OSX because postgres does not support it there. The interesting thing is\nthat increasing work_mem to 96MB on Ubuntu slows down the query to about\n~1250ms from ~1050ms.\n\n\n>\n> Might be worth to\n> - manually pin (with taskset) the session you test this in to a particular\n> CPU (once on each socket) to see if the times change\n>\n\nI tested this and it does not seem to have any effect (assuming I used\ntaskset correctly but I think so: taskset 02 psql to pin down to CPU #1 and\ntaskset 01 psql to pin to CPU #0).\n\n\n> - try reducing work_mem in the session you're testing in (so you have\n> large SHM, but small work mem)\n>\n\nDid this and it indicates to me that shared_buffers setting actually does\nnot have an effect on this behaviour as I previously thought it has. It\nreally boils down to work_mem: when I set shared_buffers to something large\n(say 4GB) and just play with work_mem the problem persists.\n\n\n>\n> Cheers,\n>\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> __________________________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? - Tom Payne\n>\n>\n\nOn 1 November 2012 18:25, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\n\nAm 01.11.2012 21:40, schrieb Marcos\n Ortiz:\n\n\n \n Regards, Petr.\n Tuning PostgreSQL is not just change the postgresql.conf, it\n includes more things like:\n - the filesystem that you are using\n - the kernel version that you using (particularly in Linux\n systems)\n - the tuning to kernel variables \n - the type of discs that you are using (SSDs are very fast, like\n you saw in your iMac system)\n\nOn 10/30/2012 02:44 PM, Petr Praus\n wrote:\n\nI just found one particularly interesting fact: when\n I perform the same test on my mid-2010 iMac (OSX 10.7.5) also\n with Postgres 9.2.1 and 16GB RAM, I don't experience the slow\n down.\n Specifically:\nset work_mem='1MB';\nselect ...; // running time is ~1800 ms\nset work_mem='96MB';\nselect ...' // running time is ~1500 ms\n\n\nWhen I do exactly the same query (the one from my previous\n post) with exactly the same data on the server:\nI get 2100 ms with work_mem=1MB and 3200 ms with 96 MB.\n\n\n\n Just some thoughts (interested in this, once seen a Sybase ASE come\n close to a halt when we threw a huge lot of SHM at it...).\n\n 8 cores, so probably on 2 sockets? 
What CPU generation?The processors are two quad core Intel x7350 Xeon at 2.93Ghz. It's somewhat older (released late 2007) but it's not absolute speed I'm after - it's the difference in speed when increasing work_mem.\n\n \n Both explain outputs show an amount of \"read\" buffers. Did you warm\n the caches before testing?I did warm the caches before testing.  \n\n\n Maybe you're hitting a NUMA issue there? If those reads come from\n the OS' cache, the scheduler might decide to move your process to a\n different core (that can access the cache better), then moves it\n back when you access the SHM segment more (the ~4GB get allocated at\n startup, so probably \"close\" to the CPU the postmaster ist running\n on). A migration to a different cacheline is very expensive.\n\n The temp reads/writes (i.e., the OS cache for the temp files) would\n probably be allocated close to the CPU requesting the temp file.\n\n Just groping about in the dark though... but the iMac is obviously\n not affected by this, with one socket/memory channel/cache line.I made a test with Ubuntu 12.04 VM machine (vmware workstation 4.1.3 on the same iMac) with 4GB memory and shared_buffers=1GB. To my slight surprise, the query is faster on Ubuntu VM machine then on the OSX (~1050ms vs. ~1500ms with work_mem=1MB). This might be caused by effective_io_concurrency which is enabled on Ubuntu but can't be enabled on OSX because postgres does not support it there. The interesting thing is that increasing work_mem to 96MB on Ubuntu slows down the query to about ~1250ms from ~1050ms.\n \n\n Might be worth to\n - manually pin (with taskset) the session you test this in to a\n particular CPU (once on each socket) to see if the times changeI tested this and it does not seem to have any effect (assuming I used taskset correctly but I think so: taskset 02 psql to pin down to CPU #1 and taskset 01 psql to pin to CPU #0).\n \n - try reducing work_mem in the session you're testing in (so you\n have large SHM, but small work mem)Did this and it indicates to me that shared_buffers setting actually does not have an effect on this behaviour as I previously thought it has. It really boils down to work_mem: when I set shared_buffers to something large (say 4GB) and just play with work_mem the problem persists.\n \n\n Cheers,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Fri, 2 Nov 2012 11:12:22 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 02.11.2012 17:12, schrieb Petr Praus:\n\nYour CPUs are indeed pretty oldschool. FSB based, IIRC, not NUMA. 
A \nprocess migration would be even more expensive there.\n\n> Might be worth to\n> - manually pin (with taskset) the session you test this in to a\n> particular CPU (once on each socket) to see if the times change\n>\n>\n> I tested this and it does not seem to have any effect (assuming I used \n> taskset correctly but I think so: taskset 02 psql to pin down to CPU \n> #1 and taskset 01 psql to pin to CPU #0).\nWell, that pinned your _client_ to the CPUs, not the server side session ;-)\nYou'd have to spot for the PID of the new \"IDLE\" server process and pin \nthat using \"taskset -p\". Also, 01 and 02 are probably cores in the same \npackage/socket. Try \"lscpu\" first and spot for \"NUMA node*\" lines at the \nbottom.\nBut anyway... let's try something else first:\n>\n> - try reducing work_mem in the session you're testing in (so you\n> have large SHM, but small work mem)\n>\n>\n> Did this and it indicates to me that shared_buffers setting actually \n> does not have an effect on this behaviour as I previously thought it \n> has. It really boils down to work_mem: when I set shared_buffers to \n> something large (say 4GB) and just play with work_mem the problem \n> persists.\nThis only confirms what we've seen before. As soon as your work_mem \npermits an in-memory sort of the intermediate result set (which at that \npoint in time is where? In the SHM, or in the private memory of the \nbackend? I can't tell, tbth), the sort takes longer than when it's using \na temp file.\n\nWhat if you reduce the shared_buffers to your original value and only \nincrease/decrease the session's work_mem? Same behaviour?\n\nCheers,\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n\n\n\n\n\nAm 02.11.2012 17:12, schrieb Petr\n Praus:\n\n Your CPUs are indeed pretty oldschool. FSB based, IIRC, not NUMA.\n A process migration would be even more expensive there.\n\n\n\n\n\n\n Might be worth to\n - manually pin (with taskset) the session you test this in\n to a particular CPU (once on each socket) to see if the\n times change\n\n\n\n\nI tested this and it does not seem to have any effect\n (assuming I used taskset correctly but I think so: taskset\n 02 psql to pin down to CPU #1 and taskset 01 psql to pin to\n CPU #0).\n\n\n\n Well, that pinned your _client_ to the CPUs, not the server side\n session ;-)\n You'd have to spot for the PID of the new \"IDLE\" server process and\n pin that using \"taskset -p\". Also, 01 and 02 are probably cores in\n the same package/socket. Try \"lscpu\" first and spot for \"NUMA node*\"\n lines at the bottom. \n But anyway... let's try something else first:\n\n\n\n \n\n - try reducing\n work_mem in the session you're testing in (so you have\n large SHM, but small work mem)\n\n\n\n\nDid this and it indicates to me that shared_buffers\n setting actually does not have an effect on this behaviour\n as I previously thought it has. It really boils down to\n work_mem: when I set shared_buffers to something large (say\n 4GB) and just play with work_mem the problem persists.\n\n\n\n This only confirms what we've seen before. As soon as your work_mem\n permits an in-memory sort of the intermediate result set (which at\n that point in time is where? In the SHM, or in the private memory of\n the backend? 
I can't tell, tbth), the sort takes longer than when\n it's using a temp file.\n\n What if you reduce the shared_buffers to your original value and\n only increase/decrease the session's work_mem? Same behaviour? \n\n Cheers,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Sat, 03 Nov 2012 11:31:28 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 3 November 2012 05:31, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 02.11.2012 17:12, schrieb Petr Praus:\n>\n> Your CPUs are indeed pretty oldschool. FSB based, IIRC, not NUMA. A\n> process migration would be even more expensive there.\n>\n> Might be worth to\n>> - manually pin (with taskset) the session you test this in to a\n>> particular CPU (once on each socket) to see if the times change\n>>\n>\n> I tested this and it does not seem to have any effect (assuming I used\n> taskset correctly but I think so: taskset 02 psql to pin down to CPU #1 and\n> taskset 01 psql to pin to CPU #0).\n>\n> Well, that pinned your _client_ to the CPUs, not the server side session\n> ;-)\n> You'd have to spot for the PID of the new \"IDLE\" server process and pin\n> that using \"taskset -p\". Also, 01 and 02 are probably cores in the same\n> package/socket. Try \"lscpu\" first and spot for \"NUMA node*\" lines at the\n> bottom.\n>\nAh, stupid me :)\n\n\n> But anyway... let's try something else first:\n>\n>\n>\n>> - try reducing work_mem in the session you're testing in (so you have\n>> large SHM, but small work mem)\n>>\n>\n> Did this and it indicates to me that shared_buffers setting actually\n> does not have an effect on this behaviour as I previously thought it has.\n> It really boils down to work_mem: when I set shared_buffers to something\n> large (say 4GB) and just play with work_mem the problem persists.\n>\n> This only confirms what we've seen before. As soon as your work_mem\n> permits an in-memory sort of the intermediate result set (which at that\n> point in time is where? In the SHM, or in the private memory of the\n> backend? I can't tell, tbth), the sort takes longer than when it's using a\n> temp file.\n>\n> What if you reduce the shared_buffers to your original value and only\n> increase/decrease the session's work_mem? Same behaviour?\n>\n\nYes, same behaviour. I let the shared_buffers be the default (which is\n8MB). With work_mem 1MB the query runs fast, with 96MB it runs slow (same\ntimes as before). It really seems that the culprit is work_mem.\n\n\n>\n> Cheers,\n>\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> __________________________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? - Tom Payne\n>\n>\n\nOn 3 November 2012 05:31, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\n\nAm 02.11.2012 17:12, schrieb Petr\n Praus:\n\n Your CPUs are indeed pretty oldschool. 
FSB based, IIRC, not NUMA.\n A process migration would be even more expensive there.\n\n\n\n\n\n\n Might be worth to\n - manually pin (with taskset) the session you test this in\n to a particular CPU (once on each socket) to see if the\n times change\n\n\n\n\nI tested this and it does not seem to have any effect\n (assuming I used taskset correctly but I think so: taskset\n 02 psql to pin down to CPU #1 and taskset 01 psql to pin to\n CPU #0).\n\n\n\n Well, that pinned your _client_ to the CPUs, not the server side\n session ;-)\n You'd have to spot for the PID of the new \"IDLE\" server process and\n pin that using \"taskset -p\". Also, 01 and 02 are probably cores in\n the same package/socket. Try \"lscpu\" first and spot for \"NUMA node*\"\n lines at the bottom. Ah, stupid me :) \n\n\n But anyway... let's try something else first:\n\n\n\n \n\n - try reducing\n work_mem in the session you're testing in (so you have\n large SHM, but small work mem)\n\n\n\n\nDid this and it indicates to me that shared_buffers\n setting actually does not have an effect on this behaviour\n as I previously thought it has. It really boils down to\n work_mem: when I set shared_buffers to something large (say\n 4GB) and just play with work_mem the problem persists.\n\n\n\n This only confirms what we've seen before. As soon as your work_mem\n permits an in-memory sort of the intermediate result set (which at\n that point in time is where? In the SHM, or in the private memory of\n the backend? I can't tell, tbth), the sort takes longer than when\n it's using a temp file.\n\n What if you reduce the shared_buffers to your original value and\n only increase/decrease the session's work_mem? Same behaviour? Yes, same behaviour. I let the shared_buffers be the default (which is 8MB). With work_mem 1MB the query runs fast, with 96MB it runs slow (same times as before). It really seems that the culprit is work_mem.\n \n\n Cheers,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Sat, 3 Nov 2012 10:20:59 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 03.11.2012 16:20, schrieb Petr Praus:\n>\n> Your CPUs are indeed pretty oldschool. FSB based, IIRC, not NUMA.\n> A process migration would be even more expensive there.\n>\n\nOk, I've actually looked these up now... at the time these were current, \nI was in the lucky situation to only deal with Opterons. And actually, \nwith these CPUs it is pretty possible that Scott Marlowe's hint (check \nvm.zone_reclaim_mode) was pointing in the right direction. Did you check \nthat?\n\n\n> Yes, same behaviour. I let the shared_buffers be the default\n> (which is 8MB). With work_mem 1MB the query runs fast, with 96MB\n> it runs slow (same times as before). It really seems that the\n> culprit is work_mem.\n>\n>\n\nWell, I'm pretty sure that having more work_mem is a good thing (tm) \nnormally ;-)\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. 
Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n\n\n\n\n\nAm 03.11.2012 16:20, schrieb Petr\n Praus:\n\n\n\n\n\n\n Your CPUs are indeed pretty oldschool. FSB based,\n IIRC, not NUMA. A process migration would be even more\n expensive there.\n\n\n\n\n\n\n\n\n Ok, I've actually looked these up now... at the time these were\n current, I was in the lucky situation to only deal with Opterons.\n And actually, with these CPUs it is pretty possible that Scott\n Marlowe's hint (check vm.zone_reclaim_mode) was pointing in the\n right direction. Did you check that? \n\n\n\n\n\n\n\n \n Yes, same behaviour. I let the shared_buffers be the\n default (which is 8MB). With work_mem 1MB the query runs\n fast, with 96MB it runs slow (same times as before). It\n really seems that the culprit is work_mem.\n\n\n\n\n\n\n Well, I'm pretty sure that having more work_mem is a good thing (tm)\n normally ;-) \n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Sat, 03 Nov 2012 18:09:03 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 3 November 2012 12:09, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 03.11.2012 16:20, schrieb Petr Praus:\n>\n>\n> Your CPUs are indeed pretty oldschool. FSB based, IIRC, not NUMA. A\n>> process migration would be even more expensive there.\n>>\n>>\n> Ok, I've actually looked these up now... at the time these were current, I\n> was in the lucky situation to only deal with Opterons. And actually, with\n> these CPUs it is pretty possible that Scott Marlowe's hint (check\n> vm.zone_reclaim_mode) was pointing in the right direction. Did you check\n> that?\n>\n\nI did check that, it's zero. I responded to his message, but my messages to\nthe mailing list are getting delayed by ~24 hours because somebody has to\nalways bless them.\n\n\n>\n>\n> Yes, same behaviour. I let the shared_buffers be the default (which\n>> is 8MB). With work_mem 1MB the query runs fast, with 96MB it runs slow\n>> (same times as before). It really seems that the culprit is work_mem.\n>>\n>\n>\n> Well, I'm pretty sure that having more work_mem is a good thing (tm)\n> normally ;-)\n>\n\nWell, that's what I always thought too! :-)\n\n\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> __________________________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? - Tom Payne\n>\n>\n\nOn 3 November 2012 12:09, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\n\nAm 03.11.2012 16:20, schrieb Petr\n Praus:\n\n\n\n\n\n\n Your CPUs are indeed pretty oldschool. FSB based,\n IIRC, not NUMA. A process migration would be even more\n expensive there.\n\n\n\n\n\n\n\n\n Ok, I've actually looked these up now... 
at the time these were\n current, I was in the lucky situation to only deal with Opterons.\n And actually, with these CPUs it is pretty possible that Scott\n Marlowe's hint (check vm.zone_reclaim_mode) was pointing in the\n right direction. Did you check that? I did check that, it's zero. I responded to his message, but my messages to the mailing list are getting delayed by ~24 hours because somebody has to always bless them.\n \n\n\n\n\n\n\n\n \n Yes, same behaviour. I let the shared_buffers be the\n default (which is 8MB). With work_mem 1MB the query runs\n fast, with 96MB it runs slow (same times as before). It\n really seems that the culprit is work_mem.\n\n\n\n\n\n\n Well, I'm pretty sure that having more work_mem is a good thing (tm)\n normally ;-) Well, that's what I always thought too! :-)  \n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Sat, 3 Nov 2012 12:19:06 -0500", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 03.11.2012 18:19, schrieb Petr Praus:\n> On 3 November 2012 12:09, Gunnar \"Nick\" Bluth \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Am 03.11.2012 16:20, schrieb Petr Praus:\n>>\n>> Your CPUs are indeed pretty oldschool. FSB based, IIRC, not\n>> NUMA. A process migration would be even more expensive there.\n>>\n>\n> Ok, I've actually looked these up now... at the time these were\n> current, I was in the lucky situation to only deal with Opterons.\n> And actually, with these CPUs it is pretty possible that Scott\n> Marlowe's hint (check vm.zone_reclaim_mode) was pointing in the\n> right direction. Did you check that?\n>\n>\n> I did check that, it's zero. I responded to his message, but my \n> messages to the mailing list are getting delayed by ~24 hours because \n> somebody has to always bless them.\n>\n>\n>\n>> Yes, same behaviour. I let the shared_buffers be the default\n>> (which is 8MB). With work_mem 1MB the query runs fast, with\n>> 96MB it runs slow (same times as before). It really seems\n>> that the culprit is work_mem.\n>>\n>>\n>\n> Well, I'm pretty sure that having more work_mem is a good thing\n> (tm) normally ;-)\n>\n>\n> Well, that's what I always thought too! :-)\n>\nSo, to sum this up (and make someone more competent bite on it maybe \n;-), on your SMP, FSB, \"fake-multicore\" system all \"hash\"-related works \nthat potentially switch to different implementations internally (but \nw/out telling us so) when given more work_mem are slower.\n\nI'm pretty sure you're hitting some subtle, memory-access-related \ncornercase here.\n\nThe L2 cache of your X7350 CPUs is 2MB, could you run the tests with, \nsay, 1, 2, 4 and 8MB of work_mem and post the results?\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? 
- Tom Payne\n\n\n\n\n\n\n\nAm 03.11.2012 18:19, schrieb Petr\n Praus:\n\nOn 3 November 2012 12:09, Gunnar \"Nick\" Bluth <[email protected]>\n wrote:\n\n\n\n\nAm 03.11.2012 16:20, schrieb Petr Praus:\n\n\n\n\n\n\n\n Your CPUs are indeed pretty oldschool.\n FSB based, IIRC, not NUMA. A process\n migration would be even more expensive\n there.\n\n\n\n\n\n\n\n\n\n Ok, I've actually looked these up now... at the time these\n were current, I was in the lucky situation to only deal\n with Opterons. And actually, with these CPUs it is pretty\n possible that Scott Marlowe's hint (check\n vm.zone_reclaim_mode) was pointing in the right direction.\n Did you check that? \n\n\n\n\nI did check that, it's zero. I responded to his message,\n but my messages to the mailing list are getting delayed by\n ~24 hours because somebody has to always bless them.\n \n\n\n \n\n\n\n\n\n\n \n Yes, same behaviour. I let the shared_buffers\n be the default (which is 8MB). With work_mem\n 1MB the query runs fast, with 96MB it runs\n slow (same times as before). It really seems\n that the culprit is work_mem.\n\n\n\n\n\n\n\n Well, I'm pretty sure that having more work_mem is a good\n thing (tm) normally ;-) \n\n\n\n\nWell, that's what I always thought too! :-) \n \n\n\n\n\n So, to sum this up (and make someone more competent bite on it maybe\n ;-), on your SMP, FSB, \"fake-multicore\" system all \"hash\"-related\n works that potentially switch to different implementations\n internally (but w/out telling us so) when given more work_mem are\n slower.\n\n I'm pretty sure you're hitting some subtle, memory-access-related\n cornercase here.\n\n The L2 cache of your X7350 CPUs is 2MB, could you run the tests\n with, say, 1, 2, 4 and 8MB of work_mem and post the results?\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Sun, 04 Nov 2012 09:48:11 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On Sat, Nov 3, 2012 at 10:09 AM, Gunnar \"Nick\" Bluth\n<[email protected]> wrote:\n\n> Well, I'm pretty sure that having more work_mem is a good thing (tm)\n> normally ;-)\n\nIn my experience when doing sorts in isolation, having more work_mem\nis a bad thing, unless it enables you to remove a layer of\ntape-merging. I always blamed it on the L1/L2 etc. levels of caching.\n\nCheers,\n\nJeff\n\n", "msg_date": "Mon, 5 Nov 2012 08:44:34 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On Mon, Nov 5, 2012 at 1:44 PM, Jeff Janes <[email protected]> wrote:\n>> Well, I'm pretty sure that having more work_mem is a good thing (tm)\n>> normally ;-)\n>\n> In my experience when doing sorts in isolation, having more work_mem\n> is a bad thing, unless it enables you to remove a layer of\n> tape-merging. I always blamed it on the L1/L2 etc. 
levels of caching.\n\nBlame it on quicksort, which is quite cache-unfriendly.\n\nPerhaps PG should consider using in-memory mergesort for the bigger chunks.\n\n", "msg_date": "Mon, 5 Nov 2012 13:48:55 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On Mon, Nov 5, 2012 at 8:48 AM, Claudio Freire <[email protected]> wrote:\n> On Mon, Nov 5, 2012 at 1:44 PM, Jeff Janes <[email protected]> wrote:\n>>> Well, I'm pretty sure that having more work_mem is a good thing (tm)\n>>> normally ;-)\n>>\n>> In my experience when doing sorts in isolation, having more work_mem\n>> is a bad thing, unless it enables you to remove a layer of\n>> tape-merging. I always blamed it on the L1/L2 etc. levels of caching.\n>\n> Blame it on quicksort, which is quite cache-unfriendly.\n\nThe observation applies to heap sort. If you can't set work_mem large\nenough to do the sort in memory, then you want to set it just barely\nlarge enough to avoid two layers of tape sorting. Any larger than\nthat reduces performance rather than increasing it. Of course that\nassumes you have the luxury of knowing ahead of time exactly how large\nyour sort will be and can set work_mem accordingly on a case by case\nbasis, which is unlikely in the real world.\n\n> Perhaps PG should consider using in-memory mergesort for the bigger chunks.\n\nCheers,\n\nJeff\n\n", "msg_date": "Mon, 5 Nov 2012 09:09:08 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On Mon, Nov 5, 2012 at 2:09 PM, Jeff Janes <[email protected]> wrote:\n>>> In my experience when doing sorts in isolation, having more work_mem\n>>> is a bad thing, unless it enables you to remove a layer of\n>>> tape-merging. I always blamed it on the L1/L2 etc. levels of caching.\n>>\n>> Blame it on quicksort, which is quite cache-unfriendly.\n>\n> The observation applies to heap sort.\n\nWell, heapsort is worse, but quicksort is also quite bad.\n\n", "msg_date": "Mon, 5 Nov 2012 14:40:31 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On Mon, Nov 5, 2012 at 2:40 PM, Claudio Freire <[email protected]> wrote:\n> On Mon, Nov 5, 2012 at 2:09 PM, Jeff Janes <[email protected]> wrote:\n>>>> In my experience when doing sorts in isolation, having more work_mem\n>>>> is a bad thing, unless it enables you to remove a layer of\n>>>> tape-merging. I always blamed it on the L1/L2 etc. levels of caching.\n>>>\n>>> Blame it on quicksort, which is quite cache-unfriendly.\n>>\n>> The observation applies to heap sort.\n>\n> Well, heapsort is worse, but quicksort is also quite bad.\n\nHere[0], an interesting analysis. 
I really believe quicksort in PG\n(due to its more complex datatypes) fares a lot worse.\n\n[0] http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CD0QFjAB&url=http%3A%2F%2Fwww.cs.auckland.ac.nz%2F~mcw%2FTeaching%2Frefs%2Fsorting%2Fladner-lamarca-cach-sorting.pdf&ei=PPqXUMnEL9PaqQHntoDgDQ&usg=AFQjCNE3mDf6ydj1MHUzfQw13TccOa895A\n\n", "msg_date": "Mon, 5 Nov 2012 14:59:12 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 4 November 2012 02:48, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 03.11.2012 18:19, schrieb Petr Praus:\n>\n> On 3 November 2012 12:09, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n>\n>> Am 03.11.2012 16:20, schrieb Petr Praus:\n>>\n>>\n>> Your CPUs are indeed pretty oldschool. FSB based, IIRC, not NUMA. A\n>>> process migration would be even more expensive there.\n>>>\n>>>\n>> Ok, I've actually looked these up now... at the time these were current,\n>> I was in the lucky situation to only deal with Opterons. And actually, with\n>> these CPUs it is pretty possible that Scott Marlowe's hint (check\n>> vm.zone_reclaim_mode) was pointing in the right direction. Did you check\n>> that?\n>>\n>\n> I did check that, it's zero. I responded to his message, but my messages\n> to the mailing list are getting delayed by ~24 hours because somebody has\n> to always bless them.\n>\n>\n>>\n>>\n>> Yes, same behaviour. I let the shared_buffers be the default (which\n>>> is 8MB). With work_mem 1MB the query runs fast, with 96MB it runs slow\n>>> (same times as before). It really seems that the culprit is work_mem.\n>>>\n>>\n>>\n>> Well, I'm pretty sure that having more work_mem is a good thing (tm)\n>> normally ;-)\n>>\n>\n> Well, that's what I always thought too! :-)\n>\n>\n> So, to sum this up (and make someone more competent bite on it maybe\n> ;-), on your SMP, FSB, \"fake-multicore\" system all \"hash\"-related works\n> that potentially switch to different implementations internally (but w/out\n> telling us so) when given more work_mem are slower.\n>\nYes, but note that this happens only in Linux. Increasing work_mem on my\niMac increases performance (but the queries are slower under OSX than on\nvirtualized Ubuntu on the same machine). Over the weekend, I tried the same\ntest on my Ubuntu home machine with Ivy Bridge i5 3570K and it also slows\ndown (from ~900ms with work_mem=1MB to ~1200ms with work_mem=96MB).\n\n\n>\n> I'm pretty sure you're hitting some subtle, memory-access-related\n> cornercase here.\n>\n> The L2 cache of your X7350 CPUs is 2MB, could you run the tests with, say,\n> 1, 2, 4 and 8MB of work_mem and post the results?\n>\nI made a pgbench test with the same query and run it 25 times (5 clients, 5\ntransactions each):\nwork_mem speed\n1MB 1794ms\n2MB 1877ms\n4MB 2084ms\n8MB 2141ms\n10MB 2124ms\n12MB 3018ms\n16MB 3004ms\n32MB 2999ms\n64MB 3015ms\n\nIt seems that there is some sort of \"plateau\".\n\n\n>\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> __________________________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? 
- Tom Payne\n>\n>\n\nOn 4 November 2012 02:48, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\nAm 03.11.2012 18:19, schrieb Petr\n Praus:\n\nOn 3 November 2012 12:09, Gunnar \"Nick\" Bluth <[email protected]>\n wrote:\n\n\n\n\nAm 03.11.2012 16:20, schrieb Petr Praus:\n\n\n\n\n\n\n\n Your CPUs are indeed pretty oldschool.\n FSB based, IIRC, not NUMA. A process\n migration would be even more expensive\n there.\n\n\n\n\n\n\n\n\n\n Ok, I've actually looked these up now... at the time these\n were current, I was in the lucky situation to only deal\n with Opterons. And actually, with these CPUs it is pretty\n possible that Scott Marlowe's hint (check\n vm.zone_reclaim_mode) was pointing in the right direction.\n Did you check that? \n\n\n\n\nI did check that, it's zero. I responded to his message,\n but my messages to the mailing list are getting delayed by\n ~24 hours because somebody has to always bless them.\n \n\n\n \n\n\n\n\n\n\n \n Yes, same behaviour. I let the shared_buffers\n be the default (which is 8MB). With work_mem\n 1MB the query runs fast, with 96MB it runs\n slow (same times as before). It really seems\n that the culprit is work_mem.\n\n\n\n\n\n\n\n Well, I'm pretty sure that having more work_mem is a good\n thing (tm) normally ;-) \n\n\n\n\nWell, that's what I always thought too! :-) \n \n\n\n\n\n So, to sum this up (and make someone more competent bite on it maybe\n ;-), on your SMP, FSB, \"fake-multicore\" system all \"hash\"-related\n works that potentially switch to different implementations\n internally (but w/out telling us so) when given more work_mem are\n slower.Yes, but note that this happens only in Linux. Increasing work_mem on my iMac increases performance (but the queries are slower under OSX than on virtualized Ubuntu on the same machine). Over the weekend, I tried the same test on my Ubuntu home machine with Ivy Bridge i5 3570K and it also slows down (from ~900ms with work_mem=1MB to ~1200ms with work_mem=96MB).\n \n\n I'm pretty sure you're hitting some subtle, memory-access-related\n cornercase here.\n\n The L2 cache of your X7350 CPUs is 2MB, could you run the tests\n with, say, 1, 2, 4 and 8MB of work_mem and post the results?I made a pgbench test with the same query and run it 25 times (5 clients, 5 transactions each):work_mem   speed\n1MB        1794ms2MB        1877ms4MB        2084ms8MB        2141ms\n10MB       2124ms12MB       3018ms16MB       3004ms32MB       2999ms\n64MB       3015msIt seems that there is some sort of \"plateau\".\n\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Tue, 6 Nov 2012 11:38:41 -0600", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 05.11.2012 18:09, schrieb Jeff Janes:\n> On Mon, Nov 5, 2012 at 8:48 AM, Claudio Freire <[email protected]> wrote:\n>> On Mon, Nov 5, 2012 at 1:44 PM, Jeff Janes <[email protected]> wrote:\n>>>> Well, I'm pretty sure that having more work_mem is a good thing (tm)\n>>>> normally ;-)\n>>> In my experience when doing sorts in isolation, having more work_mem\n>>> is a bad thing, unless it enables you to remove a layer of\n>>> tape-merging. 
I always blamed it on the L1/L2 etc. levels of caching.\n>> Blame it on quicksort, which is quite cache-unfriendly.\n> The observation applies to heap sort. If you can't set work_mem large\n> enough to do the sort in memory, then you want to set it just barely\n> large enough to avoid two layers of tape sorting. Any larger than\n> that reduces performance rather than increasing it. Of course that\n> assumes you have the luxury of knowing ahead of time exactly how large\n> your sort will be and can set work_mem accordingly on a case by case\n> basis, which is unlikely in the real world.\n>\n>> Perhaps PG should consider using in-memory mergesort for the bigger chunks.\nI don't want to be the party pooper here, but when you have another look \nat the EXPLAINs, you'll realize that there's not a single sort involved. \nThe expensive parts are HASH, HASH JOIN and HASH RIGHT JOIN (although \nthe SeqScan takes longer as well, for whatever reason). In those parts, \nthe difference is clearly in the # of buckets and batches. So to a \ndegree, PG even does tell us that it uses a different code path (sorry, \nPG ;-)...\n\nGreg Smith mentions an optimization wrt. Hash Joins that can become a \npitfall. His advise is to increase the statistic targets on the hashed \nouter relation. Might be worth a try.\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n", "msg_date": "Tue, 06 Nov 2012 20:08:26 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 06.11.2012 18:38, schrieb Petr Praus:\n>\n> Yes, but note that this happens only in Linux. Increasing work_mem on \n> my iMac increases performance (but the queries are slower under OSX \n> than on virtualized Ubuntu on the same machine). Over the weekend, I \n> tried the same test on my Ubuntu home machine with Ivy Bridge i5 3570K \n> and it also slows down (from ~900ms with work_mem=1MB to ~1200ms with \n> work_mem=96MB).\n\nOS X is rather different from a memory access point of view, IIRC. So \nthe direct comparison actually only shows how well the Linux FS cache \nworks (for the temp files created with small work_mem ;-).\n\nThe i5 puzzles me a bit though...\n\n>\n> I'm pretty sure you're hitting some subtle, memory-access-related\n> cornercase here.\n>\n> The L2 cache of your X7350 CPUs is 2MB, could you run the tests\n> with, say, 1, 2, 4 and 8MB of work_mem and post the results?\n>\n> I made a pgbench test with the same query and run it 25 times (5 \n> clients, 5 transactions each):\n> work_mem speed\n> 1MB 1794ms\n> 2MB 1877ms\n> 4MB 2084ms\n> 8MB 2141ms\n> 10MB 2124ms\n> 12MB 3018ms\n> 16MB 3004ms\n> 32MB 2999ms\n> 64MB 3015ms\n>\n> It seems that there is some sort of \"plateau\".\nTwo, afaics. The 1->2 change hints towards occasionally breaching your \nL2 cache, so it can probably be ignored. The actual plateaus thus seem \nto be 0-2, 2-12, >= 12.\nIt'd be interesting to see the EXPLAIN ANALYSE outputs for these levels, \nthe buckets and batches in particular. 
I'd reckon we'll see significant \nchanges at 2->4 and 10->12MB work_mem.\n\n> So, to sum this up (and make someone more competent bite on it maybe \n> ;-), on your SMP, FSB, \"fake-multicore\" system all \"hash\"-related \n> works that potentially switch to different implementations internally \n> (but w/out telling us so) when given more work_mem are slower.\nSee other post... it actually does tell us (# of buckets/batches). \nHowever, the result is not good and could potentially be improved be \ntwealing the statistic_targets of the joined tables/columns.\n\nI wonder why noone actually understanding the implementation chipped in \nyet... Andres, Greg, Tom, whoever actually understands what's happening \nhere, anyone reading this? ;-)\n\nCheers,\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n\n\n\n\n\nAm 06.11.2012 18:38, schrieb Petr\n Praus:\n\n\n\nYes, but note that this happens only in Linux. Increasing\n work_mem on my iMac increases performance (but the queries are\n slower under OSX than on virtualized Ubuntu on the same\n machine). Over the weekend, I tried the same test on my Ubuntu\n home machine with Ivy Bridge i5 3570K and it also slows down\n (from ~900ms with work_mem=1MB to ~1200ms with work_mem=96MB).\n\n\n\n OS X is rather different from a memory access point of view, IIRC.\n So the direct comparison actually only shows how well the Linux FS\n cache works (for the temp files created with small work_mem ;-).\n\n The i5 puzzles me a bit though...\n\n\n\n\n \n I'm pretty sure you're hitting some subtle,\n memory-access-related cornercase here.\n\n The L2 cache of your X7350 CPUs is 2MB, could you run the\n tests with, say, 1, 2, 4 and 8MB of work_mem and post the\n results?\n\nI made a pgbench test with the same query and run it 25\n times (5 clients, 5 transactions each):\nwork_mem   speed\n1MB        1794ms\n2MB        1877ms\n4MB        2084ms\n8MB        2141ms\n10MB       2124ms\n12MB       3018ms\n16MB       3004ms\n32MB       2999ms\n64MB       3015ms\n\n\n It seems that there is some sort of \"plateau\".\n\n\n Two, afaics. The 1->2 change hints towards occasionally breaching\n your L2 cache, so it can probably be ignored. The actual plateaus\n thus seem to be 0-2, 2-12, >= 12.\n It'd be interesting to see the EXPLAIN ANALYSE outputs for these\n levels, the buckets and batches in particular. I'd reckon we'll see\n significant changes at 2->4 and 10->12MB work_mem.\n\n\nSo, to sum this up (and make\n someone more competent bite on it maybe ;-), on your SMP, FSB,\n \"fake-multicore\" system all \"hash\"-related works that\n potentially switch to different implementations internally (but\n w/out telling us so) when given more work_mem are slower.\n\n\n See other post... it actually does tell us (# of buckets/batches).\n However, the result is not good and could potentially be improved be\n twealing the statistic_targets of the joined tables/columns.\n\n I wonder why noone actually understanding the implementation chipped\n in yet... Andres, Greg, Tom, whoever actually understands what's\n happening here, anyone reading this? 
;-)\n\n Cheers,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Tue, 06 Nov 2012 20:38:46 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 6 November 2012 13:38, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 06.11.2012 18:38, schrieb Petr Praus:\n>\n>\n> Yes, but note that this happens only in Linux. Increasing work_mem on my\n> iMac increases performance (but the queries are slower under OSX than on\n> virtualized Ubuntu on the same machine). Over the weekend, I tried the same\n> test on my Ubuntu home machine with Ivy Bridge i5 3570K and it also slows\n> down (from ~900ms with work_mem=1MB to ~1200ms with work_mem=96MB).\n>\n>\n> OS X is rather different from a memory access point of view, IIRC. So the\n> direct comparison actually only shows how well the Linux FS cache works\n> (for the temp files created with small work_mem ;-).\n>\n> The i5 puzzles me a bit though...\n>\n>\n>\n>> I'm pretty sure you're hitting some subtle, memory-access-related\n>> cornercase here.\n>>\n>> The L2 cache of your X7350 CPUs is 2MB, could you run the tests with,\n>> say, 1, 2, 4 and 8MB of work_mem and post the results?\n>>\n> I made a pgbench test with the same query and run it 25 times (5 clients,\n> 5 transactions each):\n> work_mem speed\n> 1MB 1794ms\n> 2MB 1877ms\n> 4MB 2084ms\n> 8MB 2141ms\n> 10MB 2124ms\n> 12MB 3018ms\n> 16MB 3004ms\n> 32MB 2999ms\n> 64MB 3015ms\n>\n> It seems that there is some sort of \"plateau\".\n>\n> Two, afaics. The 1->2 change hints towards occasionally breaching your L2\n> cache, so it can probably be ignored. The actual plateaus thus seem to be\n> 0-2, 2-12, >= 12.\n> It'd be interesting to see the EXPLAIN ANALYSE outputs for these levels,\n> the buckets and batches in particular. I'd reckon we'll see significant\n> changes at 2->4 and 10->12MB work_mem.\n>\n\nHere are the explains, I run the query a few times before actually taking\nthe explain to warm up the caches. (I also noticed that explain slows down\nthe query execution which is probably to be expected.)\n\n2MB: http://explain.depesz.com/s/ul1\n4MB: http://explain.depesz.com/s/IlVu\n10MB: http://explain.depesz.com/s/afx3\n12MB: http://explain.depesz.com/s/i0vQ\n\n So, to sum this up (and make someone more competent bite on it maybe ;-),\n> on your SMP, FSB, \"fake-multicore\" system all \"hash\"-related works that\n> potentially switch to different implementations internally (but w/out\n> telling us so) when given more work_mem are slower.\n>\n> See other post... it actually does tell us (# of buckets/batches).\n> However, the result is not good and could potentially be improved be\n> twealing the statistic_targets of the joined tables/columns.\n>\n> I wonder why noone actually understanding the implementation chipped in\n> yet... Andres, Greg, Tom, whoever actually understands what's happening\n> here, anyone reading this? 
;-)\n>\n> Cheers,\n>\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> __________________________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? - Tom Payne\n>\n>\n\nOn 6 November 2012 13:38, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\nAm 06.11.2012 18:38, schrieb Petr\n Praus:\n\n\n\nYes, but note that this happens only in Linux. Increasing\n work_mem on my iMac increases performance (but the queries are\n slower under OSX than on virtualized Ubuntu on the same\n machine). Over the weekend, I tried the same test on my Ubuntu\n home machine with Ivy Bridge i5 3570K and it also slows down\n (from ~900ms with work_mem=1MB to ~1200ms with work_mem=96MB).\n\n\n\n OS X is rather different from a memory access point of view, IIRC.\n So the direct comparison actually only shows how well the Linux FS\n cache works (for the temp files created with small work_mem ;-).\n\n The i5 puzzles me a bit though...\n\n\n\n\n \n I'm pretty sure you're hitting some subtle,\n memory-access-related cornercase here.\n\n The L2 cache of your X7350 CPUs is 2MB, could you run the\n tests with, say, 1, 2, 4 and 8MB of work_mem and post the\n results?\n\nI made a pgbench test with the same query and run it 25\n times (5 clients, 5 transactions each):\nwork_mem   speed\n1MB        1794ms\n2MB        1877ms\n4MB        2084ms\n8MB        2141ms\n10MB       2124ms\n12MB       3018ms\n16MB       3004ms\n32MB       2999ms\n64MB       3015ms\n\n\n It seems that there is some sort of \"plateau\".\n\n\n Two, afaics. The 1->2 change hints towards occasionally breaching\n your L2 cache, so it can probably be ignored. The actual plateaus\n thus seem to be 0-2, 2-12, >= 12.\n It'd be interesting to see the EXPLAIN ANALYSE outputs for these\n levels, the buckets and batches in particular. I'd reckon we'll see\n significant changes at 2->4 and 10->12MB work_mem.Here are the explains, I run the query a few times before actually taking the explain to warm up the caches. (I also noticed that explain slows down the query execution which is probably to be expected.)\n2MB: http://explain.depesz.com/s/ul14MB: http://explain.depesz.com/s/IlVu10MB: http://explain.depesz.com/s/afx3\n12MB: http://explain.depesz.com/s/i0vQ\n\n\nSo, to sum this up (and make\n someone more competent bite on it maybe ;-), on your SMP, FSB,\n \"fake-multicore\" system all \"hash\"-related works that\n potentially switch to different implementations internally (but\n w/out telling us so) when given more work_mem are slower.\n\n\n See other post... it actually does tell us (# of buckets/batches).\n However, the result is not good and could potentially be improved be\n twealing the statistic_targets of the joined tables/columns.\n\n I wonder why noone actually understanding the implementation chipped\n in yet... Andres, Greg, Tom, whoever actually understands what's\n happening here, anyone reading this? ;-)\n\n Cheers,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? 
- Tom Payne", "msg_date": "Tue, 6 Nov 2012 14:08:48 -0600", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 06.11.2012 21:08, schrieb Petr Praus:\n>\n> 2MB: http://explain.depesz.com/s/ul1\n> 4MB: http://explain.depesz.com/s/IlVu\n> 10MB: http://explain.depesz.com/s/afx3\n> 12MB: http://explain.depesz.com/s/i0vQ\n>\nSee the change in the plan between 10MB and 12MB, directly at top level? \nThat narrows the thing down quite a bit.\n\nThough I wonder why this didn't show in the original plans...\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n", "msg_date": "Tue, 06 Nov 2012 21:17:59 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 6 November 2012 14:17, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 06.11.2012 21:08, schrieb Petr Praus:\n>\n>\n>> 2MB: http://explain.depesz.com/s/**ul1 <http://explain.depesz.com/s/ul1>\n>> 4MB: http://explain.depesz.com/s/**IlVu<http://explain.depesz.com/s/IlVu>\n>> 10MB: http://explain.depesz.com/s/**afx3<http://explain.depesz.com/s/afx3>\n>> 12MB: http://explain.depesz.com/s/**i0vQ<http://explain.depesz.com/s/i0vQ>\n>>\n>> See the change in the plan between 10MB and 12MB, directly at top level?\n> That narrows the thing down quite a bit.\n>\n> Though I wonder why this didn't show in the original plans...\n\n\nYes, the 2,4 and 10 are the same, the only difference is number of buckets.\nBut with 12, it makes completely different choices, it decides to make\nsequential scans and hash right joins instead of merge joins. And those\nsequential scans take a loong time. Could this be caused by some missing\nindices perhaps?\n\nThe original plans I posted at the start are the same as the 12MB plan, I'm\nnot sure why is that, I really hope I didn't make some sort of mistake\nthere.\n\nThanks for your help by the way! :-)\n\n\n>\n>\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> ______________________________**______________________________**\n> ______________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? - Tom Payne\n>\n>\n\nOn 6 November 2012 14:17, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\nAm 06.11.2012 21:08, schrieb Petr Praus:\n\n\n2MB: http://explain.depesz.com/s/ul1\n4MB: http://explain.depesz.com/s/IlVu\n10MB: http://explain.depesz.com/s/afx3\n12MB: http://explain.depesz.com/s/i0vQ\n\n\nSee the change in the plan between 10MB and 12MB, directly at top level? That narrows the thing down quite a bit.\n\nThough I wonder why this didn't show in the original plans...Yes, the 2,4 and 10 are the same, the only difference is number of buckets. But with 12, it makes completely different choices, it decides to make sequential scans and hash right joins instead of merge joins. And those sequential scans take a loong time. 
Could this be caused by some missing indices perhaps?\nThe original plans I posted at the start are the same as the 12MB plan, I'm not sure why is that, I really hope I didn't make some sort of mistake there.Thanks for your help by the way! :-)\n \n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil   +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX.  Ten years later\nthey are choosing Windows over UNIX.  What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Tue, 6 Nov 2012 14:24:27 -0600", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "Am 06.11.2012 21:24, schrieb Petr Praus:\n> On 6 November 2012 14:17, Gunnar \"Nick\" Bluth \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Am 06.11.2012 21:08, schrieb Petr Praus:\n>\n>\n> 2MB: http://explain.depesz.com/s/ul1\n> 4MB: http://explain.depesz.com/s/IlVu\n> 10MB: http://explain.depesz.com/s/afx3\n> 12MB: http://explain.depesz.com/s/i0vQ\n>\n> See the change in the plan between 10MB and 12MB, directly at top\n> level? That narrows the thing down quite a bit.\n>\n> Though I wonder why this didn't show in the original plans...\n>\n>\n> Yes, the 2,4 and 10 are the same, the only difference is number of \n> buckets. But with 12, it makes completely different choices, it \n> decides to make sequential scans and hash right joins instead of merge \n> joins. And those sequential scans take a loong time. Could this be \n> caused by some missing indices perhaps?\n\nWell, you do have indices, as we can clearly see.\n\n> The original plans I posted at the start are the same as the 12MB \n> plan, I'm not sure why is that, I really hope I didn't make some sort \n> of mistake there.\n\nI had been wondering why you didn't have any indices, tbth. However, the \nexecution times still grow with work_mem, which is interesting \nindependent of the actual plan change...\n\n>\n> Thanks for your help by the way! :-)\n\nOh, no worries there... this is by far the most interesting challenge \nI've encountered in months ;-)\n\nBut I do admit that I've reached the end of the ladder now. No idea how \nyou can improve your runtime yet. Probably\n- using full text search on \"personinfo\"\n- try different join_collapse_limit / from_collapse_limit / \nenable_hashjoin values\n\nThe most pragmatic approach is probably to just stick with work_mem = \n1MB (or less) ;-), but that may potentially bite you later.\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne\n\n\n\n\n\n\n\nAm 06.11.2012 21:24, schrieb Petr\n Praus:\n\nOn 6 November 2012 14:17, Gunnar \"Nick\" Bluth <[email protected]>\n wrote:\n\n\n Am 06.11.2012 21:08, schrieb Petr Praus:\n \n\n\n 2MB: http://explain.depesz.com/s/ul1\n 4MB: http://explain.depesz.com/s/IlVu\n 10MB: http://explain.depesz.com/s/afx3\n 12MB: http://explain.depesz.com/s/i0vQ\n\n\n\n See the change in the plan between 10MB and 12MB, directly at\n top level? 
That narrows the thing down quite a bit.\n\n Though I wonder why this didn't show in the original plans...\n\n\nYes, the 2,4 and 10 are the same, the only difference is\n number of buckets. But with 12, it makes completely different\n choices, it decides to make sequential scans and hash right\n joins instead of merge joins. And those sequential scans take\n a loong time. Could this be caused by some missing indices\n perhaps?\n\n\n\n Well, you do have indices, as we can clearly see.\n\n\n\nThe original plans I posted at the start are the same as\n the 12MB plan, I'm not sure why is that, I really hope I\n didn't make some sort of mistake there.\n\n\n\n I had been wondering why you didn't have any indices, tbth. However,\n the execution times still grow with work_mem, which is interesting\n independent of the actual plan change...\n\n\n\n\n\nThanks for your help by the way! :-)\n \n\n\n\n Oh, no worries there... this is by far the most interesting\n challenge I've encountered in months ;-)\n\n But I do admit that I've reached the end of the ladder now. No idea\n how you can improve your runtime yet. Probably \n - using full text search on \"personinfo\"\n - try different join_collapse_limit / from_collapse_limit /\n enable_hashjoin values\n\n The most pragmatic approach is probably to just stick with work_mem\n = 1MB (or less) ;-), but that may potentially bite you later.\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Tue, 06 Nov 2012 21:50:21 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 6 November 2012 14:50, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n\n> Am 06.11.2012 21:24, schrieb Petr Praus:\n>\n> On 6 November 2012 14:17, Gunnar \"Nick\" Bluth <[email protected]>wrote:\n>\n>> Am 06.11.2012 21:08, schrieb Petr Praus:\n>>\n>>\n>>> 2MB: http://explain.depesz.com/s/ul1\n>>> 4MB: http://explain.depesz.com/s/IlVu\n>>> 10MB: http://explain.depesz.com/s/afx3\n>>> 12MB: http://explain.depesz.com/s/i0vQ\n>>>\n>>> See the change in the plan between 10MB and 12MB, directly at top\n>> level? That narrows the thing down quite a bit.\n>>\n>> Though I wonder why this didn't show in the original plans...\n>\n>\n> Yes, the 2,4 and 10 are the same, the only difference is number of\n> buckets. But with 12, it makes completely different choices, it decides to\n> make sequential scans and hash right joins instead of merge joins. And\n> those sequential scans take a loong time. Could this be caused by some\n> missing indices perhaps?\n>\n>\n> Well, you do have indices, as we can clearly see.\n>\n>\n> The original plans I posted at the start are the same as the 12MB plan,\n> I'm not sure why is that, I really hope I didn't make some sort of mistake\n> there.\n>\n>\n> I had been wondering why you didn't have any indices, tbth. However, the\n> execution times still grow with work_mem, which is interesting independent\n> of the actual plan change...\n>\n>\n>\n> Thanks for your help by the way! :-)\n>\n>\n>\n> Oh, no worries there... 
this is by far the most interesting challenge I've\n> encountered in months ;-)\n>\n> But I do admit that I've reached the end of the ladder now. No idea how\n> you can improve your runtime yet. Probably\n> - using full text search on \"personinfo\"\n> - try different join_collapse_limit / from_collapse_limit /\n> enable_hashjoin values\n>\n> The most pragmatic approach is probably to just stick with work_mem = 1MB\n> (or less) ;-), but that may potentially bite you later.\n>\n\nYes, that's what I'm running now in production :) When I have more time I\nmay come up with more queries to test overall system better.\nWe'll see if anyone else comes up with something but I am out of things to\ntry, too. So I guess I'll put this sideways for now.\n\n\n>\n>\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> __________________________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX. Ten years later\n> they are choosing Windows over UNIX. What part of that message aren't you\n> getting? - Tom Payne\n>\n>\n\nOn 6 November 2012 14:50, Gunnar \"Nick\" Bluth <[email protected]> wrote:\n\n\nAm 06.11.2012 21:24, schrieb Petr\n Praus:\n\nOn 6 November 2012 14:17, Gunnar \"Nick\" Bluth <[email protected]>\n wrote:\n\n\n Am 06.11.2012 21:08, schrieb Petr Praus:\n \n\n\n 2MB: http://explain.depesz.com/s/ul1\n 4MB: http://explain.depesz.com/s/IlVu\n 10MB: http://explain.depesz.com/s/afx3\n 12MB: http://explain.depesz.com/s/i0vQ\n\n\n\n See the change in the plan between 10MB and 12MB, directly at\n top level? That narrows the thing down quite a bit.\n\n Though I wonder why this didn't show in the original plans...\n\n\nYes, the 2,4 and 10 are the same, the only difference is\n number of buckets. But with 12, it makes completely different\n choices, it decides to make sequential scans and hash right\n joins instead of merge joins. And those sequential scans take\n a loong time. Could this be caused by some missing indices\n perhaps?\n\n\n\n Well, you do have indices, as we can clearly see.\n\n\n\nThe original plans I posted at the start are the same as\n the 12MB plan, I'm not sure why is that, I really hope I\n didn't make some sort of mistake there.\n\n\n\n I had been wondering why you didn't have any indices, tbth. However,\n the execution times still grow with work_mem, which is interesting\n independent of the actual plan change...\n\n\n\n\n\nThanks for your help by the way! :-)\n \n\n\n\n Oh, no worries there... this is by far the most interesting\n challenge I've encountered in months ;-)\n\n But I do admit that I've reached the end of the ladder now. No idea\n how you can improve your runtime yet. Probably \n - using full text search on \"personinfo\"\n - try different join_collapse_limit / from_collapse_limit /\n enable_hashjoin values\n\n The most pragmatic approach is probably to just stick with work_mem\n = 1MB (or less) ;-), but that may potentially bite you later.Yes, that's what I'm running now in production :) When I have more time I may come up with more queries to test overall system better.\nWe'll see if anyone else comes up with something but I am out of things to try, too. So I guess I'll put this sideways for now. \n\n\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n__________________________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX. 
Ten years later\nthey are choosing Windows over UNIX. What part of that message aren't you\ngetting? - Tom Payne", "msg_date": "Thu, 8 Nov 2012 12:23:03 -0600", "msg_from": "Petr Praus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: Increasing work_mem and shared_buffers on Postgres\n\t9.2 significantly slows down queries" }, { "msg_contents": "On 2012-10-30 14:08:56 -0500, Petr Praus wrote:\n> select count(*) from contest c\n> left outer join contestparticipant cp on c.id=cp.contestId\n> left outer join teammember tm on tm.contestparticipantid=cp.id\n> left outer join staffmember sm on cp.id=sm.contestparticipantid\n> left outer join person p on p.id=cp.personid\n> left outer join personinfo pi on pi.id=cp.personinfoid\n> where pi.lastname like '%b%' or pi.firstname like '%a%';\n\nBtw, not really related to the question, but the way you use left joins\nhere doesn't really make sense and does lead to inferior plans.\nAs you restrict on 'pi', the rightmost table in a chain of left joins,\nthere is no point in all those left joins. I would guess the overall\nplan is better if use straight joins.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Fri, 9 Nov 2012 18:53:45 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem and shared_buffers on Postgres 9.2\n\tsignificantly slows down queries" } ]
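To make the last point concrete, here is a sketch of the straight-join rewrite being hinted at, using the table and column names quoted in the thread. Only the joins that the WHERE clause already forces to behave as inner joins (the path through contestparticipant to personinfo) are changed; the joins to teammember, staffmember and person are kept as outer joins, since turning those into plain joins could change the count unless the data guarantees a match. Whether this actually yields a better plan is something to confirm with EXPLAIN (ANALYZE, BUFFERS) on the real data.

    -- Sketch of the straight-join form suggested above; the count is preserved
    -- because rows with no personinfo match were already being filtered out by
    -- the WHERE clause.
    SELECT count(*)
    FROM contest c
    JOIN contestparticipant cp ON c.id = cp.contestId
    JOIN personinfo pi         ON pi.id = cp.personinfoid
    LEFT JOIN teammember tm    ON tm.contestparticipantid = cp.id
    LEFT JOIN staffmember sm   ON cp.id = sm.contestparticipantid
    LEFT JOIN person p         ON p.id = cp.personid
    WHERE pi.lastname LIKE '%b%' OR pi.firstname LIKE '%a%';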
[ { "msg_contents": "Shaun Thomas wrote:\n> On 10/30/2012 06:55 AM, Kevin Grittner wrote:\n\n>> Is there a good transaction-based connection pooler in Python?\n>> You're better off with a good pool built in to the client\n>> application than with a good pool running as a separate process\n>> between the client and the database, IMO.\n> \n> Could you explain this a little more? My experience is almost\n> always the exact opposite, especially in large clusters that may\n> have dozens of servers all hitting the same database. A\n> centralized pool has much less duplication and can serve from a\n> smaller pool than having 12 servers each have 25 connections\n> reserved in their own private pool or something.\n> \n> I mean... a pool is basically a proxy server. I don't have 12\n> individual proxy servers for 12 webservers.\n\nSure, if you have multiple web servers and they are not routing\ntheir database requests through a common \"model\" layer, an external\npooler would make sense. Most of the time I've dealt either with one\nweb server or multiple servers routing requests at the transaction\nlevel to a single JVM which ran the logic of the transaction --\neither of which is a good place to have a connection pool. A dozen\ndifferent JVMs all making JDBC requests does kind of beg for an\nexternal layer to concentrate the requests; if it isn't something\nthat's running the transaction layer, a connection pooler there\nwould be good.\n\n-Kevin\n\n", "msg_date": "Tue, 30 Oct 2012 16:07:42 -0400", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to keep queries low latency as concurrency increases" } ]
[ { "msg_contents": "On Thu, Jul 19, 2012 at 11:07 AM, Jon Nelson <[email protected]> wrote:\n> Recently I found myself wondering what was taking a particular query so long.\n> I immediately assumed it was a lack of I/O, because lack of I/O is a\n> thorn in my side.\n> Nope, the I/O was boring. CPU? Well, the process was using 100% of the\n> CPU but the query itself was really very simple.\n> I turned to ltrace (horribly imprecise, I know). ltrace told me this:\n>\n>\n> % time seconds usecs/call calls function\n> ------ ----------- ----------- --------- --------------------\n> 46.54 6.789433 69 97766 memcpy\n> 28.16 4.108324 1100 3732 strlen\n> 14.45 2.107567 564 3732 malloc\n> 9.16 1.336108 28 46877 memset\n> 0.74 0.107935 28 3732 strcpy\n> 0.73 0.107221 28 3732 free\n> 0.16 0.023687 187 126 write\n> 0.02 0.003587 28 126 __errno_location\n> 0.02 0.003075 59 52 read\n> 0.01 0.001523 29 52 memcmp\n> ------ ----------- ----------- --------- --------------------\n> 100.00 14.588460 159927 total\n>\n>\n> and this:\n>\n> strlen(\"SRF multi-call context\")\n> strcpy(0xe01d40, \"SRF multi-call context\")\n> malloc(1024)\n> memcpy(...)\n> memset(...)\n> ...\n> memset(...)\n> free(..)\n>\n> repeat.\n>\n> I was rather surprised to learn that (per-row):\n> (1) memcpy of 64 bytes accounted for 46% of the time spent in library calls\n> (2) the (other) costs of strlen, strcpy, malloc, and memset were so\n> huge (in particular, strlen)\n>\n> What, if anything, can be done about this? It seems the overhead for\n> setting up the memory context for the SRF is pretty high.\n> I notice this overhead pretty much every time I use any of the array\n> functions like unnest.\n>\n> Please help me to understand if I'm misinterpreting things here.\n>\n> [x86_64, Linux, PostgreSQL 9.1.4]\n\n\nA followup.\n\nRecently, I imported a bunch of data. The import ran in about 30\nseconds. The data itself was represented in a way that made more sense\n- from a relational database perspective - as multiple tables. To\naccomplish this, I made use of string_to_array and unnest. The\ninitial table creation and copy run in about 30 seconds, but then the\ncreation of the new table (create table ... as select ..\nunnest(string_to_array(....))) took over 5 minutes. 10 times as long.\nWhat is it about the array functions (actually, all set-returning\nfunctions that I've tried) that causes them to be so expensive? The\nper-call overhead is enormous in some cases. PostgreSQL 9.1.5 on\nx86_64 (openSUSE 12.2 - but the effect has been observed across\nseveral platforms and major/minor releases of PostgreSQL).\n\n\n\n> --\n> Jon\n\n\n\n-- \nJon\n\n", "msg_date": "Tue, 30 Oct 2012 16:31:20 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: set-returning calls and overhead" } ]
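The CREATE TABLE ... AS SELECT above is abbreviated and the real schema isn't shown, so the following is a minimal, self-contained sketch that reproduces the pattern under discussion. The names and sizes here (staging, raw_csv, 100000 rows) are invented for illustration, and the script makes no claim about where the overhead comes from; it simply gives something concrete to run under \timing or ltrace.

    -- Hypothetical staging table with comma-separated values packed into one column.
    CREATE TABLE staging (id int, raw_csv text);
    INSERT INTO staging
        SELECT g, rtrim(repeat('x,', 50), ',')
        FROM generate_series(1, 100000) AS g;

    \timing on
    -- Same shape as the statement described above: a set-returning function
    -- (unnest over string_to_array) in the target list of CREATE TABLE AS.
    CREATE TABLE exploded AS
        SELECT id, unnest(string_to_array(raw_csv, ',')) AS val
        FROM staging;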
[ { "msg_contents": "dear friends\n\ni have - sql file of size more than 1 gb\nwhen i execute it then after some time \"Invalid memory alloc request size\n100234023 byte\" occcured\nwhat ' s problem that i don't know ?\n\n\n\nwith thanks\nmahavir\n\ndear friendsi have - sql file of size more than 1 gb when i execute it  then after some time \"Invalid memory alloc request size 100234023 byte\"   occcuredwhat ' s problem that i don't know ?\nwith thanksmahavir", "msg_date": "Wed, 31 Oct 2012 15:54:45 +0530", "msg_from": "Mahavir Trivedi <[email protected]>", "msg_from_op": true, "msg_subject": "Invalid memory alloc request size" }, { "msg_contents": "Hello\n\n2012/10/31 Mahavir Trivedi <[email protected]>:\n> dear friends\n>\n> i have - sql file of size more than 1 gb\n> when i execute it then after some time \"Invalid memory alloc request size\n> 100234023 byte\" occcured\n> what ' s problem that i don't know ?\n\nthere is hard-coded limit for memory request - for example - varlena\ncannot be longer than 1GB, so this request was usually signal of some\nerror. Probably is too less for current computers.\n\nRegards\n\nPavel Stěhule\n\n>\n>\n>\n> with thanks\n> mahavir\n\n", "msg_date": "Wed, 31 Oct 2012 11:28:50 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid memory alloc request size" }, { "msg_contents": "This was answered on the list last time you asked it. You are exceeding a\nmaximum buffer size. There was an implication that it was related to\nconverting a string from one encoding to another that could maybe be\nalleviated by using the same encoding in both client and server, but a more\nreliable solution is probably breaking your sql file into smaller pieces\n(or, perhaps even better would be bulk-loading the data via COPY, assuming\nthat isn't subject to the same buffer size limitation ). I suppose you\ncould investigate recompiling postgresql with a larger buffer, though that\nis likely to have side effects that i certainly can't predict.\n\n\n\nOn Wed, Oct 31, 2012 at 3:24 AM, Mahavir Trivedi\n<[email protected]>wrote:\n\n> dear friends\n>\n> i have - sql file of size more than 1 gb\n> when i execute it then after some time \"Invalid memory alloc request size\n> 100234023 byte\" occcured\n> what ' s problem that i don't know ?\n>\n>\n>\n> with thanks\n> mahavir\n>\n\nThis was answered on the list last time you asked it. You are exceeding a maximum buffer size. There was an implication that it was related to converting a string from one encoding to another that could maybe be alleviated by using the same encoding in both client and server, but a more reliable solution is probably breaking your sql file into smaller pieces (or, perhaps even better would be bulk-loading the data via COPY, assuming that isn't subject to the same buffer size limitation ). I suppose you could investigate recompiling postgresql with a larger buffer, though that is likely to have side effects that i certainly can't predict.\nOn Wed, Oct 31, 2012 at 3:24 AM, Mahavir Trivedi <[email protected]> wrote:\ndear friendsi have - sql file of size more than 1 gb when i execute it  then after some time \"Invalid memory alloc request size 100234023 byte\"   occcured\nwhat ' s problem that i don't know ?\nwith thanksmahavir", "msg_date": "Wed, 31 Oct 2012 03:34:25 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invalid memory alloc request size" } ]
[ { "msg_contents": "Hi all :)\n\nI'm here again.\nThis time I'll provide more details (explain analyze, data-type, and\nindexes), hope it will be enough :)\n\nThe query that is performing a plan that i do not understand is the\nfollowing:\n--------------------\nselect [some fields from all 3 tables]\nfrom\nDATA_SEQUENCES\njoin SUBSCRIPTION on\n SUBSCRIPTION.key1 = DATA_SEQUENCES.key1 AND\nSUBSCRIPTION.key2 = DATA_SEQUENCES.key2\njoin people on\n people.key1 = SUBSCRIPTION.people_key1 AND\npeople.key2 = SUBSCRIPTION.people_key2\nWHERE DATA_SEQUENCES.import_id = 1351674661\n--------------------\n\nThis is the explain analyze:\n\n--------------------\nMerge Join (cost=2902927.01..2973307.79 rows=790371 width=240) (actual\ntime=40525.439..40525.439 rows=0 loops=1)\n Merge Cond: ((people.key1 = subscription.people_key1) AND (people.key2 =\nsubscription.people_key2))\n -> Sort (cost=2885618.73..2904468.49 rows=7539905 width=240) (actual\ntime=40525.268..40525.268 rows=1 loops=1)\n Sort Key: people.key1, people.key2\n Sort Method: external merge Disk: 466528kB\n -> Seq Scan on people (cost=0.00..323429.05 rows=7539905\nwidth=240) (actual time=0.029..5193.057 rows=7539469 loops=1)\n -> Sort (cost=17308.28..17318.76 rows=4193 width=16) (actual\ntime=0.167..0.167 rows=0 loops=1)\n Sort Key: subscription.people_key1, subscription.people_key2\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..17055.99 rows=4193 width=16) (actual\ntime=0.154..0.154 rows=0 loops=1)\n -> Seq Scan on data_sequences (cost=0.00..150.15 rows=39\nwidth=16) (actual time=0.154..0.154 rows=0 loops=1)\n Filter: (import_id = 1351674661)\n -> Index Scan using xpksubscription on subscription\n (cost=0.00..431.86 rows=108 width=16) (never executed)\n Index Cond: ((subscription.key1 = data_sequences.key1)\nAND (subscription.key2 = data_sequences.key2))\nTotal runtime: 40600.815 ms\n--------------------\n\nAll the key, key2, and relative foreign keys are int4. Import_id is a\nbigint.\nI'm not reporting the full create table script 'cause people and\nsubscription both have lots of fields. I know this can be wrong (lots of\nfield on big table), but this is an environment born something like 20\nyears ago and not intended from the start for such a big data volume.\n\nI have the following indexes:\n\non People:\nCREATE UNIQUE INDEX people_pkey ON people USING btree (key1, key2)\nCREATE INDEX people_pkey_hash_loc ON people USING hash (key1);\nCREATE INDEX people_pkey_hash_id ON people USING hash (key2);\n\non Subscription:\nCREATE UNIQUE INDEX subscription_pkey ON subscription USING btree (key1,\nkey2)\nCREATE INDEX subscription_fk_people ON subscription USING btree\n(people_key1, people_key2)\n\non Data_sequences:\ncreate index data_sequences_key on data_sequences USING btree (key1, key2);\ncreate index data_sequences_id on data_sequences USING btree (import_id);\n\nWhat i don't understand is WHY the seq scan on people, and how can I cast\nthe import_id to make it use the index on data_sequences (another useless\nseq scan).\nMind that when I run this explain analyze there were no records on\ndata_sequences. So all the time (40 seconds!) is for the useless seq scan\non people. 
Both people and subscription have lots of records (10-20.000.000\nrange).\nI'm running 8.4 (haven't tested it on 9.2.1 yet, but we planned to upgrade\nASAP cause we have other queries which will benefit from the\nindex-only-scan new feature).\n\n\nThank you in advance,\n-- \nVincenzo.\n\nHi all :)I'm here again.This time I'll provide more details (explain analyze, data-type, and indexes), hope it will be enough :)The query that is performing a plan that i do not understand is the following:\n--------------------select [some fields from all 3 tables]from  DATA_SEQUENCES  join SUBSCRIPTION on \n SUBSCRIPTION.key1 = DATA_SEQUENCES.key1 AND  SUBSCRIPTION.key2 = DATA_SEQUENCES.key2 join people on \n people.key1 = SUBSCRIPTION.people_key1 AND  people.key2 = SUBSCRIPTION.people_key2 WHERE  DATA_SEQUENCES.import_id = 1351674661\n--------------------This is the explain analyze:--------------------Merge Join  (cost=2902927.01..2973307.79 rows=790371 width=240) (actual time=40525.439..40525.439 rows=0 loops=1)\n  Merge Cond: ((people.key1 = subscription.people_key1) AND (people.key2 = subscription.people_key2))  ->  Sort  (cost=2885618.73..2904468.49 rows=7539905 width=240) (actual time=40525.268..40525.268 rows=1 loops=1)\n        Sort Key: people.key1, people.key2        Sort Method:  external merge  Disk: 466528kB        ->  Seq Scan on people  (cost=0.00..323429.05 rows=7539905 width=240) (actual time=0.029..5193.057 rows=7539469 loops=1)\n  ->  Sort  (cost=17308.28..17318.76 rows=4193 width=16) (actual time=0.167..0.167 rows=0 loops=1)        Sort Key: subscription.people_key1, subscription.people_key2        Sort Method:  quicksort  Memory: 25kB\n        ->  Nested Loop  (cost=0.00..17055.99 rows=4193 width=16) (actual time=0.154..0.154 rows=0 loops=1)              ->  Seq Scan on data_sequences  (cost=0.00..150.15 rows=39 width=16) (actual time=0.154..0.154 rows=0 loops=1)\n                    Filter: (import_id = 1351674661)              ->  Index Scan using xpksubscription on subscription  (cost=0.00..431.86 rows=108 width=16) (never executed)                    Index Cond: ((subscription.key1 = data_sequences.key1) AND (subscription.key2 = data_sequences.key2))\nTotal runtime: 40600.815 ms--------------------All the key, key2, and relative foreign keys are int4. Import_id is a bigint.I'm not reporting the full create table script 'cause people and subscription both have lots of fields. I know this can be wrong (lots of field on big table), but this is an environment born something like 20 years ago and not intended from the start for such a big data volume.\nI have the following indexes:on People:CREATE UNIQUE INDEX people_pkey ON people USING btree (key1, key2)CREATE INDEX people_pkey_hash_loc ON people USING hash (key1);\nCREATE INDEX people_pkey_hash_id ON people USING hash (key2);on Subscription:CREATE UNIQUE INDEX subscription_pkey ON subscription USING btree (key1, key2)CREATE INDEX subscription_fk_people ON subscription USING btree (people_key1, people_key2)\non Data_sequences:create index data_sequences_key on data_sequences USING btree (key1, key2);create index data_sequences_id on data_sequences USING btree (import_id);\nWhat i don't understand is WHY the seq scan on people, and how can I cast the import_id to make it use the index on data_sequences (another useless seq scan).Mind that when I run this explain analyze there were no records on data_sequences. So all the time (40 seconds!) is for the useless seq scan on people. 
Both people and subscription have lots of records (10-20.000.000 range).\nI'm running 8.4 (haven't tested it on 9.2.1 yet, but we planned to upgrade ASAP cause we have other queries which will benefit from the index-only-scan new feature).\n\nThank you in advance,-- Vincenzo.", "msg_date": "Wed, 31 Oct 2012 11:55:49 +0100", "msg_from": "Vincenzo Melandri <[email protected]>", "msg_from_op": true, "msg_subject": "Seq scan on big table, episode 2" }, { "msg_contents": "I may (or may not) have found the solution: a reindex on the 3 tables fixed\nthe query plan. Now I can plan to reindex only the involved indexes at the\nstart of the data import procedure.\n\nOn Wed, Oct 31, 2012 at 11:55 AM, Vincenzo Melandri\n<[email protected]>wrote:\n\n> Hi all :)\n>\n> I'm here again.\n> This time I'll provide more details (explain analyze, data-type, and\n> indexes), hope it will be enough :)\n>\n> The query that is performing a plan that i do not understand is the\n> following:\n> --------------------\n> select [some fields from all 3 tables]\n> from\n> DATA_SEQUENCES\n> join SUBSCRIPTION on\n> SUBSCRIPTION.key1 = DATA_SEQUENCES.key1 AND\n> SUBSCRIPTION.key2 = DATA_SEQUENCES.key2\n> join people on\n> people.key1 = SUBSCRIPTION.people_key1 AND\n> people.key2 = SUBSCRIPTION.people_key2\n> WHERE DATA_SEQUENCES.import_id = 1351674661\n> --------------------\n>\n> This is the explain analyze:\n>\n> --------------------\n> Merge Join (cost=2902927.01..2973307.79 rows=790371 width=240) (actual\n> time=40525.439..40525.439 rows=0 loops=1)\n> Merge Cond: ((people.key1 = subscription.people_key1) AND (people.key2 =\n> subscription.people_key2))\n> -> Sort (cost=2885618.73..2904468.49 rows=7539905 width=240) (actual\n> time=40525.268..40525.268 rows=1 loops=1)\n> Sort Key: people.key1, people.key2\n> Sort Method: external merge Disk: 466528kB\n> -> Seq Scan on people (cost=0.00..323429.05 rows=7539905\n> width=240) (actual time=0.029..5193.057 rows=7539469 loops=1)\n> -> Sort (cost=17308.28..17318.76 rows=4193 width=16) (actual\n> time=0.167..0.167 rows=0 loops=1)\n> Sort Key: subscription.people_key1, subscription.people_key2\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=0.00..17055.99 rows=4193 width=16) (actual\n> time=0.154..0.154 rows=0 loops=1)\n> -> Seq Scan on data_sequences (cost=0.00..150.15 rows=39\n> width=16) (actual time=0.154..0.154 rows=0 loops=1)\n> Filter: (import_id = 1351674661)\n> -> Index Scan using xpksubscription on subscription\n> (cost=0.00..431.86 rows=108 width=16) (never executed)\n> Index Cond: ((subscription.key1 = data_sequences.key1)\n> AND (subscription.key2 = data_sequences.key2))\n> Total runtime: 40600.815 ms\n> --------------------\n>\n> All the key, key2, and relative foreign keys are int4. Import_id is a\n> bigint.\n> I'm not reporting the full create table script 'cause people and\n> subscription both have lots of fields. 
I know this can be wrong (lots of\n> field on big table), but this is an environment born something like 20\n> years ago and not intended from the start for such a big data volume.\n>\n> I have the following indexes:\n>\n> on People:\n> CREATE UNIQUE INDEX people_pkey ON people USING btree (key1, key2)\n> CREATE INDEX people_pkey_hash_loc ON people USING hash (key1);\n> CREATE INDEX people_pkey_hash_id ON people USING hash (key2);\n>\n> on Subscription:\n> CREATE UNIQUE INDEX subscription_pkey ON subscription USING btree (key1,\n> key2)\n> CREATE INDEX subscription_fk_people ON subscription USING btree\n> (people_key1, people_key2)\n>\n> on Data_sequences:\n> create index data_sequences_key on data_sequences USING btree (key1, key2);\n> create index data_sequences_id on data_sequences USING btree (import_id);\n>\n> What i don't understand is WHY the seq scan on people, and how can I cast\n> the import_id to make it use the index on data_sequences (another useless\n> seq scan).\n> Mind that when I run this explain analyze there were no records on\n> data_sequences. So all the time (40 seconds!) is for the useless seq scan\n> on people. Both people and subscription have lots of records (10-20.000.000\n> range).\n> I'm running 8.4 (haven't tested it on 9.2.1 yet, but we planned to upgrade\n> ASAP cause we have other queries which will benefit from the\n> index-only-scan new feature).\n>\n>\n> Thank you in advance,\n> --\n> Vincenzo.\n>\n\n\n\n-- \nVincenzo.\nImola Informatica\n\nAi sensi del D.Lgs. 196/2003 si precisa che le informazioni contenute in\nquesto messaggio sono riservate ed a uso esclusivo del destinatario.\nPursuant to Legislative Decree No. 196/2003, you are hereby informed that\nthis message contains confidential information intended only for the use of\nthe addressee.\n\nI may (or may not) have found the solution: a reindex on the 3 tables fixed the query plan. 
Now I can plan to reindex only the involved indexes at the start of the data import procedure.On Wed, Oct 31, 2012 at 11:55 AM, Vincenzo Melandri <[email protected]> wrote:\nHi all :)I'm here again.This time I'll provide more details (explain analyze, data-type, and indexes), hope it will be enough :)\nThe query that is performing a plan that i do not understand is the following:\n--------------------select [some fields from all 3 tables]from  DATA_SEQUENCES  join SUBSCRIPTION on \n SUBSCRIPTION.key1 = DATA_SEQUENCES.key1 AND  SUBSCRIPTION.key2 = DATA_SEQUENCES.key2 join people on \n people.key1 = SUBSCRIPTION.people_key1 AND  people.key2 = SUBSCRIPTION.people_key2 WHERE  DATA_SEQUENCES.import_id = 1351674661\n--------------------This is the explain analyze:--------------------Merge Join  (cost=2902927.01..2973307.79 rows=790371 width=240) (actual time=40525.439..40525.439 rows=0 loops=1)\n  Merge Cond: ((people.key1 = subscription.people_key1) AND (people.key2 = subscription.people_key2))  ->  Sort  (cost=2885618.73..2904468.49 rows=7539905 width=240) (actual time=40525.268..40525.268 rows=1 loops=1)\n        Sort Key: people.key1, people.key2        Sort Method:  external merge  Disk: 466528kB        ->  Seq Scan on people  (cost=0.00..323429.05 rows=7539905 width=240) (actual time=0.029..5193.057 rows=7539469 loops=1)\n  ->  Sort  (cost=17308.28..17318.76 rows=4193 width=16) (actual time=0.167..0.167 rows=0 loops=1)        Sort Key: subscription.people_key1, subscription.people_key2        Sort Method:  quicksort  Memory: 25kB\n        ->  Nested Loop  (cost=0.00..17055.99 rows=4193 width=16) (actual time=0.154..0.154 rows=0 loops=1)              ->  Seq Scan on data_sequences  (cost=0.00..150.15 rows=39 width=16) (actual time=0.154..0.154 rows=0 loops=1)\n                    Filter: (import_id = 1351674661)              ->  Index Scan using xpksubscription on subscription  (cost=0.00..431.86 rows=108 width=16) (never executed)                    Index Cond: ((subscription.key1 = data_sequences.key1) AND (subscription.key2 = data_sequences.key2))\nTotal runtime: 40600.815 ms--------------------All the key, key2, and relative foreign keys are int4. Import_id is a bigint.I'm not reporting the full create table script 'cause people and subscription both have lots of fields. I know this can be wrong (lots of field on big table), but this is an environment born something like 20 years ago and not intended from the start for such a big data volume.\nI have the following indexes:on People:CREATE UNIQUE INDEX people_pkey ON people USING btree (key1, key2)CREATE INDEX people_pkey_hash_loc ON people USING hash (key1);\nCREATE INDEX people_pkey_hash_id ON people USING hash (key2);on Subscription:CREATE UNIQUE INDEX subscription_pkey ON subscription USING btree (key1, key2)CREATE INDEX subscription_fk_people ON subscription USING btree (people_key1, people_key2)\non Data_sequences:create index data_sequences_key on data_sequences USING btree (key1, key2);create index data_sequences_id on data_sequences USING btree (import_id);\nWhat i don't understand is WHY the seq scan on people, and how can I cast the import_id to make it use the index on data_sequences (another useless seq scan).Mind that when I run this explain analyze there were no records on data_sequences. So all the time (40 seconds!) is for the useless seq scan on people. 
Both people and subscription have lots of records (10-20.000.000 range).\nI'm running 8.4 (haven't tested it on 9.2.1 yet, but we planned to upgrade ASAP cause we have other queries which will benefit from the index-only-scan new feature).\n\nThank you in advance,-- Vincenzo.\n\n-- Vincenzo.Imola InformaticaAi sensi del D.Lgs. 196/2003 si precisa che le informazioni contenute in questo messaggio sono riservate ed a uso esclusivo del destinatario.\n\nPursuant to Legislative Decree No. 196/2003, you are hereby informed that this message contains confidential information intended only for the use of the addressee.", "msg_date": "Wed, 31 Oct 2012 12:46:59 +0100", "msg_from": "Vincenzo Melandri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seq scan on big table, episode 2" }, { "msg_contents": "On 10/31/2012 05:55 AM, Vincenzo Melandri wrote:\n\n> on People:\n> CREATE UNIQUE INDEX people_pkey ON people USING btree (key1, key2)\n> CREATE INDEX people_pkey_hash_loc ON people USING hash (key1);\n> CREATE INDEX people_pkey_hash_id ON people USING hash (key2);\n\nI can't say why it would ignore the first index in this particular JOIN, \nbut you might as well discard both of those hash indexes. Also, \npeople_pkey_hash_loc is basically pointless anyway, as the database can \nuse the first column in a multi-column index as if it were a single \ncolumn index.\n\nI *can* ask you why you're using HASH indexes, though. They're not WAL \nlogged, so they can't be replicated, and they're also not crash safe.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n", "msg_date": "Wed, 31 Oct 2012 08:04:55 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seq scan on big table, episode 2" } ]
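A sketch of the maintenance step described above, using the index names from this thread; refreshing planner statistics alongside the reindex is an assumption on my part, but it is usually what brings the row estimates (rows=39 on an empty data_sequences, for example) back in line.

    -- Run at the start of the import procedure, before the big join query.
    REINDEX INDEX data_sequences_key;
    REINDEX INDEX data_sequences_id;
    REINDEX INDEX subscription_fk_people;
    ANALYZE data_sequences;
    ANALYZE subscription;
    ANALYZE people;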
[ { "msg_contents": "Hi there,\n\nI work for VMware with our Postgres performance team. We recently came across a dbt2 performance regression from 9.1.6 to 9.2.1. We have done some profiling and don't see anything obvious. Would like to get some suggestions from the community where to investigate further.\n\nThe average notpm is 61384.24 with 9.1.6 and 57381.43 with 9.2.1.\nPlotting notps over time shows that the slowdown of 9.2.1 is evident across the entire run period.\nSince we also observed sustained 80+% CPU utilization during both runs, we suspected this is a CPU bottleneck issue.\nSo we run oprofile hoping that the profiles may suggest one is using CPU less productively than the other; but nothing jumped out to that explanation.\nThe profiling results are posted on http://pgsql.privatepaste.com/3fa3ae0627 (9.1.6 run) and http://pgsql.privatepaste.com/930bb51374 (9.2.1 run).\n\n\nSpecs:\n\nHP ML360 G6:\n2x Xeon E5520 (4-core/processor, Hyperthreading disabled)\n12GB DRAM\nHP P410i RAID controller (256MB battery-backed cache)\n- One 10k-rpm SAS: to mount /\n- One SSD: to mount pgdata and wal\n\nSUSE Linux Enterprise Server 11 SP1 64-bit (kernel version: 2.6.32.59-0.7-default)\n\npostgresql.conf:\nmax_connections = 100\t\t\t\nshared_buffers = 5600MB \ntemp_buffers = 8193kB \nwork_mem = 4096kB \nmaintenance_work_mem = 400MB \nwal_buffers = -1 \ncheckpoint_segments = 300\nlogging_collector = on\t\t \ndatestyle = 'iso, mdy'\nlc_messages = 'C'\t\t\t\nlc_monetary = 'C'\t\t\t\nlc_numeric = 'C'\t\t\t\nlc_time = 'C'\t\t\t\t\ndefault_text_search_config = 'pg_catalog.english'\nlog_destination = 'csvlog'\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%a'\nlog_rotation_age = 1440\nlog_truncate_on_rotation = on\n\ndbt2-0.40 was used:\n40 warehouse\n20 db connections \nuse no thinktime\nuse prepared statement\nbuffer warmup before measurement run (warmup is done through disabling sequential scan and count(*) all tables and indexes).\nmeasurement run lasts 20 minutes\n\nWe used postgresql91-9.1.6 and postgresql92-9.2.1 openSUSE builds:\nhttp://download.opensuse.org/repositories/server:/database:/postgresql/openSUSE_12.1/x86_64/\n\n\nThanks,\nDong\n\n\n", "msg_date": "Wed, 31 Oct 2012 15:48:44 -0700", "msg_from": "Dong Ye <[email protected]>", "msg_from_op": true, "msg_subject": "dbt2 performance regresses from 9.1.6 to 9.2.1" } ]
[ { "msg_contents": "Hi there,\n\nI work for VMware with our Postgres performance team. We recently came across a dbt2 performance regression from 9.1.6 to 9.2.1. We have done some profiling and don't see anything obvious. Would like to get some suggestions from the community where to investigate further.\n\nThe average notpm is 61384.24 with 9.1.6 and 57381.43 with 9.2.1.\nPlotting notps over time shows that the slowdown of 9.2.1 is evident across the entire run period.\nSince we also observed sustained 80+% CPU utilization during both runs, we suspected this is a CPU bottleneck issue.\nSo we run oprofile hoping that the profiles may suggest one is using CPU less productively than the other; but nothing jumped out to that explanation.\nThe profiling results are posted on http://pgsql.privatepaste.com/3fa3ae0627 (9.1.6 run) and http://pgsql.privatepaste.com/930bb51374 (9.2.1 run).\n\n\nSpecs:\n\nHP ML360 G6:\n2x Xeon E5520 (4-core/processor, Hyperthreading disabled)\n12GB DRAM\nHP P410i RAID controller (256MB battery-backed cache)\n- One 10k-rpm SAS: to mount /\n- One SSD: to mount pgdata and wal\n\nSUSE Linux Enterprise Server 11 SP1 64-bit (kernel version: 2.6.32.59-0.7-default)\n\npostgresql.conf:\nmax_connections = 100\t\t\t\nshared_buffers = 5600MB \ntemp_buffers = 8193kB \nwork_mem = 4096kB \nmaintenance_work_mem = 400MB \nwal_buffers = -1 \ncheckpoint_segments = 300\nlogging_collector = on\t\t \ndatestyle = 'iso, mdy'\nlc_messages = 'C'\t\t\t\nlc_monetary = 'C'\t\t\t\nlc_numeric = 'C'\t\t\t\nlc_time = 'C'\t\t\t\t\ndefault_text_search_config = 'pg_catalog.english'\nlog_destination = 'csvlog'\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%a'\nlog_rotation_age = 1440\nlog_truncate_on_rotation = on\n\ndbt2-0.40 was used:\n40 warehouse\n20 db connections \nuse no thinktime\nuse prepared statement\nbuffer warmup before measurement run (warmup is done through disabling sequential scan and count(*) all tables and indexes).\nmeasurement run lasts 20 minutes\n\nWe used postgresql91-9.1.6 and postgresql92-9.2.1 openSUSE builds:\nhttp://download.opensuse.org/repositories/server:/database:/postgresql/openSUSE_12.1/x86_64/\n\n\nThanks,\nDong\n\n\n", "msg_date": "Wed, 31 Oct 2012 15:51:39 -0700", "msg_from": "Dong Ye <[email protected]>", "msg_from_op": true, "msg_subject": "dbt2 performance regresses from 9.1.6 to 9.2.1" }, { "msg_contents": "On Thu, Nov 1, 2012 at 12:51 AM, Dong Ye <[email protected]> wrote:\n> The average notpm is 61384.24 with 9.1.6 and 57381.43 with 9.2.1.\n> Plotting notps over time shows that the slowdown of 9.2.1 is evident across the entire run period.\n> Since we also observed sustained 80+% CPU utilization during both runs, we suspected this is a CPU bottleneck issue.\n> So we run oprofile hoping that the profiles may suggest one is using CPU less productively than the other; but nothing jumped out to that explanation.\n> The profiling results are posted on http://pgsql.privatepaste.com/3fa3ae0627 (9.1.6 run) and http://pgsql.privatepaste.com/930bb51374 (9.2.1 run).\n\nYou are using prepared statements, this makes me think that this\nregression might be due to support for parameter specific plans for\nprepared statements. [1] Can you run the test on both versions without\nprepared statements and see if the regressions remains.\n\nI compared the profile results, I'll reproduce the results here incase\nthey ring any other bells for someone. 
Here are top 20 functions that\ntake more time under 9.2:\n\n Function Diff v9.2% v9.1%\n postgres.copyObject 3.48 1.2436 0.3569\n postgres.check_stack_depth 1.92 0.7244 0.3774\n postgres.eval_const_expressions_mutator 1.87 0.3473 0.1853\n jbd./jbd 1.82 0.4127 0.2271\n libc-2.14.1.so._int_malloc 1.75 1.4938 0.8540\n libc-2.14.1.so.__strlen_sse42 1.72 0.7098 0.4124\nvmlinux-2.6.32.59-0.7-default.copy_user_generic_string 1.70 0.5130 0.3017\n postgres.MemoryContextCreate 1.68 0.3206 0.1914\n postgres.MemoryContextAllocZeroAligned 1.64 1.5443 0.9443\n libc-2.14.1.so._int_free 1.60 0.7182 0.4476\n postgres.expression_tree_walker 1.60 0.8350 0.5235\n postgres.XLogInsert 1.58 2.7251 1.7210\n ext3./ext3 1.55 0.2065 0.1335\n libc-2.14.1.so.__strcpy_ssse3 1.50 0.3061 0.2046\n postgres.expression_tree_mutator 1.41 0.3461 0.2447\n libc-2.14.1.so.__memcpy_ssse3_back 1.40 1.2379 0.8830\n postgres.AllocSetAlloc 1.39 4.6567 3.3467\n postgres.LockAcquireExtended 1.39 0.2799 0.2015\n postgres.MemoryContextAlloc 1.38 1.0151 0.7373\n postgres.AllocSetDelete 1.33 0.2130 0.1600\n\nAnd top 10 functions present under 9.2 but not present with 9.1:\n Function\n postgres._copyList.isra.15 0.341\n postgres._SPI_execute_plan.isra.4 0.224\n postgres.grouping_planner 0.220\n postgres.IndexOnlyNext 0.213\n postgres.GetCachedPlan 0.189\n postgres.MemoryContextStrdup 0.171\n postgres.list_copy 0.165\n postgres.index_getnext_tid 0.155\n postgres.MemoryContextSetParent 0.128\n postgres.cost_qual_eval_walker 0.127\n\nI have no idea why is XLogInsert taking so much longer on 9.2.\n\n[1] http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e6faf910\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n\n", "msg_date": "Fri, 2 Nov 2012 13:27:40 +0200", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 performance regresses from 9.1.6 to 9.2.1" }, { "msg_contents": "> You are using prepared statements, this makes me think that this\n> regression might be due to support for parameter specific plans for\n> prepared statements. [1] Can you run the test on both versions without\n> prepared statements and see if the regressions remains.\n\nWithout prepare statement, we got 48837.33 avg notpm with 9.1.6 and 43264.54 avg notpm with 9.2.1.\nnotps over time shows the slowdown of 9.2.1 is evident during the entire course of the run.\nTheir profiles are posted on http://pgsql.privatepaste.com/b770f72967 (9.1.6) and http://pgsql.privatepaste.com/6fa8b7f174 (9.2.1).\n\nThanks,\nDong\n\n", "msg_date": "Sun, 4 Nov 2012 14:23:17 -0800", "msg_from": "Dong Ye <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dbt2 performance regresses from 9.1.6 to 9.2.1" }, { "msg_contents": "On Sun, Nov 4, 2012 at 7:23 PM, Dong Ye <[email protected]> wrote:\n>> You are using prepared statements, this makes me think that this\n>> regression might be due to support for parameter specific plans for\n>> prepared statements. [1] Can you run the test on both versions without\n>> prepared statements and see if the regressions remains.\n>\n> Without prepare statement, we got 48837.33 avg notpm with 9.1.6 and 43264.54 avg notpm with 9.2.1.\n> notps over time shows the slowdown of 9.2.1 is evident during the entire course of the run.\n> Their profiles are posted on http://pgsql.privatepaste.com/b770f72967 (9.1.6) and http://pgsql.privatepaste.com/6fa8b7f174 (9.2.1).\n\nYou know... 
it does look as if 9.2.1 is generating a lot more pressure\ninto the memory allocator (AllocSetAlloc notably higher).\n\n", "msg_date": "Mon, 5 Nov 2012 11:32:37 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 performance regresses from 9.1.6 to 9.2.1" }, { "msg_contents": "On 05.11.2012 16:32, Claudio Freire wrote:\n> On Sun, Nov 4, 2012 at 7:23 PM, Dong Ye<[email protected]> wrote:\n>>> You are using prepared statements, this makes me think that this\n>>> regression might be due to support for parameter specific plans for\n>>> prepared statements. [1] Can you run the test on both versions without\n>>> prepared statements and see if the regressions remains.\n>>\n>> Without prepare statement, we got 48837.33 avg notpm with 9.1.6 and 43264.54 avg notpm with 9.2.1.\n>> notps over time shows the slowdown of 9.2.1 is evident during the entire course of the run.\n>> Their profiles are posted on http://pgsql.privatepaste.com/b770f72967 (9.1.6) and http://pgsql.privatepaste.com/6fa8b7f174 (9.2.1).\n>\n> You know... it does look as if 9.2.1 is generating a lot more pressure\n> into the memory allocator (AllocSetAlloc notably higher).\n\nDid you check the access plans of the queries? 9.2 planner might choose \na slightly worse plan. Or perhaps index-only scans are hurting \nperformance with the DBT-2 queries.\n\n- Heikki\n\n", "msg_date": "Wed, 07 Nov 2012 13:43:09 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dbt2 performance regresses from 9.1.6 to 9.2.1" } ]
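For anyone reproducing this outside the dbt2 kit, one hedged way to see whether 9.2's parameter-specific plans for prepared statements are involved is to prepare a hot statement by hand and EXPLAIN it repeatedly; the statement and parameter below are placeholders, not taken from the dbt2 sources.

    -- Hypothetical hot statement; substitute one of the dbt2 transactions.
    PREPARE get_stock (int) AS
        SELECT s_quantity FROM stock WHERE s_i_id = $1;

    -- On 9.2 the first few executions are planned with the actual parameter value;
    -- a generic plan may be adopted later, so repeat this several times and
    -- compare the resulting plans and their estimated costs.
    EXPLAIN EXECUTE get_stock(1);
    EXPLAIN EXECUTE get_stock(1);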
[ { "msg_contents": "Hello,\n I have this table definition:\nCREATE TABLE ism_floatvalues\n(\n id_signal bigint NOT NULL, -- Indica la señal a la que pertenece este \nvalor. Clave foránea que referencia al campo id_signal de la tabla \nism_signal.\n time_stamp timestamp without time zone NOT NULL, -- Marca de tiempo \nque indica fecha y hora correpondiente a este dato. Junto con id_signal \nforma la clave primaria de esta tabla\n var_value double precision, -- Almacena el valor concreto de la señal \nen la marca de tiempo espeficicada.\n CONSTRAINT ism_floatvalues_id_signal_fkey FOREIGN KEY (id_signal)\n REFERENCES ism_signal (id_signal) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX ism_floatvalues_index_idsignal_timestamp\n ON ism_floatvalues\n USING btree\n (id_signal, time_stamp DESC);\n\n\n\n*********************************************\n\nThen I run this query....\n*********************************************\nEXPLAIN analyze\nselect round(CAST(sum(var_value) AS numeric),2) as var_value, \ndate_trunc('month', time_stamp) as time_stamp , \ndate_part('month',date_trunc('month', time_stamp)) as month, \ndate_part('year',date_trunc('year', time_stamp)) as year from \nism_floatvalues where id_signal in\n(\nselect id_signal from ism_signal where reference = 'EDCA' and id_source in\n(\nselect id_source from ism_installation where id_installation in\n(select id_installation from ism_groupxinstallation where id_group = 101)\n)\n)\nand time_stamp > date_trunc('month', current_date - interval '11 months')\ngroup by date_trunc('month', time_stamp), month, year\norder by time_stamp\n\n******************************\nAnd this is the result:\n******************************\n\n\"GroupAggregate (cost=4766541.62..4884678.62 rows=39483 width=16) \n(actual time=1302542.073..1302713.154 rows=10 loops=1)\"\n\" -> Sort (cost=4766541.62..4789932.12 rows=9356201 width=16) (actual \ntime=1302444.324..1302531.447 rows=9741 loops=1)\"\n\" Sort Key: (date_trunc('month'::text, \nism_floatvalues.time_stamp)), (date_part('month'::text, \ndate_trunc('month'::text, ism_floatvalues.time_stamp))), \n(date_part('year'::text, date_trunc('year'::text, \nism_floatvalues.time_stamp)))\"\n\" Sort Method: quicksort Memory: 941kB\"\n\" -> Hash Join (cost=545.65..3203518.39 rows=9356201 width=16) \n(actual time=458941.090..1302245.307 rows=9741 loops=1)\"\n\" Hash Cond: (ism_floatvalues.id_signal = \nism_signal.id_signal)\"\n\" -> Seq Scan on ism_floatvalues (cost=0.00..2965077.57 \nrows=28817098 width=24) (actual time=453907.600..1002381.652 \nrows=29114105 loops=1)\"\n\" Filter: (time_stamp > date_trunc('month'::text, \n(('now'::text)::date - '11 mons'::interval)))\"\n\" -> Hash (cost=544.19..544.19 rows=117 width=8) (actual \ntime=733.782..733.782 rows=40 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n\" -> HashAggregate (cost=543.02..544.19 rows=117 \nwidth=8) (actual time=733.072..733.412 rows=40 loops=1)\"\n\" -> Hash Semi Join (cost=27.61..542.73 \nrows=117 width=8) (actual time=638.175..687.934 rows=40 loops=1)\"\n\" Hash Cond: (ism_signal.id_source = \nism_installation.id_source)\"\n\" -> Bitmap Heap Scan on ism_signal \n(cost=18.84..530.42 rows=850 width=16) (actual time=243.690..284.303 \nrows=850 loops=1)\"\n\" Recheck Cond: ((reference)::text \n= 'EDCA'::text)\"\n\" -> Bitmap Index Scan on \nism_signal_idx_reference (cost=0.00..18.63 rows=850 width=0) (actual \ntime=243.429..243.429 rows=865 loops=1)\"\n\" Index Cond: \n((reference)::text = 
'EDCA'::text)\"\n\" -> Hash (cost=8.27..8.27 rows=40 \nwidth=8) (actual time=394.393..394.393 rows=40 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory \nUsage: 2kB\"\n\" -> Hash Semi Join \n(cost=3.25..8.27 rows=40 width=8) (actual time=391.966..394.000 rows=40 \nloops=1)\"\n\" Hash Cond: \n(ism_installation.id_installation = ism_groupxinstallation.id_installation)\"\n\" -> Seq Scan on \nism_installation (cost=0.00..4.17 rows=117 width=16) (actual \ntime=0.086..1.354 rows=117 loops=1)\"\n\" -> Hash (cost=2.75..2.75 \nrows=40 width=8) (actual time=390.274..390.274 rows=40 loops=1)\"\n\" Buckets: 1024 \nBatches: 1 Memory Usage: 2kB\"\n\" -> Seq Scan on \nism_groupxinstallation (cost=0.00..2.75 rows=40 width=8) (actual \ntime=389.536..389.903 rows=40 loops=1)\"\n\" Filter: \n(id_group = 101)\"\n\"Total runtime: 1302731.013 ms\"\n\n\nThis query is very slow as you can see, it took about 20 minutos to \ncomplete.... Can someone help me to improve performance on this query??\nRegards.\n-- \nDocumento sin título\n\n**Pedro Jiménez Pérez\n**[email protected]\n\n****\n\t\n\n**Innovación en Sistemas de Monitorización, S.L.**\nEdificio Hevimar\nC/ Iván Pavlov 2 y 4 - Parcela 4 2ª Planta Local 9\nParque Tecnológico de Andalucía\n29590 Campanillas (Málaga)\nTlfno. 952 02 07 13\[email protected]\n\nfirma_gpt.jpg, 1 kB\n\n\t\n\nAntes de imprimir, piensa en tu responsabilidad y compromiso con el \nMEDIO AMBIENTE!\n\nBefore printing, think about your responsibility and commitment with the \nENVIRONMENT!\n\nCLÁUSULA DE CONFIDENCIALIDAD.- Este mensaje, y en su caso, cualquier \nfichero anexo al mismo, puede contener información confidencial o \nlegalmente protegida (LOPD 15/1999 de 13 de Diciembre), siendo para uso \nexclusivo del destinatario. No hay renuncia a la confidencialidad o \nsecreto profesional por cualquier transmisión defectuosa o errónea, y \nqueda expresamente prohibida su divulgación, copia o distribución a \nterceros sin la autorización expresa del remitente. Si ha recibido este \nmensaje por error, se ruega lo notifique al remitente enviando un \nmensaje al correo electrónico [email protected] y proceda \ninmediatamente al borrado del mensaje original y de todas sus copias. \nGracias por su colaboración.", "msg_date": "Fri, 02 Nov 2012 13:13:52 +0100", "msg_from": "=?ISO-8859-1?Q?Pedro_Jim=E9nez_P=E9rez?= <[email protected]>", "msg_from_op": true, "msg_subject": "help with too slow query" }, { "msg_contents": "2012/11/2 Pedro Jiménez Pérez <[email protected]>\n\n> I have this table definition:\n>\n\n1) Could you kindly include also information bout ism_signal and\nism_installation\ntables?\n2) Please, follow this guide to provide more input:\nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\n-- \nVictor Y. Yegorov\n\n2012/11/2 Pedro Jiménez Pérez <[email protected]>\n\n   I have this table definition:1) Could you kindly include also information bout ism_signal and ism_installation tables?\n2) Please, follow this guide to provide more input: http://wiki.postgresql.org/wiki/Slow_Query_Questions\n-- Victor Y. 
Yegorov", "msg_date": "Mon, 5 Nov 2012 11:54:51 +0200", "msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" }, { "msg_contents": "Pedro Jiménez Pérez wrote:\n> Sent: Friday, November 02, 2012 1:14 PM\n> To: [email protected]\n> Subject: [PERFORM] help with too slow query\n> \n> Hello,\n> I have this table definition:\n> CREATE TABLE ism_floatvalues\n> (\n> id_signal bigint NOT NULL, -- Indica la señal a la que pertenece este valor. Clave foránea que\n> referencia al campo id_signal de la tabla ism_signal.\n> time_stamp timestamp without time zone NOT NULL, -- Marca de tiempo que indica fecha y hora\n> correpondiente a este dato. Junto con id_signal forma la clave primaria de esta tabla\n> var_value double precision, -- Almacena el valor concreto de la señal en la marca de tiempo\n> espeficicada.\n> CONSTRAINT ism_floatvalues_id_signal_fkey FOREIGN KEY (id_signal)\n> REFERENCES ism_signal (id_signal) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE CASCADE\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> \n> CREATE INDEX ism_floatvalues_index_idsignal_timestamp\n> ON ism_floatvalues\n> USING btree\n> (id_signal, time_stamp DESC);\n> \n> \n> \n> *********************************************\n> \n> Then I run this query....\n> *********************************************\n> EXPLAIN analyze\n> select round(CAST(sum(var_value) AS numeric),2) as var_value, date_trunc('month', time_stamp) as\n> time_stamp , date_part('month',date_trunc('month', time_stamp)) as month,\n> date_part('year',date_trunc('year', time_stamp)) as year from ism_floatvalues where id_signal in\n> (\n> select id_signal from ism_signal where reference = 'EDCA' and id_source in\n> (\n> select id_source from ism_installation where id_installation in\n> (select id_installation from ism_groupxinstallation where id_group = 101)\n> )\n> )\n> and time_stamp > date_trunc('month', current_date - interval '11 months')\n> group by date_trunc('month', time_stamp), month, year\n> order by time_stamp\n> \n> ******************************\n> And this is the result:\n> ******************************\n> \n> \"GroupAggregate (cost=4766541.62..4884678.62 rows=39483 width=16) (actual time=1302542.073..1302713.154 rows=10 loops=1)\"\n[...]\n> \" -> Hash Join (cost=545.65..3203518.39 rows=9356201 width=16) (actual time=458941.090..1302245.307 rows=9741 loops=1)\"\n> \" Hash Cond: (ism_floatvalues.id_signal = ism_signal.id_signal)\"\n> \" -> Seq Scan on ism_floatvalues (cost=0.00..2965077.57 rows=28817098 width=24) (actual time=453907.600..1002381.652 rows=29114105 loops=1)\"\n> \" Filter: (time_stamp > date_trunc('month'::text, (('now'::text)::date - '11 mons'::interval)))\"\n[...]\n\n> This query is very slow as you can see, it took about 20 minutos to complete.... Can someone help me\n> to improve performance on this query??\n> Regards.\n\nThis sequential scan takes the lion share of the time.\n\nAre the 29 million rows selected in that scan a significant percentage\nof the total rows? 
If yes, then the sequential scan is the\nmost efficient way to get the result, and the only remedy is to get\nfaster I/O or to cache more of the table in RAM.\n\nIf the query needs to access a lot of rows to complete, it must\nbe slow.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Mon, 5 Nov 2012 11:52:38 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" }, { "msg_contents": "Ok, here we go:\n\nI'm using postgresql version 8.0\n\nHere is my query that is too slow: http://explain.depesz.com/s/GbQ\n\n***************************************************\nEXPLAIN analyze\nselect round(CAST(sum(var_value) AS numeric),2) as var_value, \ndate_trunc('month', time_stamp) as time_stamp , \ndate_part('month',date_trunc('month', time_stamp)) as month, \ndate_part('year',date_trunc('year', time_stamp)) as year from \nism_floatvalues where id_signal in\n(\nselect id_signal from ism_signal where reference = 'EDCA' and id_source in\n(\nselect id_source from ism_installation where id_installation in\n(select id_installation from ism_groupxinstallation where id_group = 101)\n)\n)\nand time_stamp > date_trunc('month', current_date - interval '11 months')\ngroup by date_trunc('month', time_stamp), month, year\norder by time_stamp\n\n***************************************************\nHere are the tables:\n\nTable ism_floatvalues:\nTable ism_floatvalues has about 100 million records.\nThis table is updated everyday. Everyday we delete the data stored \nregarding yesterday, usually from 8 am to 13 pm moreless, and then we \ninsert the data for yesterday (complete data) and the data we have \navailable for today (usually from 8 am to 13pm) Then, tomorrow, we start \nover again.... so about 30% of records are deleted at least one time.\n***************************************************\nCREATE TABLE ism_floatvalues\n(\n id_signal bigint NOT NULL, -- Indica la señal a la que pertenece este \nvalor. Clave foránea que referencia al campo id_signal de la tabla \nism_signal.\n time_stamp timestamp without time zone NOT NULL, -- Marca de tiempo \nque indica fecha y hora correpondiente a este dato. Junto con id_signal \nforma la clave primaria de esta tabla\n var_value double precision, -- Almacena el valor concreto de la señal \nen la marca de tiempo espeficicada.\n CONSTRAINT ism_floatvalues_id_signal_fkey FOREIGN KEY (id_signal)\n REFERENCES ism_signal (id_signal) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE\n)\nWITH (\n OIDS=FALSE\n);\nCREATE INDEX ism_floatvalues_index_idsignal_timestamp\n ON ism_floatvalues\n USING btree\n (id_signal, time_stamp DESC);\n***************************************************\n\nTable ism_signal:\nthis table has about 24K records\n****************************************************\nCREATE TABLE ism_signal\n(\n id_signal bigserial NOT NULL, -- Código numérico autoincremental. \nClave primaria de la tabla.\n id_source bigint NOT NULL,\n reference character varying NOT NULL, -- Cadena de caracteres con la \nque se identifica de forma única cada señal ( válida para toda la \nplataforma de ISM ).\n \"name\" character varying NOT NULL, -- Cadena de caracteres con la que \nse muestra este señal al usuario.\n signalclass character varying NOT NULL, -- Indica la clase de la \nseñal. 
Sólo admite valores measure, global, hourly, daily, monthly, \nyearly, alarm, event, constant y attribute.\n signaltype character varying NOT NULL, -- Indica el tipo de dato de \nla señal.\n opcitem character varying, -- Cadena de caracteres que indica el item \nOPC de donde debe leerse esta señal. Como un mismo servidor OPC puede \nmanejar varios sistemas, este item OPC se personaliza según el el \nespacio de nombres para una configuración concreta del servidor OPC (si \nhay 3 inversores gestionados por el mismo servidor OPC, el canal que \ngenéricamente se donomina PCC debe ser personalizado a 1.PCC, 2.PCC o 3.PCC.\n formula character varying, -- Cadena de caracteres que indica la \nfórmula de ajuste para esta señal (como como argumento el valor leido \ndel servidor OPC y le aplica esta fórmula).\n id_opcserverconf bigint, -- Referencia a un servidor OPC configurado \nde una forma determinada. Clave foránea al campo id_opcServerConf de la \ntabla ism_opcServerConf\n decimals smallint, -- Número de cifras decimales que se muestran al \nusuario\n unit character varying, -- Cadena de caracteres que indica las \nunidades de medida.\n description text, -- Breve descripción y comentarios adicionales.\n max_value double precision, -- Límite superior para representar \ngráficamente la magnitud.\n min_value double precision, -- Límite inferior para representar \ngráficamente la magnitud.\n critical_day date, -- Indica el día a partir del cual debemos empezar \na pedir datos de esta señal.\n erased boolean DEFAULT false, -- Indica si este canal debe o no \nmostrarse en la web. Si vale \"false\" el dato debe mostrarse.\n writetodb boolean DEFAULT true, -- Indica si este canal debe o no \nalmacenarse en la base de datos. Si vale \"true\" el dato debe \nalmacenarse\n dbupdaterate integer DEFAULT 0, -- Indica el intervalo de tiempo en \nel que debe recuperarse y almacenarse esta señal desde el servidor OPC a \nla base de datos.\n ordering integer, -- Indica el orden en el que deben mostrarse las \nseñales al usuarioIndica el intervalo de tiempo en el que debe \nrecuperarse y almacenarse esta señal desde el servidor OPC a la base de \ndatos. Si están a NULL los canales se ordenan por orden alfabético.\n sync_level smallint NOT NULL DEFAULT 0, -- ndica el nivel de \nsincronización. Si vale 0 indica que el dato se obtiene \ndirectamente un servidor OPC asociado a un dispositivo. Si vale N (N>0) \nsignifica que la señal se obtiene mediante cálculos sobre alguna señal \nde nivel N-1 y se lee de un servidor OPC asociado a la base de datos. 
\nLas señales de nivel N deben sincronizarse antes que las de nivel N+1.\n hastodaydata boolean NOT NULL DEFAULT true, -- Indica si la medida \ntiene datos en el dia actual o si por el contrario solo tiene datos de \ndias ya pasados\n interval_minutes smallint,\n updaterate integer DEFAULT 0,\n CONSTRAINT ism_signal_pkey PRIMARY KEY (id_signal),\n CONSTRAINT ism_signal_id_opcserverconf_fkey FOREIGN KEY \n(id_opcserverconf)\n REFERENCES ism_opcserverconf (id_opcserverconf) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_signal_id_source_fkey FOREIGN KEY (id_source)\n REFERENCES ism_source (id_source) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_signal_name_check CHECK (name::text <> ''::text),\n CONSTRAINT ism_signal_reference_check CHECK (reference::text ~ \n'^[_A-Za-z0-9]+$'::text),\n CONSTRAINT ism_signal_signalclass_check2 CHECK (signalclass::text = \n'measure'::text OR signalclass::text = 'global'::text OR \nsignalclass::text = 'hourly'::text OR signalclass::text = 'daily'::text \nOR signalclass::text = 'monthly'::text OR signalclass::text = \n'yearly'::text OR signalclass::text = 'alarm'::text OR signalclass::text \n= 'event'::text OR signalclass::text = 'constant'::text OR \nsignalclass::text = 'attribute'::text OR signalclass::text = \n'DAmeasure'::text OR signalclass::text = 'filter'::text),\n CONSTRAINT ism_signal_signaltype_check CHECK (signaltype::text = \n'float'::text OR signaltype::text = 'integer'::text OR signaltype::text \n= 'char'::text OR signaltype::text = 'string'::text OR signaltype::text \n= 'boolean'::text OR signaltype::text = 'memo'::text OR signaltype::text \n= 'void'::text)\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX ism_signal_idx_id_signal\n ON ism_signal\n USING btree\n (id_signal);\n\nCREATE INDEX ism_signal_idx_id_source\n ON ism_signal\n USING btree\n (id_source);\n\nCREATE INDEX ism_signal_idx_reference\n ON ism_signal\n USING btree\n (reference);\n\n******************************************\n\nTable ism_installation:\nThis table has about 200 records.\n******************************************\n\nCREATE TABLE ism_installation\n(\n id_installation bigserial NOT NULL, -- Código numérico \nautoincremental. Clave primaria de la tabla.\n id_source bigint NOT NULL, -- Código único para cualquier fuente de \nseñales (source). Clave foránea al campo id_source de la tabla ism_source\n \"name\" character varying NOT NULL, -- Nombre de la instalación. No \npuede ser cadena vacía ni nulo. No puede haber dos instalaciones con el \nmismo nombre.\n description text, -- Breve descripción y comentarios adicionales.\n latitude_degree smallint, -- Grados de latitud del emplazamineto de \nla instalación. Entero entre -180 y +180\n latitude_minute smallint, -- Minutos de latitud del emplazamineto de \nla instalación. Entero entre 0 y 59\n longitude_degree smallint, -- Grados de longitud del emplazamineto de \nla instalación. Entero entre -180 y +180\n longitude_minute smallint, -- Minutos de longitud del emplazamineto \nde la instalación. Entero entre 0 y 59\n city character varying, -- Nombre del término municipal donde está la \ninstalación\n province character varying, -- Nombre de la provincia donde está la \ninstalación.\n id_class bigint, -- Referncia a un elemento de ism_class que indica \nla tecnología de la instalación(fotovoltaica, eólica, térmica). 
Clave \nforánea al campo id_class de la tabla ism_class\n initial_date date,\n last_date date,\n last_hour time without time zone,\n ordering integer DEFAULT 0, -- Orden de las instalaciones de un grupo\n short_name character varying(20), -- Nombre corto de la instalcion \npara mostrarlo en un menu lateral de la web, que no debe exceder de 20 \ncaracteres\n id_owner bigint,\n id_distributor bigint,\n installation_type character(1),\n id_syncgroup integer,\n address character varying(256),\n ripre character varying(256),\n plantgroup character varying(256),\n power character varying(256),\n edecode character varying(256),\n instgroup character varying(256),\n id_zone bigint,\n active boolean DEFAULT true,\n latitude_second double precision,\n longitude_second double precision,\n id_node integer,\n firstdataday date,\n CONSTRAINT ism_installation_pkey PRIMARY KEY (id_installation),\n CONSTRAINT id_syncgroup_fk FOREIGN KEY (id_syncgroup)\n REFERENCES ism_syncgroup (id_syncgroup) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT id_zone_fk FOREIGN KEY (id_zone)\n REFERENCES ism_counterzone (id_zone) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_installation_distributor_fkey FOREIGN KEY (id_distributor)\n REFERENCES ism_distributor (id_distributor) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_installation_id_class_fkey FOREIGN KEY (id_class)\n REFERENCES ism_class (id_class) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_installation_id_source_fkey FOREIGN KEY (id_source)\n REFERENCES ism_source (id_source) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_installation_owner_fkey FOREIGN KEY (id_owner)\n REFERENCES ism_owner (id_owner) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT ism_installation_name_key UNIQUE (name),\n CONSTRAINT ism_installation_latitude_degree_check CHECK ((-180) <= \nlatitude_degree AND latitude_degree <= 180),\n CONSTRAINT ism_installation_latitude_minute_check CHECK (0 <= \nlatitude_minute AND latitude_minute < 60),\n CONSTRAINT ism_installation_latitude_second_check CHECK (0::double \nprecision <= latitude_second AND latitude_second < 60::double precision),\n CONSTRAINT ism_installation_longitude_degree_check CHECK ((-180) <= \nlongitude_degree AND longitude_degree <= 180),\n CONSTRAINT ism_installation_longitude_minute_check CHECK (0 <= \nlongitude_minute AND longitude_minute < 60),\n CONSTRAINT ism_installation_longitude_second_check CHECK (0::double \nprecision <= longitude_second AND longitude_second < 60::double precision),\n CONSTRAINT ism_installation_name_check CHECK (name::text <> ''::text)\n)\nWITH (\n OIDS=FALSE\n);\n\n\n****************************************************\n\nRegards.\n\n\nEl 05/11/2012 10:54, Виктор Егоров escribió:\n> 2012/11/2 Pedro Jiménez Pérez <[email protected] \n> <mailto:[email protected]>>\n>\n> I have this table definition:\n>\n>\n> 1) Could you kindly include also information bout ism_signal and \n> ism_installation tables?\n> 2) Please, follow this guide to provide more input: \n> http://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n>\n> -- \n> Victor Y. Yegorov\n\n-- \nDocumento sin título\n\n**Pedro Jiménez Pérez\n**[email protected]\n\n****\n\t\n\n**Innovación en Sistemas de Monitorización, S.L.**\nEdificio Hevimar\nC/ Iván Pavlov 2 y 4 - Parcela 4 2ª Planta Local 9\nParque Tecnológico de Andalucía\n29590 Campanillas (Málaga)\nTlfno. 
952 02 07 13\[email protected]\n\nfirma_gpt.jpg, 1 kB\n\n\t\n\nAntes de imprimir, piensa en tu responsabilidad y compromiso con el \nMEDIO AMBIENTE!\n\nBefore printing, think about your responsibility and commitment with the \nENVIRONMENT!\n\nCLÁUSULA DE CONFIDENCIALIDAD.- Este mensaje, y en su caso, cualquier \nfichero anexo al mismo, puede contener información confidencial o \nlegalmente protegida (LOPD 15/1999 de 13 de Diciembre), siendo para uso \nexclusivo del destinatario. No hay renuncia a la confidencialidad o \nsecreto profesional por cualquier transmisión defectuosa o errónea, y \nqueda expresamente prohibida su divulgación, copia o distribución a \nterceros sin la autorización expresa del remitente. Si ha recibido este \nmensaje por error, se ruega lo notifique al remitente enviando un \nmensaje al correo electrónico [email protected] y proceda \ninmediatamente al borrado del mensaje original y de todas sus copias. \nGracias por su colaboración.", "msg_date": "Tue, 06 Nov 2012 10:06:43 +0100", "msg_from": "=?UTF-8?B?UGVkcm8gSmltw6luZXogUMOpcmV6?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" }, { "msg_contents": "2012/11/6 Pedro Jiménez Pérez <[email protected]>\n> Ok, here we go:\n>\n> I'm using postgresql version 8.0\n>\n> Here is my query that is too slow: http://explain.depesz.com/s/GbQ\n\n\nWell, I would start with a note, that 8.0 is not supported anymore:\nhttp://www.postgresql.org/support/versioning/\nPlease, consider upgrading your instance.\n\nAlso, it is not handy to provide schema details here and anonymize the\nEXPLAIN output.\nHere's the visualization of your initial plan: http://explain.depesz.com/s/AOAN\n\nThe following join: (ism_floatvalues.id_signal = ism_signal.id_signal)\nis wrongly estimated by the planner (row 3 of the above explain visualization).\nIt looks like NestedLoop join with IndexScan over\nism_floatvalues_index_idsignal_timestamp\nmight do a better job.\n\nTry the following:\nALTER TABLE ism_floatvalues ALTER COLUMN id_signal SET STATISTICS\n1000; /* 1000 is maximum for 8.0 */\nANALYZE ism_floatvalues;\n\nLet me know if it helps.\n\n\n--\nVictor Y. Yegorov\n\n", "msg_date": "Tue, 6 Nov 2012 14:17:07 +0200", "msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" }, { "msg_contents": "@Victor,\n \nIs the reason of the wrong cardinality estimations of the join indeed due to wrong statistics? I thought that the full table scan was due to the index on the timefield couldn't be used with this predicate:\n \ntime_stamp > date_trunc('month', current_date - interval '11 months') \n \nIt seems to me that a deterministic FBI should be made of this, deviding the records into month chuncks. Sort of a patch in stead of using partitions. 
But I'm new to Postgresql, so correct me if i'm wrong,\n \nRegards,\nWillem Leenen\nOracle DBA\n \n\n> Date: Tue, 6 Nov 2012 14:17:07 +0200\n> Subject: Re: [PERFORM] help with too slow query\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> \n> 2012/11/6 Pedro Jiménez Pérez <[email protected]>\n> > Ok, here we go:\n> >\n> > I'm using postgresql version 8.0\n> >\n> > Here is my query that is too slow: http://explain.depesz.com/s/GbQ\n> \n> \n> Well, I would start with a note, that 8.0 is not supported anymore:\n> http://www.postgresql.org/support/versioning/\n> Please, consider upgrading your instance.\n> \n> Also, it is not handy to provide schema details here and anonymize the\n> EXPLAIN output.\n> Here's the visualization of your initial plan: http://explain.depesz.com/s/AOAN\n> \n> The following join: (ism_floatvalues.id_signal = ism_signal.id_signal)\n> is wrongly estimated by the planner (row 3 of the above explain visualization).\n> It looks like NestedLoop join with IndexScan over\n> ism_floatvalues_index_idsignal_timestamp\n> might do a better job.\n> \n> Try the following:\n> ALTER TABLE ism_floatvalues ALTER COLUMN id_signal SET STATISTICS\n> 1000; /* 1000 is maximum for 8.0 */\n> ANALYZE ism_floatvalues;\n> \n> Let me know if it helps.\n> \n> \n> --\n> Victor Y. Yegorov\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n\n\n\n@Victor,\n \nIs the reason of the wrong cardinality estimations of the join indeed due to wrong statistics? I thought that the full table scan was due to the index on the timefield couldn't be used with this predicate:\n \ntime_stamp > date_trunc('month', current_date - interval '11 months') \n \nIt seems to me that a deterministic FBI should be made of this, deviding the records into month chuncks. Sort of a patch in stead of using partitions. But I'm new to Postgresql, so correct me if i'm wrong,\n \nRegards,\nWillem Leenen\nOracle DBA \n\n> Date: Tue, 6 Nov 2012 14:17:07 +0200> Subject: Re: [PERFORM] help with too slow query> From: [email protected]> To: [email protected]> CC: [email protected]> > 2012/11/6 Pedro Jiménez Pérez <[email protected]>> > Ok, here we go:> >> > I'm using postgresql version 8.0> >> > Here is my query that is too slow: http://explain.depesz.com/s/GbQ> > > Well, I would start with a note, that 8.0 is not supported anymore:> http://www.postgresql.org/support/versioning/> Please, consider upgrading your instance.> > Also, it is not handy to provide schema details here and anonymize the> EXPLAIN output.> Here's the visualization of your initial plan: http://explain.depesz.com/s/AOAN> > The following join: (ism_floatvalues.id_signal = ism_signal.id_signal)> is wrongly estimated by the planner (row 3 of the above explain visualization).> It looks like NestedLoop join with IndexScan over> ism_floatvalues_index_idsignal_timestamp> might do a better job.> > Try the following:> ALTER TABLE ism_floatvalues ALTER COLUMN id_signal SET STATISTICS> 1000; /* 1000 is maximum for 8.0 */> ANALYZE ism_floatvalues;> > Let me know if it helps.> > > --> Victor Y. 
Yegorov> > > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 6 Nov 2012 14:20:13 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" }, { "msg_contents": "2012/11/6 Willem Leenen <[email protected]>:\n> @Victor,\n>\n> Is the reason of the wrong cardinality estimations of the join indeed due to\n> wrong statistics? I thought that the full table scan was due to the index on\n> the timefield couldn't be used with this predicate:\n>\n> time_stamp > date_trunc('month', current_date - interval '11 months')\n>\n> It seems to me that a deterministic FBI should be made of this, deviding the\n> records into month chuncks. Sort of a patch in stead of using partitions.\n> But I'm new to Postgresql, so correct me if i'm wrong,\n\nIn 8.0, default_statistics_target=10, which means 1e8 rows big table\nwill get only 10 ranges\nfor the histograms, a bit too low to get a proper guess on the data\ndistribution. I would also\nhave increased default_statistics_target instance-wide, up to 50 at least.\n\nPostgreSQL can use the index as it is and apply a filter afterwards\nfor each record emited by\nthe index scan. Very rough estimate shows, that there'll be round 4.2k\nrows for each id_signal\nin the ism_floatvalues tables. So index scan looks valid here with the\ngiven setup.\n\nWith increased statistics target for the column I hope optimizer will\ndo a more precise estimate on\nthe column selectivity and will prefer to do a NestedLoop join between\nism_signal and ism_floatvalues tables.\n\nI haven't considered the FBI though.\n\nI hope I'm not mistaken here, waiting for the OP to provide more input.\n\n\n-- \nVictor Y. Yegorov\n\n", "msg_date": "Tue, 6 Nov 2012 16:52:01 +0200", "msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" }, { "msg_contents": "@Victor,\n \nSpasibo for the information, seems valid to me. \n \nRegards,\nWillem Leenen\n\n \n\n> Date: Tue, 6 Nov 2012 16:52:01 +0200\n> Subject: Re: [PERFORM] help with too slow query\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]\n> \n> 2012/11/6 Willem Leenen <[email protected]>:\n> > @Victor,\n> >\n> > Is the reason of the wrong cardinality estimations of the join indeed due to\n> > wrong statistics? I thought that the full table scan was due to the index on\n> > the timefield couldn't be used with this predicate:\n> >\n> > time_stamp > date_trunc('month', current_date - interval '11 months')\n> >\n> > It seems to me that a deterministic FBI should be made of this, deviding the\n> > records into month chuncks. Sort of a patch in stead of using partitions.\n> > But I'm new to Postgresql, so correct me if i'm wrong,\n> \n> In 8.0, default_statistics_target=10, which means 1e8 rows big table\n> will get only 10 ranges\n> for the histograms, a bit too low to get a proper guess on the data\n> distribution. I would also\n> have increased default_statistics_target instance-wide, up to 50 at least.\n> \n> PostgreSQL can use the index as it is and apply a filter afterwards\n> for each record emited by\n> the index scan. Very rough estimate shows, that there'll be round 4.2k\n> rows for each id_signal\n> in the ism_floatvalues tables. 
So index scan looks valid here with the\n> given setup.\n> \n> With increased statistics target for the column I hope optimizer will\n> do a more precise estimate on\n> the column selectivity and will prefer to do a NestedLoop join between\n> ism_signal and ism_floatvalues tables.\n> \n> I haven't considered the FBI though.\n> \n> I hope I'm not mistaken here, waiting for the OP to provide more input.\n> \n> \n> -- \n> Victor Y. Yegorov\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 6 Nov 2012 15:12:27 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help with too slow query" } ]
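One concrete way to read Willem's "deterministic FBI" suggestion from the thread above, using the table and column names that appear in it (ism_floatvalues, id_signal, time_stamp). This is only a sketch: it assumes time_stamp is timestamp without time zone (date_trunc is not immutable over timestamptz, so the index below would be rejected for that type), the signal id 123 is made up, and the query has to filter on the indexed expression for the index to be considered.

-- Hypothetical expression index grouping rows into per-signal month chunks:
CREATE INDEX ism_floatvalues_signal_month_idx
    ON ism_floatvalues (id_signal, date_trunc('month', time_stamp));

-- The WHERE clause then has to use the same expression, e.g.:
SELECT count(*)
  FROM ism_floatvalues
 WHERE id_signal = 123
   AND date_trunc('month', time_stamp) >=
       date_trunc('month', current_date - interval '11 months');

Whether this beats the ism_floatvalues_index_idsignal_timestamp index Victor mentions is something only the original poster can measure; the statistics bump already posted above is the cheaper first step.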
[ { "msg_contents": "I'm running a server with lots of counts and calculations.\ncurrently its ubuntu server is freebsd faster?\n\nalso this is a i386 machine.\n\nor linux and bsd is about the same.\n\nthis is not to be an argument just looking. Current benchmarks to compare\n\nthanks\n\nI'm running a server with lots of counts and calculations.currently its ubuntu server is freebsd faster?also this is a i386 machine.or linux and bsd is about the same.\nthis is not to be an argument just looking. Current benchmarks to compare\nthanks", "msg_date": "Fri, 2 Nov 2012 10:39:02 -0400", "msg_from": "\"list, mailing\" <[email protected]>", "msg_from_op": true, "msg_subject": "freebsd or linux" } ]
[ { "msg_contents": "Hi list.\n\nI've been battling with a design issue here.\n\nI have postgres 9.0.x deployed in some databases, and was designing\nsome changes that involve querying in a very partition-like way, but\nnot quite.\n\nIn particular, I have a few tables (lets call them table1...tableN). N\nis pretty small here, but it might grow over time. It's not date-based\npartitioning or anything like that, it's more like kinds of rows.\nThink multiple-table inheritance.\n\nNow, I have a view, call it all_tables, that \"normalizes\" the schema\n(picks common rows, does some expression magic to translate one form\nof some data point into another, etc), and union alls them all.\n\nSELECT t1.id, t1.x, t1.y, t1.z FROM table1\nUNION ALL\nSELECT t2.id, t2.x, t2.y, 0::integer as z FROM table2\n... etc\n\nIds are unique among all tables, a-la partitioning, so I have set up\ncheck constraints on each table, and it works perfectly for one case\nwhere table1..n are equal structure.\n\nBut for another case where they differ (like the case I pointed to\nabove), the planner ignores constraint exclusion, because it seems to\nadd a \"subquery\" node before the append:\n\n\"Append (cost=0.00..16.93 rows=2 width=136)\"\n\" -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..8.61 rows=1 width=179)\"\n\" -> Index Scan using table1_pkey on table1 (cost=0.00..8.60\nrows=1 width=179)\"\n\" Index Cond: (id = (-3))\"\n\" -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..8.32 rows=1 width=93)\"\n\" -> Index Scan using table2_pkey on table2 (cost=0.00..8.31\nrows=1 width=93)\"\n\" Index Cond: (id = (-3))\"\n\nFunny thing is, if I set constraint_exclusion=on, it works as\nexpected. But not with constraint_exclusion=partition.\n\nIs there a workaround for this, other than micromanaging\nconstraint_exclusion from the application side? (I wouldn't want to\nset it to on globally)\n\n", "msg_date": "Fri, 2 Nov 2012 15:17:10 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Constraint exclusion in views" }, { "msg_contents": "\n> Funny thing is, if I set constraint_exclusion=on, it works as\n> expected. But not with constraint_exclusion=partition.\n\nThe difference between \"on\" and \"partition\" is how it treats UNION.\nThis seems to be working as designed.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n", "msg_date": "Sat, 03 Nov 2012 12:37:20 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion in views" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Funny thing is, if I set constraint_exclusion=on, it works as\n>> expected. But not with constraint_exclusion=partition.\n\n> The difference between \"on\" and \"partition\" is how it treats UNION.\n> This seems to be working as designed.\n\nWell, what \"partition\" actually means is \"only bother to try constraint\nexclusion proofs on appendrel members\". UNION ALL trees will get\nflattened into appendrels in some cases. In a quick look at the code,\nit seems like in recent releases the restrictions are basically that the\nUNION ALL arms have to (1) each be a plain SELECT from a single table\nwith no WHERE restriction; (2) all produce the same column datatypes;\nand (3) not have any volatile functions in the SELECT lists. 
I might be\nmissing something relevant to the OP's case, but it's hard to tell\nwithout a concrete example.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 03 Nov 2012 17:23:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Constraint exclusion in views" }, { "msg_contents": "On Sat, Nov 3, 2012 at 10:23 PM, Tom Lane <[email protected]> wrote:\n> Josh Berkus <[email protected]> writes:\n>>> Funny thing is, if I set constraint_exclusion=on, it works as\n>>> expected. But not with constraint_exclusion=partition.\n>\n>> The difference between \"on\" and \"partition\" is how it treats UNION.\n>> This seems to be working as designed.\n>\n> Well, what \"partition\" actually means is \"only bother to try constraint\n> exclusion proofs on appendrel members\". UNION ALL trees will get\n> flattened into appendrels in some cases. In a quick look at the code,\n> it seems like in recent releases the restrictions are basically that the\n> UNION ALL arms have to (1) each be a plain SELECT from a single table\n> with no WHERE restriction; (2) all produce the same column datatypes;\n> and (3) not have any volatile functions in the SELECT lists. I might be\n> missing something relevant to the OP's case, but it's hard to tell\n> without a concrete example.\n\nI would think our view succeeds all those tests, but I'm not entirely\nsure about 2. It does use coalesce too, but I really doubt coalesce is\nvolatile... right?\n\nI don't have access to the code during the weekend, but I'll check\nfirst thing tomorrow whether we have some datatype inconsistencies I\ndidn't notice.\n\nThanks for the hint.\n\n", "msg_date": "Sun, 4 Nov 2012 18:32:56 +0100", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint exclusion in views" }, { "msg_contents": "On Sun, Nov 4, 2012 at 2:32 PM, Claudio Freire <[email protected]> wrote:\n>> Well, what \"partition\" actually means is \"only bother to try constraint\n>> exclusion proofs on appendrel members\". UNION ALL trees will get\n>> flattened into appendrels in some cases. In a quick look at the code,\n>> it seems like in recent releases the restrictions are basically that the\n>> UNION ALL arms have to (1) each be a plain SELECT from a single table\n>> with no WHERE restriction; (2) all produce the same column datatypes;\n>> and (3) not have any volatile functions in the SELECT lists. I might be\n>> missing something relevant to the OP's case, but it's hard to tell\n>> without a concrete example.\n>\n> I would think our view succeeds all those tests, but I'm not entirely\n> sure about 2. It does use coalesce too, but I really doubt coalesce is\n> volatile... right?\n>\n> I don't have access to the code during the weekend, but I'll check\n> first thing tomorrow whether we have some datatype inconsistencies I\n> didn't notice.\n>\n> Thanks for the hint.\n\nIt was indeed a type mismatch, there was an int in one subquery that\nwas a bigint in all the others.\nThanks a lot.\n\n", "msg_date": "Mon, 5 Nov 2012 17:28:31 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Constraint exclusion in views" } ]
[ { "msg_contents": "Hello everyone,\n\nIf one would create a postgres performance tool, what would one like to measure aside from the obvious things common to all databases (query times, locks etc)?\n\nI'm in the slightly difficult situation of being asked to port \"Jet Profiler for MySQL\" (http://www.jetprofiler.com) for use with postgres (a \"Jet Profiler for PostgreSQL\" if you will). The author of the original product built several very heavily used systems on top of MySQL before writing this tool, so he knew what he wanted to look at.\n\nPersonally I'm also much more comfortable with MySQL than Postgres, having almost worked exclusively with the former and only recently started working with Postgres. Is anyone interested in helping out with some suggestions on where to start looking?\n\nChristoffer\nAEGIK / www.aegik.se\n\n\nP.S. Here's a copy-paste from the [MySQL] feature blurb on the Jet Profiler site, most - but not all - are not specific to MySQL.\n\t\n• Top Queries - See which queries are being run the most on your server.\n• Top Users - See which users are using your server the most.\n• Top Tables - See which database tables are opened the most.\n• Top States - See which states your database is most busy doing, such as creating temp tables.\n• Top IPs - See which client IPs are using your server the most.\n• Replication Profiling - You can measure how much capacity you have left on the replication SQL thread on slaves. If you are using MyISAM a lot, a lock analysis will help discover any locks associated with replication.\n• Master and Slave statistics - See how many threads are working on your masters and slaves. Find I/O or SQL bottlenecks.\n• MyISAM Lock Analysis - You can view which queries cause the most amount of MyISAM locking. This can be used to minimize replication lag and lock contention on busy tables.\n• Query Ratings - You can get your queries rated and see which queries are most likely to cause load due to missing indices, big tables and more.\n• Query Visualization - The query execution plan can be visualized using EXPLAIN. A diagram shows the table lookups involved, the rating and join size.\n• Slow Queries - See the slowest queries per time interval.\n• Zoomable GUI - You can easily zoom in on spikes in your load and see the corresponding queries for that time interval.\n• General Server Metrics - Such as threads connected, network I/O, command statistics, handler statistics and more. 50+ metrics are recorded from the server.\n• Save / Load Support - Save profiling data for later use, compare week to week or normal load vs high load situations.\n• Low Overhead - Running the tool against your database typically costs around 1%. Recording granularity customizable.\n• Supports all MySQL Versions - Works on 3.x (!), 4.0, 4.1, 5.0, 5.1 and 6.0, Enterprise and Community editions.\n• Works on Windows, Mac and Linux\n• No Server Changes\n• Simple Setup\n• Free / Professional Version - The free version doesn't cost anything and isn't time limited. Upgrade to the professional version to get all features.\n• Multi-language support - available in English, German and Swedish\n", "msg_date": "Mon, 5 Nov 2012 00:54:56 +0100", "msg_from": "=?windows-1252?Q?Christoffer_Lern=F6?= <[email protected]>", "msg_from_op": true, "msg_subject": "Suggested test points for a performance tool?" 
}, { "msg_contents": "On Mon, Nov 5, 2012 at 12:54 AM, Christoffer Lernö\n<[email protected]> wrote:\n> • Top Queries - See which queries are being run the most on your server.\n> • Top Users - See which users are using your server the most.\n> • Top Tables - See which database tables are opened the most.\n> • Top States - See which states your database is most busy doing, such as creating temp tables.\n> • Top IPs - See which client IPs are using your server the most.\n\nYou probably also want to see top *and bottom* indexes, as maintaining\nindexes in postgresql not only hurts insert performance, but it also\nimpairs HOT updates, which makes it much more important to remove\nunused indexes than in MySQL.\n\nYou have access to lots of statistics for all relations (tables,\nindexes and such), which can also help refine your reports.\n\nPersonally, I routinely check \"long-lived transactions\" and the\nqueries that generate them, as they're a constant pain with MVCC, by\nsorting pg_stat_activity by transaction start time, and checking the\nfirst few transaction if they've been started \"long ago\" (with \"long\"\nvarying from application to application)\n\n", "msg_date": "Mon, 5 Nov 2012 01:08:04 +0100", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggested test points for a performance tool?" } ]
[ { "msg_contents": "where should i get the original working source code of DBT-1 bechmark and it\nis compatible with postgres 9.2?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/DBT-1-tp5730629.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sun, 4 Nov 2012 22:43:04 -0800 (PST)", "msg_from": "Samrat <[email protected]>", "msg_from_op": true, "msg_subject": "DBT-1" } ]
[ { "msg_contents": "where should i get the original working source code of DBT-1 benchmark and it is compatible with postgres 9.2?\n\nBest Regards\n------------------------\n[Description: a]\nSamrat Revagade |Software Engineer | NTTDATA Global Technology Services Pvt. Ltd.|\[email protected]<mailto:[email protected]> | Work- +91-20-66041500 x 626 | Mob - +91 9665572989\n[Description: Description: nttd]\n\n\n\n______________________________________________________________________\nDisclaimer:This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding", "msg_date": "Mon, 5 Nov 2012 06:44:52 +0000", "msg_from": "\"Revagade, Samrat\" <[email protected]>", "msg_from_op": true, "msg_subject": "DBT-1" } ]
[ { "msg_contents": "Hello, this is my first message to this list, so sorry if this is not \nthe right place to discuss this or if some data is missing from this \nmessage.\n\nI'll gladly send any data you request that would help us to understand \nthis issue. I don't believe I'm allowed to share the actual database \ndump, but other than that I can provide much more details you might ask for.\n\nI can't understand why PG 9.2 performs so differently from PG 9.1.\n\nI tested these queries in my Debian unstable amd64 box after restoring \nthe same database dump this morning in both PG 9.1 (Debian unstable \nrepository) and PG9.2 (Debian experimental repository) with same settings:\n\nhttps://gist.github.com/3f1f3aad3847155e1e35\n\nIgnore all lines like the line below because it doesn't make any \ndifference on my tests if I just remove them or any other column from \nthe SELECT clause:\n\n\" exists(select id from condition_document_excerpt where \ncondition_id=c1686.id) as v1686_has_reference,\"\n\nThe results below are pretty much the same if you assume \"SELECT 1 FROM \n...\".\n\nI have proper indices created for all tables and the query is fast in \nboth PG versions when I don't use many conditions in the WHERE clause.\n\nfast.sql returns the same data as slow.sql but it returns much faster in \nmy tests with PG 9.1.\n\nSo here are the completion times for each query on each PG version:\n\nQuery | PG 9.1 | PG 9.2 |\n-----------------------------------\nfast.sql| 650 ms (0.65s) | 690s |\nslow.sql| 419s | 111s |\n\n\nFor the curious, the results would be very similar to slow.sql if I use \ninner joins with the conditions inside the WHERE moved to the \"ON\" \nclause of the inner join instead of the left outer join + global WHERE \napproach. But I don't have this option anyway because this query is \ngenerated dynamically and not all my queries are \"ALL\"-like queries.\n\nHere are the relevant indices (id is SERIAL primary key in all tables):\n\nCREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx\n ON transaction_condition\n USING btree\n (transaction_id, type_id);\nCREATE INDEX index_transaction_condition_on_transaction_id\n ON transaction_condition\n USING btree\n (transaction_id);\nCREATE INDEX index_transaction_condition_on_type_id\n ON transaction_condition\n USING btree\n (type_id);\n\nCREATE INDEX acquirer_target_names\n ON company_transaction\n USING btree\n (acquiror_company_name COLLATE pg_catalog.\"default\", \ntarget_company_name COLLATE pg_catalog.\"default\");\nCREATE INDEX index_company_transaction_on_target_company_name\n ON company_transaction\n USING btree\n (target_company_name COLLATE pg_catalog.\"default\");\nCREATE INDEX index_company_transaction_on_date\n ON company_transaction\n USING btree\n (date);\nCREATE INDEX index_company_transaction_on_edit_status\n ON company_transaction\n USING btree\n (edit_status COLLATE pg_catalog.\"default\");\n\nCREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id\n ON condition_boolean_value\n USING btree\n (condition_id);\nCREATE INDEX index_condition_boolean_value_on_value_and_condition_id\n ON condition_boolean_value\n USING btree\n (value COLLATE pg_catalog.\"default\", condition_id);\n\nCREATE UNIQUE INDEX index_condition_option_value_on_condition_id\n ON condition_option_value\n USING btree\n (condition_id);\nCREATE INDEX index_condition_option_value_on_value_id_and_condition_id\n ON condition_option_value\n USING btree\n (value_id, condition_id);\n\n\nCREATE INDEX 
index_condition_option_label_on_type_id_and_position\n ON condition_option_label\n USING btree\n (type_id, \"position\");\nCREATE INDEX index_condition_option_label_on_type_id_and_value\n ON condition_option_label\n USING btree\n (type_id, value COLLATE pg_catalog.\"default\");\n\n\nCREATE UNIQUE INDEX index_condition_string_value_on_condition_id\n ON condition_string_value\n USING btree\n (condition_id);\nCREATE INDEX index_condition_string_value_on_value_and_condition_id\n ON condition_string_value\n USING btree\n (value COLLATE pg_catalog.\"default\", condition_id);\n\n\nPlease let me know of any suggestions on how to try to get similar \nresults in PG 9.2 as well as to understand why fast.sql performs so much \nbetter than slow.sql on PG 9.1.\n\nBest,\nRodrigo.\n\n", "msg_date": "Tue, 06 Nov 2012 15:11:58 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "On Tue, Nov 6, 2012 at 11:11 AM, Rodrigo Rosenfeld Rosas\n<[email protected]> wrote:\n> Hello, this is my first message to this list, so sorry if this is not the\n> right place to discuss this or if some data is missing from this message.\n>\n> I'll gladly send any data you request that would help us to understand this\n> issue. 
I don't believe I'm allowed to share the actual database dump, but\n> other than that I can provide much more details you might ask for.\n>\n> I can't understand why PG 9.2 performs so differently from PG 9.1.\n>\n> I tested these queries in my Debian unstable amd64 box after restoring the\n> same database dump this morning in both PG 9.1 (Debian unstable repository)\n> and PG9.2 (Debian experimental repository) with same settings:\n>\n> https://gist.github.com/3f1f3aad3847155e1e35\n>\n> Ignore all lines like the line below because it doesn't make any difference\n> on my tests if I just remove them or any other column from the SELECT\n> clause:\n>\n> \" exists(select id from condition_document_excerpt where\n> condition_id=c1686.id) as v1686_has_reference,\"\n>\n> The results below are pretty much the same if you assume \"SELECT 1 FROM\n> ...\".\n>\n> I have proper indices created for all tables and the query is fast in both\n> PG versions when I don't use many conditions in the WHERE clause.\n>\n> fast.sql returns the same data as slow.sql but it returns much faster in my\n> tests with PG 9.1.\n>\n> So here are the completion times for each query on each PG version:\n>\n> Query | PG 9.1 | PG 9.2 |\n> -----------------------------------\n> fast.sql| 650 ms (0.65s) | 690s |\n> slow.sql| 419s | 111s |\n>\n>\n> For the curious, the results would be very similar to slow.sql if I use\n> inner joins with the conditions inside the WHERE moved to the \"ON\" clause of\n> the inner join instead of the left outer join + global WHERE approach. But I\n> don't have this option anyway because this query is generated dynamically\n> and not all my queries are \"ALL\"-like queries.\n>\n> Here are the relevant indices (id is SERIAL primary key in all tables):\n>\n> CREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx\n> ON transaction_condition\n> USING btree\n> (transaction_id, type_id);\n> CREATE INDEX index_transaction_condition_on_transaction_id\n> ON transaction_condition\n> USING btree\n> (transaction_id);\n> CREATE INDEX index_transaction_condition_on_type_id\n> ON transaction_condition\n> USING btree\n> (type_id);\n>\n> CREATE INDEX acquirer_target_names\n> ON company_transaction\n> USING btree\n> (acquiror_company_name COLLATE pg_catalog.\"default\", target_company_name\n> COLLATE pg_catalog.\"default\");\n> CREATE INDEX index_company_transaction_on_target_company_name\n> ON company_transaction\n> USING btree\n> (target_company_name COLLATE pg_catalog.\"default\");\n> CREATE INDEX index_company_transaction_on_date\n> ON company_transaction\n> USING btree\n> (date);\n> CREATE INDEX index_company_transaction_on_edit_status\n> ON company_transaction\n> USING btree\n> (edit_status COLLATE pg_catalog.\"default\");\n>\n> CREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id\n> ON condition_boolean_value\n> USING btree\n> (condition_id);\n> CREATE INDEX index_condition_boolean_value_on_value_and_condition_id\n> ON condition_boolean_value\n> USING btree\n> (value COLLATE pg_catalog.\"default\", condition_id);\n>\n> CREATE UNIQUE INDEX index_condition_option_value_on_condition_id\n> ON condition_option_value\n> USING btree\n> (condition_id);\n> CREATE INDEX index_condition_option_value_on_value_id_and_condition_id\n> ON condition_option_value\n> USING btree\n> (value_id, condition_id);\n>\n>\n> CREATE INDEX index_condition_option_label_on_type_id_and_position\n> ON condition_option_label\n> USING btree\n> (type_id, \"position\");\n> CREATE INDEX 
index_condition_option_label_on_type_id_and_value\n> ON condition_option_label\n> USING btree\n> (type_id, value COLLATE pg_catalog.\"default\");\n>\n>\n> CREATE UNIQUE INDEX index_condition_string_value_on_condition_id\n> ON condition_string_value\n> USING btree\n> (condition_id);\n> CREATE INDEX index_condition_string_value_on_value_and_condition_id\n> ON condition_string_value\n> USING btree\n> (value COLLATE pg_catalog.\"default\", condition_id);\n>\n>\n> Please let me know of any suggestions on how to try to get similar results\n> in PG 9.2 as well as to understand why fast.sql performs so much better than\n> slow.sql on PG 9.1.\n\n\nneed explain analyze for 9.1 vs 9.2. use this site:\nhttp://explain.depesz.com/ to post info.\n\nlooking at your query -- it's a fair possibility that the root cause\nof your issue is your database schema and organization. It's hard to\ntell for sure, but it looks like you might have dived head first into\nthe EAV anti-pattern -- deconstructing your data to such a degree that\naccurate statistics and query plans are difficult or impossible. I\nmean this in the most constructive way possible naturally. If that is\nindeed the case a good plan is going to be sheer luck as the database\nis essentially guessing.\n\nProblem could also be no statistics (run ANALYZE to test) or some\nother configuration problem (like index locale), or a bona fide\nregression.\n\nmerlin\n\n", "msg_date": "Tue, 6 Nov 2012 11:22:12 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> Problem could also be no statistics (run ANALYZE to test) or some\n> other configuration problem (like index locale), or a bona fide\n> regression.\n\nI'm wondering about join_collapse_limit in particular --- if that wasn't\ncranked up in the 9.1 installation, it would be pure luck if you got a\ngood query plan for an example like this. Maybe that and/or other\nparameter settings didn't get transposed to the 9.2 installation.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 Nov 2012 12:36:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Hi Rodrigo,\n\nIt looks like a lot of joins and 9.2 does some optimizations that\ninternally add additional joins. Did you try raising the\njoin_collapse_limit and maybe the from_collapse_limit from the\ndefault values of 8?\n\nRegards,\nKen\n\nOn Tue, Nov 06, 2012 at 03:11:58PM -0200, Rodrigo Rosenfeld Rosas wrote:\n> Hello, this is my first message to this list, so sorry if this is\n> not the right place to discuss this or if some data is missing from\n> this message.\n> \n> I'll gladly send any data you request that would help us to\n> understand this issue. 
I don't believe I'm allowed to share the\n> actual database dump, but other than that I can provide much more\n> details you might ask for.\n> \n> I can't understand why PG 9.2 performs so differently from PG 9.1.\n> \n> I tested these queries in my Debian unstable amd64 box after\n> restoring the same database dump this morning in both PG 9.1 (Debian\n> unstable repository) and PG9.2 (Debian experimental repository) with\n> same settings:\n> \n> https://gist.github.com/3f1f3aad3847155e1e35\n> \n> Ignore all lines like the line below because it doesn't make any\n> difference on my tests if I just remove them or any other column\n> from the SELECT clause:\n> \n> \" exists(select id from condition_document_excerpt where\n> condition_id=c1686.id) as v1686_has_reference,\"\n> \n> The results below are pretty much the same if you assume \"SELECT 1\n> FROM ...\".\n> \n> I have proper indices created for all tables and the query is fast\n> in both PG versions when I don't use many conditions in the WHERE\n> clause.\n> \n> fast.sql returns the same data as slow.sql but it returns much\n> faster in my tests with PG 9.1.\n> \n> So here are the completion times for each query on each PG version:\n> \n> Query | PG 9.1 | PG 9.2 |\n> -----------------------------------\n> fast.sql| 650 ms (0.65s) | 690s |\n> slow.sql| 419s | 111s |\n> \n> \n> For the curious, the results would be very similar to slow.sql if I\n> use inner joins with the conditions inside the WHERE moved to the\n> \"ON\" clause of the inner join instead of the left outer join +\n> global WHERE approach. But I don't have this option anyway because\n> this query is generated dynamically and not all my queries are\n> \"ALL\"-like queries.\n> \n> Here are the relevant indices (id is SERIAL primary key in all tables):\n> \n> CREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx\n> ON transaction_condition\n> USING btree\n> (transaction_id, type_id);\n> CREATE INDEX index_transaction_condition_on_transaction_id\n> ON transaction_condition\n> USING btree\n> (transaction_id);\n> CREATE INDEX index_transaction_condition_on_type_id\n> ON transaction_condition\n> USING btree\n> (type_id);\n> \n> CREATE INDEX acquirer_target_names\n> ON company_transaction\n> USING btree\n> (acquiror_company_name COLLATE pg_catalog.\"default\",\n> target_company_name COLLATE pg_catalog.\"default\");\n> CREATE INDEX index_company_transaction_on_target_company_name\n> ON company_transaction\n> USING btree\n> (target_company_name COLLATE pg_catalog.\"default\");\n> CREATE INDEX index_company_transaction_on_date\n> ON company_transaction\n> USING btree\n> (date);\n> CREATE INDEX index_company_transaction_on_edit_status\n> ON company_transaction\n> USING btree\n> (edit_status COLLATE pg_catalog.\"default\");\n> \n> CREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id\n> ON condition_boolean_value\n> USING btree\n> (condition_id);\n> CREATE INDEX index_condition_boolean_value_on_value_and_condition_id\n> ON condition_boolean_value\n> USING btree\n> (value COLLATE pg_catalog.\"default\", condition_id);\n> \n> CREATE UNIQUE INDEX index_condition_option_value_on_condition_id\n> ON condition_option_value\n> USING btree\n> (condition_id);\n> CREATE INDEX index_condition_option_value_on_value_id_and_condition_id\n> ON condition_option_value\n> USING btree\n> (value_id, condition_id);\n> \n> \n> CREATE INDEX index_condition_option_label_on_type_id_and_position\n> ON condition_option_label\n> USING btree\n> (type_id, \"position\");\n> CREATE INDEX 
index_condition_option_label_on_type_id_and_value\n> ON condition_option_label\n> USING btree\n> (type_id, value COLLATE pg_catalog.\"default\");\n> \n> \n> CREATE UNIQUE INDEX index_condition_string_value_on_condition_id\n> ON condition_string_value\n> USING btree\n> (condition_id);\n> CREATE INDEX index_condition_string_value_on_value_and_condition_id\n> ON condition_string_value\n> USING btree\n> (value COLLATE pg_catalog.\"default\", condition_id);\n> \n> \n> Please let me know of any suggestions on how to try to get similar\n> results in PG 9.2 as well as to understand why fast.sql performs so\n> much better than slow.sql on PG 9.1.\n> \n> Best,\n> Rodrigo.\n\n", "msg_date": "Tue, 6 Nov 2012 12:08:27 -0600", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Hi Merlin,\n\nEm 06-11-2012 15:22, Merlin Moncure escreveu:\n> On Tue, Nov 6, 2012 at 11:11 AM, Rodrigo Rosenfeld Rosas\n> <[email protected]> wrote:\n>> Hello, this is my first message to this list, so sorry if this is not the\n>> right place to discuss this or if some data is missing from this message.\n>>\n>> I'll gladly send any data you request that would help us to understand this\n>> issue. I don't believe I'm allowed to share the actual database dump, but\n>> other than that I can provide much more details you might ask for.\n>>\n>> I can't understand why PG 9.2 performs so differently from PG 9.1.\n>>\n>> I tested these queries in my Debian unstable amd64 box after restoring the\n>> same database dump this morning in both PG 9.1 (Debian unstable repository)\n>> and PG9.2 (Debian experimental repository) with same settings:\n>>\n>> https://gist.github.com/3f1f3aad3847155e1e35\n>>\n>> Ignore all lines like the line below because it doesn't make any difference\n>> on my tests if I just remove them or any other column from the SELECT\n>> clause:\n>>\n>> \" exists(select id from condition_document_excerpt where\n>> condition_id=c1686.id) as v1686_has_reference,\"\n>>\n>> The results below are pretty much the same if you assume \"SELECT 1 FROM\n>> ...\".\n>>\n>> I have proper indices created for all tables and the query is fast in both\n>> PG versions when I don't use many conditions in the WHERE clause.\n>>\n>> fast.sql returns the same data as slow.sql but it returns much faster in my\n>> tests with PG 9.1.\n>>\n>> So here are the completion times for each query on each PG version:\n>>\n>> Query | PG 9.1 | PG 9.2 |\n>> -----------------------------------\n>> fast.sql| 650 ms (0.65s) | 690s |\n>> slow.sql| 419s | 111s |\n>>\n>>\n>> For the curious, the results would be very similar to slow.sql if I use\n>> inner joins with the conditions inside the WHERE moved to the \"ON\" clause of\n>> the inner join instead of the left outer join + global WHERE approach. 
But I\n>> don't have this option anyway because this query is generated dynamically\n>> and not all my queries are \"ALL\"-like queries.\n>>\n>> Here are the relevant indices (id is SERIAL primary key in all tables):\n>>\n>> CREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx\n>> ON transaction_condition\n>> USING btree\n>> (transaction_id, type_id);\n>> CREATE INDEX index_transaction_condition_on_transaction_id\n>> ON transaction_condition\n>> USING btree\n>> (transaction_id);\n>> CREATE INDEX index_transaction_condition_on_type_id\n>> ON transaction_condition\n>> USING btree\n>> (type_id);\n>>\n>> CREATE INDEX acquirer_target_names\n>> ON company_transaction\n>> USING btree\n>> (acquiror_company_name COLLATE pg_catalog.\"default\", target_company_name\n>> COLLATE pg_catalog.\"default\");\n>> CREATE INDEX index_company_transaction_on_target_company_name\n>> ON company_transaction\n>> USING btree\n>> (target_company_name COLLATE pg_catalog.\"default\");\n>> CREATE INDEX index_company_transaction_on_date\n>> ON company_transaction\n>> USING btree\n>> (date);\n>> CREATE INDEX index_company_transaction_on_edit_status\n>> ON company_transaction\n>> USING btree\n>> (edit_status COLLATE pg_catalog.\"default\");\n>>\n>> CREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id\n>> ON condition_boolean_value\n>> USING btree\n>> (condition_id);\n>> CREATE INDEX index_condition_boolean_value_on_value_and_condition_id\n>> ON condition_boolean_value\n>> USING btree\n>> (value COLLATE pg_catalog.\"default\", condition_id);\n>>\n>> CREATE UNIQUE INDEX index_condition_option_value_on_condition_id\n>> ON condition_option_value\n>> USING btree\n>> (condition_id);\n>> CREATE INDEX index_condition_option_value_on_value_id_and_condition_id\n>> ON condition_option_value\n>> USING btree\n>> (value_id, condition_id);\n>>\n>>\n>> CREATE INDEX index_condition_option_label_on_type_id_and_position\n>> ON condition_option_label\n>> USING btree\n>> (type_id, \"position\");\n>> CREATE INDEX index_condition_option_label_on_type_id_and_value\n>> ON condition_option_label\n>> USING btree\n>> (type_id, value COLLATE pg_catalog.\"default\");\n>>\n>>\n>> CREATE UNIQUE INDEX index_condition_string_value_on_condition_id\n>> ON condition_string_value\n>> USING btree\n>> (condition_id);\n>> CREATE INDEX index_condition_string_value_on_value_and_condition_id\n>> ON condition_string_value\n>> USING btree\n>> (value COLLATE pg_catalog.\"default\", condition_id);\n>>\n>>\n>> Please let me know of any suggestions on how to try to get similar results\n>> in PG 9.2 as well as to understand why fast.sql performs so much better than\n>> slow.sql on PG 9.1.\n>\n> need explain analyze for 9.1 vs 9.2. use this site:\n> http://explain.depesz.com/ to post info.\n\nhttp://explain.depesz.com/s/ToX (fast on 9.1)\nhttp://explain.depesz.com/s/65t (fast on 9.2)\nhttp://explain.depesz.com/s/gZm (slow on 9.1)\nhttp://explain.depesz.com/s/END (slow on 9.2 - funny that the generated \nURL was END while this was my last explain :D )\n\n> looking at your query -- it's a fair possibility that the root cause\n> of your issue is your database schema and organization. It's hard to\n> tell for sure, but it looks like you might have dived head first into\n> the EAV anti-pattern -- deconstructing your data to such a degree that\n> accurate statistics and query plans are difficult or impossible. I\n> mean this in the most constructive way possible naturally. 
If that is\n> indeed the case a good plan is going to be sheer luck as the database\n> is essentially guessing.\n\nLet me explain how the application works, how the database was designed \nand hopefully you'll be able to guide me in the correct way to design \nthe database for this use case.\n\nOur application will present a big contract to some attorneys. There is \ncurrently a dynamic template with around 800 fields to be extracted from \neach contract in our system. These fields can be of different types \n(boolean, string, number, currency, percents, fixed options, dates, \ntime-spans and so on). There is a fields tree that is maintained by the \napplication editors. The application will allow the attorneys to read \nthe contracts and highlight parts of the contract where they extracted \neach field from and associate each field with its value interpreted by \nthe attorney and store the reference to what paragraphs in the contract \ndemonstrate where the value came from.\n\nThen there is an interface that will allow clients to search for \ntransactions based on its associated contracts and those ~800 fields. \nFor the particular query above, 14 of the 800 fields have been searched \nby this particular user (most of them were boolean ones plus a few \noptions and a string field). Usually the queries perform much better \nwhen less than 10 fields are used in the criteria. But our client wants \nus to handle up to 20 fields in a single query or they won't close the \ndeal and this is a really important client to us.\n\nSo, for the time being my only plan is to rollback to PG 9.1 and replace \nmy query builder that currently generate queries like slow.sql and \nchange it to generate the queries like fast.sql but I'm pretty sure this \napproach should be avoided. I just don't know any other alternative for \nthe time being.\n\nWhat database design would you recommend me for this use case?\n\nOr, how would you recommend me to perform the queries? Keep in mind that \na user could create a filter like \"(f10.value = 'N' OR f11.value = 'N') \nAND f13.value=50\".\n\n> Problem could also be no statistics (run ANALYZE to test) or some\n> other configuration problem (like index locale), or a bona fide\n> regression.\n\nI know barely anything about performance tuning in PostgreSQL (I tried \nmany tutorials but I have a hard time trying to understand EXPLAIN \nqueries for queries like above). Would you mind in explaining me how to \nimprove statistics or what is this index locale thing you talked about? \nAlso, I have never heard about bona fide regression before. I'm looking \nfor it on Google right now.\n\nThank you very much for your response!\n\nCheers,\nRodrigo.\n\n\n\n\n\n\n Hi Merlin,\n\n Em 06-11-2012 15:22, Merlin Moncure escreveu:\n \nOn Tue, Nov 6, 2012 at 11:11 AM, Rodrigo Rosenfeld Rosas\n<[email protected]> wrote:\n\n\nHello, this is my first message to this list, so sorry if this is not the\nright place to discuss this or if some data is missing from this message.\n\nI'll gladly send any data you request that would help us to understand this\nissue. 
I don't believe I'm allowed to share the actual database dump, but\nother than that I can provide much more details you might ask for.\n\nI can't understand why PG 9.2 performs so differently from PG 9.1.\n\nI tested these queries in my Debian unstable amd64 box after restoring the\nsame database dump this morning in both PG 9.1 (Debian unstable repository)\nand PG9.2 (Debian experimental repository) with same settings:\n\nhttps://gist.github.com/3f1f3aad3847155e1e35\n\nIgnore all lines like the line below because it doesn't make any difference\non my tests if I just remove them or any other column from the SELECT\nclause:\n\n\" exists(select id from condition_document_excerpt where\ncondition_id=c1686.id) as v1686_has_reference,\"\n\nThe results below are pretty much the same if you assume \"SELECT 1 FROM\n...\".\n\nI have proper indices created for all tables and the query is fast in both\nPG versions when I don't use many conditions in the WHERE clause.\n\nfast.sql returns the same data as slow.sql but it returns much faster in my\ntests with PG 9.1.\n\nSo here are the completion times for each query on each PG version:\n\nQuery | PG 9.1 | PG 9.2 |\n-----------------------------------\nfast.sql| 650 ms (0.65s) | 690s |\nslow.sql| 419s | 111s |\n\n\nFor the curious, the results would be very similar to slow.sql if I use\ninner joins with the conditions inside the WHERE moved to the \"ON\" clause of\nthe inner join instead of the left outer join + global WHERE approach. But I\ndon't have this option anyway because this query is generated dynamically\nand not all my queries are \"ALL\"-like queries.\n\nHere are the relevant indices (id is SERIAL primary key in all tables):\n\nCREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx\n ON transaction_condition\n USING btree\n (transaction_id, type_id);\nCREATE INDEX index_transaction_condition_on_transaction_id\n ON transaction_condition\n USING btree\n (transaction_id);\nCREATE INDEX index_transaction_condition_on_type_id\n ON transaction_condition\n USING btree\n (type_id);\n\nCREATE INDEX acquirer_target_names\n ON company_transaction\n USING btree\n (acquiror_company_name COLLATE pg_catalog.\"default\", target_company_name\nCOLLATE pg_catalog.\"default\");\nCREATE INDEX index_company_transaction_on_target_company_name\n ON company_transaction\n USING btree\n (target_company_name COLLATE pg_catalog.\"default\");\nCREATE INDEX index_company_transaction_on_date\n ON company_transaction\n USING btree\n (date);\nCREATE INDEX index_company_transaction_on_edit_status\n ON company_transaction\n USING btree\n (edit_status COLLATE pg_catalog.\"default\");\n\nCREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id\n ON condition_boolean_value\n USING btree\n (condition_id);\nCREATE INDEX index_condition_boolean_value_on_value_and_condition_id\n ON condition_boolean_value\n USING btree\n (value COLLATE pg_catalog.\"default\", condition_id);\n\nCREATE UNIQUE INDEX index_condition_option_value_on_condition_id\n ON condition_option_value\n USING btree\n (condition_id);\nCREATE INDEX index_condition_option_value_on_value_id_and_condition_id\n ON condition_option_value\n USING btree\n (value_id, condition_id);\n\n\nCREATE INDEX index_condition_option_label_on_type_id_and_position\n ON condition_option_label\n USING btree\n (type_id, \"position\");\nCREATE INDEX index_condition_option_label_on_type_id_and_value\n ON condition_option_label\n USING btree\n (type_id, value COLLATE pg_catalog.\"default\");\n\n\nCREATE UNIQUE INDEX 
index_condition_string_value_on_condition_id\n ON condition_string_value\n USING btree\n (condition_id);\nCREATE INDEX index_condition_string_value_on_value_and_condition_id\n ON condition_string_value\n USING btree\n (value COLLATE pg_catalog.\"default\", condition_id);\n\n\nPlease let me know of any suggestions on how to try to get similar results\nin PG 9.2 as well as to understand why fast.sql performs so much better than\nslow.sql on PG 9.1.\n\n\n\n\nneed explain analyze for 9.1 vs 9.2. use this site:\nhttp://explain.depesz.com/ to post info.\n\n\n\nhttp://explain.depesz.com/s/ToX\n (fast on 9.1)\n\nhttp://explain.depesz.com/s/65t\n (fast on 9.2)\n\nhttp://explain.depesz.com/s/gZm\n (slow on 9.1)\n\nhttp://explain.depesz.com/s/END\n (slow on 9.2 - funny that the generated URL was END while this was\n my last explain :D )\n\n\n\nlooking at your query -- it's a fair possibility that the root cause\nof your issue is your database schema and organization. It's hard to\ntell for sure, but it looks like you might have dived head first into\nthe EAV anti-pattern -- deconstructing your data to such a degree that\naccurate statistics and query plans are difficult or impossible. I\nmean this in the most constructive way possible naturally. If that is\nindeed the case a good plan is going to be sheer luck as the database\nis essentially guessing.\n\n\n Let me explain how the application works, how the database was\n designed and hopefully you'll be able to guide me in the correct way\n to design the database for this use case.\n\n Our application will present a big contract to some attorneys. There\n is currently a dynamic template with around 800 fields to be\n extracted from each contract in our system. These fields can be of\n different types (boolean, string, number, currency, percents, fixed\n options, dates, time-spans and so on). There is a fields tree that\n is maintained by the application editors. The application will allow\n the attorneys to read the contracts and highlight parts of the\n contract where they extracted each field from and associate each\n field with its value interpreted by the attorney and store the\n reference to what paragraphs in the contract demonstrate where the\n value came from.\n\n Then there is an interface that will allow clients to search for\n transactions based on its associated contracts and those ~800\n fields. For the particular query above, 14 of the 800 fields have\n been searched by this particular user (most of them were boolean\n ones plus a few options and a string field). Usually the queries\n perform much better when less than 10 fields are used in the\n criteria. But our client wants us to handle up to 20 fields in a\n single query or they won't close the deal and this is a really\n important client to us.\n\n So, for the time being my only plan is to rollback to PG 9.1 and\n replace my query builder that currently generate queries like\n slow.sql and change it to generate the queries like fast.sql but I'm\n pretty sure this approach should be avoided. I just don't know any\n other alternative for the time being.\n\n What database design would you recommend me for this use case?\n\n Or, how would you recommend me to perform the queries? 
Keep in mind\n that a user could create a filter like \"(f10.value = 'N' OR\n f11.value = 'N') AND f13.value=50\".\n\n\nProblem could also be no statistics (run ANALYZE to test) or some\nother configuration problem (like index locale), or a bona fide\nregression.\n\n\n I know barely anything about performance tuning in PostgreSQL (I\n tried many tutorials but I have a hard time trying to understand\n EXPLAIN queries for queries like above). Would you mind in\n explaining me how to improve statistics or what is this index locale\n thing you talked about? Also, I have never heard about bona fide\n regression before. I'm looking for it on Google right now.\n\n Thank you very much for your response!\n\n Cheers,\n Rodrigo.", "msg_date": "Tue, 06 Nov 2012 16:09:31 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "I've raised both to 25 in PG 9.2 and reloaded the server. Didn't make \nany difference. :(\n\nThanks for the suggestion anyway.\n\nCheers,\nRodrigo.\n\nEm 06-11-2012 16:08, [email protected] escreveu:\n> Hi Rodrigo,\n>\n> It looks like a lot of joins and 9.2 does some optimizations that\n> internally add additional joins. Did you try raising the\n> join_collapse_limit and maybe the from_collapse_limit from the\n> default values of 8?\n>\n> Regards,\n> Ken\n>\n> On Tue, Nov 06, 2012 at 03:11:58PM -0200, Rodrigo Rosenfeld Rosas wrote:\n>> Hello, this is my first message to this list, so sorry if this is\n>> not the right place to discuss this or if some data is missing from\n>> this message.\n>>\n>> I'll gladly send any data you request that would help us to\n>> understand this issue. I don't believe I'm allowed to share the\n>> actual database dump, but other than that I can provide much more\n>> details you might ask for.\n>>\n>> I can't understand why PG 9.2 performs so differently from PG 9.1.\n>>\n>> I tested these queries in my Debian unstable amd64 box after\n>> restoring the same database dump this morning in both PG 9.1 (Debian\n>> unstable repository) and PG9.2 (Debian experimental repository) with\n>> same settings:\n>>\n>> https://gist.github.com/3f1f3aad3847155e1e35\n>>\n>> Ignore all lines like the line below because it doesn't make any\n>> difference on my tests if I just remove them or any other column\n>> from the SELECT clause:\n>>\n>> \" exists(select id from condition_document_excerpt where\n>> condition_id=c1686.id) as v1686_has_reference,\"\n>>\n>> The results below are pretty much the same if you assume \"SELECT 1\n>> FROM ...\".\n>>\n>> I have proper indices created for all tables and the query is fast\n>> in both PG versions when I don't use many conditions in the WHERE\n>> clause.\n>>\n>> fast.sql returns the same data as slow.sql but it returns much\n>> faster in my tests with PG 9.1.\n>>\n>> So here are the completion times for each query on each PG version:\n>>\n>> Query | PG 9.1 | PG 9.2 |\n>> -----------------------------------\n>> fast.sql| 650 ms (0.65s) | 690s |\n>> slow.sql| 419s | 111s |\n>>\n>>\n>> For the curious, the results would be very similar to slow.sql if I\n>> use inner joins with the conditions inside the WHERE moved to the\n>> \"ON\" clause of the inner join instead of the left outer join +\n>> global WHERE approach. 
But I don't have this option anyway because\n>> this query is generated dynamically and not all my queries are\n>> \"ALL\"-like queries.\n>>\n>> Here are the relevant indices (id is SERIAL primary key in all tables):\n>>\n>> CREATE UNIQUE INDEX transaction_condition_transaction_id_type_id_idx\n>> ON transaction_condition\n>> USING btree\n>> (transaction_id, type_id);\n>> CREATE INDEX index_transaction_condition_on_transaction_id\n>> ON transaction_condition\n>> USING btree\n>> (transaction_id);\n>> CREATE INDEX index_transaction_condition_on_type_id\n>> ON transaction_condition\n>> USING btree\n>> (type_id);\n>>\n>> CREATE INDEX acquirer_target_names\n>> ON company_transaction\n>> USING btree\n>> (acquiror_company_name COLLATE pg_catalog.\"default\",\n>> target_company_name COLLATE pg_catalog.\"default\");\n>> CREATE INDEX index_company_transaction_on_target_company_name\n>> ON company_transaction\n>> USING btree\n>> (target_company_name COLLATE pg_catalog.\"default\");\n>> CREATE INDEX index_company_transaction_on_date\n>> ON company_transaction\n>> USING btree\n>> (date);\n>> CREATE INDEX index_company_transaction_on_edit_status\n>> ON company_transaction\n>> USING btree\n>> (edit_status COLLATE pg_catalog.\"default\");\n>>\n>> CREATE UNIQUE INDEX index_condition_boolean_value_on_condition_id\n>> ON condition_boolean_value\n>> USING btree\n>> (condition_id);\n>> CREATE INDEX index_condition_boolean_value_on_value_and_condition_id\n>> ON condition_boolean_value\n>> USING btree\n>> (value COLLATE pg_catalog.\"default\", condition_id);\n>>\n>> CREATE UNIQUE INDEX index_condition_option_value_on_condition_id\n>> ON condition_option_value\n>> USING btree\n>> (condition_id);\n>> CREATE INDEX index_condition_option_value_on_value_id_and_condition_id\n>> ON condition_option_value\n>> USING btree\n>> (value_id, condition_id);\n>>\n>>\n>> CREATE INDEX index_condition_option_label_on_type_id_and_position\n>> ON condition_option_label\n>> USING btree\n>> (type_id, \"position\");\n>> CREATE INDEX index_condition_option_label_on_type_id_and_value\n>> ON condition_option_label\n>> USING btree\n>> (type_id, value COLLATE pg_catalog.\"default\");\n>>\n>>\n>> CREATE UNIQUE INDEX index_condition_string_value_on_condition_id\n>> ON condition_string_value\n>> USING btree\n>> (condition_id);\n>> CREATE INDEX index_condition_string_value_on_value_and_condition_id\n>> ON condition_string_value\n>> USING btree\n>> (value COLLATE pg_catalog.\"default\", condition_id);\n>>\n>>\n>> Please let me know of any suggestions on how to try to get similar\n>> results in PG 9.2 as well as to understand why fast.sql performs so\n>> much better than slow.sql on PG 9.1.\n>>\n>> Best,\n>> Rodrigo.\n\n\n", "msg_date": "Tue, 06 Nov 2012 16:22:36 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Em 06-11-2012 15:36, Tom Lane escreveu:\n> Merlin Moncure<[email protected]> writes:\n>> Problem could also be no statistics (run ANALYZE to test) or some\n>> other configuration problem (like index locale), or a bona fide\n>> regression.\n> I'm wondering about join_collapse_limit in particular --- if that wasn't\n> cranked up in the 9.1 installation, it would be pure luck if you got a\n> good query plan for an example like this.\n\nI tried increasing it from 8 to 25 and it didn't make any difference.\n> Maybe that and/or other\n> parameter settings didn't get transposed to the 9.2 installation.\n\ndiff 
/etc/postgresql/9.[12]/main/postgresql.conf\n\n41c41\n< data_directory = '/var/lib/postgresql/9.1/main' # use \ndata in another directory\n---\n > data_directory = '/var/lib/postgresql/9.2/main' # use \ndata in another directory\n43c43\n< hba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based \nauthentication file\n---\n > hba_file = '/etc/postgresql/9.2/main/pg_hba.conf' # host-based \nauthentication file\n45c45\n< ident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident \nconfiguration file\n---\n > ident_file = '/etc/postgresql/9.2/main/pg_ident.conf' # ident \nconfiguration file\n49c49\n< external_pid_file = '/var/run/postgresql/9.1-main.pid' \n# write an extra PID file\n---\n > external_pid_file = '/var/run/postgresql/9.2-main.pid' \n# write an extra PID file\n63c63\n< port = 5433 # (change requires restart)\n---\n > port = 5432 # (change requires restart)\n556a557,558\n > ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'\n > ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'\n\nAny other idea?\n\n", "msg_date": "Tue, 06 Nov 2012 16:30:50 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "On Tue, Nov 6, 2012 at 12:09 PM, Rodrigo Rosenfeld Rosas\n<[email protected]> wrote:\n> http://explain.depesz.com/s/ToX (fast on 9.1)\n> http://explain.depesz.com/s/65t (fast on 9.2)\n> http://explain.depesz.com/s/gZm (slow on 9.1)\n> http://explain.depesz.com/s/END (slow on 9.2 - funny that the generated URL\n> was END while this was my last explain :D )\n\nHm -- looking at your 'slow' 9.2 query, it is reporting that the query\ntook 3 seconds (reported times are in milliseconds). How are you\ntiming the data? What happens when you run explain analyze\n<your_query> from psql (as in, how long does it take)?\n\n> Let me explain how the application works, how the database was designed and\n> hopefully you'll be able to guide me in the correct way to design the\n> database for this use case.\n>\n> Our application will present a big contract to some attorneys. There is\n> currently a dynamic template with around 800 fields to be extracted from\n> each contract in our system. These fields can be of different types\n> (boolean, string, number, currency, percents, fixed options, dates,\n> time-spans and so on). There is a fields tree that is maintained by the\n> application editors. The application will allow the attorneys to read the\n> contracts and highlight parts of the contract where they extracted each\n> field from and associate each field with its value interpreted by the\n> attorney and store the reference to what paragraphs in the contract\n> demonstrate where the value came from.\n>\n> Then there is an interface that will allow clients to search for\n> transactions based on its associated contracts and those ~800 fields. For\n> the particular query above, 14 of the 800 fields have been searched by this\n> particular user (most of them were boolean ones plus a few options and a\n> string field). Usually the queries perform much better when less than 10\n> fields are used in the criteria. 
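As an aside on the collapse-limit settings discussed above, they can also be raised per session right before planning the problem query, which makes before/after comparisons easier; the value 25 mirrors the one already tried and is illustrative only:

    -- session-level only, no reload needed
    set join_collapse_limit = 25;
    set from_collapse_limit = 25;
    explain (analyze, buffers)
    select ...;   -- the original 14-field query goes here

Note that once the collapsed join list exceeds geqo_threshold (12 by default), the genetic query optimizer takes over, which can make plans less repeatable from run to run.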
But our client wants us to handle up to 20\n> fields in a single query or they won't close the deal and this is a really\n> important client to us.\n>\n> So, for the time being my only plan is to rollback to PG 9.1 and replace my\n> query builder that currently generate queries like slow.sql and change it to\n> generate the queries like fast.sql but I'm pretty sure this approach should\n> be avoided. I just don't know any other alternative for the time being.\n>\n> What database design would you recommend me for this use case?\n\nI would strongly consider investigation of hstore type along with\ngist/gin index.\nselect * from company_transaction where contract_attributes @>\n'State=>Delaware, Paid=Y';\netc\n\nBarring that, I would then consider complete elimination of integer\nproxies for your variables. They make your query virtually impossible\nto read/write, and they don't help.\n\nmerlin\n\n", "msg_date": "Tue, 6 Nov 2012 12:42:21 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Em 06-11-2012 16:42, Merlin Moncure escreveu:\n> On Tue, Nov 6, 2012 at 12:09 PM, Rodrigo Rosenfeld Rosas\n> <[email protected]> wrote:\n>> http://explain.depesz.com/s/ToX (fast on 9.1)\n>> http://explain.depesz.com/s/65t (fast on 9.2)\n>> http://explain.depesz.com/s/gZm (slow on 9.1)\n>> http://explain.depesz.com/s/END (slow on 9.2 - funny that the generated URL\n>> was END while this was my last explain :D )\n> Hm -- looking at your 'slow' 9.2 query, it is reporting that the query\n> took 3 seconds (reported times are in milliseconds). How are you\n> timing the data? What happens when you run explain analyze\n> <your_query> from psql (as in, how long does it take)?\n\nThe time I reported in the tables of my first message were the time \nreported by pgAdmin3 (compiled from source).\n\nBut I get similar time when I run like this:\n\ntime psql -p 5432 -f slow.sql db_name > slow-9.2-again.explain\n\nreal 1m56.353s\nuser 0m0.068s\nsys 0m0.020s\n\nslow-9.2-again.explain: http://explain.depesz.com/s/zF1\n\n>> Let me explain how the application works, how the database was designed and\n>> hopefully you'll be able to guide me in the correct way to design the\n>> database for this use case.\n>>\n>> Our application will present a big contract to some attorneys. There is\n>> currently a dynamic template with around 800 fields to be extracted from\n>> each contract in our system. These fields can be of different types\n>> (boolean, string, number, currency, percents, fixed options, dates,\n>> time-spans and so on). There is a fields tree that is maintained by the\n>> application editors. The application will allow the attorneys to read the\n>> contracts and highlight parts of the contract where they extracted each\n>> field from and associate each field with its value interpreted by the\n>> attorney and store the reference to what paragraphs in the contract\n>> demonstrate where the value came from.\n>>\n>> Then there is an interface that will allow clients to search for\n>> transactions based on its associated contracts and those ~800 fields. For\n>> the particular query above, 14 of the 800 fields have been searched by this\n>> particular user (most of them were boolean ones plus a few options and a\n>> string field). Usually the queries perform much better when less than 10\n>> fields are used in the criteria. 
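A minimal sketch of the hstore plus GIN approach suggested above; the contract_attributes column does not exist in the current schema, and the key names are purely illustrative:

    create extension if not exists hstore;

    alter table company_transaction add column contract_attributes hstore;
    create index company_transaction_attrs_gin
        on company_transaction using gin (contract_attributes);

    -- equality-style criteria collapse into one indexable containment predicate:
    select id
      from company_transaction
     where contract_attributes @> 'state=>Delaware, paid=>Y'::hstore;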
But our client wants us to handle up to 20\n>> fields in a single query or they won't close the deal and this is a really\n>> important client to us.\n>>\n>> So, for the time being my only plan is to rollback to PG 9.1 and replace my\n>> query builder that currently generate queries like slow.sql and change it to\n>> generate the queries like fast.sql but I'm pretty sure this approach should\n>> be avoided. I just don't know any other alternative for the time being.\n>>\n>> What database design would you recommend me for this use case?\n> I would strongly consider investigation of hstore type along with\n> gist/gin index.\n> select * from company_transaction where contract_attributes @>\n> 'State=>Delaware, Paid=Y';\n> etc\n\nI'm not very familiar with hstore yet but this was one of the reasons I \nwanted to migrate to PG 9.2 but I won't be able to migrate the \napplication quickly to use hstore.\n\nAlso, I'm not sure if hstore allows us to be as flexible as we currently \nare (c1 and (c2 or c3 and not (c4 and c5))). c == condition\n\n> Barring that, I would then consider complete elimination of integer\n> proxies for your variables. They make your query virtually impossible\n> to read/write, and they don't help.\n\nI'm not sure if I understood what you're talking about. The template is \ndynamic and contains lots of information for each field, like type \n(number, percent, string, date, etc), parent_id (auto-referencing), \naggregator_id (also auto-referencing) and several other columns. But the \nvalues associate the field id (type_id) and the transaction id in a \nunique way (see unique index in my first message of the thread). Then I \nneed different tables to store the actual value because we're using SQL \ninstead of MongoDB or something else. The table that stores the value \ndepend on the field type.\n\nMaybe it would help me to understand if you could provide some example \nfor the design you're proposing.\n\nThank you very much,\nRodrigo.\n\n\n", "msg_date": "Tue, 06 Nov 2012 16:57:00 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Rodrigo Rosenfeld Rosas <[email protected]> writes:\n> Em 06-11-2012 16:42, Merlin Moncure escreveu:\n>> Hm -- looking at your 'slow' 9.2 query, it is reporting that the query\n>> took 3 seconds (reported times are in milliseconds). How are you\n>> timing the data? 
What happens when you run explain analyze\n>> <your_query> from psql (as in, how long does it take)?\n\n> The time I reported in the tables of my first message were the time \n> reported by pgAdmin3 (compiled from source).\n\n> But I get similar time when I run like this:\n\n> time psql -p 5432 -f slow.sql db_name > slow-9.2-again.explain\n\n> real 1m56.353s\n> user 0m0.068s\n> sys 0m0.020s\n\n> slow-9.2-again.explain: http://explain.depesz.com/s/zF1\n\nBut that again shows only five seconds runtime. If you repeat the query\nseveral dozen times in a row, run the same way each time, do you get\nconsistent timings?\n\nCan you put together a self-contained test case to duplicate these\nresults? I'm prepared to believe there's some sort of planner\nregression involved here, but we'll never find it without a test case.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 Nov 2012 14:24:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Em 06-11-2012 17:24, Tom Lane escreveu:\n> Rodrigo Rosenfeld Rosas<[email protected]> writes:\n>> Em 06-11-2012 16:42, Merlin Moncure escreveu:\n>>> Hm -- looking at your 'slow' 9.2 query, it is reporting that the query\n>>> took 3 seconds (reported times are in milliseconds). How are you\n>>> timing the data? What happens when you run explain analyze\n>>> <your_query> from psql (as in, how long does it take)?\n>> The time I reported in the tables of my first message were the time\n>> reported by pgAdmin3 (compiled from source).\n>> But I get similar time when I run like this:\n>> time psql -p 5432 -f slow.sql db_name> slow-9.2-again.explain\n>> real 1m56.353s\n>> user 0m0.068s\n>> sys 0m0.020s\n>> slow-9.2-again.explain: http://explain.depesz.com/s/zF1\n> But that again shows only five seconds runtime. If you repeat the query\n> several dozen times in a row, run the same way each time, do you get\n> consistent timings?\n\nYes, the timings are consistent here.\n\n> Can you put together a self-contained test case to duplicate these\n> results? I'm prepared to believe there's some sort of planner\n> regression involved here, but we'll never find it without a test case.\n\nI'd love to, but I'm afraid I won't have time to do this any time soon. \nMaybe on Sunday. I'll see if I can get a script to generate the database \non Sunday and hope for it to replicate the issue.\n\nWould you mind if I coded it using Ruby? (can you run Ruby code in your \ncomputer?) I mean, for filling with some sample data.\n\nRight now I need to concentrate on getting a working solution for 9.1 \nand downgrade the database and work in several other requested fixes. \nThat is why I'm out of time for writing this test case right now... I'll \ntry to find some time on Sunday and will post here if I can replicate.\n\nThank you so much!\n\nRodrigo.\n\n\n", "msg_date": "Tue, 06 Nov 2012 17:31:15 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Rodrigo Rosenfeld Rosas <[email protected]> writes:\n> Em 06-11-2012 17:24, Tom Lane escreveu:\n>> Can you put together a self-contained test case to duplicate these\n>> results? I'm prepared to believe there's some sort of planner\n>> regression involved here, but we'll never find it without a test case.\n\n> I'd love to, but I'm afraid I won't have time to do this any time soon. 
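As a rough guide, a self-contained test case for this kind of report is usually just the schema plus synthetic rows; a hedged sketch of the shape, reusing table and column names from the schema quoted earlier (row counts are arbitrary):

    create table transaction_condition (
        id             serial primary key,
        transaction_id integer not null,
        type_id        integer not null,
        unique (transaction_id, type_id)
    );

    -- 1000 transactions x 800 fields of synthetic data
    insert into transaction_condition (transaction_id, type_id)
    select t, ty
      from generate_series(1, 1000) as t,
           generate_series(1, 800)  as ty;

    analyze transaction_condition;

The value tables can be filled the same way, after which the slow.sql/fast.sql EXPLAINs can be reproduced without any client data.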
\n> Maybe on Sunday. I'll see if I can get a script to generate the database \n> on Sunday and hope for it to replicate the issue.\n\n> Would you mind if I coded it using Ruby? (can you run Ruby code in your \n> computer?) I mean, for filling with some sample data.\n\nNo objection.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 Nov 2012 14:45:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "On Tue, Nov 6, 2012 at 1:45 PM, Tom Lane <[email protected]> wrote:\n> Rodrigo Rosenfeld Rosas <[email protected]> writes:\n>> Em 06-11-2012 17:24, Tom Lane escreveu:\n>>> Can you put together a self-contained test case to duplicate these\n>>> results? I'm prepared to believe there's some sort of planner\n>>> regression involved here, but we'll never find it without a test case.\n>\n>> I'd love to, but I'm afraid I won't have time to do this any time soon.\n>> Maybe on Sunday. I'll see if I can get a script to generate the database\n>> on Sunday and hope for it to replicate the issue.\n>\n>> Would you mind if I coded it using Ruby? (can you run Ruby code in your\n>> computer?) I mean, for filling with some sample data.\n>\n> No objection.\n\nhm, wouldn't timing the time to generate a raw EXPLAIN (that is,\nwithout ANALYZE) give a rough estimate of planning time? better to\nrule it out before OP goes to the trouble...\n\nmerlin\n\n", "msg_date": "Tue, 6 Nov 2012 15:11:08 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> hm, wouldn't timing the time to generate a raw EXPLAIN (that is,\n> without ANALYZE) give a rough estimate of planning time? better to\n> rule it out before OP goes to the trouble...\n\nWell, we still wouldn't know *why* there was a problem ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 Nov 2012 16:18:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "On Tue, Nov 6, 2012 at 12:57 PM, Rodrigo Rosenfeld Rosas\n<[email protected]> wrote:\n> I would strongly consider investigation of hstore type along with\n> gist/gin index.\n> select * from company_transaction where contract_attributes @>\n> 'State=>Delaware, Paid=Y';\n> etc\n>\n>\n> I'm not very familiar with hstore yet but this was one of the reasons I\n> wanted to migrate to PG 9.2 but I won't be able to migrate the application\n> quickly to use hstore.\n\nsure -- it's a major change. note though that 9.1 hstore has\neverything you need.\n\n> Also, I'm not sure if hstore allows us to be as flexible as we currently are\n> (c1 and (c2 or c3 and not (c4 and c5))). c == condition\n\nyour not gated from that functionality, although making complicated\nexpressions might require some thought and defeat some or all of GIST\noptimization. that said, nothing is keeping you from doing:\n\nwhere fields @> 'c1=>true, c2=>45' and not (fields @> 'c3=>false, c4=>xyz');\n\nrange searches would completely bypass GIST. so that:\nselect * from foo where attributes -> 'somekey' between 'value1' and 'value2';\n\nwould work but would be brute force. 
Still, with a little bit of\nthough, you should be able to optimize most common cases and when it\nboils down to straight filter (a and b and c) you'll get an orders of\nmagnitude faster query.\n\n>> Barring that, I would then consider complete elimination of integer\n> proxies for your variables. They make your query virtually impossible\n> to read/write, and they don't help.\n>\n> I'm not sure if I understood what you're talking about. The template is\n> dynamic and contains lots of information for each field, like type (number,\n> percent, string, date, etc), parent_id (auto-referencing), aggregator_id\n> (also auto-referencing) and several other columns. But the values associate\n> the field id (type_id) and the transaction id in a unique way (see unique\n> index in my first message of the thread). Then I need different tables to\n> store the actual value because we're using SQL instead of MongoDB or\n> something else. The table that stores the value depend on the field type.\n\nWell, that's probably a mistake. It's probably better to have a\nsingle table with a text field (which is basically a variant) and a\n'type' column storing the type of it if you need special handling down\nthe line. One thing I'm sure of is that abstracting type behind\ntype_id is doing nothing but creating needless extra work. You're\ndoing all kinds of acrobatics to fight the schema by hiding it under\nvarious layers of abstraction.\n\nmerlin\n\n", "msg_date": "Tue, 6 Nov 2012 15:48:02 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Em 06-11-2012 19:11, Merlin Moncure escreveu:\n> On Tue, Nov 6, 2012 at 1:45 PM, Tom Lane<[email protected]> wrote:\n>> Rodrigo Rosenfeld Rosas<[email protected]> writes:\n>>> Em 06-11-2012 17:24, Tom Lane escreveu:\n>>>> Can you put together a self-contained test case to duplicate these\n>>>> results? I'm prepared to believe there's some sort of planner\n>>>> regression involved here, but we'll never find it without a test case.\n>>> I'd love to, but I'm afraid I won't have time to do this any time soon.\n>>> Maybe on Sunday. I'll see if I can get a script to generate the database\n>>> on Sunday and hope for it to replicate the issue.\n>>> Would you mind if I coded it using Ruby? (can you run Ruby code in your\n>>> computer?) I mean, for filling with some sample data.\n>> No objection.\n> hm, wouldn't timing the time to generate a raw EXPLAIN (that is,\n> without ANALYZE) give a rough estimate of planning time? better to\n> rule it out before OP goes to the trouble...\n\nThis was a great guess! Congrats, Merlin:\n\nPG 9.1 (port 5433):\n\ntime psql -p 5433 -f slow-explain-only.sql db_name > /dev/null\n\nreal 0m0.284s\nuser 0m0.068s\nsys 0m0.012s\n\ntime psql -p 5432 -f slow-explain-only.sql db_name > /dev/null\n\nreal 2m10.409s\nuser 0m0.056s\nsys 0m0.016s\n\ntime psql -p 5433 -f fast-explain-only.sql db_name > /dev/null\n\nreal 0m0.264s\nuser 0m0.064s\nsys 0m0.020s\n\ntime psql -p 5432 -f fast-explain-only.sql db_name > /dev/null\n\nreal 12m25.084s\nuser 0m0.052s\nsys 0m0.020s\n\n\nThis is great news because it makes it easier for me to provide a \ntest-case since the results were the same in my test database (which is \nmostly empty):\n\ntime psql -p 5432 -f fast-explain-only.sql db_test > /dev/null\n\nreal 6m0.414s\nuser 0m0.064s\nsys 0m0.024s\n\nI'm in Brazil which is 3 hours behind NY, where my client is. 
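Going back to the single-value-table idea Merlin describes above (one text column acting as a variant plus a type tag), a hedged sketch of what that might look like; all names here are illustrative and not taken from the actual schema:

    create table field_value (
        id             serial  primary key,
        transaction_id integer not null,
        field_id       integer not null,   -- what the thread calls type_id
        field_type     text    not null,   -- 'boolean', 'string', 'number', ...
        value          text    not null,   -- variant, stored as text
        unique (transaction_id, field_id)
    );
    create index field_value_field_value_idx
        on field_value (field_id, value);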
Later when \nthey start their journey I'll ask them if I can send our plain database \nschema to make it even easier. Otherwise, if they prefer me to create \nanother database schema or to drop the unrelated tables first I'll do \nthat. Maybe they could be afraid of SQL injection attacks although I \nbelieve we're currently free of errors of this nature in our applications.\n\nThank you so much for narrowing down the real problem with 9.2.\n\nAfter this regression is fixed in 9.2 I'd like to know if it would be \npossible to optimize the planner so that slow.sql could perform as well \nas fast.sql. I believe the unique index on (transaction_id, type_id) \nhelps slow.sql to perform better but if the planner could be smart \nenough to understand that slow.sql and fast.sql are equivalents I'd \nprefer to use slow.sql instead of fast.sql as it reads better and it is \neasier to maintain and write tests for and reduces our database log files.\n\nCheers,\nRodrigo.\n\n\n", "msg_date": "Wed, 07 Nov 2012 09:16:09 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Em 06-11-2012 19:48, Merlin Moncure escreveu:\n> On Tue, Nov 6, 2012 at 12:57 PM, Rodrigo Rosenfeld Rosas\n> <[email protected]> wrote:\n>> I would strongly consider investigation of hstore type along with\n>> gist/gin index.\n>> select * from company_transaction where contract_attributes @>\n>> 'State=>Delaware, Paid=Y';\n>> etc\n>>\n>>\n>> I'm not very familiar with hstore yet but this was one of the reasons I\n>> wanted to migrate to PG 9.2 but I won't be able to migrate the application\n>> quickly to use hstore.\n> sure -- it's a major change. note though that 9.1 hstore has\n> everything you need.\n\nGreat to know.\n\n>> Also, I'm not sure if hstore allows us to be as flexible as we currently are\n>> (c1 and (c2 or c3 and not (c4 and c5))). c == condition\n> your not gated from that functionality, although making complicated\n> expressions might require some thought and defeat some or all of GIST\n> optimization. that said, nothing is keeping you from doing:\n>\n> where fields @> 'c1=>true, c2=>45' and not (fields @> 'c3=>false, c4=>xyz');\n>\n> range searches would completely bypass GIST. so that:\n> select * from foo where attributes -> 'somekey' between 'value1' and 'value2';\n>\n> would work but would be brute force. Still, with a little bit of\n> though, you should be able to optimize most common cases and when it\n> boils down to straight filter (a and b and c) you'll get an orders of\n> magnitude faster query.\n\nThen I'm not sure if hstore would speed up anything because except for \nboolean fields most types won't use the equal (=) operator.\n\nFor instance, for numeric types (number, percent, currency) and dates it \nis more usual to use something like (>), (<) or (between) than (=). For \nstrings we use ILIKE operator instead of (=).\n\n>>> Barring that, I would then consider complete elimination of integer\n>> proxies for your variables. They make your query virtually impossible\n>> to read/write, and they don't help.\n>>\n>> I'm not sure if I understood what you're talking about. The template is\n>> dynamic and contains lots of information for each field, like type (number,\n>> percent, string, date, etc), parent_id (auto-referencing), aggregator_id\n>> (also auto-referencing) and several other columns. 
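On the point above that most of these fields are filtered with range or pattern operators rather than equality, per-key btree expression indexes are one hedged complement to the hstore/GIN idea; contract_attributes and the 'state' key are the same assumed names as in the earlier sketch, and leading-wildcard ILIKE would still not be helped by a plain btree:

    create index company_transaction_state_idx
        on company_transaction ((contract_attributes -> 'state'));

    -- equality and range comparisons on that key can then use the btree index:
    select id
      from company_transaction
     where (contract_attributes -> 'state') = 'Delaware';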
But the values associate\n>> the field id (type_id) and the transaction id in a unique way (see unique\n>> index in my first message of the thread). Then I need different tables to\n>> store the actual value because we're using SQL instead of MongoDB or\n>> something else. The table that stores the value depend on the field type.\n> Well, that's probably a mistake. It's probably better to have a\n> single table with a text field (which is basically a variant) and a\n> 'type' column storing the type of it if you need special handling down\n> the line.\n\nThis would require tons of run-time conversions that would not be \nindexable (dates, numbers, etc). I thought that approach would be much \nslower. The user can also sort the results by any field and the sort \noperation could also become too slow with all those run-time conversions \nin place.\n\n> One thing I'm sure of is that abstracting type behind\n> type_id is doing nothing but creating needless extra work.\n\nYou said that in the other message and I asked for an example when I \ntold you why I need a separate table for storing all field data. I still \ndon't understand what you mean, that is why I asked for some example. I \nguess the main problem here is terminology because when I joined this \nproject I had the same problems I think you're having to understand the \nquery.\n\nCurrently there is a \"condition_type\" table that actually should be \ncalled \"contract_fields\" as it contains the possible fields to be \nextracted from some contract using our clients' terminology. In this \ntable we find the label of the field, its actual data type (string, \ncurrency, date, etc) among several other database fields.\n\nSo, \"type_id\" should actually be called \"field_id\" or \n\"contract_field_id\". It doesn't hold only the data type.\n\nThen we have a table called \"transaction_condition\" where I would call \nit \"field_value\" or \"transaction_field_value\" (I simplified before since \na transaction can have multiple contracts but the field is actually part \nof a transaction, not of some contract really - we have a references \ntable that will join the contract and position (paragraph,etc) in the \ncontract to the transaction).\n\nSo I can see two options here. We could either have a column of each \ntype in \"transaction_condition\" (or \"field_value\" as I would call it) \nand create an index for each column, or we could have different tables \nto store the values. It wasn't me who decided what approach to take some \nyears ago when this database was designed (I have not joined this \nproject by then). But I'm not sure either what approach I would have \ntaken. I would probably perform some benchmarks first before deciding \nwhich one to choose.\n\nBut I guess you're seeing a third approach I'm unable to understand, \nalthough I'd love to understand your proposal. Could you please provide \nsome example?\n\n> You're doing all kinds of acrobatics to fight the schema by hiding it under\n> various layers of abstraction.\n\nWe do that because we don't see another option. We'd love to know about \nany suggestions to improve our design. 
We don't like to complicate \nsimple stuff ;)\n\nThanks in advance,\nRodrigo.\n\n\n", "msg_date": "Wed, 07 Nov 2012 09:42:15 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Please try LIMIT 1 in exists\n\n exists(select id from condition_document_excerpt where\ncondition_id=c1686.id LIMIT 1) as v1686_has_reference\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-completed-in-1s-in-PG-9-1-and-700s-in-PG-9-2-tp5730899p5731021.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 7 Nov 2012 07:23:32 -0800 (PST)", "msg_from": "aasat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "On Wed, Nov 7, 2012 at 5:16 AM, Rodrigo Rosenfeld Rosas\n<[email protected]> wrote:\n> Em 06-11-2012 19:11, Merlin Moncure escreveu:\n>\n>> On Tue, Nov 6, 2012 at 1:45 PM, Tom Lane<[email protected]> wrote:\n>>>\n>>> Rodrigo Rosenfeld Rosas<[email protected]> writes:\n>>>>\n>>>> Em 06-11-2012 17:24, Tom Lane escreveu:\n>>>>>\n>>>>> Can you put together a self-contained test case to duplicate these\n>>>>> results? I'm prepared to believe there's some sort of planner\n>>>>> regression involved here, but we'll never find it without a test case.\n>>>>\n>>>> I'd love to, but I'm afraid I won't have time to do this any time soon.\n>>>> Maybe on Sunday. I'll see if I can get a script to generate the database\n>>>> on Sunday and hope for it to replicate the issue.\n>>>> Would you mind if I coded it using Ruby? (can you run Ruby code in your\n>>>> computer?) I mean, for filling with some sample data.\n>>>\n>>> No objection.\n>>\n>> hm, wouldn't timing the time to generate a raw EXPLAIN (that is,\n>> without ANALYZE) give a rough estimate of planning time? better to\n>> rule it out before OP goes to the trouble...\n>\n> This was a great guess! Congrats, Merlin:\n\nHeh -- that was tom's guess, not mine. What this does is confirm the\nplanner regression and that elevates the importance of Tom's request\nto get sample data so we (he) can fix it.\n\nmerlin\n\n", "msg_date": "Wed, 7 Nov 2012 10:34:54 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Wed, Nov 7, 2012 at 5:16 AM, Rodrigo Rosenfeld Rosas\n> <[email protected]> wrote:\n>> This was a great guess! Congrats, Merlin:\n\n> Heh -- that was tom's guess, not mine. What this does is confirm the\n> planner regression and that elevates the importance of Tom's request\n> to get sample data so we (he) can fix it.\n\nWell, the fact that it's a planner runtime problem and not a\nquality-of-plan problem is new information (I'd been assuming the\nlatter). Given that, it's possible it's already fixed:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ca2d6a6cef5740b29406980eb8d21d44da32634b\nbut I'd still want to see a test case to be sure. 
In any case,\nit's not clear what's the critical difference between the \"fast\" and\n\"slow\" versions of the query.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 07 Nov 2012 11:58:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Em 07-11-2012 14:34, Merlin Moncure escreveu:\n> On Wed, Nov 7, 2012 at 5:16 AM, Rodrigo Rosenfeld Rosas\n> <[email protected]> wrote:\n>> Em 06-11-2012 19:11, Merlin Moncure escreveu:\n>>\n>>> On Tue, Nov 6, 2012 at 1:45 PM, Tom Lane<[email protected]> wrote:\n>>>> Rodrigo Rosenfeld Rosas<[email protected]> writes:\n>>>>> Em 06-11-2012 17:24, Tom Lane escreveu:\n>>>>>> Can you put together a self-contained test case to duplicate these\n>>>>>> results? I'm prepared to believe there's some sort of planner\n>>>>>> regression involved here, but we'll never find it without a test case.\n>>>>> I'd love to, but I'm afraid I won't have time to do this any time soon.\n>>>>> Maybe on Sunday. I'll see if I can get a script to generate the database\n>>>>> on Sunday and hope for it to replicate the issue.\n>>>>> Would you mind if I coded it using Ruby? (can you run Ruby code in your\n>>>>> computer?) I mean, for filling with some sample data.\n>>>> No objection.\n>>> hm, wouldn't timing the time to generate a raw EXPLAIN (that is,\n>>> without ANALYZE) give a rough estimate of planning time? better to\n>>> rule it out before OP goes to the trouble...\n>> This was a great guess! Congrats, Merlin:\n> Heh -- that was tom's guess, not mine. What this does is confirm the\n> planner regression and that elevates the importance of Tom's request\n> to get sample data so we (he) can fix it.\n\nTrue, sorry :) So, thanks Tom! I have some good news. It seems I'll be \nable to send the schema after just stripping a few parts of the schema \nfirst.\n\nRight now I have to leave but I think I'll have some time to do this \ntomorrow, so I hope I can send you the test case tomorrow.\n\nAs a curious note I tried running a query with 11 fields (instead of 14 \nfields like in the example I gave you) and I didn't experience any \nproblems...\n\nThank you very much you both!\n\n", "msg_date": "Wed, 07 Nov 2012 18:29:11 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Em 07-11-2012 14:58, Tom Lane escreveu:\n> Merlin Moncure<[email protected]> writes:\n>> On Wed, Nov 7, 2012 at 5:16 AM, Rodrigo Rosenfeld Rosas\n>> <[email protected]> wrote:\n>>> This was a great guess! Congrats, Merlin:\n>> Heh -- that was tom's guess, not mine. What this does is confirm the\n>> planner regression and that elevates the importance of Tom's request\n>> to get sample data so we (he) can fix it.\n> Well, the fact that it's a planner runtime problem and not a\n> quality-of-plan problem is new information (I'd been assuming the\n> latter). Given that, it's possible it's already fixed:\n> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ca2d6a6cef5740b29406980eb8d21d44da32634b\n> but I'd still want to see a test case to be sure. 
In any case,\n> it's not clear what's the critical difference between the \"fast\" and\n> \"slow\" versions of the query.\n>\n> \t\t\tregards, tom lane\n\nOk, I could finally strip part of my database schema that will allow you \nto run the explain query and reproduce the issue.\n\nThere is a simple SQL dump in plain format that you can restore both on \n9.1 and 9.2 and an example EXPLAIN query so that you can see the \ndifference between both versions.\n\nPlease keep me up to date with regards to any progress. Let me know if \nthe commit above fixed this issue.\n\nThanks in advance,\n\nRodrigo.", "msg_date": "Wed, 07 Nov 2012 19:38:12 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Rodrigo Rosenfeld Rosas <[email protected]> writes:\n> Ok, I could finally strip part of my database schema that will allow you \n> to run the explain query and reproduce the issue.\n\n> There is a simple SQL dump in plain format that you can restore both on \n> 9.1 and 9.2 and an example EXPLAIN query so that you can see the \n> difference between both versions.\n\n> Please keep me up to date with regards to any progress. Let me know if \n> the commit above fixed this issue.\n\nAFAICT, HEAD and 9.2 branch tip plan this query a bit faster than 9.1\ndoes. It does appear that the problem is the same one fixed in that\nrecent commit: the problem is you've got N join clauses all involving\nt.id and so there are lots of redundant ways to use the index on t.id.\n\nI've got to say though that this is one of the most bizarre database\nschemas I've ever seen. It seems to be sort of an unholy combination of\nEAV and a star schema. A star schema might not actually be a bad model\nfor what you're trying to do, but what you want for that is one big fact\ntable and a collection of *small* detail tables you join to it (small\nmeaning just one entry per possible value). The way this is set up, you\nneed to join two or three tables before you can even join to the main\nfact table - and those tables don't even have the virtue of being small.\nThat's never going to perform well.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 07 Nov 2012 19:58:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Em 07-11-2012 22:58, Tom Lane escreveu:\n> Rodrigo Rosenfeld Rosas<[email protected]> writes:\n>> Ok, I could finally strip part of my database schema that will allow you\n>> to run the explain query and reproduce the issue.\n>> There is a simple SQL dump in plain format that you can restore both on\n>> 9.1 and 9.2 and an example EXPLAIN query so that you can see the\n>> difference between both versions.\n>> Please keep me up to date with regards to any progress. Let me know if\n>> the commit above fixed this issue.\n> AFAICT, HEAD and 9.2 branch tip plan this query a bit faster than 9.1\n> does.\n\nGreat! What is the estimate for 9.2.2 release?\n\n> It does appear that the problem is the same one fixed in that\n> recent commit: the problem is you've got N join clauses all involving\n> t.id and so there are lots of redundant ways to use the index on t.id.\n\nAnd what is the reason why fast.sql performs much better than slow.sql? 
\nIs it possible to optimize the planner so that both fast.sql and \nslow.sql finish about the same time?\n\n> I've got to say though that this is one of the most bizarre database\n> schemas I've ever seen.\n\nMerlin seems to share your opinion on that. I'd love to try a different \ndatabase design when I have a chance.\n\nWhat would you guys suggest me for handling my application requirements?\n\nThe only reason it is bizarre is because I have no idea on how to \nsimplify much our database design using relational databases. And pstore \nalso doesn't sound like a reasonable option either for our requirements.\n\nThe only other option I can think of is stop splitting \ntransaction_condition in many tables (one for each data type). Then I'd \nneed to include all possible columns in transaction_condition and I'm \nnot sure if it would perform better and what would be the implications \nwith regards to the database size since most columns will be null for \neach record. This also introduces another issue. I would need to create \na trigger to detect if the record is valid upon insertion to avoid \ncreating records with all columns set to NULL for instance. Currently \neach separate table that store the values have not-null constraints \namong others to prevent this kind of problem. Triggers are more \ncomplicated to maintain, specially because we're used to using an ORM \n(except for this particular case where I generate the SQL query manually \ninstead of using an ORM for this).\n\nAlso, we migrate the database using standalone_migrations:\n\nhttps://github.com/thuss/standalone-migrations\n\nIf we change a single line in the trigger code it won't be easy to see \nwhat line has changed in the commit that introduces the change because \nwe would have to create a separate migration to alter the trigger with \nall code repeated.\n\n> It seems to be sort of an unholy combination of\n> EAV and a star schema. A star schema might not actually be a bad model\n> for what you're trying to do, but what you want for that is one big fact\n> table and a collection of *small* detail tables you join to it (small\n> meaning just one entry per possible value). The way this is set up, you\n> need to join two or three tables before you can even join to the main\n> fact table - and those tables don't even have the virtue of being small.\n> That's never going to perform well.\n\nIf I understand correctly, you're suggesting that I dropped \ntransaction_condition(id, transaction_id, type_id) and replaced \ncondition_boolean_value(id, condition_id, value) with \ncondition_boolean_value(id, transaction_id, type_id, value) and repeat \nthe same idea for the other tables.\n\nIs that right? Would that perform much better? If you think so, I could \ntry this approach when I find some time. But I'd also need to \ndenormalize other related tables I didn't send in the schema dump. For \ninstance, the documents snippets have also a condition_id column. Each \nfield value (transaction_condition) can have multiple contract snippets \nin a table called condition_document_excerpt(id, document_id, \ncondition_id, \"position\"). I'd need to remove condition_id from it and \nappend transaction_id and type_id just like the values tables. 
No big \ndeal if this would speed up our queries.\n\nAm I missing something?", "msg_date": "Thu, 08 Nov 2012 13:30:39 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" }, { "msg_contents": "Rodrigo Rosenfeld Rosas escribió:\n> Em 07-11-2012 22:58, Tom Lane escreveu:\n> >Rodrigo Rosenfeld Rosas<[email protected]> writes:\n> >>Ok, I could finally strip part of my database schema that will allow you\n> >>to run the explain query and reproduce the issue.\n> >>There is a simple SQL dump in plain format that you can restore both on\n> >>9.1 and 9.2 and an example EXPLAIN query so that you can see the\n> >>difference between both versions.\n> >>Please keep me up to date with regards to any progress. Let me know if\n> >>the commit above fixed this issue.\n> >AFAICT, HEAD and 9.2 branch tip plan this query a bit faster than 9.1\n> >does.\n> \n> Great! What is the estimate for 9.2.2 release?\n\nHasn't been announced, but you can grab a snapshot right now from\nftp.postgresql.org if you want.\n\n-- \nÁlvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Thu, 8 Nov 2012 12:38:45 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG 9.2" }, { "msg_contents": "Em 08-11-2012 13:38, Alvaro Herrera escreveu:\n> Rodrigo Rosenfeld Rosas escribió:\n>> Em 07-11-2012 22:58, Tom Lane escreveu:\n>>> Rodrigo Rosenfeld Rosas<[email protected]> writes:\n>>>> Ok, I could finally strip part of my database schema that will allow you\n>>>> to run the explain query and reproduce the issue.\n>>>> There is a simple SQL dump in plain format that you can restore both on\n>>>> 9.1 and 9.2 and an example EXPLAIN query so that you can see the\n>>>> difference between both versions.\n>>>> Please keep me up to date with regards to any progress. Let me know if\n>>>> the commit above fixed this issue.\n>>> AFAICT, HEAD and 9.2 branch tip plan this query a bit faster than 9.1\n>>> does.\n>> Great! 
What is the estimate for 9.2.2 release?\n> Hasn't been announced, but you can grab a snapshot right now from\n> ftp.postgresql.org if you want.\n\nThank you, Álvaro, but I prefer to use official Debian packages instead \nsince they are easier to manage and more integrated to our OS.\n\nFor now I have rolled back to 9.1 this morning and it is working fine, \nso I don't have any rush. I just want an estimate to know when I should \ntry upgrading 9.2 from experimental again after 9.2.2 is released.\n\nCheers,\nRodrigo.\n\n\n", "msg_date": "Thu, 08 Nov 2012 13:43:58 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query completed in < 1s in PG 9.1 and ~ 700s in PG\n 9.2" } ]
[ { "msg_contents": "\nI'm bringing up a new type of server using Intel E5-2620 (unisocket) \nwhich was selected for good SpecIntRate performance vs cost/power (201 \nfor $410 and 95W).\n\nWas assuming it was 6-core but I just noticed it has HT which is \ncurrently enabled since I see 12 cores in /proc/cpuinfo\n\nQuestion for the performance experts : is it better to have HT enabled \nor disabled for this generation of Xeon ?\nWorkload will be moderately concurrent, small OLTP type transactions. \nWe'll also run a few low-load VMs (using KVM) and a big Java application.\n\nAny thoughts welcome.\nThanks.\n\n\n\n\n\n\n\n", "msg_date": "Tue, 06 Nov 2012 20:31:06 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": true, "msg_subject": "HT on or off for E5-26xx ?" }, { "msg_contents": "On 07/11/12 16:31, David Boreham wrote:\n>\n> I'm bringing up a new type of server using Intel E5-2620 (unisocket) \n> which was selected for good SpecIntRate performance vs cost/power (201 \n> for $410 and 95W).\n>\n> Was assuming it was 6-core but I just noticed it has HT which is \n> currently enabled since I see 12 cores in /proc/cpuinfo\n>\n> Question for the performance experts : is it better to have HT enabled \n> or disabled for this generation of Xeon ?\n> Workload will be moderately concurrent, small OLTP type transactions. \n> We'll also run a few low-load VMs (using KVM) and a big Java application.\n>\n>\n>\n\nI've been benchmarking a E5-4640 (4 socket) and hyperthreading off gave \nmuch better scaling behaviour in pgbench (gentle rise and flatten off), \nwhereas with hyperthreading on there was a dramatic falloff after approx \nnumber clients = number of (hyperthreaded) cpus. The box is intended to \nbe a pure db server, so we are running with hyperthreading off.\n\nCheers\n\nMark\n\n", "msg_date": "Wed, 07 Nov 2012 17:16:20 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "On 11/6/2012 9:16 PM, Mark Kirkwood wrote:\n>\n>\n> I've been benchmarking a E5-4640 (4 socket) and hyperthreading off \n> gave much better scaling behaviour in pgbench (gentle rise and flatten \n> off), whereas with hyperthreading on there was a dramatic falloff \n> after approx number clients = number of (hyperthreaded) cpus. The box \n> is intended to be a pure db server, so we are running with \n> hyperthreading off.\n\nIt looks like this syndrome is not observed on my box, likely due to the \nmuch lower number of cores system-wide (12).\nI see pgbench tps increase nicely until #threads/clients == #cores, then \nplateau. I tested up to 96 threads btw.\n\nWe're waiting on more memory modules to arrive. I'll post some test \nresults once we have all 4 memory banks populated.\n\n\n\n", "msg_date": "Wed, 07 Nov 2012 06:33:19 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "Hi,\n\nOn Tue, 2012-11-06 at 20:31 -0700, David Boreham wrote:\n\n> Was assuming it was 6-core but I just noticed it has HT which is \n> currently enabled since I see 12 cores in /proc/cpuinfo\n> \n> Question for the performance experts : is it better to have H enabled\n> or disabled for this generation of Xeon ? Workload will be moderately\n> concurrent, small OLTP type transactions. We'll also run a few\n> low-load VMs (using KVM) and big Java application. 
\n\nHT should be good for file servers, or say many of the app servers, or\nsmall web/mail servers. PostgreSQL relies on the CPU power, and since\nthe HT CPUs don't have the same power as the original CPU, when OS\nsubmits a job to that particular HTed CPU, query will run significantly\nslow. To avoid issues, I would suggest you to turn HT off on all\nPostgreSQL servers. If you can throw some more money, another 6-core CPU\nwould give more benefit.\n\nRegards,\n\n-- \nDevrim GÜNDÜZ\nPrincipal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Wed, 07 Nov 2012 13:37:24 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "On 11/7/2012 6:37 AM, Devrim GÜNDÜZ wrote:\n> HT should be good for file servers, or say many of the app servers, or\n> small web/mail servers. PostgreSQL relies on the CPU power, and since\n> the HT CPUs don't have the same power as the original CPU, when OS\n> submits a job to that particular HTed CPU, query will run significantly\n> slow. To avoid issues, I would suggest you to turn HT off on all\n> PostgreSQL servers. If you can throw some more money, another 6-core CPU\n> would give more benefit.\nI realize this is the \"received knowledge\" but it is not supported by \nthe evidence before me (which is that I get nearly 2x the throughput \nfrom pgbench using nthreads == nhtcores vs nthreads == nfullcores). \nIntel's latest HT implementation seems to suffer less from the kinds of \nresource sharing contention issues seen in older generations.\n\nOnce I have the machine's full memory installed I'll run pgbench with HT \ndisabled in the BIOS and post the results.\n\n\n\n\n\n", "msg_date": "Wed, 07 Nov 2012 06:45:25 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "On 08/11/12 02:33, David Boreham wrote:\n> On 11/6/2012 9:16 PM, Mark Kirkwood wrote:\n>>\n>>\n>> I've been benchmarking a E5-4640 (4 socket) and hyperthreading off \n>> gave much better scaling behaviour in pgbench (gentle rise and \n>> flatten off), whereas with hyperthreading on there was a dramatic \n>> falloff after approx number clients = number of (hyperthreaded) cpus. \n>> The box is intended to be a pure db server, so we are running with \n>> hyperthreading off.\n>\n> It looks like this syndrome is not observed on my box, likely due to \n> the much lower number of cores system-wide (12).\n> I see pgbench tps increase nicely until #threads/clients == #cores, \n> then plateau. I tested up to 96 threads btw.\n>\n> We're waiting on more memory modules to arrive. I'll post some test \n> results once we have all 4 memory banks populated.\n>\n>\n>\n>\n\nInteresting - I was wondering if a single socket board would behave \ndifferently (immediately after posting of course)...I've got an i3 home \nsystem that scales nicely even with hyperthreading on (2 cores, 4 \ntyperthreads).\n\nCheers\n\nMark\n\n", "msg_date": "Thu, 08 Nov 2012 10:20:43 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "Well, the results are in and at least in this particular case \nconventional wisdom is overturned. 
Not a huge benefit, but throughput is \ndefinitely higher with HT enabled and nthreads >> ncores:\n\nHT off :\n\nbash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 48\nnumber of threads: 48\nduration: 600 s\nnumber of transactions actually processed: 2435711\ntps = 4058.667332 (including connections establishing)\ntps = 4058.796309 (excluding connections establishing)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 52.50 0.00 14.79 5.07 0.00 27.64\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 5700.30 0.10 13843.50 0.00 74.78 \n11.06 48.46 3.50 0.05 65.21\n\nHT on:\n\nbash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 48\nnumber of threads: 48\nduration: 600 s\nnumber of transactions actually processed: 2832463\ntps = 4720.668984 (including connections establishing)\ntps = 4720.750477 (excluding connections establishing)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 40.61 0.00 12.71 3.09 0.00 43.59\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 6197.10 14.80 16389.50 0.14 86.53 \n10.82 54.11 3.30 0.05 82.35\n\nSystem details:\n\nE5-2620 (6 core + HT 15Mb LL) 64G (4 channels with 16G 1333 modules), \nIntel 710 300G (which is faster than the smaller drives, note), \nSupermicro X9SRi-F Motherboard.\nCentOS 6.3 64-bit, PG 9.2.1 from the PGDG RPM repository. pgbench \nrunning locally on the server.\n\n\n\n", "msg_date": "Wed, 07 Nov 2012 20:16:44 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "On 11/07/2012 09:16 PM, David Boreham wrote:\n\n> bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48\n\nUnfortunately without -S, you're not really testing the processors. A \nregular pgbench can fluctuate a more than that due to writing and \ncheckpoints.\n\nFor what it's worth, our X5675's perform about 40-50% better with HT \nenabled. Not the 2x you might expect by doubling the amount of \n\"processors\", but it definitely didn't make things worse.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n", "msg_date": "Thu, 8 Nov 2012 07:58:31 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "On 11/8/2012 6:58 AM, Shaun Thomas wrote:\n> On 11/07/2012 09:16 PM, David Boreham wrote:\n>\n>> bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48\n>\n> Unfortunately without -S, you're not really testing the processors. A \n> regular pgbench can fluctuate a more than that due to writing and \n> checkpoints.\nHmm...my goal was to test with a workload close to our application's \n(which is heavy OLTP, small transactions and hence sensitive to I/O \ncommit rate).\nThe hypothesis I was testing was that enabling HT positively degrades \nperformance (which in my case it does not). 
I wasn't to be honest really \ntesting the additional benefit from HT, rather observing that it is \nnon-negative :)\n\nIf I have time I can run the select-only test for you and post the \nresults. The DB fits into memory so it will be a good CPU test.\n\n\n\n", "msg_date": "Thu, 08 Nov 2012 07:18:47 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HT on or off for E5-26xx ?" }, { "msg_contents": "Here are the SELECT only pgbench test results from my E5-2620 machine, \nwith HT on and off:\n\nHT off:\n\nbash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48 -S\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 48\nnumber of threads: 48\nduration: 600 s\nnumber of transactions actually processed: 25969680\ntps = 43281.392273 (including connections establishing)\ntps = 43282.476955 (excluding connections establishing)\n\nAll 6 cores saturated:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 81.42 0.00 18.21 0.00 0.00 0.37\n\nHT on:\n\nbash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48 -S\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 48\nnumber of threads: 48\nduration: 600 s\nnumber of transactions actually processed: 29934601\ntps = 49888.697225 (including connections establishing)\ntps = 49889.570754 (excluding connections establishing)\n\n12% of CPU showing as idle (whether that's true or not I'm not sure):\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 71.09 0.00 16.99 0.00 0.00 11.92\n\nSo for this particular test HT gives us the equivalent of about one \nextra core.\nIt does not reduce performance, rather increases performance slightly.\n\n\n\n\n", "msg_date": "Fri, 09 Nov 2012 07:34:13 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HT on or off for E5-26xx ?" } ]
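Context for the two kinds of pgbench runs in this thread: the -S (SELECT only) workload that shows the clean CPU scaling issues roughly one primary-key lookup per transaction against pgbench_accounts (about 10 million rows at scale factor 100), essentially the following, where :aid is a random account id pgbench picks for each transaction:

    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

The default TPC-B-like script used in the earlier runs adds several UPDATEs and an INSERT to every transaction, which is why those numbers are shaped more by WAL, checkpoint and disk behaviour than by the number of cores or hyperthreads.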
[ { "msg_contents": "Hello,\n\nI know I need to re-engineer this so it doesn't suck by design, so I'm\nwondering if there is some nifty PostgreSQL feature or best practice\nwhich may automagically do the best thing.\n\nI store information about documents which are tagged by string tags. The\nstructure is very simple:\n\nCREATE TABLE documents (\n id SERIAL NOT NULL,\n title TEXT NOT NULL,\n -- other fields --\n tags TEXT[] NOT NULL,\n flags INTEGER\n);\n\nCurrently, I have a GIN index on the tags field, and it works for searching:\n\nedem=> explain analyze select id,title,flags from documents where tags\n@> ARRAY['tag'];\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on documents (cost=8.00..12.01 rows=1 width=39)\n(actual time=0.067..0.086 rows=9 loops=1)\n Recheck Cond: (tags @> '{tag}'::text[])\n -> Bitmap Index Scan on documents_tags (cost=0.00..8.00 rows=1\nwidth=0) (actual time=0.053..0.053 rows=9 loops=1)\n Index Cond: (tags @> '{tag}'::text[])\n Total runtime: 0.135 ms\n(5 rows)\n\n\nThe other feature I need is a list of unique tags in all the documents,\ne.g.:\n\nedem=> explain analyze select distinct unnest(tags) as tag from documents;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=28.54..28.84 rows=24 width=42) (actual\ntime=0.261..0.307 rows=44 loops=1)\n -> Seq Scan on documents (cost=0.00..28.45 rows=36 width=42)\n(actual time=0.020..0.157 rows=68 loops=1)\n Total runtime: 0.419 ms\n(3 rows)\n\nThis is unfortunately slow (because I know the load will increase and\nthis will be a common operation).\n\nThe thing I was planning to do is create a separate table, with only the\nunique tags, and possibly an array of documents which have these tags,\nwhich will be maintained with UPDATE and INSERT triggers on the\ndocuments table, but then I remembered that the GIN index itself does\nsomething not unlike this method. Is there a way to make use of this\ninformation to get a list of unique tags?\n\nBarring that, what would you suggest for efficiently handing a classic\nstructure like this (meaning documents with tags)?", "msg_date": "Wed, 07 Nov 2012 16:21:47 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Unique values across a table of arrays - documents and tags" }, { "msg_contents": "Le 2012-11-07 à 10:21, Ivan Voras a écrit :\n\n> \n> This is unfortunately slow (because I know the load will increase and\n> this will be a common operation).\n> \n> The thing I was planning to do is create a separate table, with only the\n> unique tags, and possibly an array of documents which have these tags,\n> which will be maintained with UPDATE and INSERT triggers on the\n> documents table, but then I remembered that the GIN index itself does\n> something not unlike this method. Is there a way to make use of this\n> information to get a list of unique tags?\n> \n> Barring that, what would you suggest for efficiently handing a classic\n> structure like this (meaning documents with tags)?\n> \n\nCan you structure it as the \"classic\" many to many pattern:\n\ndocuments <-> taggings <-> tags\n\nUnique tags then becomes a plain seq scan on a smallish table (tags). 
To keep the ability to have a single field, you can hide the documents table behind a view that would do an array_agg, such as:\n\nSELECT documents.*, array_agg(taggings.tag)\nFROM documents JOIN tags ON tags.document_id = documents.id\nGROUP BY documents.*\n\nNot sure we can do GROUP BY documents.*, but if not, you list your columns individually.\n\nHope that helps!\nFrançois\n", "msg_date": "Wed, 7 Nov 2012 10:34:21 -0500", "msg_from": "=?iso-8859-1?Q?Fran=E7ois_Beausoleil?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique values across a table of arrays - documents and tags" }, { "msg_contents": "On 07/11/2012 16:34, François Beausoleil wrote:\n> Le 2012-11-07 à 10:21, Ivan Voras a écrit :\n\n>> Barring that, what would you suggest for efficiently handing a classic\n>> structure like this (meaning documents with tags)?\n> \n> Can you structure it as the \"classic\" many to many pattern:\n> \n> documents <-> taggings <-> tags\n\nYes, that is as you said, a classic solution to a classic problem :)\n\nIf needed, this is the way I will do it, but for now I'm asking if\nthere's something that can be done to avoid creating another table or\ntwo. The reason I'm asking is that I've often found that PostgreSQL can\ndo much more than I thought it can.", "msg_date": "Wed, 07 Nov 2012 16:38:43 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unique values across a table of arrays - documents and tags" }, { "msg_contents": "Maybe you could store the tags as fulltext words, query them using\nfulltext search, and use ts_stat to gather the list of words? Needs to\nbe benched of course.\nYou'll probably need to change the config to avoid stemming and stop words.\n\nFlorent\n\nOn Wed, Nov 7, 2012 at 4:38 PM, Ivan Voras <[email protected]> wrote:\n> On 07/11/2012 16:34, François Beausoleil wrote:\n>> Le 2012-11-07 à 10:21, Ivan Voras a écrit :\n>\n>>> Barring that, what would you suggest for efficiently handing a classic\n>>> structure like this (meaning documents with tags)?\n>>\n>> Can you structure it as the \"classic\" many to many pattern:\n>>\n>> documents <-> taggings <-> tags\n>\n> Yes, that is as you said, a classic solution to a classic problem :)\n>\n> If needed, this is the way I will do it, but for now I'm asking if\n> there's something that can be done to avoid creating another table or\n> two. The reason I'm asking is that I've often found that PostgreSQL can\n> do much more than I thought it can.\n>\n>\n>\n\n\n\n-- \nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source, Java EE based, Enterprise Content Management (ECM)\nhttp://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87\n\n", "msg_date": "Wed, 7 Nov 2012 16:54:40 +0100", "msg_from": "Florent Guillaume <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique values across a table of arrays - documents and tags" }, { "msg_contents": "On 07/11/2012 16:54, Florent Guillaume wrote:\n> Maybe you could store the tags as fulltext words, query them using\n> fulltext search, and use ts_stat to gather the list of words? 
Needs to\n> be benched of course.\n> You'll probably need to change the config to avoid stemming and stop words.\n\nI have thought of that but decided not to even try because fts will also\nuse GIN and the combination of fts on a text field + GIN cannot possibly\nbe faster than just GIN on an array...", "msg_date": "Wed, 07 Nov 2012 17:00:59 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unique values across a table of arrays - documents and tags" }, { "msg_contents": "On Wed, Nov 7, 2012 at 5:00 PM, Ivan Voras <[email protected]> wrote:\n> On 07/11/2012 16:54, Florent Guillaume wrote:\n>> Maybe you could store the tags as fulltext words, query them using\n>> fulltext search, and use ts_stat to gather the list of words? Needs to\n>> be benched of course.\n>> You'll probably need to change the config to avoid stemming and stop words.\n>\n> I have thought of that but decided not to even try because fts will also\n> use GIN and the combination of fts on a text field + GIN cannot possibly\n> be faster than just GIN on an array...\n\nUnless ts_stat itself is optimized to reach inside the index to fetch\nits statistics, but I don't think that's the case.\n\nFlorent\n\n-- \nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source, Java EE based, Enterprise Content Management (ECM)\nhttp://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87\n\n", "msg_date": "Wed, 7 Nov 2012 17:48:44 +0100", "msg_from": "Florent Guillaume <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unique values across a table of arrays - documents and tags" } ]
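A minimal sketch of the normalized layout discussed in this thread, assuming documents.id is (or can be made) the primary key; the tags/taggings table names, the index name and the view name are illustrative, not taken from the original schema:

    CREATE TABLE tags (
        id   serial PRIMARY KEY,
        name text NOT NULL UNIQUE
    );

    CREATE TABLE taggings (
        document_id integer NOT NULL REFERENCES documents(id),
        tag_id      integer NOT NULL REFERENCES tags(id),
        PRIMARY KEY (document_id, tag_id)
    );

    -- lookups by tag go through this index instead of the GIN index
    CREATE INDEX taggings_tag_id ON taggings(tag_id);

    -- "all unique tags" becomes a plain scan of a small table
    SELECT name FROM tags ORDER BY name;

    -- a view keeps the array-shaped interface the application already expects
    CREATE VIEW documents_tagged AS
    SELECT d.id, d.title, array_agg(t.name) AS tags
    FROM documents d
    JOIN taggings tg ON tg.document_id = d.id
    JOIN tags     t  ON t.id = tg.tag_id
    GROUP BY d.id, d.title;

This is only a sketch of the classic many-to-many approach described above; whether it beats the trigger-maintained tag table Ivan considered depends on the write and read mix.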
[ { "msg_contents": "We have a web application where we create a schema or a database with a\nnumber of tables in it for each customer. Now we have about 2600 clients.\n\nThe problem we met using a separate DB for each client is that the creation\nof new DB can take up to 2 minutes, that is absolutely unacceptable. Using\nschemes instead (one DB with a number of schemes containing similar tables\nin it) solved this problem (schemes are created in a couple of seconds), but\ncreated two other blocking points:\n1. sometimes creation of a new table in a schema takes up to 5 seconds. In\ncase when we have create up to 40 tables in a schema this takes way too much\ntime.\n2. \"pg_dump -n schema_name db_name\" takes from 30 to 60 seconds, no matter\nhow big is the amount of data in the schema. Also, the dump of the tables\nstructure only takes at least 30 seconds. Basing on this topic\nhttp://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-td5709766i60.html,\npg_dump always analyses ALL the tables in the DB, i.e. in my case more than\n100 000 tables.\n\nI know you guys will ask me about selecting this particular application\narchitecture.\nThis architecture was chosen to ease the process of backup/restoring data\nand isolating client's data from each other. Sometimes clients ask us to\nrestore data for the last month or roll back to last week's state. This task\nis easy to accomplish then the client's data is isolated in a schema/DB. If\nwe put all the clients data in one table - operations of this kind will be\nmuch harder to perform. We will have to restore a huge DB with an enormously\nlarge tables in it to find the requested data. Sometime client even doesn't\nremember the exact date, he or she just say \"I lost my data somewhere\nbetween Tuesday and Friday last week\" and I have to restore backups for\nseveral days. If I have one huge table instead of small tables it will be a\nnightmare!\n \nDifferent clients have different activity rate and we can select different\nbackup strategies according to it. This would be impossible in case we keep\nall the clients data in one table. \nBesides all the above mentioned, the probability of massive data corruption\n(if an error in our web application occurs) is much higher. \n\n\nP.S.\nNot to start a holywar, but FYI: in a similar project where we used MySQL\nnow we have about 6000 DBs and everything works like a charm.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Thousands-databases-or-schemas-tp5731189.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 8 Nov 2012 01:36:16 -0800 (PST)", "msg_from": "Denis <[email protected]>", "msg_from_op": true, "msg_subject": "Thousands databases or schemas" }, { "msg_contents": "On Thu, Nov 8, 2012 at 1:36 AM, Denis <[email protected]> wrote:\n\n>\n> P.S.\n> Not to start a holywar, but FYI: in a similar project where we used MySQL\n> now we have about 6000 DBs and everything works like a charm.\n>\n\nYou seem to have answered your own question here. 
If my recollection of a\nprevious discussion about many schemas and pg_dump performance is accurate,\nI suspect you are going to be told that you've got a data architecture that\nis fairly incompatible with postgresql's architecture and you've\nspecifically ruled out a solution that would play to postgresql's strengths.\n\nOn Thu, Nov 8, 2012 at 1:36 AM, Denis <[email protected]> wrote:\n\nP.S.\nNot to start a holywar, but FYI: in a similar project where we used MySQL\nnow we have about 6000 DBs and everything works like a charm.You seem to have answered your own question here.  If my recollection of a previous discussion about many schemas and pg_dump performance is accurate, I suspect you are going to be told that you've got a data architecture that is fairly incompatible with postgresql's architecture and you've specifically ruled out a solution that would play to postgresql's strengths.", "msg_date": "Thu, 8 Nov 2012 02:31:52 -0800", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "Samuel Gendler wrote\n> On Thu, Nov 8, 2012 at 1:36 AM, Denis &lt;\n\n> socsam@\n\n> &gt; wrote:\n> \n>>\n>> P.S.\n>> Not to start a holywar, but FYI: in a similar project where we used MySQL\n>> now we have about 6000 DBs and everything works like a charm.\n>>\n> \n> You seem to have answered your own question here. If my recollection of a\n> previous discussion about many schemas and pg_dump performance is\n> accurate,\n> I suspect you are going to be told that you've got a data architecture\n> that\n> is fairly incompatible with postgresql's architecture and you've\n> specifically ruled out a solution that would play to postgresql's\n> strengths.\n\nOk guys, it was not my intention to hurt anyone's feelings by mentioning\nMySQL. Sorry about that. There simply was a project with a similar\narchitecture built using MySQL. When we started the current project, I have\nmade a decision to give PostgreSQL a try. Now I see that the same\narchitecture is not applicable if PostgreSQL is used. \n\nI would recommend you to refresh the info here \nhttp://wiki.postgresql.org/wiki/FAQ. There is a question \"What is the\nmaximum size for a row, a table, and a database?\". Please add there info on\nmaximum DBs number and tables number one DB can contain while PostgreSQL\ncontinues to work properly.\n\nPS: the easiest solution in my case is to create initially 500 DBs (like\napp_template_[0-500]) and create up to 500 schemas in each of it. This will\nmake 250000 possible clients in total. This should be enough. The question\nis: can you see the possible pitfalls of this solution?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Thousands-databases-or-schemas-tp5731189p5731203.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 8 Nov 2012 05:29:40 -0800 (PST)", "msg_from": "Denis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "Hello\n\n2012/11/8 Denis <[email protected]>:\n> Samuel Gendler wrote\n>> On Thu, Nov 8, 2012 at 1:36 AM, Denis &lt;\n>\n>> socsam@\n>\n>> &gt; wrote:\n>>\n>>>\n>>> P.S.\n>>> Not to start a holywar, but FYI: in a similar project where we used MySQL\n>>> now we have about 6000 DBs and everything works like a charm.\n>>>\n>>\n>> You seem to have answered your own question here. 
If my recollection of a\n>> previous discussion about many schemas and pg_dump performance is\n>> accurate,\n>> I suspect you are going to be told that you've got a data architecture\n>> that\n>> is fairly incompatible with postgresql's architecture and you've\n>> specifically ruled out a solution that would play to postgresql's\n>> strengths.\n>\n> Ok guys, it was not my intention to hurt anyone's feelings by mentioning\n> MySQL. Sorry about that. There simply was a project with a similar\n> architecture built using MySQL. When we started the current project, I have\n> made a decision to give PostgreSQL a try. Now I see that the same\n> architecture is not applicable if PostgreSQL is used.\n>\n> I would recommend you to refresh the info here\n> http://wiki.postgresql.org/wiki/FAQ. There is a question \"What is the\n> maximum size for a row, a table, and a database?\". Please add there info on\n> maximum DBs number and tables number one DB can contain while PostgreSQL\n> continues to work properly.\n>\n> PS: the easiest solution in my case is to create initially 500 DBs (like\n> app_template_[0-500]) and create up to 500 schemas in each of it. This will\n> make 250000 possible clients in total. This should be enough. The question\n> is: can you see the possible pitfalls of this solution?\n>\n\nwe use about 2000 databases per warehouse - and it working well, but\npg_dumpall doesn't work well in this environment. So we use a\ndifferent backup methods.\n\nRegards\n\nPavel\n\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Thousands-databases-or-schemas-tp5731189p5731203.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 8 Nov 2012 14:48:54 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "On 08/11/12 09:36, Denis wrote:\n> We have a web application where we create a schema or a database with a\n> number of tables in it for each customer. Now we have about 2600 clients.\n>\n> The problem we met using a separate DB for each client is that the creation\n> of new DB can take up to 2 minutes, that is absolutely unacceptable. Using\n> schemes instead (one DB with a number of schemes containing similar tables\n> in it) solved this problem (schemes are created in a couple of seconds), but\n> created two other blocking points:\n> 1. sometimes creation of a new table in a schema takes up to 5 seconds. In\n> case when we have create up to 40 tables in a schema this takes way too much\n> time.\n> 2. \"pg_dump -n schema_name db_name\" takes from 30 to 60 seconds, no matter\n> how big is the amount of data in the schema. Also, the dump of the tables\n> structure only takes at least 30 seconds. Basing on this topic\n> http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-td5709766i60.html,\n> pg_dump always analyses ALL the tables in the DB, i.e. in my case more than\n> 100 000 tables.\nThe obvious solution would be to write your own version of pg_dump which \nonly examines the tables within a schema. You can even start with the \nsource of the standard pg_dump! However, you could then eliminate the \nper customer schema/tables and add an extra 'customer' key column on \neach table. 
Now you modify pg_dump to only dump the parts of each table \nmatching a given customer id.\n\nMark Thornton\n\n\n", "msg_date": "Thu, 08 Nov 2012 13:50:45 +0000", "msg_from": "Mark Thornton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "On 11/08/2012 09:29 PM, Denis wrote:\n> Ok guys, it was not my intention to hurt anyone's feelings by mentioning\n> MySQL. Sorry about that.\nIt's pretty silly to be upset by someone mentioning another DB product.\nI wouldn't worry.\n> There simply was a project with a similar\n> architecture built using MySQL. When we started the current project, I have\n> made a decision to give PostgreSQL a try.\nIt's certainly interesting that MySQL currently scales to much larger\ntable counts better than PostgreSQL appears to.\n\nI'd like to see if this can be improved down the track. Various people\nare doing work on PostgreSQL scaling and performance, so with luck huge\ntable counts will come into play there. If nothing else, supporting\nlarge table counts is important when dealing with very large amounts of\ndata in partitioned tables.\n\nI think I saw mention of better performance with higher table counts in\n9.3 in -hackers, too.\n\n> I would recommend you to refresh the info here \n> http://wiki.postgresql.org/wiki/FAQ. There is a question \"What is the\n> maximum size for a row, a table, and a database?\". Please add there info on\n> maximum DBs number and tables number one DB can contain while PostgreSQL\n> continues to work properly.\nYeah, a number of people have been thrown by that. Technical limitations\naren't the same as practical limitations, and in some cases the\npractical limitations are lower.\n\nThe trouble is: How do you put a number to it when something is a slow\nand gradual drop in performance? And when one person's \"performs\nadequately\" is another's \"way too slow\" ?\n\n--\nCraig Ringer\n\n", "msg_date": "Fri, 09 Nov 2012 14:15:45 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "On Fri, Nov 9, 2012 at 7:15 AM, Craig Ringer <[email protected]> wrote:\n> On 11/08/2012 09:29 PM, Denis wrote:\n>> Ok guys, it was not my intention to hurt anyone's feelings by mentioning\n>> MySQL. Sorry about that.\n> It's pretty silly to be upset by someone mentioning another DB product.\n> I wouldn't worry.\n>> There simply was a project with a similar\n>> architecture built using MySQL. When we started the current project, I have\n>> made a decision to give PostgreSQL a try.\n> It's certainly interesting that MySQL currently scales to much larger\n> table counts better than PostgreSQL appears to.\n>\n> I'd like to see if this can be improved down the track. Various people\n> are doing work on PostgreSQL scaling and performance, so with luck huge\n> table counts will come into play there. If nothing else, supporting\n> large table counts is important when dealing with very large amounts of\n> data in partitioned tables.\n>\n> I think I saw mention of better performance with higher table counts in\n> 9.3 in -hackers, too.\n>\n>> I would recommend you to refresh the info here\n>> http://wiki.postgresql.org/wiki/FAQ. There is a question \"What is the\n>> maximum size for a row, a table, and a database?\". Please add there info on\n>> maximum DBs number and tables number one DB can contain while PostgreSQL\n>> continues to work properly.\n> Yeah, a number of people have been thrown by that. 
Technical limitations\n> aren't the same as practical limitations, and in some cases the\n> practical limitations are lower.\n\nYes. And the fact is that, PostgreSQL doesn't actually have a big\nproblem with this scenario. pg_dump does. It's mainly a tool issue,\nnot a database engine one. For example, pg_admin used to have problems\neven with a much smaller number of databases than that - I think that\nis better in current releases, but I'm not 100% sure. And you can\nimagine what a tool looks like that tries to graph per database\nvalues, for example, into a single graph when you have thousands of\ndatabases.\n\nPostgreSQL isn't perfect in these cases - as noted just creating a new\nschema can take slightly more than the usual millisecond. But it\nworks, and I've never come across a scenario personally where it's not\n\"good enough\" (that doesn't mean it doesn't exist, of course).\n\npg_dump and pg_dumpall, however, do have issues, as noted elsewhere as well.\n\nBut as Pavel mentioned, and others certainly have before as well,\nthere are other ways to deal with backups that solve this problem. It\nmay not be perfect, but it gets you pretty far.\n\nIt may not be a match for the requirements in this case - but it's not\na general limitation of the db.\n\nFWIW, have you looked into using proper backups with PITR and just\nrolling forward through the transaction log with\npause_at_recovery_target set? That's an excellent way to deal with the\n\"we lost data sometime between <x> and <y> but don't know when\" when\ntracking it down. I've yet to find a case where that's not easier than\nrepeatedly restoring even a fairly small pg_dump based backup to find\nit. Plus, it give you a much better granularity, which leads to better\nresults for your customer. *And* it takes away the dependency on\npg_dump's performance issues.\n\n--\n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\n", "msg_date": "Fri, 9 Nov 2012 07:38:22 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "On Thu, Nov 8, 2012 at 3:36 AM, Denis <[email protected]> wrote:\n> We have a web application where we create a schema or a database with a\n> number of tables in it for each customer. Now we have about 2600 clients.\n>\n> The problem we met using a separate DB for each client is that the creation\n> of new DB can take up to 2 minutes, that is absolutely unacceptable. Using\n> schemes instead (one DB with a number of schemes containing similar tables\n> in it) solved this problem (schemes are created in a couple of seconds), but\n> created two other blocking points:\n\nSure: db creation can be a bear particularly on servers already under\nload; it's i/o intensive. I think you made the right choice: it's not\na good idea to create databases via user input (but if I *had* to do\nthat, I would probably be pre-creating them).\n\n> 1. sometimes creation of a new table in a schema takes up to 5 seconds. In\n> case when we have create up to 40 tables in a schema this takes way too much\n> time.\n\nHow are you creating the tables. What's the iowait on the sever in\nsituations like this? If the file system is binding you here, there's\na not a lot you can do other than to try and pre-create or improve i/o\nperformance.\n\n\n> 2. \"pg_dump -n schema_name db_name\" takes from 30 to 60 seconds, no matter\n> how big is the amount of data in the schema. 
Also, the dump of the tables\n> structure only takes at least 30 seconds. Basing on this topic\n> http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-td5709766i60.html,\n> pg_dump always analyses ALL the tables in the DB, i.e. in my case more than\n> 100 000 tables.\n\nThat may be correct. To prove it, try:\n\npg_dump -s -n schema_name db_name\n\nwhere '-s' is the switch to dump only schema. if things are still\nslow, try logging queries from pg_dump (maybe enable\nlog_min_duration_statement if you can) and maybe something turns up\nthat can be optimized. One you have the query, explain analyze it and\npost it to the list. postgresql has been continuously improving in\nterms of bulk table handling over the years and there may be some low\nhanging fruit there. There may even be some simple database tweaks\nyou can make to improve thing without code changes.\n\nMagnus already answered this pretty well, but I'd like to add: the\ndatabase engine scales pretty well to large amounts of table but in\nparticular cases the tools are not. The reason for this is that the\nengine deals with internal optimized structures while the tools have\nto do everything over SQL. That said, there may be some low hanging\noptimization fruit; it's a lot easier to hack on client side tools vs\nthe backend.\n\n> I know you guys will ask me about selecting this particular application\n> architecture.\n\nSharding by schema is pretty common actually. 6000 schema holding\ntables is a lot -- so the question you should be getting is 'given the\ncurrent state of affairs, have you considered distributing your\nclients across more than one server'. What if you suddenly sign\n10,000 more clients?\n\n> Different clients have different activity rate and we can select different\n> backup strategies according to it. This would be impossible in case we keep\n> all the clients data in one table.\n> Besides all the above mentioned, the probability of massive data corruption\n> (if an error in our web application occurs) is much higher.\n\nsure -- all large databases struggle with backups once the brute force\ndump starts to become impractical. the way forward is to explore\nvarious high availability options -- PITR, HS/SR etc. distributing\nthe backup load across shards is also good as long as your rigorous\nabout not involving any shared structures.\n\n> Not to start a holywar, but FYI: in a similar project where we used MySQL\n> now we have about 6000 DBs and everything works like a charm.\n\nno worries. postgres schemas are fancier than mysql databases and\nthis is one of those things were extra features really do impact\nperformance :-).\n\nmerlin\n\n", "msg_date": "Fri, 9 Nov 2012 09:04:20 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" }, { "msg_contents": "On Fri, Nov 9, 2012 at 02:15:45PM +0800, Craig Ringer wrote:\n> On 11/08/2012 09:29 PM, Denis wrote:\n> > Ok guys, it was not my intention to hurt anyone's feelings by mentioning\n> > MySQL. Sorry about that.\n> It's pretty silly to be upset by someone mentioning another DB product.\n> I wouldn't worry.\n> > There simply was a project with a similar\n> > architecture built using MySQL. When we started the current project, I have\n> > made a decision to give PostgreSQL a try.\n> It's certainly interesting that MySQL currently scales to much larger\n> table counts better than PostgreSQL appears to.\n> \n> I'd like to see if this can be improved down the track. 
Various people\n> are doing work on PostgreSQL scaling and performance, so with luck huge\n> table counts will come into play there. If nothing else, supporting\n> large table counts is important when dealing with very large amounts of\n> data in partitioned tables.\n> \n> I think I saw mention of better performance with higher table counts in\n> 9.3 in -hackers, too.\n\nYes, 9.3 does much better dumping/restoring databases with a large\nnumber of tables. I was testing this as part of pg_upgrade performance\nimprovements for large tables. We have a few other things we might try\nto improve for 9.3 related to caching, but that might not help in this\ncase.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n", "msg_date": "Thu, 15 Nov 2012 10:49:53 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thousands databases or schemas" } ]
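Whichever backup strategy is chosen, the size of the catalog in a setup like this is easy to measure directly, which helps when judging whether pg_dump or client tools are the bottleneck. The queries below only touch pg_catalog and assume nothing about the application schema:

    -- tables per schema, biggest schemas first
    SELECT n.nspname AS schema, count(*) AS tables
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
    GROUP BY n.nspname
    ORDER BY count(*) DESC
    LIMIT 20;

    -- rough total of user tables pg_dump has to consider
    SELECT count(*)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema');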
[ { "msg_contents": "Hi,\n\nI have\n\n create table x ( att bigint, val bigint, hash varchar(30) \n);\n\nwith 693million rows. The query\n\n create table y as select att, val, count(*) as cnt from x \ngroup by att, val;\n\nran for more than 2000 minutes and used 14g memory on an 8g physical \nRAM machine -- eventually I stopped it. Doing\n\n create table y ( att bigint, val bigint, cnt int );\n and something a bit like: for i in `seq 0 255` | xargs -n 1 \n-P 6\n psql -c \"insert into y select att, val, \ncount(*) from x where att%256=$1 group by att, val\" test\n\nruns 6 out of 256 in 10 minutes -- meaning the whole problem can be \ndone in just under 3 hours.\n\nQuestion 1: do you see any reason why the second method would yield a \ndifferent result from the first method?\nQuestion 2: is that method generalisabl so that it could be included in \nthe base system without manual shell glue?\n\nThanks,\n\nOliver\n\n\n", "msg_date": "Thu, 08 Nov 2012 12:55:12 +0100", "msg_from": "Oliver Seidel <[email protected]>", "msg_from_op": true, "msg_subject": "parallel query evaluation" }, { "msg_contents": "Oliver Seidel <[email protected]> writes:\n> I have\n> create table x ( att bigint, val bigint, hash varchar(30) \n> );\n> with 693million rows. The query\n\n> create table y as select att, val, count(*) as cnt from x \n> group by att, val;\n\n> ran for more than 2000 minutes and used 14g memory on an 8g physical \n> RAM machine\n\nWhat was the plan for that query? What did you have work_mem set to?\n\nI can believe such a thing overrunning memory if the planner chose to\nuse a hash-aggregation plan instead of sort-and-unique, but it would\nonly do that if it had made a drastic underestimate of the number of\ngroups implied by the GROUP BY clause. Do you have up-to-date\nstatistics for the source table?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 10 Nov 2012 10:32:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: parallel query evaluation" } ]
[ { "msg_contents": "[Please CC me on replies, as I'm not subscribed; thank you.]\n\nI've ran into a problem with the query planner and IN (subquery)\nconditions which I suspect to be a bug. I'll attempt to describe the\nrelevant details of my database and explain which behaviour I find\nunexpected. I've also tried to trigger this behaviour in a clean\ndatabase; I think I've succeeded, but the conditions are a bit\ndifferent, so perhaps it's a different problem. I'll describe this\nsetup in detail below.\n\nI have a somewhat large table (~2.5M rows), stats, which is quite\noften (several records a minute) INSERTed to, but never UPDATEd or\nDELETEd from. (In case it's relevant, it has an attached AFTER INSERT\ntrigger which checks time and rebuilds an aggregate materialized view\nevery hour.) This is the schema:\n# \\d+ stats\n Table \"serverwatch.stats\"\n Column | Type |\nModifiers | Storage | Description\n------------------+-----------------------------+----------------------------------------------------+---------+-------------\n id | integer | not null default\nnextval('stats_id_seq'::regclass) | plain |\n run_id | integer | not null\n | plain |\n start_time | timestamp without time zone | not null\n | plain |\n end_time | timestamp without time zone | not null\n | plain |\n cpu_utilization | double precision |\n | plain |\n disk_read_ops | bigint |\n | plain |\n disk_write_ops | bigint |\n | plain |\n network_out | bigint |\n | plain |\n network_in | bigint |\n | plain |\n disk_read_bytes | bigint |\n | plain |\n disk_write_bytes | bigint |\n | plain |\nIndexes:\n \"stats_pkey\" PRIMARY KEY, btree (id)\n \"stats_day_index\" btree (run_id, day(stats.*))\n \"stats_month_index\" btree (run_id, month(stats.*))\n \"stats_week_index\" btree (run_id, week(stats.*))\nForeign-key constraints:\n \"stats_runs\" FOREIGN KEY (run_id) REFERENCES runs(id)\nTriggers:\n stats_day_refresh_trigger AFTER INSERT OR UPDATE ON stats FOR EACH\nSTATEMENT EXECUTE PROCEDURE mat_view_refresh('serverwatch.stats_day')\nHas OIDs: no\n\nday(), month() and week() functions are just trivial date_trunc on a\nrelevant field. 
The referenced table looks like this:\n# \\d+ runs\n Table \"serverwatch.runs\"\n Column | Type |\nModifiers | Storage | Description\n-----------------+-----------------------------+---------------------------------------------------+---------+-------------\n id | integer | not null default\nnextval('runs_id_seq'::regclass) | plain |\n server_id | integer | not null\n | plain |\n flavor | flavor | not null\n | plain |\n region | region | not null\n | plain |\n launch_time | timestamp without time zone | not null\n | plain |\n stop_time | timestamp without time zone |\n | plain |\n project_info_id | integer | not null\n | plain |\n owner_info_id | integer | not null\n | plain |\nIndexes:\n \"runs_pkey\" PRIMARY KEY, btree (id)\n \"index_runs_on_flavor\" btree (flavor)\n \"index_runs_on_owner_info_id\" btree (owner_info_id)\n \"index_runs_on_project_info_id\" btree (project_info_id)\n \"index_runs_on_region\" btree (region)\n \"index_runs_on_server_id\" btree (server_id)\nForeign-key constraints:\n \"runs_owner_info_id_fkey\" FOREIGN KEY (owner_info_id) REFERENCES\nuser_infos(id)\n \"runs_project_info_id_fkey\" FOREIGN KEY (project_info_id)\nREFERENCES project_infos(id)\nReferenced by:\n TABLE \"stats_day\" CONSTRAINT \"stats_day_runs\" FOREIGN KEY (run_id)\nREFERENCES runs(id)\n TABLE \"stats\" CONSTRAINT \"stats_runs\" FOREIGN KEY (run_id)\nREFERENCES runs(id)\nHas OIDs: no\n\nNow consider this query - note I'm using a subselect here because the\nproblem originally manifested itself with a view:\nSELECT * FROM (SELECT run_id, disk_write_ops FROM stats) AS s WHERE\nrun_id IN (SELECT id FROM runs WHERE server_id = 515);\n\nAs might be expected, the planner chooses to use one of the three\nindices with run_id:\nhttp://explain.depesz.com/s/XU3Q\n\nNow consider a similar query, but with aggregation:\nSELECT * FROM (SELECT run_id, SUM(disk_write_ops) FROM stats GROUP BY\nrun_id) AS s WHERE run_id IN (SELECT id FROM runs WHERE server_id =\n515);\n\nNow the picture is very different. The planner, unexplicably,\ndismisses the index and opts instead to do a full scan on stats, the\ntable 2.5 million rows big.\nhttp://explain.depesz.com/s/Rqt\n\nNote that the problem disappears when we replace the IN condition with literal:\nSELECT * FROM (SELECT run_id, SUM(disk_write_ops) FROM stats GROUP BY\nrun_id) AS s WHERE run_id IN (1815, 1816);\n\nThe ids are the result of the inner select ran separately, so the\nquery has the exact same result; it's worth pointing out that the\nplanner has a correct estimate on the selectivity of the condition -\nexactly two rows from runs are selected, as expected. 
But when literal\nis used the planner correctly chooses to use the index:\nhttp://explain.depesz.com/s/lYc\n\nSimilarly a correct plan is chosen when we unnest the inner SELECT:\nSELECT run_id, SUM(disk_write_ops) FROM stats WHERE run_id IN (SELECT\nid FROM runs WHERE server_id = 515) GROUP BY run_id;\nhttp://explain.depesz.com/s/dlwZ\n\n\nI've tried to replicate this on a clean database:\nCREATE TABLE runs(run_id serial PRIMARY KEY, server_id INTEGER NOT NULL);\nCREATE INDEX runs_server ON runs(server_id);\nCREATE TABLE stats(entry_id serial PRIMARY KEY, run_id integer\nREFERENCES runs NOT NULL, utilization INTEGER NOT NULL);\nCREATE INDEX stats_runs ON stats(run_id);\n\nNow let's try some queries:\nSELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\nrun_id IN (1212, 2323, 121, 561, 21, 561, 125, 2, 55, 52, 42);\nhttp://explain.depesz.com/s/Kcb - fine, index used\n\nSELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\nrun_id IN (SELECT run_id FROM runs WHERE server_id = 515);\nhttp://explain.depesz.com/s/QFs - seqscan!\nObviously it doesn't mean much, as the tables are empty and there are\nno stats, but still a radically different plan is chosen for what is\nessentially the same query.\n\nNote that in this case the behaviour is the same even when unnested:\nSELECT run_id, utilization FROM stats WHERE run_id IN (SELECT run_id\nFROM runs WHERE server_id = 515);\nhttp://explain.depesz.com/s/y3GM\n\nSo, is this a bug in the planner, or am I somehow subtly changing the\nsemantics of the query and don't notice?\nI understand the planner perhaps tries to parallelize queries when a\nSELECT is used in the IN clause, but given the stats it doesn't seem\nto make much sense.\n\nThanks, and let me know if you want me to test something on my\ndatabase over here or if there's some relevant info I've ommited.\n\n(PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2, 64-bit\nrunning on Ubuntu 12.10, ubuntu package\npostgresql-9.1-9.1.6-1ubuntu1:amd64, default configuration)\n--\nRafał Rzepecki\n\n", "msg_date": "Sun, 11 Nov 2012 04:18:31 +0100", "msg_from": "=?UTF-8?Q?Rafa=C5=82_Rzepecki?= <[email protected]>", "msg_from_op": true, "msg_subject": "Planner sometimes doesn't use a relevant index with IN (subquery)\n\tcondition" }, { "msg_contents": "This indeed works around the issue. Thanks!\n\nOn Mon, Nov 12, 2012 at 9:53 AM, ashutosh durugkar <[email protected]> wrote:\n> Hey Rafal,\n>\n>\n>>SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n> run_id IN (SELECT run_id FROM runs WHERE server_id = 515);\n>\n> could you try this:\n>\n>\n> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n> run_id = ANY(ARRAY(SELECT run_id FROM runs WHERE server_id = 515));\n>\n> Thanks,\n>\n> On Sun, Nov 11, 2012 at 8:48 AM, Rafał Rzepecki <[email protected]>\n> wrote:\n>>\n>> [Please CC me on replies, as I'm not subscribed; thank you.]\n>>\n>> I've ran into a problem with the query planner and IN (subquery)\n>> conditions which I suspect to be a bug. I'll attempt to describe the\n>> relevant details of my database and explain which behaviour I find\n>> unexpected. I've also tried to trigger this behaviour in a clean\n>> database; I think I've succeeded, but the conditions are a bit\n>> different, so perhaps it's a different problem. 
I'll describe this\n>> setup in detail below.\n>>\n>> I have a somewhat large table (~2.5M rows), stats, which is quite\n>> often (several records a minute) INSERTed to, but never UPDATEd or\n>> DELETEd from. (In case it's relevant, it has an attached AFTER INSERT\n>> trigger which checks time and rebuilds an aggregate materialized view\n>> every hour.) This is the schema:\n>> # \\d+ stats\n>> Table\n>> \"serverwatch.stats\"\n>> Column | Type |\n>> Modifiers | Storage | Description\n>>\n>> ------------------+-----------------------------+----------------------------------------------------+---------+-------------\n>> id | integer | not null default\n>> nextval('stats_id_seq'::regclass) | plain |\n>> run_id | integer | not null\n>> | plain |\n>> start_time | timestamp without time zone | not null\n>> | plain |\n>> end_time | timestamp without time zone | not null\n>> | plain |\n>> cpu_utilization | double precision |\n>> | plain |\n>> disk_read_ops | bigint |\n>> | plain |\n>> disk_write_ops | bigint |\n>> | plain |\n>> network_out | bigint |\n>> | plain |\n>> network_in | bigint |\n>> | plain |\n>> disk_read_bytes | bigint |\n>> | plain |\n>> disk_write_bytes | bigint |\n>> | plain |\n>> Indexes:\n>> \"stats_pkey\" PRIMARY KEY, btree (id)\n>> \"stats_day_index\" btree (run_id, day(stats.*))\n>> \"stats_month_index\" btree (run_id, month(stats.*))\n>> \"stats_week_index\" btree (run_id, week(stats.*))\n>> Foreign-key constraints:\n>> \"stats_runs\" FOREIGN KEY (run_id) REFERENCES runs(id)\n>> Triggers:\n>> stats_day_refresh_trigger AFTER INSERT OR UPDATE ON stats FOR EACH\n>> STATEMENT EXECUTE PROCEDURE mat_view_refresh('serverwatch.stats_day')\n>> Has OIDs: no\n>>\n>> day(), month() and week() functions are just trivial date_trunc on a\n>> relevant field. 
The referenced table looks like this:\n>> # \\d+ runs\n>> Table \"serverwatch.runs\"\n>> Column | Type |\n>> Modifiers | Storage | Description\n>>\n>> -----------------+-----------------------------+---------------------------------------------------+---------+-------------\n>> id | integer | not null default\n>> nextval('runs_id_seq'::regclass) | plain |\n>> server_id | integer | not null\n>> | plain |\n>> flavor | flavor | not null\n>> | plain |\n>> region | region | not null\n>> | plain |\n>> launch_time | timestamp without time zone | not null\n>> | plain |\n>> stop_time | timestamp without time zone |\n>> | plain |\n>> project_info_id | integer | not null\n>> | plain |\n>> owner_info_id | integer | not null\n>> | plain |\n>> Indexes:\n>> \"runs_pkey\" PRIMARY KEY, btree (id)\n>> \"index_runs_on_flavor\" btree (flavor)\n>> \"index_runs_on_owner_info_id\" btree (owner_info_id)\n>> \"index_runs_on_project_info_id\" btree (project_info_id)\n>> \"index_runs_on_region\" btree (region)\n>> \"index_runs_on_server_id\" btree (server_id)\n>> Foreign-key constraints:\n>> \"runs_owner_info_id_fkey\" FOREIGN KEY (owner_info_id) REFERENCES\n>> user_infos(id)\n>> \"runs_project_info_id_fkey\" FOREIGN KEY (project_info_id)\n>> REFERENCES project_infos(id)\n>> Referenced by:\n>> TABLE \"stats_day\" CONSTRAINT \"stats_day_runs\" FOREIGN KEY (run_id)\n>> REFERENCES runs(id)\n>> TABLE \"stats\" CONSTRAINT \"stats_runs\" FOREIGN KEY (run_id)\n>> REFERENCES runs(id)\n>> Has OIDs: no\n>>\n>> Now consider this query - note I'm using a subselect here because the\n>> problem originally manifested itself with a view:\n>> SELECT * FROM (SELECT run_id, disk_write_ops FROM stats) AS s WHERE\n>> run_id IN (SELECT id FROM runs WHERE server_id = 515);\n>>\n>> As might be expected, the planner chooses to use one of the three\n>> indices with run_id:\n>> http://explain.depesz.com/s/XU3Q\n>>\n>> Now consider a similar query, but with aggregation:\n>> SELECT * FROM (SELECT run_id, SUM(disk_write_ops) FROM stats GROUP BY\n>> run_id) AS s WHERE run_id IN (SELECT id FROM runs WHERE server_id =\n>> 515);\n>>\n>> Now the picture is very different. The planner, unexplicably,\n>> dismisses the index and opts instead to do a full scan on stats, the\n>> table 2.5 million rows big.\n>> http://explain.depesz.com/s/Rqt\n>>\n>> Note that the problem disappears when we replace the IN condition with\n>> literal:\n>> SELECT * FROM (SELECT run_id, SUM(disk_write_ops) FROM stats GROUP BY\n>> run_id) AS s WHERE run_id IN (1815, 1816);\n>>\n>> The ids are the result of the inner select ran separately, so the\n>> query has the exact same result; it's worth pointing out that the\n>> planner has a correct estimate on the selectivity of the condition -\n>> exactly two rows from runs are selected, as expected. 
But when literal\n>> is used the planner correctly chooses to use the index:\n>> http://explain.depesz.com/s/lYc\n>>\n>> Similarly a correct plan is chosen when we unnest the inner SELECT:\n>> SELECT run_id, SUM(disk_write_ops) FROM stats WHERE run_id IN (SELECT\n>> id FROM runs WHERE server_id = 515) GROUP BY run_id;\n>> http://explain.depesz.com/s/dlwZ\n>>\n>>\n>> I've tried to replicate this on a clean database:\n>> CREATE TABLE runs(run_id serial PRIMARY KEY, server_id INTEGER NOT NULL);\n>> CREATE INDEX runs_server ON runs(server_id);\n>> CREATE TABLE stats(entry_id serial PRIMARY KEY, run_id integer\n>> REFERENCES runs NOT NULL, utilization INTEGER NOT NULL);\n>> CREATE INDEX stats_runs ON stats(run_id);\n>>\n>> Now let's try some queries:\n>> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n>> run_id IN (1212, 2323, 121, 561, 21, 561, 125, 2, 55, 52, 42);\n>> http://explain.depesz.com/s/Kcb - fine, index used\n>>\n>> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n>> run_id IN (SELECT run_id FROM runs WHERE server_id = 515);\n>> http://explain.depesz.com/s/QFs - seqscan!\n>> Obviously it doesn't mean much, as the tables are empty and there are\n>> no stats, but still a radically different plan is chosen for what is\n>> essentially the same query.\n>>\n>> Note that in this case the behaviour is the same even when unnested:\n>> SELECT run_id, utilization FROM stats WHERE run_id IN (SELECT run_id\n>> FROM runs WHERE server_id = 515);\n>> http://explain.depesz.com/s/y3GM\n>>\n>> So, is this a bug in the planner, or am I somehow subtly changing the\n>> semantics of the query and don't notice?\n>> I understand the planner perhaps tries to parallelize queries when a\n>> SELECT is used in the IN clause, but given the stats it doesn't seem\n>> to make much sense.\n>>\n>> Thanks, and let me know if you want me to test something on my\n>> database over here or if there's some relevant info I've ommited.\n>>\n>> (PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc\n>> (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2, 64-bit\n>> running on Ubuntu 12.10, ubuntu package\n>> postgresql-9.1-9.1.6-1ubuntu1:amd64, default configuration)\n>> --\n>> Rafał Rzepecki\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n-- \nRafał Rzepecki\n\n", "msg_date": "Mon, 12 Nov 2012 10:06:19 +0100", "msg_from": "=?UTF-8?Q?Rafa=C5=82_Rzepecki?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner sometimes doesn't use a relevant index with IN\n\t(subquery) condition" }, { "msg_contents": "On 12/11/12 22:06, Rafał Rzepecki wrote:\n> This indeed works around the issue. Thanks!\n>\n> On Mon, Nov 12, 2012 at 9:53 AM, ashutosh durugkar <[email protected]> wrote:\n>> Hey Rafal,\n>>\n>>\n>>> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n>> run_id IN (SELECT run_id FROM runs WHERE server_id = 515);\n>>\n>> could you try this:\n>>\n>>\n>> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n>> run_id = ANY(ARRAY(SELECT run_id FROM runs WHERE server_id = 515));\n>>\n>> Thanks,\n>>\n>> On Sun, Nov 11, 2012 at 8:48 AM, Rafał Rzepecki <[email protected]>\n>> wrote:\n>>> [Please CC me on replies, as I'm not subscribed; thank you.]\n>>>\n>>> I've ran into a problem with the query planner and IN (subquery)\n>>> conditions which I suspect to be a bug. 
I'll attempt to describe the\n>>> relevant details of my database and explain which behaviour I find\n>>> unexpected. I've also tried to trigger this behaviour in a clean\n>>> database; I think I've succeeded, but the conditions are a bit\n>>> different, so perhaps it's a different problem. I'll describe this\n>>> setup in detail below.\n>>>\n>>> I have a somewhat large table (~2.5M rows), stats, which is quite\n>>> often (several records a minute) INSERTed to, but never UPDATEd or\n>>> DELETEd from. (In case it's relevant, it has an attached AFTER INSERT\n>>> trigger which checks time and rebuilds an aggregate materialized view\n>>> every hour.) This is the schema:\n>>> # \\d+ stats\n>>> Table\n>>> \"serverwatch.stats\"\n>>> Column | Type |\n>>> Modifiers | Storage | Description\n>>>\n>>> ------------------+-----------------------------+----------------------------------------------------+---------+-------------\n>>> id | integer | not null default\n>>> nextval('stats_id_seq'::regclass) | plain |\n>>> run_id | integer | not null\n>>> | plain |\n>>> start_time | timestamp without time zone | not null\n>>> | plain |\n>>> end_time | timestamp without time zone | not null\n>>> | plain |\n>>> cpu_utilization | double precision |\n>>> | plain |\n>>> disk_read_ops | bigint |\n>>> | plain |\n>>> disk_write_ops | bigint |\n>>> | plain |\n>>> network_out | bigint |\n>>> | plain |\n>>> network_in | bigint |\n>>> | plain |\n>>> disk_read_bytes | bigint |\n>>> | plain |\n>>> disk_write_bytes | bigint |\n>>> | plain |\n>>> Indexes:\n>>> \"stats_pkey\" PRIMARY KEY, btree (id)\n>>> \"stats_day_index\" btree (run_id, day(stats.*))\n>>> \"stats_month_index\" btree (run_id, month(stats.*))\n>>> \"stats_week_index\" btree (run_id, week(stats.*))\n>>> Foreign-key constraints:\n>>> \"stats_runs\" FOREIGN KEY (run_id) REFERENCES runs(id)\n>>> Triggers:\n>>> stats_day_refresh_trigger AFTER INSERT OR UPDATE ON stats FOR EACH\n>>> STATEMENT EXECUTE PROCEDURE mat_view_refresh('serverwatch.stats_day')\n>>> Has OIDs: no\n>>>\n>>> day(), month() and week() functions are just trivial date_trunc on a\n>>> relevant field. 
The referenced table looks like this:\n>>> # \\d+ runs\n>>> Table \"serverwatch.runs\"\n>>> Column | Type |\n>>> Modifiers | Storage | Description\n>>>\n>>> -----------------+-----------------------------+---------------------------------------------------+---------+-------------\n>>> id | integer | not null default\n>>> nextval('runs_id_seq'::regclass) | plain |\n>>> server_id | integer | not null\n>>> | plain |\n>>> flavor | flavor | not null\n>>> | plain |\n>>> region | region | not null\n>>> | plain |\n>>> launch_time | timestamp without time zone | not null\n>>> | plain |\n>>> stop_time | timestamp without time zone |\n>>> | plain |\n>>> project_info_id | integer | not null\n>>> | plain |\n>>> owner_info_id | integer | not null\n>>> | plain |\n>>> Indexes:\n>>> \"runs_pkey\" PRIMARY KEY, btree (id)\n>>> \"index_runs_on_flavor\" btree (flavor)\n>>> \"index_runs_on_owner_info_id\" btree (owner_info_id)\n>>> \"index_runs_on_project_info_id\" btree (project_info_id)\n>>> \"index_runs_on_region\" btree (region)\n>>> \"index_runs_on_server_id\" btree (server_id)\n>>> Foreign-key constraints:\n>>> \"runs_owner_info_id_fkey\" FOREIGN KEY (owner_info_id) REFERENCES\n>>> user_infos(id)\n>>> \"runs_project_info_id_fkey\" FOREIGN KEY (project_info_id)\n>>> REFERENCES project_infos(id)\n>>> Referenced by:\n>>> TABLE \"stats_day\" CONSTRAINT \"stats_day_runs\" FOREIGN KEY (run_id)\n>>> REFERENCES runs(id)\n>>> TABLE \"stats\" CONSTRAINT \"stats_runs\" FOREIGN KEY (run_id)\n>>> REFERENCES runs(id)\n>>> Has OIDs: no\n>>>\n>>> Now consider this query - note I'm using a subselect here because the\n>>> problem originally manifested itself with a view:\n>>> SELECT * FROM (SELECT run_id, disk_write_ops FROM stats) AS s WHERE\n>>> run_id IN (SELECT id FROM runs WHERE server_id = 515);\n>>>\n>>> As might be expected, the planner chooses to use one of the three\n>>> indices with run_id:\n>>> http://explain.depesz.com/s/XU3Q\n>>>\n>>> Now consider a similar query, but with aggregation:\n>>> SELECT * FROM (SELECT run_id, SUM(disk_write_ops) FROM stats GROUP BY\n>>> run_id) AS s WHERE run_id IN (SELECT id FROM runs WHERE server_id =\n>>> 515);\n>>>\n>>> Now the picture is very different. The planner, unexplicably,\n>>> dismisses the index and opts instead to do a full scan on stats, the\n>>> table 2.5 million rows big.\n>>> http://explain.depesz.com/s/Rqt\n>>>\n>>> Note that the problem disappears when we replace the IN condition with\n>>> literal:\n>>> SELECT * FROM (SELECT run_id, SUM(disk_write_ops) FROM stats GROUP BY\n>>> run_id) AS s WHERE run_id IN (1815, 1816);\n>>>\n>>> The ids are the result of the inner select ran separately, so the\n>>> query has the exact same result; it's worth pointing out that the\n>>> planner has a correct estimate on the selectivity of the condition -\n>>> exactly two rows from runs are selected, as expected. 
But when literal\n>>> is used the planner correctly chooses to use the index:\n>>> http://explain.depesz.com/s/lYc\n>>>\n>>> Similarly a correct plan is chosen when we unnest the inner SELECT:\n>>> SELECT run_id, SUM(disk_write_ops) FROM stats WHERE run_id IN (SELECT\n>>> id FROM runs WHERE server_id = 515) GROUP BY run_id;\n>>> http://explain.depesz.com/s/dlwZ\n>>>\n>>>\n>>> I've tried to replicate this on a clean database:\n>>> CREATE TABLE runs(run_id serial PRIMARY KEY, server_id INTEGER NOT NULL);\n>>> CREATE INDEX runs_server ON runs(server_id);\n>>> CREATE TABLE stats(entry_id serial PRIMARY KEY, run_id integer\n>>> REFERENCES runs NOT NULL, utilization INTEGER NOT NULL);\n>>> CREATE INDEX stats_runs ON stats(run_id);\n>>>\n>>> Now let's try some queries:\n>>> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n>>> run_id IN (1212, 2323, 121, 561, 21, 561, 125, 2, 55, 52, 42);\n>>> http://explain.depesz.com/s/Kcb - fine, index used\n>>>\n>>> SELECT * FROM (SELECT run_id, utilization FROM stats) AS s WHERE\n>>> run_id IN (SELECT run_id FROM runs WHERE server_id = 515);\n>>> http://explain.depesz.com/s/QFs - seqscan!\n>>> Obviously it doesn't mean much, as the tables are empty and there are\n>>> no stats, but still a radically different plan is chosen for what is\n>>> essentially the same query.\n>>>\n>>> Note that in this case the behaviour is the same even when unnested:\n>>> SELECT run_id, utilization FROM stats WHERE run_id IN (SELECT run_id\n>>> FROM runs WHERE server_id = 515);\n>>> http://explain.depesz.com/s/y3GM\n>>>\n>>> So, is this a bug in the planner, or am I somehow subtly changing the\n>>> semantics of the query and don't notice?\n>>> I understand the planner perhaps tries to parallelize queries when a\n>>> SELECT is used in the IN clause, but given the stats it doesn't seem\n>>> to make much sense.\n>>>\n>>> Thanks, and let me know if you want me to test something on my\n>>> database over here or if there's some relevant info I've ommited.\n>>>\n>>> (PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc\n>>> (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2, 64-bit\n>>> running on Ubuntu 12.10, ubuntu package\n>>> postgresql-9.1-9.1.6-1ubuntu1:amd64, default configuration)\n>>> --\n>>> Rafał Rzepecki\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\nCurious, would the following be of any use?\n\nSELECT DISTINCT\n r.run_id,\n s.utilization\nFROM\n runs AS r JOIN stats AS s USING (run_id)\nWHERE\n r.server_id = 515\n/**/;/**/\n\n\nCheers,\nGavin\n\n", "msg_date": "Tue, 13 Nov 2012 09:49:38 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner sometimes doesn't use a relevant index with\n\tIN (subquery) condition" } ]
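A minimal sketch of the two workarounds that come up in this thread, written against the production schema described above (stats.run_id referencing runs.id) and applied here to the aggregated form of the query; whether either one actually wins still depends on the planner's row estimates:

    -- ashutosh's rewrite: materialize the id list first, so the outer query
    -- sees a constant array and can probe the index on stats(run_id).
    SELECT run_id, SUM(disk_write_ops)
    FROM stats
    WHERE run_id = ANY (ARRAY(SELECT id FROM runs WHERE server_id = 515))
    GROUP BY run_id;

    -- Gavin's suggestion expressed against the same schema: a plain join.
    -- runs.id is the primary key, so the join cannot duplicate stats rows
    -- and the result matches the IN (subquery) form.
    SELECT s.run_id, SUM(s.disk_write_ops)
    FROM stats AS s
    JOIN runs  AS r ON r.id = s.run_id
    WHERE r.server_id = 515
    GROUP BY s.run_id;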
[ { "msg_contents": "Hi All\n\nI am facing query performance in one of my testing server.\n\nHow i can create index with table column name ?\n\nEXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp\nwhere mdc_domain_reverse like xxx.reverse_pd || '.%');\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..315085375.74 rows=63 width=3142)\n Join Filter: ((tmp.mdc_domain_reverse)::text ~~\n((xxx.reverse_pd)::text || '.%'::text))\n -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n -> Materialize (cost=0.00..31811.93 rows=1442062 width=17)\n -> Seq Scan on tmp (cost=0.00..24601.62 rows=1442062 width=17)\n\n\nsaleshub=# EXPLAIN create table tmp2 as select xxx.* from xxx xxx\nwhere exists (select 1 from tmp where mdc_domain_reverse like\n'moc.ytirucesspc%') ;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.06..6276.53 rows=12547 width=3142)\n One-Time Filter: $0\n InitPlan 1 (returns $0)\n -> Index Scan using tmp_txt_idx_mdc on tmp (cost=0.00..8.53\nrows=144 width=0)\n Index Cond: (((mdc_domain_reverse)::text ~>=~\n'moc.ytirucesspc'::text) AND ((mdc_domain_reverse)::text ~<~\n'moc.ytirucesspd'::text))\n Filter: ((mdc_domain_reverse)::text ~~ 'moc.ytirucesspc%'::text)\n -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n\nHi AllI am facing query performance in one of my testing server.How i can create index with table column name ? EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp where mdc_domain_reverse like xxx.reverse_pd || '.%');\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..315085375.74 rows=63 width=3142)\n Join Filter: ((tmp.mdc_domain_reverse)::text ~~ ((xxx.reverse_pd)::text || '.%'::text))\n -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n -> Materialize (cost=0.00..31811.93 rows=1442062 width=17)\n -> Seq Scan on tmp (cost=0.00..24601.62 rows=1442062 width=17)saleshub=# EXPLAIN create table tmp2 as select xxx.* from xxx xxx where exists (select 1 from tmp where mdc_domain_reverse like 'moc.ytirucesspc%') ;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.06..6276.53 rows=12547 width=3142)\n One-Time Filter: $0\n InitPlan 1 (returns $0)\n -> Index Scan using tmp_txt_idx_mdc on tmp (cost=0.00..8.53 rows=144 width=0)\n Index Cond: (((mdc_domain_reverse)::text ~>=~ 'moc.ytirucesspc'::text) AND ((mdc_domain_reverse)::text ~<~ 'moc.ytirucesspd'::text))\n Filter: ((mdc_domain_reverse)::text ~~ 'moc.ytirucesspc%'::text)\n -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)", "msg_date": "Mon, 12 Nov 2012 13:01:00 +0530", "msg_from": "K P Manoj <[email protected]>", "msg_from_op": true, "msg_subject": "Index is not using" }, { "msg_contents": "K P Manoj wrote:\n> I am facing query performance in one of my testing server.\n> How i can create index with table column name ?\n> EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp\nwhere mdc_domain_reverse like\n> xxx.reverse_pd || '.%');\n> QUERY PLAN\n>\n------------------------------------------------------------------------\n-------------------\n> Nested Loop Semi Join (cost=0.00..315085375.74 rows=63 width=3142)\n> 
Join Filter: ((tmp.mdc_domain_reverse)::text ~~\n((xxx.reverse_pd)::text || '.%'::text))\n> -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n> -> Materialize (cost=0.00..31811.93 rows=1442062 width=17)\n> -> Seq Scan on tmp (cost=0.00..24601.62 rows=1442062\nwidth=17)\n> \n> saleshub=# EXPLAIN create table tmp2 as select xxx.* from xxx xxx\nwhere exists (select 1 from tmp\n> where mdc_domain_reverse like 'moc.ytirucesspc%') ;\n>\nQUERY PLAN\n>\n------------------------------------------------------------------------\n------------------------------\n> ------------------------------------------\n> Result (cost=0.06..6276.53 rows=12547 width=3142)\n> One-Time Filter: $0\n> InitPlan 1 (returns $0)\n> -> Index Scan using tmp_txt_idx_mdc on tmp (cost=0.00..8.53\nrows=144 width=0)\n> Index Cond: (((mdc_domain_reverse)::text ~>=~\n'moc.ytirucesspc'::text) AND\n> ((mdc_domain_reverse)::text ~<~ 'moc.ytirucesspd'::text))\n> Filter: ((mdc_domain_reverse)::text ~~\n'moc.ytirucesspc%'::text)\n> -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n\nI don't really understand what your problem is, but if\nyou are complaining that no index is used for the LIKE\ncondition in the first query, you're out of luck:\n\nThe planner has no way of knowing if the contents of\nxxx.reverse_pd start with \"%\" or not.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Mon, 12 Nov 2012 09:31:01 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index is not using" }, { "msg_contents": "Hi Albe,\nThank you for your reply ,\n\nPlease find the details of table description\n\ntest=# \\d xxx\n Table \"public.xxx\"\n Column | Type | Modifiers\n------------------------------+-----------------------------+-----------\n crawler_id | bigint |\n effective_org | character varying(255) |\n reverse_pd | character varying(255) |\n Indexes:\n \"xxx_rev_pd_idx1\" btree (reverse_pd)\n\n\ntest =#\\d tmp\n Table \"public.tmp\"\n Column | Type | Modifiers\n--------------------+------------------------+-----------\n id | bigint |\n mdc_domain_reverse | character varying(255) |\nIndexes:\n \"tmp_idx1\" btree (mdc_domain_reverse)\n \"tmp_txt_idx_mdc\" btree (mdc_domain_reverse varchar_pattern_ops)\n\n\ntest=# EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp\nwhere mdc_domain_reverse like 'ttt' || '.%');\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------\n Result (cost=0.03..2249.94 rows=13591 width=3141)\n One-Time Filter: $0\n InitPlan 1 (returns $0)\n -> Index Only Scan using tmp_txt_idx_mdc on tmp (cost=0.00..4.27\nrows=144 width=0)\n Index Cond: ((mdc_domain_reverse ~>=~ 'ttt.'::text) AND\n(mdc_domain_reverse ~<~ 'ttt/'::text))\n Filter: ((mdc_domain_reverse)::text ~~ 'ttt.%'::text)\n -> Seq Scan on xxx (cost=0.00..2249.91 rows=13591 width=3141)\n(7 rows)\n\nsaleshub=# EXPLAIN select xxx.* from xxx xxx where exists (select 1 from\ntmp where mdc_domain_reverse like xxx.reverse_pd || '.%');\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..341301641.67 rows=68 width=3141)\n Join Filter: ((tmp.mdc_domain_reverse)::text ~~ ((xxx.reverse_pd)::text\n|| '.%'::text))\n -> Seq Scan on xxx (cost=0.00..2249.91 rows=13591 width=3141)\n -> Materialize (cost=0.00..31811.93 rows=1442062 width=18)\n -> Seq Scan on tmp (cost=0.00..24601.62 rows=1442062 width=18)\n(5 rows)\n\n\nMy question was any chance to use 
query planner with above index ? or i\nwant to change the query ?\n\nRegards\nManoj K P\n\nOn Mon, Nov 12, 2012 at 2:01 PM, Albe Laurenz <[email protected]>wrote:\n\n> K P Manoj wrote:\n> > I am facing query performance in one of my testing server.\n> > How i can create index with table column name ?\n> > EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp\n> where mdc_domain_reverse like\n> > xxx.reverse_pd || '.%');\n> > QUERY PLAN\n> >\n> ------------------------------------------------------------------------\n> -------------------\n> > Nested Loop Semi Join (cost=0.00..315085375.74 rows=63 width=3142)\n> > Join Filter: ((tmp.mdc_domain_reverse)::text ~~\n> ((xxx.reverse_pd)::text || '.%'::text))\n> > -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n> > -> Materialize (cost=0.00..31811.93 rows=1442062 width=17)\n> > -> Seq Scan on tmp (cost=0.00..24601.62 rows=1442062\n> width=17)\n> >\n> > saleshub=# EXPLAIN create table tmp2 as select xxx.* from xxx xxx\n> where exists (select 1 from tmp\n> > where mdc_domain_reverse like 'moc.ytirucesspc%') ;\n> >\n> QUERY PLAN\n> >\n> ------------------------------------------------------------------------\n> ------------------------------\n> > ------------------------------------------\n> > Result (cost=0.06..6276.53 rows=12547 width=3142)\n> > One-Time Filter: $0\n> > InitPlan 1 (returns $0)\n> > -> Index Scan using tmp_txt_idx_mdc on tmp (cost=0.00..8.53\n> rows=144 width=0)\n> > Index Cond: (((mdc_domain_reverse)::text ~>=~\n> 'moc.ytirucesspc'::text) AND\n> > ((mdc_domain_reverse)::text ~<~ 'moc.ytirucesspd'::text))\n> > Filter: ((mdc_domain_reverse)::text ~~\n> 'moc.ytirucesspc%'::text)\n> > -> Seq Scan on xxx (cost=0.00..6276.47 rows=12547 width=3142)\n>\n> I don't really understand what your problem is, but if\n> you are complaining that no index is used for the LIKE\n> condition in the first query, you're out of luck:\n>\n> The planner has no way of knowing if the contents of\n> xxx.reverse_pd start with \"%\" or not.\n>\n> Yours,\n> Laurenz Albe\n>\n\nHi Albe,Thank you for your reply ,Please find the details of table descriptiontest=# \\d xxx                           Table \"public.xxx\"\n            Column            |            Type             | Modifiers ------------------------------+-----------------------------+----------- crawler_id                   | bigint                      | \n effective_org                | character varying(255)      |  reverse_pd                   | character varying(255)      |  Indexes:    \"xxx_rev_pd_idx1\" btree (reverse_pd)\ntest =#\\d tmp                   Table \"public.tmp\"       Column       |          Type          | Modifiers --------------------+------------------------+-----------\n id                 | bigint                 |  mdc_domain_reverse | character varying(255) | Indexes:    \"tmp_idx1\" btree (mdc_domain_reverse)    \"tmp_txt_idx_mdc\" btree (mdc_domain_reverse varchar_pattern_ops)\ntest=# EXPLAIN   select xxx.* from xxx xxx where exists (select 1 from tmp where mdc_domain_reverse like 'ttt' || '.%');                                                QUERY PLAN                                                \n---------------------------------------------------------------------------------------------------------- Result  (cost=0.03..2249.94 rows=13591 width=3141)   One-Time Filter: $0   InitPlan 1 (returns $0)\n     ->  Index Only Scan using tmp_txt_idx_mdc on tmp  (cost=0.00..4.27 rows=144 width=0)           Index Cond: ((mdc_domain_reverse ~>=~ 
'ttt.'::text) AND (mdc_domain_reverse ~<~ 'ttt/'::text))\n           Filter: ((mdc_domain_reverse)::text ~~ 'ttt.%'::text)   ->  Seq Scan on xxx  (cost=0.00..2249.91 rows=13591 width=3141)(7 rows)saleshub=# EXPLAIN   select xxx.* from xxx xxx where exists (select 1 from tmp where mdc_domain_reverse like xxx.reverse_pd || '.%');\n                                        QUERY PLAN                                         ------------------------------------------------------------------------------------------- Nested Loop Semi Join  (cost=0.00..341301641.67 rows=68 width=3141)\n   Join Filter: ((tmp.mdc_domain_reverse)::text ~~ ((xxx.reverse_pd)::text || '.%'::text))   ->  Seq Scan on xxx  (cost=0.00..2249.91 rows=13591 width=3141)   ->  Materialize  (cost=0.00..31811.93 rows=1442062 width=18)\n         ->  Seq Scan on tmp  (cost=0.00..24601.62 rows=1442062 width=18)(5 rows)My question was any chance to use  query planner with above index ? or i want to change the query ?\nRegardsManoj K P On Mon, Nov 12, 2012 at 2:01 PM, Albe Laurenz <[email protected]> wrote:\nK P Manoj wrote:\n> I am facing query performance in one of my testing server.\n> How i can create index with table column name ?\n> EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp\nwhere mdc_domain_reverse like\n> xxx.reverse_pd || '.%');\n>                                         QUERY PLAN\n>\n------------------------------------------------------------------------\n-------------------\n>  Nested Loop Semi Join  (cost=0.00..315085375.74 rows=63 width=3142)\n>    Join Filter: ((tmp.mdc_domain_reverse)::text ~~\n((xxx.reverse_pd)::text || '.%'::text))\n>    ->  Seq Scan on xxx  (cost=0.00..6276.47 rows=12547 width=3142)\n>    ->  Materialize  (cost=0.00..31811.93 rows=1442062 width=17)\n>          ->  Seq Scan on tmp  (cost=0.00..24601.62 rows=1442062\nwidth=17)\n>\n> saleshub=# EXPLAIN  create table tmp2 as select xxx.* from xxx xxx\nwhere exists (select 1 from tmp\n> where mdc_domain_reverse like 'moc.ytirucesspc%') ;\n>\nQUERY PLAN\n>\n------------------------------------------------------------------------\n------------------------------\n> ------------------------------------------\n>  Result  (cost=0.06..6276.53 rows=12547 width=3142)\n>    One-Time Filter: $0\n>    InitPlan 1 (returns $0)\n>      ->  Index Scan using tmp_txt_idx_mdc on tmp  (cost=0.00..8.53\nrows=144 width=0)\n>            Index Cond: (((mdc_domain_reverse)::text ~>=~\n'moc.ytirucesspc'::text) AND\n> ((mdc_domain_reverse)::text ~<~ 'moc.ytirucesspd'::text))\n>            Filter: ((mdc_domain_reverse)::text ~~\n'moc.ytirucesspc%'::text)\n>    ->  Seq Scan on xxx  (cost=0.00..6276.47 rows=12547 width=3142)\n\nI don't really understand what your problem is, but if\nyou are complaining that no index is used for the LIKE\ncondition in the first query, you're out of luck:\n\nThe planner has no way of knowing if the contents of\nxxx.reverse_pd start with \"%\" or not.\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 12 Nov 2012 14:12:28 +0530", "msg_from": "K P Manoj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index is not using" }, { "msg_contents": "K P Manoj wrote:\n> Please find the details of table description\n> \n> test=# \\d xxx\n> Table \"public.xxx\"\n> Column | Type |\nModifiers\n>\n------------------------------+-----------------------------+-----------\n> crawler_id | bigint |\n> effective_org | character varying(255) |\n> reverse_pd | character varying(255) |\n> Indexes:\n> \"xxx_rev_pd_idx1\" btree (reverse_pd)\n> \n> 
\n> test =#\\d tmp\n> Table \"public.tmp\"\n> Column | Type | Modifiers\n> --------------------+------------------------+-----------\n> id | bigint |\n> mdc_domain_reverse | character varying(255) |\n> Indexes:\n> \"tmp_idx1\" btree (mdc_domain_reverse)\n> \"tmp_txt_idx_mdc\" btree (mdc_domain_reverse varchar_pattern_ops)\n> \n> \n> test=# EXPLAIN select xxx.* from xxx xxx where exists (select 1 from\ntmp where mdc_domain_reverse\n> like 'ttt' || '.%');\n> QUERY PLAN\n>\n------------------------------------------------------------------------\n------------------------------\n> ----\n> Result (cost=0.03..2249.94 rows=13591 width=3141)\n> One-Time Filter: $0\n> InitPlan 1 (returns $0)\n> -> Index Only Scan using tmp_txt_idx_mdc on tmp\n(cost=0.00..4.27 rows=144 width=0)\n> Index Cond: ((mdc_domain_reverse ~>=~ 'ttt.'::text) AND\n(mdc_domain_reverse ~<~\n> 'ttt/'::text))\n> Filter: ((mdc_domain_reverse)::text ~~ 'ttt.%'::text)\n> -> Seq Scan on xxx (cost=0.00..2249.91 rows=13591 width=3141)\n> (7 rows)\n> \n> saleshub=# EXPLAIN select xxx.* from xxx xxx where exists (select 1\nfrom tmp where\n> mdc_domain_reverse like xxx.reverse_pd || '.%');\n> QUERY PLAN\n>\n------------------------------------------------------------------------\n-------------------\n> Nested Loop Semi Join (cost=0.00..341301641.67 rows=68 width=3141)\n> Join Filter: ((tmp.mdc_domain_reverse)::text ~~\n((xxx.reverse_pd)::text || '.%'::text))\n> -> Seq Scan on xxx (cost=0.00..2249.91 rows=13591 width=3141)\n> -> Materialize (cost=0.00..31811.93 rows=1442062 width=18)\n> -> Seq Scan on tmp (cost=0.00..24601.62 rows=1442062\nwidth=18)\n> (5 rows)\n> \n> \n> My question was any chance to use query planner with above index ? or\ni want to change the query ?\n\nIt looks like I understood you right, and my answer applies:\n\n> \tI don't really understand what your problem is, but if\n> \tyou are complaining that no index is used for the LIKE\n> \tcondition in the first query, you're out of luck:\n> \n> \tThe planner has no way of knowing if the contents of\n> \txxx.reverse_pd start with \"%\" or not.\n\nThere is no chance to have the index used with this query.\n\nYou'll have to change the query so that the LIKE pattern\nstarts with a constant.\n\nMaybe in your case (few entries in \"xxx\") you could use a\nPL/SQL function that dynamically generates a query for each\nrow in \"xxx\".\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Mon, 12 Nov 2012 10:44:35 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index is not using" } ]
[ { "msg_contents": "Dear All\n\nI am currently implementing using a compressed binary storage scheme \ngenotyping data. These are basically vectors of binary data which may be\nmegabytes in size.\n\nOur current implementation uses the data type bit varying.\n\nWhat we want to do is very simple: we want to retrieve such records from\nthe database and transfer it unaltered to the client which will do\nsomething (uncompressing) with it. As massive amounts of data are to be\nmoved, speed is of great importance, precluding any to and fro\nconversions.\n\nOur current implementation uses Perl DBI; we can retrieve the data ok,\nbut apparently there is some converting going on.\n\nFurther, we would like to use ODBC from Fortran90 (wrapping the\nC-library) for such transfers. However, all sorts funny things happen\nhere which look like conversion issues.\n\nIn old fashioned network database some decade ago (in pre SQL times)\nthis was no problem. Maybe there is someone here who knows the PG\ninternals sufficiently well to give advice on how big blocks of memory \n(i.e. bit varying records) can between transferred UNALTERED between\nbackend and clients.\n\nlooking forward to you response.\n\ngreetings\n\nEildert\n\n\n\n-- \nEildert Groeneveld\n===================================================\nInstitute of Farm Animal Genetics (FLI)\nMariensee 31535 Neustadt Germany\nTel : (+49)(0)5034 871155 Fax : (+49)(0)5034 871143\ne-mail: [email protected] \nweb: http://vce.tzv.fal.de\n==================================================\n\n\n", "msg_date": "Mon, 12 Nov 2012 11:45:48 +0100", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": true, "msg_subject": "fast read of binary data" }, { "msg_contents": "Eildert Groeneveld wrote:\r\n> I am currently implementing using a compressed binary storage scheme\r\n> genotyping data. These are basically vectors of binary data which may be\r\n> megabytes in size.\r\n> \r\n> Our current implementation uses the data type bit varying.\r\n> \r\n> What we want to do is very simple: we want to retrieve such records from\r\n> the database and transfer it unaltered to the client which will do\r\n> something (uncompressing) with it. As massive amounts of data are to be\r\n> moved, speed is of great importance, precluding any to and fro\r\n> conversions.\r\n> \r\n> Our current implementation uses Perl DBI; we can retrieve the data ok,\r\n> but apparently there is some converting going on.\r\n> \r\n> Further, we would like to use ODBC from Fortran90 (wrapping the\r\n> C-library) for such transfers. However, all sorts funny things happen\r\n> here which look like conversion issues.\r\n> \r\n> In old fashioned network database some decade ago (in pre SQL times)\r\n> this was no problem. Maybe there is someone here who knows the PG\r\n> internals sufficiently well to give advice on how big blocks of memory\r\n> (i.e. 
bit varying records) can between transferred UNALTERED between\r\n> backend and clients.\r\n\r\nUsing the C API you can specify binary mode for your data, which\r\nmeand that they won't be converted.\r\n\r\nI don't think you will be able to use this with DBI or ODBC,\r\nbut maybe binary corsors can help\r\n(http://www.postgresql.org/docs/current/static/sql-declare.html),\r\nbut I don't know if DBI or ODBC handles them well.\r\n\r\nIf you can avoid DBI or ODBC, that would be best.\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Mon, 12 Nov 2012 12:18:37 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast read of binary data" }, { "msg_contents": "On 12-11-2012 11:45, Eildert Groeneveld wrote:\n> Dear All\n>\n> I am currently implementing using a compressed binary storage scheme\n> genotyping data. These are basically vectors of binary data which may be\n> megabytes in size.\n>\n> Our current implementation uses the data type bit varying.\n\nWouldn't 'bytea' be a more logical choice for binary data?\nhttp://www.postgresql.org/docs/9.2/interactive/datatype-binary.html\n\n> What we want to do is very simple: we want to retrieve such records from\n> the database and transfer it unaltered to the client which will do\n> something (uncompressing) with it. As massive amounts of data are to be\n> moved, speed is of great importance, precluding any to and fro\n> conversions.\n>\n> Our current implementation uses Perl DBI; we can retrieve the data ok,\n> but apparently there is some converting going on.\n>\n> Further, we would like to use ODBC from Fortran90 (wrapping the\n> C-library) for such transfers. However, all sorts funny things happen\n> here which look like conversion issues.\n>\n> In old fashioned network database some decade ago (in pre SQL times)\n> this was no problem. Maybe there is someone here who knows the PG\n> internals sufficiently well to give advice on how big blocks of memory\n> (i.e. bit varying records) can between transferred UNALTERED between\n> backend and clients.\n\nAlthough I have no idea whether bytea is treated differently in this \ncontext. Bit varying should be about as simple as possible (given that \nit only has 0's and 1's)\n\nBest regards,\n\nArjen\n\n", "msg_date": "Mon, 12 Nov 2012 12:38:04 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast read of binary data" }, { "msg_contents": "On Mon, Nov 12, 2012 at 4:45 AM, Eildert Groeneveld\n<[email protected]> wrote:\n> Dear All\n>\n> I am currently implementing using a compressed binary storage scheme\n> genotyping data. These are basically vectors of binary data which may be\n> megabytes in size.\n>\n> Our current implementation uses the data type bit varying.\n>\n> What we want to do is very simple: we want to retrieve such records from\n> the database and transfer it unaltered to the client which will do\n> something (uncompressing) with it. As massive amounts of data are to be\n> moved, speed is of great importance, precluding any to and fro\n> conversions.\n>\n> Our current implementation uses Perl DBI; we can retrieve the data ok,\n> but apparently there is some converting going on.\n>\n> Further, we would like to use ODBC from Fortran90 (wrapping the\n> C-library) for such transfers. However, all sorts funny things happen\n> here which look like conversion issues.\n>\n> In old fashioned network database some decade ago (in pre SQL times)\n> this was no problem. 
Maybe there is someone here who knows the PG\n> internals sufficiently well to give advice on how big blocks of memory\n> (i.e. bit varying records) can between transferred UNALTERED between\n> backend and clients.\n>\n> looking forward to you response.\n\nFastest/best way to transfer binary data to/from postgres is going to\nmean direct coding against libpq since most drivers wall you off from\nthe binary protocol (this may or may not be the case with ODBC). If I\nwere you I'd be writing C code to manage the database and linking the\nC compiled object to the Fortran application. Assuming the conversion\ndoesn't go the way you want (briefly looking, there is a 'bytea as LO'\noption you may want to explore), ODBC brings nothing but complication\nin this regard unless your application has to support multiple\ndatabase vendors or you have zero C chops in-house.\n\nmerlin\n\n", "msg_date": "Mon, 12 Nov 2012 10:57:43 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast read of binary data" }, { "msg_contents": "On Mo, 2012-11-12 at 12:18 +0100, Albe Laurenz wrote:\n> Eildert Groeneveld wrote:\n> > I am currently implementing using a compressed binary storage scheme\n> > genotyping data. These are basically vectors of binary data which may be\n> > megabytes in size.\n> > \n> > Our current implementation uses the data type bit varying.\n> > \n> > What we want to do is very simple: we want to retrieve such records from\n> > the database and transfer it unaltered to the client which will do\n> > something (uncompressing) with it. As massive amounts of data are to be\n> > moved, speed is of great importance, precluding any to and fro\n> > conversions.\n> > \n> > Our current implementation uses Perl DBI; we can retrieve the data ok,\n> > but apparently there is some converting going on.\n> > \n> > Further, we would like to use ODBC from Fortran90 (wrapping the\n> > C-library) for such transfers. However, all sorts funny things happen\n> > here which look like conversion issues.\n> > \n> > In old fashioned network database some decade ago (in pre SQL times)\n> > this was no problem. Maybe there is someone here who knows the PG\n> > internals sufficiently well to give advice on how big blocks of memory\n> > (i.e. bit varying records) can between transferred UNALTERED between\n> > backend and clients.\n> \n> Using the C API you can specify binary mode for your data, which\n> meand that they won't be converted.\n> \n> I don't think you will be able to use this with DBI or ODBC,\n> but maybe binary corsors can help\n> (http://www.postgresql.org/docs/current/static/sql-declare.html),\n> but I don't know if DBI or ODBC handles them well.\n> \n> If you can avoid DBI or ODBC, that would be best.\nok, I did have a look at the libpq librar, and you are right, there is a\nway to obtain binary data from the backend through the PQexecParams\n\n res = PQexecParams(conn,\n \"DECLARE myportal CURSOR FOR select genotype_bits\nfrom v_genotype_data\",\n 0, /* zero param */\n NULL, /* let the backend deduce param type */\n paramValues,\n NULL, /* don't need param lengths since text*/\n NULL, /* default to all text params */\n 1); /* ask for binary results */\n\ngenotype_bits is defined as bit varying in the backend. 
When writing the\nresults:\n for (i = 0; i < PQntuples(res); i++)\n {\n for (j = 0; j < nFields; j++)\n fwrite(PQgetvalue(res, i, j),100000,1,f);\n }\n\nit is clear that the results are NOT in binary format:\neg(eno,snp): od -b junk |head\n0000000 061 060 061 060 061 060 061 060 061 060 061 060 061 060 061 060\n\nclearly, these are nice 0 and 1 in ASCII and not as I need it as a bit\nstream. \n\nAlso, (and in line with this) PQgetvalue(res, i, j) seems to be of type\ntext.\n\nWhat am I missing?\n\n\n\n\n\n", "msg_date": "Thu, 22 Nov 2012 08:54:04 +0100", "msg_from": "Eildert Groeneveld <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fast read of binary data" }, { "msg_contents": "On 22.11.2012 09:54, Eildert Groeneveld wrote:\n> ok, I did have a look at the libpq librar, and you are right, there is a\n> way to obtain binary data from the backend through the PQexecParams\n>\n> res = PQexecParams(conn,\n> \"DECLARE myportal CURSOR FOR select genotype_bits\n> from v_genotype_data\",\n> 0, /* zero param */\n> NULL, /* let the backend deduce param type */\n> paramValues,\n> NULL, /* don't need param lengths since text*/\n> NULL, /* default to all text params */\n> 1); /* ask for binary results */\n>\n> genotype_bits is defined as bit varying in the backend. When writing the\n> results:\n> for (i = 0; i< PQntuples(res); i++)\n> {\n> for (j = 0; j< nFields; j++)\n> fwrite(PQgetvalue(res, i, j),100000,1,f);\n> }\n>\n> it is clear that the results are NOT in binary format:\n> eg(eno,snp): od -b junk |head\n> 0000000 061 060 061 060 061 060 061 060 061 060 061 060 061 060 061 060\n\nYou need to ask for binary results when you execute the FETCH \nstatements. Asking for binary results on the DECLARE CURSOR statement \nhas no effect, as DECLARE CURSOR doesn't return any results; it's the \nFETCH that follows that returns the result set.\n\n- Heikki\n\n", "msg_date": "Thu, 22 Nov 2012 11:17:15 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast read of binary data" } ]
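For the libpq code shown above, Heikki's point is that the resultFormat flag belongs on the PQexecParams call that runs the FETCH, not on the DECLARE. The other route Laurenz mentioned is a binary cursor, where the output format is fixed on the server side; a sketch using the view name from the message above:

    BEGIN;

    -- BINARY makes the server return the rows in binary format no matter how
    -- the FETCH itself is issued, so PQgetvalue() then yields the raw bit
    -- varying wire representation rather than a string of '0'/'1' characters.
    DECLARE myportal BINARY CURSOR FOR
        SELECT genotype_bits FROM v_genotype_data;

    FETCH 100 FROM myportal;

    CLOSE myportal;
    COMMIT;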
[ { "msg_contents": "Hi,\n\nI had installed postgreSQL v9.2 in Windows XP SP3.\n\nMy PC specs:\nProcessor: Pentium Dual Core 2.09 GHz\nRAM: 2GB\n\nThe postgreSQL is run as windows service (manual).\n\nThe problem is the postgreSQL service uses a lot of memory and lags\nthe OS if running in long time (about 2 hours or more) so I had to\nrestart the postgreSQL service everytime it happened. I never do any\nbig querying process so far. I only ever run it for adempiere ERP\nsoftware and a small struts 2 project.\n\nSee this screenshot link from the Process Explorer:\n\nhttp://i45.tinypic.com/vr4t3b.png\n\nYou can see that there are a lot of threads spawned. Is the threads\nthat caused the high memory usage?\n\nIs there a way to decrease the memory usage?\n\n\nThanks & Regards,\nWM\n\n", "msg_date": "Mon, 12 Nov 2012 21:17:42 +0700", "msg_from": "Wu Ming <[email protected]>", "msg_from_op": true, "msg_subject": "PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "Wu Ming wrote:\n> I had installed postgreSQL v9.2 in Windows XP SP3.\n> \n> My PC specs:\n> Processor: Pentium Dual Core 2.09 GHz\n> RAM: 2GB\n> \n> The postgreSQL is run as windows service (manual).\n> \n> The problem is the postgreSQL service uses a lot of memory and lags\n> the OS if running in long time (about 2 hours or more) so I had to\n> restart the postgreSQL service everytime it happened. I never do any\n> big querying process so far. I only ever run it for adempiere ERP\n> software and a small struts 2 project.\n> \n> See this screenshot link from the Process Explorer:\n> \n> http://i45.tinypic.com/vr4t3b.png\n> \n> You can see that there are a lot of threads spawned. Is the threads\n> that caused the high memory usage?\n> \n> Is there a way to decrease the memory usage?\n\nIs the machine dedicated to PostgreSQL?\n\nWhat did you set the following patameters to:\n\nshared_buffers\nmax_connections\nwork_mem\nmaintenance_work_mem\n\nYou probably need to reduce some of these settings.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Mon, 12 Nov 2012 15:29:11 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "On Mon, Nov 12, 2012 at 8:17 AM, Wu Ming <[email protected]> wrote:\n> Hi,\n>\n> I had installed postgreSQL v9.2 in Windows XP SP3.\n>\n> My PC specs:\n> Processor: Pentium Dual Core 2.09 GHz\n> RAM: 2GB\n>\n> The postgreSQL is run as windows service (manual).\n>\n> The problem is the postgreSQL service uses a lot of memory and lags\n> the OS if running in long time (about 2 hours or more) so I had to\n> restart the postgreSQL service everytime it happened. I never do any\n> big querying process so far. I only ever run it for adempiere ERP\n> software and a small struts 2 project.\n>\n> See this screenshot link from the Process Explorer:\n>\n> http://i45.tinypic.com/vr4t3b.png\n>\n> You can see that there are a lot of threads spawned. Is the threads\n> that caused the high memory usage?\n>\n> Is there a way to decrease the memory usage?\n\nI don't think memory usage is all that high. You've got less than\n50mb reserved in memory which is not outrageous for a database server\n(that said, windows per process memory usage is higher than *nix for\nvarious reasons). High virtual memory sizes are due to shared memory\nimplementation and should not be of large concern. 
Albe made some\ngood suggestions, but you can also disable autovacuum which would\neliminate one of the spawned processes at the expense of making all\nvacuum and analyze operations manual. You used to also be able to\ndisable statistics gathering, but AIUI that's no longer possible.\n\nmerlin\n\n", "msg_date": "Mon, 12 Nov 2012 08:52:57 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "On 11/12/2012 10:17 PM, Wu Ming wrote:\n> See this screenshot link from the Process Explorer:\n>\n> http://i45.tinypic.com/vr4t3b.png\nThat looks pretty reasonable to me.\n\nThe \"virtual size\" includes the shared memory segment, so the\nper-process use is actually much lower than it looks. The real use will\nbe closer to one of the virtual sizes plus the working sets of all the\nrest of the processes. They are processes, not threads.\n\nThere may be a genuine issue here, but it isn't demonstrated by the\nscreenshot.\n\nHow do you determine that it's \"lagging\"? What's the overall system\nmemory pressure like? Check Task Manager. What's the system's swap\nusage? Are there other big processes?\n\n> You can see that there are a lot of threads spawned. Is the threads\n> that caused the high memory usage?\nPostgreSQL has a process-based architecture. They're processes not\nthreads. Each process only uses a fairly small amount of memory - the\nexact amount depends on settings like work_mem and what the queries\nrunning are doing, but it's usually not much. Most of the apparent use\nis shared memory.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 13 Nov 2012 10:08:06 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "Please reply to the list, not directly to me. Comments follow in-line.\n\nOn 11/13/2012 11:37 PM, Wu Ming wrote:\n> Hi,\n>\n> What column in Process Explorer to determine memory usage? Currently I\n> thought \"Working Set\" is the correct one.\nAs I said, it just isn't that simple when shared memory is involved. A\nrough measure for PostgreSQL is the \"virtual size\" of one of the\nprocesses, plus the working sets of all the others. Alternately, you can\nreasonably estimate the memory consumption by adding all the working\nsets and then adding the value of shared_buffers to that - this will\nunder-estimate usage slightly because PostgreSQL also uses shared memory\nfor other things, but not tons of it in a normal configuration.\n> The 'lagging' is like when you try to alt+tab or activating/focusing\n> other application window, or changing tab in browser, it goes slow or\n> lagged in its UI loading.\nSure, that's what you see, but you should really be looking at the\nnumbers. Swap in and out bytes, memory usage, etc. In Windows 7 or\nWin2k8 Server you'd use the Performance Monitor for that; I don't\nremember off the top of my head where to look in XP.\n> My firefox has many tabs opened (around 30 tabs) and eclipse is well\n> known as its high memory usage.\nOn a 2GB machine? Yup, that'll do it.\n\nYou've shown a screenshot that suggests that Pg is using relatively\nlittle RAM, and you're running two known memory pigs. 
I'd say your\nproblem has nothing to do with PostgreSQL.\n> Then usually I also opened opera and\n> chrome with ~10-20 tabs opened.\nTime to buy more RAM.\n> I saw that chrome also spawned many\n> process (I had 4 tabs opened, but it shows 8 child process). They\n> might be the big process that probably is the main cause of the\n> lagging.\nIt's going to be everything adding up. Chrome, Eclipse, Firefox, all\nfighting for RAM.\n\nBTW, chrome uses a multi-process architecture like PostgreSQL, but\nunlike PostgreSQL it does not use shared memory, so you can tell how\nmuch RAM Chrome is using very easily by adding up the working sets.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 14 Nov 2012 07:07:00 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "Hi,\n\n> As I said, it just isn't that simple when shared memory is involved. A\n> rough measure for PostgreSQL is the \"virtual size\" of one of the\n> processes, plus the working sets of all the others. Alternately, you can\n> reasonably estimate the memory consumption by adding all the working\n> sets and then adding the value of shared_buffers to that - this will\n> under-estimate usage slightly because PostgreSQL also uses shared memory\n> for other things, but not tons of it in a normal configuration.\n\nThis is interesting. About the \"virtual size of one of the process\",\nwhich process I should look up? Is the one who has the biggest virtual\nsize?\n\nhttp://i45.tinypic.com/vr4t3b.png\n\nFor example, from the above screenshot, the biggest virtual size from\nall postgresql process is 740004. So can we said the total approximate\nof memory usage of the postgresql service is 740004 K +\ntotal_of_working_sets (4844 K + 10056 K + 5408 K + 6020 K + ...) ?\n\n\n> Sure, that's what you see, but you should really be looking at the\n> numbers. Swap in and out bytes, memory usage, etc. In Windows 7 or\n> Win2k8 Server you'd use the Performance Monitor for that; I don't\n> remember off the top of my head where to look in XP.\n\nI had total paging file size = 3GB.\n\nThere is perfmon.exe in windows xp, but don't know how to use and\nanalyze the graph.\n\n\n> I'd say your problem has nothing to do with PostgreSQL.\n\nMaybe you're right. If I close one of the memory porks, it gets a bit\nbetter. Maybe I was too quick to blame postgreSQL, it's just that I\ncan't close and restart other applications because they are either too\nimportant or slow to reload, where postgresql service is very quick in\nrestarting. I hope it'll understand.\n\n\n\nOn Wed, Nov 14, 2012 at 6:07 AM, Craig Ringer <[email protected]> wrote:\n> Please reply to the list, not directly to me. Comments follow in-line.\n>\n> On 11/13/2012 11:37 PM, Wu Ming wrote:\n>> Hi,\n>>\n>> What column in Process Explorer to determine memory usage? Currently I\n>> thought \"Working Set\" is the correct one.\n> As I said, it just isn't that simple when shared memory is involved. A\n> rough measure for PostgreSQL is the \"virtual size\" of one of the\n> processes, plus the working sets of all the others. 
Alternately, you can\n> reasonably estimate the memory consumption by adding all the working\n> sets and then adding the value of shared_buffers to that - this will\n> under-estimate usage slightly because PostgreSQL also uses shared memory\n> for other things, but not tons of it in a normal configuration.\n>> The 'lagging' is like when you try to alt+tab or activating/focusing\n>> other application window, or changing tab in browser, it goes slow or\n>> lagged in its UI loading.\n> Sure, that's what you see, but you should really be looking at the\n> numbers. Swap in and out bytes, memory usage, etc. In Windows 7 or\n> Win2k8 Server you'd use the Performance Monitor for that; I don't\n> remember off the top of my head where to look in XP.\n>> My firefox has many tabs opened (around 30 tabs) and eclipse is well\n>> known as its high memory usage.\n> On a 2GB machine? Yup, that'll do it.\n>\n> You've shown a screenshot that suggests that Pg is using relatively\n> little RAM, and you're running two known memory pigs. I'd say your\n> problem has nothing to do with PostgreSQL.\n>> Then usually I also opened opera and\n>> chrome with ~10-20 tabs opened.\n> Time to buy more RAM.\n>> I saw that chrome also spawned many\n>> process (I had 4 tabs opened, but it shows 8 child process). They\n>> might be the big process that probably is the main cause of the\n>> lagging.\n> It's going to be everything adding up. Chrome, Eclipse, Firefox, all\n> fighting for RAM.\n>\n> BTW, chrome uses a multi-process architecture like PostgreSQL, but\n> unlike PostgreSQL it does not use shared memory, so you can tell how\n> much RAM Chrome is using very easily by adding up the working sets.\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\n", "msg_date": "Wed, 14 Nov 2012 12:56:17 +0700", "msg_from": "Wu Ming <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "On 11/14/2012 01:56 PM, Wu Ming wrote:\n\n> This is interesting. About the \"virtual size of one of the process\",\n> which process I should look up? Is the one who has the biggest virtual\n> size?\n\nThinking about this some more, I haven't checked to see if Windows adds\ndirtied shared_buffers to the process's working set. If so, you'd still\nbe multiply counting shared memory. In that case, since you can't use an\napproach like Depesz writes about here for Linux:\n\nhttp://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\nthen it's going to be pretty hard to actually work it out.\npg_buffercache might be of some help (\nhttp://www.postgresql.org/docs/current/static/pgbuffercache.html\n<http://www.postgresql.org/docs/9.1/static/pgbuffercache.html>) but it's\nnot exactly friendly.\n\nYes, it's absurd that it's so hard to work out how much memory Pg uses.\nIt'd be nice if Pg provided better tools for this by allowing the\npostmaster to interrogate backends' memory contexts, though that'd only\nreport how much memory Pg thought it was using, not how much memory it\nwas actually using from the OS. Really, OS-specific tools are required,\nand nobody's written them - at least, I'm not aware of any that've been\npublished.\n\nMost of the problem is that operating systems make it so hard to tell\nwhere memory is going when shared memory is involved.\n\n> http://i45.tinypic.com/vr4t3b.png\n>\n> For example, from the above screenshot, the biggest virtual size from\n> all postgresql process is 740004. 
So can we said the total approximate\n> of memory usage of the postgresql service is 740004 K +\n> total_of_working_sets (4844 K + 10056 K + 5408 K + 6020 K + ...) ?\n*if* Windows XP doesn't add dirtied shared buffers to the working set,\nthen that would be a reasonable approximation.\n\nIf it does, then it'd be massively out because it'd be double-counting\nshared memory.\n\nOff the top of my head I'm not sure how best to test this. Maybe if you\ndo a simple query like `SELECT * FROM some_big_table` in `psql` and dump\nthe result to the null device (\\o NUL in windows if I recall correctly,\nbut again not tested) or a temp file and see how much the backend grows.\nIf it grows more than a few hundred K then I expect it's probably having\nthe dirtied shared_buffers counted against it.\n> Maybe you're right. If I close one of the memory porks, it gets a bit\n> better. Maybe I was too quick to blame postgreSQL, it's just that I\n> can't close and restart other applications because they are either too\n> important or slow to reload, where postgresql service is very quick in\n> restarting. I hope it'll understand.\nAnd, of course, because PostgreSQL looks like it uses a TON of memory,\neven when it's really using only a small amount.\n\nThis has been an ongoing source of confusion, but it's one that isn't\ngoing to go away until OSes offer a way to easily ask \"how much RAM is\nthis group of processes using in total\".\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 11/14/2012 01:56 PM, Wu Ming wrote:\n \n\n\n\n\nThis is interesting. About the \"virtual size of one of the process\",\nwhich process I should look up? Is the one who has the biggest virtual\nsize?\n\n\n Thinking about this some more, I haven't checked to see if Windows\n adds dirtied shared_buffers to the process's working set. If so,\n you'd still be multiply counting shared memory. In that case, since\n you can't use an approach like Depesz writes about here for Linux:\n\nhttp://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\n then it's going to be pretty hard to actually work it out.\n pg_buffercache might be of some help (\n \nhttp://www.postgresql.org/docs/current/static/pgbuffercache.html)\n but it's not exactly friendly.\n\n Yes, it's absurd that it's so hard to work out how much memory Pg\n uses. It'd be nice if Pg provided better tools for this by allowing\n the postmaster to interrogate backends' memory contexts, though\n that'd only report how much memory Pg thought it was using, not how\n much memory it was actually using from the OS. Really, OS-specific\n tools are required, and nobody's written them - at least, I'm not\n aware of any that've been published.\n\n Most of the problem is that operating systems make it so hard to\n tell where memory is going when shared memory is involved.\n\n\nhttp://i45.tinypic.com/vr4t3b.png\n\nFor example, from the above screenshot, the biggest virtual size from\nall postgresql process is 740004. So can we said the total approximate\nof memory usage of the postgresql service is 740004 K +\ntotal_of_working_sets (4844 K + 10056 K + 5408 K + 6020 K + ...) ?\n\n *if* Windows XP doesn't add dirtied shared buffers to the working\n set, then that would be a reasonable approximation.\n\n If it does, then it'd be massively out because it'd be\n double-counting shared memory.\n\n Off the top of my head I'm not sure how best to test this. 
Maybe if\n you do a simple query like `SELECT * FROM some_big_table` in `psql`\n and dump the result to the null device (\\o NUL in windows if I\n recall correctly, but again not tested) or a temp file and see how\n much the backend grows. If it grows more than a few hundred K then I\n expect it's probably having the dirtied shared_buffers counted\n against it.\n \n\nMaybe you're right. If I close one of the memory porks, it gets a bit\nbetter. Maybe I was too quick to blame postgreSQL, it's just that I\ncan't close and restart other applications because they are either too\nimportant or slow to reload, where postgresql service is very quick in\nrestarting. I hope it'll understand.\n\n And, of course, because PostgreSQL looks like it uses a TON of\n memory, even when it's really using only a small amount.\n\n This has been an ongoing source of confusion, but it's one that\n isn't going to go away until OSes offer a way to easily ask \"how\n much RAM is this group of processes using in total\".\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 14 Nov 2012 14:47:36 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" }, { "msg_contents": "On 14 November 2012 06:47, Craig Ringer <[email protected]> wrote:\n> Yes, it's absurd that it's so hard to work out how much memory Pg uses. It'd\n> be nice if Pg provided better tools for this by allowing the postmaster to\n> interrogate backends' memory contexts, though that'd only report how much\n> memory Pg thought it was using, not how much memory it was actually using\n> from the OS. Really, OS-specific tools are required, and nobody's written\n> them - at least, I'm not aware of any that've been published.\n\nI wrote a GDB Python script that interrogates a running backend about\nmemory context information, walking a tree of contexts, which is based\nalmost entirely on standard infrastructure used by\nMemoryContextStats(). It's quite possible. You're quite right to say\nthat OS-specific tools would probably do a more satisfactory job,\nthough, particularly if you're not interested in *what* Postgres is\ndoing with memory, but need to summarise it usefully.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n", "msg_date": "Wed, 14 Nov 2012 12:04:52 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostreSQL v9.2 uses a lot of memory in Windows XP" } ]
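A rough way to put numbers on this from inside the database, following Craig's pg_buffercache pointer: check the settings Laurenz asked about, then count how many shared buffers are actually occupied (assuming the default 8 kB block size):

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem',
                   'maintenance_work_mem', 'max_connections');

    -- Requires the pg_buffercache contrib extension.
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT count(*)                        AS buffers_in_use,
           pg_size_pretty(count(*) * 8192) AS approx_size
    FROM pg_buffercache
    WHERE relfilenode IS NOT NULL;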
[ { "msg_contents": "This query is a couple orders of magnitude slower the first result is\n9.2.1, the second 9.1\n\n=# explain analyze SELECT note_sets.\"id\" AS t0_r0,\nnote_sets.\"note_set_source_id\" AS t0_r1, note_sets.\"parent_id\" AS t0_r2,\nnote_sets.\"business_entity_id\" AS t0_r3, note_sets.\"created_at\" AS t0_r4,\nnote_sets.\"updated_at\" AS t0_r5, note_sets.\"created_by\" AS t0_r6,\nnote_sets.\"updated_by\" AS t0_r7, note_set_sources.\"id\" AS t1_r0,\nnote_set_sources.\"name\" AS t1_r1, note_set_sources.\"code\" AS t1_r2,\nnote_set_sources.\"description\" AS t1_r3, note_set_sources.\"status\" AS\nt1_r4, note_set_sources.\"created_at\" AS t1_r5,\nnote_set_sources.\"updated_at\" AS t1_r6, note_set_sources.\"created_by\" AS\nt1_r7, note_set_sources.\"updated_by\" AS t1_r8, notes.\"id\" AS t2_r0,\nnotes.\"note_set_id\" AS t2_r1, notes.\"subject\" AS t2_r2, notes.\"text\" AS\nt2_r3, notes.\"status\" AS t2_r4, notes.\"is_dissmissable\" AS t2_r5,\nnotes.\"is_home\" AS t2_r6, notes.\"created_at\" AS t2_r7, notes.\"updated_at\"\nAS t2_r8, notes.\"created_by\" AS t2_r9, notes.\"updated_by\" AS t2_r10 FROM\nnote_sets LEFT OUTER JOIN note_set_sources ON note_set_sources.id =\nnote_sets.note_set_source_id LEFT OUTER JOIN notes ON notes.note_set_id =\nnote_sets.id AND notes.\"status\" = E'A' WHERE (note_sets.id IN (WITH\nRECURSIVE parent_noteset as (SELECT id FROM note_sets where id = 8304085\nUNION SELECT note_sets.id FROM parent_noteset parent_noteset, note_sets\nnote_sets WHERE note_sets.parent_id = parent_noteset.id) SELECT id FROM\nparent_noteset))\nCareCloud_Prod-# ;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=4704563.58..5126773.60 rows=10184900 width=773)\n(actual time=14414.946..14554.962 rows=52 loops=1)\n Hash Cond: (public.note_sets.note_set_source_id = note_set_sources.id)\n -> Hash Right Join (cost=4704562.22..4986729.86 rows=10184900\nwidth=720) (actual time=14414.908..14554.878 rows=52 loops=1)\n Hash Cond: (notes.note_set_id = public.note_sets.id)\n -> Seq Scan on notes (cost=0.00..33097.97 rows=883383 width=680)\n(actual time=0.021..292.329 rows=882307 loops=1)\n Filter: (status = 'A'::bpchar)\n Rows Removed by Filter: 862\n -> Hash (cost=4497680.97..4497680.97 rows=10184900 width=40)\n(actual time=13848.559..13848.559 rows=46 loops=1)\n Buckets: 524288 Batches: 4 Memory Usage: 7kB\n -> Nested Loop (cost=4496147.89..4497680.97 rows=10184900\nwidth=40) (actual time=13847.537..13848.125 rows=46 loops=1)\n -> HashAggregate (cost=4496147.89..4496149.89\nrows=200 width=4) (actual time=13847.410..13847.425 rows=46 loops=1)\n -> CTE Scan on parent_noteset\n (cost=4495503.38..4495900.00 rows=19831 width=4) (actual\ntime=0.058..13847.350 rows=46 loops=1)\n CTE parent_noteset\n -> Recursive Union\n (cost=0.00..4495503.38 rows=19831 width=4) (actual time=0.057..13847.284\nrows=46 loops=1)\n -> Index Only Scan using\nnote_sets_pkey on note_sets (cost=0.00..7.85 rows=1 width=4) (actual\ntime=0.054..0.055 rows=1 loops=1)\n Index Cond: (id = 8304085)\n Heap Fetches: 1\n -> Hash Join\n (cost=0.33..449509.89 rows=1983 width=4) (actual time=2788.672..4615.686\nrows=15 loops=3)\n Hash Cond:\n(note_sets.parent_id = parent_noteset.id)\n -> Seq Scan on note_sets\n (cost=0.00..373102.99 rows=20369799 width=8) (actual time=0.006..2288.076\nrows=20355654 loops=3)\n -> Hash (cost=0.20..0.20\nrows=10 width=4) (actual time=0.006..0.006 
rows=15 loops=3)\n Buckets: 1024\n Batches: 1 Memory Usage: 1kB\n -> WorkTable Scan on\nparent_noteset (cost=0.00..0.20 rows=10 width=4) (actual time=0.001..0.003\nrows=15 loops=3)\n -> Index Scan using note_sets_pkey on note_sets\n (cost=0.00..7.65 rows=1 width=40) (actual time=0.014..0.014 rows=1\nloops=46)\n Index Cond: (id = parent_noteset.id)\n -> Hash (cost=1.16..1.16 rows=16 width=53) (actual time=0.010..0.010\nrows=16 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on note_set_sources (cost=0.00..1.16 rows=16\nwidth=53) (actual time=0.004..0.005 rows=16 loops=1)\n Total runtime: 14555.254 ms\n(29 rows)\n\nCareCloud_Prod=#\n\n\n\n# SELECT note_sets.\"id\" AS t0_r0, note_sets.\"note_set_source_id\" AS t0_r1,\nnote_sets.\"parent_id\" AS t0_r2, note_sets.\"business_entity_id\" AS t0_r3,\nnote_sets.\"created_at\" AS t0_r4, note_sets.\"updated_at\" AS t0_r5,\nnote_sets.\"created_by\" AS t0_r6, note_sets.\"updated_by\" AS t0_r7,\nnote_set_sources.\"id\" AS t1_r0, note_set_sources.\"name\" AS t1_r1,\nnote_set_sources.\"code\" AS t1_r2, note_set_sources.\"description\" AS t1_r3,\nnote_set_sources.\"status\" AS t1_r4, note_set_sources.\"created_at\" AS t1_r5,\nnote_set_sources.\"updated_at\" AS t1_r6, note_set_sources.\"created_by\" AS\nt1_r7, note_set_sources.\"updated_by\" AS t1_r8, notes.\"id\" AS t2_r0,\nnotes.\"note_set_id\" AS t2_r1, notes.\"subject\" AS t2_r2, notes.\"text\" AS\nt2_r3, notes.\"status\" AS t2_r4, notes.\"is_dissmissable\" AS t2_r5,\nnotes.\"is_home\" AS t2_r6, notes.\"created_at\" AS t2_r7, notes.\"updated_at\"\nAS t2_r8, notes.\"created_by\" AS t2_r9, notes.\"updated_by\" AS t2_r10 FROM\nnote_sets LEFT OUTER JOIN note_set_sources ON note_set_sources.id =\nnote_sets.note_set_source_id LEFT OUTER JOIN notes ON notes.note_set_id =\nnote_sets.id AND notes.\"status\" = E'A' WHERE (note_sets.id IN (WITH\nRECURSIVE parent_noteset as (SELECT id FROM note_sets where id = 8304085\nUNION SELECT note_sets.id FROM parent_noteset parent_noteset, note_sets\nnote_sets WHERE note_sets.parent_id = parent_noteset.id) SELECT id FROM\nparent_noteset));\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=249847.72..670928.98 rows=10180954 width=1512)\n(actual time=692.423..829.258 rows=48 loops=1)\n Hash Cond: (public.note_sets.note_set_source_id = note_set_sources.id)\n -> Hash Right Join (cost=249846.36..530939.50 rows=10180954 width=718)\n(actual time=692.382..829.180 rows=48 loops=1)\n Hash Cond: (notes.note_set_id = public.note_sets.id)\n -> Seq Scan on notes (cost=0.00..32981.14 rows=878550 width=678)\n(actual time=0.027..413.972 rows=878529 loops=1)\n Filter: (status = 'A'::bpchar)\n -> Hash (cost=43045.44..43045.44 rows=10180954 width=40) (actual\ntime=22.904..22.904 rows=46 loops=1)\n Buckets: 524288 Batches: 4 Memory Usage: 2kB\n -> Nested Loop (cost=41106.18..43045.44 rows=10180954\nwidth=40) (actual time=12.319..22.738 rows=46 loops=1)\n -> HashAggregate (cost=41106.18..41108.18 rows=200\nwidth=4) (actual time=11.873..11.889 rows=46 loops=1)\n -> CTE Scan on parent_noteset\n (cost=40459.39..40857.41 rows=19901 width=4) (actual time=0.492..11.843\nrows=46 loops=1)\n CTE parent_noteset\n -> Recursive Union\n (cost=0.00..40459.39 rows=19901 width=4) (actual time=0.489..11.822\nrows=46 loops=1)\n -> Index Scan using\nnote_sets_pkey on note_sets (cost=0.00..10.50 rows=1 width=4) 
(actual\ntime=0.484..0.485 rows=1 loops=1)\n Index Cond: (id = 8304085)\n -> Nested Loop\n (cost=0.00..4005.09 rows=1990 width=4) (actual time=1.534..3.764 rows=15\nloops=3)\n -> WorkTable Scan on\nparent_noteset (cost=0.00..0.20 rows=10 width=4) (actual time=0.000..0.001\nrows=15 loops=3)\n -> Index Scan using\nnote_sets_parent_id_idx on note_sets (cost=0.00..398.00 rows=199 width=8)\n(actual time=0.216..0.244 rows=1 loops=46)\n Index Cond: (parent_id\n= parent_noteset.id)\n -> Index Scan using note_sets_pkey on note_sets\n (cost=0.00..9.67 rows=1 width=40) (actual time=0.234..0.234 rows=1\nloops=46)\n Index Cond: (id = parent_noteset.id)\n -> Hash (cost=1.16..1.16 rows=16 width=794) (actual time=0.020..0.020\nrows=16 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on note_set_sources (cost=0.00..1.16 rows=16\nwidth=794) (actual time=0.012..0.014 rows=16 loops=1)\n Total runtime: 829.657 ms\n(25 rows)\n\n\nDave Cramer\n\ndave.cramer(at)credativ(dot)ca\nhttp://www.credativ.ca\n\nThis query is a couple orders of magnitude slower the first result is 9.2.1, the second 9.1=# explain analyze SELECT note_sets.\"id\" AS t0_r0, note_sets.\"note_set_source_id\" AS t0_r1, note_sets.\"parent_id\" AS t0_r2, note_sets.\"business_entity_id\" AS t0_r3, note_sets.\"created_at\" AS t0_r4, note_sets.\"updated_at\" AS t0_r5, note_sets.\"created_by\" AS t0_r6, note_sets.\"updated_by\" AS t0_r7, note_set_sources.\"id\" AS t1_r0, note_set_sources.\"name\" AS t1_r1, note_set_sources.\"code\" AS t1_r2, note_set_sources.\"description\" AS t1_r3, note_set_sources.\"status\" AS t1_r4, note_set_sources.\"created_at\" AS t1_r5, note_set_sources.\"updated_at\" AS t1_r6, note_set_sources.\"created_by\" AS t1_r7, note_set_sources.\"updated_by\" AS t1_r8, notes.\"id\" AS t2_r0, notes.\"note_set_id\" AS t2_r1, notes.\"subject\" AS t2_r2, notes.\"text\" AS t2_r3, notes.\"status\" AS t2_r4, notes.\"is_dissmissable\" AS t2_r5, notes.\"is_home\" AS t2_r6, notes.\"created_at\" AS t2_r7, notes.\"updated_at\" AS t2_r8, notes.\"created_by\" AS t2_r9, notes.\"updated_by\" AS t2_r10 FROM note_sets  LEFT OUTER JOIN note_set_sources ON note_set_sources.id = note_sets.note_set_source_id  LEFT OUTER JOIN notes ON notes.note_set_id = note_sets.id AND notes.\"status\" = E'A' WHERE (note_sets.id IN (WITH RECURSIVE parent_noteset as (SELECT id FROM note_sets where id = 8304085 UNION SELECT note_sets.id FROM parent_noteset parent_noteset, note_sets note_sets WHERE note_sets.parent_id = parent_noteset.id) SELECT id FROM parent_noteset))\nCareCloud_Prod-# ;                                                                                QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join  (cost=4704563.58..5126773.60 rows=10184900 width=773) (actual time=14414.946..14554.962 rows=52 loops=1)   Hash Cond: (public.note_sets.note_set_source_id = note_set_sources.id)\n   ->  Hash Right Join  (cost=4704562.22..4986729.86 rows=10184900 width=720) (actual time=14414.908..14554.878 rows=52 loops=1)         Hash Cond: (notes.note_set_id = public.note_sets.id)\n         ->  Seq Scan on notes  (cost=0.00..33097.97 rows=883383 width=680) (actual time=0.021..292.329 rows=882307 loops=1)               Filter: (status = 'A'::bpchar)               Rows Removed by Filter: 862\n         ->  Hash  (cost=4497680.97..4497680.97 rows=10184900 width=40) (actual time=13848.559..13848.559 rows=46 loops=1)  
             Buckets: 524288  Batches: 4  Memory Usage: 7kB               ->  Nested Loop  (cost=4496147.89..4497680.97 rows=10184900 width=40) (actual time=13847.537..13848.125 rows=46 loops=1)\n                     ->  HashAggregate  (cost=4496147.89..4496149.89 rows=200 width=4) (actual time=13847.410..13847.425 rows=46 loops=1)                           ->  CTE Scan on parent_noteset  (cost=4495503.38..4495900.00 rows=19831 width=4) (actual time=0.058..13847.350 rows=46 loops=1)\n                                 CTE parent_noteset                                   ->  Recursive Union  (cost=0.00..4495503.38 rows=19831 width=4) (actual time=0.057..13847.284 rows=46 loops=1)\n                                         ->  Index Only Scan using note_sets_pkey on note_sets  (cost=0.00..7.85 rows=1 width=4) (actual time=0.054..0.055 rows=1 loops=1)                                               Index Cond: (id = 8304085)\n                                               Heap Fetches: 1                                         ->  Hash Join  (cost=0.33..449509.89 rows=1983 width=4) (actual time=2788.672..4615.686 rows=15 loops=3)\n                                               Hash Cond: (note_sets.parent_id = parent_noteset.id)                                               ->  Seq Scan on note_sets  (cost=0.00..373102.99 rows=20369799 width=8) (actual time=0.006..2288.076 rows=20355654 loops=3)\n                                               ->  Hash  (cost=0.20..0.20 rows=10 width=4) (actual time=0.006..0.006 rows=15 loops=3)                                                     Buckets: 1024  Batches: 1  Memory Usage: 1kB\n                                                     ->  WorkTable Scan on parent_noteset  (cost=0.00..0.20 rows=10 width=4) (actual time=0.001..0.003 rows=15 loops=3)                     ->  Index Scan using note_sets_pkey on note_sets  (cost=0.00..7.65 rows=1 width=40) (actual time=0.014..0.014 rows=1 loops=46)\n                           Index Cond: (id = parent_noteset.id)   ->  Hash  (cost=1.16..1.16 rows=16 width=53) (actual time=0.010..0.010 rows=16 loops=1)\n\n         Buckets: 1024  Batches: 1  Memory Usage: 2kB         ->  Seq Scan on note_set_sources  (cost=0.00..1.16 rows=16 width=53) (actual time=0.004..0.005 rows=16 loops=1) Total runtime: 14555.254 ms\n(29 rows)CareCloud_Prod=##  SELECT note_sets.\"id\" AS t0_r0, note_sets.\"note_set_source_id\" AS t0_r1, note_sets.\"parent_id\" AS t0_r2, note_sets.\"business_entity_id\" AS t0_r3, note_sets.\"created_at\" AS t0_r4, note_sets.\"updated_at\" AS t0_r5, note_sets.\"created_by\" AS t0_r6, note_sets.\"updated_by\" AS t0_r7, note_set_sources.\"id\" AS t1_r0, note_set_sources.\"name\" AS t1_r1, note_set_sources.\"code\" AS t1_r2, note_set_sources.\"description\" AS t1_r3, note_set_sources.\"status\" AS t1_r4, note_set_sources.\"created_at\" AS t1_r5, note_set_sources.\"updated_at\" AS t1_r6, note_set_sources.\"created_by\" AS t1_r7, note_set_sources.\"updated_by\" AS t1_r8, notes.\"id\" AS t2_r0, notes.\"note_set_id\" AS t2_r1, notes.\"subject\" AS t2_r2, notes.\"text\" AS t2_r3, notes.\"status\" AS t2_r4, notes.\"is_dissmissable\" AS t2_r5, notes.\"is_home\" AS t2_r6, notes.\"created_at\" AS t2_r7, notes.\"updated_at\" AS t2_r8, notes.\"created_by\" AS t2_r9, notes.\"updated_by\" AS t2_r10 FROM note_sets  LEFT OUTER JOIN note_set_sources ON note_set_sources.id = note_sets.note_set_source_id  LEFT OUTER JOIN notes ON notes.note_set_id = note_sets.id AND notes.\"status\" = E'A' WHERE (note_sets.id IN (WITH RECURSIVE 
parent_noteset as (SELECT id FROM note_sets where id = 8304085 UNION SELECT note_sets.id FROM parent_noteset parent_noteset, note_sets note_sets WHERE note_sets.parent_id = parent_noteset.id) SELECT id FROM parent_noteset));\n                                                                                        QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join  (cost=249847.72..670928.98 rows=10180954 width=1512) (actual time=692.423..829.258 rows=48 loops=1)   Hash Cond: (public.note_sets.note_set_source_id = note_set_sources.id)\n   ->  Hash Right Join  (cost=249846.36..530939.50 rows=10180954 width=718) (actual time=692.382..829.180 rows=48 loops=1)         Hash Cond: (notes.note_set_id = public.note_sets.id)\n         ->  Seq Scan on notes  (cost=0.00..32981.14 rows=878550 width=678) (actual time=0.027..413.972 rows=878529 loops=1)               Filter: (status = 'A'::bpchar)         ->  Hash  (cost=43045.44..43045.44 rows=10180954 width=40) (actual time=22.904..22.904 rows=46 loops=1)\n               Buckets: 524288  Batches: 4  Memory Usage: 2kB               ->  Nested Loop  (cost=41106.18..43045.44 rows=10180954 width=40) (actual time=12.319..22.738 rows=46 loops=1)                     ->  HashAggregate  (cost=41106.18..41108.18 rows=200 width=4) (actual time=11.873..11.889 rows=46 loops=1)\n                           ->  CTE Scan on parent_noteset  (cost=40459.39..40857.41 rows=19901 width=4) (actual time=0.492..11.843 rows=46 loops=1)                                 CTE parent_noteset\n                                   ->  Recursive Union  (cost=0.00..40459.39 rows=19901 width=4) (actual time=0.489..11.822 rows=46 loops=1)                                         ->  Index Scan using note_sets_pkey on note_sets  (cost=0.00..10.50 rows=1 width=4) (actual time=0.484..0.485 rows=1 loops=1)\n                                               Index Cond: (id = 8304085)                                         ->  Nested Loop  (cost=0.00..4005.09 rows=1990 width=4) (actual time=1.534..3.764 rows=15 loops=3)\n                                               ->  WorkTable Scan on parent_noteset  (cost=0.00..0.20 rows=10 width=4) (actual time=0.000..0.001 rows=15 loops=3)                                               ->  Index Scan using note_sets_parent_id_idx on note_sets  (cost=0.00..398.00 rows=199 width=8) (actual time=0.216..0.244 rows=1 loops=46)\n                                                     Index Cond: (parent_id = parent_noteset.id)                     ->  Index Scan using note_sets_pkey on note_sets  (cost=0.00..9.67 rows=1 width=40) (actual time=0.234..0.234 rows=1 loops=46)\n                           Index Cond: (id = parent_noteset.id)   ->  Hash  (cost=1.16..1.16 rows=16 width=794) (actual time=0.020..0.020 rows=16 loops=1)\n\n         Buckets: 1024  Batches: 1  Memory Usage: 2kB         ->  Seq Scan on note_set_sources  (cost=0.00..1.16 rows=16 width=794) (actual time=0.012..0.014 rows=16 loops=1) Total runtime: 829.657 ms\n(25 rows)Dave Cramerdave.cramer(at)credativ(dot)cahttp://www.credativ.ca", "msg_date": "Mon, 12 Nov 2012 14:49:00 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "performance regression with 9.2" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> This query is a couple orders of magnitude slower the first result is\n> 9.2.1, the 
second 9.1\n\nHm, the planner's evidently doing the wrong thing inside the recursive\nunion, but not obvious why. Can you extract a self-contained test case?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 12 Nov 2012 15:43:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance regression with 9.2" }, { "msg_contents": "Tom,\n\nWill try to get one ASAP.\n\nDave\n\nDave Cramer\n\ndave.cramer(at)credativ(dot)ca\nhttp://www.credativ.ca\n\n\n\nOn Mon, Nov 12, 2012 at 3:43 PM, Tom Lane <[email protected]> wrote:\n\n> Dave Cramer <[email protected]> writes:\n> > This query is a couple orders of magnitude slower the first result is\n> > 9.2.1, the second 9.1\n>\n> Hm, the planner's evidently doing the wrong thing inside the recursive\n> union, but not obvious why. Can you extract a self-contained test case?\n>\n> regards, tom lane\n>\n\nTom,Will try to get one ASAP.DaveDave Cramerdave.cramer(at)credativ(dot)cahttp://www.credativ.ca\n\nOn Mon, Nov 12, 2012 at 3:43 PM, Tom Lane <[email protected]> wrote:\nDave Cramer <[email protected]> writes:\n> This query is a couple orders of magnitude slower the first result is\n> 9.2.1, the second 9.1\n\nHm, the planner's evidently doing the wrong thing inside the recursive\nunion, but not obvious why.  Can you extract a self-contained test case?\n\n                        regards, tom lane", "msg_date": "Mon, 12 Nov 2012 15:53:20 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance regression with 9.2" }, { "msg_contents": "Hello Tom,\n\nCould you elaborate on this? I'm trying to learn the explain plans of postgresql and i would like to know if we're looking at the same clue's.\n\nTo me, i see a mismatch between the optimizer and the actual records retrieved in the fast SQL as well, so plan instability is a realistic scenario. For the slow query, I thought to see a problem in the part below the ' recursive union' :\nthe HASH join is more expensive that the nested loop. ( hints are not yet implemented in Postgresql , aren't they? )\n\nSo the SQL text is:\n\nexplain analyze \nSELECT \n note_sets.\"id\" AS t0_r0, \n ...\n notes.\"updated_by\" AS t2_r10 \nFROM \n note_sets \nLEFT OUTER JOIN note_set_sources ON note_set_sources.id = note_sets.note_set_source_id \nLEFT OUTER JOIN notes ON notes.note_set_id = note_sets.id AND \nnotes.\"status\" = E'A' \nWHERE \n (note_sets.id IN (WITH RECURSIVE parent_noteset as \n (SELECT id FROM note_sets where id = 8304085 \n UNION \n SELECT note_sets.id FROM \n parent_noteset parent_noteset, \n note_sets note_sets \n WHERE note_sets.parent_id = parent_noteset.id) SELECT id FROM parent_noteset))\n\nIMHO, the plan goes wrong at the part \n\nSELECT note_sets.id FROM \n parent_noteset parent_noteset, \n note_sets note_sets \n WHERE note_sets.parent_id = parent_noteset.id)\n\nDo you agree?\n\n\n\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] performance regression with 9.2\n> Date: Mon, 12 Nov 2012 15:43:53 -0500\n> \n> Dave Cramer <[email protected]> writes:\n> > This query is a couple orders of magnitude slower the first result is\n> > 9.2.1, the second 9.1\n> \n> Hm, the planner's evidently doing the wrong thing inside the recursive\n> union, but not obvious why. 
Can you extract a self-contained test case?\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n\n\n\nHello Tom,Could you elaborate on this? I'm trying to learn the explain plans of postgresql and i would like to know if we're looking at the same clue's.To me, i see a mismatch between the optimizer and the actual records retrieved in the fast SQL as well, so plan instability is a realistic scenario. For the slow query, I thought to see a problem in the part below the ' recursive union' :the HASH join is more expensive that the nested loop. ( hints are not yet implemented in Postgresql , aren't they? )So the SQL text is:explain analyze SELECT  note_sets.\"id\" AS t0_r0,  ... notes.\"updated_by\" AS t2_r10 FROM  note_sets  LEFT OUTER JOIN note_set_sources ON note_set_sources.id = note_sets.note_set_source_id  LEFT OUTER JOIN notes ON notes.note_set_id = note_sets.id AND notes.\"status\" = E'A' WHERE  (note_sets.id IN (WITH RECURSIVE parent_noteset as  (SELECT id FROM note_sets where id = 8304085    UNION   SELECT note_sets.id FROM          parent_noteset parent_noteset,          note_sets note_sets   WHERE note_sets.parent_id = parent_noteset.id) SELECT id FROM parent_noteset))IMHO, the plan goes wrong at the part SELECT note_sets.id FROM          parent_noteset parent_noteset,          note_sets note_sets   WHERE note_sets.parent_id = parent_noteset.id)Do you agree?> From: [email protected]> To: [email protected]> CC: [email protected]> Subject: Re: [PERFORM] performance regression with 9.2> Date: Mon, 12 Nov 2012 15:43:53 -0500> > Dave Cramer <[email protected]> writes:> > This query is a couple orders of magnitude slower the first result is> > 9.2.1, the second 9.1> > Hm, the planner's evidently doing the wrong thing inside the recursive> union, but not obvious why. Can you extract a self-contained test case?> > \t\t\tregards, tom lane> > > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 12 Nov 2012 21:13:46 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance regression with 9.2" }, { "msg_contents": "Willem Leenen <[email protected]> writes:\n> To me, i see a mismatch between the optimizer and the actual records\n> retrieved in the fast SQL as well, so plan instability is a realistic\n> scenario.\n\nWell, the rowcount estimates for a recursive union are certainly\npretty bogus, but those are the same either way. The reason this looks\nlike a bug and not just statistical issues is that the join inside the\nrecursive union is done as a hash, even though that's much more\nexpensive (according to the estimates, not reality) than a nestloop.\nPresumably the planner is failing to even consider a\nnestloop-with-inner-indexscan join there, else it would have picked that\ntype of plan. Why it's failing is as yet unclear.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 12 Nov 2012 16:26:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance regression with 9.2" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> This query is a couple orders of magnitude slower the first result is\n> 9.2.1, the second 9.1\n\nThanks for sending me the test case off-list. 
I found the reason why\nI'd not been able to reproduce the problem: the index you're hoping it\nwill use is declared\n\n\"note_sets_parent_id_idx\" btree (parent_id) WHERE parent_id IS NOT NULL\n\nApparently 9.2 is less bright than 9.1 about when it can use a partial\nindex. I'm not sure where I broke that, but will look.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 15 Nov 2012 14:21:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance regression with 9.2" } ]
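For readers who want to poke at this themselves, a self-contained test case of the kind requested above can be sketched as follows. The table and index definitions follow the thread, but the generate_series data and the parent/child chains it builds are assumptions rather than the original data, so the volume may need scaling before the 9.2.1 plan change shows up:

    -- Sketch of a self-contained reproduction (assumed data, not the original):
    -- a chained hierarchy in note_sets plus the partial index that 9.1 used
    -- inside the recursive union and 9.2.1 apparently did not.
    CREATE TABLE note_sets (
        id        integer PRIMARY KEY,
        parent_id integer
    );

    -- Chains of 50 rows each: chain roots get a NULL parent_id,
    -- every other row points at the previous id.
    INSERT INTO note_sets (id, parent_id)
    SELECT g,
           CASE WHEN g % 50 = 1 THEN NULL ELSE g - 1 END
    FROM generate_series(1, 1000000) AS g;

    CREATE INDEX note_sets_parent_id_idx
        ON note_sets (parent_id)
        WHERE parent_id IS NOT NULL;

    ANALYZE note_sets;

    EXPLAIN ANALYZE
    WITH RECURSIVE parent_noteset AS (
        SELECT id FROM note_sets WHERE id = 500001
        UNION
        SELECT ns.id
        FROM parent_noteset p
        JOIN note_sets ns ON ns.parent_id = p.id
    )
    SELECT id FROM parent_noteset;

The thing to compare across versions is the inner side of the recursive step: the fast 9.1 plan above drives it with an index scan on note_sets_parent_id_idx, while the slow 9.2.1 plan falls back to a hash join over a full seq scan of note_sets.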
[ { "msg_contents": "I was working on a data warehousing project where a fair number of files\ncould be COPY'd more or less directly into tables. I have a somewhat nice\nmachine to work with, and I ran on 75% of the cores I have (75% of 32 is\n24).\n\nPerformance was pretty bad. With 24 processes going, each backend (in COPY)\nspent 98% of it's time in semop (as identified by strace). I tried larger\nand smaller shared buffers, all sorts of other tweaks, until I tried\nreducing the number of concurrent processes from 24 to 4.\n\nDisk I/O went up (on average) at least 10X and strace reports that the top\nsystem calls are write (61%), recvfrom (25%), and lseek (14%) - pretty\nreasonable IMO.\n\nGiven that each COPY is into it's own, newly-made table with no indices or\nforeign keys, etc, I would have expected the interaction among the backends\nto be minimal, but that doesn't appear to be the case. What is the likely\ncause of the semops?\n\nI can't really try a newer version of postgres at this time (perhaps soon).\n\nI'm using PG 8.4.13 on ScientificLinux 6.2 (x86_64), and the CPU is a 32\ncore Xeon E5-2680 @ 2.7 GHz.\n\n-- \nJon\n\nI was working on a data warehousing project where a fair number of files could be COPY'd more or less directly into tables. I have a somewhat nice machine to work with, and I ran on 75% of the cores I have (75% of 32 is 24).\nPerformance was pretty bad. With 24 processes going, each backend (in COPY) spent 98% of it's time in semop (as identified by strace).  I tried larger and smaller shared buffers, all sorts of other tweaks, until I tried reducing the number of concurrent processes from 24 to 4.\nDisk I/O went up (on average) at least 10X and strace reports that the top system calls are write (61%), recvfrom (25%), and lseek (14%) - pretty reasonable IMO.Given that each COPY is into it's own, newly-made table with no indices or foreign keys, etc, I would have expected the interaction among the backends to be minimal, but that doesn't appear to be the case.  What is the likely cause of the semops?\nI can't really try a newer version of postgres at this time (perhaps soon).I'm using PG 8.4.13 on ScientificLinux 6.2 (x86_64), and the CPU is a 32 core Xeon E5-2680 @ 2.7 GHz.-- Jon", "msg_date": "Tue, 13 Nov 2012 13:13:40 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On 13.11.2012 21:13, Jon Nelson wrote:\n> I was working on a data warehousing project where a fair number of files\n> could be COPY'd more or less directly into tables. I have a somewhat nice\n> machine to work with, and I ran on 75% of the cores I have (75% of 32 is\n> 24).\n>\n> Performance was pretty bad. With 24 processes going, each backend (in COPY)\n> spent 98% of it's time in semop (as identified by strace). I tried larger\n> and smaller shared buffers, all sorts of other tweaks, until I tried\n> reducing the number of concurrent processes from 24 to 4.\n>\n> Disk I/O went up (on average) at least 10X and strace reports that the top\n> system calls are write (61%), recvfrom (25%), and lseek (14%) - pretty\n> reasonable IMO.\n>\n> Given that each COPY is into it's own, newly-made table with no indices or\n> foreign keys, etc, I would have expected the interaction among the backends\n> to be minimal, but that doesn't appear to be the case. What is the likely\n> cause of the semops?\n\nI'd guess it's lock contention on WALInsertLock. 
That means, the system \nis experiencing lock contention on generating WAL records for the \ninsertions. If that theory is correct, you ought to get a big gain if \nyou have wal_level=minimal, and you create or truncate the table in the \nsame transaction with the COPY. That allows the system to skip \nWAL-logging the COPY.\n\nOr you could upgrade to 9.2. The WAL-logging of bulk COPY was optimized \nin 9.2, it should help precisely the scenario you're facing.\n\n- Heikki\n\n", "msg_date": "Tue, 13 Nov 2012 21:27:39 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Tue, Nov 13, 2012 at 11:13 AM, Jon Nelson <[email protected]> wrote:\n> I was working on a data warehousing project where a fair number of files\n> could be COPY'd more or less directly into tables. I have a somewhat nice\n> machine to work with, and I ran on 75% of the cores I have (75% of 32 is\n> 24).\n>\n> Performance was pretty bad. With 24 processes going, each backend (in COPY)\n> spent 98% of it's time in semop (as identified by strace).\n\nThey are probably fighting over the right to insert records into the WAL stream.\n\nThis has been improved in 9.2\n\n\n> Given that each COPY is into it's own, newly-made table with no indices or\n> foreign keys, etc, I would have expected the interaction among the backends\n> to be minimal, but that doesn't appear to be the case.\n\nOn newer versions if you set wal_level to minimal and archive_mode to\noff, then these operations would bypass WAL entirely. I can't figure\nout if there is a corresponding optimization in 8.4, though.\n\nCheers,\n\nJeff\n\n", "msg_date": "Tue, 13 Nov 2012 11:30:52 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Tue, Nov 13, 2012 at 1:27 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 13.11.2012 21:13, Jon Nelson wrote:\n>\n>> I was working on a data warehousing project where a fair number of files\n>> could be COPY'd more or less directly into tables. I have a somewhat nice\n>> machine to work with, and I ran on 75% of the cores I have (75% of 32 is\n>> 24).\n>>\n>> Performance was pretty bad. With 24 processes going, each backend (in\n>> COPY)\n>> spent 98% of it's time in semop (as identified by strace). I tried larger\n>> and smaller shared buffers, all sorts of other tweaks, until I tried\n>> reducing the number of concurrent processes from 24 to 4.\n>>\n>> Disk I/O went up (on average) at least 10X and strace reports that the top\n>> system calls are write (61%), recvfrom (25%), and lseek (14%) - pretty\n>> reasonable IMO.\n>>\n>> Given that each COPY is into it's own, newly-made table with no indices or\n>> foreign keys, etc, I would have expected the interaction among the\n>> backends\n>> to be minimal, but that doesn't appear to be the case. What is the likely\n>> cause of the semops?\n>>\n>\n> I'd guess it's lock contention on WALInsertLock. That means, the system is\n> experiencing lock contention on generating WAL records for the insertions.\n> If that theory is correct, you ought to get a big gain if you have\n> wal_level=minimal, and you create or truncate the table in the same\n> transaction with the COPY. 
That allows the system to skip WAL-logging the\n> COPY.\n>\n\nwal_level doesn't exist for 8.4, but I have archive_mode = \"off\" and I am\ncreating the table in the same transaction as the COPY.\n\n\n>\n> Or you could upgrade to 9.2. The WAL-logging of bulk COPY was optimized in\n> 9.2, it should help precisely the scenario you're facing.\n>\n\nUnfortunately, that's what I was expecting.\n\n\n\n-- \nJon\n\nOn Tue, Nov 13, 2012 at 1:27 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 13.11.2012 21:13, Jon Nelson wrote:\n\nI was working on a data warehousing project where a fair number of files\ncould be COPY'd more or less directly into tables. I have a somewhat nice\nmachine to work with, and I ran on 75% of the cores I have (75% of 32 is\n24).\n\nPerformance was pretty bad. With 24 processes going, each backend (in COPY)\nspent 98% of it's time in semop (as identified by strace).  I tried larger\nand smaller shared buffers, all sorts of other tweaks, until I tried\nreducing the number of concurrent processes from 24 to 4.\n\nDisk I/O went up (on average) at least 10X and strace reports that the top\nsystem calls are write (61%), recvfrom (25%), and lseek (14%) - pretty\nreasonable IMO.\n\nGiven that each COPY is into it's own, newly-made table with no indices or\nforeign keys, etc, I would have expected the interaction among the backends\nto be minimal, but that doesn't appear to be the case.  What is the likely\ncause of the semops?\n\n\nI'd guess it's lock contention on WALInsertLock. That means, the system is experiencing lock contention on generating WAL records for the insertions. If that theory is correct, you ought to get a big gain if you have wal_level=minimal, and you create or truncate the table in the same transaction with the COPY. That allows the system to skip WAL-logging the COPY.\nwal_level doesn't exist for 8.4, but I have archive_mode = \"off\" and I am creating the table in the same transaction as the COPY. \n\nOr you could upgrade to 9.2. The WAL-logging of bulk COPY was optimized in 9.2, it should help precisely the scenario you're facing.Unfortunately, that's what I was expecting.\n-- Jon", "msg_date": "Tue, 13 Nov 2012 14:03:06 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Tue, Nov 13, 2012 at 12:03 PM, Jon Nelson <[email protected]> wrote:\n> On Tue, Nov 13, 2012 at 1:27 PM, Heikki Linnakangas\n> <[email protected]> wrote:\n>>\n>> On 13.11.2012 21:13, Jon Nelson wrote:\n>>>\n>>\n>> I'd guess it's lock contention on WALInsertLock. That means, the system is\n>> experiencing lock contention on generating WAL records for the insertions.\n>> If that theory is correct, you ought to get a big gain if you have\n>> wal_level=minimal, and you create or truncate the table in the same\n>> transaction with the COPY. That allows the system to skip WAL-logging the\n>> COPY.\n>\n>\n> wal_level doesn't exist for 8.4, but I have archive_mode = \"off\" and I am\n> creating the table in the same transaction as the COPY.\n\n\nThat should work to bypass WAL. Can you directly verify whether you\nare generating lots of WAL (look at the churn in pg_xlog) during those\nloads?\n\nMaybe your contention is someplace else. Since they must all be using\ndifferent tables, I don't think it would be the relation extension\nlock. 
Maybe buffer mapping lock or freelist lock?\n\nCheers,\n\nJeff\n\n", "msg_date": "Tue, 13 Nov 2012 12:43:19 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Tue, Nov 13, 2012 at 2:43 PM, Jeff Janes <[email protected]> wrote:\n> On Tue, Nov 13, 2012 at 12:03 PM, Jon Nelson <[email protected]> wrote:\n>> On Tue, Nov 13, 2012 at 1:27 PM, Heikki Linnakangas\n>> <[email protected]> wrote:\n>>>\n>>> On 13.11.2012 21:13, Jon Nelson wrote:\n>>>>\n>>>\n>>> I'd guess it's lock contention on WALInsertLock. That means, the system is\n>>> experiencing lock contention on generating WAL records for the insertions.\n>>> If that theory is correct, you ought to get a big gain if you have\n>>> wal_level=minimal, and you create or truncate the table in the same\n>>> transaction with the COPY. That allows the system to skip WAL-logging the\n>>> COPY.\n>>\n>>\n>> wal_level doesn't exist for 8.4, but I have archive_mode = \"off\" and I am\n>> creating the table in the same transaction as the COPY.\n>\n>\n> That should work to bypass WAL. Can you directly verify whether you\n> are generating lots of WAL (look at the churn in pg_xlog) during those\n> loads?\n>\n> Maybe your contention is someplace else. Since they must all be using\n> different tables, I don't think it would be the relation extension\n> lock. Maybe buffer mapping lock or freelist lock?\n\nI had moved on to a different approach to importing the data which\ndoes not work concurrently. However, I went back and tried to\nre-create the situation and - at least a naive attempt failed. I'll\ngive it a few more tries -- I was creating two tables using CREATE\nTABLE <unique name> LIKE (some other table INCLUDING <everything>).\nThen I would copy the data in, add some constraints (FK constraints\nbut only within these two tables) and then finally (for each table)\nissue an ALTER TABLE <unique name> INHERIT <some other table>. To be\nclear, however, everything bogged down in the COPY stage which was\nimmediately following the table creation.\n\nI'll note that my naive test showed almost no unexpected overhead at\nall, so it's clearly not representative of the problem I encountered.\n\n\n--\nJon\n\n", "msg_date": "Tue, 13 Nov 2012 19:10:28 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Tue, Nov 13, 2012 at 7:10 PM, Jon Nelson <[email protected]> wrote:\n> I had moved on to a different approach to importing the data which\n> does not work concurrently. However, I went back and tried to\n> re-create the situation and - at least a naive attempt failed. I'll\n> give it a few more tries -- I was creating two tables using CREATE\n> TABLE <unique name> LIKE (some other table INCLUDING <everything>).\n> Then I would copy the data in, add some constraints (FK constraints\n> but only within these two tables) and then finally (for each table)\n> issue an ALTER TABLE <unique name> INHERIT <some other table>. To be\n> clear, however, everything bogged down in the COPY stage which was\n> immediately following the table creation.\n>\n> I'll note that my naive test showed almost no unexpected overhead at\n> all, so it's clearly not representative of the problem I encountered.\n\n\nI'm still unable to replicate the problem, but I can show I wasn't\ncrazy, either. 
The average time to perform one of these COPY\noperations when things are working is in the 15-45 second range. I\nhad configured PG to log any statement that look longer than 3\nseconds, so I got a bunch of those in the logs. I have since\nreconfigured to log *everything*. Anyway, when things were going\nbadly, COPY would take anywhere from 814 seconds to just under 1400\nseconds for the exact same files.\n\nUPDATE: I have been able to replicate the issue. The parent table (the\none referenced in the LIKE portion of the CREATE TABLE statement) had\nthree indices.\n\nNow that I've been able to replicate the issue, are there tests that I\ncan perform that would be useful to people?\nI will also try to build a stand-alone test.\n\n\n--\nJon\n\n", "msg_date": "Wed, 14 Nov 2012 08:41:45 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Wed, Nov 14, 2012 at 6:41 AM, Jon Nelson <[email protected]> wrote:\n>\n> UPDATE: I have been able to replicate the issue. The parent table (the\n> one referenced in the LIKE portion of the CREATE TABLE statement) had\n> three indices.\n>\n> Now that I've been able to replicate the issue, are there tests that I\n> can perform that would be useful to people?\n> I will also try to build a stand-alone test.\n\nWhile the WAL is suppressed for the table inserts, it is not\nsuppressed for the index inserts, and the index WAL traffic is enough\nto lead to contention.\n\nI don't know why that is the case, it seems like the same method that\nallows us to bypass WAL for the table would work for the indices as\nwell. Maybe it is just that no one bothered to implement it. After\nall, building the index after the copy will be even more efficient\nthan building it before but by-passing WAL.\n\nBut it does seem like the docs could at least be clarified here.\n\nCheers,\n\nJeff\n\n", "msg_date": "Wed, 14 Nov 2012 11:01:57 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Wed, Nov 14, 2012 at 1:01 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, Nov 14, 2012 at 6:41 AM, Jon Nelson <[email protected]> wrote:\n>>\n>> UPDATE: I have been able to replicate the issue. The parent table (the\n>> one referenced in the LIKE portion of the CREATE TABLE statement) had\n>> three indices.\n>>\n>> Now that I've been able to replicate the issue, are there tests that I\n>> can perform that would be useful to people?\n>> I will also try to build a stand-alone test.\n>\n> While the WAL is suppressed for the table inserts, it is not\n> suppressed for the index inserts, and the index WAL traffic is enough\n> to lead to contention.\n\nAha!\n\n> I don't know why that is the case, it seems like the same method that\n> allows us to bypass WAL for the table would work for the indices as\n> well. Maybe it is just that no one bothered to implement it. After\n> all, building the index after the copy will be even more efficient\n> than building it before but by-passing WAL.\n\n> But it does seem like the docs could at least be clarified here.\n\nIn general, then, would it be safe to say that concurrent (parallel)\nindex creation may be a source of significant WAL contention? 
I was\nplanning on taking advantage of this due to modern, beefy boxes with\n10's of CPUs all just sitting there bored.\n\n\n--\nJon\n\n", "msg_date": "Wed, 14 Nov 2012 14:04:12 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "On Wed, Nov 14, 2012 at 12:04 PM, Jon Nelson <[email protected]> wrote:\n> On Wed, Nov 14, 2012 at 1:01 PM, Jeff Janes <[email protected]> wrote:\n>>\n>> While the WAL is suppressed for the table inserts, it is not\n>> suppressed for the index inserts, and the index WAL traffic is enough\n>> to lead to contention.\n>\n> Aha!\n>\n>> I don't know why that is the case, it seems like the same method that\n>> allows us to bypass WAL for the table would work for the indices as\n>> well. Maybe it is just that no one bothered to implement it. After\n>> all, building the index after the copy will be even more efficient\n>> than building it before but by-passing WAL.\n>\n>> But it does seem like the docs could at least be clarified here.\n>\n> In general, then, would it be safe to say that concurrent (parallel)\n> index creation may be a source of significant WAL contention?\n\nNo, that shouldn't lead to WAL contention. The creation of an index\non an already-populated table bypasses most WAL when you are not using\narchiving. It is the maintenance of an already existing index that\ngenerates WAL.\n\n\n\"begin; truncate; copy; create index\" generates little WAL.\n\n\"begin; truncate; create index; copy\" generates a lot of WAL, and is\nslower for other reason as well.\n\nCheers,\n\nJeff\n\n", "msg_date": "Wed, 14 Nov 2012 13:25:36 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" }, { "msg_contents": "If you are inserting a lot of data into the same table, table extension locks are a problem, and will be extended in only 8k increments which if you have a lot of clients hitting/expanding the same table you are going to have a lot of overhead.\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Jeff Janes\r\nSent: Wednesday, November 14, 2012 3:26 PM\r\nTo: Jon Nelson\r\nCc: Heikki Linnakangas; [email protected]\r\nSubject: Re: [PERFORM] postgres 8.4, COPY, and high concurrency\r\n\r\nOn Wed, Nov 14, 2012 at 12:04 PM, Jon Nelson <[email protected]> wrote:\r\n> On Wed, Nov 14, 2012 at 1:01 PM, Jeff Janes <[email protected]> wrote:\r\n>>\r\n>> While the WAL is suppressed for the table inserts, it is not \r\n>> suppressed for the index inserts, and the index WAL traffic is enough \r\n>> to lead to contention.\r\n>\r\n> Aha!\r\n>\r\n>> I don't know why that is the case, it seems like the same method that \r\n>> allows us to bypass WAL for the table would work for the indices as \r\n>> well. Maybe it is just that no one bothered to implement it. After \r\n>> all, building the index after the copy will be even more efficient \r\n>> than building it before but by-passing WAL.\r\n>\r\n>> But it does seem like the docs could at least be clarified here.\r\n>\r\n> In general, then, would it be safe to say that concurrent (parallel) \r\n> index creation may be a source of significant WAL contention?\r\n\r\nNo, that shouldn't lead to WAL contention. The creation of an index on an already-populated table bypasses most WAL when you are not using archiving. 
It is the maintenance of an already existing index that generates WAL.\r\n\r\n\r\n\"begin; truncate; copy; create index\" generates little WAL.\r\n\r\n\"begin; truncate; create index; copy\" generates a lot of WAL, and is slower for other reason as well.\r\n\r\nCheers,\r\n\r\nJeff\n\n", "msg_date": "Wed, 14 Nov 2012 22:06:10 +0000", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.4, COPY, and high concurrency" } ]
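Putting the advice in this thread together, the load pattern that avoids both the COPY WAL traffic and the index WAL traffic looks roughly like the sketch below. It assumes archive_mode = off on 8.4 (or wal_level = minimal on 9.0 and later); the table, column, and file names are placeholders invented for illustration:

    BEGIN;

    -- Names below are placeholders. Clone the parent's columns but not its
    -- indexes, so the COPY has no index maintenance at all; because the table
    -- is created in the same transaction, the copied data itself skips WAL
    -- under the settings assumed above.
    CREATE TABLE import_batch_001 (LIKE warehouse_events INCLUDING DEFAULTS);

    COPY import_batch_001 FROM '/data/incoming/batch_001.csv' WITH CSV;

    -- Build indexes only after the data is in: an index build on an
    -- already-populated table bypasses most WAL when archiving is off,
    -- and is cheaper than row-by-row index maintenance during the COPY.
    CREATE INDEX import_batch_001_event_time_idx
        ON import_batch_001 (event_time);

    ALTER TABLE import_batch_001 INHERIT warehouse_events;

    COMMIT;

The ordering is the whole point: create (or truncate), then COPY, then index. Reversing the last two steps reintroduces the index WAL traffic that showed up here as backends stuck in semop.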
[ { "msg_contents": "Have a query using a CTE that is performing very poorly. The equivalent query against the same data in an Oracle database runs in under 1 second, in Postgres it takes 2000 seconds.\n\nThe smp_pkg.get_invoice_charges queries fedexinvoices for some data and normalizes it into a SETOF some record type. It is declared to be STABLE. Fedexinvoices consists of about 1.3M rows of medium width. Fedexinvoices.id is the primary key on that table, and trim(fedexinvoices.trackno) is indexed via the function trim.\n\nThe plan for the equivalent query in Oracle is much smaller and simpler. No sequential (or full table) scans on fedexinvoices.\n\n\n\nWITH charges as (\n SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) charge_info from fedexinvoices fi2\n)\nselect fedexinvoices.* from\nfedexinvoices\ninner join charges on charges.id = fedexinvoices.id AND (charges.charge_info).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\nwhere\ntrim(fedexinvoices.trackno)='799159791643'\n;\n\nExplain Analyze output, I abbreviated some of the column lists for brevity:\n\nNested Loop (cost=457380.38..487940.77 rows=1 width=1024) (actual time=1978019.858..1978019.858 rows=0 loops=1)\n Output: fedexinvoices.id, .........\n Join Filter: (fedexinvoices.id = charges.id)\n Buffers: shared hit=20387611, temp written=94071\n CTE charges\n -> Seq Scan on hits.fedexinvoices fi2 (cost=0.00..457380.38 rows=1350513 width=8) (actual time=0.613..1964632.763 rows=9007863 loops=1)\n Output: fi2.id, smp_pkg.get_invoice_charges(fi2.id, NULL::character varying)\n Buffers: shared hit=20387606\n -> Index Scan using fedexinvoices_trim_track_idx on hits.fedexinvoices (cost=0.00..5.46 rows=1 width=1024) (actual time=0.024..0.026 rows=1 loops=1)\n Output: fedexinvoices.id, .........\n Index Cond: (btrim((fedexinvoices.trackno)::text) = '799159791643'::text)\n Buffers: shared hit=5\n -> CTE Scan on charges (cost=0.00..30386.54 rows=13471 width=8) (actual time=1978019.827..1978019.827 rows=0 loops=1)\n Output: charges.id, charges.charge_info\n Filter: (((charges.charge_info).charge_name)::text = ANY ('{\"ADDRESS CORRECTION CHARGE\",\"ADDRESS CORRECTION\"}'::text[]))\n Buffers: shared hit=20387606, temp written=94071\nTotal runtime: 1978214.743 ms\n\n\n\n\n\n\n\n\n\n\n\nHave a query using a CTE that is performing very poorly. The equivalent query against the same data in an Oracle database runs in under 1 second, in Postgres  it takes 2000 seconds.\n\n \nThe smp_pkg.get_invoice_charges queries fedexinvoices for some data and normalizes it into a SETOF some record type. It is declared to be STABLE. Fedexinvoices consists of about 1.3M rows of medium width. Fedexinvoices.id is the primary\n key on that table, and trim(fedexinvoices.trackno) is indexed via the function trim.\n \nThe plan for the equivalent query in Oracle is much smaller and simpler. 
No sequential (or full table) scans on fedexinvoices.\n \n \n \nWITH charges as (\n                SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) charge_info from fedexinvoices fi2\n)\nselect fedexinvoices.* from \nfedexinvoices\ninner join charges on charges.id = fedexinvoices.id AND (charges.charge_info).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\nwhere\ntrim(fedexinvoices.trackno)='799159791643'\n;\n \nExplain Analyze output, I abbreviated some of the column lists for brevity:\n \nNested Loop  (cost=457380.38..487940.77 rows=1 width=1024) (actual time=1978019.858..1978019.858 rows=0 loops=1)\n  Output: fedexinvoices.id, ………\n  Join Filter: (fedexinvoices.id = charges.id)\n  Buffers: shared hit=20387611, temp written=94071\n  CTE charges\n    ->  Seq Scan on hits.fedexinvoices fi2  (cost=0.00..457380.38 rows=1350513 width=8) (actual time=0.613..1964632.763 rows=9007863 loops=1)\n          Output: fi2.id, smp_pkg.get_invoice_charges(fi2.id, NULL::character varying)\n          Buffers: shared hit=20387606\n  ->  Index Scan using fedexinvoices_trim_track_idx on hits.fedexinvoices  (cost=0.00..5.46 rows=1 width=1024) (actual time=0.024..0.026 rows=1 loops=1)\n        Output: fedexinvoices.id, ………\n        Index Cond: (btrim((fedexinvoices.trackno)::text) = '799159791643'::text)\n        Buffers: shared hit=5\n  ->  CTE Scan on charges  (cost=0.00..30386.54 rows=13471 width=8) (actual time=1978019.827..1978019.827 rows=0 loops=1)\n        Output: charges.id, charges.charge_info\n        Filter: (((charges.charge_info).charge_name)::text = ANY ('{\"ADDRESS CORRECTION CHARGE\",\"ADDRESS CORRECTION\"}'::text[]))\n        Buffers: shared hit=20387606, temp written=94071\nTotal runtime: 1978214.743 ms", "msg_date": "Tue, 13 Nov 2012 20:57:07 +0000", "msg_from": "David Greco <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 13, 2012 at 2:57 PM, David Greco\n<[email protected]> wrote:\n> Have a query using a CTE that is performing very poorly. The equivalent\n> query against the same data in an Oracle database runs in under 1 second, in\n> Postgres it takes 2000 seconds.\n>\n>\n>\n> The smp_pkg.get_invoice_charges queries fedexinvoices for some data and\n> normalizes it into a SETOF some record type. It is declared to be STABLE.\n> Fedexinvoices consists of about 1.3M rows of medium width. Fedexinvoices.id\n> is the primary key on that table, and trim(fedexinvoices.trackno) is indexed\n> via the function trim.\n>\n>\n>\n> The plan for the equivalent query in Oracle is much smaller and simpler. 
No\n> sequential (or full table) scans on fedexinvoices.\n>\n>\n>\n>\n>\n>\n>\n> WITH charges as (\n>\n> SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id)\n> charge_info from fedexinvoices fi2\n>\n> )\n>\n> select fedexinvoices.* from\n>\n> fedexinvoices\n>\n> inner join charges on charges.id = fedexinvoices.id AND\n> (charges.charge_info).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS\n> CORRECTION')\n>\n> where\n>\n> trim(fedexinvoices.trackno)='799159791643'\n>\n> ;\n>\n>\n>\n> Explain Analyze output, I abbreviated some of the column lists for brevity:\n>\n>\n>\n> Nested Loop (cost=457380.38..487940.77 rows=1 width=1024) (actual\n> time=1978019.858..1978019.858 rows=0 loops=1)\n>\n> Output: fedexinvoices.id, ………\n>\n> Join Filter: (fedexinvoices.id = charges.id)\n>\n> Buffers: shared hit=20387611, temp written=94071\n>\n> CTE charges\n>\n> -> Seq Scan on hits.fedexinvoices fi2 (cost=0.00..457380.38\n> rows=1350513 width=8) (actual time=0.613..1964632.763 rows=9007863 loops=1)\n>\n> Output: fi2.id, smp_pkg.get_invoice_charges(fi2.id,\n> NULL::character varying)\n>\n> Buffers: shared hit=20387606\n>\n> -> Index Scan using fedexinvoices_trim_track_idx on hits.fedexinvoices\n> (cost=0.00..5.46 rows=1 width=1024) (actual time=0.024..0.026 rows=1\n> loops=1)\n>\n> Output: fedexinvoices.id, ………\n>\n> Index Cond: (btrim((fedexinvoices.trackno)::text) =\n> '799159791643'::text)\n>\n> Buffers: shared hit=5\n>\n> -> CTE Scan on charges (cost=0.00..30386.54 rows=13471 width=8) (actual\n> time=1978019.827..1978019.827 rows=0 loops=1)\n>\n> Output: charges.id, charges.charge_info\n>\n> Filter: (((charges.charge_info).charge_name)::text = ANY ('{\"ADDRESS\n> CORRECTION CHARGE\",\"ADDRESS CORRECTION\"}'::text[]))\n>\n> Buffers: shared hit=20387606, temp written=94071\n>\n> Total runtime: 1978214.743 ms\n\nThe problem here is very clear. Oracle is optimizing through the CTE.\n PostgreSQL does not do this by design -- CTE's are used as a forced\nmaterialization step.\n\nmerlin\n\n", "msg_date": "Tue, 20 Nov 2012 09:04:13 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 12:04 PM, Merlin Moncure <[email protected]> wrote:\n> The problem here is very clear. Oracle is optimizing through the CTE.\n> PostgreSQL does not do this by design -- CTE's are used as a forced\n> materialization step.\n\nWhile I love that design (it lets me solve lots of problems for huge\nqueries), wouldn't pushing constraints into the CTE be a rather safe\noptimization?\n\n", "msg_date": "Tue, 20 Nov 2012 12:10:11 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 9:10 AM, Claudio Freire <[email protected]> wrote:\n> On Tue, Nov 20, 2012 at 12:04 PM, Merlin Moncure <[email protected]> wrote:\n>> The problem here is very clear. 
Oracle is optimizing through the CTE.\n>> PostgreSQL does not do this by design -- CTE's are used as a forced\n>> materialization step.\n>\n> While I love that design (it lets me solve lots of problems for huge\n> queries), wouldn't pushing constraints into the CTE be a rather safe\n> optimization?\n\nsure, or rewrite query as classic join.\n\nmerlin\n\n", "msg_date": "Tue, 20 Nov 2012 09:23:50 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 12:23 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Nov 20, 2012 at 9:10 AM, Claudio Freire <[email protected]> wrote:\n>> On Tue, Nov 20, 2012 at 12:04 PM, Merlin Moncure <[email protected]> wrote:\n>>> The problem here is very clear. Oracle is optimizing through the CTE.\n>>> PostgreSQL does not do this by design -- CTE's are used as a forced\n>>> materialization step.\n>>\n>> While I love that design (it lets me solve lots of problems for huge\n>> queries), wouldn't pushing constraints into the CTE be a rather safe\n>> optimization?\n>\n> sure, or rewrite query as classic join.\n\nI meant for postgres to do automatically. Rewriting as a join wouldn't\nwork as an optimization fence the way we're used to, but pushing\nconstraints upwards can only help (especially if highly selective).\n\n", "msg_date": "Tue, 20 Nov 2012 13:06:40 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 11/21/2012 12:06 AM, Claudio Freire wrote:\n> I meant for postgres to do automatically. Rewriting as a join wouldn't\n> work as an optimization fence the way we're used to, but pushing\n> constraints upwards can only help (especially if highly selective).\nBecause people are now used to using CTEs as query hints, it'd probably\ncause performance regressions in working queries. Perhaps more\nimportantly, Pg would have to prove that doing so didn't change queries\nthat invoked functions with side-effects to avoid changing the results\nof currently valid queries.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 21 Nov 2012 07:38:26 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 8:38 PM, Craig Ringer <[email protected]> wrote:\n> On 11/21/2012 12:06 AM, Claudio Freire wrote:\n>> I meant for postgres to do automatically. Rewriting as a join wouldn't\n>> work as an optimization fence the way we're used to, but pushing\n>> constraints upwards can only help (especially if highly selective).\n> Because people are now used to using CTEs as query hints, it'd probably\n> cause performance regressions in working queries. Perhaps more\n> importantly, Pg would have to prove that doing so didn't change queries\n> that invoked functions with side-effects to avoid changing the results\n> of currently valid queries.\n\nFair point. Will look into it a bit.\n\n", "msg_date": "Tue, 20 Nov 2012 20:44:51 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> On 11/21/2012 12:06 AM, Claudio Freire wrote:\n>> I meant for postgres to do automatically. 
Rewriting as a join wouldn't\n>> work as an optimization fence the way we're used to, but pushing\n>> constraints upwards can only help (especially if highly selective).\n\n> Because people are now used to using CTEs as query hints, it'd probably\n> cause performance regressions in working queries. Perhaps more\n> importantly, Pg would have to prove that doing so didn't change queries\n> that invoked functions with side-effects to avoid changing the results\n> of currently valid queries.\n\nWe could trivially arrange to keep the current semantics if the CTE\nquery contains any volatile functions (or of course if it's\nINSERT/UPDATE/DELETE). I think we'd also need to not optimize if\nit's invoked from more than one place in the outer query.\n\nI think the more interesting question is what cases wouldn't be covered\nby such a rule. Typically you need to use OFFSET 0 in situations where\nthe planner has guessed wrong about costs or rowcounts, and I think\npeople are likely using WITH for that as well. Should we be telling\npeople that they ought to insert OFFSET 0 in WITH queries if they want\nto be sure there's an optimization fence?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 20 Nov 2012 18:53:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 21.11.2012 01:53, Tom Lane wrote:\n> I think the more interesting question is what cases wouldn't be covered\n> by such a rule. Typically you need to use OFFSET 0 in situations where\n> the planner has guessed wrong about costs or rowcounts, and I think\n> people are likely using WITH for that as well. Should we be telling\n> people that they ought to insert OFFSET 0 in WITH queries if they want\n> to be sure there's an optimization fence?\n\nYes, I strongly feel that we should. Writing a query using WITH often \nmakes it more readable. It would be a shame if people have to refrain \nfrom using it, because the planner treats it as an optimization fence.\n\n- Heikki\n\n", "msg_date": "Wed, 21 Nov 2012 15:04:38 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 21 November 2012 13:04, Heikki Linnakangas <[email protected]> wrote:\n> Yes, I strongly feel that we should. Writing a query using WITH often makes\n> it more readable. It would be a shame if people have to refrain from using\n> it, because the planner treats it as an optimization fence.\n\n+1\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n", "msg_date": "Wed, 21 Nov 2012 13:15:40 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "\nOn 11/21/2012 08:04 AM, Heikki Linnakangas wrote:\n> On 21.11.2012 01:53, Tom Lane wrote:\n>> I think the more interesting question is what cases wouldn't be covered\n>> by such a rule. Typically you need to use OFFSET 0 in situations where\n>> the planner has guessed wrong about costs or rowcounts, and I think\n>> people are likely using WITH for that as well. Should we be telling\n>> people that they ought to insert OFFSET 0 in WITH queries if they want\n>> to be sure there's an optimization fence?\n>\n> Yes, I strongly feel that we should. Writing a query using WITH often \n> makes it more readable. 
It would be a shame if people have to refrain \n> from using it, because the planner treats it as an optimization fence.\n>\n>\n\n\n\nIf we're going to do it can we please come up with something more \nintuitive and much, much more documented than \"OFFSET 0\"? And if/when we \ndo this we'll need to have big red warnings all over then release notes, \nsince a lot of people I know will need to do some extensive remediation \nbefore moving to such a release.\n\ncheers\n\nandrew\n\n\n", "msg_date": "Wed, 21 Nov 2012 09:47:12 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> If we're going to do it can we please come up with something more \n> intuitive and much, much more documented than \"OFFSET 0\"? And if/when we \n> do this we'll need to have big red warnings all over then release notes, \n> since a lot of people I know will need to do some extensive remediation \n> before moving to such a release.\n\nThe probability that we would actually *remove* that behavior of OFFSET\n0 is not distinguishable from zero. I'm not terribly excited about\nhaving an alternate syntax to specify an optimization fence, but even\nif we do create such a thing, there's no need to break the historical\nusage.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 21 Nov 2012 09:59:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "\nOn 11/21/2012 09:59 AM, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> If we're going to do it can we please come up with something more\n>> intuitive and much, much more documented than \"OFFSET 0\"? And if/when we\n>> do this we'll need to have big red warnings all over then release notes,\n>> since a lot of people I know will need to do some extensive remediation\n>> before moving to such a release.\n> The probability that we would actually *remove* that behavior of OFFSET\n> 0 is not distinguishable from zero. I'm not terribly excited about\n> having an alternate syntax to specify an optimization fence, but even\n> if we do create such a thing, there's no need to break the historical\n> usage.\n>\n\nI wasn't talking about removing it. My point was that if the \noptimization fence around CTEs is removed a lot of people will need to \nrework apps where they have used them for that purpose. And I continue \nto think that spelling it \"OFFSET 0\" is horribly obscure.\n\ncheers\n\nandrew\n\n", "msg_date": "Wed, 21 Nov 2012 10:21:16 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:\n> \n> On 11/21/2012 09:59 AM, Tom Lane wrote:\n> >Andrew Dunstan <[email protected]> writes:\n> >>If we're going to do it can we please come up with something more\n> >>intuitive and much, much more documented than \"OFFSET 0\"? And if/when we\n> >>do this we'll need to have big red warnings all over then release notes,\n> >>since a lot of people I know will need to do some extensive remediation\n> >>before moving to such a release.\n> >The probability that we would actually *remove* that behavior of OFFSET\n> >0 is not distinguishable from zero. 
I'm not terribly excited about\n> >having an alternate syntax to specify an optimization fence, but even\n> >if we do create such a thing, there's no need to break the historical\n> >usage.\n> >\n> \n> I wasn't talking about removing it. My point was that if the optimization\n> fence around CTEs is removed a lot of people will need to rework apps where\n> they have used them for that purpose. And I continue to think that spelling\n> it \"OFFSET 0\" is horribly obscure.\n\n+1\n\nWITH foo AS (SELECT ...) (barrier=on|off)?\n\n9.3 introduces the syntax, defaulting to on\n9.4 switches the default to off.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Wed, 21 Nov 2012 16:32:09 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 21 November 2012 15:21, Andrew Dunstan <[email protected]> wrote:\n> And I continue to think that spelling it \"OFFSET 0\" is horribly obscure.\n\nI'm not sure that it's any more obscure than the very idea of an\noptimisation fence.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n", "msg_date": "Wed, 21 Nov 2012 15:32:22 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 22/11/12 04:32, Andres Freund wrote:\n> On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:\n>> On 11/21/2012 09:59 AM, Tom Lane wrote:\n>>> Andrew Dunstan <[email protected]> writes:\n>>>> If we're going to do it can we please come up with something more\n>>>> intuitive and much, much more documented than \"OFFSET 0\"? And if/when we\n>>>> do this we'll need to have big red warnings all over then release notes,\n>>>> since a lot of people I know will need to do some extensive remediation\n>>>> before moving to such a release.\n>>> The probability that we would actually *remove* that behavior of OFFSET\n>>> 0 is not distinguishable from zero. I'm not terribly excited about\n>>> having an alternate syntax to specify an optimization fence, but even\n>>> if we do create such a thing, there's no need to break the historical\n>>> usage.\n>>>\n>> I wasn't talking about removing it. My point was that if the optimization\n>> fence around CTEs is removed a lot of people will need to rework apps where\n>> they have used them for that purpose. And I continue to think that spelling\n>> it \"OFFSET 0\" is horribly obscure.\n> +1\n>\n> WITH foo AS (SELECT ...) (barrier=on|off)?\n>\n> 9.3 introduces the syntax, defaulting to on\n> 9.4 switches the default to off.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n\nWITH foo AS (SELECT ...) (fence=on|off)?\n\n\nWITH foo AS (SELECT ...) (optimisation_fence=on|off)?\n\n\n\n", "msg_date": "Thu, 22 Nov 2012 04:42:16 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 21.11.2012 17:42, Gavin Flower wrote:\n> On 22/11/12 04:32, Andres Freund wrote:\n>> On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:\n>>> I wasn't talking about removing it. My point was that if the\n>>> optimization\n>>> fence around CTEs is removed a lot of people will need to rework apps\n>>> where\n>>> they have used them for that purpose. And I continue to think that\n>>> spelling\n>>> it \"OFFSET 0\" is horribly obscure.\n>> +1\n\nFWIW, I'm happy with \"OFFSET 0\". 
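For reference, the usual spelling of this fence outside of WITH is an OFFSET 0 in a subquery; a minimal sketch, with made-up table and column names:

    -- OFFSET 0 keeps "sub" as a separately planned subquery: it is neither
    -- flattened nor is the outer WHERE clause pushed down into it
    SELECT *
    FROM (SELECT id, payload FROM big_table OFFSET 0) AS sub
    WHERE sub.id = 42;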
Granted, it's pretty obscure, but \nthat's what we've historically recommended, and it's pretty ugly to have \nto specify a fence like that in the first place. Whenever you have to \nresort to it, you ought have a comment in the query explaining why you \nneed to force the planner like that, anyway.\n\n>> WITH foo AS (SELECT ...) (barrier=on|off)?\n>>\n>> 9.3 introduces the syntax, defaulting to on\n>> 9.4 switches the default to off.\n>\n> WITH foo AS (SELECT ...) (fence=on|off)?\n>\n> WITH foo AS (SELECT ...) (optimisation_fence=on|off)?\n\nIf we are to invent a new syntax for this, can we please come up with \nsomething that's more widely applicable than just the WITH syntax. \nSomething that you could use to replace OFFSET 0 in a subquery, too.\n\n- Heikki\n\n", "msg_date": "Wed, 21 Nov 2012 17:56:01 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Wed, Nov 21, 2012 at 12:32 PM, Andres Freund <[email protected]> wrote:\n> +1\n>\n> WITH foo AS (SELECT ...) (barrier=on|off)?\n>\n> 9.3 introduces the syntax, defaulting to on\n> 9.4 switches the default to off.\n\nWhy syntax? What about a guc?\n\ncollapse_cte_limit?\n\n", "msg_date": "Wed, 21 Nov 2012 13:16:25 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 2012-11-21 13:16:25 -0300, Claudio Freire wrote:\n> On Wed, Nov 21, 2012 at 12:32 PM, Andres Freund <[email protected]> wrote:\n> > +1\n> >\n> > WITH foo AS (SELECT ...) (barrier=on|off)?\n> >\n> > 9.3 introduces the syntax, defaulting to on\n> > 9.4 switches the default to off.\n>\n> Why syntax? What about a guc?\n>\n> collapse_cte_limit?\n\nBecause there are very good reasons to want to current behaviour. A guc\nis a global either/or so I don't see it helping much.\n\nGreetings,\n\nAndres Freund\n\n--\n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Wed, 21 Nov 2012 17:24:16 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Wed, Nov 21, 2012 at 1:24 PM, Andres Freund <[email protected]> wrote:\n> On 2012-11-21 13:16:25 -0300, Claudio Freire wrote:\n>> On Wed, Nov 21, 2012 at 12:32 PM, Andres Freund <[email protected]> wrote:\n>> > +1\n>> >\n>> > WITH foo AS (SELECT ...) (barrier=on|off)?\n>> >\n>> > 9.3 introduces the syntax, defaulting to on\n>> > 9.4 switches the default to off.\n>>\n>> Why syntax? What about a guc?\n>>\n>> collapse_cte_limit?\n>\n> Because there are very good reasons to want to current behaviour. A guc\n> is a global either/or so I don't see it helping much.\n\nset collapse_cte_limit=8;\nwith blah as (blah) select blah;\n\nNot global at all.\n\n", "msg_date": "Wed, 21 Nov 2012 13:32:45 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 2012-11-21 13:32:45 -0300, Claudio Freire wrote:\n> On Wed, Nov 21, 2012 at 1:24 PM, Andres Freund <[email protected]> wrote:\n> > On 2012-11-21 13:16:25 -0300, Claudio Freire wrote:\n> >> On Wed, Nov 21, 2012 at 12:32 PM, Andres Freund <[email protected]> wrote:\n> >> > +1\n> >> >\n> >> > WITH foo AS (SELECT ...) 
(barrier=on|off)?\n> >> >\n> >> > 9.3 introduces the syntax, defaulting to on\n> >> > 9.4 switches the default to off.\n> >>\n> >> Why syntax? What about a guc?\n> >>\n> >> collapse_cte_limit?\n> >\n> > Because there are very good reasons to want to current behaviour. A guc\n> > is a global either/or so I don't see it helping much.\n> \n> set collapse_cte_limit=8;\n> with blah as (blah) select blah;\n> \n> Not global at all.\n\nNot very manageable though. And it doesn't help if you need both in a\nquery which isn't actually that unlikely.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Wed, 21 Nov 2012 17:34:50 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "\nOn 11/21/2012 11:32 AM, Claudio Freire wrote:\n> On Wed, Nov 21, 2012 at 1:24 PM, Andres Freund <[email protected]> wrote:\n>> On 2012-11-21 13:16:25 -0300, Claudio Freire wrote:\n>>> On Wed, Nov 21, 2012 at 12:32 PM, Andres Freund <[email protected]> wrote:\n>>>> +1\n>>>>\n>>>> WITH foo AS (SELECT ...) (barrier=on|off)?\n>>>>\n>>>> 9.3 introduces the syntax, defaulting to on\n>>>> 9.4 switches the default to off.\n>>> Why syntax? What about a guc?\n>>>\n>>> collapse_cte_limit?\n>> Because there are very good reasons to want to current behaviour. A guc\n>> is a global either/or so I don't see it helping much.\n> set collapse_cte_limit=8;\n> with blah as (blah) select blah;\n>\n> Not global at all.\n>\n\nThen you have to unset it again, which is ugly. You might even want it \napplying to *part* of a query, not the whole thing, so this strikes me \nas a dead end.\n\ncheers\n\nandrew\n\n", "msg_date": "Wed, 21 Nov 2012 11:35:18 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Wed, Nov 21, 2012 at 1:35 PM, Andrew Dunstan <[email protected]> wrote:\n>>>> Why syntax? What about a guc?\n>>>>\n>>>> collapse_cte_limit?\n>>>\n>>> Because there are very good reasons to want to current behaviour. A guc\n>>> is a global either/or so I don't see it helping much.\n>>\n>> set collapse_cte_limit=8;\n>> with blah as (blah) select blah;\n>>\n>> Not global at all.\n>>\n>\n> Then you have to unset it again, which is ugly. You might even want it\n> applying to *part* of a query, not the whole thing, so this strikes me as a\n> dead end.\n\nReally?\n\nBecause I've seen here people that want it generally (because\nOracle/MSSQL/your favourite db does it), and people that don't want it\n(generally because they need it). I haven't seen any mention to mixing\nfenced and unfenced usage.\n\n", "msg_date": "Wed, 21 Nov 2012 13:37:55 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> collapse_cte_limit?\n\nThe join collapse limits address a completely different problem (ie,\nexplosion of planning time with too many relations), and are pretty much\nuseless as a model for this. 
As multiple people told you already,\noptimization fences are typically wanted for only specific subqueries.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 21 Nov 2012 12:09:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 22/11/12 04:56, Heikki Linnakangas wrote:\n> On 21.11.2012 17:42, Gavin Flower wrote:\n>> On 22/11/12 04:32, Andres Freund wrote:\n>>> On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:\n>>>> I wasn't talking about removing it. My point was that if the\n>>>> optimization\n>>>> fence around CTEs is removed a lot of people will need to rework apps\n>>>> where\n>>>> they have used them for that purpose. And I continue to think that\n>>>> spelling\n>>>> it \"OFFSET 0\" is horribly obscure.\n>>> +1\n>\n> FWIW, I'm happy with \"OFFSET 0\". Granted, it's pretty obscure, but \n> that's what we've historically recommended, and it's pretty ugly to \n> have to specify a fence like that in the first place. Whenever you \n> have to resort to it, you ought have a comment in the query explaining \n> why you need to force the planner like that, anyway.\n>\n>>> WITH foo AS (SELECT ...) (barrier=on|off)?\n>>>\n>>> 9.3 introduces the syntax, defaulting to on\n>>> 9.4 switches the default to off.\n>>\n>> WITH foo AS (SELECT ...) (fence=on|off)?\n>>\n>> WITH foo AS (SELECT ...) (optimisation_fence=on|off)?\n>\n> If we are to invent a new syntax for this, can we please come up with \n> something that's more widely applicable than just the WITH syntax. \n> Something that you could use to replace OFFSET 0 in a subquery, too.\n>\n> - Heikki\nWITH FENCE foo AS (SELECT ...)\ndefault?\n\n\nWITHOUT FENCE foo AS (SELECT ...) :-)\nNah!\nI prefer this, but it is too specific to 'WITH',\nand very unSQL standardish!\n\nAlternatively one of the following\n\n 1. WITH UNFENCED foo AS (SELECT ...)\n 2. WITH NO FENCE foo AS (SELECT ...)\n 3. WITH NOT FENCE foo AS (SELECT ...)\n\nI loke the firsat variant, but the 3rd is\nmost SQL standardish!\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 22/11/12 04:56, Heikki Linnakangas\n wrote:\n\nOn\n 21.11.2012 17:42, Gavin Flower wrote:\n \nOn 22/11/12 04:32, Andres Freund wrote:\n \nOn 2012-11-21 10:21:16 -0500, Andrew\n Dunstan wrote:\n \nI wasn't talking about removing it. My\n point was that if the\n \n optimization\n \n fence around CTEs is removed a lot of people will need to\n rework apps\n \n where\n \n they have used them for that purpose. And I continue to\n think that\n \n spelling\n \n it \"OFFSET 0\" is horribly obscure.\n \n\n +1\n \n\n\n\n FWIW, I'm happy with \"OFFSET 0\". Granted, it's pretty obscure, but\n that's what we've historically recommended, and it's pretty ugly\n to have to specify a fence like that in the first place. Whenever\n you have to resort to it, you ought have a comment in the query\n explaining why you need to force the planner like that, anyway.\n \n\n\nWITH foo AS (SELECT ...)\n (barrier=on|off)?\n \n\n 9.3 introduces the syntax, defaulting to on\n \n 9.4 switches the default to off.\n \n\n\n WITH foo AS (SELECT ...) (fence=on|off)?\n \n\n WITH foo AS (SELECT ...) (optimisation_fence=on|off)?\n \n\n\n If we are to invent a new syntax for this, can we please come up\n with something that's more widely applicable than just the WITH\n syntax. Something that you could use to replace OFFSET 0 in a\n subquery, too.\n \n\n - Heikki\n \n\n WITH FENCE foo AS (SELECT ...)\n default?\n\n\n WITHOUT FENCE foo AS (SELECT ...)     :-)\n Nah! 
\n I prefer this, but it is too specific to 'WITH',\n and very unSQL standardish!\n\n Alternatively one of the following\n\n WITH UNFENCED foo AS (SELECT ...)\n WITH NO FENCE foo AS (SELECT ...)\n WITH NOT FENCE foo AS (SELECT ...)\n\n I loke the firsat variant, but the 3rd is\n most SQL standardish!\n\n Cheers,\n Gavin", "msg_date": "Thu, 22 Nov 2012 08:30:25 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "\nOn 11/21/2012 02:30 PM, Gavin Flower wrote:\n> WITH FENCE foo AS (SELECT ...)\n> default?\n>\n>\n> WITHOUT FENCE foo AS (SELECT ...) :-)\n> Nah!\n> I prefer this, but it is too specific to 'WITH',\n> and very unSQL standardish!\n>\n> Alternatively one of the following\n>\n> 1. WITH UNFENCED foo AS (SELECT ...)\n> 2. WITH NO FENCE foo AS (SELECT ...)\n> 3. WITH NOT FENCE foo AS (SELECT ...)\n>\n> I loke the firsat variant, but the 3rd is\n> most SQL standardish!\n>\n\nAs Tom (I think) pointed out, we should not have a syntax tied to CTEs.\n\ncheers\n\nandrew\n\n\n", "msg_date": "Wed, 21 Nov 2012 14:42:00 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 22/11/12 08:42, Andrew Dunstan wrote:\n>\n> On 11/21/2012 02:30 PM, Gavin Flower wrote:\n>> WITH FENCE foo AS (SELECT ...)\n>> default?\n>>\n>>\n>> WITHOUT FENCE foo AS (SELECT ...) :-)\n>> Nah!\n>> I prefer this, but it is too specific to 'WITH',\n>> and very unSQL standardish!\n>>\n>> Alternatively one of the following\n>>\n>> 1. WITH UNFENCED foo AS (SELECT ...)\n>> 2. WITH NO FENCE foo AS (SELECT ...)\n>> 3. WITH NOT FENCE foo AS (SELECT ...)\n>>\n>> I loke the firsat variant, but the 3rd is\n>> most SQL standardish!\n>>\n>\n> As Tom (I think) pointed out, we should not have a syntax tied to CTEs.\n>\n> cheers\n>\n> andrew\n>\nIf other SQL constructs have a optimisation fence, then the FENCE & NOT \nFENCE syntax can be used theire as well.\n\nSo what am I missing? (obviously WITHOUT FENCE would not make sense \nelsewhere, but I wasn't really being serious when I suggested it!)\n\n\n", "msg_date": "Thu, 22 Nov 2012 10:24:55 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 11/22/2012 03:30 AM, Gavin Flower wrote:\n> On 22/11/12 04:56, Heikki Linnakangas wrote:\n>> On 21.11.2012 17:42, Gavin Flower wrote:\n>>> On 22/11/12 04:32, Andres Freund wrote:\n>>>> On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:\n>>>>> I wasn't talking about removing it. My point was that if the\n>>>>> optimization\n>>>>> fence around CTEs is removed a lot of people will need to rework apps\n>>>>> where\n>>>>> they have used them for that purpose. And I continue to think that\n>>>>> spelling\n>>>>> it \"OFFSET 0\" is horribly obscure.\n>>>> +1\n>>\n>> FWIW, I'm happy with \"OFFSET 0\". Granted, it's pretty obscure, but\n>> that's what we've historically recommended, and it's pretty ugly to\n>> have to specify a fence like that in the first place. Whenever you\n>> have to resort to it, you ought have a comment in the query\n>> explaining why you need to force the planner like that, anyway.\n>>\n>>>> WITH foo AS (SELECT ...) (barrier=on|off)?\n>>>>\n>>>> 9.3 introduces the syntax, defaulting to on\n>>>> 9.4 switches the default to off.\n>>>\n>>> WITH foo AS (SELECT ...) (fence=on|off)?\n>>>\n>>> WITH foo AS (SELECT ...) 
(optimisation_fence=on|off)?\n>>\n>> If we are to invent a new syntax for this, can we please come up with\n>> something that's more widely applicable than just the WITH syntax.\n>> Something that you could use to replace OFFSET 0 in a subquery, too.\n>>\n>> - Heikki\n> WITH FENCE foo AS (SELECT ...)\n> default?\nThat doesn't bind tightly enough to a specific CTE term. Consider:\n\nWITH\n FENCE foo AS (SELECT ...),\n bar AS (SELECT ...)\nSELECT * FROM bar;\n\nAre we fencing just foo? Or all expressions?\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 11/22/2012 03:30 AM, Gavin Flower\n wrote:\n\n\n\nOn 22/11/12 04:56, Heikki Linnakangas\n wrote:\n\nOn\n 21.11.2012 17:42, Gavin Flower wrote: \nOn 22/11/12 04:32, Andres Freund wrote:\n \nOn 2012-11-21 10:21:16 -0500, Andrew\n Dunstan wrote: \nI wasn't talking about removing it.\n My point was that if the \n optimization \n fence around CTEs is removed a lot of people will need to\n rework apps \n where \n they have used them for that purpose. And I continue to\n think that \n spelling \n it \"OFFSET 0\" is horribly obscure. \n\n +1 \n\n\n\n FWIW, I'm happy with \"OFFSET 0\". Granted, it's pretty obscure,\n but that's what we've historically recommended, and it's pretty\n ugly to have to specify a fence like that in the first place.\n Whenever you have to resort to it, you ought have a comment in\n the query explaining why you need to force the planner like\n that, anyway. \n\n\nWITH foo AS (SELECT ...)\n (barrier=on|off)? \n\n 9.3 introduces the syntax, defaulting to on \n 9.4 switches the default to off. \n\n\n WITH foo AS (SELECT ...) (fence=on|off)? \n\n WITH foo AS (SELECT ...) (optimisation_fence=on|off)? \n\n\n If we are to invent a new syntax for this, can we please come up\n with something that's more widely applicable than just the WITH\n syntax. Something that you could use to replace OFFSET 0 in a\n subquery, too. \n\n - Heikki \n\n WITH FENCE foo AS (SELECT ...)\n default?\n\n That doesn't bind tightly enough to a specific CTE term. Consider:\n\n WITH \n   FENCE foo AS (SELECT ...),\n   bar AS (SELECT ...)\n SELECT * FROM bar;\n\n Are we fencing just foo? Or all expressions? \n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 22 Nov 2012 08:08:54 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 22/11/12 13:08, Craig Ringer wrote:\n> On 11/22/2012 03:30 AM, Gavin Flower wrote:\n>> On 22/11/12 04:56, Heikki Linnakangas wrote:\n>>> On 21.11.2012 17:42, Gavin Flower wrote:\n>>>> On 22/11/12 04:32, Andres Freund wrote:\n>>>>> On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:\n>>>>>> I wasn't talking about removing it. My point was that if the\n>>>>>> optimization\n>>>>>> fence around CTEs is removed a lot of people will need to rework \n>>>>>> apps\n>>>>>> where\n>>>>>> they have used them for that purpose. And I continue to think that\n>>>>>> spelling\n>>>>>> it \"OFFSET 0\" is horribly obscure.\n>>>>> +1\n>>>\n>>> FWIW, I'm happy with \"OFFSET 0\". Granted, it's pretty obscure, but \n>>> that's what we've historically recommended, and it's pretty ugly to \n>>> have to specify a fence like that in the first place. 
Whenever you \n>>> have to resort to it, you ought have a comment in the query \n>>> explaining why you need to force the planner like that, anyway.\n>>>\n>>>>> WITH foo AS (SELECT ...) (barrier=on|off)?\n>>>>>\n>>>>> 9.3 introduces the syntax, defaulting to on\n>>>>> 9.4 switches the default to off.\n>>>>\n>>>> WITH foo AS (SELECT ...) (fence=on|off)?\n>>>>\n>>>> WITH foo AS (SELECT ...) (optimisation_fence=on|off)?\n>>>\n>>> If we are to invent a new syntax for this, can we please come up \n>>> with something that's more widely applicable than just the WITH \n>>> syntax. Something that you could use to replace OFFSET 0 in a \n>>> subquery, too.\n>>>\n>>> - Heikki\n>> WITH FENCE foo AS (SELECT ...)\n>> default?\n> That doesn't bind tightly enough to a specific CTE term. Consider:\n>\n> WITH\n> FENCE foo AS (SELECT ...),\n> bar AS (SELECT ...)\n> SELECT * FROM bar;\n>\n> Are we fencing just foo? Or all expressions?\n>\n>\n> -- \n> Craig Ringerhttp://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\nAre we fencing or fooing??? :-)\n\nHmm...\nHow about:\n\n(a) If we have lots of WITH SELECTS which mostly have one specific type \nof fencing, then we could specify the common fence value after the WITH \nand the exceptions after the AS:\n\nWITH FENCE\n foo AS (SELECT ...),\n bar AS NOT FENCE (SELECT ...).\n baz AS (SELECT ...)\nSELECT * FROM bar;\n\nalternatively:\nWITH NOT FENCE\n foo AS FENCE (SELECT ...),\n bar AS (SELECT ...).\n baz AS FENCE (SELECT ...)\nSELECT * FROM bar;\n\n(b) If we retain that FENCE is the default, then it would be simpler \njust to just allow a FENCE clause after the AS keyword.\n\nWITH\n foo AS (SELECT ...),\n bar AS NOT FENCE (SELECT ...).\n baz AS (SELECT ...)\nSELECT * FROM bar;\n\nObviously even for (a), we have to have one value of the FENCE clause as \nthe default. Either make the default FENCE, as now - or NOT FENCE if \nthat is seen to be a better default, especially if that is easier for \npeople coming from Oracle.\n\nI suspect most people are blissfully unaware of CTE's being fenced, or \nat least not really sure what it means. So I suspect NOT FENCE would be \nthe better default.\n\nAlternative spellings might be better such as:\nFENCED / NOT FENCED\nor\nFENCED / UNFENCED\n\n\nCheers,\nGavin\n\n\n\n\n\n\n\n\nOn 22/11/12 13:08, Craig Ringer wrote:\n\n\n\nOn 11/22/2012 03:30 AM, Gavin Flower\n wrote:\n\n\n\nOn 22/11/12 04:56, Heikki\n Linnakangas wrote:\n\nOn\n\n 21.11.2012 17:42, Gavin Flower wrote: \nOn 22/11/12 04:32, Andres Freund\n wrote: \nOn 2012-11-21 10:21:16 -0500, Andrew\n Dunstan wrote: \nI wasn't talking about removing\n it. My point was that if the \n optimization \n fence around CTEs is removed a lot of people will need\n to rework apps \n where \n they have used them for that purpose. And I continue to\n think that \n spelling \n it \"OFFSET 0\" is horribly obscure. \n\n +1 \n\n\n\n FWIW, I'm happy with \"OFFSET 0\". Granted, it's pretty obscure,\n but that's what we've historically recommended, and it's\n pretty ugly to have to specify a fence like that in the first\n place. Whenever you have to resort to it, you ought have a\n comment in the query explaining why you need to force the\n planner like that, anyway. \n\n\nWITH foo AS (SELECT ...)\n (barrier=on|off)? \n\n 9.3 introduces the syntax, defaulting to on \n 9.4 switches the default to off. \n\n\n WITH foo AS (SELECT ...) (fence=on|off)? \n\n WITH foo AS (SELECT ...) (optimisation_fence=on|off)? 
\n\n\n If we are to invent a new syntax for this, can we please come\n up with something that's more widely applicable than just the\n WITH syntax. Something that you could use to replace OFFSET 0\n in a subquery, too. \n\n - Heikki \n\n WITH FENCE foo AS (SELECT ...)\n default?\n\n That doesn't bind tightly enough to a specific CTE term. Consider:\n\n WITH \n   FENCE foo AS (SELECT ...),\n   bar AS (SELECT ...)\n SELECT * FROM bar;\n\n Are we fencing just foo? Or all expressions? \n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n Are we fencing or fooing??? :-)\n\n Hmm...\n How about:\n\n (a) If we have lots of WITH SELECTS which mostly have one specific\n type of fencing, then we could specify the common fence value after\n the WITH and the exceptions after the AS:\n\n WITH FENCE\n   foo AS (SELECT ...),\n   bar AS NOT FENCE (SELECT ...).\n   baz AS (SELECT ...)\n SELECT * FROM bar;\n\n alternatively:\n WITH NOT FENCE\n   foo AS FENCE (SELECT ...),\n   bar AS (SELECT ...).\n   baz AS FENCE (SELECT ...)\n SELECT * FROM bar;\n\n (b) If we retain that FENCE is the default, then it would be simpler\n just to just allow a FENCE clause after the AS keyword.\n\n WITH\n   foo AS (SELECT ...),\n   bar AS NOT FENCE (SELECT ...).\n   baz AS (SELECT ...)\n SELECT * FROM bar;\n\n Obviously even for (a), we have to have one value of the FENCE\n clause as the default.  Either make the default FENCE, as now - or\n NOT FENCE if that is seen to be a better default, especially if that\n is easier for people coming from Oracle.\n\n I suspect most people are blissfully unaware of CTE's being fenced,\n or at least not really sure what it means. So I suspect NOT FENCE\n would be the better default.\n\n Alternative spellings might be better such as:\n FENCED / NOT FENCED\n or\n FENCED / UNFENCED\n\n\n Cheers,\n Gavin", "msg_date": "Thu, 22 Nov 2012 13:38:54 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 11/22/2012 08:38 AM, Gavin Flower wrote:\n> I suspect most people are blissfully unaware of CTE's being fenced, or\n> at least not really sure what it means. So I suspect NOT FENCE would\n> be the better default.\nIt's also probably more standard, and a better fit with what other DBs do.\n\nPg would still need to detect conditions like the use of functions with\nside effects or (obviously) INSERT/UPDATE/DELETE wCTEs and not push\nconditions down into them / pull conditions up from them, etc. That's\nhow I read the standard, though; must have the same effect as if the\nqueries were executed as written, so Pg is free to transform them so\nlong as it doesn't change the results.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Thu, 22 Nov 2012 08:42:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On 22/11/2012 00:08, Craig Ringer wrote:\n> WITH\n> FENCE foo AS (SELECT ...),\n> bar AS (SELECT ...)\n> SELECT * FROM bar;\n>\n> Are we fencing just foo? Or all expressions?\n>\n\nWITH foo AS (FENCED SELECT ...),\n bar AS (SELECT ...),\nSELECT ... 
;\n\n-- \nJeremy\n\n\n\n", "msg_date": "Thu, 22 Nov 2012 13:42:33 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "On Thu, Nov 22, 2012 at 7:42 AM, Jeremy Harris <[email protected]> wrote:\n> On 22/11/2012 00:08, Craig Ringer wrote:\n>>\n>> WITH\n>> FENCE foo AS (SELECT ...),\n>> bar AS (SELECT ...)\n>> SELECT * FROM bar;\n>>\n>> Are we fencing just foo? Or all expressions?\n>>\n>\n> WITH foo AS (FENCED SELECT ...),\n> bar AS (SELECT ...),\n> SELECT ... ;\n\nI would much rather see 'MATERIALIZE' instead of 'FENCED', unless by\nthe latter you mean to forbid *all* optimizations, whereas with the\nformer the meaning is pretty clear.\n\n-- \nJon\n\n", "msg_date": "Thu, 22 Nov 2012 08:23:57 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" } ]
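To make the behaviour being debated above concrete, here is a minimal sketch (hypothetical table and column names) of the difference the proposed per-CTE syntax would control. In the releases discussed in this thread, the first form is an optimization fence: the CTE is planned on its own, so the filter is applied only after big_table has been read. The second form can be flattened, so the predicate can reach an index on big_table.id.

    -- fenced today: bar is evaluated as written, then filtered
    WITH bar AS (SELECT * FROM big_table)
    SELECT * FROM bar WHERE id = 42;

    -- flattenable today: the planner may pull the subquery up and push id = 42 down
    SELECT * FROM (SELECT * FROM big_table) AS bar WHERE id = 42;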
[ { "msg_contents": "Have a query using a CTE that is performing very poorly. The equivalent query against the same data in an Oracle database runs in under 1 second, in Postgres it takes 2000 seconds.\n\nThe smp_pkg.get_invoice_charges queries fedexinvoices for some data and normalizes it into a SETOF some record type. It is declared to be STABLE. Fedexinvoices consists of about 1.3M rows of medium width. Fedexinvoices.id is the primary key on that table, and trim(fedexinvoices.trackno) is indexed via the function trim.\n\nThe plan for the equivalent query in Oracle is much smaller and simpler. No sequential (or full table) scans on fedexinvoices.\n\n\n\nWITH charges as (\n SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) charge_info from fedexinvoices fi2\n)\nselect fedexinvoices.* from\nfedexinvoices\ninner join charges on charges.id = fedexinvoices.id AND (charges.charge_info).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\nwhere\ntrim(fedexinvoices.trackno)='799159791643'\n;\n\nExplain Analyze output, I abbreviated some of the column lists for brevity:\n\nNested Loop (cost=457380.38..487940.77 rows=1 width=1024) (actual time=1978019.858..1978019.858 rows=0 loops=1)\n Output: fedexinvoices.id, .........\n Join Filter: (fedexinvoices.id = charges.id)\n Buffers: shared hit=20387611, temp written=94071\n CTE charges\n -> Seq Scan on hits.fedexinvoices fi2 (cost=0.00..457380.38 rows=1350513 width=8) (actual time=0.613..1964632.763 rows=9007863 loops=1)\n Output: fi2.id, smp_pkg.get_invoice_charges(fi2.id, NULL::character varying)\n Buffers: shared hit=20387606\n -> Index Scan using fedexinvoices_trim_track_idx on hits.fedexinvoices (cost=0.00..5.46 rows=1 width=1024) (actual time=0.024..0.026 rows=1 loops=1)\n Output: fedexinvoices.id, .........\n Index Cond: (btrim((fedexinvoices.trackno)::text) = '799159791643'::text)\n Buffers: shared hit=5\n -> CTE Scan on charges (cost=0.00..30386.54 rows=13471 width=8) (actual time=1978019.827..1978019.827 rows=0 loops=1)\n Output: charges.id, charges.charge_info\n Filter: (((charges.charge_info).charge_name)::text = ANY ('{\"ADDRESS CORRECTION CHARGE\",\"ADDRESS CORRECTION\"}'::text[]))\n Buffers: shared hit=20387606, temp written=94071\nTotal runtime: 1978214.743 ms\n\n\n\n\n\n\n\n\n\n\n\nHave a query using a CTE that is performing very poorly. The equivalent query against the same data in an Oracle database runs in under 1 second, in Postgres  it takes 2000 seconds.\n\n \nThe smp_pkg.get_invoice_charges queries fedexinvoices for some data and normalizes it into a SETOF some record type. It is declared to be STABLE. Fedexinvoices consists of about 1.3M rows of medium width. Fedexinvoices.id is the primary\n key on that table, and trim(fedexinvoices.trackno) is indexed via the function trim.\n \nThe plan for the equivalent query in Oracle is much smaller and simpler. 
No sequential (or full table) scans on fedexinvoices.\n \n \n \nWITH charges as (\n                SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) charge_info from fedexinvoices fi2\n)\nselect fedexinvoices.* from \nfedexinvoices\ninner join charges on charges.id = fedexinvoices.id AND (charges.charge_info).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\nwhere\ntrim(fedexinvoices.trackno)='799159791643'\n;\n \nExplain Analyze output, I abbreviated some of the column lists for brevity:\n \nNested Loop  (cost=457380.38..487940.77 rows=1 width=1024) (actual time=1978019.858..1978019.858 rows=0 loops=1)\n  Output: fedexinvoices.id, ………\n  Join Filter: (fedexinvoices.id = charges.id)\n  Buffers: shared hit=20387611, temp written=94071\n  CTE charges\n    ->  Seq Scan on hits.fedexinvoices fi2  (cost=0.00..457380.38 rows=1350513 width=8) (actual time=0.613..1964632.763 rows=9007863 loops=1)\n          Output: fi2.id, smp_pkg.get_invoice_charges(fi2.id, NULL::character varying)\n          Buffers: shared hit=20387606\n  ->  Index Scan using fedexinvoices_trim_track_idx on hits.fedexinvoices  (cost=0.00..5.46 rows=1 width=1024) (actual time=0.024..0.026 rows=1 loops=1)\n        Output: fedexinvoices.id, ………\n        Index Cond: (btrim((fedexinvoices.trackno)::text) = '799159791643'::text)\n        Buffers: shared hit=5\n  ->  CTE Scan on charges  (cost=0.00..30386.54 rows=13471 width=8) (actual time=1978019.827..1978019.827 rows=0 loops=1)\n        Output: charges.id, charges.charge_info\n        Filter: (((charges.charge_info).charge_name)::text = ANY ('{\"ADDRESS CORRECTION CHARGE\",\"ADDRESS CORRECTION\"}'::text[]))\n        Buffers: shared hit=20387606, temp written=94071\nTotal runtime: 1978214.743 ms", "msg_date": "Wed, 14 Nov 2012 15:23:15 +0000", "msg_from": "David Greco <[email protected]>", "msg_from_op": true, "msg_subject": "Poor performance using CTE" }, { "msg_contents": "\nOn 11/14/2012 10:23 AM, David Greco wrote:\n>\n> Have a query using a CTE that is performing very poorly. The \n> equivalent query against the same data in an Oracle database runs in \n> under 1 second, in Postgres it takes 2000 seconds.\n>\n> The smp_pkg.get_invoice_charges queries fedexinvoices for some data \n> and normalizes it into a SETOF some record type. It is declared to be \n> STABLE. Fedexinvoices consists of about 1.3M rows of medium width. \n> Fedexinvoices.id is the primary key on that table, and \n> trim(fedexinvoices.trackno) is indexed via the function trim.\n>\n> The plan for the equivalent query in Oracle is much smaller and \n> simpler. No sequential (or full table) scans on fedexinvoices.\n>\n> WITH charges as (\n>\n> SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) \n> charge_info from fedexinvoices fi2\n>\n> )\n>\n> select fedexinvoices.* from\n>\n> fedexinvoices\n>\n> inner join charges on charges.id = fedexinvoices.id AND \n> (charges.charge_info).charge_name IN ('ADDRESS CORRECTION \n> CHARGE','ADDRESS CORRECTION')\n>\n> where\n>\n> trim(fedexinvoices.trackno)='799159791643'\n>\n> ;\n>\n\n\nCan you explain what you're actually trying to do here? The query looks \nrather odd. 
Why are you joining this table (or an extract from it) to \nitself?\n\n\nIn any case, you could almost certainly recast it and have it run fast \nby first filtering on the tracking number.\n\n\ncheers\n\nandrew\n\n", "msg_date": "Wed, 14 Nov 2012 10:50:58 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "You're right. I was translating an oracle query , but looks like PG will allow some syntax that is different. Trying to find entries in fedexinvoices where smp_pkg.get_invoice_charges(id) returns a record containing charge_name in ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION'). Should return the fedexinvoices row and the row from smp_pkg.get_invoice_charges that contains the address correction.\n\n\nSomething like this, though this is syntactically incorrect as smp_pkg.get_invoice_charges returns a set:\n\n\nselect fedexinvoices.*, (smp_pkg.get_invoice_charges(id)).*\nfrom fedexinvoices\nWHERE\ntrim(fedexinvoices.trackno)='799159791643'\nand\n(smp_pkg.get_invoice_charges(id)).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\n\n\n\n\n-----Original Message-----\nFrom: Andrew Dunstan [mailto:[email protected]] \nSent: Wednesday, November 14, 2012 10:51 AM\nTo: David Greco\nCc: [email protected]\nSubject: Re: [PERFORM] Poor performance using CTE\n\n\nOn 11/14/2012 10:23 AM, David Greco wrote:\n>\n> Have a query using a CTE that is performing very poorly. The \n> equivalent query against the same data in an Oracle database runs in \n> under 1 second, in Postgres it takes 2000 seconds.\n>\n> The smp_pkg.get_invoice_charges queries fedexinvoices for some data \n> and normalizes it into a SETOF some record type. It is declared to be \n> STABLE. Fedexinvoices consists of about 1.3M rows of medium width.\n> Fedexinvoices.id is the primary key on that table, and\n> trim(fedexinvoices.trackno) is indexed via the function trim.\n>\n> The plan for the equivalent query in Oracle is much smaller and \n> simpler. No sequential (or full table) scans on fedexinvoices.\n>\n> WITH charges as (\n>\n> SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id)\n> charge_info from fedexinvoices fi2\n>\n> )\n>\n> select fedexinvoices.* from\n>\n> fedexinvoices\n>\n> inner join charges on charges.id = fedexinvoices.id AND \n> (charges.charge_info).charge_name IN ('ADDRESS CORRECTION \n> CHARGE','ADDRESS CORRECTION')\n>\n> where\n>\n> trim(fedexinvoices.trackno)='799159791643'\n>\n> ;\n>\n\n\nCan you explain what you're actually trying to do here? The query looks rather odd. Why are you joining this table (or an extract from it) to itself?\n\n\nIn any case, you could almost certainly recast it and have it run fast by first filtering on the tracking number.\n\n\ncheers\n\nandrew\n\n\n\n", "msg_date": "Wed, 14 Nov 2012 15:56:38 +0000", "msg_from": "David Greco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor performance using CTE" }, { "msg_contents": "\nOn 11/14/2012 10:56 AM, David Greco wrote:\n> You're right. I was translating an oracle query , but looks like PG will allow some syntax that is different. Trying to find entries in fedexinvoices where smp_pkg.get_invoice_charges(id) returns a record containing charge_name in ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION'). 
Should return the fedexinvoices row and the row from smp_pkg.get_invoice_charges that contains the address correction.\n>\n>\n> Something like this, though this is syntactically incorrect as smp_pkg.get_invoice_charges returns a set:\n>\n>\n> select fedexinvoices.*, (smp_pkg.get_invoice_charges(id)).*\n> from fedexinvoices\n> WHERE\n> trim(fedexinvoices.trackno)='799159791643'\n> and\n> (smp_pkg.get_invoice_charges(id)).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\n\n\nFirst, please don't top-post when someone has replied underneath your \npost. It makes the thread totally unreadable. See \n<http://idallen.com/topposting.html>\n\nYou could do something like this:\n\nWITH invoices as\n(\n select *\n from fedexinvoices\n where trim(fedexinvoices.trackno)='799159791643'\n),\n\ncharges as\n(\n SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) charge_info\n from fedexinvoices fi2 join invoices i on i.id = f12.id\n)\n\nselect invoices.*\nfrom invoices\ninner join charges on charges.id = invoices.id\n AND (charges.charge_info).charge_name IN ('ADDRESS CORRECTION \nCHARGE','ADDRESS CORRECTION')\n\n;\n\n\nOr probably something way simpler but I just did this fairly quickly and \nmechanically\n\n\ncheers\n\nandrew\n\n", "msg_date": "Wed, 14 Nov 2012 11:07:31 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor performance using CTE" } ]
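One candidate for the "something way simpler" Andrew mentions is to filter on the indexed tracking number first and only then expand the charges. This is an untested sketch that assumes the tables and the composite return type of smp_pkg.get_invoice_charges from the original post:

    SELECT *
    FROM (
        SELECT fi.*, smp_pkg.get_invoice_charges(fi.id) AS charge_info
        FROM fedexinvoices fi
        WHERE trim(fi.trackno) = '799159791643'
    ) s
    WHERE (s.charge_info).charge_name
          IN ('ADDRESS CORRECTION CHARGE', 'ADDRESS CORRECTION');

On 9.3 and later, a LATERAL join against the function would express the same idea more directly.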
[ { "msg_contents": "-----Original Message-----\nFrom: Andrew Dunstan [mailto:[email protected]] \nSent: Wednesday, November 14, 2012 11:08 AM\nTo: David Greco\nCc: [email protected]\nSubject: Re: [PERFORM] Poor performance using CTE\n\n\nOn 11/14/2012 10:56 AM, David Greco wrote:\n> You're right. I was translating an oracle query , but looks like PG will allow some syntax that is different. Trying to find entries in fedexinvoices where smp_pkg.get_invoice_charges(id) returns a record containing charge_name in ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION'). Should return the fedexinvoices row and the row from smp_pkg.get_invoice_charges that contains the address correction.\n>\n>\n> Something like this, though this is syntactically incorrect as smp_pkg.get_invoice_charges returns a set:\n>\n>\n> select fedexinvoices.*, (smp_pkg.get_invoice_charges(id)).*\n> from fedexinvoices\n> WHERE\n> trim(fedexinvoices.trackno)='799159791643'\n> and\n> (smp_pkg.get_invoice_charges(id)).charge_name IN ('ADDRESS CORRECTION \n> CHARGE','ADDRESS CORRECTION')\n\n\nFirst, please don't top-post when someone has replied underneath your post. It makes the thread totally unreadable. See <http://idallen.com/topposting.html>\n\nYou could do something like this:\n\nWITH invoices as\n(\n select *\n from fedexinvoices\n where trim(fedexinvoices.trackno)='799159791643'\n),\n\ncharges as\n(\n SELECT fi2.id, smp_pkg.get_invoice_charges(fi2.id) charge_info\n from fedexinvoices fi2 join invoices i on i.id = f12.id\n)\n\nselect invoices.*\nfrom invoices\ninner join charges on charges.id = invoices.id\n AND (charges.charge_info).charge_name IN ('ADDRESS CORRECTION CHARGE','ADDRESS CORRECTION')\n\n;\n\n\nOr probably something way simpler but I just did this fairly quickly and mechanically\n\n\ncheers\n\nandrew\n\n\n\n\n\n\n\nThanks, that did the trick. Though I'm still not clear as to why. \n\n\n\n\n\n", "msg_date": "Wed, 14 Nov 2012 16:18:42 +0000", "msg_from": "David Greco <[email protected]>", "msg_from_op": true, "msg_subject": "SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "David Greco <[email protected]> writes:\n> Thanks, that did the trick. Though I'm still not clear as to why. \n\nPG treats WITH as an optimization fence --- the WITH query will be\nexecuted pretty much as-is. It may be that Oracle flattens the query\nsomehow; though if you're using black-box functions in both cases,\nit's not obvious where the optimizer could get any purchase that way.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 14 Nov 2012 11:29:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On 11/15/2012 12:29 AM, Tom Lane wrote:\n> David Greco <[email protected]> writes:\n>> Thanks, that did the trick. Though I'm still not clear as to why. \n> PG treats WITH as an optimization fence --- the WITH query will be\n> executed pretty much as-is. It may be that Oracle flattens the query\n> somehow; though if you're using black-box functions in both cases,\n> it's not obvious where the optimizer could get any purchase that way.\n>\n\nI was looking through the latest spec drafts I have access to and\ncouldn't find any reference to Pg's optimisation-fence-for-CTEs\nbehaviour being required by the standard, though I've repeatedly seen it\nsaid that there is such a requirement.\n\nDo you know where it's specified?\n\nAll I can see is that the optimised result must have the same effect as\nthe original. 
That'd mean that wCTEs and CTE terms that use VOLATILE\nfunctions or functions with side-effects couldn't be optimised into\nother queries. Simple CTEs could be, though, and there are times I've\nreally wished I could use a CTE but I've had to use a set-returning\nsubquery to get reasonable plans.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Thu, 15 Nov 2012 09:17:28 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> I was looking through the latest spec drafts I have access to and\n> couldn't find any reference to Pg's optimisation-fence-for-CTEs\n> behaviour being required by the standard, though I've repeatedly seen it\n> said that there is such a requirement.\n\nI don't believe it's required by the standard (it's hard to see how it\ncould be, when query optimization is a topic outside the spec to start\nwith). However, we allow INSERT/UPDATE/DELETE RETURNING inside WITH,\nand for those I think you really need to treat WITH as an optimization\nfence. It's a lot more debatable for SELECT; there are some advantages\nto providing a fence this way but there are definitely downsides too.\nI could see adjusting that definition in the future, as we get more\nexperience with use of CTEs.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 14 Nov 2012 20:31:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "\nOn 11/14/2012 08:17 PM, Craig Ringer wrote:\n> On 11/15/2012 12:29 AM, Tom Lane wrote:\n>> David Greco <[email protected]> writes:\n>>> Thanks, that did the trick. Though I'm still not clear as to why.\n>> PG treats WITH as an optimization fence --- the WITH query will be\n>> executed pretty much as-is. It may be that Oracle flattens the query\n>> somehow; though if you're using black-box functions in both cases,\n>> it's not obvious where the optimizer could get any purchase that way.\n>>\n> I was looking through the latest spec drafts I have access to and\n> couldn't find any reference to Pg's optimisation-fence-for-CTEs\n> behaviour being required by the standard, though I've repeatedly seen it\n> said that there is such a requirement.\n>\n> Do you know where it's specified?\n>\n> All I can see is that the optimised result must have the same effect as\n> the original. That'd mean that wCTEs and CTE terms that use VOLATILE\n> functions or functions with side-effects couldn't be optimised into\n> other queries. Simple CTEs could be, though, and there are times I've\n> really wished I could use a CTE but I've had to use a set-returning\n> subquery to get reasonable plans.\n\n\nIt cuts both ways. I have used CTEs a LOT precisely because this \nbehaviour lets me get better plans. Without that I'll be back to using \nthe \"offset 0\" hack.\n\ncheers\n\nandrew\n\n", "msg_date": "Wed, 14 Nov 2012 20:46:37 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n> It cuts both ways. I have used CTEs a LOT precisely because this behaviour\n> lets me get better plans. Without that I'll be back to using the \"offset 0\"\n> hack.\n\nIs the \"OFFSET 0\" hack really so bad? 
We've been telling people to do\nthat for years, so it's already something that we've effectively\ncommitted to.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n", "msg_date": "Thu, 15 Nov 2012 02:03:24 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On 15/11/12 15:03, Peter Geoghegan wrote:\n> On 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n>> It cuts both ways. I have used CTEs a LOT precisely because this behaviour\n>> lets me get better plans. Without that I'll be back to using the \"offset 0\"\n>> hack.\n> Is the \"OFFSET 0\" hack really so bad? We've been telling people to do\n> that for years, so it's already something that we've effectively\n> committed to.\n>\nHow about adding the keywords FENCED and NOT FENCED to the SQL \ndefinition of CTE's - with FENCED being the default?\n\n\nCheers,\nGavin\n\n\n", "msg_date": "Fri, 16 Nov 2012 07:26:17 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On Wed, Nov 14, 2012 at 8:03 PM, Peter Geoghegan <[email protected]> wrote:\n> On 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n>> It cuts both ways. I have used CTEs a LOT precisely because this behaviour\n>> lets me get better plans. Without that I'll be back to using the \"offset 0\"\n>> hack.\n>\n> Is the \"OFFSET 0\" hack really so bad? We've been telling people to do\n> that for years, so it's already something that we've effectively\n> committed to.\n\nIMSNHO, 'OFFSET 0' is completely unreadable black magic. I agree with\nAndrew: CTEs allow for manual composition of queries and can be the\nbest tool when the planner is outsmarting itself. In the old days,\nwe'd extract data to a temp table and join against that: CTE are\nessentially a formalization of that technique. I like things the way\nthey are; if CTE are hurting your plan, that's an indication you're\nusing them inappropriately.\n\nmerlin\n\n", "msg_date": "Tue, 20 Nov 2012 13:22:50 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 4:22 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Nov 14, 2012 at 8:03 PM, Peter Geoghegan <[email protected]> wrote:\n>> On 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n>>> It cuts both ways. I have used CTEs a LOT precisely because this behaviour\n>>> lets me get better plans. Without that I'll be back to using the \"offset 0\"\n>>> hack.\n>>\n>> Is the \"OFFSET 0\" hack really so bad? We've been telling people to do\n>> that for years, so it's already something that we've effectively\n>> committed to.\n>\n> IMSNHO, 'OFFSET 0' is completely unreadable black magic. I agree with\n> Andrew: CTEs allow for manual composition of queries and can be the\n> best tool when the planner is outsmarting itself. In the old days,\n> we'd extract data to a temp table and join against that: CTE are\n> essentially a formalization of that technique. 
I like things the way\n> they are; if CTE are hurting your plan, that's an indication you're\n> using them inappropriately.\n\nI agree, **BUT**, I cannot imagine how pushing constraints to the CTE\n(under adequate conditions) could be anything but beneficial.\n\nIt *could* just be a lack of imagination on my part. But if it were\nnot, then it'd be nice for it to be done automatically (since this\nparticular CTE behavior bites enough people already).\n\n", "msg_date": "Tue, 20 Nov 2012 16:26:09 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "My perspective on this is that CTEs *should* be just like creating a\ntemporary table and then joining to it, but without the\nmaterialization costs. In that respect, they seem like they should be\nlike nifty VIEWs. If I wanted the behavior of materialization and then\njoin, I'd do that explicitly with temporary tables, but using CTEs as\nan explicit optimization barrier feels like the explaining away\nsurprising behavior.\n\nAs can be seen by the current conversation, not everyone is convinced\nthat CTEs ought to be an explicit optimization barrier, and setting\nthat behavior as somehow desirable or explicit (rather than merely an\nimplementation detail) feels shortsighted to me. I would be delighted\nto find that in some future version of PostgreSQL, but if that is not\nto be, at the very least, the verbiage surrounding CTEs might want to\ninclude (perhaps prominently) something along the lines of \"CTEs are\ncurrently an optimization barrier, but this is an implementation\ndetail and may change in future versions\". Perhaps even including a\nsmall blurb about what an optimization barrier even means (my\nunderstanding is that it merely forces materialization of that part of\nthe query).\n\nThat's just my perspective, coming at the use of CTEs not as a\nPostgreSQL developer, but as somebody who learned about CTEs and\nstarted using them - only to discover surprising behavior.\n\nOn Tue, Nov 20, 2012 at 1:22 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Nov 14, 2012 at 8:03 PM, Peter Geoghegan <[email protected]> wrote:\n>> On 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n>>> It cuts both ways. I have used CTEs a LOT precisely because this behaviour\n>>> lets me get better plans. Without that I'll be back to using the \"offset 0\"\n>>> hack.\n>>\n>> Is the \"OFFSET 0\" hack really so bad? We've been telling people to do\n>> that for years, so it's already something that we've effectively\n>> committed to.\n>\n> IMSNHO, 'OFFSET 0' is completely unreadable black magic. I agree with\n> Andrew: CTEs allow for manual composition of queries and can be the\n> best tool when the planner is outsmarting itself. In the old days,\n> we'd extract data to a temp table and join against that: CTE are\n> essentially a formalization of that technique. 
I like things the way\n> they are; if CTE are hurting your plan, that's an indication you're\n> using them inappropriately.\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nJon\n\n", "msg_date": "Tue, 20 Nov 2012 13:53:30 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 1:53 PM, Jon Nelson <[email protected]> wrote:\n> As can be seen by the current conversation, not everyone is convinced\nthat CTEs ought to be an explicit optimization barrier\n\nOn Tue, Nov 20, 2012 at 1:26 PM, Claudio Freire <[email protected]> wrote:\n> It *could* just be a lack of imagination on my part. But if it were\n> not, then it'd be nice for it to be done automatically (since this\n> particular CTE behavior bites enough people already).\n\nSure. I just find it personally hard to find a good demarcation line\nbetween A: \"queries where pushing quals through are universally\nbeneficial and wanted\" and B: \"queries where we are inserting an\nexplicit materialization step to avoid planner issues\", particularly\nwhere there is substantial overlap with between A and C: \"queries that\nare written with a CTE and arguably shouldn't be\".\n\nPut another way, I find CTE to express: 'this then that' where joins\nexpress 'this with that'. So current behavior is not surprising at\nall. All that said, there could be a narrow class of low hanging cases\n(such as the OP's) that could be sniped...I'm just skeptical.\n\nmerlin\n\n", "msg_date": "Tue, 20 Nov 2012 14:24:01 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 5:24 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Nov 20, 2012 at 1:53 PM, Jon Nelson <[email protected]> wrote:\n>> As can be seen by the current conversation, not everyone is convinced\n> that CTEs ought to be an explicit optimization barrier\n>\n> On Tue, Nov 20, 2012 at 1:26 PM, Claudio Freire <[email protected]> wrote:\n>> It *could* just be a lack of imagination on my part. But if it were\n>> not, then it'd be nice for it to be done automatically (since this\n>> particular CTE behavior bites enough people already).\n>\n> Sure. I just find it personally hard to find a good demarcation line\n> between A: \"queries where pushing quals through are universally\n> beneficial and wanted\" and B: \"queries where we are inserting an\n> explicit materialization step to avoid planner issues\", particularly\n> where there is substantial overlap with between A and C: \"queries that\n> are written with a CTE and arguably shouldn't be\".\n>\n> Put another way, I find CTE to express: 'this then that' where joins\n> express 'this with that'. So current behavior is not surprising at\n> all. 
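The manual technique CTEs formalize looks roughly like this (schematic, with made-up names): materialize the "this" step, give the planner statistics on it, then join it in the "that" step.

    CREATE TEMP TABLE charges_tmp AS
        SELECT id, charge_name FROM ... ;   -- the "this" step
    ANALYZE charges_tmp;
    SELECT m.*
    FROM main_table m
    JOIN charges_tmp c ON c.id = m.id;      -- the "that" step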
All that said, there could be a narrow class of low hanging cases\n> (such as the OP's) that could be sniped...I'm just skeptical.\n\nIt could work very well towards CTE-including views, where the quals\ncannot be added in the view but would be present when the view is\nexpanded in final queries.\n\n", "msg_date": "Tue, 20 Nov 2012 17:28:40 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "Jon Nelson <[email protected]> writes:\n> ... Perhaps even including a\n> small blurb about what an optimization barrier even means (my\n> understanding is that it merely forces materialization of that part of\n> the query).\n\nFWIW, it has nothing to do with materialization; it means that we don't\npush conditions down into that subquery, nor pull subexpressions up out\nof it, nor rearrange join order across the subquery boundary. In short\nthe subquery is planned separately from the outer query. But it could\nthen be run by the executor in the usual tuple-at-a-time fashion,\nwithout materializing the whole subquery result.\n\nIt is true that CTEScan nodes materialize the subquery output (ie copy\nit into a tuplestore), but that's to support multiple CTEScans reading\nthe same CTE. One of the optimizations we *should* put in place\nsometime is skipping the tuplestore if there's only one CTEScan on the\nCTE.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 20 Nov 2012 16:26:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On 11/21/2012 03:53 AM, Jon Nelson wrote:\n> My perspective on this is that CTEs *should* be just like creating a\n> temporary table and then joining to it, but without the\n> materialization costs. In that respect, they seem like they should be\n> like nifty VIEWs. If I wanted the behavior of materialization and then\n> join, I'd do that explicitly with temporary tables, but using CTEs as\n> an explicit optimization barrier feels like the explaining away\n> surprising behavior.\nI agree, especially since that barrier isn't specified as standard, so\nwe're using a standard feature with a subtle quirk as a\ndatabase-specific optimisation trick. A hint, as it were, like OFFSET 0.\n\n*(Dons asbestos underwear an dives for cover)*\n\nMy big problem with the status quo is that it breaks queries from other\ndatabases, like MS SQL server, where CTEs are optimised. I see this\nperiodically on Stack Overflow, with people asking variants of \"Why\ndoes PostgreSQL take 10,000 times longer to execute this query\"? (not a\nliteral quote).\n\nI really want to see this formalized and made explicit with `WITH\ntablename AS MATERIALIZE (SELECT)` or similar.\n\nRight now I often can't use CTEs to clean up hard-to-read queries\nbecause of the optimisation barrier, so I have to create a temporary\nview, temporary table, or use nested subqueries in FROM instead. Ugly.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 21 Nov 2012 07:33:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "I'd also add ANALYZED/NOT ANALYZED. This should force it behave like\n'create table, analyze, select' with statistics used in second query plan.\n\nP.S. defaults can be configurable.\n20 лист. 
2012 02:22, \"Gavin Flower\" <[email protected]> напис.\n\n> On 15/11/12 15:03, Peter Geoghegan wrote:\n>\n>> On 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n>>\n>>> It cuts both ways. I have used CTEs a LOT precisely because this\n>>> behaviour\n>>> lets me get better plans. Without that I'll be back to using the \"offset\n>>> 0\"\n>>> hack.\n>>>\n>> Is the \"OFFSET 0\" hack really so bad? We've been telling people to do\n>> that for years, so it's already something that we've effectively\n>> committed to.\n>>\n>> How about adding the keywords FENCED and NOT FENCED to the SQL\n> definition of CTE's - with FENCED being the default?\n>\n>\n> Cheers,\n> Gavin\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nI'd also add ANALYZED/NOT ANALYZED. This should force it behave like 'create table, analyze, select' with statistics used in second query plan.\nP.S. defaults can be configurable.\n20 лист. 2012 02:22, \"Gavin Flower\" <[email protected]> напис.\nOn 15/11/12 15:03, Peter Geoghegan wrote:\n\nOn 15 November 2012 01:46, Andrew Dunstan <[email protected]> wrote:\n\nIt cuts both ways. I have used CTEs a LOT precisely because this behaviour\nlets me get better plans. Without that I'll be back to using the \"offset 0\"\nhack.\n\nIs the \"OFFSET 0\" hack really so bad? We've been telling people to do\nthat for years, so it's already something that we've effectively\ncommitted to.\n\n\nHow about adding the keywords FENCED and NOT FENCED to the SQL definition of CTE's - with FENCED being the default?\n\n\nCheers,\nGavin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 22 Nov 2012 12:37:21 -0500", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" }, { "msg_contents": "On Tue, Nov 20, 2012 at 02:24:01PM -0600, Merlin Moncure wrote:\n> On Tue, Nov 20, 2012 at 1:53 PM, Jon Nelson <[email protected]> wrote:\n> > As can be seen by the current conversation, not everyone is convinced\n> that CTEs ought to be an explicit optimization barrier\n> \n> On Tue, Nov 20, 2012 at 1:26 PM, Claudio Freire <[email protected]> wrote:\n> > It *could* just be a lack of imagination on my part. But if it were\n> > not, then it'd be nice for it to be done automatically (since this\n> > particular CTE behavior bites enough people already).\n> \n> Sure. I just find it personally hard to find a good demarcation line\n> between A: \"queries where pushing quals through are universally\n> beneficial and wanted\" and B: \"queries where we are inserting an\n> explicit materialization step to avoid planner issues\", particularly\n> where there is substantial overlap with between A and C: \"queries that\n> are written with a CTE and arguably shouldn't be\".\n> \n> Put another way, I find CTE to express: 'this then that' where joins\n> express 'this with that'. So current behavior is not surprising at\n> all. 
All that said, there could be a narrow class of low hanging cases\n> (such as the OP's) that could be sniped...I'm just skeptical.\n\nIs thi\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n", "msg_date": "Fri, 23 Nov 2012 19:21:49 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SOLVED - RE: Poor performance using CTE" } ]
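A minimal sketch of the optimization fence being debated in the thread above, using a hypothetical events table (the table, column, and index names are illustrative only and do not come from any message here). In the releases under discussion, the WITH query is planned on its own, so the outer predicate is not pushed down into it, while the equivalent plain subquery can be flattened and use an index. For readers on newer releases: PostgreSQL 12 and later let the query author choose the behavior explicitly with WITH ... AS [NOT] MATERIALIZED.

    -- Illustrative schema only (not from the thread):
    -- CREATE TABLE events (id bigserial PRIMARY KEY, ts timestamptz, payload text);
    -- CREATE INDEX events_ts_idx ON events (ts);

    -- CTE form: the WITH query acts as an optimization fence here, so the ts
    -- filter is applied only after the whole CTE result has been produced.
    WITH recent AS (
        SELECT id, ts, payload
        FROM events
    )
    SELECT *
    FROM recent
    WHERE ts >= now() - interval '1 day';

    -- Subquery form: the planner may pull the subquery up into the outer query
    -- and push the ts predicate down to an index scan on events_ts_idx.
    SELECT *
    FROM (SELECT id, ts, payload FROM events) AS recent
    WHERE ts >= now() - interval '1 day';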
[ { "msg_contents": "Hi,\n\ni am running some tests to check performance between postgresql and mysql.\n\none important issue is PQconnectdb (PQconnectStart/PQconnectPoll) against mysql_init/mysql_real_connect functions. (debian platform/C application).\n\nPQconnectdb(\"host=localhost dbname=my_db user=my_user password='' sslmode=disable\"); \n\nco = mysql_init(NULL)\nmysql_real_connect(co, \"127.0.0.1\", \"my_user\", \"\", \"my_db\", 0, NULL, 0)\n\nPQconnectdb is taking too long comparing to mysql and i found out the time is consumed by PQconnectPoll waiting for the socket to be ready for reading/writing\nbut this behaviour is not seen in mysql.\n\nI cannot use persistent connections. I must open/close a connection anytime I want to insert something new.\n\ndo i have to configure something different? Am i missing something?\n\nthis problem gets even worse under PHP.\n\nRegards,\nHi,i am running some tests to check performance between postgresql and mysql.one important issue is PQconnectdb (PQconnectStart/PQconnectPoll) against mysql_init/mysql_real_connect functions. (debian platform/C application).PQconnectdb(\"host=localhost dbname=my_db user=my_user password='' sslmode=disable\"); co = mysql_init(NULL)mysql_real_connect(co, \"127.0.0.1\", \"my_user\", \"\", \"my_db\", 0, NULL, 0)PQconnectdb is taking too long comparing to mysql and i found out the time is consumed by PQconnectPoll waiting for the socket to be ready for reading/writingbut this behaviour is not seen in mysql.I cannot use persistent connections. I must open/close a connection anytime I want to insert something\n new.do i have to configure something different? Am i missing something?this problem gets even worse under PHP.Regards,", "msg_date": "Thu, 15 Nov 2012 01:02:57 -0800 (PST)", "msg_from": "Sergio Mayoral <[email protected]>", "msg_from_op": true, "msg_subject": "PQconnectStart/PQconnectPoll" }, { "msg_contents": "On 11/15/2012 05:02 PM, Sergio Mayoral wrote:\n>\n> PQconnectdb is taking too long comparing to mysql and i found out the\n> time is consumed by PQconnectPoll waiting for the socket to be ready\n> for reading/writing\nWhat's \"too long\"?\n>\n> I cannot use persistent connections. I must open/close a connection\n> anytime I want to insert something new.\nIf you mean that you intend to open a new connection, do a single\nINSERT, and close the connection - your performance will be awful.\n\nIf your app can't use persistent or pooled connections, use PgBouncer as\nan external connection pool. Have your app connect to PgBouncer, and\nPgBouncer connect to PostgreSQL.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 11/15/2012 05:02 PM, Sergio Mayoral\n wrote:\n\n\n\n\n\n\nPQconnectdb\n is taking too long comparing to mysql and i found out the\n time is consumed by PQconnectPoll waiting for the socket\n to be ready for reading/writing\n\n\n\nWhat's \"too\n long\"?\n\n\n\n\n\nI\n cannot use persistent connections. I must open/close a\n connection anytime I want to insert something new.\n\n\n\nIf you mean\n that you intend to open a new connection, do a single INSERT, and\n close the connection - your performance will be awful.\n\n If your app can't use persistent or pooled connections, use\n PgBouncer as an external connection pool. 
Have your app connect to\n PgBouncer, and PgBouncer connect to PostgreSQL.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 22 Nov 2012 18:34:56 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PQconnectStart/PQconnectPoll" } ]
[ { "msg_contents": "Can someone shed some light on the following query.....\nany help would certainly be appreciated!\n\nthanks -\n\n*****\nMaria Wilson\nNasa/Langley Research Center\nHampton, Virginia\[email protected]\n*****\n\nexplain analyze\nselect a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp, \na.AIPGUID, a.submissionGUID, a.parentSubmissionGUID, a.sizeArchived, \na.addedContentString,\na.addedContentSizesString, a.removedContentString, \na.removedContentSizesString, a.modifiedContentString, \na.modifiedContentSizesString, a.DISCRIMINATOR\n from AIPModificationEvent a\n where a.ID in (select MAX(b.ID) from AIPModificationEvent b where \nb.parentSubmissionGUID\n in\n (select c.GUID from WorkflowProcessingEvent c where \nc.DISCRIMINATOR='WorkflowCompleted'\n and c.eventTimeStamp >= '2012-11-10 00:00:00' and \nc.eventTimeStamp < '2012-11-11 00:00:00')\n or b.submissionGUID in\n (select c.GUID from WorkflowProcessingEvent c\n where c.DISCRIMINATOR='WorkflowCompleted' and \nc.eventTimeStamp >= '2012-11-10 00:00:00' and c.eventTimeStamp < \n'2012-11-11 00:00:00')\n group by b.AIPGUID)\nlimit 1000 offset 3000\n\n\n\"Limit (cost=5325840.21..5325840.21 rows=1 width=268) (actual \ntime=20418.800..20422.577 rows=1000 loops=1)\"\n\" -> Nested Loop (cost=5323597.90..5325840.21 rows=200 width=268) \n(actual time=20406.888..20422.265 rows=4000 loops=1)\"\n\" -> HashAggregate (cost=5323597.90..5323599.90 rows=200 \nwidth=8) (actual time=20406.867..20407.927 rows=4000 loops=1)\"\n\" -> GroupAggregate (cost=4701622.10..5090733.69 \nrows=18629137 width=44) (actual time=20359.752..20389.387 rows=58552 \nloops=1)\"\n\" -> Sort (cost=4701622.10..4753704.56 \nrows=20832984 width=44) (actual time=20359.746..20367.125 rows=59325 \nloops=1)\"\n\" Sort Key: b.aipguid\"\n\" Sort Method: quicksort Memory: 6171kB\"\n\" -> Seq Scan on aipmodificationevent b \n(cost=23.24..1528265.92 rows=20832984 width=44) (actual \ntime=1647.075..20188.844 rows=59325 loops=1)\"\n\" Filter: ((hashed SubPlan 1) OR (hashed \nSubPlan 2))\"\n\" SubPlan 1\"\n\" -> Index Scan using \nwk_eventtimestamp_idx1 on workflowprocessingevent c (cost=0.00..11.62 \nrows=1 width=37) (actual time=0.053..40.741 rows=35945 loops=1)\"\n\" Index Cond: ((eventtimestamp >= \n'2012-11-10 00:00:00'::timestamp without time zone) AND (eventtimestamp \n< '2012-11-11 00:00:00'::timestamp without time zone))\"\n\" Filter: ((discriminator)::text \n= 'WorkflowCompleted'::text)\"\n\" SubPlan 2\"\n\" -> Index Scan using \nwk_eventtimestamp_idx1 on workflowprocessingevent c (cost=0.00..11.62 \nrows=1 width=37) (actual time=0.035..31.820 rows=35945 loops=1)\"\n\" Index Cond: ((eventtimestamp >= \n'2012-11-10 00:00:00'::timestamp without time zone) AND (eventtimestamp \n< '2012-11-11 00:00:00'::timestamp without time zone))\"\n\" Filter: ((discriminator)::text \n= 'WorkflowCompleted'::text)\"\n\" -> Index Scan using aipmodificationevent_pkey on \naipmodificationevent a (cost=0.00..11.19 rows=1 width=268) (actual \ntime=0.003..0.003 rows=1 loops=4000)\"\n\" Index Cond: (a.id = (max(b.id)))\"\n\"Total runtime: 20422.761 ms\"\n\n\n", "msg_date": "Thu, 15 Nov 2012 10:03:10 -0500", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "help on slow query using postgres 8.4" } ]
[ { "msg_contents": "Can someone shed some light on the following query.....\nany help would certainly be appreciated!\n\nthanks -\n\n*****\nMaria Wilson\nNasa/Langley Research Center\nHampton, Virginia\[email protected]\n*****\n\nexplain analyze\nselect a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp, \na.AIPGUID, a.submissionGUID, a.parentSubmissionGUID, a.sizeArchived, \na.addedContentString,\na.addedContentSizesString, a.removedContentString, \na.removedContentSizesString, a.modifiedContentString, \na.modifiedContentSizesString, a.DISCRIMINATOR\n from AIPModificationEvent a\n where a.ID in (select MAX(b.ID) from AIPModificationEvent b where \nb.parentSubmissionGUID\n in\n (select c.GUID from WorkflowProcessingEvent c where \nc.DISCRIMINATOR='WorkflowCompleted'\n and c.eventTimeStamp >= '2012-11-10 00:00:00' and \nc.eventTimeStamp < '2012-11-11 00:00:00')\n or b.submissionGUID in\n (select c.GUID from WorkflowProcessingEvent c\n where c.DISCRIMINATOR='WorkflowCompleted' and \nc.eventTimeStamp >= '2012-11-10 00:00:00' and c.eventTimeStamp < \n'2012-11-11 00:00:00')\n group by b.AIPGUID)\nlimit 1000 offset 3000\n\n\n\"Limit (cost=5325840.21..5325840.21 rows=1 width=268) (actual \ntime=20418.800..20422.577 rows=1000 loops=1)\"\n\" -> Nested Loop (cost=5323597.90..5325840.21 rows=200 width=268) \n(actual time=20406.888..20422.265 rows=4000 loops=1)\"\n\" -> HashAggregate (cost=5323597.90..5323599.90 rows=200 \nwidth=8) (actual time=20406.867..20407.927 rows=4000 loops=1)\"\n\" -> GroupAggregate (cost=4701622.10..5090733.69 \nrows=18629137 width=44) (actual time=20359.752..20389.387 rows=58552 \nloops=1)\"\n\" -> Sort (cost=4701622.10..4753704.56 \nrows=20832984 width=44) (actual time=20359.746..20367.125 rows=59325 \nloops=1)\"\n\" Sort Key: b.aipguid\"\n\" Sort Method: quicksort Memory: 6171kB\"\n\" -> Seq Scan on aipmodificationevent b \n(cost=23.24..1528265.92 rows=20832984 width=44) (actual \ntime=1647.075..20188.844 rows=59325 loops=1)\"\n\" Filter: ((hashed SubPlan 1) OR (hashed \nSubPlan 2))\"\n\" SubPlan 1\"\n\" -> Index Scan using \nwk_eventtimestamp_idx1 on workflowprocessingevent c (cost=0.00..11.62 \nrows=1 width=37) (actual time=0.053..40.741 rows=35945 loops=1)\"\n\" Index Cond: ((eventtimestamp >= \n'2012-11-10 00:00:00'::timestamp without time zone) AND (eventtimestamp \n< '2012-11-11 00:00:00'::timestamp without time zone))\"\n\" Filter: ((discriminator)::text \n= 'WorkflowCompleted'::text)\"\n\" SubPlan 2\"\n\" -> Index Scan using \nwk_eventtimestamp_idx1 on workflowprocessingevent c (cost=0.00..11.62 \nrows=1 width=37) (actual time=0.035..31.820 rows=35945 loops=1)\"\n\" Index Cond: ((eventtimestamp >= \n'2012-11-10 00:00:00'::timestamp without time zone) AND (eventtimestamp \n< '2012-11-11 00:00:00'::timestamp without time zone))\"\n\" Filter: ((discriminator)::text \n= 'WorkflowCompleted'::text)\"\n\" -> Index Scan using aipmodificationevent_pkey on \naipmodificationevent a (cost=0.00..11.19 rows=1 width=268) (actual \ntime=0.003..0.003 rows=1 loops=4000)\"\n\" Index Cond: (a.id = (max(b.id)))\"\n\"Total runtime: 20422.761 ms\"\n\n", "msg_date": "Thu, 15 Nov 2012 10:59:16 -0500", "msg_from": "\"Maria L. Wilson\" <[email protected]>", "msg_from_op": true, "msg_subject": "slow query on postgres 8.4" } ]
[ { "msg_contents": "Hi\n\nI tried to run quite simple query. For some reason query took lots of\nmemory, more than 6GB.\nSystem start swapping, so I canceled it after 4 minutes. There were no\nother queries in same time.\n\nIf I I understood my config correctly that is more than it should be. Is it\nbug or is there some other explanation?\n\nquery:\n\nSELECT name, artist_count, aid INTO res FROM ac\nEXCEPT\nSELECT name, artist_count, aid FROM artist_credit;\n\nExplain gives following:\n\nHashSetOp Except (cost=0.00..297100.69 rows=594044 width=30)\n -> Append (cost=0.00..234950.32 rows=8286716 width=30)\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..168074.62\nrows=5940431 width=29)\n -> Seq Scan on ac (cost=0.00..108670.31 rows=5940431\nwidth=29)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..66875.70\nrows=2346285 width=32)\n -> Seq Scan on artist_credit (cost=0.00..43412.85\nrows=2346285 width=32)\n\nPostgreSQL version: \"PostgreSQL 9.2.1, compiled by Visual C++ build 1600,\n64-bit\"\nOS: Windows 7 (x64)\n\nMemory config:\neffective_cache_size=2048MB\nshared_buffers=1024MB\nwork_mem=64MB\nmaintenance_work_mem=256MB\n\nP.S. I got result witch I was after by changing query to use left join and\nisnull comparison.\nThat query took little more than 500MB memory and execution took 41 seconds.\n\nYours,\nAntti Jokipii\n\nHiI tried to run quite simple query. For some reason query took lots of memory, more than 6GB.System start swapping, so I canceled it after 4 minutes. There were no other queries in same time.If I I understood my config correctly that is more than it should be. Is it bug or is there some other explanation?\nquery:SELECT name, artist_count, aid INTO res FROM acEXCEPTSELECT name, artist_count, aid FROM artist_credit;Explain gives following:HashSetOp Except  (cost=0.00..297100.69 rows=594044 width=30)\n  ->  Append  (cost=0.00..234950.32 rows=8286716 width=30)        ->  Subquery Scan on \"*SELECT* 1\"  (cost=0.00..168074.62 rows=5940431 width=29)              ->  Seq Scan on ac  (cost=0.00..108670.31 rows=5940431 width=29)\n        ->  Subquery Scan on \"*SELECT* 2\"  (cost=0.00..66875.70 rows=2346285 width=32)              ->  Seq Scan on artist_credit  (cost=0.00..43412.85 rows=2346285 width=32)PostgreSQL version: \"PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\"\nOS: Windows 7 (x64)Memory config:effective_cache_size=2048MBshared_buffers=1024MBwork_mem=64MBmaintenance_work_mem=256MBP.S. I got result witch I was after by changing query to use left join and isnull comparison.\nThat query took little more than 500MB memory and execution took 41 seconds.Yours,Antti Jokipii", "msg_date": "Thu, 15 Nov 2012 21:20:07 +0200", "msg_from": "Antti Jokipii <[email protected]>", "msg_from_op": true, "msg_subject": "Query that uses lots of memory in PostgreSQL 9.2.1 in Windows 7" }, { "msg_contents": "Hello\n\nHashSetOp is memory expensive operation, and should be problematic\nwhen statistic estimation is bad.\n\nTry to rewritre this query to JOIN\n\nRegards\n\nPavel Stehule\n\n2012/11/15 Antti Jokipii <[email protected]>:\n> Hi\n>\n> I tried to run quite simple query. For some reason query took lots of\n> memory, more than 6GB.\n> System start swapping, so I canceled it after 4 minutes. There were no other\n> queries in same time.\n>\n> If I I understood my config correctly that is more than it should be. 
Is it\n> bug or is there some other explanation?\n>\n> query:\n>\n> SELECT name, artist_count, aid INTO res FROM ac\n> EXCEPT\n> SELECT name, artist_count, aid FROM artist_credit;\n>\n> Explain gives following:\n>\n> HashSetOp Except (cost=0.00..297100.69 rows=594044 width=30)\n> -> Append (cost=0.00..234950.32 rows=8286716 width=30)\n> -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..168074.62\n> rows=5940431 width=29)\n> -> Seq Scan on ac (cost=0.00..108670.31 rows=5940431\n> width=29)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..66875.70 rows=2346285\n> width=32)\n> -> Seq Scan on artist_credit (cost=0.00..43412.85\n> rows=2346285 width=32)\n>\n> PostgreSQL version: \"PostgreSQL 9.2.1, compiled by Visual C++ build 1600,\n> 64-bit\"\n> OS: Windows 7 (x64)\n>\n> Memory config:\n> effective_cache_size=2048MB\n> shared_buffers=1024MB\n> work_mem=64MB\n> maintenance_work_mem=256MB\n>\n> P.S. I got result witch I was after by changing query to use left join and\n> isnull comparison.\n> That query took little more than 500MB memory and execution took 41 seconds.\n>\n> Yours,\n> Antti Jokipii\n\n", "msg_date": "Tue, 20 Nov 2012 08:27:36 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query that uses lots of memory in PostgreSQL 9.2.1 in\n Windows 7" }, { "msg_contents": "On Tue, Nov 20, 2012 at 1:27 AM, Pavel Stehule <[email protected]> wrote:\n> Hello\n>\n> HashSetOp is memory expensive operation, and should be problematic\n> when statistic estimation is bad.\n>\n> Try to rewritre this query to JOIN\n\nor, 'WHERE NOT EXISTS'. if 41 seconds seems like it's too long, go\nahead and post that plan and maybe that can be optimized.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 17:37:34 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query that uses lots of memory in PostgreSQL 9.2.1 in\n Windows 7" } ]
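A sketch of the NOT EXISTS rewrite suggested just above, using the column names from the original query. Two caveats: EXCEPT compares NULLs as equal, so if any of these columns are nullable the comparisons below would need IS NOT DISTINCT FROM to match its semantics, and EXCEPT also de-duplicates the left-hand rows, which this anti-join form does not.

    -- Anti-join form of the EXCEPT query from this thread (sketch only).
    SELECT ac.name, ac.artist_count, ac.aid
    INTO res
    FROM ac
    WHERE NOT EXISTS (
        SELECT 1
        FROM artist_credit a
        WHERE a.name = ac.name
          AND a.artist_count = ac.artist_count
          AND a.aid = ac.aid
    );

Unlike the hashed set operation, a hash anti-join batches to temporary files when it exceeds work_mem, so its memory use stays bounded even when the row estimates are off.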
[ { "msg_contents": "I have database with few hundred millions of rows. I'm running the following query:\nselect * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.Isn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?Roles table contains only 15 row, while Payments contains ~350 million.I'm running PostgreSQL 9.2.1.BTW, this same query on the same schema/data takes 0.024ms to complete on MS SQL Server.\nHere'e explain analyse results: http://explain.depesz.com/s/7e7\nAnd here's server configuration:version PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\nclient_encoding UNICODE\neffective_cache_size 4500MB\nfsync on\nlc_collate English_United States.1252\nlc_ctype English_United States.1252\nlisten_addresses *\nlog_destination stderr\nlog_line_prefix %t \nlogging_collector on\nmax_connections 100\nmax_stack_depth 2MB\nport 5432\nsearch_path dbo, \"$user\", public\nserver_encoding UTF8\nshared_buffers 1500MB\nTimeZone Asia/Tbilisi\nwal_buffers 16MB\nwork_mem 10MBI'm running postgresql on a i5 cpu (4 core, 3.3 GHz), 8 GB of RAM and Crucial m4 SSD 128GB\nOriginal question source http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long#comment18330095_13407555\nThank you very much. \t\t \t \t\t \n\n\n\nI have database with few hundred millions of rows. I'm running the following query:select * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. 
I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.Isn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?Roles table contains only 15 row, while Payments contains ~350 million.I'm running PostgreSQL 9.2.1.BTW, this same query on the same schema/data takes 0.024ms to complete on MS SQL Server.Here'e explain analyse results: http://explain.depesz.com/s/7e7And here's server configuration:version PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\nclient_encoding UNICODE\neffective_cache_size 4500MB\nfsync on\nlc_collate English_United States.1252\nlc_ctype English_United States.1252\nlisten_addresses *\nlog_destination stderr\nlog_line_prefix %t \nlogging_collector on\nmax_connections 100\nmax_stack_depth 2MB\nport 5432\nsearch_path dbo, \"$user\", public\nserver_encoding UTF8\nshared_buffers 1500MB\nTimeZone Asia/Tbilisi\nwal_buffers 16MB\nwork_mem 10MBI'm running postgresql on a i5 cpu (4 core, 3.3 GHz), 8 GB of RAM and Crucial m4 SSD 128GBOriginal question source http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long#comment18330095_13407555Thank you very much.", "msg_date": "Fri, 16 Nov 2012 15:32:14 +0400", "msg_from": "David Popiashvili <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL strange query plan for my query" }, { "msg_contents": "I have database with few hundred millions of rows. I'm running the following query:\n\nselect * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.Isn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?Roles table contains only 15 row, while Payments contains ~350 million.I'm running PostgreSQL 9.2.1.BTW, this same query on the same schema/data takes 0.024ms to complete on MS SQL Server.\nHere'e explain analyse results: http://explain.depesz.com/s/7e7\nAnd here's server configuration:version PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\nclient_encoding UNICODE\neffective_cache_size 4500MB\nfsync on\nlc_collate English_United States.1252\nlc_ctype English_United States.1252\nlisten_addresses *\nlog_destination stderr\nlog_line_prefix %t \nlogging_collector on\nmax_connections 100\nmax_stack_depth 2MB\nport 5432\nsearch_path dbo, \"$user\", public\nserver_encoding UTF8\nshared_buffers 1500MB\nTimeZone Asia/Tbilisi\nwal_buffers 16MB\nwork_mem 10MBI'm running postgresql on a i5 cpu (4 core, 3.3 GHz), 8 GB of RAM and Crucial m4 SSD 128GB\nOriginal question source http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long#comment18330095_13407555\nThank you very much. \t\t \t \t\t \t\t \t \t\t \n\n\n\nI have database with few hundred millions of rows. 
I'm running the following query:select * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.Isn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?Roles table contains only 15 row, while Payments contains ~350 million.I'm running PostgreSQL 9.2.1.BTW, this same query on the same schema/data takes 0.024ms to complete on MS SQL Server.Here'e explain analyse results: http://explain.depesz.com/s/7e7And here's server configuration:version PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\nclient_encoding UNICODE\neffective_cache_size 4500MB\nfsync on\nlc_collate English_United States.1252\nlc_ctype English_United States.1252\nlisten_addresses *\nlog_destination stderr\nlog_line_prefix %t \nlogging_collector on\nmax_connections 100\nmax_stack_depth 2MB\nport 5432\nsearch_path dbo, \"$user\", public\nserver_encoding UTF8\nshared_buffers 1500MB\nTimeZone Asia/Tbilisi\nwal_buffers 16MB\nwork_mem 10MBI'm running postgresql on a i5 cpu (4 core, 3.3 GHz), 8 GB of RAM and Crucial m4 SSD 128GBOriginal question source http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long#comment18330095_13407555Thank you very much.", "msg_date": "Fri, 16 Nov 2012 15:40:52 +0400", "msg_from": "David Popiashvili <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL strange query plan for my query" }, { "msg_contents": "David Popiashvili wrote:\n> I have database with few hundred millions of rows. I'm running the\nfollowing query:\n> \n> select * from \"Payments\" as p\n> inner join \"PaymentOrders\" as po\n> on po.\"Id\" = p.\"PaymentOrderId\"\n> inner join \"Users\" as u\n> On u.\"Id\" = po.\"UserId\"\n> INNER JOIN \"Roles\" as r\n> on u.\"RoleId\" = r.\"Id\"\n> Where r.\"Name\" = 'Moses'\n> LIMIT 1000\n> When the where clause finds a match in database, I get the result in\nseveral milliseconds, but if I\n> modify the query and specify a non-existent r.\"Name\" in where clause,\nit takes too much time to\n> complete. I guess that PostgreSQL is doing a sequential scan on the\nPayments table (which contains the\n> most rows), comparing each row one by one.\n> Isn't postgresql smart enough to check first if Roles table contains\nany row with Name 'Moses'?\n> \n> Roles table contains only 15 row, while Payments contains ~350\nmillion.\n> \n> I'm running PostgreSQL 9.2.1.\n\n> Here'e explain analyse results: http://explain.depesz.com/s/7e7\n\nCan you also show the plan for the good case?\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Fri, 16 Nov 2012 13:55:41 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL strange query plan for my query" }, { "msg_contents": "All right, after some discussion on StackOverflow, we found out that incorrect query plan is generated due to the fact that there is a LIMIT keyword in the query. I guess Postgresql expects to find appropriate rows faster and that's why it generates a seq scan on the table. 
If I remove LIMIT 1000 everything is executed in several milliseconds and query plan looks like this:\nHash Join (cost=2662004.85..14948213.44 rows=22661658 width=138) (actual time=0.105..0.105 rows=0 loops=1) Hash Cond: (p.\"PaymentOrderId\" = po.\"Id\") -> Seq Scan on \"Payments\" p (cost=0.00..5724570.00 rows=350000000 width=18) (actual time=0.018..0.018 rows=1 loops=1) -> Hash (cost=2583365.85..2583365.85 rows=2614480 width=120) (actual time=0.046..0.046 rows=0 loops=1) Buckets: 8192 Batches: 64 Memory Usage: 0kB -> Hash Join (cost=904687.05..2583365.85 rows=2614480 width=120) (actual time=0.046..0.046 rows=0 loops=1) Hash Cond: (po.\"UserId\" = u.\"Id\") -> Seq Scan on \"PaymentOrders\" po (cost=0.00..654767.00 rows=40000000 width=24) (actual time=0.003..0.003 rows=1 loops=1) -> Hash (cost=850909.04..850909.04 rows=1980881 width=96) (actual time=0.016..0.016 rows=0 loops=1) Buckets: 8192 Batches: 32 Memory Usage: 0kB -> Hash Join (cost=1.20..850909.04 rows=1980881 width=96) (actual time=0.016..0.016 rows=0 loops=1) Hash Cond: (u.\"RoleId\" = r.\"Id\") -> Seq Scan on \"Users\" u (cost=0.00..718598.20 rows=30000220 width=80) (actual time=0.002..0.002 rows=1 loops=1) -> Hash (cost=1.19..1.19 rows=1 width=16) (actual time=0.009..0.009 rows=0 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 0kB -> Seq Scan on \"Roles\" r (cost=0.00..1.19 rows=1 width=16) (actual time=0.009..0.009 rows=0 loops=1) Filter: ((\"Name\")::text = 'Moses2333'::text) Rows Removed by Filter: 15Total runtime: 0.209 ms\nAccording to Erwin Brandstetter I also tried pushing the query in a subquery and applying LIMIT there:\nSELECT *FROM ( SELECT * FROM \"Roles\" AS r JOIN \"Users\" AS u ON u.\"RoleId\" = r.\"Id\" JOIN \"PaymentOrders\" AS po ON po.\"UserId\" = u.\"Id\" JOIN \"Payments\" AS p ON p.\"PaymentOrderId\" = po.\"Id\" WHERE r.\"Name\" = 'Moses' ) xLIMIT 1000;\nbut this solution also generates incorrect query plan. Any idea how to solve this query without omitting LIMIT keyword?Thanks\n> Subject: RE: [PERFORM] PostgreSQL strange query plan for my query\n> Date: Fri, 16 Nov 2012 13:55:41 +0100\n> From: [email protected]\n> To: [email protected]; [email protected]\n> \n> David Popiashvili wrote:\n> > I have database with few hundred millions of rows. I'm running the\n> following query:\n> > \n> > select * from \"Payments\" as p\n> > inner join \"PaymentOrders\" as po\n> > on po.\"Id\" = p.\"PaymentOrderId\"\n> > inner join \"Users\" as u\n> > On u.\"Id\" = po.\"UserId\"\n> > INNER JOIN \"Roles\" as r\n> > on u.\"RoleId\" = r.\"Id\"\n> > Where r.\"Name\" = 'Moses'\n> > LIMIT 1000\n> > When the where clause finds a match in database, I get the result in\n> several milliseconds, but if I\n> > modify the query and specify a non-existent r.\"Name\" in where clause,\n> it takes too much time to\n> > complete. I guess that PostgreSQL is doing a sequential scan on the\n> Payments table (which contains the\n> > most rows), comparing each row one by one.\n> > Isn't postgresql smart enough to check first if Roles table contains\n> any row with Name 'Moses'?\n> > \n> > Roles table contains only 15 row, while Payments contains ~350\n> million.\n> > \n> > I'm running PostgreSQL 9.2.1.\n> \n> > Here'e explain analyse results: http://explain.depesz.com/s/7e7\n> \n> Can you also show the plan for the good case?\n> \n> Yours,\n> Laurenz Albe\n \t\t \t \t\t \n\n\n\nAll right, after some discussion on StackOverflow, we found out that incorrect query plan is generated due to the fact that there is a LIMIT keyword in the query. 
I guess Postgresql expects to find appropriate rows faster and that's why it generates a seq scan on the table. If I remove LIMIT 1000 everything is executed in several milliseconds and query plan looks like this:Hash Join  (cost=2662004.85..14948213.44 rows=22661658 width=138) (actual time=0.105..0.105 rows=0 loops=1)  Hash Cond: (p.\"PaymentOrderId\" = po.\"Id\")  ->  Seq Scan on \"Payments\" p  (cost=0.00..5724570.00 rows=350000000 width=18) (actual time=0.018..0.018 rows=1 loops=1)  ->  Hash  (cost=2583365.85..2583365.85 rows=2614480 width=120) (actual time=0.046..0.046 rows=0 loops=1)        Buckets: 8192  Batches: 64  Memory Usage: 0kB        ->  Hash Join  (cost=904687.05..2583365.85 rows=2614480 width=120) (actual time=0.046..0.046 rows=0 loops=1)              Hash Cond: (po.\"UserId\" = u.\"Id\")              ->  Seq Scan on \"PaymentOrders\" po  (cost=0.00..654767.00 rows=40000000 width=24) (actual time=0.003..0.003 rows=1 loops=1)              ->  Hash  (cost=850909.04..850909.04 rows=1980881 width=96) (actual time=0.016..0.016 rows=0 loops=1)                    Buckets: 8192  Batches: 32  Memory Usage: 0kB                    ->  Hash Join  (cost=1.20..850909.04 rows=1980881 width=96) (actual time=0.016..0.016 rows=0 loops=1)                          Hash Cond: (u.\"RoleId\" = r.\"Id\")                          ->  Seq Scan on \"Users\" u  (cost=0.00..718598.20 rows=30000220 width=80) (actual time=0.002..0.002 rows=1 loops=1)                          ->  Hash  (cost=1.19..1.19 rows=1 width=16) (actual time=0.009..0.009 rows=0 loops=1)                                Buckets: 1024  Batches: 1  Memory Usage: 0kB                                ->  Seq Scan on \"Roles\" r  (cost=0.00..1.19 rows=1 width=16) (actual time=0.009..0.009 rows=0 loops=1)                                      Filter: ((\"Name\")::text = 'Moses2333'::text)                                      Rows Removed by Filter: 15Total runtime: 0.209 msAccording to Erwin Brandstetter I also tried pushing the query in a subquery and applying LIMIT there:SELECT *FROM  (   SELECT *   FROM   \"Roles\"         AS r     JOIN   \"Users\"         AS u  ON u.\"RoleId\" = r.\"Id\"   JOIN   \"PaymentOrders\" AS po ON po.\"UserId\" = u.\"Id\"   JOIN   \"Payments\"      AS p  ON p.\"PaymentOrderId\" = po.\"Id\"   WHERE  r.\"Name\" = 'Moses'  ) xLIMIT  1000;but this solution also generates incorrect query plan. Any idea how to solve this query without omitting LIMIT keyword?Thanks> Subject: RE: [PERFORM] PostgreSQL strange query plan for my query> Date: Fri, 16 Nov 2012 13:55:41 +0100> From: [email protected]> To: [email protected]; [email protected]> > David Popiashvili wrote:> > I have database with few hundred millions of rows. I'm running the> following query:> > > > select * from \"Payments\" as p> > inner join \"PaymentOrders\" as po> > on po.\"Id\" = p.\"PaymentOrderId\"> > inner join \"Users\" as u> > On u.\"Id\" = po.\"UserId\"> > INNER JOIN \"Roles\" as r> > on u.\"RoleId\" = r.\"Id\"> > Where r.\"Name\" = 'Moses'> > LIMIT 1000> > When the where clause finds a match in database, I get the result in> several milliseconds, but if I> > modify the query and specify a non-existent r.\"Name\" in where clause,> it takes too much time to> > complete. 
I guess that PostgreSQL is doing a sequential scan on the> Payments table (which contains the> > most rows), comparing each row one by one.> > Isn't postgresql smart enough to check first if Roles table contains> any row with Name 'Moses'?> > > > Roles table contains only 15 row, while Payments contains ~350> million.> > > > I'm running PostgreSQL 9.2.1.> > > Here'e explain analyse results: http://explain.depesz.com/s/7e7> > Can you also show the plan for the good case?> > Yours,> Laurenz Albe", "msg_date": "Fri, 16 Nov 2012 17:04:05 +0400", "msg_from": "David Popiashvili <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL strange query plan for my query" }, { "msg_contents": "On 11/16/2012 14:04, David Popiashvili wrote:\n> All right, after some discussion on StackOverflow \n> <http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long/13415984#13415984>, \n> we found out that incorrect query plan is generated due to the fact \n> that there is a LIMIT keyword in the query. I guess Postgresql expects \n> to find appropriate rows faster and that's why it generates a seq scan \n> on the table. If I remove LIMIT 1000 everything is executed in several \n> milliseconds and query plan looks like this:\n>\n> Hash Join (cost=2662004.85..14948213.44 rows=22661658 width=138) \n> (actual time=0.105..0.105 rows=0 loops=1)\n> Hash Cond: (p.\"PaymentOrderId\" = po.\"Id\")\n> -> Seq Scan on \"Payments\" p (cost=0.00..5724570.00 rows=350000000 \n> width=18) (actual time=0.018..0.018 rows=1 loops=1)\n> -> Hash (cost=2583365.85..2583365.85 rows=2614480 width=120) \n> (actual time=0.046..0.046 rows=0 loops=1)\n> Buckets: 8192 Batches: 64 Memory Usage: 0kB\n> -> Hash Join (cost=904687.05..2583365.85 rows=2614480 \n> width=120) (actual time=0.046..0.046 rows=0 loops=1)\n> Hash Cond: (po.\"UserId\" = u.\"Id\")\n> -> Seq Scan on \"PaymentOrders\" po \n> (cost=0.00..654767.00 rows=40000000 width=24) (actual \n> time=0.003..0.003 rows=1 loops=1)\n> -> Hash (cost=850909.04..850909.04 rows=1980881 \n> width=96) (actual time=0.016..0.016 rows=0 loops=1)\n> Buckets: 8192 Batches: 32 Memory Usage: 0kB\n> -> Hash Join (cost=1.20..850909.04 rows=1980881 \n> width=96) (actual time=0.016..0.016 rows=0 loops=1)\n> Hash Cond: (u.\"RoleId\" = r.\"Id\")\n> -> Seq Scan on \"Users\" u \n> (cost=0.00..718598.20 rows=30000220 width=80) (actual \n> time=0.002..0.002 rows=1 loops=1)\n> -> Hash (cost=1.19..1.19 rows=1 width=16) \n> (actual time=0.009..0.009 rows=0 loops=1)\n> Buckets: 1024 Batches: 1 Memory \n> Usage: 0kB\n> -> Seq Scan on \"Roles\" r \n> (cost=0.00..1.19 rows=1 width=16) (actual time=0.009..0.009 rows=0 \n> loops=1)\n> Filter: ((\"Name\")::text = \n> 'Moses2333'::text)\n> Rows Removed by Filter: 15\n> Total runtime: 0.209 ms\n>\n> According to Erwin Brandstetter I also tried pushing the query in a \n> subquery and applying LIMIT there:\n>\n> SELECT *\n> FROM (\n> SELECT *\n> FROM \"Roles\" AS r\n> JOIN \"Users\" AS u ON u.\"RoleId\" = r.\"Id\"\n> JOIN \"PaymentOrders\" AS po ON po.\"UserId\" = u.\"Id\"\n> JOIN \"Payments\" AS p ON p.\"PaymentOrderId\" = po.\"Id\"\n> WHERE r.\"Name\" = 'Moses'\n> ) x\n> LIMIT 1000;\n>\n> but this solution also generates incorrect query plan. 
Any idea how to \n> solve this query without omitting LIMIT keyword?\n> Thanks\n>\n\nmaybe with a CTE ?\n\n> > Subject: RE: [PERFORM] PostgreSQL strange query plan for my query\n> > Date: Fri, 16 Nov 2012 13:55:41 +0100\n> > From: [email protected]\n> > To: [email protected]; [email protected]\n> >\n> > David Popiashvili wrote:\n> > > I have database with few hundred millions of rows. I'm running the\n> > following query:\n> > >\n> > > select * from \"Payments\" as p\n> > > inner join \"PaymentOrders\" as po\n> > > on po.\"Id\" = p.\"PaymentOrderId\"\n> > > inner join \"Users\" as u\n> > > On u.\"Id\" = po.\"UserId\"\n> > > INNER JOIN \"Roles\" as r\n> > > on u.\"RoleId\" = r.\"Id\"\n> > > Where r.\"Name\" = 'Moses'\n> > > LIMIT 1000\n> > > When the where clause finds a match in database, I get the result in\n> > several milliseconds, but if I\n> > > modify the query and specify a non-existent r.\"Name\" in where clause,\n> > it takes too much time to\n> > > complete. I guess that PostgreSQL is doing a sequential scan on the\n> > Payments table (which contains the\n> > > most rows), comparing each row one by one.\n> > > Isn't postgresql smart enough to check first if Roles table contains\n> > any row with Name 'Moses'?\n> > >\n> > > Roles table contains only 15 row, while Payments contains ~350\n> > million.\n> > >\n> > > I'm running PostgreSQL 9.2.1.\n> >\n> > > Here'e explain analyse results: http://explain.depesz.com/s/7e7\n> >\n> > Can you also show the plan for the good case?\n> >\n> > Yours,\n> > Laurenz Albe\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.", "msg_date": "Fri, 16 Nov 2012 14:18:19 +0100", "msg_from": "Julien Cigar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL strange query plan for my query" }, { "msg_contents": "On Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili <[email protected]>wrote:\n\n> I have database with few hundred millions of rows. I'm running the\n> following query:\n>\n> select * from \"Payments\" as pinner join \"PaymentOrders\" as poon po.\"Id\" = p.\"PaymentOrderId\"inner join \"Users\" as uOn u.\"Id\" = po.\"UserId\"INNER JOIN \"Roles\" as ron u.\"RoleId\" = r.\"Id\"Where r.\"Name\" = 'Moses'\n> LIMIT 1000\n>\n> When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.\n>\n> Isn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?\n>\n> Roles table contains only 15 row, while Payments contains ~350 million\n>\n> You probably checked this already, but just in case you didn't ... did you\ndo an \"analyze\" on the small table? I've been hit by this before ... it's\nnatural to think that Postgres would always check a very small table first\nno matter what the statistics are. But it's not true. If you analyze the\nsmall table, even if it only has one or two rows in it, it will often\nradically change the plan that Postgres chooses.\n\nCraig James\n\nOn Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili <[email protected]> wrote:\nI have database with few hundred millions of rows. 
I'm running the following query:\n\nselect * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000\nWhen the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.\n\nIsn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?\nRoles table contains only 15 row, while Payments contains ~350 million\nYou probably checked this already, but just in case you didn't ... did you do an \"analyze\" on the small table?  I've been hit by this before ... it's natural to think that Postgres would always check a very small table first no matter what the statistics are.  But it's not true.  If you analyze the small table, even if it only has one or two rows in it, it will often radically change the plan that Postgres chooses.\nCraig James", "msg_date": "Fri, 16 Nov 2012 08:32:24 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL strange query plan for my query" }, { "msg_contents": "Thanks Craig. Yes I already tried it but it didn't work. I don't see any solution other than fixing this bug. Take a look http://www.postgresql.org/search/?m=1&q=LIMIT&l=8&d=365&s=r. There are too many bug reports about LIMIT slowing down queries. Let's hope it will be fixed someday :)\n\nDate: Fri, 16 Nov 2012 08:32:24 -0800\nSubject: Re: [PERFORM] PostgreSQL strange query plan for my query\nFrom: [email protected]\nTo: [email protected]\nCC: [email protected]\n\n\n\nOn Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili <[email protected]> wrote:\n\n\n\n\nI have database with few hundred millions of rows. I'm running the following query:\n\n\nselect * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.\nIsn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?\nRoles table contains only 15 row, while Payments contains ~350 million\n\nYou probably checked this already, but just in case you didn't ... did you do an \"analyze\" on the small table? I've been hit by this before ... it's natural to think that Postgres would always check a very small table first no matter what the statistics are. But it's not true. If you analyze the small table, even if it only has one or two rows in it, it will often radically change the plan that Postgres chooses.\n\n\nCraig James\n \n \t\t \t \t\t \n\n\n\nThanks Craig. Yes I already tried it but it didn't work. I don't see any solution other than fixing this bug. Take a look http://www.postgresql.org/search/?m=1&q=LIMIT&l=8&d=365&s=r. 
There are too many bug reports about LIMIT slowing down queries. Let's hope it will be fixed someday :)Date: Fri, 16 Nov 2012 08:32:24 -0800Subject: Re: [PERFORM] PostgreSQL strange query plan for my queryFrom: [email protected]: [email protected]: [email protected] Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili <[email protected]> wrote:\nI have database with few hundred millions of rows. I'm running the following query:\nselect * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\nLIMIT 1000When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent r.\"Name\" in where clause, it takes too much time to complete. I guess that PostgreSQL is doing a sequential scan on the Payments table (which contains the most rows), comparing each row one by one.\nIsn't postgresql smart enough to check first if Roles table contains any row with Name 'Moses'?\nRoles table contains only 15 row, while Payments contains ~350 million\nYou probably checked this already, but just in case you didn't ... did you do an \"analyze\" on the small table?  I've been hit by this before ... it's natural to think that Postgres would always check a very small table first no matter what the statistics are.  But it's not true.  If you analyze the small table, even if it only has one or two rows in it, it will often radically change the plan that Postgres chooses.\nCraig James", "msg_date": "Fri, 16 Nov 2012 20:35:50 +0400", "msg_from": "David Popiashvili <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL strange query plan for my query" }, { "msg_contents": "On 11/16/2012 17:35, David Popiashvili wrote:\n> Thanks Craig. Yes I already tried it but it didn't work. I don't see \n> any solution other than fixing this bug. Take a \n> look http://www.postgresql.org/search/?m=1&q=LIMIT&l=8&d=365&s=r. \n> There are too many bug reports about LIMIT slowing down queries. Let's \n> hope it will be fixed someday :)\n>\n> ------------------------------------------------------------------------\n> Date: Fri, 16 Nov 2012 08:32:24 -0800\n> Subject: Re: [PERFORM] PostgreSQL strange query plan for my query\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n>\n>\n>\n> On Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> I have database with few hundred millions of rows. I'm running the\n> following query:\n>\n> |select * from \"Payments\" as p\n> inner join \"PaymentOrders\" as po\n> on po.\"Id\" = p.\"PaymentOrderId\"\n> inner join \"Users\" as u\n> On u.\"Id\" = po.\"UserId\"\n> INNER JOIN \"Roles\" as r\n> on u.\"RoleId\" = r.\"Id\"\n> Where r.\"Name\" = 'Moses'\n> LIMIT1000|\n>\n\ndid you try:\n\nwith foo as (\nselect * from \"Payments\" as p\ninner join \"PaymentOrders\" as po\non po.\"Id\" = p.\"PaymentOrderId\"\ninner join \"Users\" as u\nOn u.\"Id\" = po.\"UserId\"\nINNER JOIN \"Roles\" as r\non u.\"RoleId\" = r.\"Id\"\nWhere r.\"Name\" = 'Moses'\n) select * from foo LIMIT 1000\n\n?\n\n> When the where clause finds a match in database, I get the result in several milliseconds, but if I modify the query and specify a non-existent|r.\"Name\"| in where clause, it takes too much time to complete. 
I guess that PostgreSQL is doing a sequential scan on the|Payments| table (which contains the most rows), comparing each row one by one.\n>\n> Isn't postgresql smart enough to check first if |Roles| table\n> contains any row with |Name| |'Moses'|?\n>\n>\n> Roles table contains only 15 row, while Payments contains ~350\n> million\n>\n> You probably checked this already, but just in case you didn't ... did \n> you do an \"analyze\" on the small table? I've been hit by this before \n> ... it's natural to think that Postgres would always check a very \n> small table first no matter what the statistics are. But it's not \n> true. If you analyze the small table, even if it only has one or two \n> rows in it, it will often radically change the plan that Postgres chooses.\n>\n> Craig James\n>\n\n", "msg_date": "Fri, 16 Nov 2012 17:53:26 +0100", "msg_from": "Julien Cigar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL strange query plan for my query" } ]
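A variant of the fencing workaround shown in this thread, written with the OFFSET 0 trick that the earlier CTE discussion mentions; this is a sketch rather than a tested fix, and whether it helps should be verified with EXPLAIN (ANALYZE, BUFFERS) since it depends on the same row estimates that mislead the LIMIT plan. A subquery containing OFFSET 0 is not flattened into the outer query, so the join is planned for full output and the outer LIMIT is applied afterwards.

    -- Sketch only: the OFFSET 0 keeps the subquery from being pulled up,
    -- so the outer LIMIT 1000 no longer steers the join plan.
    SELECT *
    FROM (
        SELECT *
        FROM   "Roles"         AS r
        JOIN   "Users"         AS u  ON u."RoleId" = r."Id"
        JOIN   "PaymentOrders" AS po ON po."UserId" = u."Id"
        JOIN   "Payments"      AS p  ON p."PaymentOrderId" = po."Id"
        WHERE  r."Name" = 'Moses'
        OFFSET 0
    ) x
    LIMIT 1000;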
[ { "msg_contents": "Hi,\n\nI have a table which contains generated static data (defined below) where the search_key field contains varying length strings.\nThere are 122,000 rows in the table\nWhen the data is created the search_key field is ordered alphanumerically and assigned a unique order_key value starting at 1.\n\nThe table is defined as:\nCREATE TABLE stuff\n(\n code integer NOT NULL DEFAULT 0,\n search_key character varying(255),\n order_key integer,\n CONSTRAINT \"PK_code\" PRIMARY KEY (code)\n)\n\nCREATE INDEX order_key\n ON stuff\n USING btree\n (order_key);\nALTER TABLE stuff CLUSTER ON order_key;\n\nAnd a view defined as:\nCREATE OR REPLACE VIEW stuff_view AS\nselect * from stuff\n\nRunning the following query takes 56+ ms as it does a seq scan of the whole table:\nSELECT CODE FROM stuff\n WHERE SEARCH_KEY LIKE 'AAAAAA%'\n\nRunning the following query takes 16+ ms as it does 2 index scans of the order_key index:\n SELECT CODE FROM stuff\n WHERE SEARCH_KEY LIKE 'AAAAAA%'\n and order_key >=\n (\n SELECT order_key FROM stuff\n WHERE SEARCH_KEY LIKE 'AA%'\n order by order_key\n limit 1\n )\n and order_key <\n (\n SELECT order_key FROM stuff\n WHERE SEARCH_KEY LIKE 'AB%'\n order by order_key\n limit 1\n )\n\nRunning the following query takes less than a second doing a single index scan:\nSELECT CODE FROM stuff\n WHERE SEARCH_KEY LIKE 'AAAAAA%'\n and order_key >= 14417\n and order_key < 15471\n\n\nThe problem query is always going to be in the first format.\nIt was my intention to either change the view to intercept the query using a rule and either\nadd the extra parameters from the second query\nOR\n Add a second table which contains the order_key ranges and\nadd the extra parameters from the third query\n\n\nIs there an easier way to do this?\n\nAs always, thanks for you help...\n\nRegards,\n\nRussell Keane\nINPS\n\nTel: +44 (0)20 7501 7277\n[cid:[email protected]]\nFollow us<https://twitter.com/INPSnews> on twitter | visit www.inps.co.uk<http://www.inps.co.uk/>\n\n\n________________________________\nRegistered name: In Practice Systems Ltd.\nRegistered address: The Bread Factory, 1a Broughton Street, London, SW8 3QJ\nRegistered Number: 1788577\nRegistered in England\nVisit our Internet Web site at www.inps.co.uk\nThe information in this internet email is confidential and is intended solely for the addressee. Access, copying or re-use of information in it by anyone else is not authorised. Any views or opinions presented are solely those of the author and do not necessarily represent those of INPS or any of its affiliates. If you are not the intended recipient please contact [email protected]", "msg_date": "Fri, 16 Nov 2012 14:28:25 +0000", "msg_from": "Russell Keane <[email protected]>", "msg_from_op": true, "msg_subject": "intercepting where clause on a view or other performance tweak" }, { "msg_contents": "Russell Keane <[email protected]> writes:\n> Running the following query takes 56+ ms as it does a seq scan of the whole table:\n> SELECT CODE FROM stuff\n> WHERE SEARCH_KEY LIKE 'AAAAAA%'\n\nWhy don't you create an index on search_key, and forget all these other\nmachinations? 
(If your locale isn't C you'll need to use a\nvarchar_pattern_ops index.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 16 Nov 2012 10:05:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: intercepting where clause on a view or other performance tweak" }, { "msg_contents": "Sorry, I should've added that in the original description.\nI have an index on search_key and it's never used.\n\nIf it makes any difference, the table is about 9MB and the index on that field alone is 3MB.\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: 16 November 2012 15:05\nTo: Russell Keane\nCc: [email protected]\nSubject: Re: [PERFORM] intercepting where clause on a view or other performance tweak\n\nRussell Keane <[email protected]> writes:\n> Running the following query takes 56+ ms as it does a seq scan of the whole table:\n> SELECT CODE FROM stuff\n> WHERE SEARCH_KEY LIKE 'AAAAAA%'\n\nWhy don't you create an index on search_key, and forget all these other machinations? (If your locale isn't C you'll need to use a varchar_pattern_ops index.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 16 Nov 2012 15:17:42 +0000", "msg_from": "Russell Keane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: intercepting where clause on a view or other\n performance tweak" }, { "msg_contents": "I should've also mentioned that we're using PG 9.0.\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Russell Keane\nSent: 16 November 2012 15:18\nTo: [email protected]\nSubject: Re: [PERFORM] intercepting where clause on a view or other performance tweak\n\nSorry, I should've added that in the original description.\nI have an index on search_key and it's never used.\n\nIf it makes any difference, the table is about 9MB and the index on that field alone is 3MB.\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: 16 November 2012 15:05\nTo: Russell Keane\nCc: [email protected]\nSubject: Re: [PERFORM] intercepting where clause on a view or other performance tweak\n\nRussell Keane <[email protected]> writes:\n> Running the following query takes 56+ ms as it does a seq scan of the whole table:\n> SELECT CODE FROM stuff\n> WHERE SEARCH_KEY LIKE 'AAAAAA%'\n\nWhy don't you create an index on search_key, and forget all these other machinations? (If your locale isn't C you'll need to use a varchar_pattern_ops index.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 16 Nov 2012 15:22:14 +0000", "msg_from": "Russell Keane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: intercepting where clause on a view or other\n performance tweak" }, { "msg_contents": "Russell Keane <[email protected]> writes:\n> Sorry, I should've added that in the original description.\n> I have an index on search_key and it's never used.\n\nDid you pay attention to the point about the nondefault operator class?\nIf the LIKE pattern is left-anchored and as selective as your example\nimplies, the planner certainly ought to try to use a compatible index.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 16 Nov 2012 10:41:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: intercepting where clause on a view or other performance tweak" } ]
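To make Tom Lane's suggestion concrete, here is a small sketch against the table definition posted at the top of the thread (the index name is invented; the explicit operator class is only needed because the database does not use the C locale):

    CREATE INDEX stuff_search_key_like_idx
        ON stuff (search_key varchar_pattern_ops);

    ANALYZE stuff;

    -- A left-anchored pattern can now be answered with an index range scan:
    SELECT code FROM stuff WHERE search_key LIKE 'AAAAAA%';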
[ { "msg_contents": "Postgres Performance Wizards,\n\nI am a few years into developing and operating a system underpinned by\nPostgres that sees the arrival a significant number of events around\nthe clock, not an epic amount of data, but enough to be challenging to\nme, in particular when system downtime is not very palatable and the\ndata is retained quasi-indefinitely.\n\nI have various tables that house different kinds of events, and in\naddition to wanting to look at a small number of rows of data, users\noften want to generate summary reports on large swaths of data that\nspan days or weeks. At present, these reports can trigger index scans\nthat take minutes to service, and the parameters of the reports are\nuser specified, making their pre-generation infeasible. Generally the\nrows in these tables are write-once, but they contain a pointer to the\nrelated BLOB from which they were constructed, and every now and again\nsome new field in the originating BLOB becomes of interest, causing me\nto alter the table and then do a sweep of the full table with\ncorresponding updates, violating the otherwise INSERT-only nature.\n\nThese event tables generally have an \"event time\" column that is\nindexed and which is an obvious candidate for either partitioning or\nclustering of the table. I'm trying to make sense of which is the\nbetter option for me.\n\nAs best I can tell, the decision points are as follows...\n\nPARTITIONING\n\nPros:\n\n * no outage; data just starts flowing into new partitions seamlessly\n * allows more control over where the data goes, creating retrieval parallelization opportunities\n * \"clustering\" cannot be inadvertently undone in a way that requires scheduled downtime to repair\n * probably more resilient in the case of the \"event time\" being different from the time that I processed the event\n\nCons:\n\n * does not deal with legacy data without extra migration (over time this becomes less relevant)\n * requires some kind of background process to manage partition creation\n * partition size will affect performance and choosing its size is not a science\n\nCLUSTERING\n\nPros:\n\n * no particularly custom development work on my part\n * once done, it puts all existing data in a good state for efficient querying without extra work\n\nCons:\n\n * will lock up the system for the duration of the CLUSTER command\n * somehow need to make sure that ANALYZE commands run often enough\n * does not give me much control of the underlying storage layout\n * may have problems when the occasional mass-UPDATE is done\n * unclear whether a VACUUM FULL is required to prevent subsequent un-clustered-ness despite having a fill factor of 100, stemming from the mass-UPDATE operations\n * could generate a huge number of WAL segments to archive\n * could possibly be sabotaged by the \"event time\" property not being well correlated with the time that the event is processed in the face of upstream systems have momentary issues\n\nAs far as questions to the group go:\n\n * Is my understanding of the pros and cons of the options reasonably correct and comprehensive?\n * What has governed your decisions in making such a choice on past projects of your own?\n * If I go the clustering route, will the occasional mass update really mess with things, requiring a re-cluster and possibly even a full vacuum (to prevent re-un-clustering)?\n * Might it make more sense to cluster when the \"event time\" property is the time that I processed the event but partition when it is the time that the event 
occurred in some other system?\n * Is running a CLUSTER command actually necessary to get the performance benefits if the table ought already to be in good order, or is a table that is already well ordered enough on its own for query execution to yield nice sequential access to the disk?\n\nMany thanks in advance for your insights...\n\n -- AWG\n\n", "msg_date": "Sun, 18 Nov 2012 12:14:02 -0500", "msg_from": "\"Andrew W. Gibbs\" <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning versus clustering" } ]
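For readers weighing the same trade-off, both options can be sketched briefly. The table and column names below are hypothetical (the poster's schema is not shown), and the partitioning example uses the inheritance-based mechanism available in this era of PostgreSQL, which still needs a routing trigger or application-side targeting of child tables on top of what is shown:

    -- Partitioning sketch: one child table per month of event_time.
    -- With constraint_exclusion enabled, the CHECK constraint lets the planner
    -- skip children that cannot contain rows for a queried time range.
    CREATE TABLE events_2012_11 (
        CHECK (event_time >= DATE '2012-11-01' AND event_time < DATE '2012-12-01')
    ) INHERITS (events);
    CREATE INDEX events_2012_11_event_time_idx ON events_2012_11 (event_time);

    -- Clustering sketch: a one-shot physical reorder by the event-time index.
    -- CLUSTER takes an ACCESS EXCLUSIVE lock for its duration, which is the
    -- downtime concern raised above, and the ordering degrades as new rows arrive.
    CLUSTER events USING events_event_time_idx;
    ANALYZE events;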
[ { "msg_contents": "> explain analyze\n> select a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp, a.AIPGUID, a.submissionGUID, a.parentSubmissionGUID, a.sizeArchived, a.addedContentString, a.addedContentSizesString, a.removedContentString, a.removedContentSizesString, a.modifiedContentString, a.modifiedContentSizesString, a.DISCRIMINATOR\n> from AIPModificationEvent a\n> where a.ID in (select MAX(b.ID) from AIPModificationEvent b where b.parentSubmissionGUID\n> in\n> (select c.GUID from WorkflowProcessingEvent c where c.DISCRIMINATOR='WorkflowCompleted'\n> and c.eventTimeStamp >= '2012-11-10 00:00:00' and c.eventTimeStamp < '2012-11-11 00:00:00')\n> or b.submissionGUID in\n> (select c.GUID from WorkflowProcessingEvent c\n> where c.DISCRIMINATOR='WorkflowCompleted' and c.eventTimeStamp >= '2012-11-10 00:00:00' and c.eventTimeStamp <\n> '2012-11-11 00:00:00')\n> group by b.AIPGUID)\n> limit 1000 offset 3000\n\n\nHi Maria,\n\nIt appears to be doing a sort so that it can carry out the group by clause but the group by doesn't appear to be necessary as you're selecting the max(b.ID) after doing the group by.\nIf you omit the group by then it will return more rows in that part of the query but the MAX(b.ID) will return 1 value regardless.\n\nRegards,\n\nRussell Keane.\n\nRegistered name: In Practice Systems Ltd.\nRegistered address: The Bread Factory, 1a Broughton Street, London, SW8 3QJ\nRegistered Number: 1788577\nRegistered in England\nVisit our Internet Web site at www.inps.co.uk\nThe information in this internet email is confidential and is intended solely for the addressee. Access, copying or re-use of information in it by anyone else is not authorised. Any views or opinions presented are solely those of the author and do not necessarily represent those of INPS or any of its affiliates. If you are not the intended recipient please contact [email protected]\n\n\n", "msg_date": "Tue, 20 Nov 2012 11:16:46 +0000", "msg_from": "Russell Keane <[email protected]>", "msg_from_op": true, "msg_subject": "FW: slow query on postgres 8.4" } ]
[ { "msg_contents": "Sergio Mayoral wrote:\n\n> I cannot use persistent connections. I must open/close a connection\n> anytime I want to insert something new.\n\nThat's odd. Why is that?\n\n> do i have to configure something different? Am i missing something?\n\nYou could use pgbouncer to hold database connections open for you and\nprovide a somewhat more lightweight connection process.\n\n-Kevin\n\n", "msg_date": "Tue, 20 Nov 2012 07:57:23 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PQconnectStart/PQconnectPoll" } ]
[ { "msg_contents": "Maria L. Wilson wrote:\n\n> Can someone shed some light on the following query.....\n> any help would certainly be appreciated!\n\nThe query text and EXPLAIN ANALYZE output are a good start, but a lot\nof other information is needed to really understand the issue.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nThe EXPLAIN ANALYZE output will be easier to read as an attachment --\nor even better, post it to:\n\nhttp://explain.depesz.com/\n\n-Kevin\n\n", "msg_date": "Tue, 20 Nov 2012 08:06:09 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help on slow query using postgres 8.4" } ]
[ { "msg_contents": "On Tue, Nov 20, 2012 at 3:53 PM, Tom Lane <[email protected]> wrote:\n\n> Craig Ringer <[email protected]> writes:\n> > On 11/21/2012 12:06 AM, Claudio Freire wrote:\n> >> I meant for postgres to do automatically. Rewriting as a join wouldn't\n> >> work as an optimization fence the way we're used to, but pushing\n> >> constraints upwards can only help (especially if highly selective).\n>\n> > Because people are now used to using CTEs as query hints, it'd probably\n> > cause performance regressions in working queries. Perhaps more\n> > importantly, Pg would have to prove that doing so didn't change queries\n> > that invoked functions with side-effects to avoid changing the results\n> > of currently valid queries.\n>\n> We could trivially arrange to keep the current semantics if the CTE\n> query contains any volatile functions (or of course if it's\n> INSERT/UPDATE/DELETE). I think we'd also need to not optimize if\n> it's invoked from more than one place in the outer query.\n>\n> I think the more interesting question is what cases wouldn't be covered\n> by such a rule. Typically you need to use OFFSET 0 in situations where\n> the planner has guessed wrong about costs or rowcounts, and I think\n> people are likely using WITH for that as well. Should we be telling\n> people that they ought to insert OFFSET 0 in WITH queries if they want\n> to be sure there's an optimization fence?\n>\n\nI'm probably beating a dead horse ... but isn't this just a hint? Except\nthat it's worse than a hint, because it's a hint in disguise and is\nundocumented. As far as I can tell, there's no use for \"OFFSET 0\" except\nto act as an optimizer fence.\n\nIt's clearly an important need, given the nature of the dialog above (and\nmany others that have passed through this mailing list).\n\nWhy not make an explicit hint syntax and document it? I've still don't\nunderstand why \"hint\" is a dirty word in Postgres. There are a half-dozen\nor so ways in common use to circumvent or correct sub-optimal plans.\n\nCraig James\n\n\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Tue, Nov 20, 2012 at 3:53 PM, Tom Lane <[email protected]> wrote:\nCraig Ringer <[email protected]> writes:\n> On 11/21/2012 12:06 AM, Claudio Freire wrote:\n>> I meant for postgres to do automatically. Rewriting as a join wouldn't\n>> work as an optimization fence the way we're used to, but pushing\n>> constraints upwards can only help (especially if highly selective).\n\n> Because people are now used to using CTEs as query hints, it'd probably\n> cause performance regressions in working queries. Perhaps more\n> importantly, Pg would have to prove that doing so didn't change queries\n> that invoked functions with side-effects to avoid changing the results\n> of currently valid queries.\n\nWe could trivially arrange to keep the current semantics if the CTE\nquery contains any volatile functions (or of course if it's\nINSERT/UPDATE/DELETE).  I think we'd also need to not optimize if\nit's invoked from more than one place in the outer query.\n\nI think the more interesting question is what cases wouldn't be covered\nby such a rule.  Typically you need to use OFFSET 0 in situations where\nthe planner has guessed wrong about costs or rowcounts, and I think\npeople are likely using WITH for that as well.  
Should we be telling\npeople that they ought to insert OFFSET 0 in WITH queries if they want\nto be sure there's an optimization fence?I'm probably beating a dead horse ... but isn't this just a hint?  Except that it's worse than a hint, because it's a hint in disguise and is undocumented.  As far as I can tell, there's no use for \"OFFSET 0\" except to act as an optimizer fence.\nIt's clearly an important need, given the nature of the dialog above (and many others that have passed through this mailing list).Why not make an explicit hint syntax and document it? I've still don't understand why \"hint\" is a dirty word in Postgres.  There are a half-dozen or so ways in common use to circumvent or correct sub-optimal plans.\nCraig James \n\n                        regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 20 Nov 2012 17:35:38 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Hints (was Poor performance using CTE)" }, { "msg_contents": "On 11/21/2012 09:35 AM, Craig James wrote:\n> Why not make an explicit hint syntax and document it? I've still don't\n> understand why \"hint\" is a dirty word in Postgres. There are a\n> half-dozen or so ways in common use to circumvent or correct\n> sub-optimal plans.\n>\n\nThe reason usually given is that hints provide easy workarounds for\nplanner and stats issues, so people don't report problems or fix the\nunderlying problem.\n\nOf course, if that's all there was to it, `OFFSET 0` would be made into\nan error or warning, or ignored and not fenced.\n\nThe reality is, as you say, that there's a need, because the planner can\nnever be perfect - or rather, if it were nearly perfect, it'd take so\nlong to read the stats and calculate plans that everything would be\nglacially slow anyway. The planner has to compromise, and so cases will\nalways arise where it needs a little help.\n\nI think it's time to admit that and get the syntax in place for CTEs so\nthere's room to optimize them later, rather than cementing\nCTEs-as-fences in forever as a Pg quirk.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 21 Nov 2012 10:15:17 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 11/20/2012 08:15 PM, Craig Ringer wrote:\n\n> I think it's time to admit that and get the syntax in place for CTEs so\n> there's room to optimize them later, rather than cementing\n> CTEs-as-fences in forever as a Pg quirk.\n\nI know I'm just some schmo, but I'd vote for this. I'm certainly guilty \nof using OFFSET 0. Undocumented hints are still hints. As much as I \nthink they're a bad idea by cementing a certain plan that may not get \nthe benefits of future versions, non-intuitive side-effects by using \noverloaded syntax are worse.\n\nI've been using CTEs as temp tables because I know that's how they work. \nBut I'd be more than willing to modify my syntax one way or the other to \nadopt non-materialized CTEs, provided there's some way to get the \ncurrent behavior.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n", "msg_date": "Wed, 21 Nov 2012 07:27:18 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" } ]
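For readers who have not run into them, here is a small sketch of the two undocumented "fences" this thread keeps referring to; the table, column, and function names are invented for illustration:

    -- Subquery fence: a subquery containing OFFSET 0 is not flattened into
    -- the outer query, so the outer predicate is not pushed down into it.
    SELECT *
      FROM (SELECT * FROM big_table WHERE expensive_check(payload) OFFSET 0) sub
     WHERE sub.customer_id = 42;

    -- CTE fence: in the PostgreSQL versions discussed here, a WITH term is
    -- planned and evaluated on its own, so it behaves the same way.
    WITH fenced AS (
        SELECT * FROM big_table WHERE expensive_check(payload)
    )
    SELECT * FROM fenced WHERE customer_id = 42;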
[ { "msg_contents": "Craig Ringer wrote:\n> On 11/21/2012 09:35 AM, Craig James wrote:\n>> Why not make an explicit hint syntax and document it? I've still\n>> don't understand why \"hint\" is a dirty word in Postgres. There are\n>> a half-dozen or so ways in common use to circumvent or correct\n>> sub-optimal plans.\n> \n> The reason usually given is that hints provide easy workarounds for\n> planner and stats issues, so people don't report problems or fix\n> the underlying problem.\n> \n> Of course, if that's all there was to it, `OFFSET 0` would be made\n> into an error or warning, or ignored and not fenced.\n> \n> The reality is, as you say, that there's a need, because the\n> planner can never be perfect - or rather, if it were nearly\n> perfect, it'd take so long to read the stats and calculate plans\n> that everything would be glacially slow anyway. The planner has to\n> compromise, and so cases will always arise where it needs a little\n> help.\n> \n> I think it's time to admit that and get the syntax in place for\n> CTEs so there's room to optimize them later, rather than cementing\n> CTEs-as-fences in forever as a Pg quirk.\n\nIt's a tough problem. Disguising and not documenting the available\noptimizer hints leads to more reports on where the optimizer should\nbe smarter, and has spurred optimizer improvements. And many type of\nhints would undoubtedly cause people to force what they *think* would\nbe the best plan in many cases where they are wrong, or become wrong\nas data scales up. But it does seem odd every time I hear people\nsaying that they don't want to eliminate some optimization fence\nbecause \"they find it useful\" while simultaneously arguing that we\ndon't have or want hints.\n\nHaving a way to coerce the optimizer from the plan it would take with\nstraightforward coding *is* a hint, and one down-side of hiding the\nhints inside syntax mostly supported for other reasons is that people\nwho don't know about these clever devices can't do reasonable\nrefactoring of queries for readability without risking performance\nregressions. Another down-side is that perfectly reasonable queries\nported from other databases that use hint syntax for hints run afoul\nof the secret hints when trying to run queries on PostgreSQL, and get\nperformance potentially orders of magnitude worse than they expect.\n\nI'm not sure what the best answer is, but as long as we have hints,\nbut only through OFFSET 0 or CTE usage, that should be documented.\nBetter, IMV, would be to identify what sorts of hints people actually\nfind useful, and use that as the basis for TODO items for optimizer\nimprovement as well as inventing clear ways to specify the desired\ncoercion. I liked the suggestion that a CTE which didn't need to be\nmaterialized because of side-effects or multiple references have a\nkeyword. Personally, I think that AS MATERIALIZED x (SELECT ...)\nwould be preferable to AS x (SELECT ... OFFSET 0) as the syntax to\nspecify that.\n\nRegarding the above-mentioned benefits we would stand to lose by\nhaving clear and documented hints, perhaps we could occasionally\nsolicit input on where people are finding hints useful to get ideas\non where we might want to improve the optimizer. 
As far as worrying\nabout people using hints to force a plan which is sub-optimal --\nisn't that getting into nanny mode a bit too much?\n\n-Kevin\n\n", "msg_date": "Wed, 21 Nov 2012 08:42:32 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 21.11.2012 15:42, Kevin Grittner wrote:\n> Better, IMV, would be to identify what sorts of hints people actually\n> find useful, and use that as the basis for TODO items for optimizer\n> improvement as well as inventing clear ways to specify the desired\n> coercion. I liked the suggestion that a CTE which didn't need to be\n> materialized because of side-effects or multiple references have a\n> keyword. Personally, I think that AS MATERIALIZED x (SELECT ...)\n> would be preferable to AS x (SELECT ... OFFSET 0) as the syntax to\n> specify that.\n\nRather than telling the planner what to do or not to do, I'd much rather \nhave hints that give the planner more information about the tables and \nquals involved in the query. A typical source of bad plans is when the \nplanner gets its cost estimates wrong. So rather than telling the \nplanner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\", \nthe user could tell the planner that there are only 10 rows that match \nthe \"a.id = b.id\" qual. That gives the planner the information it needs \nto choose the right plan on its own. That kind of hints would be much \nless implementation specific and much more likely to still be useful, or \nat least not outright counter-productive, in a future version with a \nsmarter planner.\n\nYou could also attach that kind of hints to tables and columns, which \nwould be more portable and nicer than decorating all queries.\n\n- Heikki\n\n", "msg_date": "Wed, 21 Nov 2012 18:05:17 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On Wed, Nov 21, 2012 at 5:42 AM, Kevin Grittner <[email protected]> wrote:\n\n>\n> It's a tough problem. Disguising and not documenting the available\n> optimizer hints leads to more reports on where the optimizer should\n> be smarter, and has spurred optimizer improvements. ...\n> Regarding the above-mentioned benefits we would stand to lose by\n> having clear and documented hints, perhaps we could occasionally\n> solicit input on where people are finding hints useful to get ideas\n> on where we might want to improve the optimizer. As far as worrying\n> about people using hints to force a plan which is sub-optimal --\n> isn't that getting into nanny mode a bit too much?\n>\n\nToward that end, the hint documentation (which is almost always viewed as\nHTML) could be prefaced by a strong suggestion to post performance\nquestions in this group first, with links to the \"subscribe\" page and the\n\"how to report performance problems\" FAQ. The hint documentation could even\nbe minimalistic; suggest to developers that they should post their\nproblematic queries here before resorting to hints. That would give the\nexperts an opportunity to provide the normal advice. The correct hint\nsyntax would be suggested only when all other avenues failed.\n\nCraig James\n\n\n>\n> -Kevin\n>\n\nOn Wed, Nov 21, 2012 at 5:42 AM, Kevin Grittner <[email protected]> wrote:\n\nIt's a tough problem. 
Disguising and not documenting the available\noptimizer hints leads to more reports on where the optimizer should\nbe smarter, and has spurred optimizer improvements. ...\nRegarding the above-mentioned benefits we would stand to lose by\nhaving clear and documented hints, perhaps we could occasionally\nsolicit input on where people are finding hints useful to get ideas\non where we might want to improve the optimizer. As far as worrying\nabout people using hints to force a plan which is sub-optimal --\nisn't that getting into nanny mode a bit too much?Toward that end, the hint documentation (which is almost always viewed as HTML) could be prefaced by a strong suggestion to post performance questions in this group first, with links to the \"subscribe\" page and the \"how to report performance problems\" FAQ. The hint documentation could even be minimalistic; suggest to developers that they should post their problematic queries here before resorting to hints.  That would give the experts an opportunity to provide the normal advice.  The correct hint syntax would be suggested only when all other avenues failed.\nCraig James \n\n-Kevin", "msg_date": "Wed, 21 Nov 2012 08:34:02 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n> Rather than telling the planner what to do or not to do, I'd much rather\n> have hints that give the planner more information about the tables and\n> quals involved in the query. A typical source of bad plans is when the\n> planner gets its cost estimates wrong. So rather than telling the\n> planner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\",\n> the user could tell the planner that there are only 10 rows that match\n> the \"a.id = b.id\" qual. That gives the planner the information it needs\n> to choose the right plan on its own. That kind of hints would be much\n> less implementation specific and much more likely to still be useful, or\n> at least not outright counter-productive, in a future version with a\n> smarter planner.\n> \n> You could also attach that kind of hints to tables and columns, which\n> would be more portable and nicer than decorating all queries.\n\nI like this idea, but also think that if we have a syntax to allow\nhints, it would be nice to have a simple way to ignore all hints (yes, I\nsuppose I'm suggesting yet another GUC). That way after sprinkling your\nSQL with hints, you could easily periodically (e.g. after a Postgres\nupgrade) test what would happen if the hints were removed.\n\nJoe\n-- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & 24x7 Support\n\n\n\n", "msg_date": "Wed, 21 Nov 2012 09:25:52 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On Wed, Nov 21, 2012 at 9:25 AM, Joe Conway <[email protected]> wrote:\n\n> On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n> > Rather than telling the planner what to do or not to do, I'd much rather\n> > have hints that give the planner more information about the tables and\n> > quals involved in the query. A typical source of bad plans is when the\n> > planner gets its cost estimates wrong. 
So rather than telling the\n> > planner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\",\n> > the user could tell the planner that there are only 10 rows that match\n> > the \"a.id = b.id\" qual. That gives the planner the information it needs\n> > to choose the right plan on its own. That kind of hints would be much\n> > less implementation specific and much more likely to still be useful, or\n> > at least not outright counter-productive, in a future version with a\n> > smarter planner.\n> >\n> > You could also attach that kind of hints to tables and columns, which\n> > would be more portable and nicer than decorating all queries.\n>\n> I like this idea, but also think that if we have a syntax to allow\n> hints, it would be nice to have a simple way to ignore all hints (yes, I\n> suppose I'm suggesting yet another GUC). That way after sprinkling your\n> SQL with hints, you could easily periodically (e.g. after a Postgres\n> upgrade) test what would happen if the hints were removed.\n>\n\nOr a three-way choice: Allow, ignore, or generate an error. That would\nallow developers to identify where hints are being used.\n\nCraig\n\n\n>\n> Joe\n> --\n> Joe Conway\n> credativ LLC: http://www.credativ.us\n> Linux, PostgreSQL, and general Open Source\n> Training, Service, Consulting, & 24x7 Support\n>\n>\n>\n\nOn Wed, Nov 21, 2012 at 9:25 AM, Joe Conway <[email protected]> wrote:\nOn 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n> Rather than telling the planner what to do or not to do, I'd much rather\n> have hints that give the planner more information about the tables and\n> quals involved in the query. A typical source of bad plans is when the\n> planner gets its cost estimates wrong. So rather than telling the\n> planner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\",\n> the user could tell the planner that there are only 10 rows that match\n> the \"a.id = b.id\" qual. That gives the planner the information it needs\n> to choose the right plan on its own. That kind of hints would be much\n> less implementation specific and much more likely to still be useful, or\n> at least not outright counter-productive, in a future version with a\n> smarter planner.\n>\n> You could also attach that kind of hints to tables and columns, which\n> would be more portable and nicer than decorating all queries.\n\nI like this idea, but also think that if we have a syntax to allow\nhints, it would be nice to have a simple way to ignore all hints (yes, I\nsuppose I'm suggesting yet another GUC). That way after sprinkling your\nSQL with hints, you could easily periodically (e.g. after a Postgres\nupgrade) test what would happen if the hints were removed.Or a three-way choice: Allow, ignore, or generate an error.  
That would allow developers to identify where hints are being used.\nCraig \n\nJoe\n--\nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & 24x7 Support", "msg_date": "Wed, 21 Nov 2012 09:28:35 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 11/21/2012 09:28 AM, Craig James wrote:\n> \n> \n> On Wed, Nov 21, 2012 at 9:25 AM, Joe Conway <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n> > Rather than telling the planner what to do or not to do, I'd much\n> rather\n> > have hints that give the planner more information about the tables and\n> > quals involved in the query. A typical source of bad plans is when the\n> > planner gets its cost estimates wrong. So rather than telling the\n> > planner to use a nested loop join for \"a INNER JOIN b ON a.id\n> <http://a.id> = b.id <http://b.id>\",\n> > the user could tell the planner that there are only 10 rows that match\n> > the \"a.id <http://a.id> = b.id <http://b.id>\" qual. That gives the\n> planner the information it needs\n> > to choose the right plan on its own. That kind of hints would be much\n> > less implementation specific and much more likely to still be\n> useful, or\n> > at least not outright counter-productive, in a future version with a\n> > smarter planner.\n> >\n> > You could also attach that kind of hints to tables and columns, which\n> > would be more portable and nicer than decorating all queries.\n> \n> I like this idea, but also think that if we have a syntax to allow\n> hints, it would be nice to have a simple way to ignore all hints (yes, I\n> suppose I'm suggesting yet another GUC). That way after sprinkling your\n> SQL with hints, you could easily periodically (e.g. after a Postgres\n> upgrade) test what would happen if the hints were removed.\n> \n> \n> Or a three-way choice: Allow, ignore, or generate an error. That would\n> allow developers to identify where hints are being used.\n\n+1\n\nJoe\n\n\n-- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & 24x7 Support\n\n\n\n", "msg_date": "Wed, 21 Nov 2012 09:30:41 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "Craig James <[email protected]> writes:\n> On Wed, Nov 21, 2012 at 9:25 AM, Joe Conway <[email protected]> wrote:\n>> I like this idea, but also think that if we have a syntax to allow\n>> hints, it would be nice to have a simple way to ignore all hints (yes, I\n>> suppose I'm suggesting yet another GUC). That way after sprinkling your\n>> SQL with hints, you could easily periodically (e.g. after a Postgres\n>> upgrade) test what would happen if the hints were removed.\n\n> Or a three-way choice: Allow, ignore, or generate an error. That would\n> allow developers to identify where hints are being used.\n\nThrowing errors would likely prevent you from reaching all parts of your\napplication, thus preventing complete testing. 
Much more sensible to\njust log such queries.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 21 Nov 2012 12:39:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "Craig James <[email protected]> wrote:\n\n> On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n> > Rather than telling the planner what to do or not to do, I'd much rather\n> > have hints that give the planner more information about the tables and\n> > quals involved in the query. A typical source of bad plans is when the\n> > planner gets its cost estimates wrong. So rather than telling the\n> > planner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\",\n> > the user could tell the planner that there are only 10 rows that match\n> > the \"a.id = b.id\" qual. That gives the planner the information it needs\n> > to choose the right plan on its own. That kind of hints would be much\n> > less implementation specific and much more likely to still be useful, or\n> > at least not outright counter-productive, in a future version with a\n> > smarter planner.\n> >\n> > You could also attach that kind of hints to tables and columns, which\n> > would be more portable and nicer than decorating all queries.\n> \n> I like this idea, but also think that if we have a syntax to allow\n> hints, it would be nice to have a simple way to ignore all hints (yes, I\n> suppose I'm suggesting yet another GUC). That way after sprinkling your\n> SQL with hints, you could easily periodically (e.g. after a Postgres\n> upgrade) test what would happen if the hints were removed.\n> \n> \n> Or a three-way choice: Allow, ignore, or generate an error. That would allow\n> developers to identify where hints are being used.\n> \n> Craig\n\n+1\n\nI think, we HAVE a smart planner, but hints in this direction are okay,\nand we need a simple way to make such hints obsolete - for/in the future. \n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n", "msg_date": "Wed, 21 Nov 2012 18:48:08 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "> \n> Rather than telling the planner what to do or not to do, I'd much rather \n> have hints that give the planner more information about the tables and \n> quals involved in the query. A typical source of bad plans is when the \n> planner gets its cost estimates wrong. So rather than telling the \n> planner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\", \n> the user could tell the planner that there are only 10 rows that match \n> the \"a.id = b.id\" qual. \n\n\n\nInstead of gathering statistics for all possible joins ( and join orders) , in Oracle there is a functionality that can be switched on where the optimizer is given cardinality feedback for the chosen plans, so it can choose another plan if the same statement comes around. \n\nSecondly, there is functionality to insert a hint into an SQL statement. That's very good for COTS apps where the statement can't be altered. 
Now I know that there's relatively not much COTS for the Postgresql, ( hence arguments like 'we should not implement hints so we're forcing people to solve the underlying problem' ), but as Postgresql will replace oracle in the lower end of the market, this functionality is usefull. \n\n\n \t\t \t \t\t \n\n\n\n\n> > Rather than telling the planner what to do or not to do, I'd much rather > have hints that give the planner more information about the tables and > quals involved in the query. A typical source of bad plans is when the > planner gets its cost estimates wrong. So rather than telling the > planner to use a nested loop join for \"a INNER JOIN b ON a.id = b.id\", > the user could tell the planner that there are only 10 rows that match > the \"a.id = b.id\" qual. Instead of gathering statistics for all possible joins ( and join orders) , in Oracle there is a functionality that can be switched on where the optimizer is given cardinality feedback for the chosen plans, so it can choose another plan if the same statement comes around. Secondly, there is functionality to insert a hint into an SQL statement. That's very good for COTS apps where the statement can't be altered. Now I know that there's relatively not much COTS for the Postgresql, ( hence arguments like 'we should not implement hints so we're forcing people to solve the underlying problem' ), but as Postgresql will replace oracle in the lower end of the market, this functionality is usefull.", "msg_date": "Wed, 21 Nov 2012 19:04:20 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints - experiences from other rdbms" }, { "msg_contents": "On 22/11/12 06:28, Craig James wrote:\n>\n>\n> On Wed, Nov 21, 2012 at 9:25 AM, Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n> > Rather than telling the planner what to do or not to do, I'd\n> much rather\n> > have hints that give the planner more information about the\n> tables and\n> > quals involved in the query. A typical source of bad plans is\n> when the\n> > planner gets its cost estimates wrong. So rather than telling the\n> > planner to use a nested loop join for \"a INNER JOIN b ON a.id\n> <http://a.id> = b.id <http://b.id>\",\n> > the user could tell the planner that there are only 10 rows that\n> match\n> > the \"a.id <http://a.id> = b.id <http://b.id>\" qual. That gives\n> the planner the information it needs\n> > to choose the right plan on its own. That kind of hints would be\n> much\n> > less implementation specific and much more likely to still be\n> useful, or\n> > at least not outright counter-productive, in a future version with a\n> > smarter planner.\n> >\n> > You could also attach that kind of hints to tables and columns,\n> which\n> > would be more portable and nicer than decorating all queries.\n>\n> I like this idea, but also think that if we have a syntax to allow\n> hints, it would be nice to have a simple way to ignore all hints\n> (yes, I\n> suppose I'm suggesting yet another GUC). That way after sprinkling\n> your\n> SQL with hints, you could easily periodically (e.g. after a Postgres\n> upgrade) test what would happen if the hints were removed.\n>\n>\n> Or a three-way choice: Allow, ignore, or generate an error. 
That would \n> allow developers to identify where hints are being used.\n>\n> Craig\n>\n>\n> Joe\n> --\n> Joe Conway\n> credativ LLC: http://www.credativ.us\n> Linux, PostgreSQL, and general Open Source\n> Training, Service, Consulting, & 24x7 Support\n>\n>\n>\nOr perhaps hints should have the pg version attached, so that they are \nautomatically ignored when the pg version changed? Problem may then \nbecome people reluctant to upgrade because their hints relate to a \nprevious version! Sigh...\n\nEven requiring registration of hints and expiring them after a limited \ntime period would not work - as people would simply automate the process \nof registration & application...\n\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 22/11/12 06:28, Craig James wrote:\n\n\n\nOn Wed, Nov 21, 2012 at 9:25 AM, Joe\n Conway <[email protected]>\n wrote:\n\n On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:\n > Rather than telling the planner what to do or not to do,\n I'd much rather\n > have hints that give the planner more information about\n the tables and\n > quals involved in the query. A typical source of bad\n plans is when the\n > planner gets its cost estimates wrong. So rather than\n telling the\n > planner to use a nested loop join for \"a INNER JOIN b ON\n a.id\n = b.id\",\n > the user could tell the planner that there are only 10\n rows that match\n > the \"a.id = b.id\" qual. That\n gives the planner the information it needs\n > to choose the right plan on its own. That kind of hints\n would be much\n > less implementation specific and much more likely to\n still be useful, or\n > at least not outright counter-productive, in a future\n version with a\n > smarter planner.\n >\n > You could also attach that kind of hints to tables and\n columns, which\n > would be more portable and nicer than decorating all\n queries.\n\n I like this idea, but also think that if we have a syntax to\n allow\n hints, it would be nice to have a simple way to ignore all\n hints (yes, I\n suppose I'm suggesting yet another GUC). That way after\n sprinkling your\n SQL with hints, you could easily periodically (e.g. after a\n Postgres\n upgrade) test what would happen if the hints were removed.\n\n\n Or a three-way choice: Allow, ignore, or generate an error. \n That would allow developers to identify where hints are being\n used.\n\n Craig\n  \n\n\n\n Joe\n --\n Joe Conway\n credativ LLC: http://www.credativ.us\n Linux, PostgreSQL, and general Open Source\n Training, Service, Consulting, & 24x7 Support\n\n\n\n\n\n\n Or perhaps hints should have the pg version attached, so that they\n are automatically ignored when the pg version changed?  Problem may\n then become people reluctant to upgrade because their hints relate\n to a previous version!  Sigh...\n\n Even requiring registration of hints and expiring them after a\n limited time period would not work - as people would simply automate\n the process of registration & application...\n\n\n Cheers,\n Gavin", "msg_date": "Thu, 22 Nov 2012 10:18:45 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On Wed, Nov 21, 2012 at 8:05 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 21.11.2012 15:42, Kevin Grittner wrote:\n>>\n>> Better, IMV, would be to identify what sorts of hints people actually\n>> find useful, and use that as the basis for TODO items for optimizer\n>> improvement as well as inventing clear ways to specify the desired\n>> coercion. 
I liked the suggestion that a CTE which didn't need to be\n>> materialized because of side-effects or multiple references have a\n>> keyword. Personally, I think that AS MATERIALIZED x (SELECT ...)\n>> would be preferable to AS x (SELECT ... OFFSET 0) as the syntax to\n>> specify that.\n>\n>\n> Rather than telling the planner what to do or not to do, I'd much rather\n> have hints that give the planner more information about the tables and quals\n> involved in the query. A typical source of bad plans is when the planner\n> gets its cost estimates wrong. So rather than telling the planner to use a\n> nested loop join for \"a INNER JOIN b ON a.id = b.id\", the user could tell\n> the planner that there are only 10 rows that match the \"a.id = b.id\" qual.\n\nFor each a.id there are 10 b.id, or for each b.id there are 10 a.id?\n\n> That gives the planner the information it needs to choose the right plan on\n> its own. That kind of hints would be much less implementation specific and\n> much more likely to still be useful, or at least not outright\n> counter-productive, in a future version with a smarter planner.\n\nWhen I run into unexpectedly poor performance, I have an intuitive\nenough feel for my own data that I know what plan it ought to be\nusing. Figuring out why it is not using it is very hard. For one\nthing, EXPLAIN tells you about the \"winning\" plan, but there is no\nvisibility into what ought to be the winning plan but isn't, so no way\nto see why it isn't. So you first have to use our existing non-hint\nhints (enable_*, doing weird things with cost_*, CTE stuff) to trick\nit into using the plan I want it to use, before I can figure out why\nit isn't using it, before I could figure out what hints of the style\nyou are suggesting to supply to get it to use it.\n\nSo I think the type of hints you are suggesting would be about as hard\nfor the user to use as debugging the planner for the particular case\nwould be. While the more traditional type of hint is easy to use,\nbecause the end user understands their data more than they understand\nthe guts of the planner.\n\n\nCheers,\n\nJeff\n\n", "msg_date": "Wed, 21 Nov 2012 16:53:56 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 22.11.2012 02:53, Jeff Janes wrote:\n>> That gives the planner the information it needs to choose the right plan on\n>> its own. That kind of hints would be much less implementation specific and\n>> much more likely to still be useful, or at least not outright\n>> counter-productive, in a future version with a smarter planner.\n>\n> When I run into unexpectedly poor performance, I have an intuitive\n> enough feel for my own data that I know what plan it ought to be\n> using. Figuring out why it is not using it is very hard. For one\n> thing, EXPLAIN tells you about the \"winning\" plan, but there is no\n> visibility into what ought to be the winning plan but isn't, so no way\n> to see why it isn't. 
So you first have to use our existing non-hint\n> hints (enable_*, doing weird things with cost_*, CTE stuff) to trick\n> it into using the plan I want it to use, before I can figure out why\n> it isn't using it, before I could figure out what hints of the style\n> you are suggesting to supply to get it to use it.\n\nI'm sure that happens too, but my gut feeling is that more often the \nEXPLAIN ANALYZE output reveals a bad estimate somewhere in the plan, and \nthe planner chooses a bad plan based on the bad estimate. If you hint \nthe planner by giving a better estimate for where the estimator got it \nwrong, the planner will choose the desired plan.\n\n- Heikki\n\n", "msg_date": "Thu, 22 Nov 2012 11:10:57 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "Le mercredi 21 novembre 2012 17:34:02, Craig James a écrit :\n> On Wed, Nov 21, 2012 at 5:42 AM, Kevin Grittner <[email protected]> wrote:\n> > It's a tough problem. Disguising and not documenting the available\n> > optimizer hints leads to more reports on where the optimizer should\n> > be smarter, and has spurred optimizer improvements. ...\n> > Regarding the above-mentioned benefits we would stand to lose by\n> > having clear and documented hints, perhaps we could occasionally\n> > solicit input on where people are finding hints useful to get ideas\n> > on where we might want to improve the optimizer. As far as worrying\n> > about people using hints to force a plan which is sub-optimal --\n> > isn't that getting into nanny mode a bit too much?\n> \n> Toward that end, the hint documentation (which is almost always viewed as\n> HTML) could be prefaced by a strong suggestion to post performance\n> questions in this group first, with links to the \"subscribe\" page and the\n> \"how to report performance problems\" FAQ. The hint documentation could even\n> be minimalistic; suggest to developers that they should post their\n> problematic queries here before resorting to hints. That would give the\n> experts an opportunity to provide the normal advice. The correct hint\n> syntax would be suggested only when all other avenues failed.\n\nWe have hooks in PostgreSQL. We already have at least one extension which is \nusing that to change the planner behavior.\n\nWe can have a bit more hooks and try to improve the cost estimate, this part \nof the code is known to be built by reports and human estimations, also the \n9.2 version got heavy modifications in this area. \n\nLet the 'Hints' be inside an extension thus we are able to track them and fix \nthe planner/costestimate issues.\n\nI don't see why PostgreSQL needs 'Hints' *in-core*.\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation", "msg_date": "Fri, 23 Nov 2012 11:05:52 +0100", "msg_from": "=?iso-8859-15?q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On Fri, Nov 23, 2012 at 3:05 AM, Cédric Villemain\n<[email protected]> wrote:\n> Le mercredi 21 novembre 2012 17:34:02, Craig James a écrit :\n>> On Wed, Nov 21, 2012 at 5:42 AM, Kevin Grittner <[email protected]> wrote:\n>> > It's a tough problem. Disguising and not documenting the available\n>> > optimizer hints leads to more reports on where the optimizer should\n>> > be smarter, and has spurred optimizer improvements. 
...\n>> > Regarding the above-mentioned benefits we would stand to lose by\n>> > having clear and documented hints, perhaps we could occasionally\n>> > solicit input on where people are finding hints useful to get ideas\n>> > on where we might want to improve the optimizer. As far as worrying\n>> > about people using hints to force a plan which is sub-optimal --\n>> > isn't that getting into nanny mode a bit too much?\n>>\n>> Toward that end, the hint documentation (which is almost always viewed as\n>> HTML) could be prefaced by a strong suggestion to post performance\n>> questions in this group first, with links to the \"subscribe\" page and the\n>> \"how to report performance problems\" FAQ. The hint documentation could even\n>> be minimalistic; suggest to developers that they should post their\n>> problematic queries here before resorting to hints. That would give the\n>> experts an opportunity to provide the normal advice. The correct hint\n>> syntax would be suggested only when all other avenues failed.\n>\n> We have hooks in PostgreSQL. We already have at least one extension which is\n> using that to change the planner behavior.\n>\n> We can have a bit more hooks and try to improve the cost estimate, this part\n> of the code is known to be built by reports and human estimations, also the\n> 9.2 version got heavy modifications in this area.\n>\n> Let the 'Hints' be inside an extension thus we are able to track them and fix\n> the planner/costestimate issues.\n>\n> I don't see why PostgreSQL needs 'Hints' *in-core*.\n\nHere here! PostgreSQL is well known for its extensibility and this is\nthe perfect place for hints. That way they can get worked on without\nbecoming a crutch for every user and forcing the backend developers to\nsupport what may or may not be a good idea syntax wise. After a few\ndifferent people have banged some code out to make workable hint\nsyntaxes for their own use maybe then it will be time to revisit\nadding hints to core.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 16:42:10 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 27/11/2012 3:42 PM, Scott Marlowe wrote:\n\n> Here here! PostgreSQL is well known for its extensibility and this is\n> the perfect place for hints.\nI agree with the sentiment and your concerns. However, this doesn't \nsolve the CTE problem.\n\nSome people are relying on the planner's inability to push conditions \ninto / pull conditions out of CTEs, and otherwise re-arrange them. If \nsupport for optimising into eligible CTEs (ie CTE terms that contain \nonly SELECT or VALUES and call no VOLATILE functions) then these \napplications will potentially encounter serious performance regressions.\n\nShould this feature never be added to Pg, making it different and \nincompatible with other DBs that implement CTE optimisation, just \nbecause some people are using it for a hacky hint like OFFSET 0?\n\nShould these applications just be broken by the update, with people told \nto add `OFFSET 0` or load some not-yet-existing hints module after \nreporting the performance issue to the list?\n\nI don't think either of those are acceptable. 
Sooner or later somebody's \ngoing to want to add CTE optimisation, and I don't think that \"you \ncan't\" or \"great, we'll do it and break everything\" are acceptable \nresponses to any proposed patch someone might come up with to add that.\n\nA GUC might be OK, as apps can always SET it before problem queries or \nnot-yet-ported code. It'd probably reduce the rate at which people fixed \ntheir code considerably, though, going by past experience with \nstandard_conforming_strings, etc, but it'd work.\n\n--\nCraig Ringer\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 18:17:36 -0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On Tue, Nov 27, 2012 at 7:17 PM, Craig Ringer <[email protected]> wrote:\n> On 27/11/2012 3:42 PM, Scott Marlowe wrote:\n>\n>> Here here! PostgreSQL is well known for its extensibility and this is\n>> the perfect place for hints.\n>\n> I agree with the sentiment and your concerns. However, this doesn't solve\n> the CTE problem.\n>\n> Some people are relying on the planner's inability to push conditions into /\n> pull conditions out of CTEs, and otherwise re-arrange them. If support for\n> optimising into eligible CTEs (ie CTE terms that contain only SELECT or\n> VALUES and call no VOLATILE functions) then these applications will\n> potentially encounter serious performance regressions.\n>\n> Should this feature never be added to Pg, making it different and\n> incompatible with other DBs that implement CTE optimisation, just because\n> some people are using it for a hacky hint like OFFSET 0?\n\nI'm strictly talking about any hinting mechanism being added being an\nextension. Fixing the planner so that optimizations can get cross the\nCTE boundary seems the domain of back end hackers, not extensions.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 19:26:20 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" }, { "msg_contents": "On 28/11/12 15:17, Craig Ringer wrote:\n> On 27/11/2012 3:42 PM, Scott Marlowe wrote:\n>\n>> Here here! PostgreSQL is well known for its extensibility and this is\n>> the perfect place for hints.\n> I agree with the sentiment and your concerns. However, this doesn't \n> solve the CTE problem.\n>\n> Some people are relying on the planner's inability to push conditions \n> into / pull conditions out of CTEs, and otherwise re-arrange them. If \n> support for optimising into eligible CTEs (ie CTE terms that contain \n> only SELECT or VALUES and call no VOLATILE functions) then these \n> applications will potentially encounter serious performance regressions.\n>\n> Should this feature never be added to Pg, making it different and \n> incompatible with other DBs that implement CTE optimisation, just \n> because some people are using it for a hacky hint like OFFSET 0?\n>\n> Should these applications just be broken by the update, with people \n> told to add `OFFSET 0` or load some not-yet-existing hints module \n> after reporting the performance issue to the list?\n>\n> I don't think either of those are acceptable. 
Sooner or later \n> somebody's going to want to add CTE optimisation, and I don't think \n> that \"you can't\" or \"great, we'll do it and break everything\" are \n> acceptable responses to any proposed patch someone might come up with \n> to add that.\n>\n> A GUC might be OK, as apps can always SET it before problem queries or \n> not-yet-ported code. It'd probably reduce the rate at which people \n> fixed their code considerably, though, going by past experience with \n> standard_conforming_strings, etc, but it'd work.\n>\n> -- \n> Craig Ringer\n>\n>\nI think it would be best to be something in the SQL for SELECT, as:\n\n 1. One is more likely to find it by looking up the documentation for SELECT\n\n 2. It could allow selective application within a SELECT: one could have\n several queries within the WITH clause: where all except one might\n benefit for optimisation, and the exception might cause problems\n\nI have suggested a couple possible syntax paterns, but there may well be \nbetter alternative syntaxes.\n\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 28/11/12 15:17, Craig Ringer wrote:\n\nOn\n 27/11/2012 3:42 PM, Scott Marlowe wrote:\n \n\nHere here!  PostgreSQL is well known for\n its extensibility and this is\n \n the perfect place for hints.\n \n\n I agree with the sentiment and your concerns. However, this\n doesn't solve the CTE problem.\n \n\n Some people are relying on the planner's inability to push\n conditions into / pull conditions out of CTEs, and otherwise\n re-arrange them. If support for optimising into eligible CTEs (ie\n CTE terms that contain only SELECT or VALUES and call no VOLATILE\n functions) then these applications will potentially encounter\n serious performance regressions.\n \n\n Should this feature never be added to Pg, making it different and\n incompatible with other DBs that implement CTE optimisation, just\n because some people are using it for a hacky hint like OFFSET 0?\n \n\n Should these applications just be broken by the update, with\n people told to add `OFFSET 0` or load some not-yet-existing hints\n module after reporting the performance issue to the list?\n \n\n I don't think either of those are acceptable. Sooner or later\n somebody's going to want to add CTE optimisation, and I don't\n think that \"you can't\" or \"great, we'll do it and break\n everything\" are acceptable responses to any proposed patch someone\n might come up with to add that.\n \n\n A GUC might be OK, as apps can always SET it before problem\n queries or not-yet-ported code. It'd probably reduce the rate at\n which people fixed their code considerably, though, going by past\n experience with standard_conforming_strings, etc, but it'd work.\n \n\n --\n \n Craig Ringer\n \n\n\n\nI think it would be best to be something in\n the SQL for SELECT, as:\n\nOne is more likely to find it by looking up the documentation\n for SELECT\n\n\nIt could allow selective application within a SELECT: one\n could have several queries within the WITH clause: where all\n except one might benefit for optimisation, and the exception\n might cause problems\n\n\n I have suggested a couple possible syntax paterns, but there may\n well be better alternative syntaxes.\n\n\n Cheers,\n Gavin", "msg_date": "Wed, 28 Nov 2012 15:34:12 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hints (was Poor performance using CTE)" } ]
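The thread above refers to the CTE optimization fence and the "OFFSET 0" hack without showing them. The sketch below illustrates both; big_table, created_at and customer_id are hypothetical names, not taken from any message. In 9.2 an outer predicate is not pushed down into a CTE term, and a subquery containing OFFSET 0 is likewise not flattened into the outer query, which is why both constructs end up being used as de facto hints.

    -- Hypothetical table and columns, for illustration only.
    -- The CTE acts as an optimization fence in 9.2: the filter on
    -- customer_id is not pushed down, so "recent" is materialized first.
    WITH recent AS (
        SELECT * FROM big_table
        WHERE created_at > now() - interval '7 days'
    )
    SELECT * FROM recent WHERE customer_id = 42;

    -- The "OFFSET 0" hack referred to in the thread gives the same
    -- fencing effect with a plain subquery: OFFSET 0 keeps the planner
    -- from flattening the subquery into the outer query.
    SELECT *
    FROM (SELECT * FROM big_table
          WHERE created_at > now() - interval '7 days'
          OFFSET 0) AS fenced
    WHERE customer_id = 42;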
[ { "msg_contents": "Hello,\n\nThis is a performance question that has held me occupied for quite some \ntime now,\n\nThe following join is a somewhat slow query:\n\n(np_artikel, sm_artikel_dim are views and sm_orderrad_* are tables )\n\n\nxtest=# explain analyze verbose\nselect * from np_artikel np\njoin sm_artikel_dim dim on np.artikelid = dim.artikelid\njoin sm_orderrad ord on ord.artikelid = np.artikelid\nJOIN sm_orderrad_storlek STL ON ORD.ordradnr = STL.ordradnr\nWHERE STL.BATCHNR = 3616912 AND STL.ORDRADNR = 3 AND ORD.BATCHNR=3616912;\n\nSee: http://explain.depesz.com/s/stI\n\n Total runtime: 47748.786 ms\n(140 rows)\n\n\n\nThis is somewhat strange - beacause i look for i single order-row in a \nspecific order-batch which only returns one article-id. Please see the \nfollowing three questions.\n\n\n\n\nxtest=# SELECT distinct artikelid FROM sm_orderrad ORD JOIN \nsm_orderrad_storlek STL ON ORD.ordradnr = STL.ordradnr WHERE STL.BATCHNR \n= 3616912 AND STL.ORDRADNR = 3 AND ORD.BATCHNR=3616912;\n artikelid\n-----------\n 301206\n(1 row)\n\nxtest=# explain analyze verbose SELECT distinct artikelid FROM \nsm_orderrad ORD JOIN sm_orderrad_storlek STL ON ORD.ordradnr = \nSTL.ordradnr WHERE STL.BATCHNR = 3616912 AND STL.ORDRADNR = 3 AND \nORD.BATCHNR=3616912;\n\nSee: http://explain.depesz.com/s/kI2\n\n Total runtime: 0.256 ms\n(13 rows)\n\nxtest=# explain analyze verbose select * from np_artikel np join \nsm_artikel_dim dim on np.artikelid = dim.artikelid where np.artikelid \n=301206;\n\nSee: http://explain.depesz.com/s/fFN\n\n Total runtime: 2.563 ms\n(99 rows)\n\n\n\n\nGetting the same result from a question where I use a fixed article-id \nis about 23 000 times faster .....\n\nPerhaps if use a subquery?\n\n\n\n\nxtest=# explain analyze select * from np_artikel np join sm_artikel_dim \ndim on np.artikelid = dim.artikelid where np.artikelid in ( SELECT \ndistinct artikelid FROM sm_orderrad ORD JOIN sm_orderrad_storlek STL ON \nORD.ordradnr = STL.ordradnr WHERE STL.BATCHNR = 3616912 AND STL.ORDRADNR \n= 3 AND ORD.BATCHNR=3616912);\n\nSee:http://explain.depesz.com/s/wcD )\n\n Total runtime: 45542.462 ms\n(90 rows)\n\n\n\nNo, not much luck there either ..\n\nCTE's are cool, or so I've heard atleast ...\n\n\n\nxtest=# explain analyze verbose\nWITH orders AS ( SELECT distinct artikelid FROM sm_orderrad ORD JOIN \nsm_orderrad_storlek STL ON ORD.ordradnr = STL.ordradnr WHERE STL.BATCHNR \n= 3616912 AND STL.ORDRADNR = 3 AND ORD.BATCHNR=3616912) \n\n select * from np_artikel np\n join sm_artikel_dim dim on np.artikelid = dim.artikelid\n join orders on np.artikelid=orders.artikelid;\n\nSee: http://explain.depesz.com/s/1a2\n\n Total runtime: 44966.271 ms\n(145 rows)\n\n\n\nBut they aren't much faster than a join, obviously.\n\nMy question is the following: Would it be possible to rewrite the query \nin such a way or use some kind of server-setting/tuning so it will get \nas fast as when I query with a single article-id as argument?\n\n\n-- \n+46 734 307 163 (mobile)\nwww.lodon.se\n\nBes�ksadress:\nLodon AB\nVingalandsgatan 8\n417 63 G�teborg\n\n", "msg_date": "Thu, 22 Nov 2012 13:06:01 +0100", "msg_from": "Niklas Paulsson <[email protected]>", "msg_from_op": true, "msg_subject": "SQL performance question" } ]
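The question above is left unanswered in this excerpt. Its own timings (the batch lookup alone runs in ~0.3 ms, and the views filtered by a literal artikelid in ~2.5 ms) suggest one hedged workaround: run the lookup first and feed the resulting id(s) back in as literals, from the application or a PL/pgSQL wrapper, so the planner works with a concrete value. The sketch below reuses the names and constants from the thread; it is a workaround, not a fix for the underlying misestimate.

    -- Step 1: the cheap lookup from the thread (~0.3 ms there).
    SELECT DISTINCT ord.artikelid
    FROM sm_orderrad ord
    JOIN sm_orderrad_storlek stl ON ord.ordradnr = stl.ordradnr
    WHERE stl.batchnr = 3616912
      AND stl.ordradnr = 3
      AND ord.batchnr = 3616912;

    -- Step 2: query the views with the id(s) returned by step 1 as
    -- literals (~2.5 ms in the thread), e.g. issued from application code.
    SELECT *
    FROM np_artikel np
    JOIN sm_artikel_dim dim ON np.artikelid = dim.artikelid
    WHERE np.artikelid = 301206;  -- value obtained in step 1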
[ { "msg_contents": "Hi,\n\n \n\nI am using PostgreSQL 9.1.5 for Data warehousing and OLAP puposes. Data size\nis around 100 GB and I have tuned my PostgreSQL accordingly still I am\nfacing performance issues. The query performance is too low despite tables\nbeing properly indexed and are vacuumed and analyzed at regular basis. CPU\nusage never exceeded 15% even at peak usage times. Kindly guide me through\nif there are any mistakes in setting configuration parameters. Below are my\nsystem specs and please find attached my postgresql configuration parameters\nfor current system.\n\n \n\nOS: Windows Server 2008 R2 Standard\n\nManufacturer: IBM\n\nMode: System X3250 M3\n\nProcessor: Intel (R) Xeon (R) CPU X3440 @ 2.53\nGHz\n\nRam: 6 GB\n\nOS Type: 64 bit\n\n \n\nThanks in advance\n\n \n\nSyed Asif Tanveer\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 27 Nov 2012 12:47:03 +0500", "msg_from": "\"Syed Asif Tanveer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres configuration for 8 CPUs, 6 GB RAM" }, { "msg_contents": "On 27.11.2012 09:47, Syed Asif Tanveer wrote:\n> I am using PostgreSQL 9.1.5 for Data warehousing and OLAP puposes. Data size\n> is around 100 GB and I have tuned my PostgreSQL accordingly still I am\n> facing performance issues. The query performance is too low despite tables\n> being properly indexed and are vacuumed and analyzed at regular basis. CPU\n> usage never exceeded 15% even at peak usage times. Kindly guide me through\n> if there are any mistakes in setting configuration parameters. Below are my\n> system specs and please find attached my postgresql configuration parameters\n> for current system.\n\nThe configuration looks OK to me at a quick glance. I'd suggest looking \nat the access plans of the queries that are too slow (ie. EXPLAIN \nANALYZE). How low is \"too low\", and how fast do the queries need to be? \nWhat kind of an I/O system does the server have? See also \nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 15:43:39 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 8 CPUs, 6 GB RAM" }, { "msg_contents": "\nOn 11/27/2012 02:47 AM, Syed Asif Tanveer wrote:\n>\n> Hi,\n>\n> I am using PostgreSQL 9.1.5 for Data warehousing and OLAP puposes. \n> Data size is around 100 GB and I have tuned my PostgreSQL accordingly \n> still I am facing performance issues. The query performance is too low \n> despite tables being properly indexed and are vacuumed and analyzed at \n> regular basis. CPU usage never exceeded 15% even at peak usage times. \n> Kindly guide me through if there are any mistakes in setting \n> configuration parameters. Below are my system specs and please find \n> attached my postgresql configuration parameters for current system.\n>\n>\n\n\nThere is at least anecdotal evidence that Windows servers degrade when \nshared_buffers is set above 512Mb. Personally, I would not recommend \nusing Windows for a high performance server.\n\nAlso, it makes no sense to have a lower setting for maintenance_work_mem \nthan for work_mem. 
You would normally expect maintenance_work_mem to be \nhigher - sometimes much higher.\n\nApart from that, it's going to be impossible to tell what your problem \nis without seeing actual slow running queries and their corresponding \nexplain analyse output.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 08:56:12 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 8 CPUs, 6 GB RAM" }, { "msg_contents": "On Tue, Nov 27, 2012 at 12:47 AM, Syed Asif Tanveer\n<[email protected]> wrote:\n> Hi,\n>\n>\n>\n> I am using PostgreSQL 9.1.5 for Data warehousing and OLAP puposes. Data size\n> is around 100 GB and I have tuned my PostgreSQL accordingly still I am\n> facing performance issues. The query performance is too low despite tables\n> being properly indexed and are vacuumed and analyzed at regular basis. CPU\n> usage never exceeded 15% even at peak usage times. Kindly guide me through\n> if there are any mistakes in setting configuration parameters. Below are my\n> system specs and please find attached my postgresql configuration parameters\n> for current system.\n>\n\nI notice that you've got autovac nap time of 60 minutes, so it's\npossible you've managed to bloat your tables a fair bit. What do you\nget running the queries from this page:\n\nhttp://wiki.postgresql.org/wiki/Show_database_bloat\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 12:53:45 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 8 CPUs, 6 GB RAM" }, { "msg_contents": "Asif:\n\n1. 6GB is pretty small .... once you work through the issues, adding RAM\nwill probably be a good investment, depending on your time-working set\ncurve.\n\nA quick rule of thumb is this:\n\n- if your cache hit ratio is significantly larger than (cache size / db\nsize) then there is locality of reference among queries, and if the hit\nratio is less than high 90's percent, then there is a high probablility\nthat adding incremental RAM for caching by the OS and/or PG itself will\nmake things significantly better; this applies to both database-wide\naverages and individual slow query types.\n\n- Look for long-running queries spilling merges and sorts to disk; if these\nare a concern then adding RAM and leaving it out of the buffer cache but\nsetting larger work_mem sizes will improve their performance\n\n2. You also need to consider how many queries are running concurrently;\nlimiting the number of concurrent executions to a strict number e.g. by\nplacing the database behind a connection pooler. By avoiding contention for\ndisk head seeking\n\n3. If I/O is a real bottleneck, especially random access, you might\nconsider more drives\n\n4. If the data access is truly all over the place, or you have lots of\nqueries which touch large chucnks of the data, then depending on your\nbudget, a cheap high RAM machine built from a desktop motherboard which\nwill allow you have e.g. 128GB of RAM in low cost modules and thus have the\nentire DB in RAM is definitely worth considering as a replica server on\nwhich to offload some queries. 
I priced this out at around US$2000 here in\nAmerica using high quality parts.\n\n\nThese performance tweaks are all of course interrelated ... e.g. if the\naccess patterns are amenable to caching, then adding RAM will reduce I/O\nload without any further changes, and item 3. may cease to be a problem.\n\nBe careful of the bottleneck issue ... if you're a long way from the\nperformance you need, then fixing one issue will expose another etc. until\nevery part of the system is quick enough to keep up.\n\nDon't forget that your time is worth money too, and throwing more hardware\nat it is one of many viable strategies.\n\nCheers\nDave\n\nOn Tue, Nov 27, 2012 at 1:53 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Tue, Nov 27, 2012 at 12:47 AM, Syed Asif Tanveer\n> <[email protected]> wrote:\n> > Hi,\n> >\n> >\n> >\n> > I am using PostgreSQL 9.1.5 for Data warehousing and OLAP puposes. Data\n> size\n> > is around 100 GB and I have tuned my PostgreSQL accordingly still I am\n> > facing performance issues. The query performance is too low despite\n> tables\n> > being properly indexed and are vacuumed and analyzed at regular basis.\n> CPU\n> > usage never exceeded 15% even at peak usage times. Kindly guide me\n> through\n> > if there are any mistakes in setting configuration parameters. Below are\n> my\n> > system specs and please find attached my postgresql configuration\n> parameters\n> > for current system.\n> >\n>\n> I notice that you've got autovac nap time of 60 minutes, so it's\n> possible you've managed to bloat your tables a fair bit. What do you\n> get running the queries from this page:\n>\n> http://wiki.postgresql.org/wiki/Show_database_bloat\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAsif: 1. 6GB is pretty small .... once you work through the issues, adding RAM will probably be a good investment, depending on your time-working set curve. A quick rule of thumb is this:- if your cache hit ratio is significantly larger than (cache size / db size) then there is locality of reference among queries, and if the hit ratio is less than high 90's percent, then there is a high probablility that adding incremental RAM for caching by the OS and/or PG itself will make things significantly better; this applies to both database-wide averages and individual slow query types. \n- Look for long-running queries spilling merges and sorts to disk; if these are a concern then adding RAM and leaving it out of the buffer cache but setting larger work_mem sizes will improve their performance\n2. You also need to consider how many queries are running concurrently; limiting the number of concurrent executions to a strict number e.g. by placing the database behind a connection pooler. By avoiding contention for disk head seeking\n3. If I/O is a real bottleneck, especially random access, you might consider more drives4. If the data access is truly all over the place, or you have lots of queries which touch large chucnks of the data, then depending on your budget, a cheap high RAM machine built from a desktop motherboard which will allow you have e.g. 128GB of RAM in low cost modules and thus have the entire DB in RAM is definitely worth considering as a replica server on which to offload some queries. I priced this out at around US$2000 here in America using high quality parts.\nThese performance tweaks are all of course interrelated ... e.g. 
if the \naccess patterns are amenable to caching, then adding RAM will reduce I/O\n load without any further changes, and item 3. may cease to be a \nproblem.Be careful of the bottleneck issue ... if you're a long \nway from the performance you need, then fixing one issue will expose \nanother etc. until every part of the system is quick enough to keep up.Don't forget that your time is worth money too, and throwing more hardware at it is one of many viable strategies.CheersDave\nOn Tue, Nov 27, 2012 at 1:53 PM, Scott Marlowe <[email protected]> wrote:\nOn Tue, Nov 27, 2012 at 12:47 AM, Syed Asif Tanveer\n<[email protected]> wrote:\n> Hi,\n>\n>\n>\n> I am using PostgreSQL 9.1.5 for Data warehousing and OLAP puposes. Data size\n> is around 100 GB and I have tuned my PostgreSQL accordingly still I am\n> facing performance issues. The query performance is too low despite tables\n> being properly indexed and are vacuumed and analyzed at regular basis. CPU\n> usage never exceeded 15% even at peak usage times. Kindly guide me through\n> if there are any mistakes in setting configuration parameters. Below are my\n> system specs and please find attached my postgresql configuration parameters\n> for current system.\n>\n\nI notice that you've got autovac nap time of 60 minutes, so it's\npossible you've managed to bloat your tables a fair bit.  What do you\nget running the queries from this page:\n\nhttp://wiki.postgresql.org/wiki/Show_database_bloat\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 27 Nov 2012 18:29:07 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres configuration for 8 CPUs, 6 GB RAM" } ]
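To make the advice in the thread above concrete: the cache hit ratio Dave describes can be read from the standard statistics view pg_stat_database, and the settings the other replies call out can be adjusted along the following lines. The values shown are illustrative starting points for the machine described (Windows, 6 GB RAM), not tuned recommendations.

    -- Rough database-wide cache hit ratio:
    SELECT sum(blks_hit)::numeric
           / NULLIF(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
    FROM pg_stat_database;

    -- postgresql.conf changes suggested by the replies (illustrative values):
    --   shared_buffers = 512MB        -- anecdotal ceiling on Windows
    --   maintenance_work_mem = 512MB  -- should not be lower than work_mem
    --   autovacuum_naptime = 1min     -- a 60min naptime risks table bloat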
[ { "msg_contents": "I need to delete about 1.5 million records from a table and reload it in\none transaction. The usual advice when loading with inserts seems to be\ngroup them into transactions of around 1k records. Committing at that\npoint would leave the table in an inconsistent state. Would issuing a\nsavepoint every 1k or so records negate whatever downside there is to\nkeeping a transaction open for all 1.5 million records, or just add more\noverhead?\n\nThe data to reload the table is coming from a Perl DBI connection to a\ndifferent database (not PostgreSQL) so I'm not sure the COPY alternative\napplies here.\n\nAny suggestions are welcome.\n\n\nMike\n\nI need to delete about 1.5 million records from a table and reload it in one transaction.  The usual advice when loading with inserts seems to be group them into transactions of around 1k records.  Committing at that point would leave the table in an inconsistent state.  Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?\nThe data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.Any suggestions are welcome.\nMike", "msg_date": "Tue, 27 Nov 2012 16:04:42 -0600", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": true, "msg_subject": "Savepoints in transactions for speed?" }, { "msg_contents": "On 27/11/12 22:04, Mike Blackwell wrote:\n> I need to delete about 1.5 million records from a table and reload it \n> in one transaction.\n\n> The data to reload the table is coming from a Perl DBI connection to a \n> different database (not PostgreSQL) so I'm not sure the COPY \n> alternative applies here.\nNo reason why it shouldn't.\n\nhttps://metacpan.org/module/DBD::Pg#COPY-support\n\n--\n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 22:52:20 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "\nOn Nov 27, 2012, at 2:04 PM, Mike Blackwell <[email protected]> wrote:\n\n> I need to delete about 1.5 million records from a table and reload it in one transaction. The usual advice when loading with inserts seems to be group them into transactions of around 1k records. Committing at that point would leave the table in an inconsistent state. \n\nI'd probably just do the whole thing in one transaction. \n\nDo you have specific reasons you want to avoid a long transaction, or just relying on rules of thumb? Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.\n\nLong running transactions will interfere with vacuuming, but inserting a couple of million rows shouldn't take that long.\n\n> Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?\n\n\nSavepoints are going to increase overhead and have no effect on the length of the transaction. 
If you want to catch errors and not have to redo the entire transaction, they're great, but that's about it.\n\n> The data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.\n\nCOPY works nicely from perl:\n\n$dbh->do(\"COPY foo FROM STDIN\");\n$dbh->pg_putcopydata(\"foo\\tbar\\tbaz\\n\");\n$dbh->pg_putcopyend();\n\nThe details are in DBD::Pg. I use this a lot for doing big-ish (tens of millions of rows) bulk inserts. It's not as fast as you can get, but it's probably as fast as you can get with perl.\n\nCheers,\n Steve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 15:26:26 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "Mike,\n\nIs there anything that the 1.5 million rows have in common that would allow you to use partitions? if so, you could load the new data into a partition at your leisure, start a transaction, alter the partition table with the old data to no longer inherit from the parent, alter the new partition table to\ninherit from the parent, commit, then drop the old table. This operation would be very fast, the users probably won't even notice. \n\nBob Lunney\n\nOn Nov 27, 2012, at 4:04 PM, Mike Blackwell <[email protected]> wrote:\n\n> I need to delete about 1.5 million records from a table and reload it in one transaction. The usual advice when loading with inserts seems to be group them into transactions of around 1k records. Committing at that point would leave the table in an inconsistent state. Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?\n> \n> The data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.\n> \n> Any suggestions are welcome.\n> \n> \n> Mike\n\nMike,Is there anything that the 1.5 million rows have in common that would allow you to use partitions?  if so, you could load the new data into a partition  at your leisure, start a transaction, alter the partition table with the old data to no longer inherit from the parent, alter the new partition table toinherit from the parent, commit, then drop the old table.  This operation would be very fast, the users probably won't even notice. Bob LunneyOn Nov 27, 2012, at 4:04 PM, Mike Blackwell <[email protected]> wrote:I need to delete about 1.5 million records from a table and reload it in one transaction.  The usual advice when loading with inserts seems to be group them into transactions of around 1k records.  Committing at that point would leave the table in an inconsistent state.  Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?\nThe data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.Any suggestions are welcome.\nMike", "msg_date": "Tue, 27 Nov 2012 18:16:23 -0600", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" 
}, { "msg_contents": "Steve Atkins wrote:\n\n> Postgresql isn't going to run out of resources doing a big transaction,\nin the way some other databases will.\n\nI thought I had read something at one point about keeping the transaction\nsize on the order of a couple thousand because there were issues when it\ngot larger. As that apparently is not an issue I went ahead and tried the\nDELETE and COPY in a transaction. The load time is quite reasonable this\nway.\n\nThanks!\n\nMike\n\n\nSteve Atkins wrote:> Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.\nI thought I had read something at one point about keeping the transaction size on the order of a couple thousand because there were issues when it got larger.  As that apparently is not an issue I went ahead and tried the DELETE and COPY in a transaction.  The load time is quite reasonable this way.\nThanks!Mike", "msg_date": "Tue, 27 Nov 2012 19:08:25 -0600", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell <[email protected]> wrote:\n>\n> > Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.\n>\n> I thought I had read something at one point about keeping the transaction size on the order of a couple thousand because there were issues when it got larger. As that apparently is not an issue I went ahead and tried the DELETE and COPY in a transaction. The load time is quite reasonable this way.\n\nUpdates, are faster if batched, if your business logic allows it,\nbecause it creates less bloat and creates more opportunities for with\nHOT updates. I don't think it applies to inserts, though, and I\nhaven't heard it either.\n\nIn any case, if your business logic doesn't allow it (and your case\nseems to suggest it), there's no point in worrying.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 22:16:28 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Tue, Nov 27, 2012 at 6:26 PM, Steve Atkins <[email protected]> wrote:\n\n>\n> On Nov 27, 2012, at 2:04 PM, Mike Blackwell <[email protected]>\n> wrote:\n>\n> > I need to delete about 1.5 million records from a table and reload it in\n> one transaction. The usual advice when loading with inserts seems to be\n> group them into transactions of around 1k records. Committing at that\n> point would leave the table in an inconsistent state.\n>\n> I'd probably just do the whole thing in one transaction.\n>\n> Do you have specific reasons you want to avoid a long transaction, or just\n> relying on rules of thumb? Postgresql isn't going to run out of resources\n> doing a big transaction, in the way some other databases will.\n>\n> Long running transactions will interfere with vacuuming, but inserting a\n> couple of million rows shouldn't take that long.\n>\n> > Would issuing a savepoint every 1k or so records negate whatever\n> downside there is to keeping a transaction open for all 1.5 million\n> records, or just add more overhead?\n>\n>\n> Savepoints are going to increase overhead and have no effect on the length\n> of the transaction. 
If you want to catch errors and not have to redo the\n> entire transaction, they're great, but that's about it.\n>\n> > The data to reload the table is coming from a Perl DBI connection to a\n> different database (not PostgreSQL) so I'm not sure the COPY alternative\n> applies here.\n>\n> COPY works nicely from perl:\n>\n> $dbh->do(\"COPY foo FROM STDIN\");\n> $dbh->pg_putcopydata(\"foo\\tbar\\tbaz\\n\");\n> $dbh->pg_putcopyend();\n>\n> The details are in DBD::Pg. I use this a lot for doing big-ish (tens of\n> millions of rows) bulk inserts. It's not as fast as you can get, but it's\n> probably as fast as you can get with perl.\n>\n> Cheers,\n> Steve\n\n\nI do this as well - insert a few million rows into a table using the\nDBI::Pg copy interface. It works well.\n\nI ended up batching the copies so that each COPY statement only does a few\nhundred thousand at a time, but it's all one transaction.\n\nThe batching was necessary because of an idiosyncrasy of COPY in Pg 8.1:\neach COPY statement's contents was buffered in a malloc'd space, and if\nthere were several million rows buffered up, the allocated virtual memory\ncould get quite large - as in several GB. It plus the buffer pool\nsometimes exceeded the amount of RAM I had available at that time (several\nyears ago), with bad effects on performance.\n\nThis may have been fixed since then, or maybe RAM's gotten big enough that\nit's not a problem.\n\nDan Franklin\n\nOn Tue, Nov 27, 2012 at 6:26 PM, Steve Atkins <[email protected]> wrote:\n\nOn Nov 27, 2012, at 2:04 PM, Mike Blackwell <[email protected]> wrote:\n\n> I need to delete about 1.5 million records from a table and reload it in one transaction.  The usual advice when loading with inserts seems to be group them into transactions of around 1k records.  Committing at that point would leave the table in an inconsistent state.\n\nI'd probably just do the whole thing in one transaction.\n\nDo you have specific reasons you want to avoid a long transaction, or just relying on rules of thumb? Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.\n\nLong running transactions will interfere with vacuuming, but inserting a couple of million rows shouldn't take that long.\n\n>  Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?\n\n\nSavepoints are going to increase overhead and have no effect on the length of the transaction. If you want to catch errors and not have to redo the entire transaction, they're great, but that's about it.\n\n> The data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.\n\nCOPY works nicely from perl:\n\n$dbh->do(\"COPY foo FROM STDIN\");\n$dbh->pg_putcopydata(\"foo\\tbar\\tbaz\\n\");\n$dbh->pg_putcopyend();\n\nThe details are in DBD::Pg. I use this a lot for doing big-ish (tens of millions of rows) bulk inserts. It's not as fast as you can get, but it's probably as fast as you can get with perl.\n\nCheers,\n  Steve I do this as well - insert a few million rows into a table using the DBI::Pg copy interface.  
It works well.I ended up batching the copies so that each COPY statement only does a few hundred thousand at a time, but it's all one transaction.\nThe batching was necessary because of an idiosyncrasy of COPY in Pg 8.1: each COPY statement's contents was buffered in a malloc'd space, and if there were several million rows buffered up, the allocated virtual memory could get quite large - as in several GB.  It plus the buffer pool sometimes exceeded the amount of RAM I had available at that time (several years ago), with bad effects on performance.\nThis may have been fixed since then, or maybe RAM's gotten big enough that it's not a problem.Dan Franklin", "msg_date": "Tue, 27 Nov 2012 22:08:44 -0500", "msg_from": "\"Franklin, Dan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "Savepoint are not created for performance. If you have one very long running transactions that fails in the end, it will all be rolled back. So be pretty sure about your dataquality or use safepoints.\n \t\t \t \t\t \n\n\n\n\nSavepoint are not created for performance.  If you have one very long running transactions that fails in the end, it will all be rolled back. So be pretty sure about your dataquality or use safepoints.", "msg_date": "Wed, 28 Nov 2012 06:03:20 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Tue, Nov 27, 2012 at 7:16 PM, Claudio Freire <[email protected]>wrote:\n\n> Updates, are faster if batched, if your business logic allows it,\n> because it creates less bloat and creates more opportunities for with\n> HOT updates. I don't think it applies to inserts, though, and I\n> haven't heard it either.\n>\n\nAh. That must have been what I'd half-remembered. Thanks for the\nclarification.\n\nMike\n\nOn Tue, Nov 27, 2012 at 7:16 PM, Claudio Freire <[email protected]> wrote:\nUpdates, are faster if batched, if your business logic allows it,\nbecause it creates less bloat and creates more opportunities for with\nHOT updates. I don't think it applies to inserts, though, and I\nhaven't heard it either.Ah.  That must have been what I'd half-remembered.  Thanks for the clarification.Mike", "msg_date": "Wed, 28 Nov 2012 09:18:20 -0600", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "Commitmarks are written to disk after each transaction. So transactionsize has impact on performance. \n\nDate: Wed, 28 Nov 2012 09:18:20 -0600\nSubject: Re: [PERFORM] Savepoints in transactions for speed?\nFrom: [email protected]\nTo: [email protected]\nCC: [email protected]\n\n\n\nOn Tue, Nov 27, 2012 at 7:16 PM, Claudio Freire <[email protected]> wrote:\n\nUpdates, are faster if batched, if your business logic allows it,\n\nbecause it creates less bloat and creates more opportunities for with\n\nHOT updates. I don't think it applies to inserts, though, and I\n\nhaven't heard it either.\n\nAh. That must have been what I'd half-remembered. Thanks for the clarification.\nMike \t\t \t \t\t \n\n\n\n\nCommitmarks are written to disk after each transaction. So transactionsize has impact on performance. 
Date: Wed, 28 Nov 2012 09:18:20 -0600Subject: Re: [PERFORM] Savepoints in transactions for speed?From: [email protected]: [email protected]: [email protected] Tue, Nov 27, 2012 at 7:16 PM, Claudio Freire <[email protected]> wrote:\nUpdates, are faster if batched, if your business logic allows it,\nbecause it creates less bloat and creates more opportunities for with\nHOT updates. I don't think it applies to inserts, though, and I\nhaven't heard it either.Ah.  That must have been what I'd half-remembered.  Thanks for the clarification.Mike", "msg_date": "Wed, 28 Nov 2012 15:23:21 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Tue, 2012-11-27 at 22:16 -0300, Claudio Freire wrote:\n> Updates, are faster if batched, if your business logic allows it,\n> because it creates less bloat and creates more opportunities for with\n> HOT updates. I don't think it applies to inserts, though, and I\n> haven't heard it either.\n\nHuge updates (e.g. UPDATE with no WHERE clause) are less likely to\nbenefit from HOT. HOT has two main optimizations:\n\n1. Remove dead tuples faster without waiting for VACUUM -- this only\nworks if the transaction that updated/deleted the tuple actually\nfinished (otherwise the tuple can't be removed yet), so it only benefits\nthe *next* update to come along. But if it's one big update, then VACUUM\nis probably just as good at cleaning up the space.\n\n2. Doesn't make new index entries for the new tuple; reuses the old\nindex entries -- this only works if the update is on the same page, but\nlarge updates tend to fill pages up (because of the buildup of dead\ntuples) and force new to go to new pages.\n\nHOT is essentially designed for lots of small updates, which didn't\nperform well before PG 8.3.\n\nBatching of inserts/updates/deletes has a big benefit over separate\ntransactions, but only up to a point, after which it levels off. I'm not\nsure exactly when that point is, but after that, the downsides of\nkeeping a transaction open (like inability to remove the previous\nversion of an updated tuple) take over.\n\nRegards,\n\tJeff Davis\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 15:18:42 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Tue, 2012-11-27 at 16:04 -0600, Mike Blackwell wrote:\n> I need to delete about 1.5 million records from a table and reload it\n> in one transaction. The usual advice when loading with inserts seems\n> to be group them into transactions of around 1k records. Committing\n> at that point would leave the table in an inconsistent state. Would\n> issuing a savepoint every 1k or so records negate whatever downside\n> there is to keeping a transaction open for all 1.5 million records, or\n> just add more overhead?\n\nA large transaction isn't really a big problem for postgres, and 1.5M\nrecords should be processed quickly anyway.\n\nThe main problem with a long-running delete or update transaction is\nthat the dead tuples (deleted tuples or the old version of an updated\ntuple) can't be removed until the transaction finishes. That can cause\ntemporary \"bloat\", but 1.5M records shouldn't be noticeable. 
\n\nAdding subtransactions into the mix won't help, but probably won't hurt,\neither. The transaction will still run just as long, and you still can't\ndelete the tuples ahead of time (unless you abort a subtransaction). If\nyou *do* use subtransactions, make sure to release them as quickly as\nyou create them (don't just use ROLLBACK TO, that still leaves the\nsavepoint there); having 1500 open subtransactions might cause\nperformance problems elsewhere.\n\nRegards,\n\tJeff Davis\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 15:28:51 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <[email protected]> wrote:\n>\n> The main problem with a long-running delete or update transaction is\n> that the dead tuples (deleted tuples or the old version of an updated\n> tuple) can't be removed until the transaction finishes. That can cause\n> temporary \"bloat\", but 1.5M records shouldn't be noticeable.\n\nNot really that fast if you have indices (and who doesn't have a PK or two).\n\nI've never been able to update (update) 2M rows in one transaction in\nreasonable times (read: less than several hours) without dropping\nindices. Doing it in batches is way faster if you can't drop the\nindices, and if you can leverage HOT updates.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 00:48:32 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire <[email protected]>wrote:\n\n> On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <[email protected]> wrote:\n> >\n> > The main problem with a long-running delete or update transaction is\n> > that the dead tuples (deleted tuples or the old version of an updated\n> > tuple) can't be removed until the transaction finishes. That can cause\n> > temporary \"bloat\", but 1.5M records shouldn't be noticeable.\n>\n> Not really that fast if you have indices (and who doesn't have a PK or\n> two).\n>\n> I've never been able to update (update) 2M rows in one transaction in\n> reasonable times (read: less than several hours) without dropping\n> indices. Doing it in batches is way faster if you can't drop the\n> indices, and if you can leverage HOT updates.\n\n\nWhat I'm trying at this point is:\n\nBEGIN;\nDROP INDEX -- only one unique index exists\nDELETE FROM table;\nCOPY table FROM STDIN;\nCOMMIT;\nCREATE INDEX CONCURRENTLY;\n\nDo I understand correctly that DROP/CREATE index are not transactional, and\nthus the index will disappear immediately for other transactions? Am I\nbetter off in that case moving the DROP INDEX outside the transaction?\n\nThe access pattern for the table is such that I can afford the occasional\nstray hit without an index during the reload time. 
It's been pretty quick\nusing the above.\n\nMike\n\nOn Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire <[email protected]> wrote:\nOn Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <[email protected]> wrote:\n\n>\n> The main problem with a long-running delete or update transaction is\n> that the dead tuples (deleted tuples or the old version of an updated\n> tuple) can't be removed until the transaction finishes. That can cause\n> temporary \"bloat\", but 1.5M records shouldn't be noticeable.\n\nNot really that fast if you have indices (and who doesn't have a PK or two).\n\nI've never been able to update (update) 2M rows in one transaction in\nreasonable times (read: less than several hours) without dropping\nindices. Doing it in batches is way faster if you can't drop the\nindices, and if you can leverage HOT updates.What I'm trying at this point is:BEGIN;DROP INDEX -- only one unique index existsDELETE FROM table;\nCOPY table FROM STDIN;COMMIT;CREATE INDEX CONCURRENTLY;Do I understand correctly that DROP/CREATE index are not transactional, and thus the index will disappear immediately for other transactions?  Am I better off in that case moving the DROP INDEX outside the transaction?\nThe access pattern for the table is such that I can afford the occasional stray hit without an index during the reload time.  It's been pretty quick using the above.Mike", "msg_date": "Thu, 29 Nov 2012 10:38:31 -0600", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, Nov 29, 2012 at 9:38 AM, Mike Blackwell <[email protected]> wrote:\n> On Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire <[email protected]>\n> wrote:\n>>\n>> On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <[email protected]> wrote:\n>> >\n>> > The main problem with a long-running delete or update transaction is\n>> > that the dead tuples (deleted tuples or the old version of an updated\n>> > tuple) can't be removed until the transaction finishes. That can cause\n>> > temporary \"bloat\", but 1.5M records shouldn't be noticeable.\n>>\n>> Not really that fast if you have indices (and who doesn't have a PK or\n>> two).\n>>\n>> I've never been able to update (update) 2M rows in one transaction in\n>> reasonable times (read: less than several hours) without dropping\n>> indices. Doing it in batches is way faster if you can't drop the\n>> indices, and if you can leverage HOT updates.\n>\n>\n> What I'm trying at this point is:\n>\n> BEGIN;\n> DROP INDEX -- only one unique index exists\n> DELETE FROM table;\n> COPY table FROM STDIN;\n> COMMIT;\n> CREATE INDEX CONCURRENTLY;\n>\n> Do I understand correctly that DROP/CREATE index are not transactional, and\n> thus the index will disappear immediately for other transactions? Am I\n> better off in that case moving the DROP INDEX outside the transaction?\n>\n> The access pattern for the table is such that I can afford the occasional\n> stray hit without an index during the reload time. It's been pretty quick\n> using the above.\n\nDrop / create index ARE transactional, like most other things in\npostgresql (only drop / create database and drop / create tablespace\nare non-transactional). Your current sequence will result in the\ntable you are dropping the index on being locked for other\ntransactions until commit or rollback. 
Run two psql sessions and test\nit to see.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 09:54:42 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "Ah. So it does. Testing with two psql sessions locks as you said, and\nmoving the DROP INDEX to a separate transaction give the results I was\nlooking for.\n\nThanks,\nMike\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n* <[email protected]>*\n\n\nOn Thu, Nov 29, 2012 at 10:54 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Nov 29, 2012 at 9:38 AM, Mike Blackwell <[email protected]>\n> wrote:\n> > On Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire <[email protected]>\n> > wrote:\n> >>\n> >> On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <[email protected]> wrote:\n> >> >\n> >> > The main problem with a long-running delete or update transaction is\n> >> > that the dead tuples (deleted tuples or the old version of an updated\n> >> > tuple) can't be removed until the transaction finishes. That can cause\n> >> > temporary \"bloat\", but 1.5M records shouldn't be noticeable.\n> >>\n> >> Not really that fast if you have indices (and who doesn't have a PK or\n> >> two).\n> >>\n> >> I've never been able to update (update) 2M rows in one transaction in\n> >> reasonable times (read: less than several hours) without dropping\n> >> indices. Doing it in batches is way faster if you can't drop the\n> >> indices, and if you can leverage HOT updates.\n> >\n> >\n> > What I'm trying at this point is:\n> >\n> > BEGIN;\n> > DROP INDEX -- only one unique index exists\n> > DELETE FROM table;\n> > COPY table FROM STDIN;\n> > COMMIT;\n> > CREATE INDEX CONCURRENTLY;\n> >\n> > Do I understand correctly that DROP/CREATE index are not transactional,\n> and\n> > thus the index will disappear immediately for other transactions? Am I\n> > better off in that case moving the DROP INDEX outside the transaction?\n> >\n> > The access pattern for the table is such that I can afford the occasional\n> > stray hit without an index during the reload time. It's been pretty\n> quick\n> > using the above.\n>\n> Drop / create index ARE transactional, like most other things in\n> postgresql (only drop / create database and drop / create tablespace\n> are non-transactional). Your current sequence will result in the\n> table you are dropping the index on being locked for other\n> transactions until commit or rollback. Run two psql sessions and test\n> it to see.\n>\n\nAh.  So it does.  
Testing with two psql sessions locks as you said, and moving the DROP INDEX to a separate transaction give the results I was looking for.Thanks,Mike\n__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com\n\nOn Thu, Nov 29, 2012 at 10:54 AM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Nov 29, 2012 at 9:38 AM, Mike Blackwell <[email protected]> wrote:\n> On Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire <[email protected]>\n> wrote:\n>>\n>> On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <[email protected]> wrote:\n>> >\n>> > The main problem with a long-running delete or update transaction is\n>> > that the dead tuples (deleted tuples or the old version of an updated\n>> > tuple) can't be removed until the transaction finishes. That can cause\n>> > temporary \"bloat\", but 1.5M records shouldn't be noticeable.\n>>\n>> Not really that fast if you have indices (and who doesn't have a PK or\n>> two).\n>>\n>> I've never been able to update (update) 2M rows in one transaction in\n>> reasonable times (read: less than several hours) without dropping\n>> indices. Doing it in batches is way faster if you can't drop the\n>> indices, and if you can leverage HOT updates.\n>\n>\n> What I'm trying at this point is:\n>\n> BEGIN;\n> DROP INDEX -- only one unique index exists\n> DELETE FROM table;\n> COPY table FROM STDIN;\n> COMMIT;\n> CREATE INDEX CONCURRENTLY;\n>\n> Do I understand correctly that DROP/CREATE index are not transactional, and\n> thus the index will disappear immediately for other transactions?  Am I\n> better off in that case moving the DROP INDEX outside the transaction?\n>\n> The access pattern for the table is such that I can afford the occasional\n> stray hit without an index during the reload time.  It's been pretty quick\n> using the above.\n\nDrop / create index ARE transactional, like most other things in\npostgresql (only drop / create database and drop / create tablespace\nare non-transactional).  Your current sequence will result in the\ntable you are dropping the index on being locked for other\ntransactions until commit or rollback.  Run two psql sessions and test\nit to see.", "msg_date": "Thu, 29 Nov 2012 12:07:15 -0600", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, Nov 29, 2012 at 8:38 AM, Mike Blackwell <[email protected]> wrote:\n>\n> What I'm trying at this point is:\n>\n> BEGIN;\n> DROP INDEX -- only one unique index exists\n> DELETE FROM table;\n> COPY table FROM STDIN;\n> COMMIT;\n> CREATE INDEX CONCURRENTLY;\n>\n> Do I understand correctly that DROP/CREATE index are not transactional, and\n> thus the index will disappear immediately for other transactions?\n\nThe DROP is transactional.\n\nBut the way it works here is that the index is access exclusively\nlocked when the DROP is encountered (and so is the table) so any other\ntransaction will block on it, even though the index is still there.\n(Transactionality does not inherently demand this behavior, it is just\nthe way PG implements it. For example, it could take a weaker lock at\nthe time DROP is encountered and then escalate it to exclusive only\nduring the commit processing. 
But that would greatly expand the risk\nof deadlock, and would certainly be more complicated to code.)\n\n\n> Am I\n> better off in that case moving the DROP INDEX outside the transaction?\n>\n> The access pattern for the table is such that I can afford the occasional\n> stray hit without an index during the reload time.\n\nIf you don't mind queries doing doing full table scans, and not having\nthe benefit of the unique constraint, for that period, then yes you\nshould move the drop index into a separate transaction.\n\nBut If you do keep the drop index inside the transaction, then you\nwould probably be better off using truncate rather than delete, and\nrebuild the index non-concurrently and move that inside the\ntransaction as well.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 10:09:19 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes <[email protected]> wrote:\n\n>\n> But If you do keep the drop index inside the transaction, then you\n> would probably be better off using truncate rather than delete, and\n> rebuild the index non-concurrently and move that inside the\n> transaction as well.\n>\n>\n\nHmm.... From the 9.2 manual it seems that might not work out so well:\n\nTRUNCATE is not MVCC-safe (see Chapter\n13<http://www.postgresql.org/docs/9.2/static/mvcc.html> for\ngeneral information about MVCC). After truncation, the table will appear\nempty to all concurrent transactions, even if they are using a snapshot\ntaken before the truncation occurred.\n\nIt looks like other transactions could find an empty table while it was\nbeing reloaded under that approach.\n\n\nOn Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes <[email protected]> wrote:\n\nBut If you do keep the drop index inside the transaction, then you\nwould probably be better off using truncate rather than delete, and\nrebuild the index non-concurrently and move that inside the\ntransaction as well.\nHmm....  From the 9.2 manual it seems that might not work out so well:TRUNCATE is not MVCC-safe (see Chapter 13 for general information about MVCC). After truncation, the table will appear empty to all concurrent transactions, even if they are using a snapshot taken before the truncation occurred.\nIt looks like other transactions could find an empty table while it was being reloaded under that approach.", "msg_date": "Thu, 29 Nov 2012 12:14:20 -0600", "msg_from": "Mike Blackwell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, 2012-11-29 at 00:48 -0300, Claudio Freire wrote:\n> Not really that fast if you have indices (and who doesn't have a PK or two).\n> \n> I've never been able to update (update) 2M rows in one transaction in\n> reasonable times (read: less than several hours) without dropping\n> indices. Doing it in batches is way faster if you can't drop the\n> indices, and if you can leverage HOT updates.\n\nI tried a quick test with 2M tuples and 3 indexes over int8, numeric,\nand text (generated data). 
There was also an unindexed bytea column.\nUsing my laptop, a full update of the int8 column (which is indexed,\nforcing cold updates) took less than 4 minutes.\n\nI'm sure there are other issues with real-world workloads, and I know\nthat it's wasteful compared to something that can make use of HOT\nupdates. But unless there is something I'm missing, it's not really\nworth the effort to batch if that is the size of the update.\n\nRegards,\n\tJeff Davis\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 10:32:22 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, Nov 29, 2012 at 3:32 PM, Jeff Davis <[email protected]> wrote:\n> On Thu, 2012-11-29 at 00:48 -0300, Claudio Freire wrote:\n>> Not really that fast if you have indices (and who doesn't have a PK or two).\n>>\n>> I've never been able to update (update) 2M rows in one transaction in\n>> reasonable times (read: less than several hours) without dropping\n>> indices. Doing it in batches is way faster if you can't drop the\n>> indices, and if you can leverage HOT updates.\n>\n> I tried a quick test with 2M tuples and 3 indexes over int8, numeric,\n> and text (generated data). There was also an unindexed bytea column.\n> Using my laptop, a full update of the int8 column (which is indexed,\n> forcing cold updates) took less than 4 minutes.\n>\n> I'm sure there are other issues with real-world workloads, and I know\n> that it's wasteful compared to something that can make use of HOT\n> updates. But unless there is something I'm missing, it's not really\n> worth the effort to batch if that is the size of the update.\n\nOn a pre-production database I have (that is currently idle), on a\nserver with 4G RAM and a single SATA disk (probably similar to your\nlaptop in that sense more or less, possibly more TPS since the HD rpm\nis 7k and your laptop probably is 5k), it's been running for half an\nhour and is still running (and I don't expect it to finish today if\npast experience is to be believed).\n\nThe database sees somewhat real test workloads from time to time, so\nit's probably a good example of a live database (sans the concurrent\nload).\n\nThe table is probably a lot wider than yours, having many columns,\nsome text typed, and many indices too. Updating one indexed int4 like\nso:\n\nbegin;\nupdate users set country_id = 1 where id < (328973625/2);\nrollback;\n\n(the where condition returns about 2M rows out of a ~5M total)\n\nThere is quite a few foreign key constraints that I expect are\ninterfering as well. 
I could try dropping them, just not on this\ndatabase.\n\nThe schema:\n\n\nCREATE TABLE users\n(\n id integer NOT NULL,\n about_me character varying(500),\n birth_date timestamp without time zone,\n confirmed boolean,\n creation_date timestamp without time zone,\n email character varying(255),\n first_name character varying(255),\n image_link character varying(255),\n is_native_location boolean,\n is_panelist boolean,\n last_name character varying(255),\n privacy bigint NOT NULL,\n sex integer,\n username character varying(255),\n city_id integer,\n country_id integer,\n state_id integer,\n last_access_to_inbox timestamp without time zone,\n profile_background_color character varying(255),\n profile_background_image_link character varying(255),\n url character varying(255),\n notifications bigint,\n last_activity_date timestamp without time zone,\n site_country_id integer,\n show_welcome_message boolean NOT NULL,\n invited boolean,\n partner_id integer,\n panelist_update bigint,\n unregistered boolean DEFAULT false,\n import_state integer,\n show_alerts_since timestamp without time zone,\n super_user boolean DEFAULT false,\n survey_id integer,\n site_id smallint NOT NULL,\n panelist_percentage smallint NOT NULL DEFAULT 0,\n reason integer,\n unregistered_date timestamp without time zone,\n is_panelist_update_date timestamp without time zone,\n confirmation_update_date timestamp without time zone,\n no_panelist_reason integer,\n facebook_connect_status smallint,\n CONSTRAINT user_pkey PRIMARY KEY (id ),\n CONSTRAINT fk36ebcb26f1a196 FOREIGN KEY (site_country_id)\n REFERENCES countries (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_city_id FOREIGN KEY (city_id)\n REFERENCES cities (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_country_id FOREIGN KEY (country_id)\n REFERENCES countries (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_state_id FOREIGN KEY (state_id)\n REFERENCES states (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_survey_id FOREIGN KEY (survey_id)\n REFERENCES surveys (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT users_id_check CHECK (id >= 0)\n)\n\nIndices on:\n\n (creation_date );\n\n (panelist_update )\n WHERE panelist_update IS NOT NULL;\n\n (birth_date );\n\n (email COLLATE pg_catalog.\"default\" );\n\n (lower(email::text) COLLATE pg_catalog.\"default\" );\n\n (partner_id , creation_date );\n\n (site_id , country_id , city_id );\n\n (site_id , country_id , state_id );\n\n (username COLLATE pg_catalog.\"default\" );\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 16:58:06 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, Nov 29, 2012 at 10:14 AM, Mike Blackwell <[email protected]> wrote:\n>\n>\n>\n> On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes <[email protected]> wrote:\n>>\n>>\n>> But If you do keep the drop index inside the transaction, then you\n>> would probably be better off using truncate rather than delete, and\n>> rebuild the index non-concurrently and move that inside the\n>> transaction as well.\n>>\n>\n>\n> Hmm.... 
From the 9.2 manual it seems that might not work out so well:\n>\n> TRUNCATE is not MVCC-safe (see Chapter 13 for general information about\n> MVCC). After truncation, the table will appear empty to all concurrent\n> transactions, even if they are using a snapshot taken before the truncation\n> occurred.\n>\n> It looks like other transactions could find an empty table while it was\n> being reloaded under that approach.\n\nThey would block during the load, it is just after the load that they\nwould see the table as empty. I thought that that would only be a\nproblem for repeatable read or higher, but a test shows that read\ncommitted has that problem as well. But yeah, that could definitely\nbe a problem with that method.\n\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 12:25:53 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" }, { "msg_contents": "On Thu, Nov 29, 2012 at 11:58 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, Nov 29, 2012 at 3:32 PM, Jeff Davis <[email protected]> wrote:\n>>\n>> I tried a quick test with 2M tuples and 3 indexes over int8, numeric,\n>> and text (generated data). There was also an unindexed bytea column.\n>> Using my laptop, a full update of the int8 column (which is indexed,\n>> forcing cold updates) took less than 4 minutes.\n>>\n>> I'm sure there are other issues with real-world workloads, and I know\n>> that it's wasteful compared to something that can make use of HOT\n>> updates. But unless there is something I'm missing, it's not really\n>> worth the effort to batch if that is the size of the update.\n>\n> On a pre-production database I have (that is currently idle), on a\n> server with 4G RAM and a single SATA disk (probably similar to your\n> laptop in that sense more or less, possibly more TPS since the HD rpm\n> is 7k and your laptop probably is 5k), it's been running for half an\n> hour and is still running (and I don't expect it to finish today if\n> past experience is to be believed).\n\nSo probably Jeff Davis's indexes fit in RAM (or the part that can be\ndirtied without causing thrashing), and yours do not.\n\nBut, does batching them up help at all? I doubt it does.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 09:01:54 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoints in transactions for speed?" } ]
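As a footnote to this thread, here is a minimal sketch of the single-transaction reload pattern Jeff Janes describes (drop the index, TRUNCATE instead of DELETE, reload, rebuild the index non-concurrently), with the TRUNCATE visibility caveat noted inline. The table name, index name and COPY source are hypothetical placeholders, not taken from the poster's actual schema.

    BEGIN;

    -- Dropping the index first avoids maintaining it row by row during the load.
    DROP INDEX IF EXISTS items_code_uniq;

    -- TRUNCATE is far cheaper than DELETE, but it is not MVCC-safe:
    -- concurrent readers block during the load, and transactions holding an
    -- older snapshot will see the table as empty after the commit.
    TRUNCATE items;

    -- The reload step; COPY stands in for whatever load method is actually used.
    COPY items (id, code, payload) FROM '/tmp/items.csv' WITH (FORMAT csv);

    -- Rebuild non-concurrently; CREATE INDEX CONCURRENTLY cannot run inside
    -- a transaction block anyway.
    CREATE UNIQUE INDEX items_code_uniq ON items (code);

    COMMIT;

Whether this beats batched deletes depends entirely on whether concurrent readers can tolerate the lock during the load and the empty-table visibility afterwards, which is exactly the trade-off discussed above.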
[ { "msg_contents": "Hi,\n\nThis folde( Temporary tablespace) is getting filled and size increases in\nthe day where there lots of sorting operations.But after some times the data\nin the is deleted automatically . Can any one explain what is going on ?\n\nRgrds\nSuhas\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgsql-tmp-Temporary-tablespace-tp5733858.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Nov 2012 22:45:11 -0800 (PST)", "msg_from": "\"suhas.basavaraj12\" <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql_tmp( Temporary tablespace)" }, { "msg_contents": "suhas.basavaraj12 wrote:\n> This folde( Temporary tablespace) is getting filled and size increases\nin\n> the day where there lots of sorting operations.But after some times\nthe data\n> in the is deleted automatically . Can any one explain what is going\non ?\n\nMust be temporary files created by the sorting operations.\nIf a sort, hash or similar operation is estimated to need\nmore than work_mem if done in memory, data will be dumped\nto disk instead.\n\nIf you want to avoid that, you need to increase work_mem\n(but make sure you don't run out of memory).\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 09:28:55 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql_tmp( Temporary tablespace)" }, { "msg_contents": "Hi,\n\n\nCan i delete the content of this folder. I have observed couple of times ,\nthis folder got cleaned automatically.\nWhich backend process deletes the data from this folder .Any Idea?\n\nRgrds\nSuhas\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/pgsql-tmp-Temporary-tablespace-tp5733858p5733863.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 00:37:50 -0800 (PST)", "msg_from": "\"suhas.basavaraj12\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql_tmp( Temporary tablespace)" }, { "msg_contents": "suhas.basavaraj12 wrote:\n> Can i delete the content of this folder. I have observed couple of\ntimes ,\n> this folder got cleaned automatically.\n\nThese files are in use and you should not delete them.\nIf you need them to go right now, cancel the queries that\ncreate temporary files.\n\nIf there are any leftover when PostgreSQL is shut down\n(don't know if that can happen), it should be safe to\ndelete them.\n\n> Which backend process deletes the data from this folder .Any Idea?\n\nI'm not sure, but probably the one that created them.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 11:27:42 +0100", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql_tmp( Temporary tablespace)" } ]
[ { "msg_contents": "Hi,\n\nI'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:\n\nThe setup:\nI have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:\nA cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.\nThe database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.\n\nSo my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)\nThe columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.\n\nThe questions:\nHaving the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:\n\t1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?\n\t2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. Would the write performance of that UPDATE be affected, if the table had fewer columns?\n\t3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?\n\t4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?\n\t5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\n\nHope there is some good answers out there :-)\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 13:41:14 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Database design - best practice" }, { "msg_contents": "Niels,\n\n\" I can't see why it would make sense to put that into a separate table and join in the values \" \nYou don't normalize for performance. People DEnormalize for performance.\n\n\nQuestions: (AFAIK)\n\t\n1) This is a way to disaster. Get yourself a book on RDBMS from for example Celko. Do NOT go against the flow of the RDBMS rules, as here in rule #1 atomic values of a column. \n \n \t2) This is not the big fish you are after. First benchmark your setup and compare the results with your desired performance level. 
First quantify your problem, if there is any, before using tricks.\n\n3) A row will need more memory when it is wider, this may be amplified during hash joins. \n\n \t4) People DEnormalize for performance. \n\n5) \" Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\" \n\nI know the answer, but i encourage you to simply test this. I have seen lot's of urban legends about performance ( including the dropping of the referential integrity be cause that would make a difference.... ). \nOf course , when it's a full table scan, and it are ALL disk reads, (or ALL memory reads_) you can simply calculate it too. But just get into the habit of testing for learning.\n\n\nMy advice:\n- know what performance you need.\n- test if you have this, varying tablecontent and systemload\n- do not tamper with the RDBMS rules, this will haunt you.\n- if you have the latest postgres version, you can use covering indexes: tables aren't accessed at all, bypassing most of your questions. Check with peers if you've got the indexes right.\n\nRegards,\nWillem\n\n\n\n> From: [email protected]\n> Subject: [PERFORM] Database design - best practice\n> Date: Wed, 28 Nov 2012 13:41:14 +0100\n> To: [email protected]\n> \n> Hi,\n> \n> I'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:\n> \n> The setup:\n> I have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:\n> A cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.\n> The database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.\n> \n> So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)\n> The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.\n> \n> The questions:\n> Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:\n> \t1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?\n> \t2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. 
Would the write performance of that UPDATE be affected, if the table had fewer columns?\n> \t3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?\n> \t4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?\n> \t5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\n> \n> Hope there is some good answers out there :-)\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n\n\n\nNiels,\" I can't see why it would make sense to put that into a separate table and join in the values \" You don't normalize for performance. People DEnormalize for performance.Questions: (AFAIK) 1) This is a way to disaster. Get yourself a book on RDBMS from for example Celko. Do NOT go against the flow of the RDBMS rules, as here in rule #1 atomic values of a column. \t2) This is not the big fish you are after. First benchmark your setup and compare the results with your desired performance level. First quantify your problem, if there is any, before using tricks.3) A row will need more memory when it is wider, this may be amplified during hash joins. \t4) People DEnormalize for performance. 5) \" Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\" I know the answer, but i encourage you to simply test this. I have seen lot's of urban legends about performance ( including the dropping of the referential integrity be cause that would make a difference.... ). Of course , when it's a full table scan, and it are ALL disk reads, (or ALL memory reads_) you can simply calculate it too. But just get into the habit of  testing for learning.My advice:- know what performance you need.- test if you have this, varying tablecontent and systemload- do not tamper with the RDBMS rules, this will haunt you.- if you have the latest postgres version, you can use covering indexes: tables aren't accessed at all, bypassing most of your questions. Check with peers if you've got the indexes right.Regards,Willem> From: [email protected]> Subject: [PERFORM] Database design - best practice> Date: Wed, 28 Nov 2012 13:41:14 +0100> To: [email protected]> > Hi,> > I'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:> > The setup:> I have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:> A cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.> The database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.> > So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). 
Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)> The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.> > The questions:> Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:> \t1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?> \t2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. Would the write performance of that UPDATE be affected, if the table had fewer columns?> \t3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?> \t4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?> \t5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?> > Hope there is some good answers out there :-)> > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Nov 2012 13:10:21 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database design - best practice" }, { "msg_contents": "Thanks for the advice.\n\nCurrently I see a lot of I/O related to update/inserts, so I'm trying to track down these guys at first. In relation to question 2, I read somewhere in the documentation that because of MVCC, the whole row has to be rewritten even though I just update one single column in that row. Hence if the table is wider (has more columns), the update will be slower. Does this match your understanding?\n\nDen 28/11/2012 kl. 14.10 skrev Willem Leenen <[email protected]>:\n\n> Niels,\n> \n> \" I can't see why it would make sense to put that into a separate table and join in the values \" \n> You don't normalize for performance. People DEnormalize for performance.\n> \n> \n> Questions: (AFAIK)\n> \n> 1) This is a way to disaster. Get yourself a book on RDBMS from for example Celko. Do NOT go against the flow of the RDBMS rules, as here in rule #1 atomic values of a column. \n> \n> 2) This is not the big fish you are after. First benchmark your setup and compare the results with your desired performance level. First quantify your problem, if there is any, before using tricks.\n> \n> 3) A row will need more memory when it is wider, this may be amplified during hash joins. \n> \n> 4) People DEnormalize for performance. \n> \n> 5) \" Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\" \n> \n> I know the answer, but i encourage you to simply test this. 
I have seen lot's of urban legends about performance ( including the dropping of the referential integrity be cause that would make a difference.... ). \n> Of course , when it's a full table scan, and it are ALL disk reads, (or ALL memory reads_) you can simply calculate it too. But just get into the habit of testing for learning.\n> \n> \n> My advice:\n> - know what performance you need.\n> - test if you have this, varying tablecontent and systemload\n> - do not tamper with the RDBMS rules, this will haunt you.\n> - if you have the latest postgres version, you can use covering indexes: tables aren't accessed at all, bypassing most of your questions. Check with peers if you've got the indexes right.\n> \n> Regards,\n> Willem\n> \n> \n> \n> > From: [email protected]\n> > Subject: [PERFORM] Database design - best practice\n> > Date: Wed, 28 Nov 2012 13:41:14 +0100\n> > To: [email protected]\n> > \n> > Hi,\n> > \n> > I'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:\n> > \n> > The setup:\n> > I have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:\n> > A cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.\n> > The database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.\n> > \n> > So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)\n> > The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.\n> > \n> > The questions:\n> > Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:\n> > 1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?\n> > 2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. 
Would the write performance of that UPDATE be affected, if the table had fewer columns?\n> > 3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?\n> > 4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?\n> > 5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\n> > \n> > Hope there is some good answers out there :-)\n> > \n> > -- \n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n\n\nThanks for the advice.Currently I see a lot of I/O related to update/inserts, so I'm trying to track down these guys at first. In relation to question 2, I read somewhere in the documentation that because of MVCC, the whole row has to be rewritten even though I just update one single column in that row. Hence if the table is wider (has more columns), the update will be slower. Does this match your understanding?Den 28/11/2012 kl. 14.10 skrev Willem Leenen <[email protected]>:Niels,\" I can't see why it would make sense to put that into a separate table and join in the values \" You don't normalize for performance. People DEnormalize for performance.Questions: (AFAIK)1) This is a way to disaster. Get yourself a book on RDBMS from for example Celko. Do NOT go against the flow of the RDBMS rules, as here in rule #1 atomic values of a column. 2) This is not the big fish you are after. First benchmark your setup and compare the results with your desired performance level. First quantify your problem, if there is any, before using tricks.3) A row will need more memory when it is wider, this may be amplified during hash joins. 4) People DEnormalize for performance. 5) \" Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\" I know the answer, but i encourage you to simply test this. I have seen lot's of urban legends about performance ( including the dropping of the referential integrity be cause that would make a difference.... ). Of course , when it's a full table scan, and it are ALL disk reads, (or ALL memory reads_) you can simply calculate it too. But just get into the habit of  testing for learning.My advice:- know what performance you need.- test if you have this, varying tablecontent and systemload- do not tamper with the RDBMS rules, this will haunt you.- if you have the latest postgres version, you can use covering indexes: tables aren't accessed at all, bypassing most of your questions. Check with peers if you've got the indexes right.Regards,Willem> From: [email protected]> Subject: [PERFORM] Database design - best practice> Date: Wed, 28 Nov 2012 13:41:14 +0100> To: [email protected]> > Hi,> > I'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:> > The setup:> I have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:> A cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. 
One advert belongs to one car and one seller.> The database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.> > So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)> The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.> > The questions:> Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:> 1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?> 2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. Would the write performance of that UPDATE be affected, if the table had fewer columns?> 3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?> 4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?> 5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?> > Hope there is some good answers out there :-)> > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Nov 2012 14:20:27 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database design - best practice" }, { "msg_contents": "In relation to question 2, I read somewhere in the documentation that \nbecause of MVCC, the whole row has to be rewritten even though I just \nupdate one single column in that row. Hence if the table is wider (has \nmore columns), the update will be slower. Does this match your \nunderstanding?\n\nNo. I don't count the number of rows, but number of blocks (pages) that are modified, which are 8K each. \n\nMy advice would be to first establish a solutiondirection via diagnosing the problem. My experience is that most solutions are not obscure at all.\n\n\n\nSubject: Re: [PERFORM] Database design - best practice\nFrom: [email protected]\nDate: Wed, 28 Nov 2012 14:20:27 +0100\nCC: [email protected]\nTo: [email protected]\n\nThanks for the advice.\nCurrently I see a lot of I/O related to update/inserts, so I'm trying to track down these guys at first. In relation to question 2, I read somewhere in the documentation that because of MVCC, the whole row has to be rewritten even though I just update one single column in that row. 
Hence if the table is wider (has more columns), the update will be slower. Does this match your understanding?\nDen 28/11/2012 kl. 14.10 skrev Willem Leenen <[email protected]>:Niels,\n\n\" I can't see why it would make sense to put that into a separate table and join in the values \" \nYou don't normalize for performance. People DEnormalize for performance.\n\n\nQuestions: (AFAIK)\n\n1) This is a way to disaster. Get yourself a book on RDBMS from for example Celko. Do NOT go against the flow of the RDBMS rules, as here in rule #1 atomic values of a column. \n\n2) This is not the big fish you are after. First benchmark your setup and compare the results with your desired performance level. First quantify your problem, if there is any, before using tricks.\n\n3) A row will need more memory when it is wider, this may be amplified during hash joins. \n\n4) People DEnormalize for performance. \n\n5) \" Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\" \n\nI know the answer, but i encourage you to simply test this. I have seen lot's of urban legends about performance ( including the dropping of the referential integrity be cause that would make a difference.... ). \nOf course , when it's a full table scan, and it are ALL disk reads, (or ALL memory reads_) you can simply calculate it too. But just get into the habit of testing for learning.\n\n\nMy advice:\n- know what performance you need.\n- test if you have this, varying tablecontent and systemload\n- do not tamper with the RDBMS rules, this will haunt you.\n- if you have the latest postgres version, you can use covering indexes: tables aren't accessed at all, bypassing most of your questions. Check with peers if you've got the indexes right.\n\nRegards,\nWillem\n\n\n\n> From: [email protected]\n> Subject: [PERFORM] Database design - best practice\n> Date: Wed, 28 Nov 2012 13:41:14 +0100\n> To: [email protected]\n> \n> Hi,\n> \n> I'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:\n> \n> The setup:\n> I have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:\n> A cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.\n> The database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.\n> \n> So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)\n> The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. 
The cars table used for search is indexed quite a lot.\n> \n> The questions:\n> Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:\n> 1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?\n> 2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. Would the write performance of that UPDATE be affected, if the table had fewer columns?\n> 3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?\n> 4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?\n> 5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\n> \n> Hope there is some good answers out there :-)\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n\n\n\n In relation to question 2, I read somewhere in the documentation that \nbecause of MVCC, the whole row has to be rewritten even though I just \nupdate one single column in that row. Hence if the table is wider (has \nmore columns), the update will be slower. Does this match your \nunderstanding?No. I don't count the number of rows, but number of blocks (pages) that are modified, which are 8K each. My advice would be to first establish a solutiondirection via diagnosing the problem. My experience is that most solutions are not obscure at all.Subject: Re: [PERFORM] Database design - best practiceFrom: [email protected]: Wed, 28 Nov 2012 14:20:27 +0100CC: [email protected]: [email protected] for the advice.Currently I see a lot of I/O related to update/inserts, so I'm trying to track down these guys at first. In relation to question 2, I read somewhere in the documentation that because of MVCC, the whole row has to be rewritten even though I just update one single column in that row. Hence if the table is wider (has more columns), the update will be slower. Does this match your understanding?Den 28/11/2012 kl. 14.10 skrev Willem Leenen <[email protected]>:Niels,\" I can't see why it would make sense to put that into a separate table and join in the values \" You don't normalize for performance. People DEnormalize for performance.Questions: (AFAIK)1) This is a way to disaster. Get yourself a book on RDBMS from for example Celko. Do NOT go against the flow of the RDBMS rules, as here in rule #1 atomic values of a column. 2) This is not the big fish you are after. First benchmark your setup and compare the results with your desired performance level. First quantify your problem, if there is any, before using tricks.3) A row will need more memory when it is wider, this may be amplified during hash joins. 4) People DEnormalize for performance. 5) \" Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\" I know the answer, but i encourage you to simply test this. 
I have seen lot's of urban legends about performance ( including the dropping of the referential integrity be cause that would make a difference.... ). Of course , when it's a full table scan, and it are ALL disk reads, (or ALL memory reads_) you can simply calculate it too. But just get into the habit of  testing for learning.My advice:- know what performance you need.- test if you have this, varying tablecontent and systemload- do not tamper with the RDBMS rules, this will haunt you.- if you have the latest postgres version, you can use covering indexes: tables aren't accessed at all, bypassing most of your questions. Check with peers if you've got the indexes right.Regards,Willem> From: [email protected]> Subject: [PERFORM] Database design - best practice> Date: Wed, 28 Nov 2012 13:41:14 +0100> To: [email protected]> > Hi,> > I'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:> > The setup:> I have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:> A cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.> The database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. color, milage, price, has_automatic_transmission etc.> > So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)> The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.> > The questions:> Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:> 1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?> 2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. 
Would the write performance of that UPDATE be affected, if the table had fewer columns?> 3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?> 4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?> 5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?> > Hope there is some good answers out there :-)> > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 28 Nov 2012 13:27:18 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database design - best practice" }, { "msg_contents": "Hi Kristian,\n\n> \" I can't see why it would make sense to put that into a separate table and\n> join in the values \"\n> You don't normalize for performance. People DEnormalize for performance.\n\nYes. In short, you seem more of a developer than a RDBMS guy. This is\nnot a personal fault, but it's a *very* dangerous state to be in and\nyou should address the problem asap. Erase from your head all you\ncould possibly know in terms of \"putting it into a file\" and read very\nbasic texts about normal forms. Like this:\n\nhttp://en.wikipedia.org/wiki/Database_normalization\n\nAs already said by Willem, learn to test your stuff. There is a\n\\timing command in psql, use it.\n\nFor example (addressing your other post), you want to check how long it takes to\nUPDATE \"adverts\"\nSET\n \"last_observed_at\" = '2012-11-28 00:02:30.265154',\n \"data_source_id\" ='83d024a57bc2958940f3ca281bddcbf4'\nWHERE\n \"adverts\".\"id\" IN ( 1602382, 4916432, ...... 3637777 ) ;\n\nas opposed to\n\nUPDATE \"adverts\"\nSET\n \"last_observed_at\" = '2012-11-28 00:02:30.265154',\n \"data_source_id\" ='83d024a57bc2958940f3ca281bddcbf4'\nWHERE\n \"adverts\".\"id\" = 1602382 OR\n \"adverts\".\"id\" = 4916432 OR\n ......\n \"adverts\".\"id\" = 3637777;\n\nMy 5 pence\nBèrto\n-- \n==============================\nIf Pac-Man had affected us as kids, we'd all be running around in a\ndarkened room munching pills and listening to repetitive music.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 13:29:47 +0000", "msg_from": "=?UTF-8?B?QsOocnRvIMOrZCBTw6hyYQ==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database design - best practice" }, { "msg_contents": "Let me be devil advocate here :)\nFirst of all, even if you read any basics about normalization, don't take\nit to your heart :) Think.\nKnow that each normalization/denormalization step has it's cons and pros.\nE.g. in NoSQL world they often don't normalize much.\nWhat's interesting with PosgreSQL is that it is suited quite good for\nNoSQL-like scenarios.\nFirst of all, each unfilled (null) data column takes 1 bit only. This, BTW,\nleads to interesting consequence that performance-wise it can be better to\nhave null/true boolean than false/true. Especially if you've got a lot of\n\"false\".\nSo, PostgreSQL should be good with 10th, possible 100th of data column with\nmost columns empty. Record of 151 null columns would take header +\nroundup(151/8 ) = 19 bytes. 
Not much. NoSQLs usually put column names into\nrecords and this costs more.\nAny null columns at the end of record take no space at all (so, you can\nthink on reordering your columns to put the least used to the record end).\nAdding column with null as default is cheap operation that do not require\ntable scan.\nYou can have partial indexes to speed things up, like create index on car\n(car_id) where (has_automatic_transmission);\n\nAt the other side, when you normalize you need to join. Instead of select *\nfrom car where has_automatic_transmission (that will use index above), you\nwill have to \"select * from car where id in (select id from\ncar_with_automatic_transmission)\". The plan is much more complex here. It\nwill be slower.\n\nThe main normalization plus for you is that you work with record as a\nwhole, so if there is a lot of information in there that is rarely used,\nyou will \"pay\" for it's access every time, both on selects and updates.\n\nSo, as conclusion, I agree with others, that you should check. But\nremember, joining two tables with millions of records os never cheap :)\n\nBest regards, Vitalii Tymchyshyn\n\n\n2012/11/28 Niels Kristian Schjødt <[email protected]>\n\n> Hi,\n>\n> I'm on the hunt for some solid knowledge on a theoretical level about the\n> performance of postgresql. My question is regarding best practices, and how\n> architectural decisions might influence the performance. First a little\n> background:\n>\n> The setup:\n> I have a database which holds informations on used cars. The database has\n> mainly 3 tables of interest for this case:\n> A cars table, an adverts table and a sellers table. One car has many\n> adverts and one seller has many adverts. One advert belongs to one car and\n> one seller.\n> The database is powering a website for searching used cars. When searching\n> for used cars, the cars table is mainly used, and a lot of the columns\n> should be directly available for searching e.g. color, milage, price,\n> has_automatic_transmission etc.\n>\n> So my main concern is actually about the cars table, since this one\n> currently has a lot of columns (151 - I expect thats quite a lot?), and a\n> lot of data (4 mil. rows, and growing). Now you might start by thinking,\n> this could sound like a regular need for some normalization, but wait a\n> second and let me explain :-)\n> The columns in this table is for the most very short stings, integers,\n> decimals or booleans. So take for an example has_automatic_transmission\n> (boolean) I can't see why it would make sense to put that into a separate\n> table and join in the values. Or the milage or the price as another\n> example. The cars table used for search is indexed quite a lot.\n>\n> The questions:\n> Having the above setup in mind, what impact on performance, in terms of\n> read performance and write performance, does it have, whether I do the\n> following:\n> 1) In general would the read and/or the write on the database be\n> faster, if I serialized some of the not searched columns in the table into\n> a single text columns instead of let's say 20 booleans?\n> 2) Lets say I'm updating a timestamp in a single one of the 151\n> columns in the cars table. The update statement is using the id to find the\n> car. 
Would the write performance of that UPDATE be affected, if the table\n> had fewer columns?\n> 3) When adding a new column to the table i know that it becomes\n> slower the more rows is in the table, but what about the \"width\" of the\n> table does that affect the performance when adding new columns?\n> 4) In general what performance downsides do you get when adding a\n> lot of columns to one table instead of having them in separate tables?\n> 5) Is it significantly faster to select * from a table with 20\n> columns, than selecting the same 20 in a table with 150 columns?\n>\n> Hope there is some good answers out there :-)\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nLet me be devil advocate here :)First of all, even if you read any basics about normalization, don't take it to your heart :) Think.Know that each normalization/denormalization step has it's cons and pros. E.g. in NoSQL world they often don't normalize much.\nWhat's interesting with PosgreSQL is that it is suited quite good for NoSQL-like scenarios.First of all, each unfilled (null) data column takes 1 bit only. This, BTW, leads to interesting consequence that performance-wise it can be better to have null/true boolean than false/true. Especially if you've got a lot of \"false\".\nSo, PostgreSQL should be good with 10th, possible 100th of data column with most columns empty. Record of 151 null columns would take header + roundup(151/8 ) = 19 bytes. Not much. NoSQLs usually put column names into records and this costs more.\nAny null columns at the end of record take no space at all (so, you can think on reordering your columns to put the least used to the record end).Adding column with null as default is cheap operation that do not require table scan.\nYou can have partial indexes to speed things up, like create index on car (car_id) where (has_automatic_transmission);At the other side, when you normalize you need to join. Instead of select * from car where has_automatic_transmission (that will use index above), you will have to \"select * from car where id in (select id from car_with_automatic_transmission)\". The plan is much more complex here. It will be slower.\nThe main normalization plus for you is that you work with record as a whole, so if there is a lot of information in there that is rarely used, you will \"pay\" for it's access every time, both on selects and updates. \nSo, as conclusion, I agree with others, that you should check. But remember, joining two tables with millions of records os never cheap :)Best regards, Vitalii Tymchyshyn\n2012/11/28 Niels Kristian Schjødt <[email protected]>\nHi,\n\nI'm on the hunt for some solid knowledge on a theoretical level about the performance of postgresql. My question is regarding best practices, and how architectural decisions might influence the performance. First a little background:\n\nThe setup:\nI have a database which holds informations on used cars. The database has mainly 3 tables of interest for this case:\nA cars table, an adverts table and a sellers table. One car has many adverts and one seller has many adverts. One advert belongs to one car and one seller.\nThe database is powering a website for searching used cars. When searching for used cars, the cars table is mainly used, and a lot of the columns should be directly available for searching e.g. 
color, milage, price, has_automatic_transmission etc.\n\nSo my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)\n\nThe columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values. Or the milage or the price as another example. The cars table used for search is indexed quite a lot.\n\nThe questions:\nHaving the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:\n        1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?\n        2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. Would the write performance of that UPDATE be affected, if the table had fewer columns?\n\n        3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?\n\n        4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?\n        5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\n\nHope there is some good answers out there :-)\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Wed, 28 Nov 2012 16:41:14 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database design - best practice" }, { "msg_contents": "On Wed, Nov 28, 2012 at 4:41 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n>\n> So my main concern is actually about the cars table, since this one currently has a lot of columns (151 - I expect thats quite a lot?), and a lot of data (4 mil. rows, and growing). Now you might start by thinking, this could sound like a regular need for some normalization, but wait a second and let me explain :-)\n\nIf you have 151 single-valued pieces of information, than that is what\nyou have. You can't tell if something is normalized or not by\ncounting the columns.\n\n> The columns in this table is for the most very short stings, integers, decimals or booleans. So take for an example has_automatic_transmission (boolean) I can't see why it would make sense to put that into a separate table and join in the values.\n\nI can't see why that would make sense, either. Nor do I think that\ndoing so would increase the level of normalization. What rule of\nnormalization would be served by creating gratuitous joins?\n\n> Or the milage or the price as another example. 
The cars table used for search is indexed quite a lot.\n\nHow useful are the indices?\n\n> The questions:\n> Having the above setup in mind, what impact on performance, in terms of read performance and write performance, does it have, whether I do the following:\n> 1) In general would the read and/or the write on the database be faster, if I serialized some of the not searched columns in the table into a single text columns instead of let's say 20 booleans?\n\nProbably not. And could make it much worse, depending on how you\nserialize it. For example, if you use hstore or json, now the \"column\nnames\" for each of the 20 booleans are repeated in every row, rather\nthan being metadata stored only once. But try it and see.\n\n> 2) Lets say I'm updating a timestamp in a single one of the 151 columns in the cars table. The update statement is using the id to find the car. Would the write performance of that UPDATE be affected, if the table had fewer columns?\n\nYes, but probably not by much. The biggest effect will be on whether\nthe timestamp column is indexed. If it is, then updating it means\nthat all other indexes on the table will also need to be updated. If\nit is not indexed, then the update can be a HOT update.\n\n\n> 3) When adding a new column to the table i know that it becomes slower the more rows is in the table, but what about the \"width\" of the table does that affect the performance when adding new columns?\n\nAdding a new column to a table is pretty much instantaneous if the\ndefault value is NULL.\n\n> 4) In general what performance downsides do you get when adding a lot of columns to one table instead of having them in separate tables?\n\nThis question cannot be answered in general. If every time you use\nthe main table you have to join it to the separate table, then\nperformance will be bad. If you almost never have to join to the\nseparate table, then performance will be better.\n\n> 5) Is it significantly faster to select * from a table with 20 columns, than selecting the same 20 in a table with 150 columns?\n\nIf the extra 130 columns are mostly null, the difference will be very\nsmall. Or, if the where clause is such that you only do a single-row\nlookup on a primary key column, for example, the difference will also\nbe small.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 10:05:37 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database design - best practice" } ]
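Since most of the answers in this thread come down to "measure it", here is a rough sketch of how the cheapest checks could be done directly in psql. The partial index follows Vitalii's suggestion; the index name is made up, and only the cars table and the has_automatic_transmission column come from the original post.

    -- Partial index: it only contains the rows matching the predicate, so it
    -- stays small even on a 4M-row table.
    CREATE INDEX cars_automatic_idx ON cars (id)
        WHERE has_automatic_transmission;

    -- How wide are the rows really? pg_column_size() over the whole row shows
    -- how little a run of NULL columns actually costs.
    SELECT avg(pg_column_size(c.*)) AS avg_row_bytes
    FROM cars c;

    -- Are the timestamp updates HOT (i.e. not touching any index)?
    SELECT n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'cars';

    -- \timing in psql gives elapsed times for comparing the wide vs. narrow
    -- SELECT variants from question 5.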
[ { "msg_contents": "Hi, i have these update queries, that run very often, and takes too long time, in order for us to reach the throughput we are aiming at. However, the update query is very simple, and I can't figure out any way to improve the situation. The query looks like this:\n\nUPDATE \"adverts\" SET \"last_observed_at\" = '2012-11-28 00:02:30.265154', \"data_source_id\" ='83d024a57bc2958940f3ca281bddcbf4' WHERE\"adverts\".\"id\" IN ( 1602382, 4916432, 3221246, 4741057, 3853335, 571429, 3222740, 571736, 3544903, 325378,5774338, 5921451, 4295768, 3223170, 5687001, 4741966, 325519, 580867, 325721, 4412200, 4139598, 325567, 1616653,1616664, 6202007, 3223748, 325613, 3223764, 325615, 4296536, 3854595, 4971428, 3224146, 5150522, 4412617, 5073048,325747, 325771, 1622154, 5794384, 5736581, 1623767, 5686945, 3224627, 5073009, 3224747, 3224749, 325809, 5687051,3224811, 5687052, 4917824, 5073013, 3224816, 3224834, 4297331, 1623907, 325864, 1623947, 6169706, 325869, 325877,3225074, 3225112, 325893, 325912, 3225151, 3225184, 3225175, 1624659, 325901, 4033926, 325904, 325911, 4412835,1624737, 5073004, 5921434, 325915, 3225285, 3225452, 4917672, 1624984, 3225472, 325940, 5380611, 325957, 5073258,3225500, 1625002, 5923489, 4413009, 325952, 3961122, 3637777 ) ;\n\nAn explain outputs me the following:\n\n\"Update on adverts (cost=0.12..734.27 rows=95 width=168)\"\n\" -> Index Scan using adverts_pkey on adverts (cost=0.12..734.27 rows=95 width=168)\"\n\" Index Cond: (id = ANY ('{1602382,4916432,3221246,4741057,3853335,571429,3222740,571736,3544903,325378,5774338,5921451,4295768,3223170,5687001,4741966,325519,580867,325721,4412200,4139598,325567,1616653,1616664,6202007,3223748,325613,3223764,325615,4296536,3854595,4971428,3224146,5150522,4412617,5073048,325747,325771,1622154,5794384,5736581,1623767,5686945,3224627,5073009,3224747,3224749,325809,5687051,3224811,5687052,4917824,5073013,3224816,3224834,4297331,1623907,325864,1623947,6169706,325869,325877,3225074,3225112,325893,325912,3225151,3225184,3225175,1624659,325901,4033926,325904,325911,4412835,1624737,5073004,5921434,325915,3225285,3225452,4917672,1624984,3225472,325940,5380611,325957,5073258,3225500,1625002,5923489,4413009,325952,3961122,3637777}'::integer[]))\"\n\nSo as you can see, it's already pretty optimized, it's just not enough :-) So what can I do? the two columns last_observed_at and data_source_id has an index, and it is needed elsewhere, so I can't delete those.\n\nPS. I'm on postgres 9.2 on a server with 32gb ram, 8 cores and two 3T disks in a software raid 1 setup.\n\nIs the only way out of this really a SSD disk?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 13:57:49 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize update query" }, { "msg_contents": "On 11/28/2012 06:57 AM, Niels Kristian Schjødt wrote:\n\nBefore I go crazy, here... you really need to tell us what \"not enough\"\nmeans. You didn't provide an explain analyze, so we don't know what your\nactual performance is. But I have my suspicions.\n\n> So as you can see, it's already pretty optimized, it's just not\n> enough :-) So what can I do? 
the two columns last_observed_at and\n> data_source_id has an index, and it is needed elsewhere, so I can't\n> delete those.\n\nOk, so part of your problem is that you're tying an advertising system\ndirectly to the database for direct updates. That's a big no-no. Any\ntime you got a huge influx of views, there would be a logjam. You need\nto decouple this so you can use a second tool to load the database in\nlarger batches. You'll get much higher throughput this way.\n\nIf you absolutely must use this approach, you're going to have to beef\nup your hardware.\n\n> PS. I'm on postgres 9.2 on a server with 32gb ram, 8 cores and two 3T\n> disks in a software raid 1 setup.\n\nThis is not sufficient for a high-bandwidth stream of updates. Not even\nclose. Even if those 3T disks are 7200 RPM, and even in RAID-1, you're\ngoing to have major problems with concurrent reads and writes. You need\nto do several things:\n\n1. Move your transaction logs (pg_xlog) to another pair of disks\nentirely. Do not put these on the same disks as your data if you need\nhigh write throughput.\n2. Get a better disk architecture. You need 10k, or 15k RPM disks.\nStarting with 6 or more of them in a RAID-10 would be a good beginning.\n\nYou never told us your postgresql.conf settings, so I'm just going with\nvery generic advice. Essentially, you're expecting too much for too\nlittle. That machine would have been low-spec three years ago, and\nunsuited to database use simply due to the 2-disk RAID.\n\n> Is the only way out of this really a SSD disk?\n\nNo. There are many, many steps you can and should take before going this\nroute. You need to know the problem you're solving before making\npotentially expensive hardware decisions.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 08:07:59 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "W dniu 28.11.2012 15:07, Shaun Thomas pisze:\n> On 11/28/2012 06:57 AM, Niels Kristian Schjødt wrote:\n> \n> Before I go crazy, here... you really need to tell us what \"not enough\"\n> means. You didn't provide an explain analyze, so we don't know what your\n> actual performance is. But I have my suspicions.\n> \n>> So as you can see, it's already pretty optimized, it's just not\n>> enough :-) So what can I do? the two columns last_observed_at and\n>> data_source_id has an index, and it is needed elsewhere, so I can't\n>> delete those.\n> \n> Ok, so part of your problem is that you're tying an advertising system\n> directly to the database for direct updates. That's a big no-no. Any\n> time you got a huge influx of views, there would be a logjam. You need\n> to decouple this so you can use a second tool to load the database in\n> larger batches. You'll get much higher throughput this way.\n\n+1, sql databases has limited number of inserts/updates per second. Even\nwith highend hardware you won't have more than XXX operations per\nsecond. As Thomas said, you should feed something like nosql database\nfrom www server and use other tool to do aggregation and batch inserts\nto postgresql. 
It will scale much better.\n\nMarcin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 15:39:55 +0100", "msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "I assume that SQL databases ( Banks? Telecom?) can handle an used car shop. No need for an unstructured data tool.\n\n\n\n> +1, sql databases has limited number of inserts/updates per second. Even\n> with highend hardware you won't have more than XXX operations per\n> second. As Thomas said, you should feed something like nosql database\n> from www server and use other tool to do aggregation and batch inserts\n> to postgresql. It will scale much better.\n> \n> Marcin", "msg_date": "Wed, 28 Nov 2012 15:11:46 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Okay guys,\n\nThanks for all the great help and advice already! Let me just clear some things, to make my question a little easier to answer :-)\nNow my site is a search engine for used cars - not just a car shop with a few hundred cars.\nThe update query you look at, is an update that is executed once a day in chunks for all active adverts, so we know they are still for sale (one car can be advertised at several places hence several \"adverts\"). 
So it's not a \"constant stream\" but it has a fairly high volume especially at night time though.\n\nA compressed version of my .conf looks like this (note: there are some tweaks at the end of the file)\n data_directory = '/var/lib/postgresql/9.2/main'\n hba_file = '/etc/postgresql/9.2/main/pg_hba.conf' \n ident_file = '/etc/postgresql/9.2/main/pg_ident.conf'\n external_pid_file = '/var/run/postgresql/9.2-main.pid' \n listen_addresses = '192.168.0.2, localhost'\n port = 5432\n max_connections = 1000 \n unix_socket_directory = '/var/run/postgresql'\n wal_level = hot_standby\n synchronous_commit = off\n archive_mode = on\n archive_command = 'rsync -a %p [email protected]:/var/lib/postgresql/9.2/wals/%f </dev/null' \n max_wal_senders = 1 \n wal_keep_segments = 32\n logging_collector = on \n log_min_messages = debug1 \n log_min_error_statement = debug1\n log_min_duration_statement = 0\n log_checkpoints = on\n log_connections = on\n log_disconnections = on\n log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '\n log_lock_waits = on\n log_temp_files = 0\n datestyle = 'iso, mdy' \n lc_messages = 'C'\n lc_monetary = 'en_US.UTF-8'\n lc_numeric = 'en_US.UTF-8' \n lc_time = 'en_US.UTF-8' \n default_text_search_config = 'pg_catalog.english' \n default_statistics_target = 100\n maintenance_work_mem = 1GB\n checkpoint_completion_target = 0.7\n effective_cache_size = 22GB\n work_mem = 160MB\n wal_buffers = 4MB\n checkpoint_segments = 16\n shared_buffers = 7680MB\n\n# All the log stuff is mainly a temporary requirement for pgBadger\n# The database has been tuned with pgtune\n\nYou might be familiar with New Relic, which I use for quite a lot of monitoring. So, this is what I see at night time (a lot of I/O). So I went to play around with pgBadger to get some insights at the database level.\nhttps://rpm.newrelic.com/public/charts/h2dtedghfsv\n\nThis shows me that by far the most time-consuming queries are updates (in general). On avg. a query like the one I showed you takes 1.3 sec (but often it takes several minutes - which makes me wonder). So correct me if I'm wrong here: my theory is, that I have too many too slow update queries, that then often end up in a situation, where they \"wait\" for each other to finish, hence the sometimes VERY long execution times. So my basic idea here is, that if I could reduce the cost of the updates, then I could get a higher throughput overall.\n\nHere is a sample of the pgBadger analysis:\n\nQueries that took up the most time (N)\nRank\tTotal duration\tTimes executed\tAv. 
duration (s)\tQuery\n1\t1d15h28m38.71s\t\n948,711\n0.15s\t\nCOMMIT;\n\n2\t1d2h17m55.43s\t\n401,002\n0.24s\t\nINSERT INTO \"car_images\" ( \"car_id\", \"created_at\", \"image\", \"updated_at\" ) VALUES ( '', '', '', '' ) returning \"id\";\n\n3\t23h18m33.68s\t\n195,093\n0.43s\t\nSELECT DISTINCT \"cars\".id FROM \"cars\" LEFT OUTER JOIN \"adverts\" ON \"adverts\".\"car_id\" = \"cars\".\"id\" LEFT OUTERJOIN \"sellers\" ON \"sellers\".\"id\" = \"adverts\".\"seller_id\" WHERE \"cars\".\"sales_state\" = '' AND \"cars\".\"year\" = 0 AND\"cars\".\"engine_size\" = 0.0 AND ( ( \"cars\".\"id\" IS NOT NULL AND \"cars\".\"brand\" = '' AND \"cars\".\"model_name\" = ''AND \"cars\".\"fuel\" = '' AND \"cars\".\"km\" = 0 AND \"cars\".\"price\" = 0 AND \"sellers\".\"kind\" = '' ) ) LIMIT 0;\n\n4\t22h45m26.52s\t\n3,374,133\n0.02s\t\nSELECT \"adverts\".* FROM \"adverts\" WHERE ( source_name = '' AND md5 ( url ) = md5 ( '' ) ) LIMIT 0;\n\n5\t10h31m37.18s\t\n29,671\n1.28s\t\nUPDATE \"adverts\" SET \"last_observed_at\" = '', \"data_source_id\" = '' WHERE \"adverts\".\"id\" IN ( ... ) ;\n\n6\t7h18m40.65s\t\n396,393\n0.07s\t\nUPDATE \"cars\" SET \"updated_at\" = '' WHERE \"cars\".\"id\" = 0;\n\n7\t7h6m7.87s\t\n241,294\n0.11s\t\nUPDATE \"cars\" SET \"images_count\" = COALESCE ( \"images_count\", 0 ) + 0 WHERE \"cars\".\"id\" = 0;\n\n8\t6h56m11.78s\t\n84,571\n0.30s\t\nINSERT INTO \"failed_adverts\" ( \"active_record_object_class\", \"advert_candidate\", \"created_at\", \"exception_class\",\"exception_message\", \"from_rescraper\", \"last_retried_at\", \"retry_count\", \"source_name\", \"stack_trace\",\"updated_at\", \"url\" ) VALUES ( NULL, '', '', '', '', NULL, NULL, '', '', '', '', '' ) returning \"id\";\n\n9\t5h47m25.45s\t\n188,402\n0.11s\t\nINSERT INTO \"adverts\" ( \"availability_state\", \"car_id\", \"created_at\", \"data_source_id\", \"deactivated_at\",\"first_extraction\", \"last_observed_at\", \"price\", \"seller_id\", \"source_id\", \"source_name\", \"updated_at\", \"url\" )VALUES ( '', '', '', '', NULL, '', '', '', '', '', '', '', '' ) returning \"id\";\n\n10\t3h4m26.86s\t\n166,235\n0.07s\t\nUPDATE \"adverts\" SET \"deactivated_at\" = '', \"availability_state\" = '', \"updated_at\" = '' WHERE \"adverts\".\"id\" = 0;\n\n(Yes I'm already on the task of improving the selects)\n\nDen 28/11/2012 kl. 16.11 skrev Willem Leenen <[email protected]>:\n\n> \n> I assume that SQL databases ( Banks? Telecom?) can handle an used car shop. No need for an unstructured data tool.\n> \n> \n> \n> > +1, sql databases has limited number of inserts/updates per second. Even\n> > with highend hardware you won't have more than XXX operations per\n> > second. As Thomas said, you should feed something like nosql database\n> > from www server and use other tool to do aggregation and batch inserts\n> > to postgresql. It will scale much better.\n> > \n> > Marcin", "msg_date": "Wed, 28 Nov 2012 17:19:17 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "max_connections = 1000 looks bad... 
why not a pooler in place?\nCheers\nBèrto\n\nOn 28 November 2012 16:19, Niels Kristian Schjødt\n<[email protected]> wrote:\n> max_connections = 1000\n\n\n\n-- \n==============================\nIf Pac-Man had affected us as kids, we'd all be running around in a\ndarkened room munching pills and listening to repetitive music.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 16:29:24 +0000", "msg_from": "=?UTF-8?B?QsOocnRvIMOrZCBTw6hyYQ==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/28/2012 10:19 AM, Niels Kristian Schjødt wrote:\n\n> https://rpm.newrelic.com/public/charts/h2dtedghfsv\n\nDoesn't this answer your question?\n\nThat iowait is crushing your server into the ground. It's no surprise\nupdates are taking several seconds. That update you sent us *should*\nexecute on the order of only a few milliseconds.\n\nSo I'll reiterate that you *must* move your pg_xlog location elsewhere.\nYou've got row lookup bandwidth conflicting with writes. There are a\ncouple other changes you should probably make to your config:\n\n> checkpoint_segments = 16\n\nThis is not enough for the workload you describe. Every time the\ndatabase checkpoints, all of those changes in pg_xlog are applied to the\nbackend data files. You should set these values:\n\ncheckpoint_segments = 100\ncheckpoint_timeout = 10m\ncheckpoint_completion_target = 0.9\n\nThis will reduce your overall write workload, and make it less active.\nToo many checkpoints massively reduce write throughput. With the\nsettings you have, it's probably checkpointing constantly while your\nload runs. Start with this, but experiment with increasing\ncheckpoint_segments further.\n\nIf you check your logs now, you probably see a ton of \"checkpoint\nstarting: xlog\" in there. That's very bad. It should say \"checkpoint\nstarting: time\" meaning it's keeping up with your writes naturally.\n\n> work_mem = 160MB\n\nThis is probably way too high. work_mem is used every sort operation in\na query. So each connection could have several of these allocated, thus\nstarting your system of memory which will reduce that available for page\ncache. Change it to 8mb, and increase it in small increments if necessary.\n\n> So correct me if I'm wrong here: my theory is, that I have too many\n> too slow update queries, that then often end up in a situation, where\n> they \"wait\" for each other to finish, hence the sometimes VERY long\n> execution times.\n\nSometimes this is the case, but for you, you're running into IO\ncontention, not lock contention. Your 3TB RAID-1 is simply insufficient\nfor this workload.\n\nIf you check your logs after making the changes I've suggested, take a\nlook at your checkpoint sync times. That will tell you how long it took\nthe kernel to physically commit those blocks to disk and get a\nconfirmation back from the controller. If those take longer than a\nsecond or two, you're probably running into controller buffer overflows.\nYou have a large amount of RAM, so you should also make these two kernel\nchanges to sysctl.conf:\n\nvm.dirty_ratio = 10\nvm.dirty_writeback_ratio = 1\n\nThen run this:\n\nsysctl -p\n\nThis will help prevent large IO write spikes caused when the kernel\ndecides to write out dirty memory. 
That can make checkpoints take\nminutes to commit in some cases, which basically stops all write traffic\nto your database entirely.\n\nThat should get you going, anyway. You still need more/better disks so\nyou can move your pg_xlog directory. With your write load, that will\nmake a huge difference.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 10:54:05 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/28/2012 11:44 AM, Niels Kristian Schjødt wrote:\n\n> Thanks a lot - on the server I already have one additional SSD 250gb\n> disk, that I don't use for anything at the moment.\n\nGoooood. An SSD would actually be better for your data, as it follows\nmore random access patterns, and xlogs are more sequential. But it's\nbetter than nothing.\n\nAnd yes, you'd be better off with a RAID-1 of two of these SSDs, because\nthe xlogs are critical to database health. You have your archived copy\ndue to the rsync, which helps. But if you had a crash, there could\npotentially be a need to replay unarchived transaction logs, and you'd\nend up with some data loss.\n\n> BTW. as you might have seen from the .conf I have a second slave\n> server with the exact same setup, which currently runs as a hot\n> streaming replication slave. I might ask a stupid question here, but\n> this does not affect the performance of the master does it?\n\nOnly if you're using synchronous replication. From what I saw in the\nconfig, that isn't the case.\n\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 12:01:41 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Hi, I have started to implement your suggestions . I have a small error so far though. The \"vm.dirty_writeback_ratio = 1\" command rerurns: \n error: \"vm.dirty_writeback_ratio\" is an unknown key\nI'm on ubuntu 12.04\n\n\nDen 28/11/2012 kl. 17.54 skrev Shaun Thomas <[email protected]>:\n\n> On 11/28/2012 10:19 AM, Niels Kristian Schjødt wrote:\n> \n>> https://rpm.newrelic.com/public/charts/h2dtedghfsv\n> \n> Doesn't this answer your question?\n> \n> That iowait is crushing your server into the ground. It's no surprise updates are taking several seconds. That update you sent us *should* execute on the order of only a few milliseconds.\n> \n> So I'll reiterate that you *must* move your pg_xlog location elsewhere. You've got row lookup bandwidth conflicting with writes. There are a couple other changes you should probably make to your config:\n> \n>> checkpoint_segments = 16\n> \n> This is not enough for the workload you describe. 
Every time the database checkpoints, all of those changes in pg_xlog are applied to the backend data files. You should set these values:\n> \n> checkpoint_segments = 100\n> checkpoint_timeout = 10m\n> checkpoint_completion_target = 0.9\n> \n> This will reduce your overall write workload, and make it less active. Too many checkpoints massively reduce write throughput. With the settings you have, it's probably checkpointing constantly while your load runs. Start with this, but experiment with increasing checkpoint_segments further.\n> \n> If you check your logs now, you probably see a ton of \"checkpoint starting: xlog\" in there. That's very bad. It should say \"checkpoint starting: time\" meaning it's keeping up with your writes naturally.\n> \n>> work_mem = 160MB\n> \n> This is probably way too high. work_mem is used every sort operation in a query. So each connection could have several of these allocated, thus starting your system of memory which will reduce that available for page cache. Change it to 8mb, and increase it in small increments if necessary.\n> \n>> So correct me if I'm wrong here: my theory is, that I have too many\n>> too slow update queries, that then often end up in a situation, where\n>> they \"wait\" for each other to finish, hence the sometimes VERY long\n>> execution times.\n> \n> Sometimes this is the case, but for you, you're running into IO contention, not lock contention. Your 3TB RAID-1 is simply insufficient for this workload.\n> \n> If you check your logs after making the changes I've suggested, take a look at your checkpoint sync times. That will tell you how long it took the kernel to physically commit those blocks to disk and get a confirmation back from the controller. If those take longer than a second or two, you're probably running into controller buffer overflows. You have a large amount of RAM, so you should also make these two kernel changes to sysctl.conf:\n> \n> vm.dirty_ratio = 10\n> vm.dirty_writeback_ratio = 1\n> \n> Then run this:\n> \n> sysctl -p\n> \n> This will help prevent large IO write spikes caused when the kernel decides to write out dirty memory. That can make checkpoints take minutes to commit in some cases, which basically stops all write traffic to your database entirely.\n> \n> That should get you going, anyway. You still need more/better disks so you can move your pg_xlog directory. With your write load, that will make a huge difference.\n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 04:32:11 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "In later kernels these have been renamed:\n\n\nWelcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)\n\n$ sysctl -a|grep dirty\nvm.dirty_background_ratio = 5\nvm.dirty_background_bytes = 0\nvm.dirty_ratio = 10\nvm.dirty_bytes = 0\nvm.dirty_writeback_centisecs = 500\nvm.dirty_expire_centisecs = 3000\n\nYou the option of specifying either a ratio, or - more usefully for \nmachines with a lot of ram - bytes.\n\nRegards\n\nMark\n\nP.s: People on this list usually prefer it if you *bottom* post (i.e \nreply underneath the original).\n\n\nOn 29/11/12 16:32, Niels Kristian Schjødt wrote:\n> Hi, I have started to implement your suggestions . I have a small error so far though. The \"vm.dirty_writeback_ratio = 1\" command rerurns:\n> error: \"vm.dirty_writeback_ratio\" is an unknown key\n> I'm on ubuntu 12.04\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 17:30:52 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "\nDen 28/11/2012 kl. 17.54 skrev Shaun Thomas <[email protected]>:\n\n> On 11/28/2012 10:19 AM, Niels Kristian Schjødt wrote:\n> \n>> https://rpm.newrelic.com/public/charts/h2dtedghfsv\n> \n> Doesn't this answer your question?\n> \n> That iowait is crushing your server into the ground. It's no surprise updates are taking several seconds. That update you sent us *should* execute on the order of only a few milliseconds.\n> \n> So I'll reiterate that you *must* move your pg_xlog location elsewhere. You've got row lookup bandwidth conflicting with writes. There are a couple other changes you should probably make to your config:\n> \n>> checkpoint_segments = 16\n> \n> This is not enough for the workload you describe. Every time the database checkpoints, all of those changes in pg_xlog are applied to the backend data files. You should set these values:\n> \n> checkpoint_segments = 100\n> checkpoint_timeout = 10m\n> checkpoint_completion_target = 0.9\n> \n> This will reduce your overall write workload, and make it less active. Too many checkpoints massively reduce write throughput. With the settings you have, it's probably checkpointing constantly while your load runs. Start with this, but experiment with increasing checkpoint_segments further.\n> \n> If you check your logs now, you probably see a ton of \"checkpoint starting: xlog\" in there. That's very bad. It should say \"checkpoint starting: time\" meaning it's keeping up with your writes naturally.\n> \n>> work_mem = 160MB\n> \n> This is probably way too high. work_mem is used every sort operation in a query. So each connection could have several of these allocated, thus starting your system of memory which will reduce that available for page cache. 
Change it to 8mb, and increase it in small increments if necessary.\n> \n>> So correct me if I'm wrong here: my theory is, that I have too many\n>> too slow update queries, that then often end up in a situation, where\n>> they \"wait\" for each other to finish, hence the sometimes VERY long\n>> execution times.\n> \n> Sometimes this is the case, but for you, you're running into IO contention, not lock contention. Your 3TB RAID-1 is simply insufficient for this workload.\n> \n> If you check your logs after making the changes I've suggested, take a look at your checkpoint sync times. That will tell you how long it took the kernel to physically commit those blocks to disk and get a confirmation back from the controller. If those take longer than a second or two, you're probably running into controller buffer overflows. You have a large amount of RAM, so you should also make these two kernel changes to sysctl.conf:\n> \n> vm.dirty_ratio = 10\n> vm.dirty_writeback_ratio = 1\n> \n> Then run this:\n> \n> sysctl -p\n> \n> This will help prevent large IO write spikes caused when the kernel decides to write out dirty memory. That can make checkpoints take minutes to commit in some cases, which basically stops all write traffic to your database entirely.\n> \n> That should get you going, anyway. You still need more/better disks so you can move your pg_xlog directory. With your write load, that will make a huge difference.\n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\nOkay, now I'm done the updating as described above. I did the postgres.conf changes. I did the kernel changes, i added two SSD's in a software RAID1 where the pg_xlog is now located - unfortunately the the picture is still the same :-( \nWhen the database is under \"heavy\" load, there is almost no improvement to see in the performance compared to before the changes. A lot of both read and writes takes more than a 1000 times as long as they usually do, under \"lighter\" overall load. \n\nI added All the overview charts I can get hold on from new relic beneath. What am I overlooking? There must be an obvious bottleneck? Where should I dive in?\n\nDatabase server CPU usage\nhttps://rpm.newrelic.com/public/charts/cEdIvvoQZCr\n\nDatabase server load average\nhttps://rpm.newrelic.com/public/charts/cMNdrYW51QJ\n\nDatabase server physical memory\nhttps://rpm.newrelic.com/public/charts/c3dZBntNpa1\n\nDatabase server disk I/O utulization\nhttps://rpm.newrelic.com/public/charts/9YEVw6RekFG\n\nDatabase server network I/O (Mb/s)\nhttps://rpm.newrelic.com/public/charts/lKiZ0Szmwe7\n\nTop 5 database operations by wall clock time\nhttps://rpm.newrelic.com/public/charts/dCt45YH12FK\n\nDatabase throughput\nhttps://rpm.newrelic.com/public/charts/bIbtQ1mDzMI\n\nDatabase response time\nhttps://rpm.newrelic.com/public/charts/fPcNL8WA6xx\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 01:59:00 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize update query" } ]
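To put rough numbers on the checkpoint and write-pressure discussion above, the background writer statistics can be sampled before and after the nightly load; a sketch only, and the counters are cumulative since the last stats reset:

    -- checkpoints_req counts checkpoints forced by xlog volume; if it
    -- dwarfs checkpoints_timed, checkpoint_segments is too low for the
    -- write load.  buffers_backend shows backends writing pages
    -- themselves, another sign of write pressure.
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_backend
    FROM pg_stat_bgwriter;

Comparing two snapshots taken a few hours apart during the heavy period is more telling than a single reading.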
[ { "msg_contents": "Niels Kristian Schjødt wrote:\n\n> So my main concern is actually about the cars table, since this\n> one currently has a lot of columns (151 - I expect thats quite a\n> lot?),\n\nThat's pretty wide, but not outrageous.\n\n> and a lot of data (4 mil. rows, and growing).\n\nThat's not a big deal. It's not unusual to have hundreds of\nmillions of rows in a PostgreSQL table. Properly indexed, that\nshould perform fine on queries. Sometimes partitioning rows into\nsub-tables helps, but you didn't really mention anything which\nsuggests that would be helpful for you.\n\n> Now you might start by thinking, this could sound like a regular\n> need for some normalization\n\nOn the contrary, what you describe sounds well normalized. Breaking\noff attributes of a car into separate tables would not advance\nthat.\n\n> The columns in this table is for the most very short stings,\n> integers, decimals or booleans. So take for an example\n> has_automatic_transmission (boolean) I can't see why it would\n> make sense to put that into a separate table and join in the\n> values. Or the milage or the price as another example. The cars\n> table used for search is indexed quite a lot.\n\nOn the face of it, it sounds like you should have some one-column\nindexes on the columns most useful for selection (based on\nfrequency of use and how selective a selection on the column tends\nto be).\n\nYou might benefit from a technique called \"vertical partitioning\"\n-- where you split off less frequently referenced column and/or\ncolumns which are updated more often into \"sibling\" tables, with\nthe same primary key as the car table. That can sometimes buy some\nperformance at the expense of programming complexity and more\ndifficulty maintaining data integrity. I wouldn't go there without\nevidence that your performance is not adequate without it.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 09:40:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database design - best practice" } ]
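As a hypothetical illustration of the vertical partitioning Kevin mentions (all table and column names here are invented), the rarely read or frequently updated columns move to a sibling table that shares the cars primary key:

    CREATE TABLE car_details (
        car_id            integer PRIMARY KEY REFERENCES cars (id),
        long_description  text,
        last_observed_at  timestamp
    );

    -- Searches keep scanning the narrower cars table; the wide columns
    -- are joined in only when a single row is actually displayed.
    SELECT c.id, c.price, d.long_description
    FROM cars c
    JOIN car_details d ON d.car_id = c.id
    WHERE c.id = 12345;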
[ { "msg_contents": "Niels Kristian Schjødt wrote:\n\n> PS. I'm on postgres 9.2 on a server with 32gb ram, 8 cores and\n> two 3T disks in a software raid 1 setup.\n\nIn addtion to the excellent advice from Shaun, I would like to\npoint out a few other things.\n\nOne query runs on one core. In a test of a single query, the other\nseven cores aren't doing anything. Be sure to pay attention to how\na representative workload is handled.\n\nUnless you have tuned your postgresql.conf settings, you probably\naren't taking very good advantage of that RAM.\n\nFor heavy load you need lots of spindles and a good RAID controller\nwith battery-backed cache configured for write-back.\n\nYou will probably benefit from reading this page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nIf you don't already have it, you will probably find Greg Smith's\nbook on PostgreSQL performance a great investment:\n\nhttp://www.postgresql.org/docs/books/\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 09:51:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize update query" } ]
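A quick way to see whether the memory-related settings Kevin refers to are still at their defaults is to ask the server itself; a sketch, with the parameter list trimmed to the usual suspects:

    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size',
                   'checkpoint_segments', 'synchronous_commit');

Anything still reporting source = 'default' on a 32GB machine is usually worth revisiting before looking at hardware.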
[ { "msg_contents": "Hi, \n\nWe are planning to migrate our production databases to different\nservers. We have around 8 servers with 8 different clusters. We are planning\nto shuffle databases, consolidate them into 7 clusters and migrate to new remote\nservers. \nWe cannot use streaming replication as we are migrating different databases\nfrom different clusters to one single cluster. This will result in\nhuge downtime as the data is huge. \nNeed expert advice on this scenario. Can we reduce downtime in any way? \n\nRgrds \nSuhas\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/NEED-REPLICATION-SOLUTION-POSTGRES-9-1-tp5733939.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 10:12:27 -0800 (PST)", "msg_from": "\"suhas.basavaraj12\" <[email protected]>", "msg_from_op": true, "msg_subject": "NEED REPLICATION SOLUTION -POSTGRES 9.1" }, { "msg_contents": "On Wed, Nov 28, 2012 at 3:12 PM, suhas.basavaraj12 <[email protected]> wrote:\n> We are planning to migrate our production databases to different\n> servers. We have around 8 servers with 8 different clusters. We are planning\n> to shuffle databases, consolidate them into 7 clusters and migrate to new remote\n> servers.\n> We cannot use streaming replication as we are migrating different databases\n> from different clusters to one single cluster. This will result in\n> huge downtime as the data is huge.\n> Need expert advice on this scenario. Can we reduce downtime in any way?\n\nOne time we had to do something like this (on a smaller scale), and we used Slony.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 15:15:33 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NEED REPLICATION SOLUTION -POSTGRES 9.1" }, { "msg_contents": "I recommend SymmetricDS - http://www.symmetricds.org\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of suhas.basavaraj12\nSent: Wednesday, November 28, 2012 1:12 PM\nTo: [email protected]\nSubject: [PERFORM] NEED REPLICATION SOLUTION -POSTGRES 9.1\n\nHi, \n\nWe are planning to migrate our production databases to different servers. We have around 8 servers with 8 different clusters. We are planning to shuffle databases, consolidate them into 7 clusters and migrate to new remote servers. \nWe cannot use streaming replication as we are migrating different databases from different clusters to one single cluster. This will result in huge downtime as the data is huge. \nNeed expert advice on this scenario. Can we reduce downtime in any way? 
\n\nRgrds\nSuhas\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/NEED-REPLICATION-SOLUTION-POSTGRES-9-1-tp5733939.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 18:30:20 +0000", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NEED REPLICATION SOLUTION -POSTGRES 9.1" } ]
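Whichever tool is chosen (trigger-based replication such as Slony or SymmetricDS, or a per-database dump and restore), sizing the databases on each source cluster first makes it easier to estimate the outage window; a minimal sketch:

    SELECT datname,
           pg_size_pretty(pg_database_size(datname)) AS size
    FROM pg_database
    WHERE NOT datistemplate
    ORDER BY pg_database_size(datname) DESC;

Small databases can often be moved in a short maintenance window, leaving the replication tooling for only the largest ones.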
[ { "msg_contents": "Hi, I've just been benchmarking a new box I've got and running pgbench \nyields what I thought was a slow tps count. It is dificult to find \ncomparisons online of other benchmark results, I'd like to see if I have \nthe box set up reasonably well.\n\nI know oracle, et al prohibit benchmark results, but was surprised that \nthere doesn't seem to be any postgresql ones out there..\n\nAnyway, the machine is a Dell R720 with the data on a raid 10 using 8x \nintel 320 SSDs and a mirrored pair of 15k SAS HDDs configured for the \npg_xlog, both on a dell H710 raid controller, in addition it has 64Gb of \n1600Mhz memory and 2x E5-2650 processors (with HT=32 cores). The arrays \nare all setup with XFS on and tweaked as I could. The drives are 160Gb \nand overprovisioned by another 15%.\n\nI'm running postgresql 9.1 on ubuntu 12.04\n\nbonnie++ (using defaults) shows about 600MB/s sequential read/write IO \non the main data array, this doesn't seem too bad although the specs \nshow over 200MB/s should be achievable per drive.\n\npgbench (using a scaling factor of 100 with 100 clients and 25 threads) \ngives an average of about 7200tps.\n\nDoes this look acceptable? Instinctively it feels on the low side, \nalthough I noted that a couple of blogs show \n(http://www.fuzzy.cz/en/articles/ssd-benchmark-results-read-write-pgbench/ \nand \nhttp://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html) \nshow around 1500tps for a single ssd, so maybe this is what is expected.\n\nThe interesting param differences from the postgresql conf are:\nshare_buffers=6Gb\nwork_mem=64Mb\nmax_stack_depth=4Mb\nrandom_page_cost=1.1\ncpu_tuple_cost=0.1\ncpu_index_tuple_cost=0.05\ncpu_operator_cost=0.025\neffective_cache_size=40Gb\n\nI'd be happy to provide any other configs, etc assuming the tps values \nare way off the expected.\n\nThanks\n\nJohn\n\nps. the number of \"safe\" ssds available in the uk seems to be rather \nlimited, hence the intel 320s which I probably aren't as fast as modern \ndrives.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 18:37:32 +0000", "msg_from": "John Lister <[email protected]>", "msg_from_op": true, "msg_subject": "Comparative tps question" }, { "msg_contents": "On Wed, Nov 28, 2012 at 12:37 PM, John Lister <[email protected]> wrote:\n> Hi, I've just been benchmarking a new box I've got and running pgbench\n> yields what I thought was a slow tps count. It is dificult to find\n> comparisons online of other benchmark results, I'd like to see if I have the\n> box set up reasonably well.\n>\n> I know oracle, et al prohibit benchmark results, but was surprised that\n> there doesn't seem to be any postgresql ones out there..\n>\n> Anyway, the machine is a Dell R720 with the data on a raid 10 using 8x intel\n> 320 SSDs and a mirrored pair of 15k SAS HDDs configured for the pg_xlog,\n> both on a dell H710 raid controller, in addition it has 64Gb of 1600Mhz\n> memory and 2x E5-2650 processors (with HT=32 cores). The arrays are all\n> setup with XFS on and tweaked as I could. 
The drives are 160Gb and\n> overprovisioned by another 15%.\n>\n> I'm running postgresql 9.1 on ubuntu 12.04\n>\n> bonnie++ (using defaults) shows about 600MB/s sequential read/write IO on\n> the main data array, this doesn't seem too bad although the specs show over\n> 200MB/s should be achievable per drive.\n\nProbably this limitation is coming from sata bus. It shouldn't be a\nproblem in practice. Can you report bonnie++ seek performance?\nAnother possibility is the raid controller is introducing overhead\nhere.\n\n> pgbench (using a scaling factor of 100 with 100 clients and 25 threads)\n> gives an average of about 7200tps.\n>\n> Does this look acceptable? Instinctively it feels on the low side, although\n> I noted that a couple of blogs show\n> (http://www.fuzzy.cz/en/articles/ssd-benchmark-results-read-write-pgbench/\n> and\n> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html)\n> show around 1500tps for a single ssd, so maybe this is what is expected.\n>\n> The interesting param differences from the postgresql conf are:\n> share_buffers=6Gb\n> work_mem=64Mb\n> max_stack_depth=4Mb\n> random_page_cost=1.1\n> cpu_tuple_cost=0.1\n> cpu_index_tuple_cost=0.05\n> cpu_operator_cost=0.025\n> effective_cache_size=40Gb\n\n*) none of the above settings will influence storage bound pgbench\nresults. Influential settings are fsync, synchronous_commit,\nwal_sync_method, wal_level, full_page_writes, wal_buffers,\nwal_writer_delay, and commit_delay. These settings are basically\nmanaging various tradeoffs, espeically in the sense of safety vs\nperformance.\n\n> I'd be happy to provide any other configs, etc assuming the tps values are\n> way off the expected.\n\n*) Very first thing we need to check is if we are storage bound (check\ni/o wait) and if so where the bind up is. Could be on the wal or heap\nvolume. Another possibility is that we're lock bound which is a\ncompletely different issue to deal with.\n\nso we want to see top, iostat, vmstat, etc while test is happening.\n\n*) another interesting test to run is large scaling factor (ideally,\nat least 2x ram) read only test via pgbench -S.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Nov 2012 13:21:37 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparative tps question" }, { "msg_contents": "On 28/11/2012 19:21, Merlin Moncure wrote:\n> On Wed, Nov 28, 2012 at 12:37 PM, John Lister <[email protected]> wrote:\n>> Hi, I've just been benchmarking a new box I've got and running pgbench\n>> yields what I thought was a slow tps count. It is dificult to find\n>> comparisons online of other benchmark results, I'd like to see if I have the\n>> box set up reasonably well.\n>>\n>> I know oracle, et al prohibit benchmark results, but was surprised that\n>> there doesn't seem to be any postgresql ones out there..\n>>\n>> Anyway, the machine is a Dell R720 with the data on a raid 10 using 8x intel\n>> 320 SSDs and a mirrored pair of 15k SAS HDDs configured for the pg_xlog,\n>> both on a dell H710 raid controller, in addition it has 64Gb of 1600Mhz\n>> memory and 2x E5-2650 processors (with HT=32 cores). The arrays are all\n>> setup with XFS on and tweaked as I could. 
The drives are 160Gb and\n>> overprovisioned by another 15%.\n>>\n>> I'm running postgresql 9.1 on ubuntu 12.04\n>>\n>> bonnie++ (using defaults) shows about 600MB/s sequential read/write IO on\n>> the main data array, this doesn't seem too bad although the specs show over\n>> 200MB/s should be achievable per drive.\n> Probably this limitation is coming from sata bus. It shouldn't be a\n> problem in practice. Can you report bonnie++ seek performance?\n> Another possibility is the raid controller is introducing overhead\n> here.\nI must have misread the numbers before when using bonnie++, run it again \nand getting 1.3Gb/s read and 700Mb/s write which looks more promising. \nIn terms of vmstat:\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 5 1 0 275800 564 62541220 0 0 346904 259208 18110 \n12013 7 3 86 5\nand iostat\navg-cpu: %user %nice %system %iowait %steal %idle\n 8.97 0.00 3.95 2.04 0.00 85.03\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsdc 4716.00 1271456.00 0.00 1271456 0\n\nobviously the figures varied for read/write speed during the tests. but \niowait averaged about 3% with the system about 85-90% idle. Oddly bonnie \nreports near 80% cpu use during the test which seems high?\n\nThe H710 is capable of using 6Gbps sata drives although the intel ones \nare limited to 3Gbps, given the above results, the io performance looks \nto be ok?\n>> pgbench (using a scaling factor of 100 with 100 clients and 25 threads)\n>> gives an average of about 7200tps.\n>>\n>> Does this look acceptable? Instinctively it feels on the low side, although\n>> I noted that a couple of blogs show\n>> (http://www.fuzzy.cz/en/articles/ssd-benchmark-results-read-write-pgbench/\n>> and\n>> http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html)\n>> show around 1500tps for a single ssd, so maybe this is what is expected.\n>>\n>> The interesting param differences from the postgresql conf are:\n>> share_buffers=6Gb\n>> work_mem=64Mb\n>> max_stack_depth=4Mb\n>> random_page_cost=1.1\n>> cpu_tuple_cost=0.1\n>> cpu_index_tuple_cost=0.05\n>> cpu_operator_cost=0.025\n>> effective_cache_size=40Gb\n> *) none of the above settings will influence storage bound pgbench\n> results. Influential settings are fsync, synchronous_commit,\n> wal_sync_method, wal_level, full_page_writes, wal_buffers,\n> wal_writer_delay, and commit_delay. These settings are basically\n> managing various tradeoffs, espeically in the sense of safety vs\n> performance.\nI figured they may influence the planner, caching of the queries. Of the \nones you list only this is changed:\nwal_level=hot_standby\n\n> *) Very first thing we need to check is if we are storage bound (check \n> i/o wait) and if so where the bind up is. Could be on the wal or heap \n> volume. Another possibility is that we're lock bound which is a \n> completely different issue to deal with. so we want to see top, \n> iostat, vmstat, etc while test is happening. 
\nio_wait is typically <20% which is worse than for bonnie.\nvmstat typical figures are during pgbench are\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n30 1 0 261900 44340 62650808 0 0 88348 74500 103544 175006 \n53 20 21 6\n\nand iostat (sda is the wal device)\navg-cpu: %user %nice %system %iowait %steal %idle\n 52.80 0.00 17.94 12.22 0.00 17.03\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 2544.00 0.00 66432.00 0 132864\nsdc 4153.00 132848.00 136.00 265696 272\n\nI noticed that the system values are usually in the 20% region, could \nthis be the locks? btw pgbench is running on the db server not a client \n- would that influence things dramatically.\n\n> *) another interesting test to run is large scaling factor (ideally, \n> at least 2x ram) read only test via pgbench -S. merlin \nMight give that a go when I next get a chance to run the tests...\n\n\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 16:56:37 +0000", "msg_from": "John Lister <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comparative tps question" }, { "msg_contents": "On Thu, Nov 29, 2012 at 10:56 AM, John Lister <[email protected]> wrote:\n> I must have misread the numbers before when using bonnie++, run it again and\n> getting 1.3Gb/s read and 700Mb/s write which looks more promising. In terms\n> of vmstat:\n\npretty nice.\n\n>> *) Very first thing we need to check is if we are storage bound (check i/o\n>> wait) and if so where the bind up is. Could be on the wal or heap volume.\n>> Another possibility is that we're lock bound which is a completely different\n>> issue to deal with. so we want to see top, iostat, vmstat, etc while test is\n>> happening.\n>\n> io_wait is typically <20% which is worse than for bonnie.\n> vmstat typical figures are during pgbench are\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa\n> 30 1 0 261900 44340 62650808 0 0 88348 74500 103544 175006 53\n> 20 21 6\n>\n> and iostat (sda is the wal device)\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 52.80 0.00 17.94 12.22 0.00 17.03\n>\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sda 2544.00 0.00 66432.00 0 132864\n> sdc 4153.00 132848.00 136.00 265696 272\n>\n> I noticed that the system values are usually in the 20% region, could this\n> be the locks? btw pgbench is running on the db server not a client - would\n> that influence things dramatically.\n\nSince we have some idle cpu% here we can probably eliminate pgbench as\na bottleneck by messing around with the -j switch. another thing we\nwant to test is the \"-N\" switch -- this doesn't update the tellers and\nbranches table which in high concurrency situations can bind you from\nlocking perspective.\n\none thing that immediately jumps out here is that your wal volume\ncould be holding you up. so it's possible we may want to move wal to\nthe ssd volume. 
if you can scrounge up a 9.2 pgbench, we can gather\nmore evidence for that by running pgbench with the\n\"--unlogged-tables\" option, which creates the tables unlogged so that\nthey are not wal logged (for the record, this causes tables to be\ntruncated when not shut down in clean state).\n\nputting all the options above together (history only, no wal, multi\nthread) and you're test is more approximating random device write\nperformance.\n\n>> *) another interesting test to run is large scaling factor (ideally, at\n>> least 2x ram) read only test via pgbench -S. merlin\n>\n> Might give that a go when I next get a chance to run the tests...\n\nyeah -- this will tell us raw seek performance of ssd volume which\npresumably will be stupendous. 2x is minimum btw 10x would be more\nappropriate.\n\nsince you're building a beast, other settings to explore are numa\n(http://frosty-postgres.blogspot.com/2012/08/postgresql-numa-and-zone-reclaim-mode.html)\nand dell memory bios settings that are occasionally set from the\nfactory badly (see here:\nhttp://bleything.net/articles/postgresql-benchmarking-memory.html).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 11:33:49 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparative tps question" }, { "msg_contents": "On 29/11/2012 17:33, Merlin Moncure wrote:\n> Since we have some idle cpu% here we can probably eliminate pgbench as \n> a bottleneck by messing around with the -j switch. another thing we \n> want to test is the \"-N\" switch -- this doesn't update the tellers and \n> branches table which in high concurrency situations can bind you from \n> locking perspective.\nUsing -N gives around a 15% increase in tps with no major changes in \nload, etc. using more threads slightly drops the performance (as \nexpected with only 32 \"cores\"). dropping it does give a slight increase \n(presumably because half the cores aren't real).\n\n> one thing that immediately jumps out here is that your wal volume\n> could be holding you up. so it's possible we may want to move wal to\n> the ssd volume. if you can scrounge up a 9.2 pgbench, we can gather\n> more evidence for that by running pgbench with the\n> \"--unlogged-tables\" option, which creates the tables unlogged so that\n> they are not wal logged (for the record, this causes tables to be\n> truncated when not shut down in clean state).\nI did notice that using -S drives the tps up to near 30K tps, so it is \npossibly the wal volume, although saying that I did move the pg_xlog \ndirectory onto the ssd array before posting to the list and the \ndifference wasn't significant. I'll try and repeat that when I get some \nmore downtime (I'm having to run the current tests while the db is live, \nbut under light load).\n\nI'll have a look at using the 9.2 pgbench and see what happens.\n> yeah -- this will tell us raw seek performance of ssd volume which\n> presumably will be stupendous. 
2x is minimum btw 10x would be more\n> appropriate.\n>\n> since you're building a beast, other settings to explore are numa\n> (http://frosty-postgres.blogspot.com/2012/08/postgresql-numa-and-zone-reclaim-mode.html)\n> and dell memory bios settings that are occasionally set from the\n> factory badly (see here:\n> http://bleything.net/articles/postgresql-benchmarking-memory.html).\nCheers for the links, I'd already looked at the numa stuff and disabled \nzone reclaim. I was looking at using the patch previously posted that \nused shared mode for the master process and then local only for the \nworkers - excuse the terminology - but time constraints prevented that.\nMade sure the box was in performance mode in the bios, unfortunately I \nspotted bens blog when I was setting the box up, but didn't have time to \ngo through all the tests. At the time performance seemed ok (well better \nthan the previous box :) - but having it live for a while made me think \nI or it could be doing better.\n\nAnyway, I still think it would be nice to post tps results for compative \npurposes, so if I get a minute or two I'll create a site and stick mine \non there.\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 19:59:14 +0000", "msg_from": "John Lister <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comparative tps question" }, { "msg_contents": "\nOn 29/11/2012 17:33, Merlin Moncure wrote:\n> one thing that immediately jumps out here is that your wal volume \n> could be holding you up. so it's possible we may want to move wal to \n> the ssd volume. if you can scrounge up a 9.2 pgbench, we can gather \n> more evidence for that by running pgbench with the \"--unlogged-tables\" \n> option, which creates the tables unlogged so that they are not wal \n> logged (for the record, this causes tables to be truncated when not \n> shut down in clean state).\nOk, got myself a 9.2 version of pgbench and run it a few times on \nunlogged tables...\nchanging the number of threads has maybe a 5% change in values which \nisn't probably too much to worry about.\n-j 25 -c 100 -s 100 gives a tps of around 10.5k\nusing -N ups that to around 20k\nusing -S ups that again to around 40k\n\nI'll have to wait until I get to shut the db down again to try the wal \non an ssd. Although unless I did something wrong it didn't seem to make \na huge difference before....\n\nDuring these tests, iowait dropped to almost 0, user and sys stayed \naround the same (60% and 20% respectively). although the disk traffic \nwas only in the 10s of Mb/s which seems very low - unless there is some \nwierd caching going on and it gets dumped at a later date?\n\n\nJohn\n\n-- \nGet the PriceGoblin Browser Addon\nwww.pricegoblin.co.uk\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 04 Dec 2012 16:30:43 +0000", "msg_from": "John Lister <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comparative tps question" } ]
[ { "msg_contents": "Hello,\n\nI am toying around with 9.2.1, trying to measure/determine how\nindex-only scans can improve our performance.\n\nA small script which is attached to this mail, shows that as long\nas the table has been VACUUM FULL'd, there is a unusual high\namount of heap fetches. It is strange that the visibilitymap_test\npredicate fails in these situations, is the visibility map\nsomehow trashed in this situation? It should not, or at least the\ndocumentation[1] should state it (my understanding is that vacuum\nfull does *more* than vacuum, but nothing less) (note to usual\nanti vacuum full trollers: I know you hate vacuum full).\n\nUsing pg 9.2.1 compiled from sources, almost standard\nconfiguration except shared_buffers at 512M, effective_cache_size\nat 1536M, random_page_cost at 2, and vacuum delays increased.\n\nPlease find complete logs attached, and selected logs below:\n\nAfter table creation + analyze:\n\n Index Only Scan using i on ta (cost=0.00..156991.10 rows=2018667 width=4) (actual time=0.034..336.443 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 2000000\n\nAfter vacuum:\n\n Index Only Scan using i on ta (cost=0.00..50882.62 rows=2018667 width=4) (actual time=0.014..193.120 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 0\n\nAfter vacuum analyze:\n\n Index Only Scan using i on ta (cost=0.00..50167.13 rows=1990353 width=4) (actual time=0.015..193.035 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 0\n\nAfter vacuum full:\n\n Index Only Scan using i on ta (cost=0.00..155991.44 rows=1990333 width=4) (actual time=0.042..364.412 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 2000000\n ^^^^^^^ uh uh, looking bad\n\nAfter vacuum full analyze:\n\n Index Only Scan using i on ta (cost=0.00..157011.85 rows=2030984 width=4) (actual time=0.025..365.657 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 2000000\n\nAfter vacuum:\n\n Index Only Scan using i on ta (cost=0.00..51192.45 rows=2031000 width=4) (actual time=0.015..192.520 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 0\n\nThanks for any comments/hints,\n\nRef: \n[1] http://www.postgresql.org/docs/9.1/static/sql-vacuum.html\n\n\n\nDROP TABLE ta;\npsql:/tmp/vacfull.sql:3: ERROR: table \"ta\" does not exist\nCREATE TABLE ta (ca int, cb int, cc int);\nCREATE TABLE\nINSERT INTO ta VALUES (generate_series(1, 5), generate_series(1, 10000000), generate_series(1, 10000000));\nINSERT 0 10000000\nANALYZE ta;\nANALYZE\nCREATE INDEX i ON ta (ca, cb, cc);\nCREATE INDEX\nEXPLAIN ANALYZE SELECT cb FROM ta WHERE ca = 1 ORDER BY cb;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i on ta (cost=0.00..156991.10 rows=2018667 width=4) (actual time=0.034..336.443 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 2000000\n Total runtime: 385.023 ms\n(4 rows)\n\nVACUUM ta;\nVACUUM\nEXPLAIN ANALYZE SELECT cb FROM ta WHERE ca = 1 ORDER BY cb;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i on ta (cost=0.00..50882.62 rows=2018667 width=4) (actual time=0.014..193.120 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 0\n Total runtime: 241.079 ms\n(4 rows)\n\nVACUUM ANALYZE ta;\nVACUUM\nEXPLAIN ANALYZE SELECT cb FROM ta WHERE ca = 1 ORDER BY cb;\n QUERY PLAN 
\n-----------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i on ta (cost=0.00..50167.13 rows=1990353 width=4) (actual time=0.015..193.035 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 0\n Total runtime: 241.101 ms\n(4 rows)\n\nVACUUM FULL ta;\nVACUUM\nEXPLAIN ANALYZE SELECT cb FROM ta WHERE ca = 1 ORDER BY cb;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i on ta (cost=0.00..155991.44 rows=1990333 width=4) (actual time=0.042..364.412 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 2000000\n Total runtime: 412.715 ms\n(4 rows)\n\nVACUUM FULL ANALYZE ta;\nVACUUM\nEXPLAIN ANALYZE SELECT cb FROM ta WHERE ca = 1 ORDER BY cb;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i on ta (cost=0.00..157011.85 rows=2030984 width=4) (actual time=0.025..365.657 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 2000000\n Total runtime: 414.223 ms\n(4 rows)\n\nVACUUM ta;\nVACUUM\nEXPLAIN ANALYZE SELECT cb FROM ta WHERE ca = 1 ORDER BY cb;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using i on ta (cost=0.00..51192.45 rows=2031000 width=4) (actual time=0.015..192.520 rows=2000000 loops=1)\n Index Cond: (ca = 1)\n Heap Fetches: 0\n Total runtime: 240.918 ms\n(4 rows)\n\nDROP TABLE ta;\nDROP TABLE\n\n\n-- \nGuillaume Cottenceau\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 29 Nov 2012 12:33:50 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": true, "msg_subject": "9.2.1 & index-only scans : abnormal heap fetches after VACUUM FULL " }, { "msg_contents": "On Thu, Nov 29, 2012 at 5:03 PM, Guillaume Cottenceau <[email protected]> wrote:\n\n> Hello,\n>\n> I am toying around with 9.2.1, trying to measure/determine how\n> index-only scans can improve our performance.\n>\n> A small script which is attached to this mail, shows that as long\n> as the table has been VACUUM FULL'd, there is a unusual high\n> amount of heap fetches. It is strange that the visibilitymap_test\n> predicate fails in these situations, is the visibility map\n> somehow trashed in this situation? It should not, or at least the\n> documentation[1] should state it (my understanding is that vacuum\n> full does *more* than vacuum, but nothing less) (note to usual\n> anti vacuum full trollers: I know you hate vacuum full).\n>\n>\nI don't find it very surprising given that VACUUM FULL is now implemented\nas a CLUSTER command which rewrites the entire heap, thus invalidating all\nthe visibility map info whatsoever. The code paths that VACUUM FULL and\nLAZY VACUUM takes are now completely different.\n\nEven with the old VACUUM FULL we would have seen some impact on heap\nfetches because it used to move tuples around and thus potentially\nresetting visibility map bits. But its definitely going to be worse with\nthe new implementation.\n\nNow can CLUSTER or VACUUM FULL recreate the visibility map with all bits\nset to visible, thats an entirely different question. 
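(As an aside, a cheap way to watch the visibility map get invalidated
from plain SQL -- 'ta' is just the example table from the original
script, and pg_class.relallvisible exists as of 9.2:

    SELECT relname, relpages, relallvisible FROM pg_class WHERE relname = 'ta';
    VACUUM FULL ta;   -- after the rewrite, relallvisible should drop back to 0
    SELECT relname, relpages, relallvisible FROM pg_class WHERE relname = 'ta';
    VACUUM ta;        -- a plain vacuum rebuilds the visibility map
    SELECT relname, relpages, relallvisible FROM pg_class WHERE relname = 'ta';

which is consistent with the high Heap Fetches counts in the plans
above.)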
I don't think it can,\nbut then I haven't thought through this completely.\n\nThanks,\nPavan\n\nOn Thu, Nov 29, 2012 at 5:03 PM, Guillaume Cottenceau <[email protected]> wrote:\n\nHello,\n\nI am toying around with 9.2.1, trying to measure/determine how\nindex-only scans can improve our performance.\n\nA small script which is attached to this mail, shows that as long\nas the table has been VACUUM FULL'd, there is a unusual high\namount of heap fetches. It is strange that the visibilitymap_test\npredicate fails in these situations, is the visibility map\nsomehow trashed in this situation? It should not, or at least the\ndocumentation[1] should state it (my understanding is that vacuum\nfull does *more* than vacuum, but nothing less) (note to usual\nanti vacuum full trollers: I know you hate vacuum full).I don't find it very surprising given that VACUUM FULL is now implemented as a CLUSTER command which rewrites the entire heap, thus invalidating all the visibility map info whatsoever. The code paths that VACUUM FULL and LAZY VACUUM takes are now completely different.\nEven with the old VACUUM FULL we would have seen some impact on heap fetches because it used to move tuples around and thus potentially resetting visibility map bits. But its definitely going to be worse with the new implementation.\nNow can CLUSTER or VACUUM FULL recreate the visibility map with all bits set to visible, thats an entirely different question. I don't think it can, but then I haven't thought through this completely. \nThanks,Pavan", "msg_date": "Thu, 29 Nov 2012 17:20:01 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.2.1 & index-only scans : abnormal heap fetches after VACUUM\n FULL" }, { "msg_contents": "On 2012-11-29 17:20:01 +0530, Pavan Deolasee wrote:\n> On Thu, Nov 29, 2012 at 5:03 PM, Guillaume Cottenceau <[email protected]> wrote:\n>\n> > Hello,\n> >\n> > I am toying around with 9.2.1, trying to measure/determine how\n> > index-only scans can improve our performance.\n> >\n> > A small script which is attached to this mail, shows that as long\n> > as the table has been VACUUM FULL'd, there is a unusual high\n> > amount of heap fetches. It is strange that the visibilitymap_test\n> > predicate fails in these situations, is the visibility map\n> > somehow trashed in this situation? It should not, or at least the\n> > documentation[1] should state it (my understanding is that vacuum\n> > full does *more* than vacuum, but nothing less) (note to usual\n> > anti vacuum full trollers: I know you hate vacuum full).\n> >\n> >\n> I don't find it very surprising given that VACUUM FULL is now implemented\n> as a CLUSTER command which rewrites the entire heap, thus invalidating all\n> the visibility map info whatsoever.\n\nMe neither.\n\n> Now can CLUSTER or VACUUM FULL recreate the visibility map with all bits\n> set to visible, thats an entirely different question. I don't think it can,\n> but then I haven't thought through this completely.\n\nIt can't set everything to visible as it also copies RECENTLY_DEAD\ntuples and tuples which are not yet visible to other transactions, but\nit should be relatively easy to keep enough information about whether it\ncan set the current page to all visible. 
At least for the data in the\nmain relation, the toast tables are a different matter.\nJust tracking whether the page in rewriteheap.c's state->rs_buffer\ncontains only tuples that are clearly visible according to the xmin\nhorizon seems to be enough.\n\nThe current effect of resetting the VM has the disadvantage of making\nthe next autovacuum basically a full table vacuum without any\nbenefits...\n\nGreetings,\n\nAndres\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 13:12:28 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.2.1 & index-only scans : abnormal heap fetches after VACUUM\n FULL" }, { "msg_contents": "On Thu, Nov 29, 2012 at 5:42 PM, Andres Freund <[email protected]>wrote:\n\n> On 2012-11-29 17:20:01 +0530, Pavan Deolasee wrote:\n>\n> > Now can CLUSTER or VACUUM FULL recreate the visibility map with all bits\n> > set to visible, thats an entirely different question. I don't think it\n> can,\n> > but then I haven't thought through this completely.\n>\n> It can't set everything to visible as it also copies RECENTLY_DEAD\n> tuples and tuples which are not yet visible to other transactions, but\n> it should be relatively easy to keep enough information about whether it\n> can set the current page to all visible.\n\n\nYeah, that looks fairly easy to have. Thinking about it more, now that we\nhave ability to skip WAL for the case when a table is created and populated\nin the same transaction, we could also set the visibility map bits for such\na table (if we are not doing that already). That should be fairly safe too.\n\nThanks,\nPavan\n\nOn Thu, Nov 29, 2012 at 5:42 PM, Andres Freund <[email protected]> wrote:\nOn 2012-11-29 17:20:01 +0530, Pavan Deolasee wrote:\n\n> Now can CLUSTER or VACUUM FULL recreate the visibility map with all bits\n> set to visible, thats an entirely different question. I don't think it can,\n> but then I haven't thought through this completely.\n\nIt can't set everything to visible as it also copies RECENTLY_DEAD\ntuples and tuples which are not yet visible to other transactions, but\nit should be relatively easy to keep enough information about whether it\ncan set the current page to all visible. Yeah, that looks fairly easy to have. Thinking about it more, now that we have ability to skip WAL for the case when a table is created and populated in the same transaction, we could also set the visibility map bits for such a table (if we are not doing that already). That should be fairly safe too.\nThanks,Pavan", "msg_date": "Thu, 29 Nov 2012 17:59:39 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.2.1 & index-only scans : abnormal heap fetches after VACUUM\n FULL" }, { "msg_contents": "On 2012-11-29 17:59:39 +0530, Pavan Deolasee wrote:\n> On Thu, Nov 29, 2012 at 5:42 PM, Andres Freund <[email protected]>wrote:\n>\n> > On 2012-11-29 17:20:01 +0530, Pavan Deolasee wrote:\n> >\n> > > Now can CLUSTER or VACUUM FULL recreate the visibility map with all bits\n> > > set to visible, thats an entirely different question. 
I don't think it\n> > can,\n> > > but then I haven't thought through this completely.\n> >\n> > It can't set everything to visible as it also copies RECENTLY_DEAD\n> > tuples and tuples which are not yet visible to other transactions, but\n> > it should be relatively easy to keep enough information about whether it\n> > can set the current page to all visible.\n>\n>\n> Yeah, that looks fairly easy to have. Thinking about it more, now that we\n> have ability to skip WAL for the case when a table is created and populated\n> in the same transaction, we could also set the visibility map bits for such\n> a table (if we are not doing that already). That should be fairly safe too.\n\nI don't think the latter would be safe at all. Every repeatable read\ntransaction that started before the table creation would see that tables\ncontent based on the visibilitymap instead of seeing it as empty.\n\nGreetings,\n\nAndres Freund\n\n--\n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 13:36:13 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.2.1 & index-only scans : abnormal heap fetches after VACUUM\n FULL" }, { "msg_contents": "On Thu, Nov 29, 2012 at 6:06 PM, Andres Freund <[email protected]>wrote:\n\n> On 2012-11-29 17:59:39 +0530, Pavan Deolasee wrote:\n>\n> >\n> >\n> > Yeah, that looks fairly easy to have. Thinking about it more, now that we\n> > have ability to skip WAL for the case when a table is created and\n> populated\n> > in the same transaction, we could also set the visibility map bits for\n> such\n> > a table (if we are not doing that already). That should be fairly safe\n> too.\n>\n> I don't think the latter would be safe at all. Every repeatable read\n> transaction that started before the table creation would see that tables\n> content based on the visibilitymap instead of seeing it as empty.\n>\n\nYeah, but that should be easy to fix, no ? We know the transaction that\ncreated the table and we can check if that transaction is visible to our\nsnapshot or not. If the creating transaction itself is not visible, the\ndata in the table is not visible either. OTOH if the creating transaction\nis visible and is committed, we can trust the visibility map as well. Thats\nprobably better than scanning the entire table just to find that we\ncan/can't see all/any rows.\n\nIts getting slightly off-topic, so my apologies anyways.\n\nThanks,\nPavan\n\nOn Thu, Nov 29, 2012 at 6:06 PM, Andres Freund <[email protected]> wrote:\nOn 2012-11-29 17:59:39 +0530, Pavan Deolasee wrote:\n>\n>\n> Yeah, that looks fairly easy to have. Thinking about it more, now that we\n> have ability to skip WAL for the case when a table is created and populated\n> in the same transaction, we could also set the visibility map bits for such\n> a table (if we are not doing that already). That should be fairly safe too.\n\nI don't think the latter would be safe at all. Every repeatable read\ntransaction that started before the table creation would see that tables\ncontent based on the visibilitymap instead of seeing it as empty.Yeah, but that should be easy to fix, no ? We know the transaction that created the table and we can check if that transaction is visible to our snapshot or not. 
If the creating transaction itself is not visible, the data in the table is not visible either. OTOH if the creating transaction is visible and is committed, we can trust the visibility map as well. Thats probably better than scanning the entire table just to find that we can/can't see all/any rows.\nIts getting slightly off-topic, so my apologies anyways.Thanks,Pavan", "msg_date": "Thu, 29 Nov 2012 18:15:19 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.2.1 & index-only scans : abnormal heap fetches after VACUUM\n FULL" } ]
[ { "msg_contents": "Niels Kristian Schjødt wrote:\n\n> Okay, now I'm done the updating as described above. I did the\n> postgres.conf changes. I did the kernel changes, i added two\n> SSD's in a software RAID1 where the pg_xlog is now located -\n> unfortunately the the picture is still the same :-( \n\nYou said before that you were seeing high disk wait numbers. Now it\nis zero accourding to your disk utilization graph. That sounds like\na change to me.\n\n> When the database is under \"heavy\" load, there is almost no\n> improvement to see in the performance compared to before the\n> changes.\n\nIn client-visible response time and throughput, I assume, not\nresource usage numbers?\n\n> A lot of both read and writes takes more than a 1000 times as\n> long as they usually do, under \"lighter\" overall load.\n\nAs an odd coincidence, you showed your max_connections setting to\nbe 1000.\n\nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Nov 2012 20:24:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]>:\n\n> Niels Kristian Schjødt wrote:\n> \n>> Okay, now I'm done the updating as described above. I did the\n>> postgres.conf changes. I did the kernel changes, i added two\n>> SSD's in a software RAID1 where the pg_xlog is now located -\n>> unfortunately the the picture is still the same :-( \n> \n> You said before that you were seeing high disk wait numbers. Now it\n> is zero accourding to your disk utilization graph. That sounds like\n> a change to me.\n> \n>> When the database is under \"heavy\" load, there is almost no\n>> improvement to see in the performance compared to before the\n>> changes.\n> \n> In client-visible response time and throughput, I assume, not\n> resource usage numbers?\n> \n>> A lot of both read and writes takes more than a 1000 times as\n>> long as they usually do, under \"lighter\" overall load.\n> \n> As an odd coincidence, you showed your max_connections setting to\n> be 1000.\n> \n> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n> \n> -Kevin\n\nHehe, I'm sorry if it somehow was misleading, I just wrote \"a lot of I/O\" it was CPU I/O, it also states that in the chart in the link. \nHowever, as I'm not very familiar with these deep down database and server things, I had no idea wether a disk bottle neck could hide in this I/O, so i went along with Shauns great help, that unfortunately didn't solve my issues. \nBack to the issue: Could it be that it is the fact that I'm using ubuntus built in software raid to raid my disks, and that it is not at all capable of handling the throughput?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 02:43:00 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Hmm I'm getting suspicious here. 
Maybe my new great setup with the SSD's is not really working as it should., and maybe new relic is not monitoring as It should.\n\nIf I do a \"sudo iostat -k 1\"\nI get a lot of output like this:\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsda 0.00 0.00 0.00 0 0\nsdb 0.00 0.00 0.00 0 0\nsdc 546.00 2296.00 6808.00 2296 6808\nsdd 593.00 1040.00 7416.00 1040 7416\nmd1 0.00 0.00 0.00 0 0\nmd0 0.00 0.00 0.00 0 0\nmd2 1398.00 3328.00 13064.00 3328 13064\nmd3 0.00 0.00 0.00 0 0\n\nThe storage thing is, that the sda and sdb is the SSD drives and the sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid arrays on the HDD's and the md3 is the raid on the SSD's. Neither of the md3 or the SSD's are getting utilized - and I should expect that since they are serving my pg_xlog right? - so maybe I did something wrong in the setup. Here is the path I followed:\n\n# 1) First setup the SSD drives in a software RAID1 setup:\n# http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1\n#\n# 2) Then move the postgres pg_xlog dir\n# sudo /etc/init.d/postgresql-9.2 stop \n# sudo mkdir -p /ssd/pg_xlog \n# sudo chown -R postgres.postgres /ssd/pg_xlog \n# sudo chmod 700 /ssd/pg_xlog \n# sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog \n# sudo mv /var/lib/postgresql/9.2/main/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog_old \n# sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog \n# sudo /etc/init.d/postgresql-9.2 start\n\nCan you spot something wrong?\n\n\n \nDen 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt <[email protected]>:\n\n> Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]>:\n> \n>> Niels Kristian Schjødt wrote:\n>> \n>>> Okay, now I'm done the updating as described above. I did the\n>>> postgres.conf changes. I did the kernel changes, i added two\n>>> SSD's in a software RAID1 where the pg_xlog is now located -\n>>> unfortunately the the picture is still the same :-( \n>> \n>> You said before that you were seeing high disk wait numbers. Now it\n>> is zero accourding to your disk utilization graph. That sounds like\n>> a change to me.\n>> \n>>> When the database is under \"heavy\" load, there is almost no\n>>> improvement to see in the performance compared to before the\n>>> changes.\n>> \n>> In client-visible response time and throughput, I assume, not\n>> resource usage numbers?\n>> \n>>> A lot of both read and writes takes more than a 1000 times as\n>>> long as they usually do, under \"lighter\" overall load.\n>> \n>> As an odd coincidence, you showed your max_connections setting to\n>> be 1000.\n>> \n>> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n>> \n>> -Kevin\n> \n> Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot of I/O\" it was CPU I/O, it also states that in the chart in the link. \n> However, as I'm not very familiar with these deep down database and server things, I had no idea wether a disk bottle neck could hide in this I/O, so i went along with Shauns great help, that unfortunately didn't solve my issues. 
\n> Back to the issue: Could it be that it is the fact that I'm using ubuntus built in software raid to raid my disks, and that it is not at all capable of handling the throughput?\n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 03:32:31 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Actually, what's the point in putting logs to ssd? SSDs are good for random\naccess and logs are accessed sequentially. I'd put table spaces on ssd and\nleave logs on hdd\n30 лист. 2012 04:33, \"Niels Kristian Schjødt\" <[email protected]>\nнапис.\n\n> Hmm I'm getting suspicious here. Maybe my new great setup with the SSD's\n> is not really working as it should., and maybe new relic is not monitoring\n> as It should.\n>\n> If I do a \"sudo iostat -k 1\"\n> I get a lot of output like this:\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 546.00 2296.00 6808.00 2296 6808\n> sdd 593.00 1040.00 7416.00 1040 7416\n> md1 0.00 0.00 0.00 0 0\n> md0 0.00 0.00 0.00 0 0\n> md2 1398.00 3328.00 13064.00 3328 13064\n> md3 0.00 0.00 0.00 0 0\n>\n> The storage thing is, that the sda and sdb is the SSD drives and the sdc\n> and sdd is the HDD drives. The md0, md1 and md2 is the raid arrays on the\n> HDD's and the md3 is the raid on the SSD's. Neither of the md3 or the SSD's\n> are getting utilized - and I should expect that since they are serving my\n> pg_xlog right? - so maybe I did something wrong in the setup. Here is the\n> path I followed:\n>\n> # 1) First setup the SSD drives in a software RAID1 setup:\n> #\n> http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1\n> #\n> # 2) Then move the postgres pg_xlog dir\n> # sudo /etc/init.d/postgresql-9.2 stop\n> # sudo mkdir -p /ssd/pg_xlog\n> # sudo chown -R postgres.postgres /ssd/pg_xlog\n> # sudo chmod 700 /ssd/pg_xlog\n> # sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n> # sudo mv /var/lib/postgresql/9.2/main/pg_xlog\n> /var/lib/postgresql/9.2/main/pg_xlog_old\n> # sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n> # sudo /etc/init.d/postgresql-9.2 start\n>\n> Can you spot something wrong?\n>\n>\n>\n> Den 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt <\n> [email protected]>:\n>\n> > Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]>:\n> >\n> >> Niels Kristian Schjødt wrote:\n> >>\n> >>> Okay, now I'm done the updating as described above. I did the\n> >>> postgres.conf changes. I did the kernel changes, i added two\n> >>> SSD's in a software RAID1 where the pg_xlog is now located -\n> >>> unfortunately the the picture is still the same :-(\n> >>\n> >> You said before that you were seeing high disk wait numbers. Now it\n> >> is zero accourding to your disk utilization graph. 
That sounds like\n> >> a change to me.\n> >>\n> >>> When the database is under \"heavy\" load, there is almost no\n> >>> improvement to see in the performance compared to before the\n> >>> changes.\n> >>\n> >> In client-visible response time and throughput, I assume, not\n> >> resource usage numbers?\n> >>\n> >>> A lot of both read and writes takes more than a 1000 times as\n> >>> long as they usually do, under \"lighter\" overall load.\n> >>\n> >> As an odd coincidence, you showed your max_connections setting to\n> >> be 1000.\n> >>\n> >> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n> >>\n> >> -Kevin\n> >\n> > Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot of\n> I/O\" it was CPU I/O, it also states that in the chart in the link.\n> > However, as I'm not very familiar with these deep down database and\n> server things, I had no idea wether a disk bottle neck could hide in this\n> I/O, so i went along with Shauns great help, that unfortunately didn't\n> solve my issues.\n> > Back to the issue: Could it be that it is the fact that I'm using\n> ubuntus built in software raid to raid my disks, and that it is not at all\n> capable of handling the throughput?\n> >\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nActually, what's the point in putting logs to ssd? SSDs are good for random access and logs are accessed sequentially. I'd put table spaces on ssd and leave logs on hdd\n30 лист. 2012 04:33, \"Niels Kristian Schjødt\" <[email protected]> напис.\nHmm I'm getting suspicious here. Maybe my new great setup with the SSD's is not really working as it should., and maybe new relic is not monitoring as It should.\n\nIf I do a \"sudo iostat -k 1\"\nI get a lot of output like this:\nDevice:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn\nsda               0.00         0.00         0.00          0          0\nsdb               0.00         0.00         0.00          0          0\nsdc             546.00      2296.00      6808.00       2296       6808\nsdd             593.00      1040.00      7416.00       1040       7416\nmd1               0.00         0.00         0.00          0          0\nmd0               0.00         0.00         0.00          0          0\nmd2            1398.00      3328.00     13064.00       3328      13064\nmd3               0.00         0.00         0.00          0          0\n\nThe storage thing is, that the sda and sdb is the SSD drives and the sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid arrays on the HDD's and the md3 is the raid on the SSD's. Neither of the md3 or the SSD's are getting utilized - and I should expect that since they are serving my pg_xlog right? - so maybe I did something wrong in the setup. 
Here is the path I followed:\n\n# 1) First setup the SSD drives in a software RAID1 setup:\n#   http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1\n#\n# 2) Then move the postgres pg_xlog dir\n#   sudo /etc/init.d/postgresql-9.2 stop\n#   sudo mkdir -p /ssd/pg_xlog\n#   sudo chown -R  postgres.postgres /ssd/pg_xlog\n#   sudo chmod 700 /ssd/pg_xlog\n#   sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n#   sudo mv /var/lib/postgresql/9.2/main/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog_old\n#   sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n#   sudo /etc/init.d/postgresql-9.2 start\n\nCan you spot something wrong?\n\n\n\nDen 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt <[email protected]>:\n\n> Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]>:\n>\n>> Niels Kristian Schjødt wrote:\n>>\n>>> Okay, now I'm done the updating as described above. I did the\n>>> postgres.conf changes. I did the kernel changes, i added two\n>>> SSD's in a software RAID1 where the pg_xlog is now located -\n>>> unfortunately the the picture is still the same :-(\n>>\n>> You said before that you were seeing high disk wait numbers. Now it\n>> is zero accourding to your disk utilization graph. That sounds like\n>> a change to me.\n>>\n>>> When the database is under \"heavy\" load, there is almost no\n>>> improvement to see in the performance compared to before the\n>>> changes.\n>>\n>> In client-visible response time and throughput, I assume, not\n>> resource usage numbers?\n>>\n>>> A lot of both read and writes takes more than a 1000 times as\n>>> long as they usually do, under \"lighter\" overall load.\n>>\n>> As an odd coincidence, you showed your max_connections setting to\n>> be 1000.\n>>\n>> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n>>\n>> -Kevin\n>\n> Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot of I/O\" it was CPU I/O, it also states that in the chart in the link.\n> However, as I'm not very familiar with these deep down database and server things, I had no idea wether a disk bottle neck could hide in this I/O, so i went along with Shauns great help, that unfortunately didn't solve my issues.\n\n> Back to the issue: Could it be that it is the fact that I'm using ubuntus built in software raid to raid my disks, and that it is not at all capable of handling the throughput?\n>\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 30 Nov 2012 10:37:54 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Most modern SSD are much faster for fsync type operations than a \nspinning disk - similar performance to spinning disk + writeback raid \ncontroller + battery.\n\nHowever as you mention, they are great at random IO too, so Niels, it \nmight be worth putting your postgres logs *and* data on the SSDs and \nretesting.\n\nRegards\n\nMark\n\n\n\nOn 30/11/12 21:37, Vitalii Tymchyshyn wrote:\n> Actually, what's the point in putting logs to ssd? SSDs are good for\n> random access and logs are accessed sequentially. I'd put table spaces\n> on ssd and leave logs on hdd\n>\n> 30 лист. 2012 04:33, \"Niels Kristian Schjødt\"\n> <[email protected] <mailto:[email protected]>> напис.\n>\n> Hmm I'm getting suspicious here. 
Maybe my new great setup with the\n> SSD's is not really working as it should., and maybe new relic is\n> not monitoring as It should.\n>\n> If I do a \"sudo iostat -k 1\"\n> I get a lot of output like this:\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 546.00 2296.00 6808.00 2296 6808\n> sdd 593.00 1040.00 7416.00 1040 7416\n> md1 0.00 0.00 0.00 0 0\n> md0 0.00 0.00 0.00 0 0\n> md2 1398.00 3328.00 13064.00 3328 13064\n> md3 0.00 0.00 0.00 0 0\n>\n> The storage thing is, that the sda and sdb is the SSD drives and the\n> sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid\n> arrays on the HDD's and the md3 is the raid on the SSD's. Neither of\n> the md3 or the SSD's are getting utilized - and I should expect that\n> since they are serving my pg_xlog right? - so maybe I did something\n> wrong in the setup. Here is the path I followed:\n>\n> # 1) First setup the SSD drives in a software RAID1 setup:\n> #\n> http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1\n> #\n> # 2) Then move the postgres pg_xlog dir\n> # sudo /etc/init.d/postgresql-9.2 stop\n> # sudo mkdir -p /ssd/pg_xlog\n> # sudo chown -R postgres.postgres /ssd/pg_xlog\n> # sudo chmod 700 /ssd/pg_xlog\n> # sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n> # sudo mv /var/lib/postgresql/9.2/main/pg_xlog\n> /var/lib/postgresql/9.2/main/pg_xlog_old\n> # sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n> # sudo /etc/init.d/postgresql-9.2 start\n>\n> Can you spot something wrong?\n>\n>\n>\n> Den 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt\n> <[email protected] <mailto:[email protected]>>:\n>\n> > Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]\n> <mailto:[email protected]>>:\n> >\n> >> Niels Kristian Schjødt wrote:\n> >>\n> >>> Okay, now I'm done the updating as described above. I did the\n> >>> postgres.conf changes. I did the kernel changes, i added two\n> >>> SSD's in a software RAID1 where the pg_xlog is now located -\n> >>> unfortunately the the picture is still the same :-(\n> >>\n> >> You said before that you were seeing high disk wait numbers. Now it\n> >> is zero accourding to your disk utilization graph. 
That sounds like\n> >> a change to me.\n> >>\n> >>> When the database is under \"heavy\" load, there is almost no\n> >>> improvement to see in the performance compared to before the\n> >>> changes.\n> >>\n> >> In client-visible response time and throughput, I assume, not\n> >> resource usage numbers?\n> >>\n> >>> A lot of both read and writes takes more than a 1000 times as\n> >>> long as they usually do, under \"lighter\" overall load.\n> >>\n> >> As an odd coincidence, you showed your max_connections setting to\n> >> be 1000.\n> >>\n> >> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n> >>\n> >> -Kevin\n> >\n> > Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot\n> of I/O\" it was CPU I/O, it also states that in the chart in the link.\n> > However, as I'm not very familiar with these deep down database\n> and server things, I had no idea wether a disk bottle neck could\n> hide in this I/O, so i went along with Shauns great help, that\n> unfortunately didn't solve my issues.\n> > Back to the issue: Could it be that it is the fact that I'm using\n> ubuntus built in software raid to raid my disks, and that it is not\n> at all capable of handling the throughput?\n> >\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 22:19:27 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "When I try your command sequence I end up with the contents of the new \npg_xlog owned by root. Postgres will not start:\n\nPANIC: could not open file \"pg_xlog/000000010000000600000080\" (log file \n6, segment 128): Permission denied\n\nWhile this is fixable, I suspect you have managed to leave the xlogs \ndirectory that postgres is actually using on the HDD drives.\n\n\nWhen I do this I normally do:\n$ service postgresql stop\n$ sudo mkdir -p /ssd/pg_xlog\n$ sudo chown -R postgres.postgres /ssd/pg_xlog\n$ sudo chmod 700 /ssd/pg_xlog\n$ sudo su - postgres\npostgres $ mv /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\npostgres $ rmdir /var/lib/postgresql/9.2/main/pg_xlog\npostgres $ ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\npostgres $ service postgresql start\n\nregards\n\nMark\n\nOn 30/11/12 15:32, Niels Kristian Schjødt wrote:\n> Hmm I'm getting suspicious here. Maybe my new great setup with the SSD's is not really working as it should., and maybe new relic is not monitoring as It should.\n>\n> If I do a \"sudo iostat -k 1\"\n> I get a lot of output like this:\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 546.00 2296.00 6808.00 2296 6808\n> sdd 593.00 1040.00 7416.00 1040 7416\n> md1 0.00 0.00 0.00 0 0\n> md0 0.00 0.00 0.00 0 0\n> md2 1398.00 3328.00 13064.00 3328 13064\n> md3 0.00 0.00 0.00 0 0\n>\n> The storage thing is, that the sda and sdb is the SSD drives and the sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid arrays on the HDD's and the md3 is the raid on the SSD's. Neither of the md3 or the SSD's are getting utilized - and I should expect that since they are serving my pg_xlog right? - so maybe I did something wrong in the setup. 
Here is the path I followed:\n>\n> # 1) First setup the SSD drives in a software RAID1 setup:\n> # http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1\n> #\n> # 2) Then move the postgres pg_xlog dir\n> # sudo /etc/init.d/postgresql-9.2 stop\n> # sudo mkdir -p /ssd/pg_xlog\n> # sudo chown -R postgres.postgres /ssd/pg_xlog\n> # sudo chmod 700 /ssd/pg_xlog\n> # sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n> # sudo mv /var/lib/postgresql/9.2/main/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog_old\n> # sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n> # sudo /etc/init.d/postgresql-9.2 start\n>\n> Can you spot something wrong?\n>\n>\n>\n> Den 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt <[email protected]>:\n>\n>> Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]>:\n>>\n>>> Niels Kristian Schjødt wrote:\n>>>\n>>>> Okay, now I'm done the updating as described above. I did the\n>>>> postgres.conf changes. I did the kernel changes, i added two\n>>>> SSD's in a software RAID1 where the pg_xlog is now located -\n>>>> unfortunately the the picture is still the same :-(\n>>>\n>>> You said before that you were seeing high disk wait numbers. Now it\n>>> is zero accourding to your disk utilization graph. That sounds like\n>>> a change to me.\n>>>\n>>>> When the database is under \"heavy\" load, there is almost no\n>>>> improvement to see in the performance compared to before the\n>>>> changes.\n>>>\n>>> In client-visible response time and throughput, I assume, not\n>>> resource usage numbers?\n>>>\n>>>> A lot of both read and writes takes more than a 1000 times as\n>>>> long as they usually do, under \"lighter\" overall load.\n>>>\n>>> As an odd coincidence, you showed your max_connections setting to\n>>> be 1000.\n>>>\n>>> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n>>>\n>>> -Kevin\n>>\n>> Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot of I/O\" it was CPU I/O, it also states that in the chart in the link.\n>> However, as I'm not very familiar with these deep down database and server things, I had no idea wether a disk bottle neck could hide in this I/O, so i went along with Shauns great help, that unfortunately didn't solve my issues.\n>> Back to the issue: Could it be that it is the fact that I'm using ubuntus built in software raid to raid my disks, and that it is not at all capable of handling the throughput?\n>>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 22:38:48 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Oh, yes. I don't imagine DB server without RAID+BBU :)\nWhen there is no BBU, SSD can be handy.\nBut you know, SSD is worse in linear read/write than HDD.\n\nBest regards, Vitalii Tymchyshyn\n\n\n2012/11/30 Mark Kirkwood <[email protected]>\n\n> Most modern SSD are much faster for fsync type operations than a spinning\n> disk - similar performance to spinning disk + writeback raid controller +\n> battery.\n>\n> However as you mention, they are great at random IO too, so Niels, it\n> might be worth putting your postgres logs *and* data on the SSDs and\n> retesting.\n>\n> Regards\n>\n> Mark\n>\n>\n>\n>\n> On 30/11/12 21:37, Vitalii Tymchyshyn wrote:\n>\n>> Actually, what's the point in putting logs to ssd? 
SSDs are good for\n>> random access and logs are accessed sequentially. I'd put table spaces\n>> on ssd and leave logs on hdd\n>>\n>> 30 лист. 2012 04:33, \"Niels Kristian Schjødt\"\n>> <[email protected] <mailto:nielskristian@**autouncle.com<[email protected]>>>\n>> напис.\n>>\n>>\n>> Hmm I'm getting suspicious here. Maybe my new great setup with the\n>> SSD's is not really working as it should., and maybe new relic is\n>> not monitoring as It should.\n>>\n>> If I do a \"sudo iostat -k 1\"\n>> I get a lot of output like this:\n>> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n>> sda 0.00 0.00 0.00 0 0\n>> sdb 0.00 0.00 0.00 0 0\n>> sdc 546.00 2296.00 6808.00 2296 6808\n>> sdd 593.00 1040.00 7416.00 1040 7416\n>> md1 0.00 0.00 0.00 0 0\n>> md0 0.00 0.00 0.00 0 0\n>> md2 1398.00 3328.00 13064.00 3328 13064\n>> md3 0.00 0.00 0.00 0 0\n>>\n>> The storage thing is, that the sda and sdb is the SSD drives and the\n>> sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid\n>> arrays on the HDD's and the md3 is the raid on the SSD's. Neither of\n>> the md3 or the SSD's are getting utilized - and I should expect that\n>> since they are serving my pg_xlog right? - so maybe I did something\n>> wrong in the setup. Here is the path I followed:\n>>\n>> # 1) First setup the SSD drives in a software RAID1 setup:\n>> #\n>> http://askubuntu.com/**questions/223194/setup-of-two-**\n>> additional-ssd-drives-in-raid-**1<http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1>\n>> #\n>> # 2) Then move the postgres pg_xlog dir\n>> # sudo /etc/init.d/postgresql-9.2 stop\n>> # sudo mkdir -p /ssd/pg_xlog\n>> # sudo chown -R postgres.postgres /ssd/pg_xlog\n>> # sudo chmod 700 /ssd/pg_xlog\n>> # sudo cp -rf /var/lib/postgresql/9.2/main/**pg_xlog/* /ssd/pg_xlog\n>> # sudo mv /var/lib/postgresql/9.2/main/**pg_xlog\n>> /var/lib/postgresql/9.2/main/**pg_xlog_old\n>> # sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/**pg_xlog\n>> # sudo /etc/init.d/postgresql-9.2 start\n>>\n>> Can you spot something wrong?\n>>\n>>\n>>\n>> Den 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt\n>> <[email protected] <mailto:nielskristian@**autouncle.com<[email protected]>\n>> >>:\n>>\n>>\n>> > Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]\n>> <mailto:[email protected]>>:\n>>\n>> >\n>> >> Niels Kristian Schjødt wrote:\n>> >>\n>> >>> Okay, now I'm done the updating as described above. I did the\n>> >>> postgres.conf changes. I did the kernel changes, i added two\n>> >>> SSD's in a software RAID1 where the pg_xlog is now located -\n>> >>> unfortunately the the picture is still the same :-(\n>> >>\n>> >> You said before that you were seeing high disk wait numbers. Now\n>> it\n>> >> is zero accourding to your disk utilization graph. 
That sounds\n>> like\n>> >> a change to me.\n>> >>\n>> >>> When the database is under \"heavy\" load, there is almost no\n>> >>> improvement to see in the performance compared to before the\n>> >>> changes.\n>> >>\n>> >> In client-visible response time and throughput, I assume, not\n>> >> resource usage numbers?\n>> >>\n>> >>> A lot of both read and writes takes more than a 1000 times as\n>> >>> long as they usually do, under \"lighter\" overall load.\n>> >>\n>> >> As an odd coincidence, you showed your max_connections setting to\n>> >> be 1000.\n>> >>\n>> >> http://wiki.postgresql.org/**wiki/Number_Of_Database_**\n>> Connections<http://wiki.postgresql.org/wiki/Number_Of_Database_Connections>\n>> >>\n>> >> -Kevin\n>> >\n>> > Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot\n>> of I/O\" it was CPU I/O, it also states that in the chart in the link.\n>> > However, as I'm not very familiar with these deep down database\n>> and server things, I had no idea wether a disk bottle neck could\n>> hide in this I/O, so i went along with Shauns great help, that\n>> unfortunately didn't solve my issues.\n>> > Back to the issue: Could it be that it is the fact that I'm using\n>> ubuntus built in software raid to raid my disks, and that it is not\n>> at all capable of handling the throughput?\n>> >\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list\n>> (pgsql-performance@postgresql.**org<[email protected]>\n>> <mailto:pgsql-performance@**postgresql.org<[email protected]>\n>> >)\n>>\n>> To make changes to your subscription:\n>> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>>\n>>\n>\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nOh, yes. I don't imagine DB server without RAID+BBU :)When there is no BBU, SSD can be handy.But you know, SSD is worse in linear read/write than HDD.Best regards, Vitalii Tymchyshyn\n2012/11/30 Mark Kirkwood <[email protected]>\nMost modern SSD are much faster for fsync type operations than a spinning disk - similar performance to spinning disk + writeback raid controller + battery.\n\nHowever as you mention, they are great at random IO too, so Niels, it might be worth putting your postgres logs *and* data on the SSDs and retesting.\n\nRegards\n\nMark\n\n\n\nOn 30/11/12 21:37, Vitalii Tymchyshyn wrote:\n\nActually, what's the point in putting logs to ssd? SSDs are good for\nrandom access and logs are accessed sequentially. I'd put table spaces\non ssd and leave logs on hdd\n\n30 лист. 2012 04:33, \"Niels Kristian Schjødt\"\n<[email protected] <mailto:[email protected]>> напис.\n\n\n    Hmm I'm getting suspicious here. 
Maybe my new great setup with the\n    SSD's is not really working as it should., and maybe new relic is\n    not monitoring as It should.\n\n    If I do a \"sudo iostat -k 1\"\n    I get a lot of output like this:\n    Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn\n    sda               0.00         0.00         0.00          0          0\n    sdb               0.00         0.00         0.00          0          0\n    sdc             546.00      2296.00      6808.00       2296       6808\n    sdd             593.00      1040.00      7416.00       1040       7416\n    md1               0.00         0.00         0.00          0          0\n    md0               0.00         0.00         0.00          0          0\n    md2            1398.00      3328.00     13064.00       3328      13064\n    md3               0.00         0.00         0.00          0          0\n\n    The storage thing is, that the sda and sdb is the SSD drives and the\n    sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid\n    arrays on the HDD's and the md3 is the raid on the SSD's. Neither of\n    the md3 or the SSD's are getting utilized - and I should expect that\n    since they are serving my pg_xlog right? - so maybe I did something\n    wrong in the setup. Here is the path I followed:\n\n    # 1) First setup the SSD drives in a software RAID1 setup:\n    #\n    http://askubuntu.com/questions/223194/setup-of-two-additional-ssd-drives-in-raid-1\n\n    #\n    # 2) Then move the postgres pg_xlog dir\n    #   sudo /etc/init.d/postgresql-9.2 stop\n    #   sudo mkdir -p /ssd/pg_xlog\n    #   sudo chown -R  postgres.postgres /ssd/pg_xlog\n    #   sudo chmod 700 /ssd/pg_xlog\n    #   sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n    #   sudo mv /var/lib/postgresql/9.2/main/pg_xlog\n    /var/lib/postgresql/9.2/main/pg_xlog_old\n    #   sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n    #   sudo /etc/init.d/postgresql-9.2 start\n\n    Can you spot something wrong?\n\n\n\n    Den 30/11/2012 kl. 02.43 skrev Niels Kristian Schjødt\n    <[email protected] <mailto:[email protected]>>:\n\n\n     > Den 30/11/2012 kl. 02.24 skrev \"Kevin Grittner\" <[email protected]\n    <mailto:[email protected]>>:\n     >\n     >> Niels Kristian Schjødt wrote:\n     >>\n     >>> Okay, now I'm done the updating as described above. I did the\n     >>> postgres.conf changes. I did the kernel changes, i added two\n     >>> SSD's in a software RAID1 where the pg_xlog is now located -\n     >>> unfortunately the the picture is still the same :-(\n     >>\n     >> You said before that you were seeing high disk wait numbers. Now it\n     >> is zero accourding to your disk utilization graph. 
That sounds like\n     >> a change to me.\n     >>\n     >>> When the database is under \"heavy\" load, there is almost no\n     >>> improvement to see in the performance compared to before the\n     >>> changes.\n     >>\n     >> In client-visible response time and throughput, I assume, not\n     >> resource usage numbers?\n     >>\n     >>> A lot of both read and writes takes more than a 1000 times as\n     >>> long as they usually do, under \"lighter\" overall load.\n     >>\n     >> As an odd coincidence, you showed your max_connections setting to\n     >> be 1000.\n     >>\n     >> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n     >>\n     >> -Kevin\n     >\n     > Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot\n    of I/O\" it was CPU I/O, it also states that in the chart in the link.\n     > However, as I'm not very familiar with these deep down database\n    and server things, I had no idea wether a disk bottle neck could\n    hide in this I/O, so i went along with Shauns great help, that\n    unfortunately didn't solve my issues.\n     > Back to the issue: Could it be that it is the fact that I'm using\n    ubuntus built in software raid to raid my disks, and that it is not\n    at all capable of handling the throughput?\n     >\n\n\n\n    --\n    Sent via pgsql-performance mailing list\n    ([email protected]\n    <mailto:[email protected]>)\n    To make changes to your subscription:\n    http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 30 Nov 2012 12:07:53 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Actually, what's the point in putting logs to ssd? SSDs are good for random access and logs are accessed sequentially. I'd put table spaces on ssd and leave logs on hdd\n\n30 占쌥иэ옙占�. 2012 04:33, \"Niels Kristian Schj占쏙옙dt\" <[email protected]> 占쌩аэ옙媚占�.\n\nBecause SSD's are considered faster. Then you have to put the most phyisical IO intensive operations on SSD. For the majority of databases, these are the logfiles. But you should investigate where the optimum is for your situation. \n \t\t \t \t\t \n\n\n\n\n Actually, what's the point in putting logs to ssd? SSDs are good for random access and logs are accessed sequentially. I'd put table spaces on ssd and leave logs on hdd\n\n30 占쌥иэ옙占�. 2012 04:33, \"Niels Kristian Schj占쏙옙dt\" <[email protected]> 占쌩аэ옙媚占�.\nBecause SSD's are considered faster. Then you have to put the most phyisical IO intensive operations on SSD. For the majority of databases, these are the logfiles. But you should investigate where the optimum is for your situation.", "msg_date": "Fri, 30 Nov 2012 10:14:15 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "SSDs are not faster for sequential IO as I know. That's why (with BBU or\nsynchronious_commit=off) I prefer to have logs on regular HDDs.\n\nBest reag\n\n\n2012/11/30 Willem Leenen <[email protected]>\n\n>\n> Actually, what's the point in putting logs to ssd? SSDs are good for\n> random access and logs are accessed sequentially. I'd put table spaces on\n> ssd and leave logs on hdd\n> 30 лист. 2012 04:33, \"Niels Kristian Schjødt\" <\n> [email protected]> напис.\n> Because SSD's are considered faster. Then you have to put the most\n> phyisical IO intensive operations on SSD. 
For the majority of databases,\n> these are the logfiles. But you should investigate where the optimum is for\n> your situation.\n>\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nSSDs are not faster for sequential IO as I know. That's why (with BBU or synchronious_commit=off) I prefer to have logs on regular HDDs.Best reag\n2012/11/30 Willem Leenen <[email protected]>\n\n Actually, what's the point in putting logs to ssd? SSDs are good for random access and logs are accessed sequentially. I'd put table spaces on ssd and leave logs on hdd\n\n30 лист. 2012 04:33, \"Niels Kristian Schjødt\" <[email protected]> напис.\nBecause SSD's are considered faster. Then you have to put the most phyisical IO intensive operations on SSD. For the majority of databases, these are the logfiles. But you should investigate where the optimum is for your situation. \n  \n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 30 Nov 2012 12:31:52 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/29/2012 08:32 PM, Niels Kristian Schjødt wrote:\n\n> If I do a \"sudo iostat -k 1\"\n> I get a lot of output like this:\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdc 546.00 2296.00 6808.00 2296 6808\n> sdd 593.00 1040.00 7416.00 1040 7416\n> md1 0.00 0.00 0.00 0 0\n> md0 0.00 0.00 0.00 0 0\n> md2 1398.00 3328.00 13064.00 3328 13064\n> md3 0.00 0.00 0.00 0 0\n>\n\n> The storage thing is, that the sda and sdb is the SSD drives and the\n> sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid\n> arrays on the HDD's and the md3 is the raid on the SSD's. Neither of\n> the md3 or the SSD's are getting utilized - and I should expect that\n> since they are serving my pg_xlog right?\n\nNo, that's right. They are, but it would appear that the majority of\nyour traffic actually isn't due to transaction logs like I'd suspected.\nIf you get a chance, could you monitor the contents of:\n\n/var/lib/postgresql/9.2/main/base/pgsql_tmp\n\nYour main drives are getting way, way more writes than they should. 13MB\nper second is ridiculous even under heavy write loads. Based on the TPS\ncount, you're basically saturating the ability of those two 3TB drives.\nThose writes have to be coming from somewhere.\n\n> # sudo mkdir -p /ssd/pg_xlog\n\nThis is going to sound stupid, but are you *sure* the SSD is mounted at\n/ssd ?\n\n> # sudo chown -R postgres.postgres /ssd/pg_xlog\n> # sudo chmod 700 /ssd/pg_xlog\n> # sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n> # sudo mv /var/lib/postgresql/9.2/main/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog_old\n> # sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n> # sudo /etc/init.d/postgresql-9.2 start\n\nThe rest of this is fine, except that you probably should have added:\n\nsudo chown -R postgres:postgres /ssd/pg_xlog/*\n\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 08:02:39 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/30/2012 02:37 AM, Vitalii Tymchyshyn wrote:\n\n> Actually, what's the point in putting logs to ssd? SSDs are good for\n> random access and logs are accessed sequentially.\n\nWhile this is true, Niels' problem is that his regular HDs are getting \nsaturated. In that case, moving any activity off of them is an improvement.\n\nWhy not move the data to the SSDs, you ask? Because he bought two 3TB \ndrives. The assumption here is that a 256GB SSD will not have enough \nspace for the long-term lifespan of this database.\n\nEither way, based on the iostat activity he posted, clearly there's some \nother write stream happening we're not privy to.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 08:06:48 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Den 30/11/2012 kl. 15.02 skrev Shaun Thomas <[email protected]>:\n\n> On 11/29/2012 08:32 PM, Niels Kristian Schjødt wrote:\n> \n>> If I do a \"sudo iostat -k 1\"\n>> I get a lot of output like this:\n>> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n>> sda 0.00 0.00 0.00 0 0\n>> sdb 0.00 0.00 0.00 0 0\n>> sdc 546.00 2296.00 6808.00 2296 6808\n>> sdd 593.00 1040.00 7416.00 1040 7416\n>> md1 0.00 0.00 0.00 0 0\n>> md0 0.00 0.00 0.00 0 0\n>> md2 1398.00 3328.00 13064.00 3328 13064\n>> md3 0.00 0.00 0.00 0 0\n>> \n> \n>> The storage thing is, that the sda and sdb is the SSD drives and the\n>> sdc and sdd is the HDD drives. The md0, md1 and md2 is the raid\n>> arrays on the HDD's and the md3 is the raid on the SSD's. Neither of\n>> the md3 or the SSD's are getting utilized - and I should expect that\n>> since they are serving my pg_xlog right?\n> \n> No, that's right. They are, but it would appear that the majority of your traffic actually isn't due to transaction logs like I'd suspected. If you get a chance, could you monitor the contents of:\n> \n> /var/lib/postgresql/9.2/main/base/pgsql_tmp\n> \n> Your main drives are getting way, way more writes than they should. 13MB per second is ridiculous even under heavy write loads. Based on the TPS count, you're basically saturating the ability of those two 3TB drives. 
Those writes have to be coming from somewhere.\n> \n>> # sudo mkdir -p /ssd/pg_xlog\n> \n> This is going to sound stupid, but are you *sure* the SSD is mounted at /ssd ?\n> \n>> # sudo chown -R postgres.postgres /ssd/pg_xlog\n>> # sudo chmod 700 /ssd/pg_xlog\n>> # sudo cp -rf /var/lib/postgresql/9.2/main/pg_xlog/* /ssd/pg_xlog\n>> # sudo mv /var/lib/postgresql/9.2/main/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog_old\n>> # sudo ln -s /ssd/pg_xlog /var/lib/postgresql/9.2/main/pg_xlog\n>> # sudo /etc/init.d/postgresql-9.2 start\n> \n> The rest of this is fine, except that you probably should have added:\n> \n> sudo chown -R postgres:postgres /ssd/pg_xlog/*\n> \n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\nOh my, Shaun once again you nailed it! That's what you get from working too late in the night - I forgot to run 'sudo mount -a' I feel so embarrassed now :-( - In other words no the drive was not mounted to the /ssd dir. \nSo now it is, and this has gained me a performance increase of roughly around 20% - a little less than what I would have hoped for but still better - but anyways yes that's right.\nI still see a lot of CPU I/O when doing a lot of writes, so the question is, what's next. Should I try and go' for the connection pooling thing or monitor that /var/lib/postgresql/9.2/main/base/pgsql_tmp dir (and what exactly do you mean by monitor - size?)\n\nPS. comment on the \"Why not move the data to the SSDs\" you are exactly right. i don't think the SSD's will be big enough for the data within a not too long timeframe, so that is exactly why I want to keep my data on the \"big\" drives.\nPPS. I talked with New Relic and it turns out there is something wrong with the disk monitoring tool, so that's why there was nothing in the disk charts but iostat showed a lot of activity.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 15:48:57 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/30/2012 08:48 AM, Niels Kristian Schjødt wrote:\n\n> I forgot to run 'sudo mount -a' I feel so embarrassed now :-( - In\n> other words no the drive was not mounted to the /ssd dir.\n\nYeah, that'll get ya.\n\n> I still see a lot of CPU I/O when doing a lot of writes, so the\n> question is, what's next. Should I try and go' for the connection\n> pooling thing or monitor that\n> /var/lib/postgresql/9.2/main/base/pgsql_tmp dir (and what exactly do\n> you mean by monitor - size?)\n\nWell, like Keven said, if you have more than a couple dozen connections\non your hardware, you're losing TPS. It's probably a good idea to\ninstall pgbouncer or pgpool and let your clients connect to those\ninstead. You should see a good performance boost from that.\n\nBut what concerns me is that your previous CPU charts showed a lot of\niowait. Even with the SSD taking some of the load off your write stream,\nsomething else is going on, there. That's why you need to monitor the\n\"size\" in MB, or number of files, for the pgsql_tmp directory. That's\nwhere PG puts temp files when sorts are too big for your work_mem. 
If\nthat's getting a ton of activity, that would explain some of your write\noverhead.\n\n> PPS. I talked with New Relic and it turns out there is something\n> wrong with the disk monitoring tool, so that's why there was nothing\n> in the disk charts but iostat showed a lot of activity.\n\nYeah. Next time you need to check IO, use iostat. It's not as pretty,\nbut it tells everything. ;) Just to help out with that, use:\n\niostat -dmx\n\nThat will give you extended information, including the % utilization of\nyour drives. TPS stats are nice, but I was just guessing your drives\nwere stalling out based on experience. Getting an outright percentage is\nbetter. You should also use sar. Just a plain:\n\nsar 1 100\n\nWill give you a lot of info on what the CPU is doing. You want that\n%iowait column to be as low as possible.\n\nKeep us updated.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 09:00:17 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Hmm very very interesting. Currently I run at \"medium\" load compared to the very high loads in the night.\nThis is what the CPU I/O on new relic show: https://rpm.newrelic.com/public/charts/8RnSOlWjfBy\nAnd this is what iostat shows:\n\nLinux 3.2.0-33-generic (master-db) \t11/30/2012 \t_x86_64_\t(8 CPU)\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nsda 0.00 3.46 26.62 57.06 1.66 0.68 57.41 0.04 0.43 0.77 0.28 0.09 0.73\nsdb 0.03 16.85 0.01 70.26 0.00 2.35 68.36 0.06 0.81 0.21 0.81 0.10 0.73\nsdc 1.96 56.37 25.45 172.56 0.53 3.72 43.98 30.83 155.70 25.15 174.96 1.74 34.46\nsdd 1.83 56.52 25.48 172.42 0.52 3.72 43.90 30.50 154.11 25.66 173.09 1.74 34.37\nmd1 0.00 0.00 0.00 0.00 0.00 0.00 3.02 0.00 0.00 0.00 0.00 0.00 0.00\nmd0 0.00 0.00 0.57 0.59 0.00 0.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00\nmd2 0.00 0.00 54.14 227.94 1.05 3.72 34.61 0.00 0.00 0.00 0.00 0.00 0.00\nmd3 0.00 0.00 0.01 60.46 0.00 0.68 23.12 0.00 0.00 0.00 0.00 0.00 0.00\n\nA little reminder md3 is the raid array of the ssd drives sda and sdb and the md0-2 is the array of the regular hdd drives sdc and sdd\n\nThe pgsql_tmp dir is not changing at all it's constantly empty (a size of 4.0K).\n\nSo It doesn't seem like the ssd drives is at all utilized but the regular drives certainly is. but now i know for sure that the /ssd is mounted correctly:\n\n\"sudo df /ssd\"\nFilesystem 1K-blocks Used Available Use% Mounted on\n/dev/md3 230619228 5483796 213420620 3% /ssd\n\n\n\n \n\nDen 30/11/2012 kl. 16.00 skrev Shaun Thomas <[email protected]>:\n\n> On 11/30/2012 08:48 AM, Niels Kristian Schjødt wrote:\n> \n>> I forgot to run 'sudo mount -a' I feel so embarrassed now :-( - In\n>> other words no the drive was not mounted to the /ssd dir.\n> \n> Yeah, that'll get ya.\n> \n>> I still see a lot of CPU I/O when doing a lot of writes, so the\n>> question is, what's next. 
Should I try and go' for the connection\n>> pooling thing or monitor that\n>> /var/lib/postgresql/9.2/main/base/pgsql_tmp dir (and what exactly do\n>> you mean by monitor - size?)\n> \n> Well, like Keven said, if you have more than a couple dozen connections on your hardware, you're losing TPS. It's probably a good idea to install pgbouncer or pgpool and let your clients connect to those instead. You should see a good performance boost from that.\n> \n> But what concerns me is that your previous CPU charts showed a lot of iowait. Even with the SSD taking some of the load off your write stream, something else is going on, there. That's why you need to monitor the \"size\" in MB, or number of files, for the pgsql_tmp directory. That's where PG puts temp files when sorts are too big for your work_mem. If that's getting a ton of activity, that would explain some of your write overhead.\n> \n>> PPS. I talked with New Relic and it turns out there is something\n>> wrong with the disk monitoring tool, so that's why there was nothing\n>> in the disk charts but iostat showed a lot of activity.\n> \n> Yeah. Next time you need to check IO, use iostat. It's not as pretty, but it tells everything. ;) Just to help out with that, use:\n> \n> iostat -dmx\n> \n> That will give you extended information, including the % utilization of your drives. TPS stats are nice, but I was just guessing your drives were stalling out based on experience. Getting an outright percentage is better. You should also use sar. Just a plain:\n> \n> sar 1 100\n> \n> Will give you a lot of info on what the CPU is doing. You want that %iowait column to be as low as possible.\n> \n> Keep us updated.\n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\nHmm very very interesting. 
Currently I run at \"medium\" load compared to the very high loads in the night.This is what the CPU I/O on new relic show: https://rpm.newrelic.com/public/charts/8RnSOlWjfByAnd this is what iostat shows:Linux 3.2.0-33-generic (master-db) 11/30/2012 _x86_64_ (8 CPU)Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %utilsda               0.00     3.46   26.62   57.06     1.66     0.68    57.41     0.04    0.43    0.77    0.28   0.09   0.73sdb               0.03    16.85    0.01   70.26     0.00     2.35    68.36     0.06    0.81    0.21    0.81   0.10   0.73sdc               1.96    56.37   25.45  172.56     0.53     3.72    43.98    30.83  155.70   25.15  174.96   1.74  34.46sdd               1.83    56.52   25.48  172.42     0.52     3.72    43.90    30.50  154.11   25.66  173.09   1.74  34.37md1               0.00     0.00    0.00    0.00     0.00     0.00     3.02     0.00    0.00    0.00    0.00   0.00   0.00md0               0.00     0.00    0.57    0.59     0.00     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00md2               0.00     0.00   54.14  227.94     1.05     3.72    34.61     0.00    0.00    0.00    0.00   0.00   0.00md3               0.00     0.00    0.01   60.46     0.00     0.68    23.12     0.00    0.00    0.00    0.00   0.00   0.00A little reminder md3 is the raid array of the ssd drives sda and sdb and the md0-2 is the array of the regular hdd drives sdc and sddThe pgsql_tmp dir is not changing at all it's constantly empty (a size of 4.0K).So It doesn't seem like the ssd drives is at all utilized but the regular drives certainly is. but now i know for sure that the /ssd is mounted correctly:\"sudo df /ssd\"Filesystem     1K-blocks    Used Available Use% Mounted on/dev/md3       230619228 5483796 213420620   3% /ssd Den 30/11/2012 kl. 16.00 skrev Shaun Thomas <[email protected]>:On 11/30/2012 08:48 AM, Niels Kristian Schjødt wrote:I forgot to run 'sudo mount -a' I feel so embarrassed now :-( - Inother words no the drive was not mounted to the /ssd dir.Yeah, that'll get ya.I still see a lot of CPU I/O when doing a lot of writes, so thequestion is, what's next. Should I try and go' for the connectionpooling thing or monitor that/var/lib/postgresql/9.2/main/base/pgsql_tmp dir (and what exactly doyou mean by monitor - size?)Well, like Keven said, if you have more than a couple dozen connections on your hardware, you're losing TPS. It's probably a good idea to install pgbouncer or pgpool and let your clients connect to those instead. You should see a good performance boost from that.But what concerns me is that your previous CPU charts showed a lot of iowait. Even with the SSD taking some of the load off your write stream, something else is going on, there. That's why you need to monitor the \"size\" in MB, or number of files, for the pgsql_tmp directory. That's where PG puts temp files when sorts are too big for your work_mem. If that's getting a ton of activity, that would explain some of your write overhead.PPS. I talked with New Relic and it turns out there is somethingwrong with the disk monitoring tool, so that's why there was nothingin the disk charts but iostat showed a lot of activity.Yeah. Next time you need to check IO, use iostat. It's not as pretty, but it tells everything. ;) Just to help out with that, use:iostat -dmxThat will give you extended information, including the % utilization of your drives. 
TPS stats are nice, but I was just guessing your drives were stalling out based on experience. Getting an outright percentage is better. You should also use sar. Just a plain:sar 1 100Will give you a lot of info on what the CPU is doing. You want that %iowait column to be as low as possible.Keep us updated.-- Shaun ThomasOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, [email protected]______________________________________________See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email", "msg_date": "Fri, 30 Nov 2012 16:44:33 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/30/2012 09:44 AM, Niels Kristian Schjødt wrote:\n\nJust a note on your iostat numbers. The first reading is actually just\na summary. You want the subsequent readings.\n\n> The pgsql_tmp dir is not changing at all it's constantly empty (a size\n> of 4.0K).\n\nGood.\n\n> Filesystem 1K-blocks Used Available Use% Mounted on\n> /dev/md3 230619228 5483796 213420620 3% /ssd\n\nGood.\n\nYou could just be seeing lots of genuine activity. But going back on the\nthread, I remember seeing this in your postgresql.conf:\n\nshared_buffers = 7680MB\n\nChange this to:\n\nshared_buffers = 4GB\n\nI say that because you mentioned you're using Ubuntu 12.04, and we were\nhaving some problems with PG on that platform. With shared_buffers over\n4GB, it starts doing really weird things to the memory subsystem.\nWhatever it does causes the kernel to purge cache rather aggressively.\nWe saw a 60% reduction in read IO by reducing shared_buffers to 4GB.\nWithout as many reads, your writes should be much less disruptive.\n\nYou'll need to restart PG to adopt that change.\n\nBut I encourage you to keep iostat running in a terminal window so you\ncan watch it for a while. It's very revealing.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 10:06:36 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On Nov 30, 2012, at 8:06 AM, Shaun Thomas wrote:\n\n> I say that because you mentioned you're using Ubuntu 12.04, and we were\n> having some problems with PG on that platform. With shared_buffers over\n> 4GB, it starts doing really weird things to the memory subsystem.\n> Whatever it does causes the kernel to purge cache rather aggressively.\n> We saw a 60% reduction in read IO by reducing shared_buffers to 4GB.\n> Without as many reads, your writes should be much less disruptive.\n\nHm, this sounds like something we should look into. Before we start digging do you have more to share, or did you leave it with the \"huh, that's weird; this seems to fix it\" solution?\nOn Nov 30, 2012, at 8:06 AM, Shaun Thomas wrote:I say that because you mentioned you're using Ubuntu 12.04, and we werehaving some problems with PG on that platform. 
With shared_buffers over4GB, it starts doing really weird things to the memory subsystem.Whatever it does causes the kernel to purge cache rather aggressively.We saw a 60% reduction in read IO by reducing shared_buffers to 4GB.Without as many reads, your writes should be much less disruptive.Hm, this sounds like something we should look into. Before we start digging do you have more to share, or did you leave it with the \"huh, that's weird; this seems to fix it\" solution?", "msg_date": "Fri, 30 Nov 2012 11:57:07 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "shared_buffers on ubuntu precise" }, { "msg_contents": "On 11/30/2012 01:57 PM, Ben Chobot wrote:\n\n> Hm, this sounds like something we should look into. Before we start\n> digging do you have more to share, or did you leave it with the \"huh,\n> that's weird; this seems to fix it\" solution?\n\nWe're still testing. We're still on the -31 kernel. We tried the -33 \nkernel which *might* fix it, but then this happened:\n\nhttps://bugs.launchpad.net/ubuntu/+source/linux/+bug/1084264\n\nSo now we're testing -34 which is currently proposed. Either way, it's \npretty clear that Ubuntu's choice of patches to backport is rather \neclectic and a little wonky, or that nailing down load calculations went \nawry since the NOHZ stuff started, or both. At this point, I wish we'd \nstayed on CentOS.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 14:01:45 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers on ubuntu precise" }, { "msg_contents": "On Fri, Nov 30, 2012 at 02:01:45PM -0600, Shaun Thomas wrote:\n> On 11/30/2012 01:57 PM, Ben Chobot wrote:\n> \n> >Hm, this sounds like something we should look into. Before we start\n> >digging do you have more to share, or did you leave it with the \"huh,\n> >that's weird; this seems to fix it\" solution?\n> \n> We're still testing. We're still on the -31 kernel. We tried the -33\n> kernel which *might* fix it, but then this happened:\n> \n> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1084264\n> \n> So now we're testing -34 which is currently proposed. Either way,\n> it's pretty clear that Ubuntu's choice of patches to backport is\n> rather eclectic and a little wonky, or that nailing down load\n> calculations went awry since the NOHZ stuff started, or both. At\n> this point, I wish we'd stayed on CentOS.\n\nOr Debian. Not sure what would justify use of Ubuntu as a server,\nexcept wanting to have the exact same OS as their personal computers.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. 
+\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 15:38:38 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers on ubuntu precise" }, { "msg_contents": "On 11/30/2012 02:38 PM, Bruce Momjian wrote:\n\n> Or Debian. Not sure what would justify use of Ubuntu as a server,\n> except wanting to have the exact same OS as their personal computers.\n\nHonestly not sure why we went that direction. I'm not in the sysadmin \ngroup, though I do work with them pretty closely. I think it was because \nof the LTS label, and the fact that the packages are quite a bit more \nrecent than Debian stable.\n\nI can say however, that I'm testing the 3.4 kernel right now, and it \nseems much better. I may be able to convince them to install that \ninstead if their own tests prove beneficial.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 14:46:23 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers on ubuntu precise" }, { "msg_contents": "On Fri, Nov 30, 2012 at 12:38 PM, Bruce Momjian <[email protected]> wrote:\n> Or Debian. Not sure what would justify use of Ubuntu as a server,\n> except wanting to have the exact same OS as their personal computers.\n\nWe have switched from Debian to Ubuntu: there is definitely non-zero\nvalue in the PPA hosting (although it's rather terrible in many ways),\nregular LTS releases (even if you choose not to use them right away,\nand know they are somewhat buggy at times), and working with AWS and\nCanonical as organizations (that, most importantly, can interact\ndirectly without my own organization) on certain issues. For example,\nthis dog of a bug:\n\n https://bugs.launchpad.net/ubuntu/+source/linux-ec2/+bug/929941\n\nI also frequently take advantage of Debian unstable for backporting of\nspecific packages that are very important to me, so there's a lot of\nvalue to me in Ubuntu being quite similar to Debian. In fact, even\nthough I say we 'switched', it's not as though we re-did some\nentrenched systems from Debian to Ubuntu -- rather, we employ both\nsystems at the same time and I don't recall gnashing of teeth about\nthat, because they are very similar. Yet, there is a clear Ubuntu\npreference for new systems made today and, to wit, I can't think of\nanyone with more than the most mild preference for Debian. Conversely,\nI'd say the preference for Ubuntu for the aforementioned reasons is\nclear but moderate at most.\n\nAlso, there's the similarity to the lap/desktop environment. 
Often\ncited with some derision, yet it does add a lot of value, even if\npeople run slightly newer Ubuntus on their non-production computer.\n\n--\nfdr\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 14:21:36 -0800", "msg_from": "Daniel Farina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers on ubuntu precise" }, { "msg_contents": "On 01/12/12 11:21, Daniel Farina wrote:\n> On Fri, Nov 30, 2012 at 12:38 PM, Bruce Momjian <[email protected]> wrote:\n>> Or Debian. Not sure what would justify use of Ubuntu as a server,\n>> except wanting to have the exact same OS as their personal computers.\n>\n> We have switched from Debian to Ubuntu: there is definitely non-zero\n> value in the PPA hosting (although it's rather terrible in many ways),\n> regular LTS releases (even if you choose not to use them right away,\n> and know they are somewhat buggy at times), and working with AWS and\n> Canonical as organizations (that, most importantly, can interact\n> directly without my own organization) on certain issues. For example,\n> this dog of a bug:\n>\n> https://bugs.launchpad.net/ubuntu/+source/linux-ec2/+bug/929941\n>\n> I also frequently take advantage of Debian unstable for backporting of\n> specific packages that are very important to me, so there's a lot of\n> value to me in Ubuntu being quite similar to Debian. In fact, even\n> though I say we 'switched', it's not as though we re-did some\n> entrenched systems from Debian to Ubuntu -- rather, we employ both\n> systems at the same time and I don't recall gnashing of teeth about\n> that, because they are very similar. Yet, there is a clear Ubuntu\n> preference for new systems made today and, to wit, I can't think of\n> anyone with more than the most mild preference for Debian. Conversely,\n> I'd say the preference for Ubuntu for the aforementioned reasons is\n> clear but moderate at most.\n>\n> Also, there's the similarity to the lap/desktop environment. Often\n> cited with some derision, yet it does add a lot of value, even if\n> people run slightly newer Ubuntus on their non-production computer.\n>\n\n+1\n\nWe have gone through pretty much the same process in the last couple of \nyears. Most of our new systems run Ubuntu, some Debian.\n\nThere is definitely value in running the \"same\" system on the desktop \ntoo - often makes bug replication ridiculously easy (no having to find \nthe appropriate test environment, ask if I can hammer/punish/modify it \netc etc, and no need even spin up a VM).\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 01 Dec 2012 11:36:34 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: shared_buffers on ubuntu precise" }, { "msg_contents": "Hmm - not strictly true as stated: 1 SSD will typically do 500MB/s \nsequential read/write. 1 HDD will be lucky to get a 1/3 that.\n\nWe are looking at replacing 4 to 6 disk RAID10 arrays of HDD with a \nRAID1 pair of SSD, as they perform about the same for sequential work \nand vastly better at random. 
Plus they only use 2x 2.5\" slots (or, ahem \n2x PCIe sockets), so allow smaller form factor servers and save on power \nand cooling.\n\nCheers\n\nMark\n\nOn 30/11/12 23:07, Vitalii Tymchyshyn wrote:\n> Oh, yes. I don't imagine DB server without RAID+BBU :)\n> When there is no BBU, SSD can be handy.\n> But you know, SSD is worse in linear read/write than HDD.\n>\n> Best regards, Vitalii Tymchyshyn\n>\n>\n> 2012/11/30 Mark Kirkwood <[email protected]\n> <mailto:[email protected]>>\n>\n> Most modern SSD are much faster for fsync type operations than a\n> spinning disk - similar performance to spinning disk + writeback\n> raid controller + battery.\n>\n> However as you mention, they are great at random IO too, so Niels,\n> it might be worth putting your postgres logs *and* data on the SSDs\n> and retesting.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 01 Dec 2012 11:43:04 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Well, it seems that my data can be outdated, sorry for that. I've just\nchecked performance numbers on Tom's hardware and it seems that best sad\nreally do 500 MB/s. Some others do 100. So, I'd say one must choose wisely\n(as always :-) ).\n\nBest regards,\nVitalii Tymchyshyn\n1 груд. 2012 00:43, \"Mark Kirkwood\" <[email protected]> напис.\n\n> Hmm - not strictly true as stated: 1 SSD will typically do 500MB/s\n> sequential read/write. 1 HDD will be lucky to get a 1/3 that.\n>\n> We are looking at replacing 4 to 6 disk RAID10 arrays of HDD with a RAID1\n> pair of SSD, as they perform about the same for sequential work and vastly\n> better at random. Plus they only use 2x 2.5\" slots (or, ahem 2x PCIe\n> sockets), so allow smaller form factor servers and save on power and\n> cooling.\n>\n> Cheers\n>\n> Mark\n>\n> On 30/11/12 23:07, Vitalii Tymchyshyn wrote:\n>\n>> Oh, yes. I don't imagine DB server without RAID+BBU :)\n>> When there is no BBU, SSD can be handy.\n>> But you know, SSD is worse in linear read/write than HDD.\n>>\n>> Best regards, Vitalii Tymchyshyn\n>>\n>>\n>> 2012/11/30 Mark Kirkwood <[email protected]\n>> <mailto:mark.kirkwood@**catalyst.net.nz <[email protected]>>>\n>>\n>> Most modern SSD are much faster for fsync type operations than a\n>> spinning disk - similar performance to spinning disk + writeback\n>> raid controller + battery.\n>>\n>> However as you mention, they are great at random IO too, so Niels,\n>> it might be worth putting your postgres logs *and* data on the SSDs\n>> and retesting.\n>>\n>>\n>\n\nWell, it seems that my data can be outdated, sorry for that. I've just checked performance numbers on Tom's hardware and it seems that best sad really do 500 MB/s. Some others do 100. So, I'd say one must choose wisely (as always :-) ).\nBest regards,\nVitalii Tymchyshyn\n1 груд. 2012 00:43, \"Mark Kirkwood\" <[email protected]> напис.\nHmm - not strictly true as stated: 1 SSD will typically do 500MB/s sequential read/write. 1 HDD will be lucky to get a 1/3 that.\n\nWe are looking at replacing 4 to 6 disk RAID10 arrays of HDD with a RAID1 pair of SSD, as they perform about the same for sequential work and vastly better at random. 
Plus they only use 2x 2.5\" slots (or, ahem 2x PCIe sockets), so allow smaller form factor servers and save on power and cooling.\n\nCheers\n\nMark\n\nOn 30/11/12 23:07, Vitalii Tymchyshyn wrote:\n\nOh, yes. I don't imagine DB server without RAID+BBU :)\nWhen there is no BBU, SSD can be handy.\nBut you know, SSD is worse in linear read/write than HDD.\n\nBest regards, Vitalii Tymchyshyn\n\n\n2012/11/30 Mark Kirkwood <[email protected]\n<mailto:[email protected]>>\n\n    Most modern SSD are much faster for fsync type operations than a\n    spinning disk - similar performance to spinning disk + writeback\n    raid controller + battery.\n\n    However as you mention, they are great at random IO too, so Niels,\n    it might be worth putting your postgres logs *and* data on the SSDs\n    and retesting.", "msg_date": "Sun, 2 Dec 2012 13:14:24 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Yeah, this area is changing very fast!\n\nI agree - choosing carefully is important, as there are still plenty of \nolder models around that are substantially slower. Also choice of \nmotherboard chipset can strongly effect overall performance too. The 6 \nGbit/s ports on Sandy and Ivy bridge Mobos [1] seem to get close to that \nrated performance out of the SSD that I've tested (Crucial m4, Intel \nvarious).\n\nCheers\n\nMark\n\n[1] Which I think are actually Intel or Marvell controllers.\n\nOn 03/12/12 00:14, Vitalii Tymchyshyn wrote:\n> Well, it seems that my data can be outdated, sorry for that. I've just\n> checked performance numbers on Tom's hardware and it seems that best sad\n> really do 500 MB/s. Some others do 100. So, I'd say one must choose wisely\n> (as always :-) ).\n>\n> Best regards,\n> Vitalii Tymchyshyn\n> 1 груд. 2012 00:43, \"Mark Kirkwood\" <[email protected]> напис.\n>\n>> Hmm - not strictly true as stated: 1 SSD will typically do 500MB/s\n>> sequential read/write. 1 HDD will be lucky to get a 1/3 that.\n>>\n>> We are looking at replacing 4 to 6 disk RAID10 arrays of HDD with a RAID1\n>> pair of SSD, as they perform about the same for sequential work and vastly\n>> better at random. Plus they only use 2x 2.5\" slots (or, ahem 2x PCIe\n>> sockets), so allow smaller form factor servers and save on power and\n>> cooling.\n>>\n>> Cheers\n>>\n>> Mark\n>>\n>> On 30/11/12 23:07, Vitalii Tymchyshyn wrote:\n>>\n>>> Oh, yes. I don't imagine DB server without RAID+BBU :)\n>>> When there is no BBU, SSD can be handy.\n>>> But you know, SSD is worse in linear read/write than HDD.\n>>>\n>>> Best regards, Vitalii Tymchyshyn\n>>>\n>>>\n>>> 2012/11/30 Mark Kirkwood <[email protected]\n>>> <mailto:mark.kirkwood@**catalyst.net.nz <[email protected]>>>\n>>>\n>>> Most modern SSD are much faster for fsync type operations than a\n>>> spinning disk - similar performance to spinning disk + writeback\n>>> raid controller + battery.\n>>>\n>>> However as you mention, they are great at random IO too, so Niels,\n>>> it might be worth putting your postgres logs *and* data on the SSDs\n>>> and retesting.\n>>>\n>>>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 03 Dec 2012 12:34:19 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "\nDen 30/11/2012 kl. 
17.06 skrev Shaun Thomas <[email protected]>:\n\n> On 11/30/2012 09:44 AM, Niels Kristian Schjødt wrote:\n> \n> Just a note on your iostat numbers. The first reading is actually just a summary. You want the subsequent readings.\n> \n>> The pgsql_tmp dir is not changing at all it's constantly empty (a size\n>> of 4.0K).\n> \n> Good.\n> \n>> Filesystem 1K-blocks Used Available Use% Mounted on\n>> /dev/md3 230619228 5483796 213420620 3% /ssd\n> \n> Good.\n> \n> You could just be seeing lots of genuine activity. But going back on the thread, I remember seeing this in your postgresql.conf:\n> \n> shared_buffers = 7680MB\n> \n> Change this to:\n> \n> shared_buffers = 4GB\n> \n> I say that because you mentioned you're using Ubuntu 12.04, and we were having some problems with PG on that platform. With shared_buffers over 4GB, it starts doing really weird things to the memory subsystem. Whatever it does causes the kernel to purge cache rather aggressively. We saw a 60% reduction in read IO by reducing shared_buffers to 4GB. Without as many reads, your writes should be much less disruptive.\n> \n> You'll need to restart PG to adopt that change.\n> \n> But I encourage you to keep iostat running in a terminal window so you can watch it for a while. It's very revealing.\n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\nCouldn't this be if you haven't changed these: http://www.postgresql.org/docs/9.2/static/kernel-resources.html ?\nI have changed the following in my configuration:\n\nkernel.shmmax = 8589934592 #(8GB)\nkernel.shmall = 17179869184 #(16GB)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 3 Dec 2012 10:37:03 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" } ]
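[Note on the thread above: both mistakes discussed here (an array that was never actually mounted, and shared-memory limits set only in the running kernel) are easy to check from a shell. The sketch below is illustrative only and assumes the layout described in the thread, with md3 mounted at /ssd and pg_xlog symlinked into it; the filesystem type, mount options and sysctl.d file name are placeholders, and kernel.shmall is counted in 4kB pages on Linux rather than bytes.

    # Confirm the SSD array is really mounted and pg_xlog lives on it
    mountpoint /ssd                                   # should report "/ssd is a mountpoint"
    df -h /ssd                                        # should show /dev/md3, not the root filesystem
    readlink -f /var/lib/postgresql/9.2/main/pg_xlog  # should resolve under /ssd

    # Persist the mount and the shared-memory settings across reboots
    echo '/dev/md3  /ssd  ext4  noatime  0  2' | sudo tee -a /etc/fstab   # fs type/options are assumptions
    # shmmax is in bytes (8GB); shmall is in 4kB pages (4194304 pages = 16GB)
    printf 'kernel.shmmax = 8589934592\nkernel.shmall = 4194304\n' | sudo tee /etc/sysctl.d/30-postgresql-shm.conf
    sudo sysctl -p /etc/sysctl.d/30-postgresql-shm.conf
]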
[ { "msg_contents": "Niels Kristian Schjødt wrote:\n\n>> You said before that you were seeing high disk wait numbers. Now\n>> it is zero accourding to your disk utilization graph. That\n>> sounds like a change to me.\n\n> Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot\n> of I/O\" it was CPU I/O\n\n>>> A lot of both read and writes takes more than a 1000 times as\n>>> long as they usually do, under \"lighter\" overall load.\n>> \n>> As an odd coincidence, you showed your max_connections setting\n>> to be 1000.\n>> \n>> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\n> Back to the issue: Could it be that it is the fact that I'm using\n> ubuntus built in software raid to raid my disks, and that it is\n> not at all capable of handling the throughput?\n\nFor high performance situations I would always use a high quality\nRAID controller with battery-backed RAM configured for write-back;\nhowever:\n\nThe graphs you included suggest that your problem has nothing to do\nwith your storage system. Now maybe you didn't capture the data for\nthe graphs while the problem was occurring, in which case the\ngraphs would be absolutely useless; but based on what slim data you\nhave provided, you need a connection pool (like maybe pgbouncer\nconfigured in transaction mode) to limit the number of database\nconnections used to something like twice the number of cores.\n\nIf you still have problems, pick the query which is using the most\ntime on your database server, and post it with the information\nsuggested on this page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 07:03:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "Okay, So to understand this better before I go with that solution: \nIn theory what difference should it make to the performance, to have a pool in front of the database, that all my workers and web servers connect to instead of connecting directly? Where is the performance gain coming from in that situation?\n\nDen 30/11/2012 kl. 13.03 skrev \"Kevin Grittner\" <[email protected]>:\n\n> Niels Kristian Schjødt wrote:\n> \n>>> You said before that you were seeing high disk wait numbers. Now\n>>> it is zero accourding to your disk utilization graph. That\n>>> sounds like a change to me.\n> \n>> Hehe, I'm sorry if it somehow was misleading, I just wrote \"a lot\n>> of I/O\" it was CPU I/O\n> \n>>>> A lot of both read and writes takes more than a 1000 times as\n>>>> long as they usually do, under \"lighter\" overall load.\n>>> \n>>> As an odd coincidence, you showed your max_connections setting\n>>> to be 1000.\n>>> \n>>> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n> \n>> Back to the issue: Could it be that it is the fact that I'm using\n>> ubuntus built in software raid to raid my disks, and that it is\n>> not at all capable of handling the throughput?\n> \n> For high performance situations I would always use a high quality\n> RAID controller with battery-backed RAM configured for write-back;\n> however:\n> \n> The graphs you included suggest that your problem has nothing to do\n> with your storage system. 
Now maybe you didn't capture the data for\n> the graphs while the problem was occurring, in which case the\n> graphs would be absolutely useless; but based on what slim data you\n> have provided, you need a connection pool (like maybe pgbouncer\n> configured in transaction mode) to limit the number of database\n> connections used to something like twice the number of cores.\n> \n> If you still have problems, pick the query which is using the most\n> time on your database server, and post it with the information\n> suggested on this page:\n> \n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n> \n> -Kevin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 14:31:34 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" }, { "msg_contents": "On 11/30/2012 07:31 AM, Niels Kristian Schjødt wrote:\n\n> In theory what difference should it make to the performance, to have\n> a pool in front of the database, that all my workers and web servers\n> connect to instead of connecting directly? Where is the performance\n> gain coming from in that situation?\n\nIf you have several more connections than you have processors, the\ndatabase does a *lot* more context switching, and among other things,\nthat drastically reduces PG performance. On a testbed, I can get over\n150k transactions per second on PG 9.1 with a 1-1 relationship between\nCPU and client. Increase that to a few hundred, and my TPS drops down to\n30k. Simply having the clients there kills performance.\n\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 07:49:08 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize update query" } ]
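[Note on the pooling advice above: a minimal pgbouncer setup in transaction mode, sized to roughly twice the core count mentioned in this thread (8 cores, so a pool of about 16), could look like the sketch below. The database name, addresses and file paths are placeholders rather than anything taken from the thread; the application would then connect to port 6432 instead of 5432.

    ; /etc/pgbouncer/pgbouncer.ini -- illustrative sketch only
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction        ; as recommended in the thread
    default_pool_size = 16         ; ~2x cores actually hitting PostgreSQL
    max_client_conn = 1000         ; extra clients simply queue for a pooled connection
]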
[ { "msg_contents": "Hello\n\nWe am running a web application on ubuntu 10.10 using postgres 8.4.3.\n\nWe are experiencing regular problems (each morning as the users come in)\nwhich seem to be caused by deadlocks in the postgres database. I am seeing\nmessages like:\n\n2012-11-30 10:24:36 GMT LOG: sending cancel to blocking autovacuum PID\n16951 at character 62\n2012-11-30 10:24:36 GMT DETAIL: Process 3368 waits for AccessShareLock on\nrelation 36183 of database 33864.\n2012-11-30 10:24:36 GMT STATEMENT: SELECT indicatorid, periodid,\norganisationunitid, value FROM aggregatedindicatorvalue WHERE indicatorid I\nN (41471, 46324, 41481, 41487) AND periodid IN (46422, 46423, 46424) AND\norganisationunitid IN (67)\n\nAlmost all of the postgres processes seem to be stuck in the \"PARSE\nWAITING\" state and the application ceases to respond as it becomes starved\nof database connections. The only way to get things moving again seems to\nbe to restart postgres.\n\nTrying to interpret this, does this mean that the autovacuum process is\nholding a lock which is required tn order to complete the select query? Is\nit possible that the autovacuum process is ignoring that 'cancel' request\nso everything stays blocked?\n\nSorry if these seem like basic questions. I am not too sure where to look\nto start resolving this. Any suggestions would be appreciated.\n\nBob\n\nHelloWe am running a web application on ubuntu 10.10 using postgres 8.4.3.We are experiencing regular problems (each morning as the users come in) which seem to be caused by deadlocks in the postgres database.  I am seeing messages like:\n2012-11-30 10:24:36 GMT LOG:  sending cancel to blocking autovacuum PID 16951 at character 622012-11-30 10:24:36 GMT DETAIL:  Process 3368 waits for AccessShareLock on relation 36183 of database 33864.\n2012-11-30 10:24:36 GMT STATEMENT:  SELECT indicatorid, periodid, organisationunitid, value FROM aggregatedindicatorvalue WHERE indicatorid IN (41471, 46324, 41481, 41487) AND periodid IN (46422, 46423, 46424) AND organisationunitid IN (67)\nAlmost all of the postgres processes seem to be stuck in the \"PARSE WAITING\" state and the application ceases to respond as it becomes starved of database connections.  The only way to get things moving again seems to be to restart postgres.\nTrying to interpret this, does this mean that the autovacuum process is holding a lock which is required tn order to complete the select query?  Is it possible that the autovacuum process is ignoring that 'cancel' request so everything stays blocked?\nSorry if these seem like basic questions.  I am not too sure where to look to start resolving this.  Any suggestions would be appreciated.Bob", "msg_date": "Fri, 30 Nov 2012 14:34:15 +0000", "msg_from": "Bob Jolliffe <[email protected]>", "msg_from_op": true, "msg_subject": "deadlock under load" }, { "msg_contents": "Bob Jolliffe <[email protected]> writes:\n> We am running a web application on ubuntu 10.10 using postgres 8.4.3.\n\nCurrent release in that branch is 8.4.14. (By this time next week\nit'll be 8.4.15.) You are missing a lot of bug fixes:\nhttp://www.postgresql.org/docs/8.4/static/release.html\n\n> Trying to interpret this, does this mean that the autovacuum process is\n> holding a lock which is required tn order to complete the select\n> query?\n\nPossibly. 
Looking into the pg_locks view would tell you more.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Nov 2012 10:57:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deadlock under load" }, { "msg_contents": "On 30 November 2012 15:57, Tom Lane <[email protected]> wrote:\n\n> Bob Jolliffe <[email protected]> writes:\n> > We am running a web application on ubuntu 10.10 using postgres 8.4.3.\n>\n> Current release in that branch is 8.4.14. (By this time next week\n> it'll be 8.4.15.) You are missing a lot of bug fixes:\n> http://www.postgresql.org/docs/8.4/static/release.html\n>\n>\nSorry I reported that incorrectly. 8.4.3 was initially installed but the\npackage system has kept it up to date. Currently it is in fact 8.4.14.\n\n\n> > Trying to interpret this, does this mean that the autovacuum process is\n> > holding a lock which is required tn order to complete the select\n> > query?\n>\n> Possibly. Looking into the pg_locks view would tell you more.\n>\n\nOk. I guess I will have to wait for it to lock up again to do this.\n\n\n>\n> regards, tom lane\n>\n\nOn 30 November 2012 15:57, Tom Lane <[email protected]> wrote:\nBob Jolliffe <[email protected]> writes:\n> We am running a web application on ubuntu 10.10 using postgres 8.4.3.\n\nCurrent release in that branch is 8.4.14.  (By this time next week\nit'll be 8.4.15.)  You are missing a lot of bug fixes:\nhttp://www.postgresql.org/docs/8.4/static/release.html\nSorry I reported that incorrectly.  8.4.3 was initially installed but the package system has kept it up to date.  Currently it is in fact 8.4.14. \n\n> Trying to interpret this, does this mean that the autovacuum process is\n> holding a lock which is required tn order to complete the select\n> query?\n\nPossibly.  Looking into the pg_locks view would tell you more.Ok.  I guess I will have to wait for it to lock up again to do this. \n\n                        regards, tom lane", "msg_date": "Fri, 30 Nov 2012 16:13:11 +0000", "msg_from": "Bob Jolliffe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deadlock under load" } ]
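[Note on the pg_locks suggestion above: one way to act on it the next time everything blocks up is a query along these lines, written against the 8.4 catalogs (where pg_stat_activity still exposes procpid and current_query). It is a generic sketch, not something posted in this thread; it lists each waiting backend next to the sessions holding granted locks on the same object.

    -- Show which backend is waiting on which, and on what lock
    SELECT waiting.pid          AS waiting_pid,
           wact.current_query   AS waiting_query,
           blocking.pid         AS blocking_pid,
           bact.current_query   AS blocking_query,
           waiting.mode         AS wanted_mode,
           blocking.mode        AS held_mode
      FROM pg_locks waiting
      JOIN pg_locks blocking
        ON blocking.granted
       AND blocking.pid <> waiting.pid
       AND blocking.locktype = waiting.locktype
       AND blocking.database      IS NOT DISTINCT FROM waiting.database
       AND blocking.relation      IS NOT DISTINCT FROM waiting.relation
       AND blocking.transactionid IS NOT DISTINCT FROM waiting.transactionid
      JOIN pg_stat_activity wact ON wact.procpid = waiting.pid
      JOIN pg_stat_activity bact ON bact.procpid = blocking.pid
     WHERE NOT waiting.granted;
]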
[ { "msg_contents": "Hi guys (and girls)\n\nI've been banging my head over this for a few days now so if any of you kind\nsouls could take a minute to take a look at this I would be eternally\ngrateful.\n\nI have a pretty straightforward query that is very slow by default, and\nabout 70 times faster when I set enable_bitmapscan=off. I would like to\nconvince the planner to use my lovely indexes.\n\nThe scenario is this; I have two tables, trade and position_effect. A trade\nis a deal we do with somebody to exchange something for something else. It\nhas a time it was done, and is associated with a particular book for\naccounting purposes. A position effect records changes to our position (e.g.\nhow much we have) of an particular asset. One trade can many position\neffects (usually only 1,2 or 3)\n\nFor example, I do a trade of USD/GBP and I get two position effects, +1000\nGBP and -1200USD\n\n\nSCHEMA:\n-------\n\nThe actual schema is a bit more complicated but I will put the important\nparts here (if you think it important, the full schema for the two tables is\nhere: http://pastebin.com/6Y52aDFL):\n\nCREATE TABLE trade\n(\n id bigserial NOT NULL,\n time_executed timestamp with time zone NOT NULL,\n id_book integer NOT NULL,\n CONSTRAINT cons_trade_primary_key PRIMARY KEY (id),\n)\n\nCREATE INDEX idx_trade_id_book\n ON trade\n USING btree\n (id_book, time_executed, id);\n\nCREATE TABLE position_effect\n(\n id bigserial NOT NULL,\n id_trade bigint NOT NULL,\n id_asset integer NOT NULL,\n quantity double precision NOT NULL,\n CONSTRAINT cons_pe_primary_key PRIMARY KEY (id_trade, id_asset),\n)\n\nSETUP:\n------\n\nThese tables are relatively large (~100 million rows in position effect).\nThe box is a pretty beastly affair with 512Mb of ram and 4x10 2Ghz cores.\nThe postgres configuration is here:\n\nhttp://pastebin.com/48uyiak7\n\nI am using a 64bit postgresql 9.2.1, hand compiled on a RedHat 6.2 box.\n\nQUERY:\n------\n\nWhat I want to do is sum all of the position effects, for a particular asset\nwhile joined to the trade table to filter for the time it was executed and\nthe book it was traded into:\n\nSELECT sum(position_effect.quantity) \n FROM trade, position_effect\n WHERE trade.id = position_effect.id_trade\n AND position_effect.id_asset = 1837\n AND trade.time_executed >= '2012-10-28 00:00:00' \n AND trade.id_book = 41\n\nIn this case there are only 11 rows that need to be summed. If I just let\npostgres do its thing, that query takes 5000ms (Which when multiplied over\nmany books and assets gets very slow). I think this is because it is\nbitmapping the whole position_effect table which is very large. 
If I disable\nbitmap scans:\n\nset enable_bitmapscan = off;\n\nThe query takes 43ms, and properly uses the indexes I have set up.\n\nSlow version with bitmapscan enabled: http://explain.depesz.com/s/6I7\nFast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 15:06:48 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Slow query: bitmap scan troubles" }, { "msg_contents": "Bad form to reply to yourself I know but just check-reading that for the\nthird time I noticed two mistakes\n\n- The box has 128Gb of ram, not 512Mb\n\n- There is an additional constraint on the position_effect table (though I\ndon't think it matters for this discussion):\n CONSTRAINT cons_pe_trade FOREIGN KEY (id_trade) REFERENCES trade (id)\n\nSorry to clog your inboxes further!\n\nRegards,\n\nPhilip\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of\[email protected]\nSent: 04 December 2012 15:07\nTo: [email protected]\nSubject: [PERFORM] Slow query: bitmap scan troubles\n\nHi guys (and girls)\n\nI've been banging my head over this for a few days now so if any of you kind\nsouls could take a minute to take a look at this I would be eternally\ngrateful.\n\nI have a pretty straightforward query that is very slow by default, and\nabout 70 times faster when I set enable_bitmapscan=off. I would like to\nconvince the planner to use my lovely indexes.\n\nThe scenario is this; I have two tables, trade and position_effect. A trade\nis a deal we do with somebody to exchange something for something else. It\nhas a time it was done, and is associated with a particular book for\naccounting purposes. A position effect records changes to our position (e.g.\nhow much we have) of an particular asset. 
One trade can many position\neffects (usually only 1,2 or 3)\n\nFor example, I do a trade of USD/GBP and I get two position effects, +1000\nGBP and -1200USD\n\n\nSCHEMA:\n-------\n\nThe actual schema is a bit more complicated but I will put the important\nparts here (if you think it important, the full schema for the two tables is\nhere: http://pastebin.com/6Y52aDFL):\n\nCREATE TABLE trade\n(\n id bigserial NOT NULL,\n time_executed timestamp with time zone NOT NULL,\n id_book integer NOT NULL,\n CONSTRAINT cons_trade_primary_key PRIMARY KEY (id),\n)\n\nCREATE INDEX idx_trade_id_book\n ON trade\n USING btree\n (id_book, time_executed, id);\n\nCREATE TABLE position_effect\n(\n id bigserial NOT NULL,\n id_trade bigint NOT NULL,\n id_asset integer NOT NULL,\n quantity double precision NOT NULL,\n CONSTRAINT cons_pe_primary_key PRIMARY KEY (id_trade, id_asset),\n)\n\nSETUP:\n------\n\nThese tables are relatively large (~100 million rows in position effect).\nThe box is a pretty beastly affair with 512Mb of ram and 4x10 2Ghz cores.\nThe postgres configuration is here:\n\nhttp://pastebin.com/48uyiak7\n\nI am using a 64bit postgresql 9.2.1, hand compiled on a RedHat 6.2 box.\n\nQUERY:\n------\n\nWhat I want to do is sum all of the position effects, for a particular asset\nwhile joined to the trade table to filter for the time it was executed and\nthe book it was traded into:\n\nSELECT sum(position_effect.quantity) \n FROM trade, position_effect\n WHERE trade.id = position_effect.id_trade\n AND position_effect.id_asset = 1837\n AND trade.time_executed >= '2012-10-28 00:00:00' \n AND trade.id_book = 41\n\nIn this case there are only 11 rows that need to be summed. If I just let\npostgres do its thing, that query takes 5000ms (Which when multiplied over\nmany books and assets gets very slow). I think this is because it is\nbitmapping the whole position_effect table which is very large. If I disable\nbitmap scans:\n\nset enable_bitmapscan = off;\n\nThe query takes 43ms, and properly uses the indexes I have set up.\n\nSlow version with bitmapscan enabled: http://explain.depesz.com/s/6I7 Fast\nversion with bitmapscan disabled: http://explain.depesz.com/s/4MWG\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 15:21:17 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 12:06 PM, <[email protected]> wrote:\n> Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7\n> Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG\n\nIf you check the \"fast\" plan, it has a higher cost compared against\nthe \"slow\" plan.\n\nThe difference between cost estimation and actual cost of your\nqueries, under relatively precise row estimates, seems to suggest your\ne_c_s or r_p_c aren't a reflection of your hardware's performance.\n\nFirst, make sure caching isn't interfering with your results. 
Run each\nquery several times.\n\nThen, if the difference persists, you may have to tweak\neffective_cache_size first, maybe random_page_cost too, to better\nmatch your I/O subsystem's actual performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 12:27:57 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 7:27 AM, Claudio Freire <[email protected]> wrote:\n> On Tue, Dec 4, 2012 at 12:06 PM, <[email protected]> wrote:\n>> Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7\n>> Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG\n>\n> If you check the \"fast\" plan, it has a higher cost compared against\n> the \"slow\" plan.\n>\n> The difference between cost estimation and actual cost of your\n> queries, under relatively precise row estimates, seems to suggest your\n> e_c_s or r_p_c aren't a reflection of your hardware's performance.\n\nBut the row estimates are not precise at the top of the join/filter.\nIt thinks there will 2120 rows, but there are only 11.\n\nSo it seems like there is a negative correlation between the two\ntables which is not recognized.\n\n> First, make sure caching isn't interfering with your results. Run each\n> query several times.\n\nIf that is not how the production system works (running the same query\nover and over) then you want to model the cold cache, not the hot one.\n But in any case, the posted explains indicates that all buffers were\ncached.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 09:22:29 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 2:22 PM, Jeff Janes <[email protected]> wrote:\n> On Tue, Dec 4, 2012 at 7:27 AM, Claudio Freire <[email protected]> wrote:\n>> On Tue, Dec 4, 2012 at 12:06 PM, <[email protected]> wrote:\n>>> Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7\n>>> Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG\n>>\n>> If you check the \"fast\" plan, it has a higher cost compared against\n>> the \"slow\" plan.\n>>\n>> The difference between cost estimation and actual cost of your\n>> queries, under relatively precise row estimates, seems to suggest your\n>> e_c_s or r_p_c aren't a reflection of your hardware's performance.\n>\n> But the row estimates are not precise at the top of the join/filter.\n> It thinks there will 2120 rows, but there are only 11.\n\nAh... I didn't spot that one...\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 14:25:56 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "\n>> But the row estimates are not precise at the top of the join/filter.\n>> It thinks there will 2120 rows, but there are only 11.\n\n>Ah... 
I didn't spot that one...\n\nYes, you are right there - this is probably a slightly atypical query of\nthis sort actually, 2012 is a pretty good guess.\n\nOn Claudio's suggestion I have found lots more things to read up on and am\neagerly awaiting 6pm when I can bring the DB down and start tweaking. The\neffective_work_mem setting is going from 6Gb->88Gb which I think will make\nquite a difference.\n\nI still can't quite wrap around my head why accessing an index is expected\nto use more disk access than doing a bitmap scan of the table itself, but I\nguess it does make a bit of sense if postgres assumes the table is more\nlikely to be cached.\n\nIt's all quite, quite fascinating :)\n\nI'll let you know how it goes.\n\n- Phil\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 17:35:32 -0000", "msg_from": "\"Philip Scott\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "\n>> But the row estimates are not precise at the top of the join/filter.\n>> It thinks there will 2120 rows, but there are only 11.\n\n>Ah... I didn't spot that one...\n\nYes, you are right there - this is probably a slightly atypical query of\nthis sort actually, 2012 is a pretty good guess.\n\nOn Claudio's suggestion I have found lots more things to read up on and am\neagerly awaiting 6pm when I can bring the DB down and start tweaking. The\neffective_work_mem setting is going from 6Gb->88Gb which I think will make\nquite a difference.\n\nI still can't quite wrap around my head why accessing an index is expected\nto use more disk access than doing a bitmap scan of the table itself, but I\nguess it does make a bit of sense if postgres assumes the table is more\nlikely to be cached.\n\nIt's all quite, quite fascinating :)\n\nI'll let you know how it goes.\n\n- Phil\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 17:47:29 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "> But the row estimates are not precise at the top of the join/filter.\n> It thinks there will 2120 rows, but there are only 11.\n\n> So it seems like there is a negative correlation between the two tables\nwhich is not recognized.\n\nYes, you are right there. I am only just beginning to understand how to\nparse these explain reports.. As I mentioned above, I probably picked a bad\nexample to run that query on 11 is an unusually low number of results to get\nback, a few thousand would be more normal.\n\nThough that doesn't account for the 70x difference between the speed of the\ntwo queries in actuality given a pretty similar expected speed (does it?).\nIt does go some way to explaining why a bad choice of plan was made.\n\nIs there some nice bit of literature somewhere that explains what sort of\ncosts are associated with the different types of lookup? 
I have found bits\nand bobs online but I still don't have a really clear idea in my head what\nthe difference is between a bitmap index scan and index only scan is, though\nI can sort of guess I don't see why one would be considered more likely to\nuse the disk than the other.\n\nOn the 'slow' query (with the better predicted score) \n>> First, make sure caching isn't interfering with your results. Run each \n>> query several times.\n> If that is not how the production system works (running the same query\nover and over) then you want to model the cold cache, not the hot one.\n> But in any case, the posted explains indicates that all buffers were\ncached.\n\nWe are in the rather pleasant situation here in that we are willing to spend\nmoney on the box (up to a point, but quite a large point) to get it up to\nthe spec so that it should hardly ever need to touch the disk, the trick is\nfiguring out how to let our favourite database server know that.\n\nI've just discovered pgtune and am having some fun with that too.\n\nCheers,\n\nPhil\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 18:03:29 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "> The difference between cost estimation and actual cost of your queries,\nunder relatively precise row estimates, seems to suggest your e_c_s or r_p_c\naren't a reflection of your hardware's performance.\n\nWow, so tweaking these has fixed it and then some. It now picks a slightly\ndifferent plan than the 'fast' one previously:\n\nNew super fast version with e_c_s 6GB->88Gb and r_p_c 2-> 1 (s_p_c 1->0.5):\nhttp://explain.depesz.com/s/ECk\n\nFor reference:\n> Slow version with bitmapscan enabled: http://explain.depesz.com/s/6I7 \n> Fast version with bitmapscan disabled: http://explain.depesz.com/s/4MWG\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 18:31:05 -0000", "msg_from": "\"Philip Scott\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 3:03 PM, <[email protected]> wrote:\n>\n> Though that doesn't account for the 70x difference between the speed of the\n> two queries in actuality given a pretty similar expected speed (does it?).\n> It does go some way to explaining why a bad choice of plan was made.\n\nI still don't think it does. I still think the problem is the GUC settings.\n\nThe slow plan joins in a way that processes all 3M rows in both sides\nof the join, and pg knows it.\nThe fast plan only processes 5k of them. And pg knows it. 
Why is it\nchoosing to process 3M rows?\n\nIf there's negative correlation, it only means less rows will be\nproduced, but the nested loop and and the right-hand index scan still\nhappens.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 15:31:50 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 3:31 PM, Philip Scott <[email protected]> wrote:\n> r_p_c 2-> 1 (s_p_c 1->0.5):\n\nIs this really necessary?\n\n(looks like a no-op, unless your CPU is slow)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 15:32:57 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "Well, you don't need to put anything down. Most settings that change\nplanner decisions can be tuned on per-quey basis by issuing set commands in\ngiven session. This should not affect other queries more than it is needed\nto run query in the way planner chooses.\n\nBest regards, Vitalii Tymchyshyn\n\n\n2012/12/4 <[email protected]>\n\n>\n> >> But the row estimates are not precise at the top of the join/filter.\n> >> It thinks there will 2120 rows, but there are only 11.\n>\n> >Ah... I didn't spot that one...\n>\n> Yes, you are right there - this is probably a slightly atypical query of\n> this sort actually, 2012 is a pretty good guess.\n>\n> On Claudio's suggestion I have found lots more things to read up on and am\n> eagerly awaiting 6pm when I can bring the DB down and start tweaking. The\n> effective_work_mem setting is going from 6Gb->88Gb which I think will make\n> quite a difference.\n>\n> I still can't quite wrap around my head why accessing an index is expected\n> to use more disk access than doing a bitmap scan of the table itself, but I\n> guess it does make a bit of sense if postgres assumes the table is more\n> likely to be cached.\n>\n> It's all quite, quite fascinating :)\n>\n> I'll let you know how it goes.\n>\n> - Phil\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nWell, you don't need to put anything down. Most settings that change planner decisions can be tuned on per-quey basis by issuing set commands in given session. This should not affect other queries more than it is needed to run query in the way planner chooses.\nBest regards, Vitalii Tymchyshyn2012/12/4 <[email protected]>\n\n>> But the row estimates are not precise at the top of the join/filter.\n>> It thinks there will 2120 rows, but there are only 11.\n\n>Ah... I didn't spot that one...\n\nYes, you are right there - this is probably a slightly atypical query of\nthis sort actually, 2012 is a pretty good guess.\n\nOn Claudio's suggestion I have found lots more things to read up on and am\neagerly awaiting 6pm when I can bring the DB down and start tweaking. 
The\neffective_work_mem setting is going from 6Gb->88Gb which I think will make\nquite a difference.\n\nI still can't quite wrap around my head why accessing an index is expected\nto use more disk access than doing a bitmap scan of the table itself, but I\nguess it does make a bit of sense if postgres assumes the table is more\nlikely to be cached.\n\nIt's all quite, quite fascinating :)\n\nI'll let you know how it goes.\n\n- Phil\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Tue, 4 Dec 2012 20:50:41 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "Ah, okay - my reasoning was there's a big fancy-pants raid array behind it\nthat makes disk operations faster relative to CPU ones.\n\nI'll test it and see if it actually makes any difference.\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]] \nSent: 04 December 2012 18:33\nTo: Philip Scott\nCc: [email protected]; postgres performance list\nSubject: Re: [PERFORM] Slow query: bitmap scan troubles\n\nOn Tue, Dec 4, 2012 at 3:31 PM, Philip Scott <[email protected]> wrote:\n> r_p_c 2-> 1 (s_p_c 1->0.5):\n\nIs this really necessary?\n\n(looks like a no-op, unless your CPU is slow)\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 18:54:29 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "Ah okay, thanks. I knew I could set various things but not\neffective_work_mem (I tried reloading the edited config file but it didn't\nseem to pick it up)\n\n \n\nFrom: Vitalii Tymchyshyn [mailto:[email protected]] \nSent: 04 December 2012 18:51\nTo: [email protected]\nCc: postgres performance list\nSubject: Re: [PERFORM] Slow query: bitmap scan troubles\n\n \n\nWell, you don't need to put anything down. Most settings that change planner\ndecisions can be tuned on per-quey basis by issuing set commands in given\nsession. This should not affect other queries more than it is needed to run\nquery in the way planner chooses.\n\n \n\nBest regards, Vitalii Tymchyshyn\n\n \n\n2012/12/4 <[email protected]>\n\n\n>> But the row estimates are not precise at the top of the join/filter.\n>> It thinks there will 2120 rows, but there are only 11.\n\n>Ah... I didn't spot that one...\n\nYes, you are right there - this is probably a slightly atypical query of\nthis sort actually, 2012 is a pretty good guess.\n\nOn Claudio's suggestion I have found lots more things to read up on and am\neagerly awaiting 6pm when I can bring the DB down and start tweaking. 
The\neffective_work_mem setting is going from 6Gb->88Gb which I think will make\nquite a difference.\n\nI still can't quite wrap around my head why accessing an index is expected\nto use more disk access than doing a bitmap scan of the table itself, but I\nguess it does make a bit of sense if postgres assumes the table is more\nlikely to be cached.\n\nIt's all quite, quite fascinating :)\n\nI'll let you know how it goes.\n\n- Phil\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n \n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n\nAh okay, thanks. I knew I could set various things but not effective_work_mem (I tried reloading the edited config file but it didn’t seem to pick it up) From: Vitalii Tymchyshyn [mailto:[email protected]] Sent: 04 December 2012 18:51To: [email protected]: postgres performance listSubject: Re: [PERFORM] Slow query: bitmap scan troubles Well, you don't need to put anything down. Most settings that change planner decisions can be tuned on per-quey basis by issuing set commands in given session. This should not affect other queries more than it is needed to run query in the way planner chooses. Best regards, Vitalii Tymchyshyn 2012/12/4 <[email protected]>>> But the row estimates are not precise at the top of the join/filter.>> It thinks there will 2120 rows, but there are only 11.>Ah... I didn't spot that one...Yes, you are right there - this is probably a slightly atypical query ofthis sort actually, 2012 is a pretty good guess.On Claudio's suggestion I have found lots more things to read up on and ameagerly awaiting 6pm when I can bring the DB down and start tweaking. Theeffective_work_mem setting is going from 6Gb->88Gb which I think will makequite a difference.I still can't quite wrap around my head why accessing an index is expectedto use more disk access than doing a bitmap scan of the table itself, but Iguess it does make a bit of sense if postgres assumes the table is morelikely to be cached.It's all quite, quite fascinating :)I'll let you know how it goes.- Phil--Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance -- Best regards, Vitalii Tymchyshyn", "msg_date": "Tue, 4 Dec 2012 18:55:17 -0000", "msg_from": "\"Philip Scott\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "Ah okay, thanks. I knew I could set various things but not\neffective_work_mem (I tried reloading the edited config file but it didn't\nseem to pick it up)\n\n \n\n \n\nFrom: Vitalii Tymchyshyn [mailto:[email protected]] \nSent: 04 December 2012 18:51\nTo: [email protected]\nCc: postgres performance list\nSubject: Re: [PERFORM] Slow query: bitmap scan troubles\n\n \n\nWell, you don't need to put anything down. Most settings that change planner\ndecisions can be tuned on per-quey basis by issuing set commands in given\nsession. This should not affect other queries more than it is needed to run\nquery in the way planner chooses.\n\n \n\nBest regards, Vitalii Tymchyshyn\n\n \n\n2012/12/4 <[email protected]>\n\n\n>> But the row estimates are not precise at the top of the join/filter.\n>> It thinks there will 2120 rows, but there are only 11.\n\n>Ah... 
I didn't spot that one...\n\nYes, you are right there - this is probably a slightly atypical query of\nthis sort actually, 2012 is a pretty good guess.\n\nOn Claudio's suggestion I have found lots more things to read up on and am\neagerly awaiting 6pm when I can bring the DB down and start tweaking. The\neffective_work_mem setting is going from 6Gb->88Gb which I think will make\nquite a difference.\n\nI still can't quite wrap around my head why accessing an index is expected\nto use more disk access than doing a bitmap scan of the table itself, but I\nguess it does make a bit of sense if postgres assumes the table is more\nlikely to be cached.\n\nIt's all quite, quite fascinating :)\n\nI'll let you know how it goes.\n\n- Phil\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n \n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n\nAh okay, thanks. I knew I could set various things but not effective_work_mem (I tried reloading the edited config file but it didn’t seem to pick it up)  From: Vitalii Tymchyshyn [mailto:[email protected]] Sent: 04 December 2012 18:51To: [email protected]: postgres performance listSubject: Re: [PERFORM] Slow query: bitmap scan troubles Well, you don't need to put anything down. Most settings that change planner decisions can be tuned on per-quey basis by issuing set commands in given session. This should not affect other queries more than it is needed to run query in the way planner chooses. Best regards, Vitalii Tymchyshyn 2012/12/4 <[email protected]>>> But the row estimates are not precise at the top of the join/filter.>> It thinks there will 2120 rows, but there are only 11.>Ah... I didn't spot that one...Yes, you are right there - this is probably a slightly atypical query ofthis sort actually, 2012 is a pretty good guess.On Claudio's suggestion I have found lots more things to read up on and ameagerly awaiting 6pm when I can bring the DB down and start tweaking. Theeffective_work_mem setting is going from 6Gb->88Gb which I think will makequite a difference.I still can't quite wrap around my head why accessing an index is expectedto use more disk access than doing a bitmap scan of the table itself, but Iguess it does make a bit of sense if postgres assumes the table is morelikely to be cached.It's all quite, quite fascinating :)I'll let you know how it goes.- Phil--Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance -- Best regards, Vitalii Tymchyshyn", "msg_date": "Tue, 4 Dec 2012 18:56:04 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 9:47 AM, <[email protected]> wrote:\n> eagerly awaiting 6pm when I can bring the DB down and start tweaking. The\n> effective_work_mem setting is going from 6Gb->88Gb which I think will make\n> quite a difference.\n\nI also wonder if increasing (say x10) of default_statistics_target or\njust doing ALTER TABLE SET STATISTICS for particular tables will help.\nIt will make planned to produce more precise estimations. 
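Something along these lines, for example (a sketch -- I am guessing from
the posted schema at which columns matter most here):

    ALTER TABLE position_effect ALTER COLUMN id_asset SET STATISTICS 1000;
    ALTER TABLE trade ALTER COLUMN id_book SET STATISTICS 1000;
    ALTER TABLE trade ALTER COLUMN time_executed SET STATISTICS 1000;
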
Do not\nforget ANALYZE afer changing it.\n\n>\n> I still can't quite wrap around my head why accessing an index is expected\n> to use more disk access than doing a bitmap scan of the table itself, but I\n> guess it does make a bit of sense if postgres assumes the table is more\n> likely to be cached.\n>\n> It's all quite, quite fascinating :)\n>\n> I'll let you know how it goes.\n>\n> - Phil\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n--\nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 12:11:53 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 9:47 AM, <[email protected]> wrote:\n>\n>>> But the row estimates are not precise at the top of the join/filter.\n>>> It thinks there will 2120 rows, but there are only 11.\n>\n>>Ah... I didn't spot that one...\n>\n> Yes, you are right there - this is probably a slightly atypical query of\n> this sort actually, 2012 is a pretty good guess.\n\nWhat do the timings look like on a more realistic example?\n\n> On Claudio's suggestion I have found lots more things to read up on and am\n> eagerly awaiting 6pm when I can bring the DB down and start tweaking. The\n> effective_work_mem setting is going from 6Gb->88Gb which I think will make\n> quite a difference.\n\nYou can change effective_cache_size just in your own session, or do it\nglobally with a \"reload\" or SIGHUP, no need to bring down the server.\n\nHowever, I don't think it will make much difference. Even though it\nthinks it is hitting the index 14,085 times, that is still small\ncompared to the overall size of the table.\n\n> I still can't quite wrap around my head why accessing an index is expected\n> to use more disk access than doing a bitmap scan of the table itself,\n\nIt is only doing an bitmap scan of those parts of the table which\ncontain relevant data, and it is doing them in physical order, so it\nthinks that much of the IO which it thinks it is going to do is\nlargely sequential.\n\n> but I\n> guess it does make a bit of sense if postgres assumes the table is more\n> likely to be cached.\n\nUnfortunately, postgres's planner doesn't know anything about that.\n From your \"explain\" I can see in hindsight that everything you needed\nwas cached, but that is not information that the planner can use\n(currently). And I don't know if *everything* is cached, or if just\nthose particular blocks are because you already ran the same query\nwith the same parameters recently.\n\nAlso, your work_mem is pretty low given the amount of RAM you have.\n\nwork_mem = 1MB\n\nI don't think the current planner attempts to take account of the fact\nthat a bitmap scan which overflows work_mem and so becomes \"lossy\" is\nquite a performance set-back. Nor does it look like explain analyze\ninforms you of this happening. 
But maybe I'm just looking in the\nwrong places.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 14:34:42 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 10:03 AM, <[email protected]> wrote:\n>\n> Though that doesn't account for the 70x difference between the speed of the\n> two queries in actuality given a pretty similar expected speed (does it?).\n\nIt kind of does. The expected speed is predicated on the number of\nrows being 200 fold higher. If the number of rows actually was that\nmuch higher, the two speeds might be closer together. That is why it\nwould be interesting to see a more typical case where the actual\nnumber of rows is closer to the 2000 estimate.\n\nBut I am curious about how the cost estimate for the primary key look\nup is arrived at:\n\nIndex Scan using cons_pe_primary_key on position_effect\n(cost=0.00..42.96 rows=1 width=16)\n\nThere should be a random page for the index leaf page, and a random\npage for the heap page. Since you set random_page_cost to 2, that\ncomes up to 4. Then there would be some almost negligible CPU costs.\nWhere the heck is the extra 38 cost coming from?\n\n> It does go some way to explaining why a bad choice of plan was made.\n>\n> Is there some nice bit of literature somewhere that explains what sort of\n> costs are associated with the different types of lookup?\n\nI've heard good things about Greg Smith's book, but I don't know if it\ncovers this particular thing.\n\nOtherwise, I don't know of a good single place which is a tutorial\nrather than a reference (or the code itself)\n\n>>> First, make sure caching isn't interfering with your results. Run each\n>>> query several times.\n>> If that is not how the production system works (running the same query\n> over and over) then you want to model the cold cache, not the hot one.\n>> But in any case, the posted explains indicates that all buffers were\n> cached.\n>\n> We are in the rather pleasant situation here in that we are willing to spend\n> money on the box (up to a point, but quite a large point) to get it up to\n> the spec so that it should hardly ever need to touch the disk, the trick is\n> figuring out how to let our favourite database server know that.\n\nWell, that part is fairly easy. Make random_page_cost and\nseq_page_cost much smaller than their defaults. Like, 0.04 and 0.03,\nfor example.\n\nI think the *_page_cost should strictly an estimate of actually doing\nIO, with a separate parameter to reflect likelihood of needing to do\nthe IO, like *_page_cachedness. 
But that isn't the way it is done\ncurrently.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Dec 2012 15:42:21 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 4, 2012 at 3:42 PM, Jeff Janes <[email protected]> wrote:\n\n(Regarding http://explain.depesz.com/s/4MWG, wrote)\n\n>\n> But I am curious about how the cost estimate for the primary key look\n> up is arrived at:\n>\n> Index Scan using cons_pe_primary_key on position_effect\n> (cost=0.00..42.96 rows=1 width=16)\n>\n> There should be a random page for the index leaf page, and a random\n> page for the heap page. Since you set random_page_cost to 2, that\n> comes up to 4. Then there would be some almost negligible CPU costs.\n> Where the heck is the extra 38 cost coming from?\n\nI now see where the cost is coming from. In commit 21a39de5809 (first\nappearing in 9.2) the \"fudge factor\" cost estimate for large indexes\nwas increased by about 10 fold, which really hits this index hard.\n\nThis was fixed in commit bf01e34b556 \"Tweak genericcostestimate's\nfudge factor for index size\", by changing it to use the log of the\nindex size. But that commit probably won't be shipped until 9.3.\n\nI'm not sure that this change would fix your problem, because it might\nalso change the costs of the alternative plans in a way that\nneutralizes things. But I suspect it would fix it. Of course, a\ncorrect estimate of the join size would also fix it--you have kind of\na perfect storm here.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 09:39:35 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes <[email protected]> wrote:\n> I'm not sure that this change would fix your problem, because it might\n> also change the costs of the alternative plans in a way that\n> neutralizes things. But I suspect it would fix it. Of course, a\n> correct estimate of the join size would also fix it--you have kind of\n> a perfect storm here.\n\nAs far as I can see on the explain, the misestimation is 3x~4x not 200x.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 14:43:49 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> I now see where the cost is coming from. In commit 21a39de5809 (first\n> appearing in 9.2) the \"fudge factor\" cost estimate for large indexes\n> was increased by about 10 fold, which really hits this index hard.\n\n> This was fixed in commit bf01e34b556 \"Tweak genericcostestimate's\n> fudge factor for index size\", by changing it to use the log of the\n> index size. But that commit probably won't be shipped until 9.3.\n\nHm. To tell you the truth, in October I'd completely forgotten about\nthe January patch, and was thinking that the 1/10000 cost had a lot\nof history behind it. 
But if we never shipped it before 9.2 then of\ncourse that idea is false. Perhaps we should backpatch the log curve\ninto 9.2 --- that would reduce the amount of differential between what\n9.2 does and what previous branches do for large indexes.\n\nIt would definitely be interesting to know if applying bf01e34b556\nhelps the OP's example.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 05 Dec 2012 13:05:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "That is very interesting indeed, these indexes are quite large!\n\nI will apply that patch and try it out this evening and let you know.\n\nThank you very much everyone for your time, the support has been amazing.\n\nPS: Just looked at this thread on the archives page and realised I don't\nhave my name in FROM: field, which is a misconfiguration of my email client,\nbut figured I would leave it to prevent confusion, sorry about that.\n\nAll the best,\n\nPhilip Scott\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: 05 December 2012 18:05\nTo: Jeff Janes\nCc: [email protected]; postgres performance list\nSubject: Re: [PERFORM] Slow query: bitmap scan troubles\n\nJeff Janes <[email protected]> writes:\n> I now see where the cost is coming from. In commit 21a39de5809 (first \n> appearing in 9.2) the \"fudge factor\" cost estimate for large indexes \n> was increased by about 10 fold, which really hits this index hard.\n\n> This was fixed in commit bf01e34b556 \"Tweak genericcostestimate's \n> fudge factor for index size\", by changing it to use the log of the \n> index size. But that commit probably won't be shipped until 9.3.\n\nHm. To tell you the truth, in October I'd completely forgotten about the\nJanuary patch, and was thinking that the 1/10000 cost had a lot of history\nbehind it. But if we never shipped it before 9.2 then of course that idea\nis false. Perhaps we should backpatch the log curve into 9.2 --- that would\nreduce the amount of differential between what\n9.2 does and what previous branches do for large indexes.\n\nIt would definitely be interesting to know if applying bf01e34b556 helps the\nOP's example.\n\n\t\t\tregards, tom lane\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 12:52:07 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "> I also wonder if increasing (say x10) of default_statistics_target or just\ndoing ALTER TABLE SET STATISTICS for particular tables will help.\n> It will make planned to produce more precise estimations. Do not forget\nANALYZE afer changing it.\n\nThanks Sergey, I will try this too.\n\nI think the bother here is that this statistics are pretty good (we do\nanalyse regularly and default_statistics_target is already 1000), but once I\nstart filtering the two tables the correlations alter quite a bit. 
I don't\nthink there is that much that can be done about that :)\n\n- Phil\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 12:56:26 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "Hi Jeff\n\n> It kind of does. The expected speed is predicated on the number of rows\nbeing 200 fold higher. If the number of rows actually was that much higher,\nthe two speeds might be closer together. That is why it would be\ninteresting to see a more typical case where the actual number of rows is\ncloser to the 2000 estimate.\n\nAh, I see of course. Makes a lot of sense when you think about it. This has\nbeen quite an enlightening adventure into the guts of postgres for me :)\n\n> But I am curious about how the cost estimate for the primary key look up\nis arrived at:\n( Delt with in your next reply, thanks for figuring that out! I will\ncertainly try the patch)\n\n\n> I've heard good things about Greg Smith's book, but I don't know if it\ncovers this particular thing.\n\nA copy is on its way, thank you.\n\n>> We are in the rather pleasant situation here in that we are willing to \n>> spend money on the box (up to a point, but quite a large point) to get \n>> it up to the spec so that it should hardly ever need to touch the \n>> disk, the trick is figuring out how to let our favourite database server\nknow that.\n> Well, that part is fairly easy. Make random_page_cost and seq_page_cost\nmuch smaller than their defaults. Like, 0.04 and 0.03, for example.\n\nYes, I have been playing a lot with that it makes a lot of difference. When\nI tweak them down I end up getting a lot of nested loops instead of hash or\nmerge joins and they are much faster (presumably we might have gotten a\nnested loop out of the planner if it could correctly estimate the low number\nof rows returned).\n\nI've got plenty of ammunition now to dig deeper, you guys have been\ninvaluable.\n\nCheers,\n\nPhil\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 14:10:29 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire <[email protected]> wrote:\n> On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes <[email protected]> wrote:\n>> I'm not sure that this change would fix your problem, because it might\n>> also change the costs of the alternative plans in a way that\n>> neutralizes things. But I suspect it would fix it. 
Of course, a\n>> correct estimate of the join size would also fix it--you have kind of\n>> a perfect storm here.\n>\n> As far as I can see on the explain, the misestimation is 3x~4x not 200x.\n\nIt is 3x (14085 vs 4588) for selectivity on one of the tables, \"Index\nOnly Scan using idx_trade_id_book on trade\".\n\nBut for the join of both tables it is estimate 2120 vs actual 11.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 09:27:48 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes <[email protected]> wrote:\n> On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire <[email protected]> wrote:\n>> On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes <[email protected]> wrote:\n>>> I'm not sure that this change would fix your problem, because it might\n>>> also change the costs of the alternative plans in a way that\n>>> neutralizes things. But I suspect it would fix it. Of course, a\n>>> correct estimate of the join size would also fix it--you have kind of\n>>> a perfect storm here.\n>>\n>> As far as I can see on the explain, the misestimation is 3x~4x not 200x.\n>\n> It is 3x (14085 vs 4588) for selectivity on one of the tables, \"Index\n> Only Scan using idx_trade_id_book on trade\".\n>\n> But for the join of both tables it is estimate 2120 vs actual 11.\n\nBut the final result set isn't further worked on (except for the\naggregate), which means it doesn't affect the cost by much.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 17:05:09 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Thu, Dec 6, 2012 at 12:05 PM, Claudio Freire <[email protected]> wrote:\n> On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes <[email protected]> wrote:\n>> On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire <[email protected]> wrote:\n>>> As far as I can see on the explain, the misestimation is 3x~4x not 200x.\n>>\n>> It is 3x (14085 vs 4588) for selectivity on one of the tables, \"Index\n>> Only Scan using idx_trade_id_book on trade\".\n>>\n>> But for the join of both tables it is estimate 2120 vs actual 11.\n>\n> But the final result set isn't further worked on (except for the\n> aggregate), which means it doesn't affect the cost by much.\n\nGood point. Both the NL and hash join do about the same amount of\nwork probing for success whether the success is actually there or not.\n\nSo scratch what I said about the correlation being important, in this\ncase it is not.\n\nThe 3x error is enough to push it over the edge, but the fudge factor\nis what gets it so close to that edge in the first place.\n\nAnd I'm now pretty sure the fudge factor change would fix this. The\ntruly-fast NL plan is getting overcharged by the fudge-factor once per\neach 14,085 of the loopings, while the truly-slow bitmap scan is\novercharged only once for the entire scan. 
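Back-of-envelope (treating the ~38 of otherwise-unexplained cost per index
probe from upthread as mostly fudge factor, which is an assumption):

    nestloop plan:  14,085 probes * ~38  =  ~535,000 of added cost
    bitmap plan:         1 scan   * ~38  =       ~38 of added cost
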
So the change is by no\nmeans neutralized between the two plans.\n\nI don't know if my other theory that the bitmap scan is overflowing\nwork_mem (but not costed for doing so) is also contributing.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 13:09:54 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, 2012-12-04 at 15:42 -0800, Jeff Janes wrote:\n> On Tue, Dec 4, 2012 at 10:03 AM, <[email protected]> wrote:\n> >[...]\n> >\n> > Is there some nice bit of literature somewhere that explains what sort of\n> > costs are associated with the different types of lookup?\n> \n> I've heard good things about Greg Smith's book, but I don't know if it\n> covers this particular thing.\n> \n> Otherwise, I don't know of a good single place which is a tutorial\n> rather than a reference (or the code itself)\n> \n\nGreg's book is awesome. It really gives a lot of\ninformations/tips/whatever on performances. I mostly remember all the\ninformations about hardware, OS, PostgreSQL configuration, and such. Not\nmuch on the EXPLAIN part.\n\nOn the EXPLAIN part, you may have better luck with some slides available\nhere and there.\n\nRobert Haas gave a talk on the query planner at pgCon 2010. The audio\nfeed of Robert Haas talk is available with this file:\nhttp://www.pgcon.org/2010/audio/15%20The%20PostgreSQL%20Query%\n20Planner.mp3\n\nYou can also find the slides on\nhttps://sites.google.com/site/robertmhaas/presentations\n\nYou can also read the \"Explaining the Postgres Query Optimizer\" talk\nwritten by Bruce Momjian. It's available there :\nhttp://momjian.us/main/presentations/internals.html\n\nAnd finally, you can grab my slides over here:\nhttp://www.dalibo.org/_media/understanding_explain.pdf. You have more\nthan slides. I tried to put a lot of informations in there.\n\n\n-- \nGuillaume\nhttp://blog.guillaume.lelarge.info\nhttp://www.dalibo.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 08 Dec 2012 16:15:42 +0100", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "> Greg's book is awesome. It really gives a lot of informations/tips/whatever on performances. I mostly remember all the informations about hardware, OS, PostgreSQL configuration, and such. Not much on the EXPLAIN part.\n\nArrived this morning :)\n\n> http://www.pgcon.org/2010/audio/15%20The%20PostgreSQL%20Query%\n> https://sites.google.com/site/robertmhaas/presentations\n> http://momjian.us/main/presentations/internals.html\n> http://www.dalibo.org/_media/understanding_explain.pdf\n\nWell that is my evenings occupied for the next week. 
Thank you kindly.\n\n- Phil\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 10 Dec 2012 09:52:42 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" }, { "msg_contents": "[moved to hackers]\n\nOn Wednesday, December 5, 2012, Tom Lane wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > I now see where the cost is coming from. In commit 21a39de5809 (first\n> > appearing in 9.2) the \"fudge factor\" cost estimate for large indexes\n> > was increased by about 10 fold, which really hits this index hard.\n>\n> > This was fixed in commit bf01e34b556 \"Tweak genericcostestimate's\n> > fudge factor for index size\", by changing it to use the log of the\n> > index size. But that commit probably won't be shipped until 9.3.\n>\n> Hm. To tell you the truth, in October I'd completely forgotten about\n> the January patch, and was thinking that the 1/10000 cost had a lot\n> of history behind it. But if we never shipped it before 9.2 then of\n> course that idea is false. Perhaps we should backpatch the log curve\n> into 9.2 --- that would reduce the amount of differential between what\n> 9.2 does and what previous branches do for large indexes.\n>\n\nI think we should backpatch it for 9.2.3. I've seen another email which is\nprobably due to the same issue (nested loop vs hash join). And some\nmonitoring of a database I am responsible for suggests it might be heading\nin that direction as well as the size grows.\n\nBut I am wondering if it should be present at all in 9.3. When it was\nintroduced, the argument seemed to be that smaller indexes might be easier\nto keep in cache. And surely that is so. But if a larger index that\ncovers the same type of queries exists when a smaller one also exists, we\ncan assume the larger one also exists for a reason. While it may be easier\nto keep a smaller index in cache, it is not easier to keep both a larger\nand a smaller one in cache as the same time. So it seems to me that this\nreasoning is a wash. (Countering this argument is that a partial index is\nmore esoteric, and so if one exists it is more likely to have been\nwell-thought out)\n\nThe argument for increasing the penalty by a factor of 10 was that the\nsmaller one could be \"swamped by noise such as page-boundary-roundoff\nbehavior\". I don't really know what that means, but it seems to me that if\nit is so easily swamped by noise, then it probably isn't so important in\nthe first place which one it chooses. Whereas, I think that even the log\nbased penalty has the risk of being too much on large indexes. (For one\nthing, it implicitly assumes the fan-out ratio at each level of btree is e,\nwhen it will usually be much larger than e.)\n\nOne thing which depends on the index size which, as far as I can tell, is\nnot currently being counted is the cost of comparing the tuples all the way\ndown the index. This would be proportional to log2(indextuples) *\ncpu_index_tuple_cost, or maybe log2(indextuples) *\n(cpu_index_tuple_cost+cpu_operator_cost), or something like that. This\ncost would depend on the number index tuples, not baserel tuples, and so\nwould penalize large indexes. 
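To put rough numbers on it (assuming the default cpu_index_tuple_cost of
0.005 and cpu_operator_cost of 0.0025, and an index over something like
the ~100 million row table in this thread):

    log2(100,000,000) =~ 26.6
    26.6 * (0.005 + 0.0025) =~ 0.2 of descent cost
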
It would be much smaller than the current\nlog(pages/10000) penalty, but it would be more principle-based rather than\nheuristic-based.\n\nThe log(pages/10000) change is more suitable for back-patching because it\nis more conservative, being asymptotic with the previous behavior at the\nlow end. But I don't think that the case for that previous behavior was\never all that strong.\n\nIf we really want a heuristic to give a bonus to partial indexes, maybe we\nshould explicitly give them a bonus, rather than penalizing ordinary\nindexes.\n\nmaybe something like bonus = 0.05 * (reltuples-indextuples)/reltuples\n\nCheers,\n\nJeff\n\n\n>\n\n[moved to hackers]On Wednesday, December 5, 2012, Tom Lane wrote:Jeff Janes <[email protected]> writes:\n\n\n> I now see where the cost is coming from.  In commit 21a39de5809 (first\n> appearing in 9.2) the \"fudge factor\" cost estimate for large indexes\n> was increased by about 10 fold, which really hits this index hard.\n\n> This was fixed in commit bf01e34b556 \"Tweak genericcostestimate's\n> fudge factor for index size\", by changing it to use the log of the\n> index size.  But that commit probably won't be shipped until 9.3.\n\nHm.  To tell you the truth, in October I'd completely forgotten about\nthe January patch, and was thinking that the 1/10000 cost had a lot\nof history behind it.  But if we never shipped it before 9.2 then of\ncourse that idea is false.  Perhaps we should backpatch the log curve\ninto 9.2 --- that would reduce the amount of differential between what\n9.2 does and what previous branches do for large indexes.I think we should backpatch it for 9.2.3.  I've seen another email which is probably due to the same issue (nested loop vs hash join).  And some monitoring of a database I am responsible for suggests it might be heading in that direction as well as the size grows.\nBut I am wondering if it should be present at all in 9.3.  When it was introduced, the argument seemed to be that smaller indexes might be easier to keep in cache.  And surely that is so.  But if a larger index that covers the same type of queries exists when a smaller one also exists, we can assume the larger one also exists for a reason.  While it may be easier to keep a smaller index in cache, it is not easier to keep both a larger and a smaller one in cache as the same time.  So it seems to me that this reasoning is a wash.  (Countering this argument is that a partial index is more esoteric, and so if one exists it is more likely to have been well-thought out)\nThe argument for increasing the penalty by a factor of 10 was that the smaller one could be \"swamped by noise such as page-boundary-roundoff behavior\".  I don't really know what that means, but it seems to me that if it is so easily swamped by noise, then it probably isn't so important in the first place which one it chooses.  Whereas, I think that even the log based penalty has the risk of being too much on large indexes.  (For one thing, it implicitly assumes the fan-out ratio at each level of btree is e, when it will usually be much larger than e.)\nOne thing which depends on the index size which, as far as I can tell, is not currently being counted is the cost of comparing the tuples all the way down the index.  This would be proportional to log2(indextuples) * cpu_index_tuple_cost, or maybe log2(indextuples) * (cpu_index_tuple_cost+cpu_operator_cost), or something like that.  This cost would depend on the number index tuples, not baserel tuples, and so would penalize large indexes.  
It would be much smaller than the current log(pages/10000) penalty, but it would be more principle-based rather than heuristic-based.\nThe log(pages/10000) change is more suitable for back-patching because it is more conservative, being asymptotic with the previous behavior at the low end.  But I don't think that the case for that previous behavior was ever all that strong.\nIf we really want a heuristic to give a bonus to partial indexes, maybe we should explicitly give them a bonus, rather than penalizing ordinary indexes.maybe something like bonus = 0.05 * (reltuples-indextuples)/reltuples\nCheers,Jeff", "msg_date": "Mon, 17 Dec 2012 22:00:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "[moved to hackers]\n\nOn Wednesday, December 5, 2012, Tom Lane wrote:\n\n> Jeff Janes <[email protected] <javascript:;>> writes:\n> > I now see where the cost is coming from. In commit 21a39de5809 (first\n> > appearing in 9.2) the \"fudge factor\" cost estimate for large indexes\n> > was increased by about 10 fold, which really hits this index hard.\n>\n> > This was fixed in commit bf01e34b556 \"Tweak genericcostestimate's\n> > fudge factor for index size\", by changing it to use the log of the\n> > index size. But that commit probably won't be shipped until 9.3.\n>\n> Hm. To tell you the truth, in October I'd completely forgotten about\n> the January patch, and was thinking that the 1/10000 cost had a lot\n> of history behind it. But if we never shipped it before 9.2 then of\n> course that idea is false. Perhaps we should backpatch the log curve\n> into 9.2 --- that would reduce the amount of differential between what\n> 9.2 does and what previous branches do for large indexes.\n>\n\nI think we should backpatch it for 9.2.3. I've seen another email which is\nprobably due to the same issue (nested loop vs hash join). And some\nmonitoring of a database I am responsible for suggests it might be heading\nin that direction as well as the size grows.\n\nBut I am wondering if it should be present at all in 9.3. When it was\nintroduced, the argument seemed to be that smaller indexes might be easier\nto keep in cache. And surely that is so. But if a larger index that\ncovers the same type of queries exists when a smaller one also exists, we\ncan assume the larger one also exists for a reason. While it may be easier\nto keep a smaller index in cache, it is not easier to keep both a larger\nand a smaller one in cache as the same time. So it seems to me that this\nreasoning is a wash. (Countering this argument is that a partial index is\nmore esoteric, and so if one exists it is more likely to have been\nwell-thought out)\n\nThe argument for increasing the penalty by a factor of 10 was that the\nsmaller one could be \"swamped by noise such as page-boundary-roundoff\nbehavior\". I don't really know what that means, but it seems to me that if\nit is so easily swamped by noise, then it probably isn't so important in\nthe first place which one it chooses. Whereas, I think that even the log\nbased penalty has the risk of being too much on large indexes. (For one\nthing, it implicitly assumes the fan-out ratio at each level of btree is e,\nwhen it will usually be much larger than e.)\n\nOne thing which depends on the index size which, as far as I can tell, is\nnot currently being counted is the cost of comparing the tuples all the way\ndown the index. 
This would be proportional to log2(indextuples) *\ncpu_index_tuple_cost, or maybe log2(indextuples) *\n(cpu_index_tuple_cost+cpu_operator_cost), or something like that. This\ncost would depend on the number index tuples, not baserel tuples, and so\nwould penalize large indexes. It would be much smaller than the current\nlog(pages/10000) penalty, but it would be more principle-based rather than\nheuristic-based.\n\nThe log(pages/10000) change is more suitable for back-patching because it\nis more conservative, being asymptotic with the previous behavior at the\nlow end. But I don't think that the case for that previous behavior was\never all that strong.\n\nIf we really want a heuristic to give a bonus to partial indexes, maybe we\nshould explicitly give them a bonus, rather than penalizing ordinary\nindexes (which penalty is then used in comparing them to hash joins and\nsuch, not just partial indexes).\n\nmaybe something like bonus = 0.05 * (reltuples-indextuples)/reltuples\n\nCheers,\n\nJeff\n\n\n>\n\n[moved to hackers]On Wednesday, December 5, 2012, Tom Lane wrote:Jeff Janes <[email protected]> writes:\n\n> I now see where the cost is coming from.  In commit 21a39de5809 (first\n> appearing in 9.2) the \"fudge factor\" cost estimate for large indexes\n> was increased by about 10 fold, which really hits this index hard.\n\n> This was fixed in commit bf01e34b556 \"Tweak genericcostestimate's\n> fudge factor for index size\", by changing it to use the log of the\n> index size.  But that commit probably won't be shipped until 9.3.\n\nHm.  To tell you the truth, in October I'd completely forgotten about\nthe January patch, and was thinking that the 1/10000 cost had a lot\nof history behind it.  But if we never shipped it before 9.2 then of\ncourse that idea is false.  Perhaps we should backpatch the log curve\ninto 9.2 --- that would reduce the amount of differential between what\n9.2 does and what previous branches do for large indexes.I think we should backpatch it for 9.2.3.  I've seen another email which is probably due to the same issue (nested loop vs hash join).  And some monitoring of a database I am responsible for suggests it might be heading in that direction as well as the size grows.\nBut I am wondering if it should be present at all in 9.3.  When it was introduced, the argument seemed to be that smaller indexes might be easier to keep in cache.  And surely that is so.  But if a larger index that covers the same type of queries exists when a smaller one also exists, we can assume the larger one also exists for a reason.  While it may be easier to keep a smaller index in cache, it is not easier to keep both a larger and a smaller one in cache as the same time.  So it seems to me that this reasoning is a wash.  (Countering this argument is that a partial index is more esoteric, and so if one exists it is more likely to have been well-thought out)\nThe argument for increasing the penalty by a factor of 10 was that the smaller one could be \"swamped by noise such as page-boundary-roundoff behavior\".  I don't really know what that means, but it seems to me that if it is so easily swamped by noise, then it probably isn't so important in the first place which one it chooses.  Whereas, I think that even the log based penalty has the risk of being too much on large indexes.  
(For one thing, it implicitly assumes the fan-out ratio at each level of btree is e, when it will usually be much larger than e.)\nOne thing which depends on the index size which, as far as I can tell, is not currently being counted is the cost of comparing the tuples all the way down the index.  This would be proportional to log2(indextuples) * cpu_index_tuple_cost, or maybe log2(indextuples) * (cpu_index_tuple_cost+cpu_operator_cost), or something like that.  This cost would depend on the number index tuples, not baserel tuples, and so would penalize large indexes.  It would be much smaller than the current log(pages/10000) penalty, but it would be more principle-based rather than heuristic-based.\nThe log(pages/10000) change is more suitable for back-patching because it is more conservative, being asymptotic with the previous behavior at the low end.  But I don't think that the case for that previous behavior was ever all that strong.\nIf we really want a heuristic to give a bonus to partial indexes, maybe we should explicitly give them a bonus, rather than penalizing ordinary indexes (which penalty is then used in comparing them to hash joins and such, not just partial indexes).\nmaybe something like bonus = 0.05 * (reltuples-indextuples)/reltuplesCheers,Jeff", "msg_date": "Tue, 18 Dec 2012 17:05:05 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On Tue, Dec 18, 2012 at 5:05 PM, Jeff Janes <[email protected]> wrote:\n\nSorry for the malformed and duplicate post. I was not trying to be\nemphatic; I was testing out gmail offline. Clearly the test didn't go\ntoo well.\n\nJeff\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 19 Dec 2012 06:40:54 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> [moved to hackers]\n> On Wednesday, December 5, 2012, Tom Lane wrote:\n>> Hm. To tell you the truth, in October I'd completely forgotten about\n>> the January patch, and was thinking that the 1/10000 cost had a lot\n>> of history behind it. But if we never shipped it before 9.2 then of\n>> course that idea is false. Perhaps we should backpatch the log curve\n>> into 9.2 --- that would reduce the amount of differential between what\n>> 9.2 does and what previous branches do for large indexes.\n\n> I think we should backpatch it for 9.2.3. I've seen another email which is\n> probably due to the same issue (nested loop vs hash join). 
And some\n> monitoring of a database I am responsible for suggests it might be heading\n> in that direction as well as the size grows.\n\nI received an off-list report of a case where not only did the 1/10000\nfactor cause a nestloop-vs-hashjoin decision to be made wrongly, but\neven adding the ln() computation as in commit bf01e34b556 didn't fix it.\nI believe the index in question was on the order of 20000 pages, so\nit's not too hard to see why this might be the case:\n\n* historical fudge factor\t4 * 20000/100000 = 0.8\n* 9.2 fudge factor\t\t4 * 20000/10000 = 8.0\n* with ln() correction\t\t4 * ln(1 + 20000/10000) = 4.39 or so\n\nAt this point I'm about ready to not only revert the 100000-to-10000\nchange, but keep the ln() adjustment, ie make the calculation be\nrandom_page_cost * ln(1 + index_pages/100000). This would give\nessentially the pre-9.2 behavior for indexes up to some tens of\nthousands of pages, and keep the fudge factor from getting out of\ncontrol even for very very large indexes.\n\n> But I am wondering if it should be present at all in 9.3. When it was\n> introduced, the argument seemed to be that smaller indexes might be easier\n> to keep in cache.\n\nNo. The argument is that if we don't have some such correction, the\nplanner is liable to believe that different-sized indexes have *exactly\nthe same cost*, if a given query would fetch the same number of index\nentries. This is quite easy to demonstrate when experimenting with\npartial indexes, in particular - without the fudge factor the planner\nsees no advantage of a partial index over a full index from which the\nquery would fetch the same number of entries. We do want the planner\nto pick the partial index if it's usable, and a fudge factor is about\nthe least unprincipled way to make it do so.\n\n> The argument for increasing the penalty by a factor of 10 was that the\n> smaller one could be \"swamped by noise such as page-boundary-roundoff\n> behavior\".\n\nYeah, I wrote that, but in hindsight it seems like a mistaken idea.\nThe noise problem is that because we round off page count and row count\nestimates to integers at various places, it's fairly easy for small\nchanges in statistics to move a plan's estimated cost by significantly\nmore than this fudge factor will. However, the case that the fudge\nfactor is meant to fix is indexes that are otherwise identical for\nthe query's purposes --- and any roundoff effects will be the same.\n(The fudge factor itself is *not* rounded off anywhere, it flows\ndirectly to the bottom-line cost for the indexscan.)\n\n> One thing which depends on the index size which, as far as I can tell, is\n> not currently being counted is the cost of comparing the tuples all the way\n> down the index. This would be proportional to log2(indextuples) *\n> cpu_index_tuple_cost, or maybe log2(indextuples) *\n> (cpu_index_tuple_cost+cpu_operator_cost), or something like that.\n\nYeah, I know. I've experimented repeatedly over the years with trying\nto account explicitly for index descent costs. But every time, anything\nthat looks even remotely principled turns out to produce an overly large\ncorrection that results in bad plan choices. I don't know exactly why\nthis is, but it's true.\n\nOne other point is that I think it is better for any such correction\nto depend on the index's total page count, not total tuple count,\nbecause otherwise two indexes that are identical except for bloat\neffects will appear to have identical costs. 
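Since all of these candidate corrections are driven purely by physical page
count, it may help to put some numbers side by side. A back-of-the-envelope
sketch (illustrative only, not the actual selfuncs.c code, and assuming the
default random_page_cost of 4.0):

    #include <math.h>
    #include <stdio.h>

    /* Compare the page-count-based fudge factors under discussion. */
    int main(void)
    {
        const double random_page_cost = 4.0;
        const double pages[] = {2000, 20000, 200000, 2000000};
        int         i;

        printf("%12s %12s %12s %14s %15s\n", "index_pages",
               "historical", "9.2", "ln(1+p/10000)", "ln(1+p/100000)");
        for (i = 0; i < 4; i++)
        {
            double      p = pages[i];

            printf("%12.0f %12.3f %12.3f %14.3f %15.3f\n", p,
                   random_page_cost * p / 100000.0,
                   random_page_cost * p / 10000.0,
                   random_page_cost * log(1.0 + p / 10000.0),
                   random_page_cost * log(1.0 + p / 100000.0));
        }
        return 0;
    }

For the 20000-page index above this reproduces the 0.8 / 8.0 / 4.39 figures,
and the ln(1 + index_pages/100000) form I'm proposing comes out around 0.73
there while still staying bounded for indexes a hundred times larger.
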
So from that standpoint,\nthe ln() form of the fudge factor seems quite reasonable as a crude form\nof index descent cost estimate. The fact that we're needing to dial\nit down so much reinforces my feeling that descent costs are close to\nnegligible in practice.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sat, 05 Jan 2013 17:18:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On Saturday, January 5, 2013, Tom Lane wrote:\n\n> Jeff Janes <[email protected] <javascript:;>> writes:\n> > [moved to hackers]\n> > On Wednesday, December 5, 2012, Tom Lane wrote:\n> >> Hm. To tell you the truth, in October I'd completely forgotten about\n> >> the January patch, and was thinking that the 1/10000 cost had a lot\n> >> of history behind it. But if we never shipped it before 9.2 then of\n> >> course that idea is false. Perhaps we should backpatch the log curve\n> >> into 9.2 --- that would reduce the amount of differential between what\n> >> 9.2 does and what previous branches do for large indexes.\n>\n> > I think we should backpatch it for 9.2.3. I've seen another email which\n> is\n> > probably due to the same issue (nested loop vs hash join). And some\n> > monitoring of a database I am responsible for suggests it might be\n> heading\n> > in that direction as well as the size grows.\n>\n> I received an off-list report of a case where not only did the 1/10000\n> factor cause a nestloop-vs-hashjoin decision to be made wrongly, but\n> even adding the ln() computation as in commit bf01e34b556 didn't fix it.\n> I believe the index in question was on the order of 20000 pages, so\n> it's not too hard to see why this might be the case:\n>\n> * historical fudge factor 4 * 20000/100000 = 0.8\n> * 9.2 fudge factor 4 * 20000/10000 = 8.0\n> * with ln() correction 4 * ln(1 + 20000/10000) = 4.39 or so\n>\n> At this point I'm about ready to not only revert the 100000-to-10000\n> change, but keep the ln() adjustment, ie make the calculation be\n> random_page_cost * ln(1 + index_pages/100000). This would give\n> essentially the pre-9.2 behavior for indexes up to some tens of\n> thousands of pages, and keep the fudge factor from getting out of\n> control even for very very large indexes.\n>\n\nYeah, I agree that even the log function grows too rapidly, especially at\nthe early stages. I didn't know if a change that changes that asymptote\nwould be welcome in a backpatch, though.\n\n\n>\n> > But I am wondering if it should be present at all in 9.3. When it was\n> > introduced, the argument seemed to be that smaller indexes might be\n> easier\n> > to keep in cache.\n>\n> No. The argument is that if we don't have some such correction, the\n> planner is liable to believe that different-sized indexes have *exactly\n> the same cost*, if a given query would fetch the same number of index\n> entries.\n\n\nBut it seems like they very likely *do* have exactly the same cost, unless\nyou want to take either the CPU cost of descending the index into account,\nor take cachebility into account. If they do have the same cost, why\nshouldn't the estimate reflect that? 
Using cpu_index_tuple_cost * lg(#\nindex tuples) would break the tie, but by such a small amount that it would\neasily get swamped by the stochastic nature of ANALYZE for nodes expected\nto return more than one row.\n\n\n> This is quite easy to demonstrate when experimenting with\n> partial indexes, in particular - without the fudge factor the planner\n> sees no advantage of a partial index over a full index from which the\n> query would fetch the same number of entries. We do want the planner\n> to pick the partial index if it's usable, and a fudge factor is about\n> the least unprincipled way to make it do so.\n>\n\nI noticed a long time ago that ordinary index scans seemed to be preferred\n over bitmap index scans with the same cost estimate, as best as I could\ndetermine because they are tested first and the tie goes to the first one\n(and there is something about it needs to be better by 1% to be counted as\nbetter--although that part might only apply when the start-up cost and the\nfull cost disagree over which one is best). If I've reconstructed that\ncorrectly, could something similar be done for partial indexes, where they\nare just considered first? I guess the problem there is a index scan on a\npartial index is not a separate node type from a index scan on a full\nindex, unlike index vs bitmap.\n\n>\n> > The argument for increasing the penalty by a factor of 10 was that the\n> > smaller one could be \"swamped by noise such as page-boundary-roundoff\n> > behavior\".\n>\n> Yeah, I wrote that, but in hindsight it seems like a mistaken idea.\n> The noise problem is that because we round off page count and row count\n> estimates to integers at various places, it's fairly easy for small\n> changes in statistics to move a plan's estimated cost by significantly\n> more than this fudge factor will. However, the case that the fudge\n> factor is meant to fix is indexes that are otherwise identical for\n> the query's purposes --- and any roundoff effects will be the same.\n> (The fudge factor itself is *not* rounded off anywhere, it flows\n> directly to the bottom-line cost for the indexscan.)\n>\n\nOK, and this agrees with my experience. It seemed like it was the\nstochastic nature of analyze, not round off problems, that caused the plans\nto go back and forth.\n\n\n>\n> > One thing which depends on the index size which, as far as I can tell, is\n> > not currently being counted is the cost of comparing the tuples all the\n> way\n> > down the index. This would be proportional to log2(indextuples) *\n> > cpu_index_tuple_cost, or maybe log2(indextuples) *\n> > (cpu_index_tuple_cost+cpu_operator_cost), or something like that.\n>\n> Yeah, I know. I've experimented repeatedly over the years with trying\n> to account explicitly for index descent costs. But every time, anything\n> that looks even remotely principled turns out to produce an overly large\n> correction that results in bad plan choices. I don't know exactly why\n> this is, but it's true.\n>\n\nlog2(indextuples) * cpu_index_tuple_cost should produce pretty darn small\ncorrections, at least if cost parameters are at the defaults. Do you\nremember if that one of the ones you tried?\n\n\n>\n> One other point is that I think it is better for any such correction\n> to depend on the index's total page count, not total tuple count,\n> because otherwise two indexes that are identical except for bloat\n> effects will appear to have identical costs.\n\n\nThis isn't so. 
A bloated index will be estimated to visit more pages than\nan otherwise identical non-bloated index, and so have a higher cost.\n\njeff=# create table bar as select * from generate_series(1,1000000);\njeff=# create index foo1 on bar (generate_series);\njeff=# create index foo2 on bar (generate_series);\njeff=# delete from bar where generate_series %100 !=0;\njeff=# reindex index foo1;\njeff=# analyze ;\njeff=# explain select count(*) from bar where generate_series between 6 and\n60;\n QUERY PLAN\n--------------------------------------------------------------------------\n Aggregate (cost=8.27..8.28 rows=1 width=0)\n -> Index Scan using foo1 on bar (cost=0.00..8.27 rows=1 width=0)\n Index Cond: ((generate_series >= 6) AND (generate_series <= 60))\n(3 rows)\n\njeff=# begin; drop index foo1; explain select count(*) from bar where\ngenerate_series between 6 and 600; rollback;\n QUERY PLAN\n---------------------------------------------------------------------------\n Aggregate (cost=14.47..14.48 rows=1 width=0)\n -> Index Scan using foo2 on bar (cost=0.00..14.46 rows=5 width=0)\n Index Cond: ((generate_series >= 6) AND (generate_series <= 600))\n(3 rows)\n\nThis is due to this in genericcostestimate\n\n if (index->pages > 1 && index->tuples > 1)\n numIndexPages = ceil(numIndexTuples * index->pages / index->tuples);\n\nIf the index is bloated (or just has wider index tuples), index->pages will\ngo up but index->tuples will not.\n\nIf it is just a partial index, however, then both will go down together and\nit will not be counted as a benefit from being smaller.\n\nFor the bloated index, this correction might even be too harsh. If the\nindex is bloated by having lots of mostly-empty pages, then this seems\nfair. If it is bloated by having lots of entirely empty pages that are not\neven linked into the tree, then those empty ones will never be visited and\nso it shouldn't be penalized.\n\nWorse, this over-punishment of bloat is more likely to penalize partial\nindexes. Since they are vacuumed on the table's schedule, not their own\nschedule, they likely get vacuumed less often relative to the amount of\nturn-over they experience and so have higher steady-state bloat. (I'm\nassuming the partial index is on the particularly hot rows, which I would\nexpect is how partial indexes would generally be used)\n\nThis extra bloat was one of the reasons the partial index was avoided in\n\"Why does the query planner use two full indexes, when a dedicated partial\nindex exists?\"\n\n So from that standpoint,\n> the ln() form of the fudge factor seems quite reasonable as a crude form\n> of index descent cost estimate. The fact that we're needing to dial\n> it down so much reinforces my feeling that descent costs are close to\n> negligible in practice.\n>\n\nIf they are negligible, why do we really care that it use a partial index\nvs a full index? It seems like the only reason we would care is\ncacheability. Unfortunately we don't have any infrastructure to model that\ndirectly.\n\nCheers,\n\nJeff\n\nOn Saturday, January 5, 2013, Tom Lane wrote:Jeff Janes <[email protected]> writes:\n\n> [moved to hackers]\n> On Wednesday, December 5, 2012, Tom Lane wrote:\n>> Hm.  To tell you the truth, in October I'd completely forgotten about\n>> the January patch, and was thinking that the 1/10000 cost had a lot\n>> of history behind it.  But if we never shipped it before 9.2 then of\n>> course that idea is false.  
Perhaps we should backpatch the log curve\n>> into 9.2 --- that would reduce the amount of differential between what\n>> 9.2 does and what previous branches do for large indexes.\n\n> I think we should backpatch it for 9.2.3.  I've seen another email which is\n> probably due to the same issue (nested loop vs hash join).  And some\n> monitoring of a database I am responsible for suggests it might be heading\n> in that direction as well as the size grows.\n\nI received an off-list report of a case where not only did the 1/10000\nfactor cause a nestloop-vs-hashjoin decision to be made wrongly, but\neven adding the ln() computation as in commit bf01e34b556 didn't fix it.\nI believe the index in question was on the order of 20000 pages, so\nit's not too hard to see why this might be the case:\n\n* historical fudge factor       4 * 20000/100000 = 0.8\n* 9.2 fudge factor              4 * 20000/10000 = 8.0\n* with ln() correction          4 * ln(1 + 20000/10000) = 4.39 or so\n\nAt this point I'm about ready to not only revert the 100000-to-10000\nchange, but keep the ln() adjustment, ie make the calculation be\nrandom_page_cost * ln(1 + index_pages/100000).  This would give\nessentially the pre-9.2 behavior for indexes up to some tens of\nthousands of pages, and keep the fudge factor from getting out of\ncontrol even for very very large indexes.Yeah, I agree that even the log function grows too rapidly, especially at the early stages.  I didn't know if a change that changes that asymptote would be welcome in a backpatch, though.\n \n\n> But I am wondering if it should be present at all in 9.3.  When it was\n> introduced, the argument seemed to be that smaller indexes might be easier\n> to keep in cache.\n\nNo.  The argument is that if we don't have some such correction, the\nplanner is liable to believe that different-sized indexes have *exactly\nthe same cost*, if a given query would fetch the same number of index\nentries.  But it seems like they very likely *do* have exactly the same cost, unless you want to take either the CPU cost of descending the index into account, or take cachebility into account.  If they do have the same cost, why shouldn't the estimate reflect that?  Using cpu_index_tuple_cost * lg(# index tuples) would break the tie, but by such a small amount that it would easily get swamped by the stochastic nature of ANALYZE for nodes expected to return more than one row.\n This is quite easy to demonstrate when experimenting with\npartial indexes, in particular - without the fudge factor the planner\nsees no advantage of a partial index over a full index from which the\nquery would fetch the same number of entries.  We do want the planner\nto pick the partial index if it's usable, and a fudge factor is about\nthe least unprincipled way to make it do so.I noticed a long time ago that ordinary index scans seemed to be preferred  over bitmap index scans with the same cost estimate, as best as I could determine because they are tested first and the tie goes to the first one (and there is something about it needs to be better by 1% to be counted as better--although that part might only apply when the start-up cost and the full cost disagree over which one is best).  If I've reconstructed that correctly, could something similar be done for partial indexes, where they are just considered first?  
I guess the problem there is a index scan on a partial index is not a separate node type from a index scan on a full index, unlike index vs bitmap.\n\n\n> The argument for increasing the penalty by a factor of 10 was that the\n> smaller one could be \"swamped by noise such as page-boundary-roundoff\n> behavior\".\n\nYeah, I wrote that, but in hindsight it seems like a mistaken idea.\nThe noise problem is that because we round off page count and row count\nestimates to integers at various places, it's fairly easy for small\nchanges in statistics to move a plan's estimated cost by significantly\nmore than this fudge factor will.  However, the case that the fudge\nfactor is meant to fix is indexes that are otherwise identical for\nthe query's purposes --- and any roundoff effects will be the same.\n(The fudge factor itself is *not* rounded off anywhere, it flows\ndirectly to the bottom-line cost for the indexscan.)OK, and this agrees with my experience.  It seemed like it was the stochastic nature of analyze, not round off problems, that caused the plans to go back and forth.\n \n\n> One thing which depends on the index size which, as far as I can tell, is\n> not currently being counted is the cost of comparing the tuples all the way\n> down the index.  This would be proportional to log2(indextuples) *\n> cpu_index_tuple_cost, or maybe log2(indextuples) *\n> (cpu_index_tuple_cost+cpu_operator_cost), or something like that.\n\nYeah, I know.  I've experimented repeatedly over the years with trying\nto account explicitly for index descent costs.  But every time, anything\nthat looks even remotely principled turns out to produce an overly large\ncorrection that results in bad plan choices.  I don't know exactly why\nthis is, but it's true.log2(indextuples) * cpu_index_tuple_cost  should produce pretty darn small corrections, at least if cost parameters are at the defaults.  Do you remember if that one of the ones you tried?\n \n\nOne other point is that I think it is better for any such correction\nto depend on the index's total page count, not total tuple count,\nbecause otherwise two indexes that are identical except for bloat\neffects will appear to have identical costs.This isn't so.  
A bloated index will be estimated to visit more pages than an otherwise identical non-bloated index, and so have a higher cost.\njeff=# create table bar as select * from generate_series(1,1000000);jeff=# create index foo1 on bar (generate_series);jeff=# create index foo2 on bar (generate_series);\njeff=# delete from bar where generate_series %100 !=0;jeff=# reindex index foo1;jeff=# analyze ;jeff=# explain select count(*) from bar where generate_series between 6 and 60;\n                                QUERY PLAN                                -------------------------------------------------------------------------- Aggregate  (cost=8.27..8.28 rows=1 width=0)\n   ->  Index Scan using foo1 on bar  (cost=0.00..8.27 rows=1 width=0)         Index Cond: ((generate_series >= 6) AND (generate_series <= 60))(3 rows)jeff=# begin; drop index foo1; explain select count(*) from bar where generate_series between 6 and 600; rollback;\n                                QUERY PLAN                                 --------------------------------------------------------------------------- Aggregate  (cost=14.47..14.48 rows=1 width=0)\n   ->  Index Scan using foo2 on bar  (cost=0.00..14.46 rows=5 width=0)         Index Cond: ((generate_series >= 6) AND (generate_series <= 600))(3 rows)This is due to this in genericcostestimate\n    if (index->pages > 1 && index->tuples > 1)\n        numIndexPages = ceil(numIndexTuples * index->pages / index->tuples);If the index is bloated (or just has wider index tuples), index->pages will go up but index->tuples will not.\nIf it is just a partial index, however, then both will go down together and it will not be counted as a benefit from being smaller.For the bloated index, this correction might even be too harsh.  If the index is bloated by having lots of mostly-empty pages, then this seems fair.  If it is bloated by having lots of entirely empty pages that are not even linked into the tree, then those empty ones will never be visited and so it shouldn't be penalized. \nWorse, this over-punishment of bloat is more likely to penalize partial indexes.  Since they are  vacuumed on the table's schedule, not their own schedule, they likely get vacuumed less often relative to the amount of turn-over they experience and so have higher steady-state bloat. (I'm assuming the partial index is on the particularly hot rows, which I would expect is how partial indexes would generally be used)\nThis extra bloat was one of the reasons the partial index was avoided in \"Why does the query planner use two full indexes, when a dedicated partial index exists?\"\n So from that standpoint,\nthe ln() form of the fudge factor seems quite reasonable as a crude form\nof index descent cost estimate.  The fact that we're needing to dial\nit down so much reinforces my feeling that descent costs are close to\nnegligible in practice.If they are negligible, why do we really care that it use a partial index vs a full index?  It seems like the only reason we would care is cacheability.  
Unfortunately we don't have any infrastructure to model that directly.\nCheers,Jeff", "msg_date": "Sun, 6 Jan 2013 08:29:17 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Saturday, January 5, 2013, Tom Lane wrote:\n>> Jeff Janes <[email protected] <javascript:;>> writes:\n>>> One thing which depends on the index size which, as far as I can tell, is\n>>> not currently being counted is the cost of comparing the tuples all the way\n>>> down the index. This would be proportional to log2(indextuples) *\n>>> cpu_index_tuple_cost, or maybe log2(indextuples) *\n>>> (cpu_index_tuple_cost+cpu_operator_cost), or something like that.\n\n>> Yeah, I know. I've experimented repeatedly over the years with trying\n>> to account explicitly for index descent costs. But every time, anything\n>> that looks even remotely principled turns out to produce an overly large\n>> correction that results in bad plan choices. I don't know exactly why\n>> this is, but it's true.\n\n> log2(indextuples) * cpu_index_tuple_cost should produce pretty darn small\n> corrections, at least if cost parameters are at the defaults. Do you\n> remember if that one of the ones you tried?\n\nWell, a picture is worth a thousand words, so see the attached plot of\nthe various proposed corrections for indexes of 10 to 1e9 tuples. For\npurposes of argument I've supposed that the index has loading factor\n256 tuples/page, and I used the default values of random_page_cost and\ncpu_index_tuple_cost. The red line is your proposal, the green one is\nmine, the blue one is current HEAD behavior.\n\nBoth the blue and green lines get to values that might be thought\nexcessively high for very large indexes, but I doubt that that really\nmatters: if the table contains a billion rows, the cost of a seqscan\nwill be so high that it'll hardly matter if we overshoot the cost of an\nindex probe a bit. (Also, once the table gets that large it's debatable\nwhether the upper index levels all fit in cache, so charging an extra\nrandom_page_cost or so isn't necessarily unrealistic.)\n\nThe real problem though is at the other end of the graph: I judge that\nthe red line represents an overcorrection for indexes of a few thousand\ntuples.\n\nIt might also be worth noting that for indexes of a million or so\ntuples, we're coming out to about the same place anyway.\n\n>> One other point is that I think it is better for any such correction\n>> to depend on the index's total page count, not total tuple count,\n>> because otherwise two indexes that are identical except for bloat\n>> effects will appear to have identical costs.\n\n> This isn't so. A bloated index will be estimated to visit more pages than\n> an otherwise identical non-bloated index, and so have a higher cost.\n\nNo it won't, or at least not reliably so, if there is no form of\ncorrection for index descent costs. For instance, in a probe into a\nunique index, we'll always estimate that we're visiting a single index\ntuple on a single index page. The example you show is tweaked to ensure\nthat it estimates visiting more than one index page, and in that context\nthe leaf-page-related costs probably do scale with bloat; but they won't\nif the query is only looking for one index entry.\n\n> For the bloated index, this correction might even be too harsh. If the\n> index is bloated by having lots of mostly-empty pages, then this seems\n> fair. 
If it is bloated by having lots of entirely empty pages that are not\n> even linked into the tree, then those empty ones will never be visited and\n> so it shouldn't be penalized.\n\nIt's true that an un-linked empty page adds no cost by itself. But if\nthere are a lot of now-empty pages, that probably means a lot of vacant\nspace on upper index pages (which must once have held downlinks to those\npages). Which means more upper pages traversed to get to the target\nleaf page than we'd have in a non-bloated index. Without more\nexperimental evidence than we've got at hand, I'm disinclined to suppose\nthat index bloat is free.\n\n> This extra bloat was one of the reasons the partial index was avoided in\n> \"Why does the query planner use two full indexes, when a dedicated partial\n> index exists?\"\n\nInteresting point, but it's far from clear that the planner was wrong in\nsupposing that that bloat had significant cost. We agree that the\ncurrent 9.2 correction is too large, but it doesn't follow that zero is\na better value.\n\n>> So from that standpoint,\n>> the ln() form of the fudge factor seems quite reasonable as a crude form\n>> of index descent cost estimate. The fact that we're needing to dial\n>> it down so much reinforces my feeling that descent costs are close to\n>> negligible in practice.\n\n> If they are negligible, why do we really care that it use a partial index\n> vs a full index?\n\nTBH, in situations like the ones I'm thinking about it's not clear that\na partial index is a win at all. The cases where a partial index really\nwins are where it doesn't index rows that you would otherwise have to\nvisit and make a non-indexed predicate test against --- and those costs\nwe definitely do model. However, if the planner doesn't pick the\npartial index if available, people are going to report that as a bug.\nThey won't be able to find out that they're wasting their time defining\na partial index if the planner won't pick it.\n\nSo, between the bloat issue and the partial-index issue, I think it's\nimportant that there be some component of indexscan cost that varies\naccording to index size, even when the same number of leaf pages and\nleaf index entries will be visited. It does not have to be a large\ncomponent; all experience to date says that it shouldn't be very large.\nBut there needs to be something.\n\n\t\t\tregards, tom lane\n\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Sun, 06 Jan 2013 13:18:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On 5 January 2013 22:18, Tom Lane <[email protected]> wrote:\n\n>> But I am wondering if it should be present at all in 9.3. When it was\n>> introduced, the argument seemed to be that smaller indexes might be easier\n>> to keep in cache.\n>\n> No. The argument is that if we don't have some such correction, the\n> planner is liable to believe that different-sized indexes have *exactly\n> the same cost*, if a given query would fetch the same number of index\n> entries.\n\nThe only difference between a large and a small index is the initial\nfetch, since the depth of the index may vary. After that the size of\nthe index is irrelevant to the cost of the scan, since we're just\nscanning across the leaf blocks. 
(Other differences may exist but not\nrelated to size).\n\nPerhaps the cost of the initial fetch is what you mean by a\n\"correction\"? In that case, why not use the index depth directly from\nthe metapage, rather than play with size?\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 6 Jan 2013 18:19:10 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On 6 January 2013 16:29, Jeff Janes <[email protected]> wrote:\n\n> Worse, this over-punishment of bloat is more likely to penalize partial\n> indexes. Since they are vacuumed on the table's schedule, not their own\n> schedule, they likely get vacuumed less often relative to the amount of\n> turn-over they experience and so have higher steady-state bloat. (I'm\n> assuming the partial index is on the particularly hot rows, which I would\n> expect is how partial indexes would generally be used)\n\nThat's an interesting thought. Thanks for noticing that.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 6 Jan 2013 18:22:33 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On 5 January 2013 22:18, Tom Lane <[email protected]> wrote:\n>> No. The argument is that if we don't have some such correction, the\n>> planner is liable to believe that different-sized indexes have *exactly\n>> the same cost*, if a given query would fetch the same number of index\n>> entries.\n\n> The only difference between a large and a small index is the initial\n> fetch, since the depth of the index may vary. After that the size of\n> the index is irrelevant to the cost of the scan, since we're just\n> scanning across the leaf blocks. (Other differences may exist but not\n> related to size).\n\nRight: except for the \"fudge factor\" under discussion, all the indexscan\ncosts that we model come from accessing index leaf pages and leaf\ntuples. So to the extent that the fudge factor has any principled basis\nat all, it's an estimate of index descent costs. And in that role I\nbelieve that total index size needs to be taken into account.\n\n> Perhaps the cost of the initial fetch is what you mean by a\n> \"correction\"? In that case, why not use the index depth directly from\n> the metapage, rather than play with size?\n\nIIRC, one of my very first attempts to deal with this was to charge\nrandom_page_cost per level of index descended. This was such a horrid\noverestimate that it never went anywhere. I think that reflects that in\npractical applications, the upper levels of the index tend to stay in\ncache. 
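To put a number on how horrid that is: with the default random_page_cost of
4.0, even a two- or three-level descent picks up 8 to 12 cost units of pure
I/O charge, which is on the order of the entire estimate (roughly 8.3 with
default parameters) for a simple single-row index probe. A trivial sketch,
just for scale:

    #include <stdio.h>

    /* What charging random_page_cost for every descended level would add. */
    int main(void)
    {
        const double random_page_cost = 4.0;
        int         levels;

        for (levels = 1; levels <= 5; levels++)
            printf("levels descended = %d -> added I/O charge = %.1f\n",
                   levels, levels * random_page_cost);
        return 0;
    }

If the upper pages are in cache, as they generally are, essentially none of
that charge is real.
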
We could ignore I/O on that assumption and still try to model\nCPU costs of the descent, which is basically what Jeff is proposing.\nMy objection to his formula is mainly that it ignores physical index\nsize, which I think is important to include somehow for the reasons\nI explained in my other message.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 06 Jan 2013 13:58:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On 6 January 2013 18:58, Tom Lane <[email protected]> wrote:\n> Simon Riggs <[email protected]> writes:\n>> On 5 January 2013 22:18, Tom Lane <[email protected]> wrote:\n>>> No. The argument is that if we don't have some such correction, the\n>>> planner is liable to believe that different-sized indexes have *exactly\n>>> the same cost*, if a given query would fetch the same number of index\n>>> entries.\n>\n>> The only difference between a large and a small index is the initial\n>> fetch, since the depth of the index may vary. After that the size of\n>> the index is irrelevant to the cost of the scan, since we're just\n>> scanning across the leaf blocks. (Other differences may exist but not\n>> related to size).\n>\n> Right: except for the \"fudge factor\" under discussion, all the indexscan\n> costs that we model come from accessing index leaf pages and leaf\n> tuples. So to the extent that the fudge factor has any principled basis\n> at all, it's an estimate of index descent costs. And in that role I\n> believe that total index size needs to be taken into account.\n>\n>> Perhaps the cost of the initial fetch is what you mean by a\n>> \"correction\"? In that case, why not use the index depth directly from\n>> the metapage, rather than play with size?\n>\n> IIRC, one of my very first attempts to deal with this was to charge\n> random_page_cost per level of index descended. This was such a horrid\n> overestimate that it never went anywhere. I think that reflects that in\n> practical applications, the upper levels of the index tend to stay in\n> cache. We could ignore I/O on that assumption and still try to model\n> CPU costs of the descent, which is basically what Jeff is proposing.\n> My objection to his formula is mainly that it ignores physical index\n> size, which I think is important to include somehow for the reasons\n> I explained in my other message.\n\nHaving a well principled approach will help bring us towards a\nrealistic estimate.\n\nI can well believe what you say about random_page_cost * index_depth\nbeing an over-estimate.\n\nMaking a fudge factor be random_page_cost * ln(1 + index_pages/100000)\n just seems to presume an effective cache of 8GB and a fixed\ndepth:size ratio, which it might not be. 
On a busy system, or with a\nvery wide index that could also be wrong.\n\nI'd be more inclined to explicitly discount the first few levels by\nusing random_page_cost * (max(index_depth - 3, 0))\nor even better use a formula that includes the effective cache size\nand index width to work out the likely number of tree levels cached\nfor an index.\n\nWhatever we do we must document that we are estimating the cache\neffects on the cost of index descent, so we can pick that up on a\nfuture study on cacheing effects.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 6 Jan 2013 19:47:48 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On 6 January 2013 18:58, Tom Lane <[email protected]> wrote:\n>> IIRC, one of my very first attempts to deal with this was to charge\n>> random_page_cost per level of index descended. This was such a horrid\n>> overestimate that it never went anywhere. I think that reflects that in\n>> practical applications, the upper levels of the index tend to stay in\n>> cache. We could ignore I/O on that assumption and still try to model\n>> CPU costs of the descent, which is basically what Jeff is proposing.\n>> My objection to his formula is mainly that it ignores physical index\n>> size, which I think is important to include somehow for the reasons\n>> I explained in my other message.\n\n> Having a well principled approach will help bring us towards a\n> realistic estimate.\n\nI thought about this some more and came up with what might be a\nreasonably principled compromise. Assume that we know there are N\nleaf entries in the index (from VACUUM stats) and that we know the\nroot page height is H (from looking at the btree metapage). (Note:\nH starts at zero for a single-page index.) If we assume that the\nnumber of tuples per page, P, is more or less uniform across leaf\nand upper pages (ie P is the fanout for upper pages), then we have\n\tN/P = number of leaf pages\n\tN/P/P = number of level 1 pages\n\tN/P^3 = number of level 2 pages\n\tN/P^(h+1) = number of level h pages\nSolving for the minimum P that makes N/P^(H+1) <= 1, we get\n\tP = ceil(exp(ln(N)/(H+1)))\nas an estimate of P given the known N and H values.\n\nNow, if we consider only CPU costs of index descent, we expect\nabout log2(P) comparisons to be needed on each of the H upper pages\nto be descended through, that is we have total descent cost\n\tcpu_index_tuple_cost * H * log2(P)\n\nIf we ignore the ceil() step as being a second-order correction, this\ncan be simplified to\n\n\tcpu_index_tuple_cost * H * log2(N)/(H+1)\n\nI propose this, rather than Jeff's formula of cpu_index_tuple_cost *\nlog2(N), as our fudge factor. The reason I like this better is that\nthe additional factor of H/(H+1) provides the correction I want for\nbloated indexes: if an index is bloated, the way that reflects into\nthe cost of any particular search is that the number of pages to be\ndescended through is larger than otherwise. 
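A quick sketch of how the numbers come out (assuming the default
cpu_index_tuple_cost of 0.005; the N and H pairs here are made up, with the
taller variant of each pair standing in for a bloated copy of the same
index):

    #include <math.h>
    #include <stdio.h>

    /* Back out the implied fanout P from N leaf tuples and height H,
     * then compute the proposed descent charge
     *     cpu_index_tuple_cost * H * log2(N)/(H+1). */
    int main(void)
    {
        const double cpu_index_tuple_cost = 0.005;
        const double N[] = {1e6, 1e6, 1e9, 1e9};
        const int    H[] = {2, 3, 3, 4};
        int         i;

        printf("%12s %3s %10s %16s\n",
               "leaf tuples", "H", "fanout P", "descent charge");
        for (i = 0; i < 4; i++)
        {
            double      P = ceil(exp(log(N[i]) / (H[i] + 1)));
            double      charge = cpu_index_tuple_cost * H[i] *
                                 log2(N[i]) / (H[i] + 1);

            printf("%12.0f %3d %10.0f %16.4f\n", N[i], H[i], P, charge);
        }
        return 0;
    }

The taller (read: bloated) variants do come out more expensive, but only by
less than a hundredth of a cost unit.
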
The correction is fairly\nsmall, particularly for large indexes, but that seems to be what's\nexpected given the rest of our discussion.\n\nWe could further extend this by adding some I/O charge when the index is\nsufficiently large, as per Simon's comments, but frankly I think that's\nunnecessary. Unless the fan-out factor is really awful, practical-sized\nindexes probably have all their upper pages in memory. What's more, per\nmy earlier comment, when you start to think about tables so huge that\nthat's not true it really doesn't matter if we charge another\nrandom_page_cost or two for an indexscan --- it'll still be peanuts\ncompared to the seqscan alternative.\n\nTo illustrate the behavior of this function, I've replotted my previous\ngraph, still taking the assumed fanout to be 256 tuples/page. I limited\nthe range of the functions to 0.0001 to 100 to keep the log-scale graph\nreadable, but actually the H/(H+1) formulation would charge zero for\nindexes of less than 256 tuples. I think it's significant (and a good\nthing) that this curve is nowhere significantly more than the historical\npre-9.2 fudge factor.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n\nset terminal png small color\nset output 'new_fudge.png'\nset logscale x\nset logscale y\nh(x) = (x <= 256) ? 0.0001/0.005 : (x <= 256*256) ? (1./2)*log(x)/log(2) : (x <= 256^3) ? (2./3)*log(x)/log(2) : (x <= 256^4) ? (3./4)*log(x)/log(2) : (x <= 256^5) ? (4./5)*log(x)/log(2) : (5./6)*log(x)/log(2)\nhistorical(x) = (4 * x/100000) < 100 ? 4 * x/100000 : 1/0\nninepoint2(x) = (4 * x/10000) < 100 ? 4 * x/10000 : 1/0\nhead(x) = 4*log(1 + x/10000)\nplot [10:1e9] h(x)*0.005, 0.005 * log(x)/log(2), head(x), historical(x), ninepoint2(x)\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Sun, 06 Jan 2013 18:03:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "I wrote:\n> [ slightly bogus graph ]\n\nOoops, it seems the ^ operator doesn't do what I thought in gnuplot.\nHere's a corrected version.\n\n\t\t\tregards, tom lane\n\n\n\nset terminal png small color\nset output 'new_fudge.png'\nset xlabel \"Index tuples\"\nset ylabel \"Added cost\"\nset logscale x\nset logscale y\nh(x) = (x <= 256) ? 0.0001/0.005 : (x <= 256*256) ? (1./2)*log(x)/log(2) : (x <= 256*256*256) ? (2./3)*log(x)/log(2) : (x <= 256.0*256*256*256) ? (3./4)*log(x)/log(2) : (x <= 256.0*256*256*256*256) ? (4./5)*log(x)/log(2) : (5./6)*log(x)/log(2)\nhistorical(x) = (4 * x/100000) < 100 ? 4 * x/100000 : 1/0\nninepoint2(x) = (4 * x/10000) < 100 ? 4 * x/10000 : 1/0\nhead(x) = 4*log(1 + x/10000)\nplot [10:1e9] h(x)*0.005, 0.005 * log(x)/log(2), head(x), historical(x), ninepoint2(x)\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Sun, 06 Jan 2013 18:17:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On 6 January 2013 23:03, Tom Lane <[email protected]> wrote:\n> Simon Riggs <[email protected]> writes:\n>> On 6 January 2013 18:58, Tom Lane <[email protected]> wrote:\n>>> IIRC, one of my very first attempts to deal with this was to charge\n>>> random_page_cost per level of index descended. 
This was such a horrid\n>>> overestimate that it never went anywhere. I think that reflects that in\n>>> practical applications, the upper levels of the index tend to stay in\n>>> cache. We could ignore I/O on that assumption and still try to model\n>>> CPU costs of the descent, which is basically what Jeff is proposing.\n>>> My objection to his formula is mainly that it ignores physical index\n>>> size, which I think is important to include somehow for the reasons\n>>> I explained in my other message.\n>\n>> Having a well principled approach will help bring us towards a\n>> realistic estimate.\n>\n> I thought about this some more and came up with what might be a\n> reasonably principled compromise. Assume that we know there are N\n> leaf entries in the index (from VACUUM stats) and that we know the\n> root page height is H (from looking at the btree metapage). (Note:\n> H starts at zero for a single-page index.) If we assume that the\n> number of tuples per page, P, is more or less uniform across leaf\n> and upper pages (ie P is the fanout for upper pages), then we have\n> N/P = number of leaf pages\n> N/P/P = number of level 1 pages\n> N/P^3 = number of level 2 pages\n> N/P^(h+1) = number of level h pages\n> Solving for the minimum P that makes N/P^(H+1) <= 1, we get\n> P = ceil(exp(ln(N)/(H+1)))\n> as an estimate of P given the known N and H values.\n>\n> Now, if we consider only CPU costs of index descent, we expect\n> about log2(P) comparisons to be needed on each of the H upper pages\n> to be descended through, that is we have total descent cost\n> cpu_index_tuple_cost * H * log2(P)\n>\n> If we ignore the ceil() step as being a second-order correction, this\n> can be simplified to\n>\n> cpu_index_tuple_cost * H * log2(N)/(H+1)\n>\n> I propose this, rather than Jeff's formula of cpu_index_tuple_cost *\n> log2(N), as our fudge factor. The reason I like this better is that\n> the additional factor of H/(H+1) provides the correction I want for\n> bloated indexes: if an index is bloated, the way that reflects into\n> the cost of any particular search is that the number of pages to be\n> descended through is larger than otherwise. The correction is fairly\n> small, particularly for large indexes, but that seems to be what's\n> expected given the rest of our discussion.\n\nSeems good to have something with both N and H in it. This cost model\nfavours smaller indexes over larger ones, whether that be because\nthey're partial and so have smaller N, or whether the key values are\nthinner and so have lower H.\n\n> We could further extend this by adding some I/O charge when the index is\n> sufficiently large, as per Simon's comments, but frankly I think that's\n> unnecessary. Unless the fan-out factor is really awful, practical-sized\n> indexes probably have all their upper pages in memory. What's more, per\n> my earlier comment, when you start to think about tables so huge that\n> that's not true it really doesn't matter if we charge another\n> random_page_cost or two for an indexscan --- it'll still be peanuts\n> compared to the seqscan alternative.\n\nConsidering that we're trying to decide between various indexes on one\ntable, we don't have enough information to say which index the cache\nfavours and the other aspects of cacheing are the same for all indexes\nof any given size. So we can assume those effects cancel out for\ncomparison purposes, even if they're non-zero. 
And as you say, they're\nnegligible in comparison with bitmapindexscans etc..\n\nThe only time I'd question that would be in the case of a nested loops\njoin but that's not important here.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 7 Jan 2013 00:03:04 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "I wrote:\n> Now, if we consider only CPU costs of index descent, we expect\n> about log2(P) comparisons to be needed on each of the H upper pages\n> to be descended through, that is we have total descent cost\n> \tcpu_index_tuple_cost * H * log2(P)\n> If we ignore the ceil() step as being a second-order correction, this\n> can be simplified to\n> \tcpu_index_tuple_cost * H * log2(N)/(H+1)\n\nI thought some more about this and concluded that the above reasoning is\nincorrect, because it ignores the fact that initial positioning on the\nindex leaf page requires another log2(P) comparisons (to locate the\nfirst matching tuple if any). If you include those comparisons then the\nH/(H+1) factor drops out and you are left with just \"cost * log2(N)\",\nindependently of the tree height.\n\nBut all is not lost for including some representation of the physical\nindex size into this calculation, because it seems plausible to consider\nthat there is some per-page cost for descending through the upper pages.\nIt's not nearly as much as random_page_cost, if the pages are cached,\nbut we don't have to suppose it's zero. So that reasoning leads to a\nformula like\n\tcost-per-tuple * log2(N) + cost-per-page * (H+1)\nwhich is better than the above proposal anyway because we can now\ntwiddle the two cost factors separately rather than being tied to a\nfixed idea of how much a larger H hurts.\n\nAs for the specific costs to use, I'm now thinking that the\ncost-per-tuple should be just cpu_operator_cost (0.0025) not\ncpu_index_tuple_cost (0.005). The latter is meant to model costs such\nas reporting a TID back out of the index AM to the executor, which is\nnot what we're doing at an upper index entry. I also propose setting\nthe per-page cost to some multiple of cpu_operator_cost, since it's\nmeant to represent a CPU cost not an I/O cost.\n\nThere is already a charge of 100 times cpu_operator_cost in\ngenericcostestimate to model \"general costs of starting an indexscan\".\nI suggest that we should consider half of that to be actual fixed\noverhead and half of it to be per-page cost for the first page, then\nadd another 50 times cpu_operator_cost for each page descended through.\nThat gives a formula of\n\n\tcpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)\n\nThis would lead to the behavior depicted in the attached plot, wherein\nI've modified the comparison lines (historical, 9.2, and HEAD behaviors)\nto include the existing 100 * cpu_operator_cost startup cost charge in\naddition to the fudge factor we've been discussing so far. The new\nproposed curve is a bit above the historical curve for indexes with\n250-5000 tuples, but the value is still quite small there, so I'm not\ntoo worried about that. 
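Spelled out for a few index sizes (a standalone sketch, not the patch; again
assuming 256 tuples per page and the default cpu_operator_cost of 0.0025):

    #include <math.h>
    #include <stdio.h>

    /* Evaluate cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)
     * for btrees with an assumed uniform fanout of 256. */
    static int tree_height(double ntuples, double fanout)
    {
        double      pages = ceil(ntuples / fanout);
        int         h = 0;

        while (pages > 1)
        {
            pages = ceil(pages / fanout);
            h++;
        }
        return h;               /* 0 for a single-page index */
    }

    int main(void)
    {
        const double cpu_operator_cost = 0.0025;
        const double sizes[] = {1e4, 1e6, 1e8, 1e9};
        int         i;

        printf("%12s %3s %16s\n", "index tuples", "H", "proposed charge");
        for (i = 0; i < 4; i++)
        {
            double      N = sizes[i];
            int         H = tree_height(N, 256.0);
            double      charge = cpu_operator_cost * log2(N) +
                                 cpu_operator_cost * 50.0 * (H + 2);

            printf("%12.0f %3d %16.3f\n", N, H, charge);
        }
        return 0;
    }

That stays well under one cost unit even out at a billion tuples, instead of
growing linearly the way the old page-count fudge factors do, while at the
low end of a few hundred to a few thousand tuples it is only a shade above
the historical charge.
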
The people who've been complaining about 9.2's\nbehavior have indexes much larger than that.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n\nset terminal png small color\nset output 'new_costs.png'\nset xlabel \"Index tuples\"\nset ylabel \"Added cost\"\nset logscale x\nset logscale y\nh(x) = (x <= 256.0) ? 0 : (x <= 256.0*256) ? 1 : (x <= 256.0*256*256) ? 2 : (x <= 256.0*256*256*256) ? 3 : (x <= 256.0*256*256*256*256) ? 4 : 5\nhead(x) = 4*log(1 + x/10000) + 0.25\nhistorical(x) = 4 * x/100000 + 0.25\nninepoint2(x) = 4 * x/10000 + 0.25\nplot [10:1e9][0.1:10] 0.0025*log(x)/log(2) + 0.125*(h(x)+2), head(x), historical(x), ninepoint2(x)\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Mon, 07 Jan 2013 12:35:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On 7 January 2013 17:35, Tom Lane <[email protected]> wrote:\n\n> That gives a formula of\n>\n> cpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)\n>\n> This would lead to the behavior depicted in the attached plot, wherein\n> I've modified the comparison lines (historical, 9.2, and HEAD behaviors)\n> to include the existing 100 * cpu_operator_cost startup cost charge in\n> addition to the fudge factor we've been discussing so far. The new\n> proposed curve is a bit above the historical curve for indexes with\n> 250-5000 tuples, but the value is still quite small there, so I'm not\n> too worried about that. The people who've been complaining about 9.2's\n> behavior have indexes much larger than that.\n>\n> Thoughts?\n\nAgain, this depends on N and H, so thats good.\n\nI think my retinas detached while reading your explanation, but I'm a\nlong way from coming up with a better or more principled one.\n\nIf we can describe this as a heuristic that appears to fit the\nobserved costs, we may keep the door open for something better a\nlittle later.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 7 Jan 2013 18:03:37 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On 7 January 2013 17:35, Tom Lane <[email protected]> wrote:\n>> That gives a formula of\n>>\tcpu_operator_cost * log2(N) + cpu_operator_cost * 50 * (H+2)\n\n> Again, this depends on N and H, so thats good.\n\n> I think my retinas detached while reading your explanation, but I'm a\n> long way from coming up with a better or more principled one.\n\n> If we can describe this as a heuristic that appears to fit the\n> observed costs, we may keep the door open for something better a\n> little later.\n\nI'm fairly happy with the general shape of this formula: it has a\nprincipled explanation and the resulting numbers appear to be sane.\nThe specific cost multipliers obviously are open to improvement based\non future evidence. 
(In particular, I intend to code it in a way that\ndoesn't tie the \"startup overhead\" and \"cost per page\" numbers to be\nequal, even though I'm setting them equal for the moment for lack of a\nbetter idea.)\n\nOne issue that needs some thought is that the argument for this formula\nis based entirely on thinking about b-trees. I think it's probably\nreasonable to apply it to gist, gin, and sp-gist as well, assuming we\ncan get some estimate of tree height for those, but it's obviously\nhogwash for hash indexes. We could possibly just take H=0 for hash,\nand still apply the log2(N) part ... not so much because that is right\nas because it's likely too small to matter.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 07 Jan 2013 13:27:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On Mon, Jan 7, 2013 at 3:27 PM, Tom Lane <[email protected]> wrote:\n>\n> One issue that needs some thought is that the argument for this formula\n> is based entirely on thinking about b-trees. I think it's probably\n> reasonable to apply it to gist, gin, and sp-gist as well, assuming we\n> can get some estimate of tree height for those, but it's obviously\n> hogwash for hash indexes. We could possibly just take H=0 for hash,\n> and still apply the log2(N) part ... not so much because that is right\n> as because it's likely too small to matter.\n\nHeight would be more precisely \"lookup cost\" (in comparisons). Most\nindexing structures have a well-studied lookup cost. For b-trees, it's\nlog_b(size), for hash it's 1 + size/buckets.\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 7 Jan 2013 15:48:12 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "I wrote:\n> I'm fairly happy with the general shape of this formula: it has a\n> principled explanation and the resulting numbers appear to be sane.\n> The specific cost multipliers obviously are open to improvement based\n> on future evidence. (In particular, I intend to code it in a way that\n> doesn't tie the \"startup overhead\" and \"cost per page\" numbers to be\n> equal, even though I'm setting them equal for the moment for lack of a\n> better idea.)\n\nI realized that there was a rather serious error in the graphs I showed\nbefore: they were computing the old cost models as #tuples/10000 or\n#tuples/100000, but really it's #pages. So naturally that moves those\ncurves down quite a lot. After some playing around I concluded that the\nbest way to avoid any major increases in the attributed cost is to drop\nthe constant \"costs of indexscan setup\" charge that I proposed before.\n(That was a little weird anyway since we don't model any similar cost\nfor any other sort of executor setup.) The attached graph shows the\ncorrected old cost curves and the proposed new one.\n\n> One issue that needs some thought is that the argument for this formula\n> is based entirely on thinking about b-trees. 
I think it's probably\n> reasonable to apply it to gist, gin, and sp-gist as well, assuming we\n> can get some estimate of tree height for those, but it's obviously\n> hogwash for hash indexes. We could possibly just take H=0 for hash,\n> and still apply the log2(N) part ... not so much because that is right\n> as because it's likely too small to matter.\n\nIn the attached patch, I use the proposed formula for btree, gist, and\nspgist indexes. For btree we read out the actual tree height from the\nmetapage and use that. For gist and spgist there's not a uniquely\ndeterminable tree height, but I propose taking log100(#pages) as a\nfirst-order estimate. For hash, I think we actually don't need any\ncorrections, for the reasons set out in the comment added to\nhashcostestimate. I left the estimate for GIN alone; I've not studied\nit enough to know whether it ought to be fooled with, but in any case it\nbehaves very little like btree.\n\nA big chunk of the patch diff comes from redesigning the API of\ngenericcostestimate so that it can cheaply pass back some additional\nvalues, so we don't have to recompute those values at the callers.\nOther than that and the new code to let btree report out its tree\nheight, this isn't a large patch. It basically gets rid of the two\nad-hoc calculations in genericcostestimate() and inserts substitute\ncalculations in the per-index-type functions.\n\nI've verified that this patch results in no changes in the regression\ntests. It's worth noting though that there is now a small nonzero\nstartup-cost charge for indexscans, for example:\n\nregression=# explain select * from tenk1 where unique1 = 42;\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nwhere in 9.2 the cost estimate was 0.00..8.28. I personally think this\nis a good idea, but we'll have to keep our eyes open to see if it\nchanges any plans in ways we don't like.\n\nThis is of course much too large a change to consider back-patching.\nWhat I now recommend we do about 9.2 is just revert it to the historical\nfudge factor (#pages/100000).\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n\nset terminal png small color\nset output 'newer_costs.png'\nset xlabel \"Index tuples\"\nset ylabel \"Added cost\"\nset logscale x\nset logscale y\nfo = 256.0\nh(x) = (x <= fo) ? 0 : (x <= fo*fo) ? 1 : (x <= fo*fo*fo) ? 2 : (x <= fo*fo*fo*fo) ? 3 : (x <= fo*fo*fo*fo*fo) ? 4 : 5\nhead(x) = 4*log(1 + (x/fo)/10000) + 0.25\nhistorical(x) = 4 * (x/fo)/100000 + 0.25\nninepoint2(x) = 4 * (x/fo)/10000 + 0.25\nplot [10:1e9][0.1:10] 0.0025*(ceil(log(x)/log(2))) + 0.125*(h(x)+1), head(x), historical(x), ninepoint2(x)\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Thu, 10 Jan 2013 20:07:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On Thu, Jan 10, 2013 at 8:07 PM, Tom Lane <[email protected]> wrote:\n> Comments?\n\nI'm not sure I have anything intelligent to add to this conversation -\ndoes that make me the wisest of all the Greeks? - but I do think it\nworth mentioning that I have heard occasional reports within EDB of\nthe query planner refusing to use extremely large indexes no matter\nhow large a hammer was applied. 
I have never been able to obtain\nenough details to understand the parameters of the problem, let alone\nreproduce it, but I thought it might be worth mentioning anyway in\ncase it's both real and related to the case at hand. Basically I\nguess that boils down to: it would be good to consider whether the\ncosting model is correct for an index of, say, 1TB.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 14 Jan 2013 11:45:01 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I'm not sure I have anything intelligent to add to this conversation -\n> does that make me the wisest of all the Greeks? - but I do think it\n> worth mentioning that I have heard occasional reports within EDB of\n> the query planner refusing to use extremely large indexes no matter\n> how large a hammer was applied. I have never been able to obtain\n> enough details to understand the parameters of the problem, let alone\n> reproduce it, but I thought it might be worth mentioning anyway in\n> case it's both real and related to the case at hand. Basically I\n> guess that boils down to: it would be good to consider whether the\n> costing model is correct for an index of, say, 1TB.\n\nWell, see the cost curves at\nhttp://www.postgresql.org/message-id/[email protected]\n\nThe old code definitely had an unreasonably large charge for indexes\nexceeding 1e8 or so tuples. This wouldn't matter that much for simple\nsingle-table lookup queries, but I could easily see it putting the\nkibosh on uses of an index on the inside of a nestloop.\n\nIt's possible that the new code goes too far in the other direction:\nwe're now effectively assuming that all inner btree pages stay in cache\nno matter how large the index is. At some point it'd likely be\nappropriate to start throwing in some random_page_cost charges for inner\npages beyond the third/fourth/fifth(?) level, as Simon speculated about\nupthread. But I thought we could let that go until we start seeing\ncomplaints traceable to it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 14 Jan 2013 12:23:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On Mon, Jan 14, 2013 at 12:23 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I'm not sure I have anything intelligent to add to this conversation -\n>> does that make me the wisest of all the Greeks? - but I do think it\n>> worth mentioning that I have heard occasional reports within EDB of\n>> the query planner refusing to use extremely large indexes no matter\n>> how large a hammer was applied. I have never been able to obtain\n>> enough details to understand the parameters of the problem, let alone\n>> reproduce it, but I thought it might be worth mentioning anyway in\n>> case it's both real and related to the case at hand. 
Basically I\n>> guess that boils down to: it would be good to consider whether the\n>> costing model is correct for an index of, say, 1TB.\n>\n> Well, see the cost curves at\n> http://www.postgresql.org/message-id/[email protected]\n>\n> The old code definitely had an unreasonably large charge for indexes\n> exceeding 1e8 or so tuples. This wouldn't matter that much for simple\n> single-table lookup queries, but I could easily see it putting the\n> kibosh on uses of an index on the inside of a nestloop.\n\nThe reported behavior was that the planner would prefer to\nsequential-scan the table rather than use the index, even if\nenable_seqscan=off. I'm not sure what the query looked like, but it\ncould have been something best implemented as a nested loop w/inner\nindex-scan.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 14 Jan 2013 12:50:24 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Jan 14, 2013 at 12:23 PM, Tom Lane <[email protected]> wrote:\n>> The old code definitely had an unreasonably large charge for indexes\n>> exceeding 1e8 or so tuples. This wouldn't matter that much for simple\n>> single-table lookup queries, but I could easily see it putting the\n>> kibosh on uses of an index on the inside of a nestloop.\n\n> The reported behavior was that the planner would prefer to\n> sequential-scan the table rather than use the index, even if\n> enable_seqscan=off. I'm not sure what the query looked like, but it\n> could have been something best implemented as a nested loop w/inner\n> index-scan.\n\nRemember also that \"enable_seqscan=off\" merely adds 1e10 to the\nestimated cost of seqscans. For sufficiently large tables this is not\nexactly a hard disable, just a thumb on the scales. But I don't know\nwhat your definition of \"extremely large indexes\" is.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 14 Jan 2013 12:56:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:\n> > The reported behavior was that the planner would prefer to\n> > sequential-scan the table rather than use the index, even if\n> > enable_seqscan=off. I'm not sure what the query looked like, but it\n> > could have been something best implemented as a nested loop w/inner\n> > index-scan.\n> \n> Remember also that \"enable_seqscan=off\" merely adds 1e10 to the\n> estimated cost of seqscans. For sufficiently large tables this is not\n> exactly a hard disable, just a thumb on the scales. But I don't know\n> what your definition of \"extremely large indexes\" is.\n\nWow, do we need to bump up that value based on larger modern hardware?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. 
+\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 15 Jan 2013 14:46:39 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:\n>> Remember also that \"enable_seqscan=off\" merely adds 1e10 to the\n>> estimated cost of seqscans. For sufficiently large tables this is not\n>> exactly a hard disable, just a thumb on the scales. But I don't know\n>> what your definition of \"extremely large indexes\" is.\n\n> Wow, do we need to bump up that value based on larger modern hardware?\n\nI'm disinclined to bump it up very much. If it's more than about 1e16,\nordinary cost contributions would disappear into float8 roundoff error,\ncausing the planner to be making choices that are utterly random except\nfor minimizing the number of seqscans. Even at 1e14 or so you'd be\nlosing a lot of finer-grain distinctions. What we want is for the\nbehavior to be \"minimize the number of seqscans but plan normally\notherwise\", so those other cost contributions are still important.\n\nAnyway, at this point we're merely speculating about what's behind\nRobert's report --- I'd want to see some concrete real-world examples\nbefore changing anything.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 15 Jan 2013 15:11:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow query: bitmap scan troubles" } ]
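The descent-cost heuristic discussed in this thread is easy to tabulate outside the planner for a rough sanity check. The query below is only an illustrative sketch, not the committed costsize.c code: it assumes the default cpu_operator_cost of 0.0025, the 50x per-level multiplier, and a btree fanout of roughly 256 tuples per page for the height estimate, mirroring the definitions used in the gnuplot scripts above.

    -- Illustrative sketch of the proposed descent-cost heuristic
    -- (assumptions: cpu_operator_cost = 0.0025, fanout ~ 256 tuples/page).
    SELECT n AS index_tuples,
           ceil(log(2, n))                    AS comparisons,
           greatest(ceil(log(256, n)) - 1, 0) AS assumed_height,
           round(0.0025 * ceil(log(2, n))
                 + 0.0025 * 50 * (greatest(ceil(log(256, n)) - 1, 0) + 1),
                 4)                           AS descent_cost
    FROM unnest(ARRAY[1e4, 1e6, 1e8, 1e9]) AS t(n);

At a billion tuples this comes out to a fraction of a cost unit, versus a charge in the hundreds for the historical #pages/100000 fudge factor, which is the gap visible in the plotted curves.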
[ { "msg_contents": "[email protected] wrote:\n\n> Ah okay, thanks. I knew I could set various things but not\n> effective_work_mem (I tried reloading the edited config file but\n> it didn't seem to pick it up)\n\nCheck the server log, maybe there was a typo or capitalization\nerror.\n\nTo test on a single connection you should be able to just run:\n\nSET effective_cache_size = '88GB';\n\nBy the way, one other setting that I have found a need to adjust to\nget good plans is cpu_tuple_cost. In my experience, better plans\nare chosen when this is in the 0.03 to 0.05 range than with the\ndefault of 0.01.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 04 Dec 2012 14:09:20 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query: bitmap scan troubles" } ]
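A minimal sketch of the session-level experiment suggested here; the specific values are only examples, and pg_reload_conf() assumes superuser access.

    -- Try planner settings on the current session only; nothing persists.
    SET effective_cache_size = '88GB';
    SET cpu_tuple_cost = 0.03;
    SHOW effective_cache_size;   -- confirm the value actually took effect
    -- ... re-run EXPLAIN (ANALYZE, BUFFERS) on the problem query here ...
    RESET effective_cache_size;
    RESET cpu_tuple_cost;

    -- After editing postgresql.conf, reload without a full restart:
    SELECT pg_reload_conf();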
[ { "msg_contents": "Hi,\n\n\nI have a partitioned table (partitioned on date). There are about 1 million\ninsertions per day. There is a column called mess_id. This column gets updated,\nbut the update query is taking a huge amount of time. When I checked, the column is not\nunique, and most of the time it is null: out of the 1 million rows per day,\nonly about 20K have a non-null value in this column. Insertions are\nhappening very fast, but updates are very slow.\n\nHow can I optimize my update query? Is it a good idea to create an index on that\ncolumn? I am worried about my insertions getting slow.\n\nRgrds\nSuhas\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/CREATING-INDEX-on-column-having-null-values-tp5735127.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 00:10:10 -0800 (PST)", "msg_from": "\"suhas.basavaraj12\" <[email protected]>", "msg_from_op": true, "msg_subject": "CREATING INDEX on column having null values" },
{ "msg_contents": "Hello Suhas,\n \nYou need to supply good information for an accurate answer. Please have a look at this link:\n \nhttp://wiki.postgresql.org/wiki/Slow_Query_Questions\n \nKind regards,\nWillem \n \n\n> Date: Wed, 5 Dec 2012 00:10:10 -0800\n> From: [email protected]\n> To: [email protected]\n> Subject: [PERFORM] CREATING INDEX on column having null values\n> \n> Hi,\n> \n> \n> I have a partitioned table (partitioned on date). There are about 1 million\n> insertions per day. There is a column called mess_id. This column gets updated,\n> but the update query is taking a huge amount of time. When I checked, the column is not\n> unique, and most of the time it is null: out of the 1 million rows per day,\n> only about 20K have a non-null value in this column. Insertions are\n> happening very fast, but updates are very slow.\n> \n> How can I optimize my update query? Is it a good idea to create an index on that\n> column? I am worried about my insertions getting slow.\n> \n> Rgrds\n> Suhas\n> \n> \n> \n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/CREATING-INDEX-on-column-having-null-values-tp5735127.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 08:47:42 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATING INDEX on column having null values" } ]
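One option worth benchmarking for a mostly-NULL column like mess_id above is a partial index: only the non-NULL rows are indexed, so the bulk of the inserts pay almost nothing for index maintenance, while an update that filters on mess_id can use an index scan. The sketch below uses a hypothetical demo table, not the poster's actual schema.

    -- Hypothetical table standing in for one daily partition.
    CREATE TABLE mess_demo (
        id      bigserial PRIMARY KEY,
        created date NOT NULL,
        mess_id integer            -- NULL for the vast majority of rows
    );

    -- Index only the non-NULL rows (roughly 20K per day in the scenario
    -- described), leaving inserts of NULL mess_id values nearly unaffected.
    CREATE INDEX mess_demo_mess_id_idx
        ON mess_demo (mess_id)
        WHERE mess_id IS NOT NULL;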
[ { "msg_contents": "I have upgraded from PostgreSQL 9.1.5 to 9.2.1:\n\n \"PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit\"\n \"PostgreSQL 9.2.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit\"\n\nIt is on the same machine with default PostgreSQL configuration files (only\nport was changed).\n\nFor testing purpose I have simple table:\n\n CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\n\n\nWhich I test using function:\n\nCREATE OR REPLACE FUNCTION TEST_DB_SPEED(cnt integer) RETURNS text AS $$\nDECLARE\ntime_start timestamp;\ntime_stop timestamp;\ntime_total interval;\nBEGIN\ntime_start := cast(timeofday() AS TIMESTAMP);\nFOR i IN 1..cnt LOOP\nINSERT INTO test_table_md_speed(n) VALUES (i);\nEND LOOP;\ntime_stop := cast(timeofday() AS TIMESTAMP);\ntime_total := time_stop-time_start;\n\nRETURN extract (milliseconds from time_total);\nEND;\n$$ LANGUAGE plpgsql;\n\nAnd I call:\n\nSELECT test_db_speed(1000000);\n\nI see strange results. For PostgreSQL 9.1.5 I get \"8254.769\", and for 9.2.1\nI get: \"9022.219\". This means that new version is slower. I cannot find why.\n\nAny ideas why those results differ?\n\n-- \nPatryk Sidzina\n\nI have upgraded from PostgreSQL 9.1.5 to 9.2.1:    \"PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit\"    \"PostgreSQL 9.2.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit\"\nIt is on the same machine with default PostgreSQL configuration files (only port was changed).For testing purpose I have simple table:    CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\nWhich I test using function: CREATE OR REPLACE FUNCTION TEST_DB_SPEED(cnt integer) RETURNS text AS $$ DECLARE\n time_start timestamp; time_stop timestamp; time_total interval;\n BEGIN time_start := cast(timeofday() AS TIMESTAMP); FOR i IN 1..cnt LOOP\n INSERT INTO test_table_md_speed(n) VALUES (i); END LOOP; time_stop := cast(timeofday() AS TIMESTAMP);\n time_total := time_stop-time_start; RETURN extract (milliseconds from time_total); END;\n $$ LANGUAGE plpgsql;And I call: SELECT test_db_speed(1000000);\nI see strange results. For PostgreSQL 9.1.5 I get \"8254.769\", and for 9.2.1 I get: \"9022.219\". This means that new version is slower. I cannot find why.Any ideas why those results differ?\n-- Patryk Sidzina", "msg_date": "Wed, 5 Dec 2012 13:09:59 +0100", "msg_from": "Patryk Sidzina <[email protected]>", "msg_from_op": true, "msg_subject": "Why is PostgreSQL 9.2 slower than 9.1 in my tests?" }, { "msg_contents": "On Wed, Dec 5, 2012 at 4:09 AM, Patryk Sidzina <[email protected]> wrote:\n>\n> CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\n>\n> CREATE OR REPLACE FUNCTION TEST_DB_SPEED(cnt integer) RETURNS text AS $$\n> DECLARE\n> time_start timestamp;\n> time_stop timestamp;\n> time_total interval;\n> BEGIN\n> time_start := cast(timeofday() AS TIMESTAMP);\n> FOR i IN 1..cnt LOOP\n> INSERT INTO test_table_md_speed(n) VALUES (i);\n> END LOOP;\n> time_stop := cast(timeofday() AS TIMESTAMP);\n> time_total := time_stop-time_start;\n>\n> RETURN extract (milliseconds from time_total);\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> SELECT test_db_speed(1000000);\n>\n> I see strange results. For PostgreSQL 9.1.5 I get \"8254.769\", and for 9.2.1\n> I get: \"9022.219\". This means that new version is slower. 
I cannot find why.\n>\n> Any ideas why those results differ?\n\nDid you just run it once each?\n\nThe run-to-run variability in timing can be substantial.\n\nI put the above into a custom file for \"pgbench -f sidzina.sql -t 1 -p\n$port\" and run it on both versions in random order for several hundred\niterations. There was no detectable difference in timing.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 9 Dec 2012 19:53:45 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL 9.2 slower than 9.1 in my tests?" }, { "msg_contents": "On Mon, Dec 10, 2012 at 4:53 AM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Dec 5, 2012 at 4:09 AM, Patryk Sidzina <[email protected]>\n> wrote:\n> >\n> > CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\n> >\n> > CREATE OR REPLACE FUNCTION TEST_DB_SPEED(cnt integer) RETURNS text AS $$\n> > DECLARE\n> > time_start timestamp;\n> > time_stop timestamp;\n> > time_total interval;\n> > BEGIN\n> > time_start := cast(timeofday() AS TIMESTAMP);\n> > FOR i IN 1..cnt LOOP\n> > INSERT INTO test_table_md_speed(n) VALUES (i);\n> > END LOOP;\n> > time_stop := cast(timeofday() AS TIMESTAMP);\n> > time_total := time_stop-time_start;\n> >\n> > RETURN extract (milliseconds from time_total);\n> > END;\n> > $$ LANGUAGE plpgsql;\n> >\n> >\n> > SELECT test_db_speed(1000000);\n> >\n> > I see strange results. For PostgreSQL 9.1.5 I get \"8254.769\", and for\n> 9.2.1\n> > I get: \"9022.219\". This means that new version is slower. I cannot find\n> why.\n> >\n> > Any ideas why those results differ?\n>\n> Did you just run it once each?\n>\n> The run-to-run variability in timing can be substantial.\n>\n> I put the above into a custom file for \"pgbench -f sidzina.sql -t 1 -p\n> $port\" and run it on both versions in random order for several hundred\n> iterations. There was no detectable difference in timing.\n>\n>\nSorry for the mix up. The above results are from one of our test machines.\nI wanted to simplify the function as much as possible.\nUnfortunately, I didn't test this on a different machine. 
I did that after\nyour post and like you said, there isn't much difference in the results.\nThe differences come up when you change the \"INSERT\" to \"EXECUTE 'INSERT'\"\n( and i checked this time on 3 machines, one of which was Windows):\n\nCREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\n\nCREATE OR REPLACE FUNCTION test_db_speed(cnt integer)\n RETURNS text\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n time_start timestamp;\n time_stop timestamp;\n time_total interval;\nBEGIN\n time_start := cast(timeofday() AS TIMESTAMP);\n FOR i IN 1..cnt LOOP\n EXECUTE 'INSERT INTO test_table_md_speed(n) VALUES (' || i\n|| ')';\n END LOOP;\n\n time_stop := cast(timeofday() AS TIMESTAMP);\n time_total := time_stop-time_start;\n\n RETURN extract (milliseconds from time_total);\nEND;\n$function$;\n\nSELECT test_db_speed(100000);\n\nI run the above several times and get \"4029.356\" on PGSQL 9.1.6 and\n\"5015.073\" on PGSQL 9.2.1.\nAgain, sorry for not double checking my results.\n\n-- \nPatryk Sidzina\n\nOn Mon, Dec 10, 2012 at 4:53 AM, Jeff Janes <[email protected]> wrote:\nOn Wed, Dec 5, 2012 at 4:09 AM, Patryk Sidzina <[email protected]> wrote:\n>\n>  CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\n>\n> CREATE OR REPLACE FUNCTION TEST_DB_SPEED(cnt integer) RETURNS text AS $$\n> DECLARE\n> time_start timestamp;\n> time_stop timestamp;\n> time_total interval;\n> BEGIN\n> time_start := cast(timeofday() AS TIMESTAMP);\n> FOR i IN 1..cnt LOOP\n> INSERT INTO test_table_md_speed(n) VALUES (i);\n> END LOOP;\n> time_stop := cast(timeofday() AS TIMESTAMP);\n> time_total := time_stop-time_start;\n>\n> RETURN extract (milliseconds from time_total);\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> SELECT test_db_speed(1000000);\n>\n> I see strange results. For PostgreSQL 9.1.5 I get \"8254.769\", and for 9.2.1\n> I get: \"9022.219\". This means that new version is slower. I cannot find why.\n>\n> Any ideas why those results differ?\n\nDid you just run it once each?\n\nThe run-to-run variability in timing can be substantial.\n\nI put the above into a custom file for \"pgbench -f sidzina.sql -t 1 -p\n$port\" and run it on both versions in random order for several hundred\niterations.  There was no detectable difference in timing.\nSorry for the mix up. The above results are from one of our test machines. I wanted to simplify the function as much as possible.Unfortunately, I didn't test this on a different machine. 
I did that after your post and like you said, there isn't much difference in the results.\nThe differences come up when you change the \"INSERT\" to \"EXECUTE 'INSERT'\" ( and i checked this time on 3 machines, one of which was Windows):\nCREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\nCREATE OR REPLACE FUNCTION test_db_speed(cnt integer) RETURNS text LANGUAGE plpgsql\nAS $function$DECLARE        time_start timestamp;        time_stop timestamp;        time_total interval;\nBEGIN        time_start := cast(timeofday() AS TIMESTAMP);        FOR i IN 1..cnt LOOP                EXECUTE 'INSERT INTO test_table_md_speed(n) VALUES (' || i || ')';\n        END LOOP;        time_stop := cast(timeofday() AS TIMESTAMP);        time_total := time_stop-time_start;\n        RETURN extract (milliseconds from time_total);END;$function$;\nSELECT test_db_speed(100000);\nI run the above several times and get \"4029.356\" on PGSQL 9.1.6 and \"5015.073\" on PGSQL 9.2.1.Again, sorry for not double checking my results.\n-- Patryk Sidzina", "msg_date": "Tue, 11 Dec 2012 11:50:59 +0100", "msg_from": "Patryk Sidzina <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is PostgreSQL 9.2 slower than 9.1 in my tests?" }, { "msg_contents": "On Tue, Dec 11, 2012 at 2:50 AM, Patryk Sidzina\n<[email protected]> wrote:\n\n> The differences come up when you change the \"INSERT\" to \"EXECUTE 'INSERT'\" (\n> and i checked this time on 3 machines, one of which was Windows):\n>\n> CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);\n>\n> CREATE OR REPLACE FUNCTION test_db_speed(cnt integer)\n> RETURNS text\n> LANGUAGE plpgsql\n> AS $function$\n> DECLARE\n> time_start timestamp;\n> time_stop timestamp;\n> time_total interval;\n> BEGIN\n> time_start := cast(timeofday() AS TIMESTAMP);\n> FOR i IN 1..cnt LOOP\n> EXECUTE 'INSERT INTO test_table_md_speed(n) VALUES (' || i\n> || ')';\n> END LOOP;\n>\n> time_stop := cast(timeofday() AS TIMESTAMP);\n> time_total := time_stop-time_start;\n>\n> RETURN extract (milliseconds from time_total);\n> END;\n> $function$;\n>\n> SELECT test_db_speed(100000);\n\nThe culprit is the commit below. I don't know exactly why this slows\ndown your case. A preliminary oprofile analysis suggests that it most\nof the slowdown is that it calls AllocSetAlloc more often. I suspect\nthat this slow-down will be considered acceptable trade-off for\ngetting good parameterized plans.\n\n\ncommit e6faf910d75027bdce7cd0f2033db4e912592bcc\nAuthor: Tom Lane <[email protected]>\nDate: Fri Sep 16 00:42:53 2011 -0400\n\n Redesign the plancache mechanism for more flexibility and efficiency.\n\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 15:53:47 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL 9.2 slower than 9.1 in my tests?" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Tue, Dec 11, 2012 at 2:50 AM, Patryk Sidzina\n> <[email protected]> wrote:\n>> The differences come up when you change the \"INSERT\" to \"EXECUTE 'INSERT'\" (\n>> and i checked this time on 3 machines, one of which was Windows):\n\n>> FOR i IN 1..cnt LOOP\n>> EXECUTE 'INSERT INTO test_table_md_speed(n) VALUES (' || i || ')';\n>> END LOOP;\n\n> The culprit is the commit below. I don't know exactly why this slows\n> down your case. 
A preliminary oprofile analysis suggests that it most\n> of the slowdown is that it calls AllocSetAlloc more often. I suspect\n> that this slow-down will be considered acceptable trade-off for\n> getting good parameterized plans.\n\nI'm having a hard time getting excited about optimizing the above case:\nthe user can do far more to make it fast than we can, simply by not\nusing EXECUTE, which is utterly unnecessary in this example.\n\nHaving said that, though, it's not real clear to me why the plancache\nchanges would have affected the speed of EXECUTE at all --- the whole\npoint of that command is we don't cache a plan for the query.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 19:38:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL 9.2 slower than 9.1 in my tests?" }, { "msg_contents": "On Tuesday, December 11, 2012, Tom Lane wrote:\n\n> Jeff Janes <[email protected] <javascript:;>> writes:\n> > On Tue, Dec 11, 2012 at 2:50 AM, Patryk Sidzina\n> > <[email protected] <javascript:;>> wrote:\n> >> The differences come up when you change the \"INSERT\" to \"EXECUTE\n> 'INSERT'\" (\n> >> and i checked this time on 3 machines, one of which was Windows):\n>\n> >> FOR i IN 1..cnt LOOP\n> >> EXECUTE 'INSERT INTO test_table_md_speed(n) VALUES (' || i || ')';\n> >> END LOOP;\n>\n> > The culprit is the commit below. I don't know exactly why this slows\n> > down your case. A preliminary oprofile analysis suggests that it most\n> > of the slowdown is that it calls AllocSetAlloc more often. I suspect\n> > that this slow-down will be considered acceptable trade-off for\n> > getting good parameterized plans.\n>\n> I'm having a hard time getting excited about optimizing the above case:\n> the user can do far more to make it fast than we can, simply by not\n> using EXECUTE, which is utterly unnecessary in this example.\n>\n\nI assumed his example was an intentionally simplified test-case, not a real\nworld use-case.\n\nFor a more realistic use, see \"[PERFORM] Performance on Bulk Insert to\nPartitioned Table\". There too it would probably be best to get rid of the\nEXECUTE, but doing so in that case would certainly have a high cost in\ntrigger-code complexity and maintainability. (In my test case of loading\n1e7 narrow tuples to 100 partitions, the plan cache change lead to a 26%\nslow down)\n\n\n\n> Having said that, though, it's not real clear to me why the plancache\n> changes would have affected the speed of EXECUTE at all --- the whole\n> point of that command is we don't cache a plan for the query.\n>\n\n\nDoing a bottom level profile isn't helpful because all of the extra time is\nin very low level code that is called from everywhere. Doing call-counts\nwith gprof, I see that there is big increase in the calls to copyObject\n(which indirectly leads to a big increase in AllocSetAlloc). Before the\nchange, each EXECUTE had one top-level (i.e. 
nonrecursive) copyObject call,\ncoming from _SPI_prepare_plan.\n\nAfter the change, each EXECUTE has 4 such top-level copyObject calls, one\neach from CreateCachedPlan and CompleteCachedPlan and two\nfrom BuildCachedPlan.\n\nCheers,\n\nJeff\n", "msg_date": "Sun, 23 Dec 2012 14:55:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL 9.2 slower than 9.1 in my tests?" } ]
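As Tom Lane notes, the simplest fix for the test case itself is to drop EXECUTE entirely. When dynamic SQL is genuinely needed (for example when the target partition name varies, as in the bulk-insert trigger case Jeff mentions), passing the value with USING at least keeps the statement text constant instead of building a new string per row. The function below is a sketch against the thread's test table; whether it recovers the 9.1-to-9.2 difference would need to be measured.

    CREATE TEMP TABLE test_table_md_speed(id serial primary key, n integer);

    CREATE OR REPLACE FUNCTION test_db_speed_using(cnt integer) RETURNS void
    LANGUAGE plpgsql AS $$
    BEGIN
        FOR i IN 1..cnt LOOP
            -- constant statement text; only the parameter value changes
            EXECUTE 'INSERT INTO test_table_md_speed(n) VALUES ($1)' USING i;
        END LOOP;
    END;
    $$;

    SELECT test_db_speed_using(100000);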
[ { "msg_contents": "Hey guys,\n\nThis isn't a question, but a kind of summary over a ton of investigation\nI've been doing since a recent \"upgrade\". Anyone else out there with\n\"big iron\" might want to confirm this, but it seems pretty reproducible.\nThis seems to affect the latest 3.2 mainline and by extension, any\nplatform using it. My tests are restricted to Ubuntu 12.04, but it may\napply elsewhere.\n\nComparing the latest official 3.2 kernel to the latest official 3.4\nkernel (both Ubuntu), there are some rather striking differences. I'll\nstart with some pgbench tests.\n\n* This test is 800 read-only clients, with 2 controlling threads on a\n55GB database (scaling factor of 3600) for 3 minutes.\n * With 3.4:\n * Max TPS was 68933.\n * CPU was between 50 and 55% idle.\n * Load average was between 10 and 15.\n * With 3.2:\n * Max TPS was 17583. A total loss of 75% performance.\n * CPU was between 12 and 25% idle.\n * Load average was between 10 and 60---effectively random.\n * Next, we checked minimal write tests. This time, with only two\nclients. All other metrics are the same.\n * With 3.4:\n * Max TPS was 4548.\n * CPU was between 88 and 92% idle.\n * Load average was between 1.7 and 2.5.\n * With 3.2:\n * Max TPS was 4639.\n * CPU was between 88 and 92% idle.\n * Load average was between 3 and 4.\n\nOverall, performance was _much_ worse in 3.2 by almost every metric\nexcept for very low contention activity. More CPU for less transactions,\nand wildly inaccurate load reporting. The 3.2 kernel in its current\nstate should be considered detrimental and potentially malicious under\nhigh task contention.\n\nI'll admit not letting the tests run for more than 10 iterations, but I\ndidn't really need more than that. Even one iteration is enough to see\nthis in action. At least every Ubuntu 3.2 kernel since 3.2.0-31 exhibits\nthis, but I haven't tested further back. I've also examined both\nofficial Ubuntu 3.2 and Ubuntu mainline kernels as obtained from here:\n\nhttp://kernel.ubuntu.com/~kernel-ppa/mainline\n\nThe 3.2.34 mainline also has these problems. For reference, I tested the\n3.4.20 Quantal release on Precise because the Precise 3.4 kernel hasn't\nbeen maintained.\n\nAgain, anyone running 12.04 LTS, take a good hard look at your systems.\nHopefully you have a spare machine to test with. I'm frankly appalled\nthis thing is in an LTS release.\n\nI'll also note that all kernels exhibit some extent of client threads\nbloating load reports. In a pgbench for-loop (run, sleep 1, repeat), \nsometimes load will jump to some very high number between iterations, \nbut on a 3.4, it will settle down again. On a 3.2, it just jumps \nrandomly. I tested that with this script:\n\nnLoop=0\n\nwhile [ 1 -eq 1 ]; do\n\n if [ $[$nLoop % 20] -eq 0 ]; then\n echo -e \"Stat Time\\t\\tSleep\\tRun\\tLoad Avg\"\n fi\n\n stattime=$(date +\"%Y-%m-%d %H:%M:%S\")\n sleep=$(ps -emo stat | egrep -c 'D')\n run=$(ps -emo stat | egrep -c 'R')\n loadavg=$(cat /proc/loadavg | cut -d ' ' -f 1)\n\n echo -e \"${stattime}\\t${sleep}\\t${run}\\t${loadavg}\"\n sleep 1\n\n nLoop=$[$nLoop + 1]\n\ndone\n\nThe jumps look like this:\n\nStat Time\t\tSleep\tRun\tLoad Avg\n2012-12-05 12:23:13\t0\t16\t7.66\n2012-12-05 12:23:14\t0\t12\t7.66\n2012-12-05 12:23:15\t0\t7\t7.66\n2012-12-05 12:23:16\t0\t17\t7.66\n2012-12-05 12:23:17\t0\t1\t24.51\n2012-12-05 12:23:18\t0\t2\t24.51\n\nIt's much harder to trigger on 3.4, but still happens.\n\nIf anyone has tested against 3.6 or 3.7, I'd love to hear your input. 
\nInconsistent load reports are one thing... strangled performance and \ninflated CPU usage are quite another.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n100\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 12:28:23 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "Where as I can't say I yet tried out the 3.4 kernel, I can say that I am running 3.2 too, and maybe there is a connection to the past issues of strange CPU behavior I have had (as you know and have been so kind to try helping me solve). I will without a doubt try out 3.4 or 3.6 within the coming days, and report back on the topic.\n\n \nDen 05/12/2012 kl. 19.28 skrev Shaun Thomas <[email protected]>:\n\n> Hey guys,\n> \n> This isn't a question, but a kind of summary over a ton of investigation\n> I've been doing since a recent \"upgrade\". Anyone else out there with\n> \"big iron\" might want to confirm this, but it seems pretty reproducible.\n> This seems to affect the latest 3.2 mainline and by extension, any\n> platform using it. My tests are restricted to Ubuntu 12.04, but it may\n> apply elsewhere.\n> \n> Comparing the latest official 3.2 kernel to the latest official 3.4\n> kernel (both Ubuntu), there are some rather striking differences. I'll\n> start with some pgbench tests.\n> \n> * This test is 800 read-only clients, with 2 controlling threads on a\n> 55GB database (scaling factor of 3600) for 3 minutes.\n> * With 3.4:\n> * Max TPS was 68933.\n> * CPU was between 50 and 55% idle.\n> * Load average was between 10 and 15.\n> * With 3.2:\n> * Max TPS was 17583. A total loss of 75% performance.\n> * CPU was between 12 and 25% idle.\n> * Load average was between 10 and 60---effectively random.\n> * Next, we checked minimal write tests. This time, with only two\n> clients. All other metrics are the same.\n> * With 3.4:\n> * Max TPS was 4548.\n> * CPU was between 88 and 92% idle.\n> * Load average was between 1.7 and 2.5.\n> * With 3.2:\n> * Max TPS was 4639.\n> * CPU was between 88 and 92% idle.\n> * Load average was between 3 and 4.\n> \n> Overall, performance was _much_ worse in 3.2 by almost every metric\n> except for very low contention activity. More CPU for less transactions,\n> and wildly inaccurate load reporting. The 3.2 kernel in its current\n> state should be considered detrimental and potentially malicious under\n> high task contention.\n> \n> I'll admit not letting the tests run for more than 10 iterations, but I\n> didn't really need more than that. Even one iteration is enough to see\n> this in action. At least every Ubuntu 3.2 kernel since 3.2.0-31 exhibits\n> this, but I haven't tested further back. I've also examined both\n> official Ubuntu 3.2 and Ubuntu mainline kernels as obtained from here:\n> \n> http://kernel.ubuntu.com/~kernel-ppa/mainline\n> \n> The 3.2.34 mainline also has these problems. 
For reference, I tested the\n> 3.4.20 Quantal release on Precise because the Precise 3.4 kernel hasn't\n> been maintained.\n> \n> Again, anyone running 12.04 LTS, take a good hard look at your systems.\n> Hopefully you have a spare machine to test with. I'm frankly appalled\n> this thing is in an LTS release.\n> \n> I'll also note that all kernels exhibit some extent of client threads\n> bloating load reports. In a pgbench for-loop (run, sleep 1, repeat), sometimes load will jump to some very high number between iterations, but on a 3.4, it will settle down again. On a 3.2, it just jumps randomly. I tested that with this script:\n> \n> nLoop=0\n> \n> while [ 1 -eq 1 ]; do\n> \n> if [ $[$nLoop % 20] -eq 0 ]; then\n> echo -e \"Stat Time\\t\\tSleep\\tRun\\tLoad Avg\"\n> fi\n> \n> stattime=$(date +\"%Y-%m-%d %H:%M:%S\")\n> sleep=$(ps -emo stat | egrep -c 'D')\n> run=$(ps -emo stat | egrep -c 'R')\n> loadavg=$(cat /proc/loadavg | cut -d ' ' -f 1)\n> \n> echo -e \"${stattime}\\t${sleep}\\t${run}\\t${loadavg}\"\n> sleep 1\n> \n> nLoop=$[$nLoop + 1]\n> \n> done\n> \n> The jumps look like this:\n> \n> Stat Time\t\tSleep\tRun\tLoad Avg\n> 2012-12-05 12:23:13\t0\t16\t7.66\n> 2012-12-05 12:23:14\t0\t12\t7.66\n> 2012-12-05 12:23:15\t0\t7\t7.66\n> 2012-12-05 12:23:16\t0\t17\t7.66\n> 2012-12-05 12:23:17\t0\t1\t24.51\n> 2012-12-05 12:23:18\t0\t2\t24.51\n> \n> It's much harder to trigger on 3.4, but still happens.\n> \n> If anyone has tested against 3.6 or 3.7, I'd love to hear your input. Inconsistent load reports are one thing... strangled performance and inflated CPU usage are quite another.\n> \n> -- \n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n> 100\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 22:45:04 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "On Wed, Dec 5, 2012 at 10:28 AM, Shaun Thomas <[email protected]> wrote:\n> Hey guys,\n>\n> This isn't a question, but a kind of summary over a ton of investigation\n> I've been doing since a recent \"upgrade\". Anyone else out there with\n> \"big iron\" might want to confirm this, but it seems pretty reproducible.\n> This seems to affect the latest 3.2 mainline and by extension, any\n> platform using it. My tests are restricted to Ubuntu 12.04, but it may\n> apply elsewhere.\n>\n> Comparing the latest official 3.2 kernel to the latest official 3.4\n> kernel (both Ubuntu), there are some rather striking differences. I'll\n> start with some pgbench tests.\n\nIs 3.2 a significant regression from previous releases, or is 3.4 just\nfaster? Your wording only indicates that \"older kernel is slow,\" but\nyour tone would suggest that you feel this is a regression, cf. being\nunhappy that 3.2 made its way into a LTS release (why wouldn't it? 
it\nwas a relatively current kernel at the time).\n\n--\nfdr\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 14:19:41 -0800", "msg_from": "Daniel Farina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "On 12/05/2012 04:19 PM, Daniel Farina wrote:\n\n> Is 3.2 a significant regression from previous releases, or is 3.4 just\n> faster? Your wording only indicates that \"older kernel is slow,\" but\n> your tone would suggest that you feel this is a regression, cf.\n\nIt's definitely a regression. I'm trying to pin it down, but the \n3.2.0-24 kernel didn't do the CPU drain down to single-digits on that \nclient load test. I'm working on 3.2.0-30 and going down to figure out \nwhich patch might have done it.\n\nOlder kernels performed better. And by older, I mean 2.6. Still not 3.4 \nlevels, but that's expected. I haven't checked 3.0, but other threads \nI've read suggest it had less problems. Sorry if I wasn't clear.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 16:25:28 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "On Wed, Dec 5, 2012 at 04:25:28PM -0600, Shaun Thomas wrote:\n> On 12/05/2012 04:19 PM, Daniel Farina wrote:\n> \n> >Is 3.2 a significant regression from previous releases, or is 3.4 just\n> >faster? Your wording only indicates that \"older kernel is slow,\" but\n> >your tone would suggest that you feel this is a regression, cf.\n> \n> It's definitely a regression. I'm trying to pin it down, but the\n> 3.2.0-24 kernel didn't do the CPU drain down to single-digits on\n> that client load test. I'm working on 3.2.0-30 and going down to\n> figure out which patch might have done it.\n> \n> Older kernels performed better. And by older, I mean 2.6. Still not\n> 3.4 levels, but that's expected. I haven't checked 3.0, but other\n> threads I've read suggest it had less problems. Sorry if I wasn't\n> clear.\n\nAh, that is interesting about 2.6. I had wondered how Debian stable\nwould have performed, 2.6.32-5. This relates to a recent discussion\nabout the appropriateness of Ubuntu for database servers:\n\n\thttp://archives.postgresql.org/pgsql-performance/2012-11/msg00358.php\n\nThanks.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 17:41:48 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL\n Performance" }, { "msg_contents": "On 12/05/2012 04:41 PM, Bruce Momjian wrote:\n\n> Ah, that is interesting about 2.6. 
I had wondered how Debian stable\n> would have performed, 2.6.32-5. This relates to a recent discussion\n> about the appropriateness of Ubuntu for database servers:\n\nHmm. I may have to recant. I just removed our fusionIO driver from the \nloop and suddenly everything is honey and roses. It would appear that \nsome recent 3.2 kernel patch borks the driver in some horrible way. \nWihtout it, I see 50-ish percent CPU, 70k tps even with 800 clients... \nJust like 3.4.\n\nSo I jumped the gun a bit. Stupid drivers.\n\nI'm still curious why only recent 3.2's cause it, but 3.4 don't. That's \nmighty odd.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 17:04:28 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "On Wed, Dec 5, 2012 at 4:04 PM, Shaun Thomas <[email protected]> wrote:\n> On 12/05/2012 04:41 PM, Bruce Momjian wrote:\n>\n>> Ah, that is interesting about 2.6. I had wondered how Debian stable\n>> would have performed, 2.6.32-5. This relates to a recent discussion\n>> about the appropriateness of Ubuntu for database servers:\n>\n>\n> Hmm. I may have to recant. I just removed our fusionIO driver from the loop\n> and suddenly everything is honey and roses. It would appear that some recent\n> 3.2 kernel patch borks the driver in some horrible way. Wihtout it, I see\n> 50-ish percent CPU, 70k tps even with 800 clients... Just like 3.4.\n>\n> So I jumped the gun a bit. Stupid drivers.\n>\n> I'm still curious why only recent 3.2's cause it, but 3.4 don't. That's\n> mighty odd.\n\nHave you got a support contract with fusion IO guys? Where I work we\nhave fusion IO cards and a support contract and are about to start\ndoing some testing on ubuntu 12.04 as well so I'll let you know what\nwe find out.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 17:54:09 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "On Thu, Dec 6, 2012 at 1:28 AM, Shaun Thomas <[email protected]> wrote:\n\n> This isn't a question, but a kind of summary over a ton of investigation\n> I've been doing since a recent \"upgrade\". Anyone else out there with\n> \"big iron\" might want to confirm this, but it seems pretty reproducible.\n> This seems to affect the latest 3.2 mainline and by extension, any\n> platform using it. My tests are restricted to Ubuntu 12.04, but it may\n> apply elsewhere.\n\nI'm not seeing this on our production systems. I haven't run benchmarks.\n\nOne of our systems currently has a mixture of PG 8.4 shards running on\nUbuntu 10.04 (2.6 kernel) and PG 9.1 shards running on Ubuntu 12.04\n(3.2 kernel). Load & cpu utilization (per 'top') are comparable.\nShards have 64GB of RAM, shared_buffers=3GB, 60 active connections.\n\nAnother production PG 9.1 system with shared_buffers=5GB also seems\nfine. 
Old load graphs show the load is comparable from when it was\nrunning Ubuntu 10.04.\n\nMy big systems are still all on Ubuntu 10.04 (cut over in January I expect).\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 6 Dec 2012 14:51:41 +0700", "msg_from": "Stuart Bishop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" }, { "msg_contents": "On 05/12/2012 18:28, Shaun Thomas wrote:\n> Hey guys,\n>\n> This isn't a question, but a kind of summary over a ton of investigation\n> I've been doing since a recent \"upgrade\". Anyone else out there with\n> \"big iron\" might want to confirm this, but it seems pretty reproducible.\n> This seems to affect the latest 3.2 mainline and by extension, any\n> platform using it. My tests are restricted to Ubuntu 12.04, but it may\n> apply elsewhere.\n>\nVery interesting results, I've been trying to benchmark my box but \ngetting what I would call poor performance given the setup, but I am \nrunning 3.2 on ubuntu 12.04. Could I ask what hardware (offline if you \nwish) was used for the results below?\n> Comparing the latest official 3.2 kernel to the latest official 3.4\n> kernel (both Ubuntu), there are some rather striking differences. I'll\n> start with some pgbench tests.\n>\n> * This test is 800 read-only clients, with 2 controlling threads on a\n> 55GB database (scaling factor of 3600) for 3 minutes.\n> * With 3.4:\n> * Max TPS was 68933.\n> * CPU was between 50 and 55% idle.\n> * Load average was between 10 and 15.\n> * With 3.2:\n> * Max TPS was 17583. A total loss of 75% performance.\n> * CPU was between 12 and 25% idle.\n> * Load average was between 10 and 60---effectively random.\n> * Next, we checked minimal write tests. This time, with only two\n> clients. All other metrics are the same.\n> * With 3.4:\n> * Max TPS was 4548.\n> * CPU was between 88 and 92% idle.\n> * Load average was between 1.7 and 2.5.\n> * With 3.2:\n> * Max TPS was 4639.\n> * CPU was between 88 and 92% idle.\n> * Load average was between 3 and 4.\n>\nTIme to see what a 3.4 kernel does to my setup I think?\n\nThanks\n\nJohn\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 06 Dec 2012 08:29:52 +0000", "msg_from": "John Lister <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 12.04 / 3.2 Kernel Bad for PostgreSQL Performance" } ]
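For correlating the kernel-level load numbers with what the database itself is doing, a periodic sample of backend states can complement the shell loop above. This is only a sketch and uses the 9.2-era pg_stat_activity columns (state, waiting).

    -- Rough in-database companion to the load monitor: how many backends
    -- are active, idle, or waiting on locks right now.
    SELECT state, waiting, count(*)
    FROM pg_stat_activity
    GROUP BY state, waiting
    ORDER BY count(*) DESC;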
[ { "msg_contents": "Hi,\n\nI'm struggling with a query for some time and the major problem of the\nquery is that the statistics are way wrong on a particular operation:\n -> Nested Loop (cost=3177.72..19172.84 rows=*2* width=112) (actual\ntime=139.221..603.929 rows=*355331* loops=1)\n Join Filter: (l.location_id = r.location_id)\n -> Hash Join (cost=3177.71..7847.52 rows=*33914* width=108)\n(actual time=138.343..221.852 rows=*36664* loops=1)\n Hash Cond: (el.location_id = l.location_id)\n ...\n -> Index Scan using idx_test1 on representations r\n(cost=0.01..0.32 rows=*1* width=12) (actual time=0.002..0.008\nrows=*10* loops=36664)\n ...\n(extracted from the original plan which is quite massive)\n\nI tried to improve the statistics of l.location_id, el.location_id,\nr.location_id and idx_test1.location_id (up to 5000) but it doesn't\nget better.\n\nAny idea on how I could get better statistics in this particular\nexample and why the estimate of the nested loop is so wrong while the\nones for each individual operations are quite good?\n\nThis is with PostgreSQL 9.2.1.\n\nThanks.\n\n-- \nGuillaume\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 20:39:14 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Any idea on how to improve the statistics estimates for this plan?" }, { "msg_contents": "On Wed, Dec 5, 2012 at 11:39 AM, Guillaume Smet\n<[email protected]> wrote:\n> Hi,\n>\n> I'm struggling with a query for some time and the major problem of the\n> query is that the statistics are way wrong on a particular operation:\n> -> Nested Loop (cost=3177.72..19172.84 rows=*2* width=112) (actual\n> time=139.221..603.929 rows=*355331* loops=1)\n> Join Filter: (l.location_id = r.location_id)\n> -> Hash Join (cost=3177.71..7847.52 rows=*33914* width=108)\n> (actual time=138.343..221.852 rows=*36664* loops=1)\n> Hash Cond: (el.location_id = l.location_id)\n> ...\n> -> Index Scan using idx_test1 on representations r\n> (cost=0.01..0.32 rows=*1* width=12) (actual time=0.002..0.008\n> rows=*10* loops=36664)\n> ...\n> (extracted from the original plan which is quite massive)\n\nCould you reduce the plan size by removing joins that are extraneous\nto this specific problem?\n\n> I tried to improve the statistics of l.location_id, el.location_id,\n> r.location_id and idx_test1.location_id (up to 5000) but it doesn't\n> get better.\n\nIf there is a correlation that PostgreSQL is incapable of\nunderstanding, than no amount of increase is going to help.\n\n>\n> Any idea on how I could get better statistics in this particular\n> example and why the estimate of the nested loop is so wrong while the\n> ones for each individual operations are quite good?\n\nThe trivial answer to \"why\" is that it thinks that the vast majority\nof the 33914 rows from the hash join will find no partners in r, but\nin fact each has about 10 partner in r. 
Why does it think that?\nWithout seeing all the join conditions and filter conditions on those\ntables, plus the size of each unfiltered pair-wise joins, it is hard\nto speculate.\n\nIf you remove all filters (all members of the \"where\" which are not\njoin criteria), then what does the plan look like?\n\nIf those estimates are better, it probably means that your filter\ncondition is picking a part of the \"el JOIN l\" that has much different\nselectivity to r than the full set does, and PostgreSQL has no way of\nknowing that.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 7 Dec 2012 18:32:15 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any idea on how to improve the statistics estimates for this\n plan?" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> The trivial answer to \"why\" is that it thinks that the vast majority\n> of the 33914 rows from the hash join will find no partners in r, but\n> in fact each has about 10 partner in r. Why does it think that?\n\nI'm wondering if maybe the vast majority of the rows indeed have no join\npartners, but there are a small number with a large number of partners.\nThe statistics might miss these, if so.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 07 Dec 2012 23:16:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any idea on how to improve the statistics estimates for this\n plan?" }, { "msg_contents": "(cough cough, missed the Reply to all button)\n\nHi Jeff,\n\nOn Sat, Dec 8, 2012 at 3:32 AM, Jeff Janes <[email protected]> wrote:\n> If those estimates are better, it probably means that your filter\n> condition is picking a part of the \"el JOIN l\" that has much different\n> selectivity to r than the full set does, and PostgreSQL has no way of\n> knowing that.\n\nIt's certainly that. The fact is that this query is OK on most of the\nFrench territory but it doesn't go well when you're looking at Paris\narea in particular. As the query is supposed to return the shows you\ncan book, the selectivity is quite different as Paris has a lot of\nplaces AND places organize a lot more shows in Paris than in the rest\nof France. I was hoping that the high number of places would be enough\nto circumvent the second fact which is much harder for PostgreSQL to\nget but it looks like it's not.\n\nIs there any way I could mitigate this issue by playing with planner\nknobs? I don't remember having seen something I could use for\nselectivity (such as the n_distinct stuff). It's not that big a deal\nif it's a little worth elsewhere as there are far less places so the\neffects of a bad plan are more contained.\n\n-- \nGuillaume\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 8 Dec 2012 15:51:39 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any idea on how to improve the statistics estimates for this\n plan?" 
}, { "msg_contents": "On Sat, Dec 8, 2012 at 5:19 AM, Guillaume Smet <[email protected]> wrote:\n> Hi Jeff,\n>\n> On Sat, Dec 8, 2012 at 3:32 AM, Jeff Janes <[email protected]> wrote:\n>> If those estimates are better, it probably means that your filter\n>> condition is picking a part of the \"el JOIN l\" that has much different\n>> selectivity to r than the full set does, and PostgreSQL has no way of\n>> knowing that.\n>\n> It's certainly that. The fact is that this query is OK on most of the\n> French territory but it doesn't go well when you're looking at Paris\n> area in particular. As the query is supposed to return the shows you\n> can book, the selectivity is quite different as Paris has a lot of\n> places AND places organize a lot more shows in Paris than in the rest\n> of France. I was hoping that the high number of places would be enough\n> to circumvent the second fact which is much harder for PostgreSQL to\n> get but it looks like it's not.\n>\n> Is there any way I could mitigate this issue by playing with planner\n> knobs?\n\nI don't know the answer to that. But does it matter? If it knew you\nwere going to get 300,000 rows rather than 2, would it pick a better\nplan?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 8 Dec 2012 11:03:27 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any idea on how to improve the statistics estimates for this\n plan?" } ]
[ { "msg_contents": "Hi everyone ,I have a question. I have a table with large data (i was used\nbytea datatype and insert a binary content to table ) so that Postgres help\nme get a TOAST table to storage out-of-line values .\nAssume that my table is \" tbl_test \" and toast table oid is 16816\n\nWhen i peform EXPLAIN ANALYZE select query on tbl_test ( EXPLAIN ANALYZE\nSELECT * FROM tbl_test).It show that sequential scan was performed on\ntbl_test ,but when i check pg_toast table with this query : \n\n\nSELECT\t\n\trelid,\n\tschemaname,\n\trelname,\n\tseq_scan,\n\tseq_tup_read,\n\tidx_scan,\nFROM pg_stat_all_tables \t\nWHERE relid IN ( SELECT oid\t\n\tFROM pg_class\n\tWHERE relkind = 't' ) AND relid = 16816\n\nI saw that seq_tup_read = 0 and the seq_scan is always is 1 .idx_scan is\nincrease arcording to the number of query on tbl_test\n \n I was wordering : Do have a sequential scan perform on tbl_test and other\nindex scan will be peforming on TOAST after this sequential scan ?\nCan you explain this dump question to me ,please ? \nP/S : sorry for my bad English .Thanks ! :)\n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Perform-scan-on-Toast-table-tp5735406.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 5 Dec 2012 19:08:07 -0800 (PST)", "msg_from": "classical_89 <[email protected]>", "msg_from_op": true, "msg_subject": "Perform scan on Toast table" }, { "msg_contents": "classical_89 wrote:\n> Hi everyone ,I have a question. I have a table with large data (i was used\n> bytea datatype and insert a binary content to table ) so that Postgres help\n> me get a TOAST table to storage out-of-line values .\n> Assume that my table is \" tbl_test \" and toast table oid is 16816\n> \n> When i peform EXPLAIN ANALYZE select query on tbl_test ( EXPLAIN ANALYZE\n> SELECT * FROM tbl_test).It show that sequential scan was performed on\n> tbl_test ,but when i check pg_toast table with this query :\n> \n> \n> SELECT\n> \trelid,\n> \tschemaname,\n> \trelname,\n> \tseq_scan,\n> \tseq_tup_read,\n> \tidx_scan,\n> FROM pg_stat_all_tables\n> WHERE relid IN ( SELECT oid\n> \tFROM pg_class\n> \tWHERE relkind = 't' ) AND relid = 16816\n> \n> I saw that seq_tup_read = 0 and the seq_scan is always is 1 .idx_scan is\n> increase arcording to the number of query on tbl_test\n> \n> I was wordering : Do have a sequential scan perform on tbl_test and other\n> index scan will be peforming on TOAST after this sequential scan ?\n> Can you explain this dump question to me ,please ?\n\nThe entries in the TOAST table need not be in the same order\nas the entries in the main table. So if you'd fetch them\nsequentially, you'd have to reorder them afterwards.\n\nIt seems logical that access via the TOAST index is cheaper.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 10 Dec 2012 08:57:11 +0000", "msg_from": "Albe Laurenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perform scan on Toast table" } ]
[ { "msg_contents": "Hi,\n\nI'm using a foreign data wrapper to access mongodb and I'm looking for a\nway to monitor query stats against foreign tables.\n\nIt looks like the common methods have limited support for foreign tables at\nthis time. pg_stat_statements collects the query, total time, and rows\nreturned, which is useful. But all the disk access stats are zero\n(understandably). Looks like pg_stat_all_tables doesn't have any info on\nforeign tables from my tests.\n\nI'm interested in the following:\n\n - Foreign table rows read\n - Foreign table bytes read\n - Foreign table read time\n\nI'm working with my own fork of the FDW, so I could build these in myself,\nbut I was wondering if there's more generic support for this sort of stuff.\n Or at the least, if I do implement it can I push it into another stats\ncollection framework rather than logging it.\n\nThanks,\nDavid Crawford\n\nHi,I'm using a foreign data wrapper to access mongodb and I'm looking for a way to monitor query stats against foreign tables.It looks like the common methods have limited support for foreign tables at this time.  pg_stat_statements collects the query, total time, and rows returned, which is useful.  But all the disk access stats are zero (understandably).  Looks like pg_stat_all_tables doesn't have any info on foreign tables from my tests.\nI'm interested in the following:Foreign table rows readForeign table bytes readForeign table read timeI'm working with my own fork of the FDW, so I could build these in myself, but I was wondering if there's more generic support for this sort of stuff.  Or at the least, if I do implement it can I push it into another stats collection framework rather than logging it.\nThanks,David Crawford", "msg_date": "Fri, 7 Dec 2012 17:09:52 -0500", "msg_from": "David Crawford <[email protected]>", "msg_from_op": true, "msg_subject": "How do I track stats on foreign table access through foreign data\n\twrapper?" } ]
[ { "msg_contents": "#### Pitch ######################################################################################\nI previously posted this question http://archives.postgresql.org/pgsql-performance/2012-11/msg00289.php about a performance issue with an update query. \nThe question evolved into a more general discussion about my setup, and about a lot of I/O wait that I was encountering. Since then, I have gotten a whole lot more familiar with measuring things, and now I \"just\" need some experienced eyes to judge which direction I should go in - do I have a hardware issue, or a software issue - and what action should I take?\n\n##### My setup #############################################################################\nThe use case:\nAt night time we are doing a LOT of data maintenance, and hence the load on the database is very different from the day time. However we would like to be able to do some of it in the daytime, it's simply just too \"heavy\" on the database as is right now. The stats shown below is from one of those \"heavy\" load times.\n\nHardware: \n - 32Gb ram \n - 8 core Xeon E3-1245 processor\n - Two SEAGATE ST33000650NS drives (called sdc and sdd in the stats) in a softeware RAID1 array (called md2 in the stats)\n - Two INTEL SSDSC2CW240A3 SSD drives (called sda and sdb in the stats) in a software RAID1 (called md3 in the stats)\n\nSoftware:\nPostgres 9.2 running on 64bit ubuntu 12.04 with kernel 3.2\n\nConfiguration:\n# postgresql.conf (a shortlist of everything changed from the default)\ndata_directory = '/var/lib/postgresql/9.2/main'\nhba_file = '/etc/postgresql/9.2/main/pg_hba.conf'\nident_file = '/etc/postgresql/9.2/main/pg_ident.conf'\nexternal_pid_file = '/var/run/postgresql/9.2-main.pid'\nlisten_addresses = '192.168.0.2, localhost'\nport = 5432\nmax_connections = 300\nunix_socket_directory = '/var/run/postgresql'\nwal_level = hot_standby\nsynchronous_commit = off\narchive_mode = on\narchive_command = 'rsync -a %p [email protected]:/var/lib/postgresql/9.2/wals/%f </dev/null'\nmax_wal_senders = 1\nwal_keep_segments = 32\nlog_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '\ndatestyle = 'iso, mdy'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\ndefault_statistics_target = 100\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 22GB\nwork_mem = 160MB\nwal_buffers = 4MB\ncheckpoint_segments = 100\nshared_buffers = 4GB\ncheckpoint_timeout = 10min\n\nThe kernel has bee tweaked like so:\nvm.dirty_ratio = 10\nvm.dirty_background_ratio = 1\nkernel.shmmax = 8589934592\nkernel.shmall = 17179869184\n\nThe pg_xlog folder has been moved onto the SSD array (md3), and symlinked back into the postgres dir.\n\n##### The stats ###############################################################\nThese are the typical observations/stats I see in one of these periods:\n\n1)\nAt top level this is what I see in new relic:\nhttps://rpm.newrelic.com/public/charts/6ewGRle6bmc\n\n2)\nWhen the database is loaded like this, I see a lot of queries talking up to 1000 times as long, as they would when the database is not loaded so heavily.\n\n3)\nsudo iostat -dmx (typical usage)\nLinux 3.2.0-33-generic (master-db) \t12/10/2012 \t_x86_64_\t(8 CPU)\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\nsda 0.00 6.52 3.59 26.61 0.22 0.74 65.49 0.01 0.40 0.77 0.35 0.14 0.43\nsdb 0.00 8.31 0.03 28.38 0.00 0.97 69.63 0.01 0.52 0.27 0.52 
0.15 0.43\nsdc 1.71 46.01 34.83 116.62 0.56 4.06 62.47 1.90 12.57 21.81 9.81 1.89 28.66\nsdd 1.67 46.14 34.89 116.49 0.56 4.06 62.46 1.58 10.43 21.66 7.07 1.89 28.60\nmd1 0.00 0.00 0.00 0.00 0.00 0.00 2.69 0.00 0.00 0.00 0.00 0.00 0.00\nmd0 0.00 0.00 0.11 0.24 0.00 0.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00\nmd2 0.00 0.00 72.99 161.95 1.11 4.06 45.10 0.00 0.00 0.00 0.00 0.00 0.00\nmd3 0.00 0.00 0.05 32.32 0.00 0.74 47.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n3)\nsudo iotop -oa (running for about a minute or so)\nTID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND\n 292 be/4 root 0.00 B 0.00 B 0.00 % 99.33 % [md2_raid1]\n 2815 be/4 postgres 19.51 M 25.90 M 0.00 % 45.49 % postgres: autovacuum worker process production\n32553 be/4 postgres 45.74 M 9.38 M 0.00 % 37.89 % postgres: user production 192.168.0.3(58866) UPDATE\n32570 be/4 postgres 6.91 M 35.02 M 0.00 % 16.71 % postgres: user production 192.168.0.3(35547) idle\n32575 be/4 postgres 4.06 M 43.90 M 0.00 % 16.62 % postgres: user production 192.168.0.3(35561) SELECT\n31673 be/4 postgres 4.14 M 52.16 M 0.00 % 16.24 % postgres: user production 192.168.0.3(39112) idle\n32566 be/4 postgres 4.73 M 44.95 M 0.00 % 15.66 % postgres: user production 192.168.0.3(35531) idle\n32568 be/4 postgres 4.50 M 33.84 M 0.00 % 14.62 % postgres: user production 192.168.0.3(35543) SELECT\n32573 be/4 postgres 3.20 M 34.44 M 0.00 % 13.98 % postgres: user production 192.168.0.3(35559) idle\n31590 be/4 postgres 3.23 M 29.72 M 0.00 % 13.90 % postgres: user production 192.168.0.3(50690) idle in transaction\n32577 be/4 postgres 5.09 M 25.54 M 0.00 % 13.63 % postgres: user production 192.168.0.3(35563) idle\n32565 be/4 postgres 2.06 M 35.93 M 0.00 % 13.41 % postgres: user production 192.168.0.3(35529) SELECT\n32546 be/4 postgres 4.48 M 36.49 M 0.00 % 13.39 % postgres: user production 192.168.0.3(56927) UPDATE waiting\n32569 be/4 postgres 3.50 M 26.75 M 0.00 % 12.82 % postgres: user production 192.168.0.3(35545) INSERT\n31671 be/4 postgres 4.58 M 24.45 M 0.00 % 12.76 % postgres: user production 192.168.0.3(34841) idle in transaction\n32551 be/4 postgres 3.26 M 31.77 M 0.00 % 12.06 % postgres: user production 192.168.0.3(58864) idle in transaction\n32574 be/4 postgres 5.32 M 32.92 M 0.00 % 11.70 % postgres: user production 192.168.0.3(35560) idle\n32572 be/4 postgres 3.00 M 32.66 M 0.00 % 11.66 % postgres: user production 192.168.0.3(35558) UPDATE\n32560 be/4 postgres 5.12 M 25.89 M 0.00 % 11.52 % postgres: user production 192.168.0.3(33886) SELECT\n32567 be/4 postgres 4.66 M 36.47 M 0.00 % 11.44 % postgres: user production 192.168.0.3(35534) SELECT\n32571 be/4 postgres 2.86 M 31.27 M 0.00 % 11.31 % postgres: user production 192.168.0.3(35557) SELECT\n32552 be/4 postgres 4.38 M 28.75 M 0.00 % 11.09 % postgres: user production 192.168.0.3(58865) idle in transaction\n32554 be/4 postgres 3.69 M 30.21 M 0.00 % 10.90 % postgres: user production 192.168.0.3(58870) UPDATE\n 339 be/3 root 0.00 B 2.29 M 0.00 % 9.81 % [jbd2/md2-8]\n32576 be/4 postgres 3.37 M 19.91 M 0.00 % 9.73 % postgres: user production 192.168.0.3(35562) idle\n32555 be/4 postgres 3.09 M 31.96 M 0.00 % 9.02 % postgres: user production 192.168.0.3(58875) SELECT\n27548 be/4 postgres 0.00 B 97.12 M 0.00 % 7.41 % postgres: writer process\n31445 be/4 postgres 924.00 K 27.35 M 0.00 % 7.11 % postgres: user production 192.168.0.1(34536) idle\n31443 be/4 postgres 2.54 M 4.56 M 0.00 % 6.32 % postgres: user production 192.168.0.1(34508) idle\n31459 be/4 postgres 1480.00 K 21.36 M 0.00 % 5.63 % postgres: user production 
192.168.0.1(34543) idle\n 1801 be/4 postgres 1896.00 K 10.89 M 0.00 % 5.57 % postgres: user production 192.168.0.3(34177) idle\n32763 be/4 postgres 1696.00 K 6.95 M 0.00 % 5.33 % postgres: user production 192.168.0.3(57984) SELECT\n 1800 be/4 postgres 2.46 M 5.13 M 0.00 % 5.24 % postgres: user production 192.168.0.3(34175) SELECT\n 1803 be/4 postgres 1816.00 K 9.09 M 0.00 % 5.16 % postgres: user production 192.168.0.3(34206) idle\n32578 be/4 postgres 2.57 M 11.62 M 0.00 % 5.06 % postgres: user production 192.168.0.3(35564) SELECT\n31440 be/4 postgres 3.02 M 4.04 M 0.00 % 4.65 % postgres: user production 192.168.0.1(34463) idle\n32605 be/4 postgres 1844.00 K 11.82 M 0.00 % 4.49 % postgres: user production 192.168.0.3(40399) idle\n27547 be/4 postgres 0.00 B 0.00 B 0.00 % 3.93 % postgres: checkpointer process\n31356 be/4 postgres 1368.00 K 3.27 M 0.00 % 3.93 % postgres: user production 192.168.0.1(34450) idle\n32542 be/4 postgres 1180.00 K 6.05 M 0.00 % 3.90 % postgres: user production 192.168.0.3(56859) idle\n32523 be/4 postgres 1088.00 K 4.33 M 0.00 % 3.59 % postgres: user production 192.168.0.3(48164) idle\n32606 be/4 postgres 1964.00 K 6.94 M 0.00 % 3.51 % postgres: user production 192.168.0.3(40426) SELECT\n31466 be/4 postgres 1596.00 K 3.11 M 0.00 % 3.47 % postgres: user production 192.168.0.1(34550) idle\n32544 be/4 postgres 1184.00 K 4.25 M 0.00 % 3.38 % postgres: user production 192.168.0.3(56861) idle\n31458 be/4 postgres 1088.00 K 1528.00 K 0.00 % 3.33 % postgres: user production 192.168.0.1(34541) idle\n31444 be/4 postgres 884.00 K 4.23 M 0.00 % 3.27 % postgres: user production 192.168.0.1(34510) idle\n32522 be/4 postgres 408.00 K 2.98 M 0.00 % 3.27 % postgres: user production 192.168.0.5(38361) idle\n32762 be/4 postgres 1156.00 K 5.28 M 0.00 % 3.20 % postgres: user production 192.168.0.3(57962) idle\n32582 be/4 postgres 1084.00 K 3.38 M 0.00 % 2.86 % postgres: user production 192.168.0.5(43104) idle\n31353 be/4 postgres 2.04 M 3.02 M 0.00 % 2.82 % postgres: user production 192.168.0.1(34444) idle\n31441 be/4 postgres 700.00 K 2.68 M 0.00 % 2.64 % postgres: user production 192.168.0.1(34465) idle\n31462 be/4 postgres 980.00 K 3.50 M 0.00 % 2.57 % postgres: user production 192.168.0.1(34547) idle\n32709 be/4 postgres 428.00 K 3.23 M 0.00 % 2.56 % postgres: user production 192.168.0.5(34323) idle\n 685 be/4 postgres 748.00 K 3.59 M 0.00 % 2.41 % postgres: user production 192.168.0.3(34911) idle\n 683 be/4 postgres 728.00 K 3.19 M 0.00 % 2.38 % postgres: user production 192.168.0.3(34868) idle\n32765 be/4 postgres 464.00 K 3.76 M 0.00 % 2.21 % postgres: user production 192.168.0.3(58074) idle\n32760 be/4 postgres 808.00 K 6.18 M 0.00 % 2.16 % postgres: user production 192.168.0.3(57958) idle\n 1912 be/4 postgres 372.00 K 3.03 M 0.00 % 2.16 % postgres: user production 192.168.0.5(33743) idle\n31446 be/4 postgres 1004.00 K 2.09 M 0.00 % 2.16 % postgres: user production 192.168.0.1(34539) idle\n31460 be/4 postgres 584.00 K 2.74 M 0.00 % 2.10 % postgres: user production 192.168.0.1(34545) idle\n\n5) vmstat 1\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 1 1 573424 321080 27124 28504352 0 0 143 618 0 4 2 0 91 7\n 0 1 573424 320764 27124 28504496 0 0 104 15654 3788 4961 1 0 85 14\n 0 1 573424 320684 27124 28504616 0 0 276 12736 4099 5374 0 1 84 15\n 0 1 573424 319672 27124 28504900 0 0 80 7746 3624 4949 2 0 82 16\n 0 1 573424 319180 27124 28504972 0 0 36 12489 3653 4761 2 0 86 12\n 0 1 573424 318184 
27132 28505000 0 0 8 10482 3413 4898 0 0 87 13\n 0 1 573424 318424 27132 28505016 0 0 0 9564 2746 4290 0 0 87 13\n 0 1 573424 318308 27168 28505016 0 0 36 10562 1895 2149 0 0 87 12\n 0 3 573424 318208 27168 28505020 0 0 84 18529 3035 3265 1 0 85 14\n 0 1 573424 318732 27176 28505080 0 0 84 14574 2986 3231 0 0 84 16\n 0 2 573424 317588 27176 28505184 0 0 4 6681 1991 2207 2 1 86 12\n 0 1 573424 316852 27176 28505260 0 0 76 7670 2910 3996 2 1 85 13\n 0 1 573424 316632 27184 28505256 0 0 0 7186 2661 3740 0 0 87 12\n 0 1 573424 316720 27188 28505260 0 0 0 2590 1731 2474 0 0 88 12\n 0 1 573424 314252 27192 28505696 0 0 460 11612 1757 2431 0 0 82 18\n 0 2 573424 313504 27192 28505724 0 0 0 19656 1775 2099 0 0 83 17\n 0 3 573424 313300 27196 28505780 0 0 188 6237 2746 3193 2 0 80 17\n 0 2 573424 312736 27200 28506348 0 0 804 18466 5014 6430 2 1 75 23\n 2 35 573424 307564 27200 28509920 0 0 3912 16280 14377 15470 14 3 28 56\n 0 5 573424 282848 27208 28533964 0 0 7484 27580 22017 25938 17 3 17 63\n 1 5 573424 221100 27208 28563360 0 0 2852 3120 19639 28664 12 5 52 31\n 0 4 573428 229912 26704 28519184 0 4 1208 5890 13976 20851 13 3 56 28\n 0 2 573448 234680 26672 28513632 0 20 0 17204 1694 2636 0 0 71 28\n 3 7 573452 220836 26644 28525548 0 4 1540 36370 27928 36551 17 5 50 27\n 1 3 573488 234380 26556 28517416 0 36 584 19066 8275 9467 3 2 60 36\n 0 1 573488 234496 26556 28517852 0 0 56 47429 3290 4310 0 0 79 20\n\n6) sudo lsof - a hell of a lot of output, I can post it if anyone is interested :-)\n\n#### Notes and thoughts ##############################################################################\n\nAs you can see, even though I have moved the pg_xlog folder to the SSD array (md3) the by far largest amount of writes still goes to the regular HDD's (md2), which puzzles me - what can that be?\nFrom stat 3) (the iostat) I notice that the SSD's doesn't seem to be something near fully utilized - maybe something else than just pg_xlog could be moved her? \nI have no idea if the amount of reads/writes is within the acceptable/capable for my kind of hardware, or if it is far beyond?\nIn stat 3) (the iotop) it says that the RAID array (md2) is the most \"waiting\" part, does that taste like a root cause, or more like a symptom of some other bottleneck?\n\nThanks, for taking the time to look at by data! :-)\n#### Pitch ######################################################################################I previously posted this question http://archives.postgresql.org/pgsql-performance/2012-11/msg00289.php about a performance issue with an update query. The question evolved into a more general discussion about my setup, and about a lot of I/O wait that I was encountering. Since then, I have gotten a whole lot more familiar with measuring things, and now I \"just\" need some experienced eyes to judge which direction I should go in - do I have a hardware issue, or a software issue - and what action should I take?#####  My setup #############################################################################The use case:At night time we are doing a LOT of data maintenance, and hence the load on the database is very different from the day time. However we would like to be able to do some of it in the daytime, it's simply just too \"heavy\" on the database as is right now. 
The stats shown below is from one of those \"heavy\" load times.Hardware:   - 32Gb ram   - 8 core Xeon E3-1245 processor  - Two SEAGATE ST33000650NS drives (called sdc and sdd in the stats) in a softeware RAID1 array (called md2 in the stats)  - Two INTEL SSDSC2CW240A3 SSD drives (called sda and sdb in the stats) in a software RAID1 (called md3 in the stats)Software:Postgres 9.2 running on 64bit ubuntu 12.04 with kernel 3.2Configuration:# postgresql.conf (a shortlist of everything changed from the default)data_directory = '/var/lib/postgresql/9.2/main'hba_file = '/etc/postgresql/9.2/main/pg_hba.conf'ident_file = '/etc/postgresql/9.2/main/pg_ident.conf'external_pid_file = '/var/run/postgresql/9.2-main.pid'listen_addresses = '192.168.0.2, localhost'port = 5432max_connections = 300unix_socket_directory = '/var/run/postgresql'wal_level = hot_standbysynchronous_commit = offarchive_mode = onarchive_command = 'rsync -a %p [email protected]:/var/lib/postgresql/9.2/wals/%f </dev/null'max_wal_senders = 1wal_keep_segments = 32log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d 'datestyle = 'iso, mdy'lc_monetary = 'en_US.UTF-8'lc_numeric = 'en_US.UTF-8'lc_time = 'en_US.UTF-8'default_text_search_config = 'pg_catalog.english'default_statistics_target = 100maintenance_work_mem = 1GBcheckpoint_completion_target = 0.9effective_cache_size = 22GBwork_mem = 160MBwal_buffers = 4MBcheckpoint_segments = 100shared_buffers = 4GBcheckpoint_timeout = 10minThe kernel has bee tweaked like so:vm.dirty_ratio = 10vm.dirty_background_ratio = 1kernel.shmmax = 8589934592kernel.shmall = 17179869184The pg_xlog folder has been moved onto the SSD array (md3), and symlinked back into the postgres dir.##### The stats ###############################################################These are the typical observations/stats I see in one of these periods:1)At top level this is what I see in new relic:https://rpm.newrelic.com/public/charts/6ewGRle6bmc2)When the database is loaded like this, I see a lot of queries talking up to 1000 times as long, as they would when the database is not loaded so heavily.3)sudo iostat -dmx (typical usage)Linux 3.2.0-33-generic (master-db)  12/10/2012  _x86_64_ (8 CPU)Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %utilsda               0.00     6.52    3.59   26.61     0.22     0.74    65.49     0.01    0.40    0.77    0.35   0.14   0.43sdb               0.00     8.31    0.03   28.38     0.00     0.97    69.63     0.01    0.52    0.27    0.52   0.15   0.43sdc               1.71    46.01   34.83  116.62     0.56     4.06    62.47     1.90   12.57   21.81    9.81   1.89  28.66sdd               1.67    46.14   34.89  116.49     0.56     4.06    62.46     1.58   10.43   21.66    7.07   1.89  28.60md1               0.00     0.00    0.00    0.00     0.00     0.00     2.69     0.00    0.00    0.00    0.00   0.00   0.00md0               0.00     0.00    0.11    0.24     0.00     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00md2               0.00     0.00   72.99  161.95     1.11     4.06    45.10     0.00    0.00    0.00    0.00   0.00   0.00md3               0.00     0.00    0.05   32.32     0.00     0.74    47.00     0.00    0.00    0.00    0.00   0.00   0.003)sudo iotop -oa (running for about a minute or so)TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND  292    be/4 root               0.00 B      0.00 B    0.00 % 99.33 % [md2_raid1] 2815  be/4 postgres     19.51 M     25.90 M  0.00 % 45.49 % postgres: 
autovacuum worker process   production32553 be/4 postgres     45.74 M      9.38 M  0.00 % 37.89 % postgres: user production 192.168.0.3(58866) UPDATE32570 be/4 postgres      6.91 M     35.02 M  0.00 % 16.71 % postgres: user production 192.168.0.3(35547) idle32575 be/4 postgres      4.06 M     43.90 M  0.00 % 16.62 % postgres: user production 192.168.0.3(35561) SELECT31673 be/4 postgres      4.14 M     52.16 M  0.00 % 16.24 % postgres: user production 192.168.0.3(39112) idle32566 be/4 postgres      4.73 M     44.95 M  0.00 % 15.66 % postgres: user production 192.168.0.3(35531) idle32568 be/4 postgres      4.50 M     33.84 M  0.00 % 14.62 % postgres: user production 192.168.0.3(35543) SELECT32573 be/4 postgres      3.20 M     34.44 M  0.00 % 13.98 % postgres: user production 192.168.0.3(35559) idle31590 be/4 postgres      3.23 M     29.72 M  0.00 % 13.90 % postgres: user production 192.168.0.3(50690) idle in transaction32577 be/4 postgres      5.09 M     25.54 M  0.00 % 13.63 % postgres: user production 192.168.0.3(35563) idle32565 be/4 postgres      2.06 M     35.93 M  0.00 % 13.41 % postgres: user production 192.168.0.3(35529) SELECT32546 be/4 postgres      4.48 M     36.49 M  0.00 % 13.39 % postgres: user production 192.168.0.3(56927) UPDATE waiting32569 be/4 postgres      3.50 M     26.75 M  0.00 % 12.82 % postgres: user production 192.168.0.3(35545) INSERT31671 be/4 postgres      4.58 M     24.45 M  0.00 % 12.76 % postgres: user production 192.168.0.3(34841) idle in transaction32551 be/4 postgres      3.26 M     31.77 M  0.00 % 12.06 % postgres: user production 192.168.0.3(58864) idle in transaction32574 be/4 postgres      5.32 M     32.92 M  0.00 % 11.70 % postgres: user production 192.168.0.3(35560) idle32572 be/4 postgres      3.00 M     32.66 M  0.00 % 11.66 % postgres: user production 192.168.0.3(35558) UPDATE32560 be/4 postgres      5.12 M     25.89 M  0.00 % 11.52 % postgres: user production 192.168.0.3(33886) SELECT32567 be/4 postgres      4.66 M     36.47 M  0.00 % 11.44 % postgres: user production 192.168.0.3(35534) SELECT32571 be/4 postgres      2.86 M     31.27 M  0.00 % 11.31 % postgres: user production 192.168.0.3(35557) SELECT32552 be/4 postgres      4.38 M     28.75 M  0.00 % 11.09 % postgres: user production 192.168.0.3(58865) idle in transaction32554 be/4 postgres      3.69 M     30.21 M  0.00 % 10.90 % postgres: user production 192.168.0.3(58870) UPDATE  339    be/3 root               0.00 B       2.29 M  0.00 %  9.81 % [jbd2/md2-8]32576 be/4 postgres      3.37 M     19.91 M  0.00 %  9.73 % postgres: user production 192.168.0.3(35562) idle32555 be/4 postgres      3.09 M     31.96 M  0.00 %  9.02 % postgres: user production 192.168.0.3(58875) SELECT27548 be/4 postgres      0.00 B     97.12 M  0.00 %  7.41 % postgres: writer process31445 be/4 postgres    924.00 K     27.35 M  0.00 %  7.11 % postgres: user production 192.168.0.1(34536) idle31443 be/4 postgres      2.54 M      4.56 M  0.00 %  6.32 % postgres: user production 192.168.0.1(34508) idle31459 be/4 postgres   1480.00 K     21.36 M  0.00 %  5.63 % postgres: user production 192.168.0.1(34543) idle 1801 be/4 postgres   1896.00 K     10.89 M  0.00 %  5.57 % postgres: user production 192.168.0.3(34177) idle32763 be/4 postgres   1696.00 K      6.95 M  0.00 %  5.33 % postgres: user production 192.168.0.3(57984) SELECT 1800 be/4 postgres      2.46 M      5.13 M  0.00 %  5.24 % postgres: user production 192.168.0.3(34175) SELECT 1803 be/4 postgres   1816.00 K      9.09 M  0.00 %  5.16 % postgres: user production 
192.168.0.3(34206) idle32578 be/4 postgres      2.57 M     11.62 M  0.00 %  5.06 % postgres: user production 192.168.0.3(35564) SELECT31440 be/4 postgres      3.02 M      4.04 M  0.00 %  4.65 % postgres: user production 192.168.0.1(34463) idle32605 be/4 postgres   1844.00 K     11.82 M  0.00 %  4.49 % postgres: user production 192.168.0.3(40399) idle27547 be/4 postgres      0.00 B      0.00 B  0.00 %  3.93 % postgres: checkpointer process31356 be/4 postgres   1368.00 K      3.27 M  0.00 %  3.93 % postgres: user production 192.168.0.1(34450) idle32542 be/4 postgres   1180.00 K      6.05 M  0.00 %  3.90 % postgres: user production 192.168.0.3(56859) idle32523 be/4 postgres   1088.00 K      4.33 M  0.00 %  3.59 % postgres: user production 192.168.0.3(48164) idle32606 be/4 postgres   1964.00 K      6.94 M  0.00 %  3.51 % postgres: user production 192.168.0.3(40426) SELECT31466 be/4 postgres   1596.00 K      3.11 M  0.00 %  3.47 % postgres: user production 192.168.0.1(34550) idle32544 be/4 postgres   1184.00 K      4.25 M  0.00 %  3.38 % postgres: user production 192.168.0.3(56861) idle31458 be/4 postgres   1088.00 K   1528.00 K  0.00 %  3.33 % postgres: user production 192.168.0.1(34541) idle31444 be/4 postgres    884.00 K      4.23 M  0.00 %  3.27 % postgres: user production 192.168.0.1(34510) idle32522 be/4 postgres    408.00 K      2.98 M  0.00 %  3.27 % postgres: user production 192.168.0.5(38361) idle32762 be/4 postgres   1156.00 K      5.28 M  0.00 %  3.20 % postgres: user production 192.168.0.3(57962) idle32582 be/4 postgres   1084.00 K      3.38 M  0.00 %  2.86 % postgres: user production 192.168.0.5(43104) idle31353 be/4 postgres      2.04 M      3.02 M  0.00 %  2.82 % postgres: user production 192.168.0.1(34444) idle31441 be/4 postgres    700.00 K      2.68 M  0.00 %  2.64 % postgres: user production 192.168.0.1(34465) idle31462 be/4 postgres    980.00 K      3.50 M  0.00 %  2.57 % postgres: user production 192.168.0.1(34547) idle32709 be/4 postgres    428.00 K      3.23 M  0.00 %  2.56 % postgres: user production 192.168.0.5(34323) idle  685 be/4 postgres    748.00 K      3.59 M  0.00 %  2.41 % postgres: user production 192.168.0.3(34911) idle  683 be/4 postgres    728.00 K      3.19 M  0.00 %  2.38 % postgres: user production 192.168.0.3(34868) idle32765 be/4 postgres    464.00 K      3.76 M  0.00 %  2.21 % postgres: user production 192.168.0.3(58074) idle32760 be/4 postgres    808.00 K      6.18 M  0.00 %  2.16 % postgres: user production 192.168.0.3(57958) idle 1912 be/4 postgres    372.00 K      3.03 M  0.00 %  2.16 % postgres: user production 192.168.0.5(33743) idle31446 be/4 postgres   1004.00 K      2.09 M  0.00 %  2.16 % postgres: user production 192.168.0.1(34539) idle31460 be/4 postgres    584.00 K      2.74 M  0.00 %  2.10 % postgres: user production 192.168.0.1(34545) idle5) vmstat 1procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa 1  1 573424 321080  27124 28504352    0    0   143   618    0    4  2  0 91  7 0  1 573424 320764  27124 28504496    0    0   104 15654 3788 4961  1  0 85 14 0  1 573424 320684  27124 28504616    0    0   276 12736 4099 5374  0  1 84 15 0  1 573424 319672  27124 28504900    0    0    80  7746 3624 4949  2  0 82 16 0  1 573424 319180  27124 28504972    0    0    36 12489 3653 4761  2  0 86 12 0  1 573424 318184  27132 28505000    0    0     8 10482 3413 4898  0  0 87 13 0  1 573424 318424  27132 28505016    0    0     0  9564 2746 4290  0  0 
87 13 0  1 573424 318308  27168 28505016    0    0    36 10562 1895 2149  0  0 87 12 0  3 573424 318208  27168 28505020    0    0    84 18529 3035 3265  1  0 85 14 0  1 573424 318732  27176 28505080    0    0    84 14574 2986 3231  0  0 84 16 0  2 573424 317588  27176 28505184    0    0     4  6681 1991 2207  2  1 86 12 0  1 573424 316852  27176 28505260    0    0    76  7670 2910 3996  2  1 85 13 0  1 573424 316632  27184 28505256    0    0     0  7186 2661 3740  0  0 87 12 0  1 573424 316720  27188 28505260    0    0     0  2590 1731 2474  0  0 88 12 0  1 573424 314252  27192 28505696    0    0   460 11612 1757 2431  0  0 82 18 0  2 573424 313504  27192 28505724    0    0     0 19656 1775 2099  0  0 83 17 0  3 573424 313300  27196 28505780    0    0   188  6237 2746 3193  2  0 80 17 0  2 573424 312736  27200 28506348    0    0   804 18466 5014 6430  2  1 75 23 2 35 573424 307564  27200 28509920    0    0  3912 16280 14377 15470 14  3 28 56 0  5 573424 282848  27208 28533964    0    0  7484 27580 22017 25938 17  3 17 63 1  5 573424 221100  27208 28563360    0    0  2852  3120 19639 28664 12  5 52 31 0  4 573428 229912  26704 28519184    0    4  1208  5890 13976 20851 13  3 56 28 0  2 573448 234680  26672 28513632    0   20     0 17204 1694 2636  0  0 71 28 3  7 573452 220836  26644 28525548    0    4  1540 36370 27928 36551 17  5 50 27 1  3 573488 234380  26556 28517416    0   36   584 19066 8275 9467  3  2 60 36 0  1 573488 234496  26556 28517852    0    0    56 47429 3290 4310  0  0 79 206) sudo lsof - a hell of a lot of output, I can post it if anyone is interested :-)#### Notes and thoughts  ##############################################################################As you can see, even though I have moved the pg_xlog folder to the SSD array (md3) the by far largest amount of writes still goes to the regular HDD's (md2), which puzzles me - what can that be?From stat 3) (the iostat) I notice that the SSD's doesn't seem to be something near fully utilized - maybe something else than just pg_xlog could be moved her? I have no idea if the amount of reads/writes is  within the acceptable/capable for my kind of hardware, or if it is far beyond?In stat 3) (the iotop) it says that the RAID array (md2) is the most \"waiting\" part, does that taste like a root cause, or more like a symptom of some other bottleneck?Thanks, for taking the time to look at by data! :-)", "msg_date": "Mon, 10 Dec 2012 23:51:58 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Do I have a hardware or a software problem?" }, { "msg_contents": "On Dec 11, 2012, at 2:51 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\n> #### Pitch ######################################################################################\n> I previously posted this question http://archives.postgresql.org/pgsql-performance/2012-11/msg00289.php about a performance issue with an update query. \n> The question evolved into a more general discussion about my setup, and about a lot of I/O wait that I was encountering. 
Since then, I have gotten a whole lot more familiar with measuring things, and now I \"just\" need some experienced eyes to judge which direction I should go in - do I have a hardware issue, or a software issue - and what action should I take?\n> \n> ##### My setup #############################################################################\n> The use case:\n> At night time we are doing a LOT of data maintenance, and hence the load on the database is very different from the day time. However we would like to be able to do some of it in the daytime, it's simply just too \"heavy\" on the database as is right now. The stats shown below is from one of those \"heavy\" load times.\n> \n> Hardware: \n> - 32Gb ram \n> - 8 core Xeon E3-1245 processor\n> - Two SEAGATE ST33000650NS drives (called sdc and sdd in the stats) in a softeware RAID1 array (called md2 in the stats)\n> - Two INTEL SSDSC2CW240A3 SSD drives (called sda and sdb in the stats) in a software RAID1 (called md3 in the stats)\n> \n> Software:\n> Postgres 9.2 running on 64bit ubuntu 12.04 with kernel 3.2\n> \n> Configuration:\n> # postgresql.conf (a shortlist of everything changed from the default)\n> data_directory = '/var/lib/postgresql/9.2/main'\n> hba_file = '/etc/postgresql/9.2/main/pg_hba.conf'\n> ident_file = '/etc/postgresql/9.2/main/pg_ident.conf'\n> external_pid_file = '/var/run/postgresql/9.2-main.pid'\n> listen_addresses = '192.168.0.2, localhost'\n> port = 5432\n> max_connections = 300\n> unix_socket_directory = '/var/run/postgresql'\n> wal_level = hot_standby\n> synchronous_commit = off\n> archive_mode = on\n> archive_command = 'rsync -a %p [email protected]:/var/lib/postgresql/9.2/wals/%f </dev/null'\n> max_wal_senders = 1\n> wal_keep_segments = 32\n> log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '\n> datestyle = 'iso, mdy'\n> lc_monetary = 'en_US.UTF-8'\n> lc_numeric = 'en_US.UTF-8'\n> lc_time = 'en_US.UTF-8'\n> default_text_search_config = 'pg_catalog.english'\n> default_statistics_target = 100\n> maintenance_work_mem = 1GB\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 22GB\n> work_mem = 160MB\n> wal_buffers = 4MB\n> checkpoint_segments = 100\n> shared_buffers = 4GB\n> checkpoint_timeout = 10min\n> \n> The kernel has bee tweaked like so:\n> vm.dirty_ratio = 10\n> vm.dirty_background_ratio = 1\n> kernel.shmmax = 8589934592\n> kernel.shmall = 17179869184\n> \n> The pg_xlog folder has been moved onto the SSD array (md3), and symlinked back into the postgres dir.\n> \n\nActually, you should move xlog to rotating drives, since wal logs written sequentially, and everything else to ssd, because of random io pattern.\n\n\n> ##### The stats ###############################################################\n> These are the typical observations/stats I see in one of these periods:\n> \n> 1)\n> At top level this is what I see in new relic:\n> https://rpm.newrelic.com/public/charts/6ewGRle6bmc\n> \n> 2)\n> When the database is loaded like this, I see a lot of queries talking up to 1000 times as long, as they would when the database is not loaded so heavily.\n> \n> 3)\n> sudo iostat -dmx (typical usage)\n> Linux 3.2.0-33-generic (master-db) \t12/10/2012 \t_x86_64_\t(8 CPU)\n> \n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util\n> sda 0.00 6.52 3.59 26.61 0.22 0.74 65.49 0.01 0.40 0.77 0.35 0.14 0.43\n> sdb 0.00 8.31 0.03 28.38 0.00 0.97 69.63 0.01 0.52 0.27 0.52 0.15 0.43\n> sdc 1.71 46.01 34.83 116.62 0.56 4.06 62.47 1.90 12.57 21.81 9.81 1.89 28.66\n> sdd 1.67 46.14 
34.89 116.49 0.56 4.06 62.46 1.58 10.43 21.66 7.07 1.89 28.60\n> md1 0.00 0.00 0.00 0.00 0.00 0.00 2.69 0.00 0.00 0.00 0.00 0.00 0.00\n> md0 0.00 0.00 0.11 0.24 0.00 0.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00\n> md2 0.00 0.00 72.99 161.95 1.11 4.06 45.10 0.00 0.00 0.00 0.00 0.00 0.00\n> md3 0.00 0.00 0.05 32.32 0.00 0.74 47.00 0.00 0.00 0.00 0.00 0.00 0.00\n> \n> 3)\n> sudo iotop -oa (running for about a minute or so)\n> TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND\n> 292 be/4 root 0.00 B 0.00 B 0.00 % 99.33 % [md2_raid1]\n> 2815 be/4 postgres 19.51 M 25.90 M 0.00 % 45.49 % postgres: autovacuum worker process production\n> 32553 be/4 postgres 45.74 M 9.38 M 0.00 % 37.89 % postgres: user production 192.168.0.3(58866) UPDATE\n> 32570 be/4 postgres 6.91 M 35.02 M 0.00 % 16.71 % postgres: user production 192.168.0.3(35547) idle\n> 32575 be/4 postgres 4.06 M 43.90 M 0.00 % 16.62 % postgres: user production 192.168.0.3(35561) SELECT\n> 31673 be/4 postgres 4.14 M 52.16 M 0.00 % 16.24 % postgres: user production 192.168.0.3(39112) idle\n> 32566 be/4 postgres 4.73 M 44.95 M 0.00 % 15.66 % postgres: user production 192.168.0.3(35531) idle\n> 32568 be/4 postgres 4.50 M 33.84 M 0.00 % 14.62 % postgres: user production 192.168.0.3(35543) SELECT\n> 32573 be/4 postgres 3.20 M 34.44 M 0.00 % 13.98 % postgres: user production 192.168.0.3(35559) idle\n> 31590 be/4 postgres 3.23 M 29.72 M 0.00 % 13.90 % postgres: user production 192.168.0.3(50690) idle in transaction\n> 32577 be/4 postgres 5.09 M 25.54 M 0.00 % 13.63 % postgres: user production 192.168.0.3(35563) idle\n> 32565 be/4 postgres 2.06 M 35.93 M 0.00 % 13.41 % postgres: user production 192.168.0.3(35529) SELECT\n> 32546 be/4 postgres 4.48 M 36.49 M 0.00 % 13.39 % postgres: user production 192.168.0.3(56927) UPDATE waiting\n> 32569 be/4 postgres 3.50 M 26.75 M 0.00 % 12.82 % postgres: user production 192.168.0.3(35545) INSERT\n> 31671 be/4 postgres 4.58 M 24.45 M 0.00 % 12.76 % postgres: user production 192.168.0.3(34841) idle in transaction\n> 32551 be/4 postgres 3.26 M 31.77 M 0.00 % 12.06 % postgres: user production 192.168.0.3(58864) idle in transaction\n> 32574 be/4 postgres 5.32 M 32.92 M 0.00 % 11.70 % postgres: user production 192.168.0.3(35560) idle\n> 32572 be/4 postgres 3.00 M 32.66 M 0.00 % 11.66 % postgres: user production 192.168.0.3(35558) UPDATE\n> 32560 be/4 postgres 5.12 M 25.89 M 0.00 % 11.52 % postgres: user production 192.168.0.3(33886) SELECT\n> 32567 be/4 postgres 4.66 M 36.47 M 0.00 % 11.44 % postgres: user production 192.168.0.3(35534) SELECT\n> 32571 be/4 postgres 2.86 M 31.27 M 0.00 % 11.31 % postgres: user production 192.168.0.3(35557) SELECT\n> 32552 be/4 postgres 4.38 M 28.75 M 0.00 % 11.09 % postgres: user production 192.168.0.3(58865) idle in transaction\n> 32554 be/4 postgres 3.69 M 30.21 M 0.00 % 10.90 % postgres: user production 192.168.0.3(58870) UPDATE\n> 339 be/3 root 0.00 B 2.29 M 0.00 % 9.81 % [jbd2/md2-8]\n> 32576 be/4 postgres 3.37 M 19.91 M 0.00 % 9.73 % postgres: user production 192.168.0.3(35562) idle\n> 32555 be/4 postgres 3.09 M 31.96 M 0.00 % 9.02 % postgres: user production 192.168.0.3(58875) SELECT\n> 27548 be/4 postgres 0.00 B 97.12 M 0.00 % 7.41 % postgres: writer process\n> 31445 be/4 postgres 924.00 K 27.35 M 0.00 % 7.11 % postgres: user production 192.168.0.1(34536) idle\n> 31443 be/4 postgres 2.54 M 4.56 M 0.00 % 6.32 % postgres: user production 192.168.0.1(34508) idle\n> 31459 be/4 postgres 1480.00 K 21.36 M 0.00 % 5.63 % postgres: user production 192.168.0.1(34543) idle\n> 1801 be/4 
postgres 1896.00 K 10.89 M 0.00 % 5.57 % postgres: user production 192.168.0.3(34177) idle\n> 32763 be/4 postgres 1696.00 K 6.95 M 0.00 % 5.33 % postgres: user production 192.168.0.3(57984) SELECT\n> 1800 be/4 postgres 2.46 M 5.13 M 0.00 % 5.24 % postgres: user production 192.168.0.3(34175) SELECT\n> 1803 be/4 postgres 1816.00 K 9.09 M 0.00 % 5.16 % postgres: user production 192.168.0.3(34206) idle\n> 32578 be/4 postgres 2.57 M 11.62 M 0.00 % 5.06 % postgres: user production 192.168.0.3(35564) SELECT\n> 31440 be/4 postgres 3.02 M 4.04 M 0.00 % 4.65 % postgres: user production 192.168.0.1(34463) idle\n> 32605 be/4 postgres 1844.00 K 11.82 M 0.00 % 4.49 % postgres: user production 192.168.0.3(40399) idle\n> 27547 be/4 postgres 0.00 B 0.00 B 0.00 % 3.93 % postgres: checkpointer process\n> 31356 be/4 postgres 1368.00 K 3.27 M 0.00 % 3.93 % postgres: user production 192.168.0.1(34450) idle\n> 32542 be/4 postgres 1180.00 K 6.05 M 0.00 % 3.90 % postgres: user production 192.168.0.3(56859) idle\n> 32523 be/4 postgres 1088.00 K 4.33 M 0.00 % 3.59 % postgres: user production 192.168.0.3(48164) idle\n> 32606 be/4 postgres 1964.00 K 6.94 M 0.00 % 3.51 % postgres: user production 192.168.0.3(40426) SELECT\n> 31466 be/4 postgres 1596.00 K 3.11 M 0.00 % 3.47 % postgres: user production 192.168.0.1(34550) idle\n> 32544 be/4 postgres 1184.00 K 4.25 M 0.00 % 3.38 % postgres: user production 192.168.0.3(56861) idle\n> 31458 be/4 postgres 1088.00 K 1528.00 K 0.00 % 3.33 % postgres: user production 192.168.0.1(34541) idle\n> 31444 be/4 postgres 884.00 K 4.23 M 0.00 % 3.27 % postgres: user production 192.168.0.1(34510) idle\n> 32522 be/4 postgres 408.00 K 2.98 M 0.00 % 3.27 % postgres: user production 192.168.0.5(38361) idle\n> 32762 be/4 postgres 1156.00 K 5.28 M 0.00 % 3.20 % postgres: user production 192.168.0.3(57962) idle\n> 32582 be/4 postgres 1084.00 K 3.38 M 0.00 % 2.86 % postgres: user production 192.168.0.5(43104) idle\n> 31353 be/4 postgres 2.04 M 3.02 M 0.00 % 2.82 % postgres: user production 192.168.0.1(34444) idle\n> 31441 be/4 postgres 700.00 K 2.68 M 0.00 % 2.64 % postgres: user production 192.168.0.1(34465) idle\n> 31462 be/4 postgres 980.00 K 3.50 M 0.00 % 2.57 % postgres: user production 192.168.0.1(34547) idle\n> 32709 be/4 postgres 428.00 K 3.23 M 0.00 % 2.56 % postgres: user production 192.168.0.5(34323) idle\n> 685 be/4 postgres 748.00 K 3.59 M 0.00 % 2.41 % postgres: user production 192.168.0.3(34911) idle\n> 683 be/4 postgres 728.00 K 3.19 M 0.00 % 2.38 % postgres: user production 192.168.0.3(34868) idle\n> 32765 be/4 postgres 464.00 K 3.76 M 0.00 % 2.21 % postgres: user production 192.168.0.3(58074) idle\n> 32760 be/4 postgres 808.00 K 6.18 M 0.00 % 2.16 % postgres: user production 192.168.0.3(57958) idle\n> 1912 be/4 postgres 372.00 K 3.03 M 0.00 % 2.16 % postgres: user production 192.168.0.5(33743) idle\n> 31446 be/4 postgres 1004.00 K 2.09 M 0.00 % 2.16 % postgres: user production 192.168.0.1(34539) idle\n> 31460 be/4 postgres 584.00 K 2.74 M 0.00 % 2.10 % postgres: user production 192.168.0.1(34545) idle\n> \n> 5) vmstat 1\n> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 1 1 573424 321080 27124 28504352 0 0 143 618 0 4 2 0 91 7\n> 0 1 573424 320764 27124 28504496 0 0 104 15654 3788 4961 1 0 85 14\n> 0 1 573424 320684 27124 28504616 0 0 276 12736 4099 5374 0 1 84 15\n> 0 1 573424 319672 27124 28504900 0 0 80 7746 3624 4949 2 0 82 16\n> 0 1 573424 319180 27124 28504972 0 0 36 12489 3653 4761 
2 0 86 12\n> 0 1 573424 318184 27132 28505000 0 0 8 10482 3413 4898 0 0 87 13\n> 0 1 573424 318424 27132 28505016 0 0 0 9564 2746 4290 0 0 87 13\n> 0 1 573424 318308 27168 28505016 0 0 36 10562 1895 2149 0 0 87 12\n> 0 3 573424 318208 27168 28505020 0 0 84 18529 3035 3265 1 0 85 14\n> 0 1 573424 318732 27176 28505080 0 0 84 14574 2986 3231 0 0 84 16\n> 0 2 573424 317588 27176 28505184 0 0 4 6681 1991 2207 2 1 86 12\n> 0 1 573424 316852 27176 28505260 0 0 76 7670 2910 3996 2 1 85 13\n> 0 1 573424 316632 27184 28505256 0 0 0 7186 2661 3740 0 0 87 12\n> 0 1 573424 316720 27188 28505260 0 0 0 2590 1731 2474 0 0 88 12\n> 0 1 573424 314252 27192 28505696 0 0 460 11612 1757 2431 0 0 82 18\n> 0 2 573424 313504 27192 28505724 0 0 0 19656 1775 2099 0 0 83 17\n> 0 3 573424 313300 27196 28505780 0 0 188 6237 2746 3193 2 0 80 17\n> 0 2 573424 312736 27200 28506348 0 0 804 18466 5014 6430 2 1 75 23\n> 2 35 573424 307564 27200 28509920 0 0 3912 16280 14377 15470 14 3 28 56\n> 0 5 573424 282848 27208 28533964 0 0 7484 27580 22017 25938 17 3 17 63\n> 1 5 573424 221100 27208 28563360 0 0 2852 3120 19639 28664 12 5 52 31\n> 0 4 573428 229912 26704 28519184 0 4 1208 5890 13976 20851 13 3 56 28\n> 0 2 573448 234680 26672 28513632 0 20 0 17204 1694 2636 0 0 71 28\n> 3 7 573452 220836 26644 28525548 0 4 1540 36370 27928 36551 17 5 50 27\n> 1 3 573488 234380 26556 28517416 0 36 584 19066 8275 9467 3 2 60 36\n> 0 1 573488 234496 26556 28517852 0 0 56 47429 3290 4310 0 0 79 20\n> \n> 6) sudo lsof - a hell of a lot of output, I can post it if anyone is interested :-)\n> \n> #### Notes and thoughts ##############################################################################\n> \n> As you can see, even though I have moved the pg_xlog folder to the SSD array (md3) the by far largest amount of writes still goes to the regular HDD's (md2), which puzzles me - what can that be?\n> From stat 3) (the iostat) I notice that the SSD's doesn't seem to be something near fully utilized - maybe something else than just pg_xlog could be moved her? \n> I have no idea if the amount of reads/writes is within the acceptable/capable for my kind of hardware, or if it is far beyond?\n> In stat 3) (the iotop) it says that the RAID array (md2) is the most \"waiting\" part, does that taste like a root cause, or more like a symptom of some other bottleneck?\n> \n> Thanks, for taking the time to look at by data! :-)\n\n\nOn Dec 11, 2012, at 2:51 AM, Niels Kristian Schjødt <[email protected]> wrote:#### Pitch ######################################################################################I previously posted this question http://archives.postgresql.org/pgsql-performance/2012-11/msg00289.php about a performance issue with an update query. The question evolved into a more general discussion about my setup, and about a lot of I/O wait that I was encountering. Since then, I have gotten a whole lot more familiar with measuring things, and now I \"just\" need some experienced eyes to judge which direction I should go in - do I have a hardware issue, or a software issue - and what action should I take?#####  My setup #############################################################################The use case:At night time we are doing a LOT of data maintenance, and hence the load on the database is very different from the day time. However we would like to be able to do some of it in the daytime, it's simply just too \"heavy\" on the database as is right now. 
The stats shown below is from one of those \"heavy\" load times.Hardware:   - 32Gb ram   - 8 core Xeon E3-1245 processor  - Two SEAGATE ST33000650NS drives (called sdc and sdd in the stats) in a softeware RAID1 array (called md2 in the stats)  - Two INTEL SSDSC2CW240A3 SSD drives (called sda and sdb in the stats) in a software RAID1 (called md3 in the stats)Software:Postgres 9.2 running on 64bit ubuntu 12.04 with kernel 3.2Configuration:# postgresql.conf (a shortlist of everything changed from the default)data_directory = '/var/lib/postgresql/9.2/main'hba_file = '/etc/postgresql/9.2/main/pg_hba.conf'ident_file = '/etc/postgresql/9.2/main/pg_ident.conf'external_pid_file = '/var/run/postgresql/9.2-main.pid'listen_addresses = '192.168.0.2, localhost'port = 5432max_connections = 300unix_socket_directory = '/var/run/postgresql'wal_level = hot_standbysynchronous_commit = offarchive_mode = onarchive_command = 'rsync -a %p [email protected]:/var/lib/postgresql/9.2/wals/%f </dev/null'max_wal_senders = 1wal_keep_segments = 32log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d 'datestyle = 'iso, mdy'lc_monetary = 'en_US.UTF-8'lc_numeric = 'en_US.UTF-8'lc_time = 'en_US.UTF-8'default_text_search_config = 'pg_catalog.english'default_statistics_target = 100maintenance_work_mem = 1GBcheckpoint_completion_target = 0.9effective_cache_size = 22GBwork_mem = 160MBwal_buffers = 4MBcheckpoint_segments = 100shared_buffers = 4GBcheckpoint_timeout = 10minThe kernel has bee tweaked like so:vm.dirty_ratio = 10vm.dirty_background_ratio = 1kernel.shmmax = 8589934592kernel.shmall = 17179869184The pg_xlog folder has been moved onto the SSD array (md3), and symlinked back into the postgres dir.Actually, you should move xlog to rotating drives, since wal logs written sequentially, and everything else to ssd, because of random io pattern.##### The stats ###############################################################These are the typical observations/stats I see in one of these periods:1)At top level this is what I see in new relic:https://rpm.newrelic.com/public/charts/6ewGRle6bmc2)When the database is loaded like this, I see a lot of queries talking up to 1000 times as long, as they would when the database is not loaded so heavily.3)sudo iostat -dmx (typical usage)Linux 3.2.0-33-generic (master-db)  12/10/2012  _x86_64_ (8 CPU)Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %utilsda               0.00     6.52    3.59   26.61     0.22     0.74    65.49     0.01    0.40    0.77    0.35   0.14   0.43sdb               0.00     8.31    0.03   28.38     0.00     0.97    69.63     0.01    0.52    0.27    0.52   0.15   0.43sdc               1.71    46.01   34.83  116.62     0.56     4.06    62.47     1.90   12.57   21.81    9.81   1.89  28.66sdd               1.67    46.14   34.89  116.49     0.56     4.06    62.46     1.58   10.43   21.66    7.07   1.89  28.60md1               0.00     0.00    0.00    0.00     0.00     0.00     2.69     0.00    0.00    0.00    0.00   0.00   0.00md0               0.00     0.00    0.11    0.24     0.00     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00md2               0.00     0.00   72.99  161.95     1.11     4.06    45.10     0.00    0.00    0.00    0.00   0.00   0.00md3               0.00     0.00    0.05   32.32     0.00     0.74    47.00     0.00    0.00    0.00    0.00   0.00   0.003)sudo iotop -oa (running for about a minute or so)TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND  292    be/4 
root               0.00 B      0.00 B    0.00 % 99.33 % [md2_raid1] 2815  be/4 postgres     19.51 M     25.90 M  0.00 % 45.49 % postgres: autovacuum worker process   production32553 be/4 postgres     45.74 M      9.38 M  0.00 % 37.89 % postgres: user production 192.168.0.3(58866) UPDATE32570 be/4 postgres      6.91 M     35.02 M  0.00 % 16.71 % postgres: user production 192.168.0.3(35547) idle32575 be/4 postgres      4.06 M     43.90 M  0.00 % 16.62 % postgres: user production 192.168.0.3(35561) SELECT31673 be/4 postgres      4.14 M     52.16 M  0.00 % 16.24 % postgres: user production 192.168.0.3(39112) idle32566 be/4 postgres      4.73 M     44.95 M  0.00 % 15.66 % postgres: user production 192.168.0.3(35531) idle32568 be/4 postgres      4.50 M     33.84 M  0.00 % 14.62 % postgres: user production 192.168.0.3(35543) SELECT32573 be/4 postgres      3.20 M     34.44 M  0.00 % 13.98 % postgres: user production 192.168.0.3(35559) idle31590 be/4 postgres      3.23 M     29.72 M  0.00 % 13.90 % postgres: user production 192.168.0.3(50690) idle in transaction32577 be/4 postgres      5.09 M     25.54 M  0.00 % 13.63 % postgres: user production 192.168.0.3(35563) idle32565 be/4 postgres      2.06 M     35.93 M  0.00 % 13.41 % postgres: user production 192.168.0.3(35529) SELECT32546 be/4 postgres      4.48 M     36.49 M  0.00 % 13.39 % postgres: user production 192.168.0.3(56927) UPDATE waiting32569 be/4 postgres      3.50 M     26.75 M  0.00 % 12.82 % postgres: user production 192.168.0.3(35545) INSERT31671 be/4 postgres      4.58 M     24.45 M  0.00 % 12.76 % postgres: user production 192.168.0.3(34841) idle in transaction32551 be/4 postgres      3.26 M     31.77 M  0.00 % 12.06 % postgres: user production 192.168.0.3(58864) idle in transaction32574 be/4 postgres      5.32 M     32.92 M  0.00 % 11.70 % postgres: user production 192.168.0.3(35560) idle32572 be/4 postgres      3.00 M     32.66 M  0.00 % 11.66 % postgres: user production 192.168.0.3(35558) UPDATE32560 be/4 postgres      5.12 M     25.89 M  0.00 % 11.52 % postgres: user production 192.168.0.3(33886) SELECT32567 be/4 postgres      4.66 M     36.47 M  0.00 % 11.44 % postgres: user production 192.168.0.3(35534) SELECT32571 be/4 postgres      2.86 M     31.27 M  0.00 % 11.31 % postgres: user production 192.168.0.3(35557) SELECT32552 be/4 postgres      4.38 M     28.75 M  0.00 % 11.09 % postgres: user production 192.168.0.3(58865) idle in transaction32554 be/4 postgres      3.69 M     30.21 M  0.00 % 10.90 % postgres: user production 192.168.0.3(58870) UPDATE  339    be/3 root               0.00 B       2.29 M  0.00 %  9.81 % [jbd2/md2-8]32576 be/4 postgres      3.37 M     19.91 M  0.00 %  9.73 % postgres: user production 192.168.0.3(35562) idle32555 be/4 postgres      3.09 M     31.96 M  0.00 %  9.02 % postgres: user production 192.168.0.3(58875) SELECT27548 be/4 postgres      0.00 B     97.12 M  0.00 %  7.41 % postgres: writer process31445 be/4 postgres    924.00 K     27.35 M  0.00 %  7.11 % postgres: user production 192.168.0.1(34536) idle31443 be/4 postgres      2.54 M      4.56 M  0.00 %  6.32 % postgres: user production 192.168.0.1(34508) idle31459 be/4 postgres   1480.00 K     21.36 M  0.00 %  5.63 % postgres: user production 192.168.0.1(34543) idle 1801 be/4 postgres   1896.00 K     10.89 M  0.00 %  5.57 % postgres: user production 192.168.0.3(34177) idle32763 be/4 postgres   1696.00 K      6.95 M  0.00 %  5.33 % postgres: user production 192.168.0.3(57984) SELECT 1800 be/4 postgres      2.46 M      5.13 M  0.00 %  5.24 % 
postgres: user production 192.168.0.3(34175) SELECT 1803 be/4 postgres   1816.00 K      9.09 M  0.00 %  5.16 % postgres: user production 192.168.0.3(34206) idle32578 be/4 postgres      2.57 M     11.62 M  0.00 %  5.06 % postgres: user production 192.168.0.3(35564) SELECT31440 be/4 postgres      3.02 M      4.04 M  0.00 %  4.65 % postgres: user production 192.168.0.1(34463) idle32605 be/4 postgres   1844.00 K     11.82 M  0.00 %  4.49 % postgres: user production 192.168.0.3(40399) idle27547 be/4 postgres      0.00 B      0.00 B  0.00 %  3.93 % postgres: checkpointer process31356 be/4 postgres   1368.00 K      3.27 M  0.00 %  3.93 % postgres: user production 192.168.0.1(34450) idle32542 be/4 postgres   1180.00 K      6.05 M  0.00 %  3.90 % postgres: user production 192.168.0.3(56859) idle32523 be/4 postgres   1088.00 K      4.33 M  0.00 %  3.59 % postgres: user production 192.168.0.3(48164) idle32606 be/4 postgres   1964.00 K      6.94 M  0.00 %  3.51 % postgres: user production 192.168.0.3(40426) SELECT31466 be/4 postgres   1596.00 K      3.11 M  0.00 %  3.47 % postgres: user production 192.168.0.1(34550) idle32544 be/4 postgres   1184.00 K      4.25 M  0.00 %  3.38 % postgres: user production 192.168.0.3(56861) idle31458 be/4 postgres   1088.00 K   1528.00 K  0.00 %  3.33 % postgres: user production 192.168.0.1(34541) idle31444 be/4 postgres    884.00 K      4.23 M  0.00 %  3.27 % postgres: user production 192.168.0.1(34510) idle32522 be/4 postgres    408.00 K      2.98 M  0.00 %  3.27 % postgres: user production 192.168.0.5(38361) idle32762 be/4 postgres   1156.00 K      5.28 M  0.00 %  3.20 % postgres: user production 192.168.0.3(57962) idle32582 be/4 postgres   1084.00 K      3.38 M  0.00 %  2.86 % postgres: user production 192.168.0.5(43104) idle31353 be/4 postgres      2.04 M      3.02 M  0.00 %  2.82 % postgres: user production 192.168.0.1(34444) idle31441 be/4 postgres    700.00 K      2.68 M  0.00 %  2.64 % postgres: user production 192.168.0.1(34465) idle31462 be/4 postgres    980.00 K      3.50 M  0.00 %  2.57 % postgres: user production 192.168.0.1(34547) idle32709 be/4 postgres    428.00 K      3.23 M  0.00 %  2.56 % postgres: user production 192.168.0.5(34323) idle  685 be/4 postgres    748.00 K      3.59 M  0.00 %  2.41 % postgres: user production 192.168.0.3(34911) idle  683 be/4 postgres    728.00 K      3.19 M  0.00 %  2.38 % postgres: user production 192.168.0.3(34868) idle32765 be/4 postgres    464.00 K      3.76 M  0.00 %  2.21 % postgres: user production 192.168.0.3(58074) idle32760 be/4 postgres    808.00 K      6.18 M  0.00 %  2.16 % postgres: user production 192.168.0.3(57958) idle 1912 be/4 postgres    372.00 K      3.03 M  0.00 %  2.16 % postgres: user production 192.168.0.5(33743) idle31446 be/4 postgres   1004.00 K      2.09 M  0.00 %  2.16 % postgres: user production 192.168.0.1(34539) idle31460 be/4 postgres    584.00 K      2.74 M  0.00 %  2.10 % postgres: user production 192.168.0.1(34545) idle5) vmstat 1procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa 1  1 573424 321080  27124 28504352    0    0   143   618    0    4  2  0 91  7 0  1 573424 320764  27124 28504496    0    0   104 15654 3788 4961  1  0 85 14 0  1 573424 320684  27124 28504616    0    0   276 12736 4099 5374  0  1 84 15 0  1 573424 319672  27124 28504900    0    0    80  7746 3624 4949  2  0 82 16 0  1 573424 319180  27124 28504972    0    0    36 12489 3653 4761  2  0 86 12 0  1 573424 
318184  27132 28505000    0    0     8 10482 3413 4898  0  0 87 13 0  1 573424 318424  27132 28505016    0    0     0  9564 2746 4290  0  0 87 13 0  1 573424 318308  27168 28505016    0    0    36 10562 1895 2149  0  0 87 12 0  3 573424 318208  27168 28505020    0    0    84 18529 3035 3265  1  0 85 14 0  1 573424 318732  27176 28505080    0    0    84 14574 2986 3231  0  0 84 16 0  2 573424 317588  27176 28505184    0    0     4  6681 1991 2207  2  1 86 12 0  1 573424 316852  27176 28505260    0    0    76  7670 2910 3996  2  1 85 13 0  1 573424 316632  27184 28505256    0    0     0  7186 2661 3740  0  0 87 12 0  1 573424 316720  27188 28505260    0    0     0  2590 1731 2474  0  0 88 12 0  1 573424 314252  27192 28505696    0    0   460 11612 1757 2431  0  0 82 18 0  2 573424 313504  27192 28505724    0    0     0 19656 1775 2099  0  0 83 17 0  3 573424 313300  27196 28505780    0    0   188  6237 2746 3193  2  0 80 17 0  2 573424 312736  27200 28506348    0    0   804 18466 5014 6430  2  1 75 23 2 35 573424 307564  27200 28509920    0    0  3912 16280 14377 15470 14  3 28 56 0  5 573424 282848  27208 28533964    0    0  7484 27580 22017 25938 17  3 17 63 1  5 573424 221100  27208 28563360    0    0  2852  3120 19639 28664 12  5 52 31 0  4 573428 229912  26704 28519184    0    4  1208  5890 13976 20851 13  3 56 28 0  2 573448 234680  26672 28513632    0   20     0 17204 1694 2636  0  0 71 28 3  7 573452 220836  26644 28525548    0    4  1540 36370 27928 36551 17  5 50 27 1  3 573488 234380  26556 28517416    0   36   584 19066 8275 9467  3  2 60 36 0  1 573488 234496  26556 28517852    0    0    56 47429 3290 4310  0  0 79 206) sudo lsof - a hell of a lot of output, I can post it if anyone is interested :-)#### Notes and thoughts  ##############################################################################As you can see, even though I have moved the pg_xlog folder to the SSD array (md3) the by far largest amount of writes still goes to the regular HDD's (md2), which puzzles me - what can that be?From stat 3) (the iostat) I notice that the SSD's doesn't seem to be something near fully utilized - maybe something else than just pg_xlog could be moved her? I have no idea if the amount of reads/writes is  within the acceptable/capable for my kind of hardware, or if it is far beyond?In stat 3) (the iotop) it says that the RAID array (md2) is the most \"waiting\" part, does that taste like a root cause, or more like a symptom of some other bottleneck?Thanks, for taking the time to look at by data! :-)", "msg_date": "Tue, 11 Dec 2012 03:00:51 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Mon, Dec 10, 2012 at 2:51 PM, Niels Kristian Schjødt\n<[email protected]> wrote:\n\n> synchronous_commit = off\n>\n> The pg_xlog folder has been moved onto the SSD array (md3), and symlinked\n> back into the postgres dir.\n\nWith synchronous_commit = off, or with large transactions, there is\nprobably no advantage to moving those to SSD.\n\n\n> 2)\n> When the database is loaded like this, I see a lot of queries talking up to\n> 1000 times as long, as they would when the database is not loaded so\n> heavily.\n\nWhat kinds of queries are they? 
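One way to get a first-cut answer to "where do all those md2 writes come from" is the background writer statistics view, which splits buffer writes between checkpoints, the background writer and the backends themselves. A minimal sketch, run from psql on the loaded server (standard 9.2 catalog view, nothing installation-specific):

    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint,   -- buffers written out by checkpoints
           buffers_clean,        -- buffers written out by the background writer
           buffers_backend       -- buffers written out directly by backends
    FROM pg_stat_bgwriter;

A large buffers_backend count relative to the other two would mean ordinary backends are having to evict dirty buffers themselves, i.e. the dirty working set does not fit in shared_buffers.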
single-row look-ups, full table scans, etc.\n\n\n>\n> #### Notes and thoughts\n> ##############################################################################\n>\n> As you can see, even though I have moved the pg_xlog folder to the SSD array\n> (md3) the by far largest amount of writes still goes to the regular HDD's\n> (md2), which puzzles me - what can that be?\n\nEvery row you insert or non-HOT update has to do maintenance on all\nindexes of that table. If the rows are not inserted/updated in index\norder, this means you every row inserted/updated dirties a randomly\nscattered 8KB for each of the indexes. If you have lots of indexes\nper table, that adds up fast.\n\nThe fact that there is much more writing than reading tells me that\nmost of your indexes are in RAM. The amount of index you are rapidly\nreading and dirtying is large enough to fit in RAM, but is not large\nenough to fit in shared_buffers + kernel's dirty-buffer comfort level.\n So you are redirtying the same blocks over and over, PG is\ndesperately dumping them to the kernel (because shared_buffers it too\nsmall to hold them) and the kernel is desperately dumping them to\ndisk, because vm.dirty_background_ratio is so low. There is little\nopportunity for write-combining, because they don't sit in memory long\nenough to accumulate neighbors.\n\nHow big are your indexes?\n\nYou could really crank up shared_buffers or vm.dirty_background_ratio,\nbut doing so might cause problems with checkpoints stalling and\nlatency spikes. That would probably not be a problem during the\nnight, but could be during the day.\n\nRather than moving maintenance to the day and hoping it doesn't\ninterfere with normal operations, I'd focus on making night-time\nmaintenance more efficient, for example by dropping indexes (either\njust at night, or if some indexes are not useful, just get rid of them\naltogether), or cranking up shared_buffers at night, or maybe\npartitioning or look into pg_bulkload.\n\n> From stat 3) (the iostat) I notice that the SSD's doesn't seem to be\n> something near fully utilized - maybe something else than just pg_xlog could\n> be moved her?\n\nI don't know how big each disk is, or how big your various categories\nof data are. Could you move everything to SSD? Could you move all\nyour actively updated indexes there?\n\nOr, more fundamentally, it looks like you spent too much on CPUs (86%\nidle) and not nearly enough on disks. Maybe you can fix that for less\nmoney than it will cost you in your optimization time to make the best\nof the disks you already have.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 10 Dec 2012 15:58:29 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "Den 11/12/2012 kl. 
00.58 skrev Jeff Janes <[email protected]>:\n\n> On Mon, Dec 10, 2012 at 2:51 PM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n> \n>> synchronous_commit = off\n>> \n>> The pg_xlog folder has been moved onto the SSD array (md3), and symlinked\n>> back into the postgres dir.\n> \n> With synchronous_commit = off, or with large transactions, there is\n> probably no advantage to moving those to SSD.\n> \n> \n>> 2)\n>> When the database is loaded like this, I see a lot of queries talking up to\n>> 1000 times as long, as they would when the database is not loaded so\n>> heavily.\n> \n> What kinds of queries are they? single-row look-ups, full table scans, etc.\nWell Mostly they are updates. Like the one shown in the previous question I referenced.\n>> \n>> #### Notes and thoughts\n>> ##############################################################################\n>> \n>> As you can see, even though I have moved the pg_xlog folder to the SSD array\n>> (md3) the by far largest amount of writes still goes to the regular HDD's\n>> (md2), which puzzles me - what can that be?\n> \n> Every row you insert or non-HOT update has to do maintenance on all\n> indexes of that table. If the rows are not inserted/updated in index\n> order, this means you every row inserted/updated dirties a randomly\n> scattered 8KB for each of the indexes. If you have lots of indexes\n> per table, that adds up fast.\n> \n> The fact that there is much more writing than reading tells me that\n> most of your indexes are in RAM. The amount of index you are rapidly\n> reading and dirtying is large enough to fit in RAM, but is not large\n> enough to fit in shared_buffers + kernel's dirty-buffer comfort level.\nMaybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n> So you are redirtying the same blocks over and over, PG is\n> desperately dumping them to the kernel (because shared_buffers it too\n> small to hold them) and the kernel is desperately dumping them to\n> disk, because vm.dirty_background_ratio is so low. There is little\n> opportunity for write-combining, because they don't sit in memory long\n> enough to accumulate neighbors.\n> \n> How big are your indexes?\nThis is a size list of all my indexes: 117 MB, 118 MB, 11 MB, 12 MB, 12 MB, 12 MB, 12 MB, 140 MB, 15 MB, 15 MB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 kB, 16 MB, 16 MB, 176 kB, 176 kB, 17 MB, 18 MB, 19 MB, 23 MB, 240 kB, 24 MB, 256 kB, 25 MB, 25 MB, 26 MB, 26 MB, 27 MB, 27 MB, 27 MB, 27 MB, 280 MB, 2832 kB, 2840 kB, 288 kB, 28 MB, 28 MB, 28 MB, 28 MB, 28 MB, 28 MB, 28 MB, 28 MB, 29 MB, 29 MB, 3152 kB, 3280 kB, 32 kB, 32 MB, 32 MB, 3352 kB, 3456 kB, 34 MB, 36 MB, 3744 kB, 3776 kB, 37 MB, 37 MB, 3952 kB, 400 kB, 408 kB, 40 kB, 40 kB, 40 kB, 416 kB, 416 kB, 42 MB, 432 kB, 4520 kB, 4720 kB, 47 MB, 48 kB, 496 kB, 49 MB, 512 kB, 52 MB, 52 MB, 5304 kB, 5928 kB, 6088 kB, 61 MB, 6224 kB, 62 MB, 6488 kB, 64 kB, 6512 kB, 71 MB, 72 kB, 72 kB, 8192 bytes, 8400 kB, 88 MB, 95 MB, 98 MB\n> You could really crank up shared_buffers or vm.dirty_background_ratio,\n> but doing so might cause problems with checkpoints stalling and\n> latency spikes. That would probably not be a problem during the\n> night, but could be during the day.\nWhat do you have in mind here? 
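For reference, a per-index size list like the one pasted above can be generated straight from the catalogs instead of being collected by hand; a rough sketch using only standard functions:

    SELECT c.relname AS index_name,
           pg_size_pretty(pg_relation_size(c.oid)) AS size
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'i'
      AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
    ORDER BY pg_relation_size(c.oid) DESC;

Sorting descending makes it easy to see which handful of indexes account for most of the total.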
Tweaking what parameters to what values?\n> .\n> Rather than moving maintenance to the day and hoping it doesn't\n> interfere with normal operations, I'd focus on making night-time\n> maintenance more efficient, for example by dropping indexes (either\n> just at night, or if some indexes are not useful, just get rid of them\n> altogether), or cranking up shared_buffers at night, or maybe\n> partitioning or look into pg_bulkload.\n> \n>> From stat 3) (the iostat) I notice that the SSD's doesn't seem to be\n>> something near fully utilized - maybe something else than just pg_xlog could\n>> be moved her?\n> \n> I don't know how big each disk is, or how big your various categories\n> of data are. Could you move everything to SSD? Could you move all\n> your actively updated indexes there?\nWith table spaces you mean?\n> Or, more fundamentally, it looks like you spent too much on CPUs (86%\n> idle) and not nearly enough on disks. Maybe you can fix that for less\n> money than it will cost you in your optimization time to make the best\n> of the disks you already have.\nThe SSD's I use a are 240Gb each which will grow too small within a few months - so - how does moving the whole data dir onto four of those in a RAID5 array sound?\n> \n> Cheers,\n> \n> Jeff\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 11:04:46 +0100", "msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>\n> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\nFor an update-mostly workload it probably won't do you tons of good so\nlong as all your indexes fit in RAM. You're clearly severely\nbottlenecked on disk I/O not RAM.\n> The SSD's I use a are 240Gb each which will grow too small within a\n> few months - so - how does moving the whole data dir onto four of\n> those in a RAID5 array sound? \n\nNot RAID 5!\n\nUse a RAID10 of four or six SSDs.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 21:29:00 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "\nDen 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n\n> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>> \n>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n> For an update-mostly workload it probably won't do you tons of good so\n> long as all your indexes fit in RAM. You're clearly severely\n> bottlenecked on disk I/O not RAM.\n>> The SSD's I use a are 240Gb each which will grow too small within a\n>> few months - so - how does moving the whole data dir onto four of\n>> those in a RAID5 array sound? 
\n> \n> Not RAID 5!\n> \n> Use a RAID10 of four or six SSDs.\n> \n> -- \n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n> \nHehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC \nAre they suitable do you think?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 14:35:09 +0100", "msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "\nOn Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n\n> \n> Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n> \n>> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>>> \n>>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> For an update-mostly workload it probably won't do you tons of good so\n>> long as all your indexes fit in RAM. You're clearly severely\n>> bottlenecked on disk I/O not RAM.\n>>> The SSD's I use a are 240Gb each which will grow too small within a\n>>> few months - so - how does moving the whole data dir onto four of\n>>> those in a RAID5 array sound? \n>> \n>> Not RAID 5!\n>> \n>> Use a RAID10 of four or six SSDs.\n>> \n>> -- \n>> Craig Ringer http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>> \n> Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC \n> Are they suitable do you think?\n> \n\nI am not Craig, but i use them in production in raid10 array now.\n\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 21:15:59 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Tue, Dec 11, 2012 at 2:04 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Den 11/12/2012 kl. 00.58 skrev Jeff Janes <[email protected]>:\n>\n>>\n>> The fact that there is much more writing than reading tells me that\n>> most of your indexes are in RAM. The amount of index you are rapidly\n>> reading and dirtying is large enough to fit in RAM, but is not large\n>> enough to fit in shared_buffers + kernel's dirty-buffer comfort level.\n\n> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n\nWhat tool do you use to determine that? Is that on top of the 4GB\nshared_buffers, are including it?\n\nHow big is your entire data set? Maybe all your data fits in 5GB\n(believable, as all your indexes listed below sum to < 2.5GB) so there\nis no need to use more.\n\nOr maybe you have hit an bug in the 3.2 kernel. 
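To answer the data-set-size question concretely, the totals can be read straight out of the catalogs; a sketch with standard functions only:

    SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;

    SELECT relname,
           pg_size_pretty(pg_table_size(oid))   AS heap_size,
           pg_size_pretty(pg_indexes_size(oid)) AS index_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 20;

If the grand total really is only a few GB, then the "only 5Gb of 32Gb in use" observation is simply the data set being fully cached, not memory going to waste.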
At least one of those\nhas been frequently discussed.\n\n\n>> You could really crank up shared_buffers or vm.dirty_background_ratio,\n>> but doing so might cause problems with checkpoints stalling and\n>> latency spikes. That would probably not be a problem during the\n>> night, but could be during the day.\n\n> What do you have in mind here? Tweaking what parameters to what values?\n\nI'd set shared_buffers to 20GB (or 10GB, if that will hold all of your\ndata) and see what happens. And probably increase checkpoint_timeout\nand checkpoint_segments about 3x each. Also, turn on log_checkpoints\nso you can see what kinds of problem those changes may be causing\nthere (i.e. long sync times). Preferably you do this on some kind of\npre-production or test server.\n\nBut if your database is growing so rapidly that it soon won't fit on\n240GB, then cranking up shared_buffers won't do for long. If you can\nget your tables and all of their indexes clustered together, then you\ncan do the updates in an order that makes IO more efficient. Maybe\npartitioning would help.\n\n\n>> I don't know how big each disk is, or how big your various categories\n>> of data are. Could you move everything to SSD? Could you move all\n>> your actively updated indexes there?\n\n> With table spaces you mean?\n\nYes. Or moving everything to SSD if it fits, then you don't have go\nthrough and separate objects.\n\nThe UPDATE you posted in a previous thread looked like the table\nblocks might also be getting dirtied in a fairly random order, which\nmeans the table blocks are in the same condition as the index blocks\nso maybe singling out the indexes isn't warranted.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 09:25:21 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "And what is your experience so far?\nDen 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n\n>\n> On Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <\n> [email protected]> wrote:\n>\n> >\n> > Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n> >\n> >> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n> >>>\n> >>> Maybe I should mention, that I never see more than max 5Gb out of my\n> total 32Gb being in use on the server… Can I somehow utilize more of it?\n> >> For an update-mostly workload it probably won't do you tons of good so\n> >> long as all your indexes fit in RAM. 
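Spelled out against the values posted earlier, the experiment suggested above would look roughly like this in postgresql.conf (hypothetical test values - try them on a pre-production box first and watch the checkpoint log lines):

    shared_buffers = 20GB          # or 10GB, if the whole data set fits in that
    checkpoint_timeout = 30min     # ~3x the current 10min
    checkpoint_segments = 300      # ~3x the current 100
    log_checkpoints = on           # shows checkpoint write/sync times in the log

The risk to watch for is exactly the one mentioned above: with that much dirty data buffered, checkpoints can take a long time to sync and cause latency spikes.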
You're clearly severely\n> >> bottlenecked on disk I/O not RAM.\n> >>> The SSD's I use a are 240Gb each which will grow too small within a\n> >>> few months - so - how does moving the whole data dir onto four of\n> >>> those in a RAID5 array sound?\n> >>\n> >> Not RAID 5!\n> >>\n> >> Use a RAID10 of four or six SSDs.\n> >>\n> >> --\n> >> Craig Ringer http://www.2ndQuadrant.com/\n> >> PostgreSQL Development, 24x7 Support, Training & Services\n> >>\n> > Hehe got it - did you have a look at the SSD's I am considering building\n> it of?\n> http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n> > Are they suitable do you think?\n> >\n>\n> I am not Craig, but i use them in production in raid10 array now.\n>\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nAnd what is your experience so far?\nDen 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n\nOn Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n\n>\n> Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n>\n>> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>>>\n>>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> For an update-mostly workload it probably won't do you tons of good so\n>> long as all your indexes fit in RAM. You're clearly severely\n>> bottlenecked on disk I/O not RAM.\n>>> The SSD's I use a are 240Gb each which will grow too small within a\n>>> few months - so - how does moving the whole data dir onto four of\n>>> those in a RAID5 array sound?\n>>\n>> Not RAID 5!\n>>\n>> Use a RAID10 of four or six SSDs.\n>>\n>> --\n>> Craig Ringer                   http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>>\n> Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n\n> Are they suitable do you think?\n>\n\nI am not Craig, but i use them in production in raid10 array now.\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 11 Dec 2012 19:54:28 +0100", "msg_from": "=?ISO-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Dec 11, 2012, at 10:54 PM, Niels Kristian Schjødt <[email protected]> wrote:\n\n> And what is your experience so far?\n> \nIncreased tps by a factor of 10, database no longer a limiting factor of application.\nAnd it is cheaper than brand rotating drives.\n\n\n> Den 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n> \n> On Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n> \n> >\n> > Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n> >\n> >> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n> >>>\n> >>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n> >> For an update-mostly workload it probably won't do you tons of good so\n> >> long as all your indexes fit in RAM. 
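If only part of the data can live on the SSD array, tablespaces are the usual way to move individual hot tables or indexes there; a minimal sketch where the directory and object names are purely hypothetical:

    -- directory must exist, be empty, and be owned by the postgres OS user
    CREATE TABLESPACE ssd LOCATION '/ssd/pg_tblspc';

    -- move one actively updated index (example name, not from this thread)
    ALTER INDEX some_hot_index SET TABLESPACE ssd;

    -- or move a table's heap (its indexes stay put unless moved separately)
    ALTER TABLE some_hot_table SET TABLESPACE ssd;

Note that moving a relation rewrites it and holds an exclusive lock for the duration, so it is a job for a quiet period.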
You're clearly severely\n> >> bottlenecked on disk I/O not RAM.\n> >>> The SSD's I use a are 240Gb each which will grow too small within a\n> >>> few months - so - how does moving the whole data dir onto four of\n> >>> those in a RAID5 array sound?\n> >>\n> >> Not RAID 5!\n> >>\n> >> Use a RAID10 of four or six SSDs.\n> >>\n> >> --\n> >> Craig Ringer http://www.2ndQuadrant.com/\n> >> PostgreSQL Development, 24x7 Support, Training & Services\n> >>\n> > Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n> > Are they suitable do you think?\n> >\n> \n> I am not Craig, but i use them in production in raid10 array now.\n> \n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nOn Dec 11, 2012, at 10:54 PM, Niels Kristian Schjødt <[email protected]> wrote:And what is your experience so far?Increased tps by a factor of 10, database no longer a limiting factor of application.And it is cheaper than brand rotating drives.\nDen 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n\nOn Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n\n>\n> Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n>\n>> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>>>\n>>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> For an update-mostly workload it probably won't do you tons of good so\n>> long as all your indexes fit in RAM. You're clearly severely\n>> bottlenecked on disk I/O not RAM.\n>>> The SSD's I use a are 240Gb each which will grow too small within a\n>>> few months - so - how does moving the whole data dir onto four of\n>>> those in a RAID5 array sound?\n>>\n>> Not RAID 5!\n>>\n>> Use a RAID10 of four or six SSDs.\n>>\n>> --\n>> Craig Ringer                   http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>>\n> Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n\n> Are they suitable do you think?\n>\n\nI am not Craig, but i use them in production in raid10 array now.\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 11 Dec 2012 23:11:14 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "Are you using a hardware based raid controller with them?\nDen 11/12/2012 20.11 skrev \"Evgeny Shishkin\" <[email protected]>:\n\n>\n> On Dec 11, 2012, at 10:54 PM, Niels Kristian Schjødt <\n> [email protected]> wrote:\n>\n> And what is your experience so far?\n>\n> Increased tps by a factor of 10, database no longer a limiting factor of\n> application.\n> And it is cheaper than brand rotating drives.\n>\n>\n> Den 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n>\n>>\n>> On Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <\n>> [email protected]> wrote:\n>>\n>> >\n>> > Den 11/12/2012 kl. 
14.29 skrev Craig Ringer <[email protected]>:\n>> >\n>> >> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>> >>>\n>> >>> Maybe I should mention, that I never see more than max 5Gb out of my\n>> total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> >> For an update-mostly workload it probably won't do you tons of good so\n>> >> long as all your indexes fit in RAM. You're clearly severely\n>> >> bottlenecked on disk I/O not RAM.\n>> >>> The SSD's I use a are 240Gb each which will grow too small within a\n>> >>> few months - so - how does moving the whole data dir onto four of\n>> >>> those in a RAID5 array sound?\n>> >>\n>> >> Not RAID 5!\n>> >>\n>> >> Use a RAID10 of four or six SSDs.\n>> >>\n>> >> --\n>> >> Craig Ringer http://www.2ndQuadrant.com/<http://www.2ndquadrant.com/>\n>> >> PostgreSQL Development, 24x7 Support, Training & Services\n>> >>\n>> > Hehe got it - did you have a look at the SSD's I am considering\n>> building it of?\n>> http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n>> > Are they suitable do you think?\n>> >\n>>\n>> I am not Craig, but i use them in production in raid10 array now.\n>>\n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list (\n>> [email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>\n\nAre you using a hardware based raid controller with them? \nDen 11/12/2012 20.11 skrev \"Evgeny Shishkin\" <[email protected]>:\nOn Dec 11, 2012, at 10:54 PM, Niels Kristian Schjødt <[email protected]> wrote:\nAnd what is your experience so far?Increased tps by a factor of 10, database no longer a limiting factor of application.And it is cheaper than brand rotating drives.\n\nDen 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n\nOn Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n\n>\n> Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n>\n>> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>>>\n>>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> For an update-mostly workload it probably won't do you tons of good so\n>> long as all your indexes fit in RAM. You're clearly severely\n>> bottlenecked on disk I/O not RAM.\n>>> The SSD's I use a are 240Gb each which will grow too small within a\n>>> few months - so - how does moving the whole data dir onto four of\n>>> those in a RAID5 array sound?\n>>\n>> Not RAID 5!\n>>\n>> Use a RAID10 of four or six SSDs.\n>>\n>> --\n>> Craig Ringer                   http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>>\n> Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n\n\n> Are they suitable do you think?\n>\n\nI am not Craig, but i use them in production in raid10 array now.\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 11 Dec 2012 23:41:37 +0100", "msg_from": "=?ISO-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" 
}, { "msg_contents": "On Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\n> Are you using a hardware based raid controller with them?\n> \nYes, of course. Hardware raid with cache and bbu is a must. You can't get fast fsync without it.\nAlso mdadm is a pain in the ass and is suitable only on amazon and other cloud shit.\n\n> Den 11/12/2012 20.11 skrev \"Evgeny Shishkin\" <[email protected]>:\n> \n> On Dec 11, 2012, at 10:54 PM, Niels Kristian Schjødt <[email protected]> wrote:\n> \n>> And what is your experience so far?\n>> \n> Increased tps by a factor of 10, database no longer a limiting factor of application.\n> And it is cheaper than brand rotating drives.\n> \n> \n>> Den 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n>> \n>> On Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n>> \n>> >\n>> > Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n>> >\n>> >> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>> >>>\n>> >>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> >> For an update-mostly workload it probably won't do you tons of good so\n>> >> long as all your indexes fit in RAM. You're clearly severely\n>> >> bottlenecked on disk I/O not RAM.\n>> >>> The SSD's I use a are 240Gb each which will grow too small within a\n>> >>> few months - so - how does moving the whole data dir onto four of\n>> >>> those in a RAID5 array sound?\n>> >>\n>> >> Not RAID 5!\n>> >>\n>> >> Use a RAID10 of four or six SSDs.\n>> >>\n>> >> --\n>> >> Craig Ringer http://www.2ndQuadrant.com/\n>> >> PostgreSQL Development, 24x7 Support, Training & Services\n>> >>\n>> > Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n>> > Are they suitable do you think?\n>> >\n>> \n>> I am not Craig, but i use them in production in raid10 array now.\n>> \n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list ([email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>> \n> \n\n\nOn Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt <[email protected]> wrote:Are you using a hardware based raid controller with them? Yes, of course. Hardware raid with cache and bbu is a must. You can't get fast fsync without it.Also mdadm is a pain in the ass and is suitable only on amazon and other cloud shit.\nDen 11/12/2012 20.11 skrev \"Evgeny Shishkin\" <[email protected]>:\nOn Dec 11, 2012, at 10:54 PM, Niels Kristian Schjødt <[email protected]> wrote:And what is your experience so far?Increased tps by a factor of 10, database no longer a limiting factor of application.And it is cheaper than brand rotating drives.\n\nDen 11/12/2012 18.16 skrev \"Evgeny Shishkin\" <[email protected]>:\n\nOn Dec 11, 2012, at 5:35 PM, Niels Kristian Schjødt <[email protected]> wrote:\n\n>\n> Den 11/12/2012 kl. 14.29 skrev Craig Ringer <[email protected]>:\n>\n>> On 12/11/2012 06:04 PM, Niels Kristian Schjødt wrote:\n>>>\n>>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>> For an update-mostly workload it probably won't do you tons of good so\n>> long as all your indexes fit in RAM. 
You're clearly severely\n>> bottlenecked on disk I/O not RAM.\n>>> The SSD's I use a are 240Gb each which will grow too small within a\n>>> few months - so - how does moving the whole data dir onto four of\n>>> those in a RAID5 array sound?\n>>\n>> Not RAID 5!\n>>\n>> Use a RAID10 of four or six SSDs.\n>>\n>> --\n>> Craig Ringer                   http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>>\n> Hehe got it - did you have a look at the SSD's I am considering building it of? http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC\n\n\n> Are they suitable do you think?\n>\n\nI am not Craig, but i use them in production in raid10 array now.\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 12 Dec 2012 02:44:24 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/12/2012 06:44 AM, Evgeny Shishkin wrote:\n>\n> On Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n>> Are you using a hardware based raid controller with them?\n>>\n> Yes, of course. Hardware raid with cache and bbu is a must. You can't\n> get fast fsync without it.\n\nMost SSDs should offer fairly fast fsync without a hardware RAID\ncontroller, as they do write-back caching. The trick is to find ones\nthat do write-back caching safely, so you don't get severe data\ncorruption on power-loss.\n\nA HW RAID controller is an absolute must for rotating magnetic media,\nthough.\n\n> Also mdadm is a pain in the ass and is suitable only on amazon and\n> other cloud shit.\n\nI've personally been pretty happy with mdadm. I find the array\nportability it offers very useful, so I don't need to buy a second RAID\ncontroller just in case my main controller dies and I need a compatible\none to get the array running again. If you don't need a BBU for safe\nwrite-back caching then mdadm has advantages over hardware RAID.\n\nI'll certainly use mdadm over onboard fakeraid solutions or low-end\nhardware RAID controllers. I suspect a mid- to high end HW RAID unit\nwill generally win.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 12/12/2012 06:44 AM, Evgeny Shishkin\n wrote:\n\n\n\n\n\nOn Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt <[email protected]>\n wrote:\n\n\nAre you using a hardware based raid controller\n with them? \n\nYes, of course. Hardware raid with cache and bbu is a must.\n You can't get fast fsync without it.\n\n\n\n Most SSDs should offer fairly fast fsync without a hardware RAID\n controller, as they do write-back caching. The trick is to find ones\n that do write-back caching safely, so you don't get severe data\n corruption on power-loss. \n\n A HW RAID controller is an absolute must for rotating magnetic\n media, though.\n\n\n\nAlso mdadm is a pain in the ass and is suitable only on\n amazon and other cloud shit.\n\n\n\n I've personally been pretty happy with mdadm. I find the array\n portability it offers very useful, so I don't need to buy a second\n RAID controller just in case my main controller dies and I need a\n compatible one to get the array running again. 
If you don't need a\n BBU for safe write-back caching then mdadm has advantages over\n hardware RAID.\n\n I'll certainly use mdadm over onboard fakeraid solutions or low-end\n hardware RAID controllers. I suspect a mid- to high end HW RAID unit\n will generally win.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 09:03:14 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Dec 12, 2012, at 5:03 AM, Craig Ringer <[email protected]> wrote:\n\n> On 12/12/2012 06:44 AM, Evgeny Shishkin wrote:\n>> \n>> On Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt <[email protected]> wrote:\n>> \n>>> Are you using a hardware based raid controller with them?\n>>> \n>> Yes, of course. Hardware raid with cache and bbu is a must. You can't get fast fsync without it.\n> \n> Most SSDs should offer fairly fast fsync without a hardware RAID controller, as they do write-back caching. The trick is to find ones that do write-back caching safely, so you don't get severe data corruption on power-loss. \n> \n\nActually most of low-end SSDs don't do write caching, they do not have enough ram for that. Sandforce for example.\n\n> A HW RAID controller is an absolute must for rotating magnetic media, though.\n> \n>> Also mdadm is a pain in the ass and is suitable only on amazon and other cloud shit.\n> \n> I've personally been pretty happy with mdadm. I find the array portability it offers very useful, so I don't need to buy a second RAID controller just in case my main controller dies and I need a compatible one to get the array running again. If you don't need a BBU for safe write-back caching then mdadm has advantages over hardware RAID.\n> \n\nIf we are talking about dedicated machine for database with ssd drives, why would anybody don't by hardware raid for about 500-700$? \n\n> I'll certainly use mdadm over onboard fakeraid solutions or low-end hardware RAID controllers. I suspect a mid- to high end HW RAID unit will generally win.\n> \n\n> -- \n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\nOn Dec 12, 2012, at 5:03 AM, Craig Ringer <[email protected]> wrote:\n\n\nOn 12/12/2012 06:44 AM, Evgeny Shishkin\n wrote:\n\n\n\n\n\nOn Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt <[email protected]>\n wrote:\n\nAre you using a hardware based raid controller\n with them? \n\nYes, of course. Hardware raid with cache and bbu is a must.\n You can't get fast fsync without it.\n\n\n\n Most SSDs should offer fairly fast fsync without a hardware RAID\n controller, as they do write-back caching. The trick is to find ones\n that do write-back caching safely, so you don't get severe data\n corruption on power-loss. Actually most of low-end SSDs don't do write caching, they do not have enough ram for that. Sandforce for example.\n A HW RAID controller is an absolute must for rotating magnetic\n media, though.\n\n\n\nAlso mdadm is a pain in the ass and is suitable only on\n amazon and other cloud shit.\n\n\n\n I've personally been pretty happy with mdadm. I find the array\n portability it offers very useful, so I don't need to buy a second\n RAID controller just in case my main controller dies and I need a\n compatible one to get the array running again. 
If you don't need a\n BBU for safe write-back caching then mdadm has advantages over\n hardware RAID.\nIf we are talking about dedicated machine for database with ssd drives, why would anybody don't by hardware raid for about 500-700$?  \n I'll certainly use mdadm over onboard fakeraid solutions or low-end\n hardware RAID controllers. I suspect a mid- to high end HW RAID unit\n will generally win.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 05:17:14 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/12/2012 09:17 AM, Evgeny Shishkin wrote:\n>\n> Actually most of low-end SSDs don't do write caching, they do not have\n> enough ram for that. Sandforce for example.\n>\nOr, worse, some of them do limited write caching but don't protect their\nwrite cache from power loss. Instant data corruption!\n\nI would be extremely reluctant to use low-end SSDs for a database server.\n\n> If we are talking about dedicated machine for database with ssd\n> drives, why would anybody don't by hardware raid for about 500-700$?\nI'd want to consider whether the same money is better spent on faster,\nhigher quality SSDs with their own fast write caches.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 12/12/2012 09:17 AM, Evgeny Shishkin\n wrote:\n\n\n\n\n\nActually most of low-end SSDs don't do write caching, they\n do not have enough ram for that. Sandforce for example.\n\n\n\n Or, worse, some of them do limited write caching but don't protect\n their write cache from power loss. Instant data corruption!\n\n I would be extremely reluctant to use low-end SSDs for a database\n server.\n\n\n\nIf we are talking about dedicated machine for database with\n ssd drives, why would anybody don't by hardware raid for about\n 500-700$?\n\n\n\n I'd want to consider whether the same money is better spent on\n faster, higher quality SSDs with their own fast write caches.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 09:29:02 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Dec 12, 2012, at 5:29 AM, Craig Ringer <[email protected]> wrote:\n\n> On 12/12/2012 09:17 AM, Evgeny Shishkin wrote:\n>> \n>> Actually most of low-end SSDs don't do write caching, they do not have enough ram for that. Sandforce for example.\n>> \n> Or, worse, some of them do limited write caching but don't protect their write cache from power loss. Instant data corruption!\n> \n> I would be extremely reluctant to use low-end SSDs for a database server.\n> \n>> If we are talking about dedicated machine for database with ssd drives, why would anybody don't by hardware raid for about 500-700$?\n> I'd want to consider whether the same money is better spent on faster, higher quality SSDs with their own fast write caches.\n> \n\nHigh quality ssd costs 5-7$ per GB. Consumer grade ssd - 1$. Highend - 11$\nNew intel dc s3700 2-3$ per GB as far as i remember.\n\nSo far, more than a year already, i bought consumer ssds with 300-400$ hw raid. Cost effective and fast, may be not very safe, but so far so good. 
All data protection measures from postgresql are on, of course.\n> -- \n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\nOn Dec 12, 2012, at 5:29 AM, Craig Ringer <[email protected]> wrote:\n\n\nOn 12/12/2012 09:17 AM, Evgeny Shishkin\n wrote:\n\n\n\n\n\nActually most of low-end SSDs don't do write caching, they\n do not have enough ram for that. Sandforce for example.\n\n\n\n Or, worse, some of them do limited write caching but don't protect\n their write cache from power loss. Instant data corruption!\n\n I would be extremely reluctant to use low-end SSDs for a database\n server.\n\n\n\nIf we are talking about dedicated machine for database with\n ssd drives, why would anybody don't by hardware raid for about\n 500-700$?\n\n\n\n I'd want to consider whether the same money is better spent on\n faster, higher quality SSDs with their own fast write caches.\nHigh quality ssd costs 5-7$ per GB. Consumer grade ssd - 1$. Highend - 11$New intel dc s3700 2-3$ per GB as far as i remember.So far, more than a year already, i bought consumer ssds with 300-400$ hw raid. Cost effective and fast, may be not very safe, but so far so good. All data protection measures from postgresql are on, of course.\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 05:44:24 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Tue, Dec 11, 2012 at 5:17 PM, Evgeny Shishkin <[email protected]>wrote:\n\n> Actually most of low-end SSDs don't do write caching, they do not have\n> enough ram for that.\n>\n\nAIUI, *all* SSDs do write-caching of a sort: writes are actually flushed to\nthe NAND media by erasing, and then overwriting the erased space, and\nerasing is done in fixed-size blocks, usually much larger than a\nfilesystem's pages. The drive's controller accumulates writes in an\non-board cache until it has an \"erase block\"'s worth of them, which are\nthen flushed. From casual searching, a common erase block size is 256\nkbytes, while filesystem-level pages are usually 4k.\n\nMost low-end (and even many mid-range) SSDs, including Sandforce-based\ndrives, don't offer any form of protection (e.g., supercaps, as featured on\nthe Intel 320 and 710-series drives) for the data in that write cache,\nhowever, which may be what you're thinking of. I wouldn't let one of those\nanywhere near one of my servers, unless it was a completely disposable,\nload-balanced slave, and probably not even then.\n\nrls\n\n-- \n:wq\n\nOn Tue, Dec 11, 2012 at 5:17 PM, Evgeny Shishkin <[email protected]> wrote:\nActually most of low-end SSDs don't do write caching, they do not have enough ram for that.AIUI, *all* SSDs do write-caching of a sort: writes are actually flushed to the NAND media by erasing, and then overwriting the erased space, and erasing is done in fixed-size blocks, usually much larger than a filesystem's pages.  The drive's controller accumulates writes in an on-board cache until it has an \"erase block\"'s worth of them, which are then flushed.  
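Independent of spec sheets, the actual fsync behaviour of a given drive or array can be sanity-checked with pg_test_fsync, which ships with 9.2; a rough sketch (the path is just an example and should point at the array being tested):

    pg_test_fsync -f /ssd/pg_test_fsync.out

A plain 7200rpm disk pair reporting thousands of fsyncs per second is certainly acknowledging writes from a volatile cache; for SSDs the numbers are harder to interpret on their own, which is why the plug-pull test discussed below is the definitive check.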
From casual searching, a common erase block size is 256 kbytes, while filesystem-level pages are usually 4k.\nMost low-end (and even many mid-range) SSDs, including Sandforce-based drives, don't offer any form of protection (e.g., supercaps, as featured on the Intel 320 and 710-series drives) for the data in that write cache, however, which may be what you're thinking of.  I wouldn't let one of those anywhere near one of my servers, unless it was a completely disposable, load-balanced slave, and probably not even then.\nrls-- :wq", "msg_date": "Tue, 11 Dec 2012 17:47:28 -0800", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/12/2012 09:44 AM, Evgeny Shishkin wrote:\n> So far, more than a year already, i bought consumer ssds with 300-400$\n> hw raid. Cost effective and fast, may be not very safe, but so far so\n> good. All data protection measures from postgresql are on, of course.\n\nYou're aware that many low end SSDs lie to the RAID controller about\nhaving written data, right? Even if the RAID controller sends a flush\ncommand, the SSD might cache the write in non-durable cache. If you're\nusing such SSDs and you lose power, data corruption is extremely likely,\nbecause your SSDs are essentially ignoring fsync.\n\nYour RAID controller's BBU won't save you, because once the disks tell\nthe RAID controller the data has hit durable storage, the RAID\ncontroller feels free to flush it from its battery backed cache. If the\ndisks are lying...\n\nThe only solid way to find out if this is an issue with your SSDs is to\ndo plug-pull testing and find out.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 12/12/2012 09:44 AM, Evgeny Shishkin\n wrote:\n\n\n\n So far, more than a year already, i bought consumer ssds with\n 300-400$ hw raid. Cost effective and fast, may be not very safe,\n but so far so good. All data protection measures from postgresql\n are on, of course.\n\n\n You're aware that many low end SSDs lie to the RAID controller about\n having written data, right? Even if the RAID controller sends a\n flush command, the SSD might cache the write in non-durable cache.\n If you're using such SSDs and you lose power, data corruption is\n extremely likely, because your SSDs are essentially ignoring fsync.\n\n Your RAID controller's BBU won't save you, because once the disks\n tell the RAID controller the data has hit durable storage, the RAID\n controller feels free to flush it from its battery backed cache. If\n the disks are lying...\n\n The only solid way to find out if this is an issue with your SSDs is\n to do plug-pull testing and find out.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 10:02:00 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Dec 12, 2012, at 6:02 AM, Craig Ringer <[email protected]> wrote:\n\n> On 12/12/2012 09:44 AM, Evgeny Shishkin wrote:\n>> So far, more than a year already, i bought consumer ssds with 300-400$ hw raid. Cost effective and fast, may be not very safe, but so far so good. All data protection measures from postgresql are on, of course.\n> \n> You're aware that many low end SSDs lie to the RAID controller about having written data, right? 
Even if the RAID controller sends a flush command, the SSD might cache the write in non-durable cache. If you're using such SSDs and you lose power, data corruption is extremely likely, because your SSDs are essentially ignoring fsync.\n> \n> Your RAID controller's BBU won't save you, because once the disks tell the RAID controller the data has hit durable storage, the RAID controller feels free to flush it from its battery backed cache. If the disks are lying...\n> \n> The only solid way to find out if this is an issue with your SSDs is to do plug-pull testing and find out.\n> \n\nYes, i am aware of this issue. Never experienced this neither on intel 520, no ocz vertex 3.\nHave you heard of them on this list?\n> -- \n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\nOn Dec 12, 2012, at 6:02 AM, Craig Ringer <[email protected]> wrote:\n\n\nOn 12/12/2012 09:44 AM, Evgeny Shishkin\n wrote:\n\n\n\n So far, more than a year already, i bought consumer ssds with\n 300-400$ hw raid. Cost effective and fast, may be not very safe,\n but so far so good. All data protection measures from postgresql\n are on, of course.\n\n\n You're aware that many low end SSDs lie to the RAID controller about\n having written data, right? Even if the RAID controller sends a\n flush command, the SSD might cache the write in non-durable cache.\n If you're using such SSDs and you lose power, data corruption is\n extremely likely, because your SSDs are essentially ignoring fsync.\n\n Your RAID controller's BBU won't save you, because once the disks\n tell the RAID controller the data has hit durable storage, the RAID\n controller feels free to flush it from its battery backed cache. If\n the disks are lying...\n\n The only solid way to find out if this is an issue with your SSDs is\n to do plug-pull testing and find out.\nYes, i am aware of this issue. Never experienced this neither on intel 520, no ocz vertex 3.Have you heard of them on this list?\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 06:13:22 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/11/2012 7:13 PM, Evgeny Shishkin wrote:\n> Yes, i am aware of this issue. Never experienced this neither on intel \n> 520, no ocz vertex 3.\n> Have you heard of them on this list?\nPeople have done plug-pull tests and reported the results on the list \n(sometime in the past couple of years).\n\nBut you don't need to do the test to know these drives are not safe. \nThey're unsafe by design.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 19:15:33 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "\nOn Dec 12, 2012, at 6:15 AM, David Boreham <[email protected]> wrote:\n\n> On 12/11/2012 7:13 PM, Evgeny Shishkin wrote:\n>> Yes, i am aware of this issue. Never experienced this neither on intel 520, no ocz vertex 3.\n>> Have you heard of them on this list?\n> People have done plug-pull tests and reported the results on the list (sometime in the past couple of years).\n> \n> But you don't need to do the test to know these drives are not safe. 
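For anyone who does want to run the plug-pull test, the tool usually suggested is Brad Fitzpatrick's diskchecker.pl; a sketch of the procedure, with host and path names as placeholders (the exact invocation may differ by version):

    # on a second machine that stays powered on:
    ./diskchecker.pl -l

    # on the machine under test, writing to the array being checked:
    ./diskchecker.pl -s otherhost create /ssd/test_file 500
    # pull the power cord while that is running, then after reboot:
    ./diskchecker.pl -s otherhost verify /ssd/test_file

If verify reports errors, the storage stack acknowledged writes it had not actually made durable.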
They're unsafe by design.\n> \n\nOh, there is no 100% safe system. The only way to be sure is to read data back.\nEverything about system design is tradeoff between cost and risks.\n\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 06:20:29 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/11/2012 7:20 PM, Evgeny Shishkin wrote:\n> Oh, there is no 100% safe system.\nIn this case we're discussing specifically \"safety in the event of power \nloss shortly after the drive indicates to the controller that it has \ncommitted a write operation\". Some drives do provide 100% safety against \nthis event, and they don't cost much more than those that don't.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 19:26:45 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "\nOn Dec 12, 2012, at 6:26 AM, David Boreham <[email protected]> wrote:\n\n> On 12/11/2012 7:20 PM, Evgeny Shishkin wrote:\n>> Oh, there is no 100% safe system.\n> In this case we're discussing specifically \"safety in the event of power loss shortly after the drive indicates to the controller that it has committed a write operation\". Some drives do provide 100% safety against this event, and they don't cost much more than those that don't.\n\nWhich drives would you recommend? Besides intel 320 and 710.\n\n\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 06:38:30 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/11/2012 7:38 PM, Evgeny Shishkin wrote:\n> Which drives would you recommend? Besides intel 320 and 710.\nThose are the only drive types we have deployed in servers at present \n(almost all 710, but we have some 320 for less mission-critical \nmachines). The new DC-S3700 Series looks nice too, but isn't yet in the \nsales channel :\nhttp://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3700-series.html\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 19:41:07 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/12/2012 10:13 AM, Evgeny Shishkin wrote:\n>\n> Yes, i am aware of this issue. 
Never experienced this neither on intel\n> 520, no ocz vertex 3.\n>\n\nI wouldn't trust either of those drives. The 520 doesn't have Intel's \"\nEnhanced Power Loss Data Protection\"; it's going to lose its buffers if\nit loses power. Similarly, the Vertex 3 doesn't have any kind of power\nprotection. See:\n\nhttp://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html\nhttp://ark.intel.com/products/family/56572/Intel-SSD-500-Family\n\nhttp://www.ocztechnology.com/res/manuals/OCZ_SSD_Breakdown_Q2-11_1.pdf\n\nThe only way I'd use those for a production server was if I had\nsynchronous replication running to another machine with trustworthy,\ndurable storage - and if I didn't mind some downtime to restore the\ncorrupt DB from the replica after power loss.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n\n\nOn 12/12/2012 10:13 AM, Evgeny Shishkin\n wrote:\n\n\n\n\n\n Yes, i am aware of this issue. Never experienced this neither on\n intel 520, no ocz vertex 3.\n\n\n\n\n I wouldn't trust either of those drives. The 520 doesn't have\n Intel's \"\n \n Enhanced Power Loss Data Protection\"; it's going to lose its buffers\n if it loses power. Similarly, the Vertex 3 doesn't have any kind of\n power protection. See:\n\n\nhttp://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html\n\nhttp://ark.intel.com/products/family/56572/Intel-SSD-500-Family\n\n\nhttp://www.ocztechnology.com/res/manuals/OCZ_SSD_Breakdown_Q2-11_1.pdf\n\n The only way I'd use those for a production server was if I had\n synchronous replication running to another machine with trustworthy,\n durable storage - and if I didn't mind some downtime to restore the\n corrupt DB from the replica after power loss.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 10:47:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Dec 12, 2012, at 6:41 AM, David Boreham <[email protected]> wrote:\n\n> On 12/11/2012 7:38 PM, Evgeny Shishkin wrote:\n>> Which drives would you recommend? Besides intel 320 and 710.\n> Those are the only drive types we have deployed in servers at present (almost all 710, but we have some 320 for less mission-critical machines). The new DC-S3700 Series looks nice too, but isn't yet in the sales channel :\n> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3700-series.html\n\nYeah, s3700 looks promising, but sata interface is limiting factor for this drive.\nI'm looking towards SMART ssd http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_review\n\nbut i don't heard of it anywhere else.\n\n\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nOn Dec 12, 2012, at 6:41 AM, David Boreham <[email protected]> wrote:On 12/11/2012 7:38 PM, Evgeny Shishkin wrote:Which drives would you recommend? Besides intel 320 and 710.Those are the only drive types we have deployed in servers at present (almost all 710, but we have some 320 for less mission-critical machines). 
The new DC-S3700 Series looks nice too, but isn't yet in the sales channel :http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3700-series.htmlYeah, s3700 looks promising, but sata interface is limiting factor for this drive.I'm looking towards SMART ssd http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_reviewbut i don't heard of it anywhere else.-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 12 Dec 2012 06:49:12 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/11/2012 7:49 PM, Evgeny Shishkin wrote:\n> Yeah, s3700 looks promising, but sata interface is limiting factor for \n> this drive.\n> I'm looking towards SMART ssd \n> http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_review\n>\nWhat don't you like about SATA ?\n\nI prefer to avoid SAS drives if possible due to the price premium for \ndubious benefits besides vague hand-waving \"enterprise-ness\" promises.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 20:05:13 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/12/12 15:41, David Boreham wrote:\n> On 12/11/2012 7:38 PM, Evgeny Shishkin wrote:\n>> Which drives would you recommend? Besides intel 320 and 710.\n> Those are the only drive types we have deployed in servers at present \n> (almost all 710, but we have some 320 for less mission-critical \n> machines). The new DC-S3700 Series looks nice too, but isn't yet in \n> the sales channel :\n> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3700-series.html \n>\n>\n>\n>\n>\n>\n+1\n\nThe s3700 is probably the one to get (when it is available). I'd opt for \nthe 710 if you need something now. I'd avoid the 320 - we have \nencountered the firmware bug whereby you get an 8MB (yes 8MB) capacity \nafter powerdown with a depressingly large number of them (they were \nupdated to the latest firmware too).\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 16:07:31 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" 
}, { "msg_contents": "On Dec 12, 2012, at 7:05 AM, David Boreham <[email protected]> wrote:\n\n> On 12/11/2012 7:49 PM, Evgeny Shishkin wrote:\n>> Yeah, s3700 looks promising, but sata interface is limiting factor for this drive.\n>> I'm looking towards SMART ssd http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_review\n>> \n> What don't you like about SATA ?\n> \n> I prefer to avoid SAS drives if possible due to the price premium for dubious benefits besides vague hand-waving \"enterprise-ness\" promises.\n> \n\nQuoting http://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_review\n\nIntel makes the case that the S3700 is the ideal drive for entry, mainstream and performance enterprise computing including HPC use cases. The claim is bold, largely because of the decision to go with a SATA interface, which has several limitations in the enterprise. The SATA interface tops out at a queue depth 32 (SAS scales as high as 256 in most cases) which means that when requests go above that level average and peak latency spike as we saw in all of our workloads.\n\nAnother huge advantage of SAS is the ability to offer dual-port modes for high availability scenarios, where there are two controllers interfacing with the same drive at the same time. In the event one goes offline, the connection with the SSD is not lost, as it would with a standard SATA interface without additional hardware. Some SAS drives also offer wide-port configurations used to increase total bandwidth above a single-link connection. While the Intel SSD DC S3700 against other SATA competitors is very fast, the story changes when you introduce the latest MLC and SLC-based SAS SSDs, which can cope better with increased thread and queue levels.\n\nWe picked the primary post-preconditioning sections of our benchmarks after each SSD had reached steady-state. For the purposes of this section, we added the Intel SSD DC S3700 onto the throughput charts of the newest SAS high-performance SSDs. There are also significant latency differences at higher queue depths that play a significant factor, but for the sake of easy comparison we stick with raw I/O speed across varying thread and queue counts.\n\nIn a 100% 4K random write or random read scenario, the Intel SSD DC 3700 performs quite well up against the high-end SAS competition, with the second fastest 4K steady-state speed. When you switch focus to read throughput at a heavy 16T/16Q load it only offers 1/2 to 1/3 the performance of SSDs in this category.\n\nhttp://www.storagereview.com/images/intel_ssd_dc_s3700_main_slc_4kwrite_throughput.png\n\n\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nOn Dec 12, 2012, at 7:05 AM, David Boreham <[email protected]> wrote:On 12/11/2012 7:49 PM, Evgeny Shishkin wrote:Yeah, s3700 looks promising, but sata interface is limiting factor for this drive.I'm looking towards SMART ssd http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_reviewWhat don't you like about SATA ?I prefer to avoid SAS drives if possible due to the price premium for dubious benefits besides vague hand-waving \"enterprise-ness\" promises.Quoting http://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_reviewIntel makes the case that the S3700 is the ideal drive for entry, mainstream and performance enterprise computing including HPC use cases. 
The claim is bold, largely because of the decision to go with a SATA interface, which has several limitations in the enterprise. The SATA interface tops out at a queue depth 32 (SAS scales as high as 256 in most cases) which means that when requests go above that level average and peak latency spike as we saw in all of our workloads.Another huge advantage of SAS is the ability to offer dual-port modes for high availability scenarios, where there are two controllers interfacing with the same drive at the same time. In the event one goes offline, the connection with the SSD is not lost, as it would with a standard SATA interface without additional hardware. Some SAS drives also offer wide-port configurations used to increase total bandwidth above a single-link connection. While the Intel SSD DC S3700 against other SATA competitors is very fast, the story changes when you introduce the latest MLC and SLC-based SAS SSDs, which can cope better with increased thread and queue levels.We picked the primary post-preconditioning sections of our benchmarks after each SSD had reached steady-state. For the purposes of this section, we added the Intel SSD DC S3700 onto the throughput charts of the newest SAS high-performance SSDs. There are also significant latency differences at higher queue depths that play a significant factor, but for the sake of easy comparison we stick with raw I/O speed across varying thread and queue counts.In a 100% 4K random write or random read scenario, the Intel SSD DC 3700 performs quite well up against the high-end SAS competition, with the second fastest 4K steady-state speed. When you switch focus to read throughput at a heavy 16T/16Q load it only offers 1/2 to 1/3 the performance of SSDs in this category.http://www.storagereview.com/images/intel_ssd_dc_s3700_main_slc_4kwrite_throughput.png-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 12 Dec 2012 07:11:43 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 12/11/2012 8:11 PM, Evgeny Shishkin wrote:\n>\n> Quoting \n> http://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_review\nHeh. A fine example of the kind of hand-waving of which I spoke ;)\n\nHigher performance is certainly a benefit, although at present we can't \nsaturate even a single 710 series drive (the application, CPU, OS, etc \nis the bottleneck). Similarly while dual-porting certainly has its uses, \nit is not something I need.\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 20:19:35 -0700", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "Well, In fact I do (as you can see from my configuration). I have a similar server running with hot standby replication - and it runs two 3T HDD in a RAID1 array.\n\nSo - is it still very bad if I choose to put four intel 520 disks in a RAID10 array on the other production server?\n\nDen 12/12/2012 kl. 03.47 skrev Craig Ringer <[email protected]>:\n\n> On 12/12/2012 10:13 AM, Evgeny Shishkin wrote:\n>> \n>> Yes, i am aware of this issue. 
Never experienced this neither on intel 520, no ocz vertex 3.\n>> \n> \n> I wouldn't trust either of those drives. The 520 doesn't have Intel's \" Enhanced Power Loss Data Protection\"; it's going to lose its buffers if it loses power. Similarly, the Vertex 3 doesn't have any kind of power protection. See:\n> \n> http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html\n> http://ark.intel.com/products/family/56572/Intel-SSD-500-Family\n> \n> http://www.ocztechnology.com/res/manuals/OCZ_SSD_Breakdown_Q2-11_1.pdf\n> \n> The only way I'd use those for a production server was if I had synchronous replication running to another machine with trustworthy, durable storage - and if I didn't mind some downtime to restore the corrupt DB from the replica after power loss.\n> \n> -- \n> Craig Ringer http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\nWell, In fact I do (as you can see from my configuration). I have a similar server running with hot standby replication - and it runs two 3T HDD in a RAID1 array.So - is it still very bad if I choose to put four intel 520 disks in a RAID10 array on the other production server?Den 12/12/2012 kl. 03.47 skrev Craig Ringer <[email protected]>:\n\n\nOn 12/12/2012 10:13 AM, Evgeny Shishkin\n wrote:\n\n\n\n\n\n Yes, i am aware of this issue. Never experienced this neither on\n intel 520, no ocz vertex 3.\n\n\n\n\n I wouldn't trust either of those drives. The 520 doesn't have\n Intel's \"\n \n Enhanced Power Loss Data Protection\"; it's going to lose its buffers\n if it loses power. Similarly, the Vertex 3 doesn't have any kind of\n power protection. See:\n\n\nhttp://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html\n\nhttp://ark.intel.com/products/family/56572/Intel-SSD-500-Family\n\n\nhttp://www.ocztechnology.com/res/manuals/OCZ_SSD_Breakdown_Q2-11_1.pdf\n\n The only way I'd use those for a production server was if I had\n synchronous replication running to another machine with trustworthy,\n durable storage - and if I didn't mind some downtime to restore the\n corrupt DB from the replica after power loss.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 12 Dec 2012 17:22:10 +0100", "msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "\nDen 11/12/2012 kl. 18.25 skrev Jeff Janes <[email protected]>:\n\n> On Tue, Dec 11, 2012 at 2:04 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> Den 11/12/2012 kl. 00.58 skrev Jeff Janes <[email protected]>:\n>> \n>>> \n>>> The fact that there is much more writing than reading tells me that\n>>> most of your indexes are in RAM. The amount of index you are rapidly\n>>> reading and dirtying is large enough to fit in RAM, but is not large\n>>> enough to fit in shared_buffers + kernel's dirty-buffer comfort level.\n> \n>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n> \n> What tool do you use to determine that? Is that on top of the 4GB\n> shared_buffers, are including it?\n\nOkay I might not have made myself clear, I was talking \"physical\" memory utilization. 
Here is the stats:\nfree -m\ntotal used free shared buffers cached\nMem: 32075 25554 6520 0 69 22694\n-/+ buffers/cache: 2791 29284\nSwap: 2046 595 1451\n> \n> How big is your entire data set? Maybe all your data fits in 5GB\n> (believable, as all your indexes listed below sum to < 2.5GB) so there\n> is no need to use more.\n\nIt doesn't we are a search engine for used cars, and there are quite a lot of those out there :-) However, my indexes are almost all partial indexes, which mean that they are only on cars which is still for sale, so in that sense, the indexes them selves doesn't really grow, but the tables do.\n\n> \n> Or maybe you have hit an bug in the 3.2 kernel. At least one of those\n> has been frequently discussed.\n> \nMight be true - but likely?\n> \n>>> You could really crank up shared_buffers or vm.dirty_background_ratio,\n>>> but doing so might cause problems with checkpoints stalling and\n>>> latency spikes. That would probably not be a problem during the\n>>> night, but could be during the day.\n> \n>> What do you have in mind here? Tweaking what parameters to what values?\n> \n> I'd set shared_buffers to 20GB (or 10GB, if that will hold all of your\n\nI had that before, Shaun suggested that I changed it to 4GB as he was talking about a strange behavior when larger than that on 12.04. But I can say, that there has not been any notable difference between having it at 4Gb and at 8Gb.\n\n> data) and see what happens. And probably increase checkpoint_timeout\n> and checkpoint_segments about 3x each. Also, turn on log_checkpoints\n> so you can see what kinds of problem those changes may be causing\n> there (i.e. long sync times). Preferably you do this on some kind of\n> pre-production or test server.\n> \n> But if your database is growing so rapidly that it soon won't fit on\n> 240GB, then cranking up shared_buffers won't do for long. If you can\n> get your tables and all of their indexes clustered together, then you\n> can do the updates in an order that makes IO more efficient. Maybe\n> partitioning would help.\n\nCan you explain a little more about this, or provide me a good link?\n> \n> \n>>> I don't know how big each disk is, or how big your various categories\n>>> of data are. Could you move everything to SSD? Could you move all\n>>> your actively updated indexes there?\n> \n>> With table spaces you mean?\n> \n> Yes. Or moving everything to SSD if it fits, then you don't have go\n> through and separate objects.\n> \n> The UPDATE you posted in a previous thread looked like the table\n> blocks might also be getting dirtied in a fairly random order, which\n> means the table blocks are in the same condition as the index blocks\n> so maybe singling out the indexes isn't warranted.\n> \n> Cheers,\n> \n> Jeff\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 17:46:11 +0100", "msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On 13/12/2012 12:22 AM, Niels Kristian Schj�dt wrote:\n> Well, In fact I do (as you can see from my configuration). 
I have a\n> similar server running with hot standby replication - and it runs two\n> 3T HDD in a RAID1 array.\n>\n> So - is it still very bad if I choose to put four intel 520 disks in a\n> RAID10 array on the other production server?\nSo long as you have it recording to a synchronous replia on another\nmachine and you're fully prepared to accept the small risk that you'll\nhave total and unrecoverable data corruption on that server, with the\ncorresponding downtime while you rebuild it from the replica, it should\nbe OK.\n\nAlternately, you could use PITR with a basebackup to ship WAL to another\nmachine or a reliable HDD, so you can recover all but the last\ncheckpoint_timeout minutes of data from the base backup + WAL. There's\nsmall window of data loss that way, but you don't need a second machine\nas a streaming replication follower. barman might is worth checking out\nas a management tool for PITR backups.\n\nIf the data is fairly low-value you could even just take nightly backups\nand accept the risk of losing some data.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 13/12/2012 12:22 AM, Niels Kristian\n Schj�dt wrote:\n\n\n\n Well, In fact I do (as you can see from my configuration). I have\n a similar server running with hot standby replication - and it\n runs two 3T HDD in a RAID1 array.\n \n\nSo - is it still very bad if\n I choose to put four intel 520 disks in a RAID10 array on the\n other production server?\n\n So long as you have it recording to a synchronous replia on another\n machine and you're fully prepared to accept the small risk that\n you'll have total and unrecoverable data corruption on that server,\n with the corresponding downtime while you rebuild it from the\n replica, it should be OK.\n\n Alternately, you could use PITR with a basebackup to ship WAL to\n another machine or a reliable HDD, so you can recover all but the\n last checkpoint_timeout minutes of data from the base backup + WAL.\n There's small window of data loss that way, but you don't need a\n second machine as a streaming replication follower. barman might is\n worth checking out as a management tool for PITR backups.\n\n If the data is fairly low-value you could even just take nightly\n backups and accept the risk of losing some data.\n\n --\n Craig Ringer", "msg_date": "Thu, 13 Dec 2012 07:26:36 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Wed, Dec 12, 2012 at 8:46 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n>\n> Den 11/12/2012 kl. 18.25 skrev Jeff Janes <[email protected]>:\n>\n>> On Tue, Dec 11, 2012 at 2:04 AM, Niels Kristian Schjødt\n>> <[email protected]> wrote:\n>>\n>>> Maybe I should mention, that I never see more than max 5Gb out of my total 32Gb being in use on the server… Can I somehow utilize more of it?\n>>\n>> What tool do you use to determine that? Is that on top of the 4GB\n>> shared_buffers, are including it?\n>\n> Okay I might not have made myself clear, I was talking \"physical\" memory utilization. Here is the stats:\n> free -m\n> total used free shared buffers cached\n> Mem: 32075 25554 6520 0 69 22694\n> -/+ buffers/cache: 2791 29284\n> Swap: 2046 595 1451\n\nI don't how you get 5 Gig from that, though. You have 22 Gig of\ncached file-system, which for your purposes probably counts as being\nutilized. Although we don't know how much of this is for postgres\ndata files, chances are it is a pretty good chunk.\n\n\n>>\n>> How big is your entire data set? 
Maybe all your data fits in 5GB\n>> (believable, as all your indexes listed below sum to < 2.5GB) so there\n>> is no need to use more.\n>\n> It doesn't we are a search engine for used cars, and there are quite a lot of those out there :-)\n\nBut how big? More than 22GB? (you can use \\l+ in psql, or du -s on\nthe data directory)\n\n> However, my indexes are almost all partial indexes, which mean that they are only on cars which is still for sale, so in that sense, the indexes them selves doesn't really grow, but the tables do.\n\nSo maybe this reverses things. If your car table is huge and the\nactive cars are scattered randomly among all the inactive ones, then\nupdating random active cars is going to generate a lot of randomly\nscattered writing which can't be combined into sequential writes.\n\nDo you have plans for archiving cars that are no longer for sale? Why\ndo you keep them around in the first place, i.e. what types of queries\ndo you do on inactive ones?\n\nUnfortunately you currently can't use CLUSTER with partial indexes,\notherwise that might be a good idea. You could build a full index on\nwhatever it is you use as the criterion for the partial indexes,\ncluster on that, and then drop it.\n\nBut the table would eventually become unclustered again, so if this\nworks you might want to implement partitioning between active and\ninactive partitions so as to maintain the clustering.\n\n\n>>>> You could really crank up shared_buffers or vm.dirty_background_ratio,\n>>>> but doing so might cause problems with checkpoints stalling and\n>>>> latency spikes. That would probably not be a problem during the\n>>>> night, but could be during the day.\n>>\n>>> What do you have in mind here? Tweaking what parameters to what values?\n>>\n>> I'd set shared_buffers to 20GB (or 10GB, if that will hold all of your\n>\n> I had that before, Shaun suggested that I changed it to 4GB as he was talking about a strange behavior when larger than that on 12.04. But I can say, that there has not been any notable difference between having it at 4Gb and at 8Gb.\n\nIt is almost an all or nothing thing. If you need 16 or 20GB, just\ngoing from 4 to 8 isn't going to show much difference. If you can\ntest this easily, I'd just set it to 24 or even 28GB and run the bulk\nupdate. I don't think you'd want to run a server permanently at those\nsettings, but it is an easy way to rule in or out different theories\nabout what is going on.\n\n>> But if your database is growing so rapidly that it soon won't fit on\n>> 240GB, then cranking up shared_buffers won't do for long. If you can\n>> get your tables and all of their indexes clustered together, then you\n>> can do the updates in an order that makes IO more efficient. Maybe\n>> partitioning would help.\n>\n> Can you explain a little more about this, or provide me a good link?\n\nIf all your partial btree indexes are using the same WHERE clause,\nthen your indexes are already clustered together in a sense--a partial\nindex is kind of like a composite index with the WHERE clause as the\nfirst column.\n\nSo the trick would be to get the table to be clustered on the same\nthing--either by partitioning or by the CLUSTER command, or something\nequivalent to those. 
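A minimal sketch of that build-full-index / CLUSTER / drop sequence, using invented table, column and index names rather than anything from the actual schema:

    CREATE INDEX cars_for_sale_full_idx ON cars (for_sale);
    CLUSTER cars USING cars_for_sale_full_idx;   -- rewrites the table in index order; takes an ACCESS EXCLUSIVE lock
    ANALYZE cars;                                -- refresh planner statistics after the rewrite
    DROP INDEX cars_for_sale_full_idx;           -- the partial indexes are left untouched
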
I don't know of a good link, other than the\ndocumentation (which is more about how to do it, rather than why you\nwould want to or how to design it)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 07:10:37 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" }, { "msg_contents": "On Tue, Dec 11, 2012 at 6:03 PM, Craig Ringer <[email protected]> wrote:\n> On 12/12/2012 06:44 AM, Evgeny Shishkin wrote:\n>\n>\n> On Dec 12, 2012, at 2:41 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>\n> Are you using a hardware based raid controller with them?\n>\n> Yes, of course. Hardware raid with cache and bbu is a must. You can't get\n> fast fsync without it.\n>\n>\n> Most SSDs should offer fairly fast fsync without a hardware RAID controller,\n> as they do write-back caching. The trick is to find ones that do write-back\n> caching safely, so you don't get severe data corruption on power-loss.\n>\n> A HW RAID controller is an absolute must for rotating magnetic media,\n> though.\n>\n>\n> Also mdadm is a pain in the ass and is suitable only on amazon and other\n> cloud shit.\n>\n>\n> I've personally been pretty happy with mdadm. I find the array portability\n> it offers very useful, so I don't need to buy a second RAID controller just\n> in case my main controller dies and I need a compatible one to get the array\n> running again. If you don't need a BBU for safe write-back caching then\n> mdadm has advantages over hardware RAID.\n>\n> I'll certainly use mdadm over onboard fakeraid solutions or low-end hardware\n> RAID controllers. I suspect a mid- to high end HW RAID unit will generally\n> win.\n\nAlso for sequential throughput md RAID is usually faster than most\nRAID controllers, even the high end Areca and LSI ones.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 17:22:14 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I have a hardware or a software problem?" } ]
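For the "how big is the entire data set" question raised in the thread above, the same numbers can also be read from SQL instead of \l+ or du -s; pg_size_pretty(), pg_database_size() and pg_total_relation_size() are standard functions, and only the table name below is hypothetical:

    SELECT pg_size_pretty(pg_database_size(current_database()));   -- whole database, same answer as \l+
    SELECT pg_size_pretty(pg_total_relation_size('cars'));         -- one table plus its indexes and TOAST data
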
[ { "msg_contents": "Hello,\n\nI am seeing some strange performance on a new pg9.1 instance. We are seeing occasional statement timeouts on some TRUNCATEs and INSERTs. In both cases, the statements are quite simple:\n - TRUNCATE schema.table;\n - INSERT INTO schema.table VALUES ($1,2,$2,'');\n\nSometimes these will succeed. Occasionally I see timeouts. The statement_timeout is set to 60 seconds. These tables are not particularly large; in the case of the insert, the table only has three rows. \n\nOur previous Postgresql 8.2 instance did not have this problem. Any ideas about how to track down the issue? \n\nThanks,\n\n--Jeff O", "msg_date": "Tue, 11 Dec 2012 16:19:20 -0500", "msg_from": "\"Osborn, Jeff\" <[email protected]>", "msg_from_op": true, "msg_subject": "Occasional timeouts on TRUNCATE and simple INSERTs" }, { "msg_contents": "On Tue, Dec 11, 2012 at 1:19 PM, Osborn, Jeff <[email protected]> wrote:\n> I am seeing some strange performance on a new pg9.1 instance. We are seeing occasional statement timeouts on some TRUNCATEs and INSERTs. In both cases, the statements are quite simple:\n> - TRUNCATE schema.table;\n> - INSERT INTO schema.table VALUES ($1,2,$2,'');\n>\n> Sometimes these will succeed. Occasionally I see timeouts. The statement_timeout is set to 60 seconds. These tables are not particularly large; in the case of the insert, the table only has three rows.\n\nA most common case is when backup (pg_dump*) is running TRUNCATE has\nto wait for it because it acquires an access exclusive lock on a table\nand all other queries including INSERT have to wait for the TRUNCATE.\nCheck the backup case first.\n\n> Our previous Postgresql 8.2 instance did not have this problem.\n\nThis is strange for me.\n\n-- \nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 13:38:38 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional timeouts on TRUNCATE and simple INSERTs" }, { "msg_contents": "On Tue, Dec 11, 2012 at 3:38 PM, Sergey Konoplev <[email protected]> wrote:\n> On Tue, Dec 11, 2012 at 1:19 PM, Osborn, Jeff <[email protected]> wrote:\n>> I am seeing some strange performance on a new pg9.1 instance. We are seeing occasional statement timeouts on some TRUNCATEs and INSERTs. In both cases, the statements are quite simple:\n>> - TRUNCATE schema.table;\n>> - INSERT INTO schema.table VALUES ($1,2,$2,'');\n>>\n>> Sometimes these will succeed. Occasionally I see timeouts. The statement_timeout is set to 60 seconds. 
These tables are not particularly large; in the case of the insert, the table only has three rows.\n>\n> A most common case is when backup (pg_dump*) is running TRUNCATE has\n> to wait for it because it acquires an access exclusive lock on a table\n> and all other queries including INSERT have to wait for the TRUNCATE.\n> Check the backup case first.\n\nYeah: absolute first thing to check is if your statements are being\nblocked -- you can get that via pg_stat_activity from another session.\n It's a completely different beast if that's the case.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 11 Dec 2012 16:16:43 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Occasional timeouts on TRUNCATE and simple INSERTs" }, { "msg_contents": "Yeah I've been running a cron pulling relevant info from pg_stat_activity. Haven't seen anything yet. Currently looking into the pg_dump situation.\n\n--Jeff O\n \nOn Dec 11, 2012, at 5:16 PM, Merlin Moncure wrote:\n\n> On Tue, Dec 11, 2012 at 3:38 PM, Sergey Konoplev <[email protected]> wrote:\n> \n> Yeah: absolute first thing to check is if your statements are being\n> blocked -- you can get that via pg_stat_activity from another session.\n> It's a completely different beast if that's the case.\n> \n> merlin", "msg_date": "Tue, 11 Dec 2012 17:34:16 -0500", "msg_from": "\"Osborn, Jeff\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional timeouts on TRUNCATE and simple INSERTs" }, { "msg_contents": "You all were right. The time-outs for TRUNCATE were due to a rogue pg_dump. And the issue with the inserts was due to an unrelated code change. \n\nThanks for your help!\n\n--Jeff O\n\nOn Dec 11, 2012, at 5:34 PM, Osborn, Jeff wrote:\n\n> Yeah I've been running a cron pulling relevant info from pg_stat_activity. Haven't seen anything yet. Currently looking into the pg_dump situation.\n> \n> --Jeff O\n> \n> On Dec 11, 2012, at 5:16 PM, Merlin Moncure wrote:\n> \n>> On Tue, Dec 11, 2012 at 3:38 PM, Sergey Konoplev <[email protected]> wrote:\n>> \n>> Yeah: absolute first thing to check is if your statements are being\n>> blocked -- you can get that via pg_stat_activity from another session.\n>> It's a completely different beast if that's the case.\n>> \n>> merlin\n>", "msg_date": "Thu, 13 Dec 2012 15:57:21 -0500", "msg_from": "\"Osborn, Jeff\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Occasional timeouts on TRUNCATE and simple INSERTs" } ]
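A rough sketch of the pg_stat_activity / pg_locks check suggested in the thread above, written against the 9.1 catalogs the poster was running (9.2 renames procpid and current_query to pid and query); it lists statements stuck waiting on an ungranted lock:

    SELECT w.locktype,
           w.relation::regclass AS relation,
           w.mode,
           w.pid,
           a.current_query,
           now() - a.query_start AS waiting_for
    FROM pg_locks w
    JOIN pg_stat_activity a ON a.procpid = w.pid
    WHERE NOT w.granted;

A TRUNCATE blocked by a long-running pg_dump shows up here waiting for an ACCESS EXCLUSIVE lock on the table while the dump still holds its ACCESS SHARE lock.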
[ { "msg_contents": "Hello All\n\nWhile investigating switching to Postgres, we come across a query plan that\nuses hash join and is a lot slower than a nested loop join.\n\nI don't understand why the optimiser chooses the hash join in favor of the\nnested loop. What can I do to get the optimiser to make a better decision\n(nested loop in this case)? I have run analyze on both tables.\n\nThe query is,\n\n/*\n smalltable has about 48,000 records.\n bigtable has about 168,000,000 records.\n invtranref is char(10) and is the primary key for both tables\n*/\nSELECT\n *\nFROM IM_Match_Table smalltable\n inner join invtran bigtable on\n bigtable.invtranref = smalltable.invtranref\n\nThe hash join plan is,\n\n\"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171)\"\n\" Output: smalltable.invtranref, smalltable.itbatchref,\nsmalltable.trantype, smalltable.trandate, smalltable.invprodref,\nsmalltable.invheadref, bigtable.itbatchref, bigtable.invtranref,\nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref,\nbigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units,\nbigtable.tranamount, bigtable.createmode, bigtable.transtat,\nbigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat,\nbigtable.seqnum, bigtable.transign\"\n\" Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\" -> Seq Scan on public.invtran bigtable (cost=0.00..4730787.28\nrows=168121728 width=108)\"\n\" Output: bigtable.itbatchref, bigtable.invtranref,\nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref,\nbigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units,\nbigtable.tranamount, bigtable.createmode, bigtable.transtat,\nbigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat,\nbigtable.seqnum, bigtable.transign\"\n\" -> Hash (cost=1078.61..1078.61 rows=48261 width=63)\"\n\" Output: smalltable.invtranref, smalltable.itbatchref,\nsmalltable.trantype, smalltable.trandate, smalltable.invprodref,\nsmalltable.invheadref\"\n\" -> Seq Scan on public.im_match_table smalltable\n(cost=0.00..1078.61 rows=48261 width=63)\"\n\" Output: smalltable.invtranref, smalltable.itbatchref,\nsmalltable.trantype, smalltable.trandate, smalltable.invprodref,\nsmalltable.invheadref\"\n\nThe nested loop join plan is,\n\n\"Nested Loop (cost=0.00..12888684.07 rows=48261 width=171)\"\n\" Output: smalltable.invtranref, smalltable.itbatchref,\nsmalltable.trantype, smalltable.trandate, smalltable.invprodref,\nsmalltable.invheadref, bigtable.itbatchref, bigtable.invtranref,\nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref,\nbigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units,\nbigtable.tranamount, bigtable.createmode, bigtable.transtat,\nbigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat,\nbigtable.seqnum, bigtable.transign\"\n\" -> Seq Scan on public.im_match_table smalltable (cost=0.00..1078.61\nrows=48261 width=63)\"\n\" Output: smalltable.invtranref, smalltable.itbatchref,\nsmalltable.trantype, smalltable.trandate, smalltable.invprodref,\nsmalltable.invheadref\"\n\" -> Index Scan using pk_invtran on public.invtran bigtable\n(cost=0.00..267.03 rows=1 width=108)\"\n\" Output: bigtable.itbatchref, bigtable.invtranref,\nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref,\nbigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units,\nbigtable.tranamount, bigtable.createmode, bigtable.transtat,\nbigtable.sysversion, bigtable.sysuser, bigtable.rectype, 
bigtable.recstat,\nbigtable.seqnum, bigtable.transign\"\n\" Index Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\nThe version is PostgreSQL 9.2.0 on x86_64-unknown-linux-gnu, compiled by\ngcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit. Server specs are:\n\n - Centos, ext4\n - 24GB memory\n - 6 cores hyper-threaded (Intel(R) Xeon(R) CPU E5645).\n - raid 10 on 4 sata disks\n\nConfig changes are\n\n - shared_buffers = 6GB\n - effective_cache_size = 18GB\n - work_mem = 10MB\n - maintenance_work_mem = 3GB\n\nMany Thanks\nHuan\n\nHello AllWhile investigating switching to Postgres, we come across a query plan \nthat uses hash join and is a lot slower than a nested loop join.\n\nI don't understand why the optimiser chooses the hash join in favor of \nthe nested loop. What can I do to get the optimiser to make a better \ndecision (nested loop in this case)? I have run analyze on both tables.\n\nThe query is,\n/*\n   smalltable has about 48,000 records.\n   bigtable has about 168,000,000 records.\n   invtranref is char(10) and is the primary key for both tables\n*/\nSELECT\n  *\nFROM IM_Match_Table smalltable\n  inner join invtran bigtable on     bigtable.invtranref = smalltable.invtranref\n\nThe hash join plan is,\n\"Hash Join  (cost=1681.87..6414169.04 rows=48261 width=171)\"\n\"  Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref, bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"  Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\"  ->  Seq Scan on public.invtran bigtable  (cost=0.00..4730787.28 rows=168121728 width=108)\"\n\"        Output: bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"  ->  Hash  (cost=1078.61..1078.61 rows=48261 width=63)\"\n\"        Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref\"\n\"        ->  Seq Scan on public.im_match_table smalltable  (cost=0.00..1078.61 rows=48261 width=63)\"\n\"              Output: smalltable.invtranref, \nsmalltable.itbatchref, smalltable.trantype, smalltable.trandate, \nsmalltable.invprodref, smalltable.invheadref\"\n\nThe nested loop join plan is,\n\"Nested Loop  (cost=0.00..12888684.07 rows=48261 width=171)\"\n\"  Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref, bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"  ->  Seq Scan on public.im_match_table smalltable  (cost=0.00..1078.61 rows=48261 
width=63)\"\n\"        Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref\"\n\"  ->  Index Scan using pk_invtran on public.invtran bigtable  (cost=0.00..267.03 rows=1 width=108)\"\n\"        Output: bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"        Index Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\nThe version is PostgreSQL 9.2.0 on x86_64-unknown-linux-gnu, compiled by\n gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit. Server specs are:\nCentos, ext4\n24GB memory \n6 cores hyper-threaded (Intel(R) Xeon(R) CPU E5645). raid 10 on 4 sata disks\nConfig changes are\nshared_buffers = 6GBeffective_cache_size = 18GBwork_mem = 10MBmaintenance_work_mem = 3GB\nMany Thanks\nHuan", "msg_date": "Wed, 12 Dec 2012 15:25:48 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": true, "msg_subject": "hash join vs nested loop join" }, { "msg_contents": "On Dec 12, 2012, at 8:25 AM, Huan Ruan <[email protected]> wrote:\n\n> Hello All\n> \n> While investigating switching to Postgres, we come across a query plan that uses hash join and is a lot slower than a nested loop join.\n> \n> I don't understand why the optimiser chooses the hash join in favor of the nested loop. What can I do to get the optimiser to make a better decision (nested loop in this case)? I have run analyze on both tables.\n> \n\nOptimiser thinks that nested loop is more expensive, because of point PK lookups, which a random io.\nCan you set random_page_cost to 2 or 3 and try again?\n\n\n> The query is,\n> /*\n> smalltable has about 48,000 records.\n> bigtable has about 168,000,000 records.\n> invtranref is char(10) and is the primary key for both tables\n> */\n> SELECT\n> *\n> FROM IM_Match_Table smalltable\n> inner join invtran bigtable on \n> bigtable.invtranref = smalltable.invtranref\n> The hash join plan is,\n> \"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171)\"\n> \" Output: smalltable.invtranref, smalltable.itbatchref, smalltable.trantype, smalltable.trandate, smalltable.invprodref, smalltable.invheadref, bigtable.itbatchref, bigtable.invtranref, bigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, bigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units, bigtable.tranamount, bigtable.createmode, bigtable.transtat, bigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n> \" Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n> \" -> Seq Scan on public.invtran bigtable (cost=0.00..4730787.28 rows=168121728 width=108)\"\n> \" Output: bigtable.itbatchref, bigtable.invtranref, bigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, bigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units, bigtable.tranamount, bigtable.createmode, bigtable.transtat, bigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n> \" -> Hash (cost=1078.61..1078.61 rows=48261 width=63)\"\n> \" Output: smalltable.invtranref, smalltable.itbatchref, smalltable.trantype, smalltable.trandate, smalltable.invprodref, smalltable.invheadref\"\n> \" -> Seq 
Scan on public.im_match_table smalltable (cost=0.00..1078.61 rows=48261 width=63)\"\n> \" Output: smalltable.invtranref, smalltable.itbatchref, smalltable.trantype, smalltable.trandate, smalltable.invprodref, smalltable.invheadref\"\n> The nested loop join plan is,\n> \"Nested Loop (cost=0.00..12888684.07 rows=48261 width=171)\"\n> \" Output: smalltable.invtranref, smalltable.itbatchref, smalltable.trantype, smalltable.trandate, smalltable.invprodref, smalltable.invheadref, bigtable.itbatchref, bigtable.invtranref, bigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, bigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units, bigtable.tranamount, bigtable.createmode, bigtable.transtat, bigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n> \" -> Seq Scan on public.im_match_table smalltable (cost=0.00..1078.61 rows=48261 width=63)\"\n> \" Output: smalltable.invtranref, smalltable.itbatchref, smalltable.trantype, smalltable.trandate, smalltable.invprodref, smalltable.invheadref\"\n> \" -> Index Scan using pk_invtran on public.invtran bigtable (cost=0.00..267.03 rows=1 width=108)\"\n> \" Output: bigtable.itbatchref, bigtable.invtranref, bigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, bigtable.trantype, bigtable.trandate, bigtable.pricedate, bigtable.units, bigtable.tranamount, bigtable.createmode, bigtable.transtat, bigtable.sysversion, bigtable.sysuser, bigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n> \" Index Cond: (bigtable.invtranref = smalltable.invtranref)\"\n> The version is PostgreSQL 9.2.0 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit. Server specs are:\n> Centos, ext4\n> 24GB memory \n> 6 cores hyper-threaded (Intel(R) Xeon(R) CPU E5645).\n> raid 10 on 4 sata disks\n> Config changes are\n> \n> shared_buffers = 6GB\n> effective_cache_size = 18GB\n> work_mem = 10MB\n> maintenance_work_mem = 3GB\n> Many Thanks\n> Huan\n> \n> \n> \n> \n\n\nOn Dec 12, 2012, at 8:25 AM, Huan Ruan <[email protected]> wrote:Hello AllWhile investigating switching to Postgres, we come across a query plan \nthat uses hash join and is a lot slower than a nested loop join.\n\nI don't understand why the optimiser chooses the hash join in favor of \nthe nested loop. What can I do to get the optimiser to make a better \ndecision (nested loop in this case)? 
I have run analyze on both tables.\nOptimiser thinks that nested loop is more expensive, because of point PK lookups, which a random io.Can you set random_page_cost to 2 or 3 and try again?\nThe query is,\n/*\n   smalltable has about 48,000 records.\n   bigtable has about 168,000,000 records.\n   invtranref is char(10) and is the primary key for both tables\n*/\nSELECT\n  *\nFROM IM_Match_Table smalltable\n  inner join invtran bigtable on     bigtable.invtranref = smalltable.invtranref\n\nThe hash join plan is,\n\"Hash Join  (cost=1681.87..6414169.04 rows=48261 width=171)\"\n\"  Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref, bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"  Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\"  ->  Seq Scan on public.invtran bigtable  (cost=0.00..4730787.28 rows=168121728 width=108)\"\n\"        Output: bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"  ->  Hash  (cost=1078.61..1078.61 rows=48261 width=63)\"\n\"        Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref\"\n\"        ->  Seq Scan on public.im_match_table smalltable  (cost=0.00..1078.61 rows=48261 width=63)\"\n\"              Output: smalltable.invtranref, \nsmalltable.itbatchref, smalltable.trantype, smalltable.trandate, \nsmalltable.invprodref, smalltable.invheadref\"\n\nThe nested loop join plan is,\n\"Nested Loop  (cost=0.00..12888684.07 rows=48261 width=171)\"\n\"  Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref, bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"  ->  Seq Scan on public.im_match_table smalltable  (cost=0.00..1078.61 rows=48261 width=63)\"\n\"        Output: smalltable.invtranref, smalltable.itbatchref, \nsmalltable.trantype, smalltable.trandate, smalltable.invprodref, \nsmalltable.invheadref\"\n\"  ->  Index Scan using pk_invtran on public.invtran bigtable  (cost=0.00..267.03 rows=1 width=108)\"\n\"        Output: bigtable.itbatchref, bigtable.invtranref, \nbigtable.invheadref, bigtable.feeplanref, bigtable.invprodref, \nbigtable.trantype, bigtable.trandate, bigtable.pricedate, \nbigtable.units, bigtable.tranamount, bigtable.createmode, \nbigtable.transtat, bigtable.sysversion, bigtable.sysuser, \nbigtable.rectype, bigtable.recstat, bigtable.seqnum, bigtable.transign\"\n\"        Index Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\nThe 
version is PostgreSQL 9.2.0 on x86_64-unknown-linux-gnu, compiled by\n gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit. Server specs are:\nCentos, ext4\n24GB memory \n6 cores hyper-threaded (Intel(R) Xeon(R) CPU E5645). raid 10 on 4 sata disksConfig changes are\nshared_buffers = 6GBeffective_cache_size = 18GBwork_mem = 10MBmaintenance_work_mem = 3GBMany Thanks\nHuan", "msg_date": "Wed, 12 Dec 2012 08:33:33 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "On Dec 12, 2012, at 8:57 AM, Evgeny Shishkin <[email protected]> wrote:\n\n> \n> On Dec 12, 2012, at 8:44 AM, Huan Ruan <[email protected]> wrote:\n> \n>> \n>> On 12 December 2012 15:33, Evgeny Shishkin <[email protected]> wrote:\n>> Optimiser thinks that nested loop is more expensive, because of point PK lookups, which a random io.\n>> Can you set random_page_cost to 2 or 3 and try again?\n>> \n>> Hi Evgeny\n>> \n>> Thanks for the quick reply. Setting random_page_cost to 3 doesn't make a difference, but to 2 makes the optimiser to choose nested loop. However, with such a small penalty for random I/O, I'm worry about this setting will make other small queries incorrectly use index when it should be a sequential scan though. I understand random I/O is expensive, but in this case the optimiser already knows the big table is really big, should it consider a sequential scan will be slower than an index lookup? Scan 170 million records vs index lookup of 50,000 records. Any thoughts?\n>> \n> \n> Yes, this is the most common issue for me. \n> Usually you just have to find the right combination of random and seq scan costs, shared_buffers and effective_cache_size.\n> If some of the queries work well with another value of, say, random_page_cost, then, since it is per session parameter, you can SET it in your session before the query. But over time your table may change in size and distribution and everything brakes. No speaking about general ugliness from application standpoint.\n> \n> May be somebody more experienced would help.\n> \n> Also you can set different costs per tablespace.\n> \n>> Thanks\n>> Huan\n> \n\nAdded CC.\n\n\nOn Dec 12, 2012, at 8:57 AM, Evgeny Shishkin <[email protected]> wrote:On Dec 12, 2012, at 8:44 AM, Huan Ruan <[email protected]> wrote:On 12 December 2012 15:33, Evgeny Shishkin <[email protected]> wrote:\nOptimiser thinks that nested loop is more expensive, because of point PK lookups, which a random io.Can you set random_page_cost to 2 or 3 and try again?Hi Evgeny\nThanks for the quick reply. Setting random_page_cost to 3 doesn't make a difference, but to 2 makes the optimiser to choose nested loop. However, with such a small penalty for random I/O, I'm worry about this setting will make other small queries incorrectly use index when it should be a sequential scan though. I understand random I/O is expensive, but in this case the optimiser already knows the big table is really big, should it consider a sequential scan will be slower than an index lookup? Scan 170 million records vs index lookup of 50,000 records. Any thoughts?\nYes, this is the most common issue for me. Usually you just have to find the right combination of random and seq scan costs, shared_buffers and effective_cache_size.If some of the queries work well with another value of, say, random_page_cost, then, since it is per session parameter, you can SET it in your session before the query. 
But over time your table may change in size and distribution and everything brakes. No speaking about general ugliness from application standpoint.May be somebody more experienced would help.Also you can set different costs per tablespace.Thanks\nHuan\nAdded CC.", "msg_date": "Wed, 12 Dec 2012 08:59:04 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "On Tue, Dec 11, 2012 at 8:25 PM, Huan Ruan <[email protected]> wrote:\n> Hello All\n>\n> While investigating switching to Postgres, we come across a query plan that\n> uses hash join and is a lot slower than a nested loop join.\n>\n> I don't understand why the optimiser chooses the hash join in favor of the\n> nested loop. What can I do to get the optimiser to make a better decision\n> (nested loop in this case)? I have run analyze on both tables.\n>\n> The query is,\n>\n> /*\n> smalltable has about 48,000 records.\n> bigtable has about 168,000,000 records.\n> invtranref is char(10) and is the primary key for both tables\n> */\n> SELECT\n> *\n> FROM IM_Match_Table smalltable\n> inner join invtran bigtable on\n> bigtable.invtranref = smalltable.invtranref\n\n..\n\n> \" -> Index Scan using pk_invtran on public.invtran bigtable (cost=0.00..267.03 rows=1 width=108)\"\n\n\nThis looks like the same large-index over-penalty as discussed in the\nrecent thread \"[PERFORM] Slow query: bitmap scan troubles\".\n\nBack-patching the log(npages) change is starting to look like a good idea.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 08:28:13 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "On 13 December 2012 03:28, Jeff Janes <[email protected]> wrote:\n\n>\n> This looks like the same large-index over-penalty as discussed in the\n> recent thread \"[PERFORM] Slow query: bitmap scan troubles\".\n>\n> Back-patching the log(npages) change is starting to look like a good idea.\n>\n> Cheers,\n>\n> Jeff\n\n\nThanks for the information Jeff. That does seem to be related.\n\nOn 13 December 2012 03:28, Jeff Janes <[email protected]> wrote:\n\nThis looks like the same large-index over-penalty as discussed in the\nrecent thread \"[PERFORM] Slow query: bitmap scan troubles\".\n\nBack-patching the log(npages) change is starting to look like a good idea.\n\nCheers,\n\nJeffThanks for the information Jeff. That does seem to be related.", "msg_date": "Thu, 13 Dec 2012 11:56:21 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
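A minimal way to try the per-session setting discussed in this thread, without touching postgresql.conf, is to scope the change to a single transaction. This is only a sketch: it reuses the table names from the thread, and the value 2 is the experiment Huan ran, not a general recommendation.

    BEGIN;
    -- SET LOCAL affects only this transaction and reverts at COMMIT/ROLLBACK.
    SET LOCAL random_page_cost = 2;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM im_match_table smalltable
    INNER JOIN invtran bigtable
            ON bigtable.invtranref = smalltable.invtranref;
    COMMIT;

To see what the nested loop plan costs without changing the cost model at all, SET LOCAL enable_hashjoin = off gives the same kind of scoped comparison.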
[ { "msg_contents": "Hi,\n\nAnybody knows a JDBC or a multiplatform code that let read the delete rows of a table without writing of a table file?\nAnybody knows how to create a table using a table file?\n\nthanks\n\nHi,Anybody knows a JDBC or a multiplatform code that let read the delete rows of a table without writing of a table file?Anybody knows how to create a table using a table file?thanks", "msg_date": "Wed, 12 Dec 2012 16:26:46 +0000 (GMT)", "msg_from": "Alejandro Carrillo <[email protected]>", "msg_from_op": true, "msg_subject": "Read rows deleted" }, { "msg_contents": "Hi,\n\nOn Wed, Dec 12, 2012 at 8:26 AM, Alejandro Carrillo <[email protected]> wrote:\n> Anybody knows a JDBC or a multiplatform code that let read the delete rows\n> of a table without writing of a table file?\n> Anybody knows how to create a table using a table file?\n\nI am not sure what you mean but may be one of this links will help you:\n\n- http://www.postgresql.org/docs/9.2/static/file-fdw.html\n- http://pgxn.org/dist/odbc_fdw/.\n\n>\n> thanks\n\n\n\n--\nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 12:13:56 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read rows deleted" }, { "msg_contents": "Hi,\n\nAnybody knows how to create a table using a table file? It isn't a fdw, is a file that compose the table in postgresql and get with the pg_relation_filepath function. Ex:\n\nselect pg_relation_filepath('pg_proc');\n\nAnybody knows a JDBC or a multiplatform code that let read the delete rows of a table without writing of a table file?\n\nThanks\n\n\n\n\n>________________________________\n> De: Sergey Konoplev <[email protected]>\n>Para: Alejandro Carrillo <[email protected]> \n>CC: \"[email protected]\" <[email protected]> \n>Enviado: Miércoles 12 de diciembre de 2012 15:13\n>Asunto: Re: [PERFORM] Read rows deleted\n> \n>Hi,\n>\n>On Wed, Dec 12, 2012 at 8:26 AM, Alejandro Carrillo <[email protected]> wrote:\n>> Anybody knows a JDBC or a multiplatform code that let read the delete rows\n>> of a table without writing of a table file?\n>> Anybody knows how to create a table using a table file?\n>\n>I am not sure what you mean but may be one of this links will help you:\n>\n>- http://www.postgresql.org/docs/9.2/static/file-fdw.html\n>- http://pgxn.org/dist/odbc_fdw/.\n>\n>>\n>> thanks\n>\n>\n>\n>--\n>Sergey Konoplev\n>Database and Software Architect\n>http://www.linkedin.com/in/grayhemp\n>\n>Phones:\n>USA +1 415 867 9984\n>Russia, Moscow +7 901 903 0499\n>Russia, Krasnodar +7 988 888 1979\n>\n>Skype: gray-hemp\n>Jabber: [email protected]\n>\n>\n>\nHi,Anybody knows how to create a table using a table file? It isn't a fdw, is a file that compose the table in postgresql and get with the pg_relation_filepath function. 
Ex:select\n pg_relation_filepath('pg_proc');Anybody knows a JDBC or a multiplatform code that let read the delete rows of a table without writing of a table file?Thanks De: Sergey Konoplev <[email protected]> Para: Alejandro Carrillo <[email protected]> CC: \"[email protected]\" <[email protected]> Enviado: Miércoles 12 de diciembre de 2012 15:13 Asunto: Re: [PERFORM] Read rows deleted Hi,On Wed, Dec 12, 2012 at 8:26 AM, Alejandro Carrillo <[email protected]> wrote:> Anybody knows a JDBC or a multiplatform code that let read the delete rows> of a table without writing of a table file?> Anybody knows how to create a table using a table file?I am not sure what you mean but may be one of this links will help you:- http://www.postgresql.org/docs/9.2/static/file-fdw.html- http://pgxn.org/dist/odbc_fdw/.>> thanks--Sergey KonoplevDatabase and Software Architecthttp://www.linkedin.com/in/grayhempPhones:USA +1 415 867 9984Russia, Moscow +7 901 903 0499Russia, Krasnodar +7 988 888 1979Skype: gray-hempJabber: [email protected]", "msg_date": "Wed, 12 Dec 2012 20:24:21 +0000 (GMT)", "msg_from": "Alejandro Carrillo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Read rows deleted" }, { "msg_contents": "\nOn 12/12/2012 03:24 PM, Alejandro Carrillo wrote:\n> Hi,\n>\n> Anybody knows how to create a table using a table file? It isn't a \n> fdw, is a file that compose the table in postgresql and get with the \n> pg_relation_filepath function. Ex:\n>\n> select pg_relation_filepath('pg_proc');\n>\n> Anybody knows a JDBC or a multiplatform code that let read the delete \n> rows of a table without writing of a table file?\n>\n>\n\n\nThis isn't a performance related question. Please ask on the correct \nlist (probably pgsql-general).\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 15:30:11 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read rows deleted" } ]
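For completeness, the file_fdw approach behind the first link reads external flat files (for example CSV), not PostgreSQL's own heap files, so it does not recover deleted rows by itself; it only helps once the data has been exported somewhere. A rough sketch, with a hypothetical server name, file path, and column list:

    CREATE EXTENSION file_fdw;
    CREATE SERVER import_server FOREIGN DATA WRAPPER file_fdw;
    -- Hypothetical layout; the column list must match the file being read.
    CREATE FOREIGN TABLE imported_rows (
        id      integer,
        payload text
    ) SERVER import_server
      OPTIONS (filename '/tmp/exported_rows.csv', format 'csv');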
[ { "msg_contents": "Is there a performance downside to setting track_activity_query_size to \na significantly larger value than the default 1024 (say 10240), given \nthat there's plenty of memory to spare?\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 15:02:54 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "track_activity_query_size" } ]
[ { "msg_contents": "\nA client is testing a migration from 9.1 to 9.2, and has found that a \nlarge number of queries run much faster if they use index-only scans. \nHowever, the only way he has found to get such a plan is by increasing \nthe seq_page_cost to insanely high levels (3.5). Is there any approved \nway to encourage such scans that's a but less violent than this?\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 16:06:52 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "encouraging index-only scans" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> A client is testing a migration from 9.1 to 9.2, and has found that a \n> large number of queries run much faster if they use index-only scans. \n> However, the only way he has found to get such a plan is by increasing \n> the seq_page_cost to insanely high levels (3.5). Is there any approved \n> way to encourage such scans that's a but less violent than this?\n\nIs the pg_class.relallvisible estimate for the table realistic? They\nmight need a few more VACUUM and ANALYZE cycles to get it into the\nneighborhood of reality, if not.\n\nKeep in mind also that small values of random_page_cost necessarily\ndecrease the apparent advantage of index-only scans. If you think 3.5\nis an \"insanely high\" setting, I wonder whether you haven't driven those\nnumbers too far in the other direction to compensate for something else.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 16:32:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encouraging index-only scans" }, { "msg_contents": "\nOn 12/12/2012 04:32 PM, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> A client is testing a migration from 9.1 to 9.2, and has found that a\n>> large number of queries run much faster if they use index-only scans.\n>> However, the only way he has found to get such a plan is by increasing\n>> the seq_page_cost to insanely high levels (3.5). Is there any approved\n>> way to encourage such scans that's a but less violent than this?\n> Is the pg_class.relallvisible estimate for the table realistic? They\n> might need a few more VACUUM and ANALYZE cycles to get it into the\n> neighborhood of reality, if not.\n\nThat was the problem - I didn't know this hadn't been done.\n\n>\n> Keep in mind also that small values of random_page_cost necessarily\n> decrease the apparent advantage of index-only scans. 
If you think 3.5\n> is an \"insanely high\" setting, I wonder whether you haven't driven those\n> numbers too far in the other direction to compensate for something else.\n\nRight.\n\nThanks for the help.\n\ncheers\n\nandrew\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 17:12:36 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encouraging index-only scans" }, { "msg_contents": "\nOn 12/12/2012 05:12 PM, Andrew Dunstan wrote:\n>\n> On 12/12/2012 04:32 PM, Tom Lane wrote:\n>> Andrew Dunstan <[email protected]> writes:\n>>> A client is testing a migration from 9.1 to 9.2, and has found that a\n>>> large number of queries run much faster if they use index-only scans.\n>>> However, the only way he has found to get such a plan is by increasing\n>>> the seq_page_cost to insanely high levels (3.5). Is there any approved\n>>> way to encourage such scans that's a but less violent than this?\n>> Is the pg_class.relallvisible estimate for the table realistic? They\n>> might need a few more VACUUM and ANALYZE cycles to get it into the\n>> neighborhood of reality, if not.\n>\n> That was the problem - I didn't know this hadn't been done.\n>\n\nActually, the table had been analysed but not vacuumed, so this kinda \nbegs the question what will happen to this value on pg_upgrade? Will \npeople's queries suddenly get slower until autovacuum kicks in on the table?\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 17:27:39 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encouraging index-only scans" }, { "msg_contents": "On Wed, Dec 12, 2012 at 05:27:39PM -0500, Andrew Dunstan wrote:\n> \n> On 12/12/2012 05:12 PM, Andrew Dunstan wrote:\n> >\n> >On 12/12/2012 04:32 PM, Tom Lane wrote:\n> >>Andrew Dunstan <[email protected]> writes:\n> >>>A client is testing a migration from 9.1 to 9.2, and has found that a\n> >>>large number of queries run much faster if they use index-only scans.\n> >>>However, the only way he has found to get such a plan is by increasing\n> >>>the seq_page_cost to insanely high levels (3.5). Is there any approved\n> >>>way to encourage such scans that's a but less violent than this?\n> >>Is the pg_class.relallvisible estimate for the table realistic? They\n> >>might need a few more VACUUM and ANALYZE cycles to get it into the\n> >>neighborhood of reality, if not.\n> >\n> >That was the problem - I didn't know this hadn't been done.\n> >\n> \n> Actually, the table had been analysed but not vacuumed, so this\n> kinda begs the question what will happen to this value on\n> pg_upgrade? Will people's queries suddenly get slower until\n> autovacuum kicks in on the table?\n\n[ moved to hackers list.]\n\nYes, this does seem like a problem for upgrades from 9.2 to 9.3? We can\nhave pg_dump --binary-upgrade set these, or have ANALYZE set it. I\nwould prefer the later.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. 
+\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 12 Dec 2012 21:48:37 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Wed, Dec 12, 2012 at 05:27:39PM -0500, Andrew Dunstan wrote:\n>> Actually, the table had been analysed but not vacuumed, so this\n>> kinda begs the question what will happen to this value on\n>> pg_upgrade? Will people's queries suddenly get slower until\n>> autovacuum kicks in on the table?\n\n> [ moved to hackers list.]\n\n> Yes, this does seem like a problem for upgrades from 9.2 to 9.3? We can\n> have pg_dump --binary-upgrade set these, or have ANALYZE set it. I\n> would prefer the later.\n\nANALYZE does not set that value, and is not going to start doing so,\nbecause it doesn't scan enough of the table to derive a trustworthy\nvalue.\n\nIt's been clear for some time that pg_upgrade ought to do something\nabout transferring the \"statistics\" columns in pg_class to the new\ncluster. This is just another example of why.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 12 Dec 2012 22:51:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Dec 13, 2012 at 9:21 AM, Tom Lane <[email protected]> wrote:\n> Bruce Momjian <[email protected]> writes:\n>> On Wed, Dec 12, 2012 at 05:27:39PM -0500, Andrew Dunstan wrote:\n>>> Actually, the table had been analysed but not vacuumed, so this\n>>> kinda begs the question what will happen to this value on\n>>> pg_upgrade? Will people's queries suddenly get slower until\n>>> autovacuum kicks in on the table?\n>\n>> [ moved to hackers list.]\n>\n>> Yes, this does seem like a problem for upgrades from 9.2 to 9.3? We can\n>> have pg_dump --binary-upgrade set these, or have ANALYZE set it. I\n>> would prefer the later.\n>\n> ANALYZE does not set that value, and is not going to start doing so,\n> because it doesn't scan enough of the table to derive a trustworthy\n> value.\n>\n\nShould we do that though ? i.e. scan the entire map and count the\nnumber of bits at the end of ANALYZE, like we do at the end of VACUUM\n? I recently tried to optimize that code path by not recounting at the\nend of the vacuum and instead track the number of all-visible bits\nwhile scanning them in the earlier phases on vacuum. But it turned out\nthat its so fast to count even a million bits that its probably not\nworth doing so.\n\n> It's been clear for some time that pg_upgrade ought to do something\n> about transferring the \"statistics\" columns in pg_class to the new\n> cluster. 
This is just another example of why.\n>\n\n+1.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 13 Dec 2012 10:18:48 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 13 December 2012 03:51, Tom Lane <[email protected]> wrote:\n\n>> Yes, this does seem like a problem for upgrades from 9.2 to 9.3? We can\n>> have pg_dump --binary-upgrade set these, or have ANALYZE set it. I\n>> would prefer the later.\n>\n> ANALYZE does not set that value, and is not going to start doing so,\n> because it doesn't scan enough of the table to derive a trustworthy\n> value.\n\nISTM that ANALYZE doesn't need to scan the table to do this. The\nvismap is now trustworthy and we can scan it separately on ANALYZE.\n\nMore to the point, since we run ANALYZE more frequently than we run\nVACUUM, the value stored by the last VACUUM could be very stale.\n\n> It's been clear for some time that pg_upgrade ought to do something\n> about transferring the \"statistics\" columns in pg_class to the new\n> cluster. This is just another example of why.\n\nAgreed, but that could bring other problems as well.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 13 Dec 2012 09:40:40 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Dec 13, 2012 at 09:40:40AM +0000, Simon Riggs wrote:\n> On 13 December 2012 03:51, Tom Lane <[email protected]> wrote:\n> \n> >> Yes, this does seem like a problem for upgrades from 9.2 to 9.3? We can\n> >> have pg_dump --binary-upgrade set these, or have ANALYZE set it. I\n> >> would prefer the later.\n> >\n> > ANALYZE does not set that value, and is not going to start doing so,\n> > because it doesn't scan enough of the table to derive a trustworthy\n> > value.\n> \n> ISTM that ANALYZE doesn't need to scan the table to do this. The\n> vismap is now trustworthy and we can scan it separately on ANALYZE.\n> \n> More to the point, since we run ANALYZE more frequently than we run\n> VACUUM, the value stored by the last VACUUM could be very stale.\n\nWouldn't inserts affect the relallvisible ratio, but not cause a vacuum?\nSeems we should be having analyze update this independent of pg_upgrade\nneeding it. Also, why is this in pg_class?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. 
+\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 13 Dec 2012 08:46:45 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 13 December 2012 03:51, Tom Lane <[email protected]> wrote:\n> ANALYZE does not set that value, and is not going to start doing so,\n> because it doesn't scan enough of the table to derive a trustworthy\n> value.\n\nI'm slightly surprised by your remarks here, because the commit\nmessage where the relallvisible column was added (commit\na2822fb9337a21f98ac4ce850bb4145acf47ca27) says:\n\n\"Add a column pg_class.relallvisible to remember the number of pages\nthat were all-visible according to the visibility map as of the last\nVACUUM\n(or ANALYZE, or some other operations that update pg_class.relpages).\nUse relallvisible/relpages, instead of an arbitrary constant, to\nestimate how many heap page fetches can be avoided during an\nindex-only scan.\"\n\nHave I missed some nuance?\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 13 Dec 2012 15:31:06 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Dec 13, 2012 at 03:31:06PM +0000, Peter Geoghegan wrote:\n> On 13 December 2012 03:51, Tom Lane <[email protected]> wrote:\n> > ANALYZE does not set that value, and is not going to start doing so,\n> > because it doesn't scan enough of the table to derive a trustworthy\n> > value.\n> \n> I'm slightly surprised by your remarks here, because the commit\n> message where the relallvisible column was added (commit\n> a2822fb9337a21f98ac4ce850bb4145acf47ca27) says:\n> \n> \"Add a column pg_class.relallvisible to remember the number of pages\n> that were all-visible according to the visibility map as of the last\n> VACUUM\n> (or ANALYZE, or some other operations that update pg_class.relpages).\n> Use relallvisible/relpages, instead of an arbitrary constant, to\n> estimate how many heap page fetches can be avoided during an\n> index-only scan.\"\n> \n> Have I missed some nuance?\n\nI am looking back at this issue now and I think you are correct. The\ncommit you mention (Oct 7 2011) says ANALYZE updates the visibility map,\nand the code matches that:\n\n\t if (!inh)\n\t vac_update_relstats(onerel,\n\t RelationGetNumberOfBlocks(onerel),\n\t totalrows,\n-->\t visibilitymap_count(onerel),\n\t hasindex,\n\t InvalidTransactionId);\n\nso if an index scan was not being used after an ANALYZE, it isn't a bad\nallvisibile estimate but something else. This code was in PG 9.2.\n\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. 
+\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 4 Sep 2013 16:56:55 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Wed, Sep 4, 2013 at 04:56:55PM -0400, Bruce Momjian wrote:\n> > \"Add a column pg_class.relallvisible to remember the number of pages\n> > that were all-visible according to the visibility map as of the last\n> > VACUUM\n> > (or ANALYZE, or some other operations that update pg_class.relpages).\n> > Use relallvisible/relpages, instead of an arbitrary constant, to\n> > estimate how many heap page fetches can be avoided during an\n> > index-only scan.\"\n> > \n> > Have I missed some nuance?\n> \n> I am looking back at this issue now and I think you are correct. The\n> commit you mention (Oct 7 2011) says ANALYZE updates the visibility map,\n> and the code matches that:\n> \n> \t if (!inh)\n> \t vac_update_relstats(onerel,\n> \t RelationGetNumberOfBlocks(onerel),\n> \t totalrows,\n> -->\t visibilitymap_count(onerel),\n> \t hasindex,\n> \t InvalidTransactionId);\n> \n> so if an index scan was not being used after an ANALYZE, it isn't a bad\n> allvisibile estimate but something else. This code was in PG 9.2.\n\nActually, I now realize it is more complex than that, and worse. There\nare several questions to study to understand when pg_class.relallvisible\nis updated (which is used to determine if index-only scans are a good\noptimization choice), and when VM all-visible bits are set so heap pages\ncan be skipped during index-only scans:\n\n\t1) When are VM bits set:\n\t\tvacuum (non-full)\n\t\tanalyze (only some random pages)\n\t2) When are massive rows added but VM bits not set:\n\t\tcopy\n\t3) When are VM bits cleared:\n\t\tinsert/update/delete\n\t\tvacuum (non-full)\n\t4) When are VM map files cleared:\n\t\tvacuum full\n\t\tcluster\n\t5) When is pg_class.relallvisible updated via a VM map file scan:\n\t\tvacuum (non-full)\n\t\tanalyze\n\nVacuums run by autovacuum are driven by n_dead_tuples, which is only\nupdate and delete. Therefore, any scenario where vacuum (non-full) is\nnever run will not have significant VM bits set. The only bits that\nwill be set will be by pages visited randomly by analyze.\n\nThe following table activities will not set proper VM bits:\n\n vacuum full\n cluster\n copy\n\t\tinsert-only\n\nIf updates and deletes happen, there will eventually be sufficient\nreason for autovacuum to vacuum the table and set proper VM bits, and\npg_class.relallvisible.\n\nThe calculus we should use to determine when we need to run vacuum has\nchanged with index-only scans, and I am not sure we ever fully addressed\nthis.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 5 Sep 2013 20:14:37 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Sep 5, 2013 at 8:14 PM, Bruce Momjian <[email protected]> wrote:\n> Actually, I now realize it is more complex than that, and worse. 
There\n> are several questions to study to understand when pg_class.relallvisible\n> is updated (which is used to determine if index-only scans are a good\n> optimization choice), and when VM all-visible bits are set so heap pages\n> can be skipped during index-only scans:\n>\n> 1) When are VM bits set:\n> vacuum (non-full)\n> analyze (only some random pages)\n\nAnalyze doesn't set visibility-map bits. It only updates statistics\nabout how many are set.\n\n> The calculus we should use to determine when we need to run vacuum has\n> changed with index-only scans, and I am not sure we ever fully addressed\n> this.\n\nYeah, we didn't. I think the hard part is figuring out what behavior\nwould be best. Counting inserts as well as updates and deletes would\nbe a simple approach, but I don't have much confidence in it. My\nexperience is that having vacuum or analyze kick in during a bulk-load\noperation is a disaster. We'd kinda like to come up with a way to\nmake vacuum run after the bulk load is complete, maybe, but how would\nwe identify that time, and there are probably cases where that's not\nright either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 5 Sep 2013 21:10:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 06/09/13 13:10, Robert Haas wrote:\n> On Thu, Sep 5, 2013 at 8:14 PM, Bruce Momjian <[email protected]> wrote:\n>> Actually, I now realize it is more complex than that, and worse. There\n>> are several questions to study to understand when pg_class.relallvisible\n>> is updated (which is used to determine if index-only scans are a good\n>> optimization choice), and when VM all-visible bits are set so heap pages\n>> can be skipped during index-only scans:\n>>\n>> 1) When are VM bits set:\n>> vacuum (non-full)\n>> analyze (only some random pages)\n> Analyze doesn't set visibility-map bits. It only updates statistics\n> about how many are set.\n>\n>> The calculus we should use to determine when we need to run vacuum has\n>> changed with index-only scans, and I am not sure we ever fully addressed\n>> this.\n> Yeah, we didn't. I think the hard part is figuring out what behavior\n> would be best. Counting inserts as well as updates and deletes would\n> be a simple approach, but I don't have much confidence in it. My\n> experience is that having vacuum or analyze kick in during a bulk-load\n> operation is a disaster. We'd kinda like to come up with a way to\n> make vacuum run after the bulk load is complete, maybe, but how would\n> we identify that time, and there are probably cases where that's not\n> right either.\n>\nHow about a 'VACUUM AFTER' command (part of the 'BEGIN' transaction \nsyntax?) that would:\n\n 1. only be valid in a transaction\n 2. initiate a vacuum after the current transaction completed\n 3. 
defer any vacuum triggered due to other criteria\n\nIf the transaction was rolled back: then if there was a pending vacuum, \ndue to other reasons, it would then be actioned.\n\nOn normal transaction completion, then if there was a pending vacuum it \nwould be combined with the one in the transaction.\n\nStill would need some method of ensuring any pending vacuum was done if \nthe transaction hung, or took too long.\n\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 06/09/13 13:10, Robert Haas wrote:\n\n\nOn Thu, Sep 5, 2013 at 8:14 PM, Bruce Momjian <[email protected]> wrote:\n\n\nActually, I now realize it is more complex than that, and worse. There\nare several questions to study to understand when pg_class.relallvisible\nis updated (which is used to determine if index-only scans are a good\noptimization choice), and when VM all-visible bits are set so heap pages\ncan be skipped during index-only scans:\n\n 1) When are VM bits set:\n vacuum (non-full)\n analyze (only some random pages)\n\n\n\nAnalyze doesn't set visibility-map bits. It only updates statistics\nabout how many are set.\n\n\n\nThe calculus we should use to determine when we need to run vacuum has\nchanged with index-only scans, and I am not sure we ever fully addressed\nthis.\n\n\n\nYeah, we didn't. I think the hard part is figuring out what behavior\nwould be best. Counting inserts as well as updates and deletes would\nbe a simple approach, but I don't have much confidence in it. My\nexperience is that having vacuum or analyze kick in during a bulk-load\noperation is a disaster. We'd kinda like to come up with a way to\nmake vacuum run after the bulk load is complete, maybe, but how would\nwe identify that time, and there are probably cases where that's not\nright either.\n\n\n\nHow about a 'VACUUM AFTER' command (part of\n the 'BEGIN' transaction syntax?) that would:\n\nonly be valid in a transaction\ninitiate a vacuum after the current transaction completed\ndefer any vacuum triggered due to other criteria\n\n\n If the transaction was rolled back: then if there was a pending\n vacuum, due to other reasons, it would then be actioned.\n\n On normal transaction completion, then if there was a pending vacuum\n it would be combined with the one in the transaction.\n\n Still would need some method of ensuring any pending vacuum was done\n if the transaction hung, or took too long.\n\n\n Cheers,\n Gavin", "msg_date": "Fri, 06 Sep 2013 13:29:32 +1200", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Sep 5, 2013 at 09:10:06PM -0400, Robert Haas wrote:\n> On Thu, Sep 5, 2013 at 8:14 PM, Bruce Momjian <[email protected]> wrote:\n> > Actually, I now realize it is more complex than that, and worse. There\n> > are several questions to study to understand when pg_class.relallvisible\n> > is updated (which is used to determine if index-only scans are a good\n> > optimization choice), and when VM all-visible bits are set so heap pages\n> > can be skipped during index-only scans:\n> >\n> > 1) When are VM bits set:\n> > vacuum (non-full)\n> > analyze (only some random pages)\n> \n> Analyze doesn't set visibility-map bits. It only updates statistics\n> about how many are set.\n\nSorry, yes you are correct.\n\n> > The calculus we should use to determine when we need to run vacuum has\n> > changed with index-only scans, and I am not sure we ever fully addressed\n> > this.\n> \n> Yeah, we didn't. 
I think the hard part is figuring out what behavior\n> would be best. Counting inserts as well as updates and deletes would\n> be a simple approach, but I don't have much confidence in it. My\n> experience is that having vacuum or analyze kick in during a bulk-load\n> operation is a disaster. We'd kinda like to come up with a way to\n> make vacuum run after the bulk load is complete, maybe, but how would\n> we identify that time, and there are probably cases where that's not\n> right either.\n\nI am unsure how we have gone a year with index-only scans and I am just\nnow learning that it only works well with update/delete workloads or by\nrunning vacuum manually. I only found this out going back over January\nemails. Did other people know this? Was it not considered a serious\nproblem?\n\nWell, our logic has been that vacuum is only for removing expired rows. \nI think we either need to improve that, or somehow make sequential scans\nupdate the VM map, and then find a way to trigger update of\nrelallvisible even without inserts.\n\nIdeas\n-----\n\nI think we need to detect tables that do not have VM bits set and try to\ndetermine if they should be vacuumed. If a table has most of its VM\nbits set, there in need to vacuum it for VM bit setting.\n\nAutovacuum knows how many pages are in the table via its file size, and\nit can scan the VM map to see how many pages are _not_ marked\nall-visible. If the VM map has many pages that are _not_ marked as\nall-visible, and change count since last vacuum is low, those pages\nmight now be all-visible and vacuum might find them. One problem is\nthat a long-running transaction is not going to update relallvisible\nuntil commit, so you might be vacuuming a table that is being modified,\ne.g. bulk loads. Do we have any way of detecting if a backend is\nmodifying a table?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 5 Sep 2013 22:00:43 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "Bruce Momjian escribi�:\n\n> Ideas\n> -----\n> \n> I think we need to detect tables that do not have VM bits set and try to\n> determine if they should be vacuumed. If a table has most of its VM\n> bits set, there in need to vacuum it for VM bit setting.\n\nI think it's shortsighted to keep thinking of autovacuum as just a way\nto run VACUUM and ANALYZE. We have already discussed work items that\nneed to be done separately, such as truncating the last few empty pages\non a relation that was vacuumed recently. 
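A rough sketch of the kind of detection Bruce describes above, written as a query an administrator could run by hand today; the thresholds are arbitrary illustrations, not tested defaults:

    SELECT c.relname,
           c.relpages,
           c.relallvisible,
           s.n_tup_upd + s.n_tup_del                   AS churn_since_stats_reset,
           greatest(s.last_vacuum, s.last_autovacuum)  AS last_vacuumed
    FROM pg_class c
    JOIN pg_stat_user_tables s ON s.relid = c.oid
    WHERE c.relpages > 1000                     -- ignore tiny tables
      AND c.relallvisible < 0.5 * c.relpages    -- low all-visible coverage
    ORDER BY c.relpages DESC;

Tables that show up here with low churn and an old (or null) last_vacuumed are the insert-only or read-mostly cases this thread is worried about.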
We also need to process a GIN\nindex' pending insertion list; and with minmax indexes I will want to\nrun summarization of heap page ranges.\n\nSo maybe instead of trying to think of VM bit setting as part of vacuum,\nwe could just keep stats about how many pages we might need to scan\nbecause of possibly needing to set the bit, and then doing that in\nautovacuum, independently from actually vacuuming the relation.\n\nI'm not sure if we need to expose all these new maintenance actions as\nSQL commands.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 01:22:36 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I'm not sure if we need to expose all these new maintenance actions as\n> SQL commands.\n\nI strongly think we should, if only for diagnostic purposes. Also to\nadapt to some well defined workloads that the automatic system is not\ndesigned to handle.\n\nRegards,\n-- \nDimitri Fontaine\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 06 Sep 2013 09:23:15 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 09/06/2013 09:23 AM, Dimitri Fontaine wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> I'm not sure if we need to expose all these new maintenance actions as\n>> SQL commands.\n> I strongly think we should, if only for diagnostic purposes. \nIt would be much easier and more flexible to expose them\nas pg_*() function calls, not proper \"commands\".\n> Also to\n> adapt to some well defined workloads that the automatic system is not\n> designed to handle.\n+1\n\n-- \nHannu Krosing\nPostgreSQL Consultant\nPerformance, Scalability and High Availability\n2ndQuadrant Nordic O�\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 06 Sep 2013 13:38:56 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-06 01:22:36 -0400, Alvaro Herrera wrote:\n> I think it's shortsighted to keep thinking of autovacuum as just a way\n> to run VACUUM and ANALYZE. We have already discussed work items that\n> need to be done separately, such as truncating the last few empty pages\n> on a relation that was vacuumed recently. We also need to process a GIN\n> index' pending insertion list; and with minmax indexes I will want to\n> run summarization of heap page ranges.\n\nAgreed.\n\n> So maybe instead of trying to think of VM bit setting as part of vacuum,\n> we could just keep stats about how many pages we might need to scan\n> because of possibly needing to set the bit, and then doing that in\n> autovacuum, independently from actually vacuuming the relation.\n\nI am not sure I understand this though. 
What would be the point to go\nand set all visible and not do the rest of the vacuuming work?\n\nI think triggering vacuuming by scanning the visibility map for the\nnumber of unset bits and use that as another trigger is a good idea. The\nvm should ensure we're not doing superflous work.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 15:08:54 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-06 13:38:56 +0200, Hannu Krosing wrote:\n> On 09/06/2013 09:23 AM, Dimitri Fontaine wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> >> I'm not sure if we need to expose all these new maintenance actions as\n> >> SQL commands.\n> > I strongly think we should, if only for diagnostic purposes. \n> It would be much easier and more flexible to expose them\n> as pg_*() function calls, not proper \"commands\".\n\nI don't think that's as easy as you might imagine. For much of what's\ndone in that context you cannot be in a transaction, you even need to be\nin a toplevel statement (since we internally\nCommitTransactionCommand/StartTransactionCommand).\n\nSo those pg_* commands couldn't be called (except possibly via the\nfastpath function call API ...) which might restrict their usefulnes a\nteensy bit ;)\n\nSo, I think extending the options passed to VACUUM - since it can take\npretty generic options these days - is a more realistic path.\n\n> > Also to\n> > adapt to some well defined workloads that the automatic system is not\n> > designed to handle.\n> +1\n\nWhat would you like to expose individually?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 15:12:07 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 09/06/2013 03:12 PM, Andres Freund wrote:\n> On 2013-09-06 13:38:56 +0200, Hannu Krosing wrote:\n>> On 09/06/2013 09:23 AM, Dimitri Fontaine wrote:\n>>> Alvaro Herrera <[email protected]> writes:\n>>>> I'm not sure if we need to expose all these new maintenance actions as\n>>>> SQL commands.\n>>> I strongly think we should, if only for diagnostic purposes. \n>> It would be much easier and more flexible to expose them\n>> as pg_*() function calls, not proper \"commands\".\n> I don't think that's as easy as you might imagine. For much of what's\n> done in that context you cannot be in a transaction, you even need to be\n> in a toplevel statement (since we internally\n> CommitTransactionCommand/StartTransactionCommand).\n>\n> So those pg_* commands couldn't be called (except possibly via the\n> fastpath function call API ...) 
which might restrict their usefulnes a\n> teensy bit ;)\n>\n> So, I think extending the options passed to VACUUM - since it can take\n> pretty generic options these days - is a more realistic path.\nMight be something convoluted like \n\nVACUUM indexname WITH (function = \"pg_cleanup_gin($1)\");\n\n:)\n>\n>>> Also to\n>>> adapt to some well defined workloads that the automatic system is not\n>>> designed to handle.\n>> +1\n> What would you like to expose individually?\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 06 Sep 2013 16:49:19 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Fri, Sep 6, 2013 at 03:08:54PM +0200, Andres Freund wrote:\n> On 2013-09-06 01:22:36 -0400, Alvaro Herrera wrote:\n> > I think it's shortsighted to keep thinking of autovacuum as just a way\n> > to run VACUUM and ANALYZE. We have already discussed work items that\n> > need to be done separately, such as truncating the last few empty pages\n> > on a relation that was vacuumed recently. We also need to process a GIN\n> > index' pending insertion list; and with minmax indexes I will want to\n> > run summarization of heap page ranges.\n> \n> Agreed.\n> \n> > So maybe instead of trying to think of VM bit setting as part of vacuum,\n> > we could just keep stats about how many pages we might need to scan\n> > because of possibly needing to set the bit, and then doing that in\n> > autovacuum, independently from actually vacuuming the relation.\n> \n> I am not sure I understand this though. What would be the point to go\n> and set all visible and not do the rest of the vacuuming work?\n> \n> I think triggering vacuuming by scanning the visibility map for the\n> number of unset bits and use that as another trigger is a good idea. The\n> vm should ensure we're not doing superflous work.\n\nYes, I think it might be hard to justify a separate VM-set-only scan of\nthe table. If you are already reading the table, and already checking\nto see if you can set the VM bit, I am not sure why you would not also\nremove old rows, especially since removing those rows might be necessary\nto allow setting VM bits.\n\nAnother problem I thought of is that while automatic vacuuming only\nhappens with high update/delete load, index-only scans are best on\nmostly non-write tables, so we have bad behavior where the ideal case\n(static data) doesn't get vm-bits set, while update/delete has the\nvm-bits set, but then cleared as more update/deletes occur.\n\nThe more I look at this the worse it appears. How has this gone\nunaddressed for over a year?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 12:30:56 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-06 12:30:56 -0400, Bruce Momjian wrote:\n> > I am not sure I understand this though. 
What would be the point to go\n> > and set all visible and not do the rest of the vacuuming work?\n> >\n> > I think triggering vacuuming by scanning the visibility map for the\n> > number of unset bits and use that as another trigger is a good idea. The\n> > vm should ensure we're not doing superflous work.\n>\n> Yes, I think it might be hard to justify a separate VM-set-only scan of\n> the table. If you are already reading the table, and already checking\n> to see if you can set the VM bit, I am not sure why you would not also\n> remove old rows, especially since removing those rows might be necessary\n> to allow setting VM bits.\n\nYep. Although adding the table back into the fsm will lead to it being\nused for new writes again...\n\n> Another problem I thought of is that while automatic vacuuming only\n> happens with high update/delete load, index-only scans are best on\n> mostly non-write tables, so we have bad behavior where the ideal case\n> (static data) doesn't get vm-bits set, while update/delete has the\n> vm-bits set, but then cleared as more update/deletes occur.\n\nWell, older tables will get vacuumed due to vacuum_freeze_table_age. So\nat some point they will get vacuumed and the vm bits will get set.\n\n> The more I look at this the worse it appears. How has this gone\n> unaddressed for over a year?\n\nIt's been discussed several times including during the introduction of\nthe feature. I am a bit surprised about the panickey tone in this\nthread.\nYes, we need to overhaul the way vacuum works (to reduce the frequency\nof rewriting stuff repeatedly) and the way it's triggered (priorization,\nmore trigger conditions) but all these are known things and \"just\" need\nsomebody with time.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 18:36:47 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Fri, Sep 6, 2013 at 06:36:47PM +0200, Andres Freund wrote:\n> On 2013-09-06 12:30:56 -0400, Bruce Momjian wrote:\n> > > I am not sure I understand this though. What would be the point to go\n> > > and set all visible and not do the rest of the vacuuming work?\n> > >\n> > > I think triggering vacuuming by scanning the visibility map for the\n> > > number of unset bits and use that as another trigger is a good idea. The\n> > > vm should ensure we're not doing superflous work.\n> >\n> > Yes, I think it might be hard to justify a separate VM-set-only scan of\n> > the table. If you are already reading the table, and already checking\n> > to see if you can set the VM bit, I am not sure why you would not also\n> > remove old rows, especially since removing those rows might be necessary\n> > to allow setting VM bits.\n> \n> Yep. Although adding the table back into the fsm will lead to it being\n> used for new writes again...\n\nYou mean adding _pages_ back into the table's FSM? Yes, that is going\nto cause those pages to get dirty, but it is better than expanding the\ntable size. 
I don't see why you would not update the FSM.\n\n> > Another problem I thought of is that while automatic vacuuming only\n> > happens with high update/delete load, index-only scans are best on\n> > mostly non-write tables, so we have bad behavior where the ideal case\n> > (static data) doesn't get vm-bits set, while update/delete has the\n> > vm-bits set, but then cleared as more update/deletes occur.\n> \n> Well, older tables will get vacuumed due to vacuum_freeze_table_age. So\n> at some point they will get vacuumed and the vm bits will get set.\n\nHmm, good point. That would help with an insert-only workload, as long\nas you can chew through 200M transactions. That doesn't help with a\nread-only workload as we don't consume transction IDs for SELECT.\n\n> > The more I look at this the worse it appears. How has this gone\n> > unaddressed for over a year?\n> \n> It's been discussed several times including during the introduction of\n> the feature. I am a bit surprised about the panickey tone in this\n> thread.\n\nThis December 2012 thread by Andrew Dunstan shows he wasn't aware that a\nmanual VACUUM was required for index-only scans. That thread ended with\nus realizing that pg_upgrade's ANALYZE runs will populate\npg_class.relallvisible.\n\nWhat I didn't see in that thread is an analysis of what cases are going\nto require manual vacuum, and I have seen no work in 9.3 to improve\nthat. I don't even see it on the TODO list.\n\nIt bothers me that we spent time developing index-only scans, but have\nsignificant workloads where it doesn't work, no efforts on improving it,\nand no documentation on manual workarounds. I have not even seen\ndiscussion on how we are going to improve this. I would like to have\nthat discussion now.\n\n> Yes, we need to overhaul the way vacuum works (to reduce the frequency\n> of rewriting stuff repeatedly) and the way it's triggered (priorization,\n> more trigger conditions) but all these are known things and \"just\" need\n> somebody with time.\n\nBased on the work needed to improve this, I would have thought someone\nwould have taken this on during 9.3 development.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 13:01:59 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Fri, Sep 6, 2013 at 01:01:59PM -0400, Bruce Momjian wrote:\n> This December 2012 thread by Andrew Dunstan shows he wasn't aware that a\n> manual VACUUM was required for index-only scans. That thread ended with\n> us realizing that pg_upgrade's ANALYZE runs will populate\n> pg_class.relallvisible.\n> \n> What I didn't see in that thread is an analysis of what cases are going\n> to require manual vacuum, and I have seen no work in 9.3 to improve\n> that. I don't even see it on the TODO list.\n\nOK, let's start the discussion then. I have added a TODO list:\n\n\tImprove setting of visibility map bits for read-only and insert-only workloads\n\nSo, what should trigger an auto-vacuum vacuum for these workloads? 
\nRather than activity, which is what normally drives autovacuum, it is\nlack of activity that should drive it, combined with a high VM cleared\nbit percentage.\n\nIt seems we can use these statistics values:\n\n\t n_tup_ins | bigint \n\t n_tup_upd | bigint \n\t n_tup_del | bigint \n\t n_tup_hot_upd | bigint \n\t n_live_tup | bigint \n\t n_dead_tup | bigint \n\t n_mod_since_analyze | bigint \n\t last_vacuum | timestamp with time zone \n\t last_autovacuum | timestamp with time zone \n\nParticilarly last_vacuum and last_autovacuum can tell us the last time\nof vacuum. If the n_tup_upd/n_tup_del counts are low, and the VM set\nbit count is low, it might need vacuuming, though inserts into existing\npages would complicate that.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 15:13:30 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 9/5/13 8:29 PM, Gavin Flower wrote:\n> How about a 'VACUUM AFTER' command (part of the 'BEGIN' transaction syntax?) that would:\n>\n> 1. only be valid in a transaction\n> 2. initiate a vacuum after the current transaction completed\n> 3. defer any vacuum triggered due to other criteria\n>\n> If the transaction was rolled back: then if there was a pending vacuum, due to other reasons, it would then be actioned.\n>\n> On normal transaction completion, then if there was a pending vacuum it would be combined with the one in the transaction.\n>\n> Still would need some method of ensuring any pending vacuum was done if the transaction hung, or took too long.\n\nI *really* like the idea of BEGIN VACUUM AFTER, but I suspect it would be of very limited usefulness if it didn't account for currently running transactions.\n\nI'm thinking we add a vacuum_after_xid field somewhere (pg_class), and instead of attempting to vacuum inside the backend at commit time the transaction would set that field to it's XID unless the field already had a newer XID in it.\n\nautovac would then add all tables where vacuum_after_xid < the oldest running transaction to it's priority list.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 06 Sep 2013 15:00:02 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 9/6/13 2:13 PM, Bruce Momjian wrote:\n> On Fri, Sep 6, 2013 at 01:01:59PM -0400, Bruce Momjian wrote:\n>> This December 2012 thread by Andrew Dunstan shows he wasn't aware that a\n>> manual VACUUM was required for index-only scans. That thread ended with\n>> us realizing that pg_upgrade's ANALYZE runs will populate\n>> pg_class.relallvisible.\n>>\n>> What I didn't see in that thread is an analysis of what cases are going\n>> to require manual vacuum, and I have seen no work in 9.3 to improve\n>> that. I don't even see it on the TODO list.\n>\n> OK, let's start the discussion then. 
I have added a TODO list:\n>\n> \tImprove setting of visibility map bits for read-only and insert-only workloads\n>\n> So, what should trigger an auto-vacuum vacuum for these workloads?\n> Rather than activity, which is what normally drives autovacuum, it is\n> lack of activity that should drive it, combined with a high VM cleared\n> bit percentage.\n>\n> It seems we can use these statistics values:\n>\n> \t n_tup_ins | bigint\n> \t n_tup_upd | bigint\n> \t n_tup_del | bigint\n> \t n_tup_hot_upd | bigint\n> \t n_live_tup | bigint\n> \t n_dead_tup | bigint\n> \t n_mod_since_analyze | bigint\n> \t last_vacuum | timestamp with time zone\n> \t last_autovacuum | timestamp with time zone\n>\n> Particilarly last_vacuum and last_autovacuum can tell us the last time\n> of vacuum. If the n_tup_upd/n_tup_del counts are low, and the VM set\n> bit count is low, it might need vacuuming, though inserts into existing\n> pages would complicate that.\n\nSomething else that might be useful to look at is if there are any FSM entries or not. True insert only shouldn't have any FSM.\n\nThat said, there's definitely another case to think about... tables that see update activity on newly inserted rows but not on older rows. A work queue that is not pruned would be an example of that:\n\nINSERT new work item\nUPDATE work item SET status = 'In process';\nUPDATE work item SET completion = '50%';\nUPDATE work item SET sattus = 'Complete\", completion = '100%';\n\nIn this case I would expect most of the pages in the table (except the very end) to be all visible.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 06 Sep 2013 15:10:06 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-06 13:01:59 -0400, Bruce Momjian wrote:\n> On Fri, Sep 6, 2013 at 06:36:47PM +0200, Andres Freund wrote:\n> > On 2013-09-06 12:30:56 -0400, Bruce Momjian wrote:\n> > > > I am not sure I understand this though. What would be the point to go\n> > > > and set all visible and not do the rest of the vacuuming work?\n> > > >\n> > > > I think triggering vacuuming by scanning the visibility map for the\n> > > > number of unset bits and use that as another trigger is a good idea. The\n> > > > vm should ensure we're not doing superflous work.\n> > >\n> > > Yes, I think it might be hard to justify a separate VM-set-only scan of\n> > > the table. If you are already reading the table, and already checking\n> > > to see if you can set the VM bit, I am not sure why you would not also\n> > > remove old rows, especially since removing those rows might be necessary\n> > > to allow setting VM bits.\n> > \n> > Yep. Although adding the table back into the fsm will lead to it being\n> > used for new writes again...\n> \n> You mean adding _pages_ back into the table's FSM? Yes, that is going\n> to cause those pages to get dirty, but it is better than expanding the\n> table size. I don't see why you would not update the FSM.\n\nYou're right, we should add them, I wasn't really questioning that. 
I\nwas, quietly so you couldn't hear it, wondering whether we should\npriorize the target buffer selection differently.\n\n> > > Another problem I thought of is that while automatic vacuuming only\n> > > happens with high update/delete load, index-only scans are best on\n> > > mostly non-write tables, so we have bad behavior where the ideal case\n> > > (static data) doesn't get vm-bits set, while update/delete has the\n> > > vm-bits set, but then cleared as more update/deletes occur.\n> > \n> > Well, older tables will get vacuumed due to vacuum_freeze_table_age. So\n> > at some point they will get vacuumed and the vm bits will get set.\n> \n> Hmm, good point. That would help with an insert-only workload, as long\n> as you can chew through 200M transactions. That doesn't help with a\n> read-only workload as we don't consume transction IDs for SELECT.\n\nIt's even 150mio. For the other workloads, its pretty \"common\" wisdom to\nVACUUM after bulk data loading. I think we even document that.\n\n> > > The more I look at this the worse it appears. How has this gone\n> > > unaddressed for over a year?\n> > \n> > It's been discussed several times including during the introduction of\n> > the feature. I am a bit surprised about the panickey tone in this\n> > thread.\n> \n> This December 2012 thread by Andrew Dunstan shows he wasn't aware that a\n> manual VACUUM was required for index-only scans. That thread ended with\n> us realizing that pg_upgrade's ANALYZE runs will populate\n> pg_class.relallvisible.\n\n> What I didn't see in that thread is an analysis of what cases are going\n> to require manual vacuum, and I have seen no work in 9.3 to improve\n> that. I don't even see it on the TODO list.\n\nYes, TODO maybe missing.\n\n> It bothers me that we spent time developing index-only scans, but have\n> significant workloads where it doesn't work, no efforts on improving it,\n> and no documentation on manual workarounds. I have not even seen\n> discussion on how we are going to improve this. I would like to have\n> that discussion now.\n\nIt's not like the feature is useless in this case. You just need to\nperform an extra operation to activate it. I am not saying we shouldn't\ndocument it better, but it seriously worries me that a useful feature is\ndepicted as useless because it requires a manual VACUUM in some\ncircumstances.\n\n> > Yes, we need to overhaul the way vacuum works (to reduce the frequency\n> > of rewriting stuff repeatedly) and the way it's triggered (priorization,\n> > more trigger conditions) but all these are known things and \"just\" need\n> > somebody with time.\n\n> Based on the work needed to improve this, I would have thought someone\n> would have taken this on during 9.3 development.\n\nThere has been some discussion about it indirectly via the freezing\nstuff. 
That also would require more \"advanced\" scheduling.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sat, 7 Sep 2013 00:22:41 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-06 15:13:30 -0400, Bruce Momjian wrote:\n> On Fri, Sep 6, 2013 at 01:01:59PM -0400, Bruce Momjian wrote:\n> > This December 2012 thread by Andrew Dunstan shows he wasn't aware that a\n> > manual VACUUM was required for index-only scans. That thread ended with\n> > us realizing that pg_upgrade's ANALYZE runs will populate\n> > pg_class.relallvisible.\n> > \n> > What I didn't see in that thread is an analysis of what cases are going\n> > to require manual vacuum, and I have seen no work in 9.3 to improve\n> > that. I don't even see it on the TODO list.\n> \n> OK, let's start the discussion then. I have added a TODO list:\n> \n> \tImprove setting of visibility map bits for read-only and insert-only workloads\n> \n> So, what should trigger an auto-vacuum vacuum for these workloads? \n> Rather than activity, which is what normally drives autovacuum, it is\n> lack of activity that should drive it, combined with a high VM cleared\n> bit percentage.\n> \n> It seems we can use these statistics values:\n> \n> \t n_tup_ins | bigint \n> \t n_tup_upd | bigint \n> \t n_tup_del | bigint \n> \t n_tup_hot_upd | bigint \n> \t n_live_tup | bigint \n> \t n_dead_tup | bigint \n> \t n_mod_since_analyze | bigint \n> \t last_vacuum | timestamp with time zone \n> \t last_autovacuum | timestamp with time zone \n> \n> Particilarly last_vacuum and last_autovacuum can tell us the last time\n> of vacuum. If the n_tup_upd/n_tup_del counts are low, and the VM set\n> bit count is low, it might need vacuuming, though inserts into existing\n> pages would complicate that.\n\nI wonder if we shouldn't trigger most vacuums (not analyze!) via unset\nfsm bits. Perhaps combined with keeping track of RecentGlobalXmin to\nmake sure we're not repeatedly checking for work that cannot yet be\ndone.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sat, 7 Sep 2013 00:26:23 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Sat, Sep 7, 2013 at 12:26:23AM +0200, Andres Freund wrote:\n> > So, what should trigger an auto-vacuum vacuum for these workloads? 
\n> > Rather than activity, which is what normally drives autovacuum, it is\n> > lack of activity that should drive it, combined with a high VM cleared\n> > bit percentage.\n> > \n> > It seems we can use these statistics values:\n> > \n> > \t n_tup_ins | bigint \n> > \t n_tup_upd | bigint \n> > \t n_tup_del | bigint \n> > \t n_tup_hot_upd | bigint \n> > \t n_live_tup | bigint \n> > \t n_dead_tup | bigint \n> > \t n_mod_since_analyze | bigint \n> > \t last_vacuum | timestamp with time zone \n> > \t last_autovacuum | timestamp with time zone \n> > \n> > Particilarly last_vacuum and last_autovacuum can tell us the last time\n> > of vacuum. If the n_tup_upd/n_tup_del counts are low, and the VM set\n> > bit count is low, it might need vacuuming, though inserts into existing\n> > pages would complicate that.\n> \n> I wonder if we shouldn't trigger most vacuums (not analyze!) via unset\n> fsm bits. Perhaps combined with keeping track of RecentGlobalXmin to\n\nFsm bits? FSM tracks the free space on each page. How does that help?\n\n> make sure we're not repeatedly checking for work that cannot yet be\n> done.\n\nThe idea of using RecentGlobalXmin to see how much _work_ has happened\nsince the last vacuum is interesting, but it doesn't handle read-only\ntransactions; I am not sure how they can be tracked. You make a good\npoint that 5 minutes passing is meaningless --- you really want to know\nhow many transactions have completed. Unfortunately, our virtual\ntransactions make that hard to compute.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 6 Sep 2013 20:29:08 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-06 20:29:08 -0400, Bruce Momjian wrote:\n> On Sat, Sep 7, 2013 at 12:26:23AM +0200, Andres Freund wrote:\n> > I wonder if we shouldn't trigger most vacuums (not analyze!) via unset\n> > fsm bits. Perhaps combined with keeping track of RecentGlobalXmin to\n> \n> Fsm bits? FSM tracks the free space on each page. How does that\n> help?\n\nErr. I was way too tired when I wrote that. vm bits.\n\n> > make sure we're not repeatedly checking for work that cannot yet be\n> > done.\n\n> The idea of using RecentGlobalXmin to see how much _work_ has happened\n> since the last vacuum is interesting, but it doesn't handle read-only\n> transactions; I am not sure how they can be tracked. You make a good\n> point that 5 minutes passing is meaningless --- you really want to know\n> how many transactions have completed.\n\nSo, what I was pondering went slightly into a different direction:\n\n(lets ignore anti wraparound vacuum for now)\n\nCurrently we trigger autovacuums by the assumed number of dead\ntuples. In the course of it's action it usually will find that it cannot\nremove all dead rows and that it cannot mark everything as all\nvisible. That's because the xmin horizon hasn't advanced far enough. 
We\nwon't trigger another vacuum after that unless there are further dead\ntuples in the relation...\nOne trick if we want to overcome that problem and that we do not handle\nsetting all visible nicely for INSERT only workloads would be to trigger\nvacuum by the amount of pages that are not marked all visible in the vm.\n\nThe problem there is that repeatedly scanning a relation that's only 50%\nvisible where the rest cannot be marked all visible because of a\nlongrunning pg_dump obivously isn't a good idea. So we need something to\nnotify us when there's work to be done. Using elapsed time seems like a\nbad idea because it doesn't adapt to changing workloads very well and\ndoesn't work nicely for different relations.\n\nWhat I was thinking of was to keep track of the oldest xids on pages\nthat cannot be marked all visible. I haven't thought about the\nstatistics part much, but what if we binned the space between\n[RecentGlobalXmin, ->nextXid) into 10 bins and counted the number of\npages falling into each bin. Then after the vacuum finished we could\ncompute how far RecentGlobalXmin would have to progress to make another\nvacuum worthwile by counting the number of pages from the lowest bin\nupwards and use the bin's upper limit as the triggering xid.\n\nNow, we'd definitely need to amend that scheme by something that handles\npages that are newly written to, but it seems something like that\nwouldn't be too hard to implement and would make autovacuum more useful.\n\n> Unfortunately, our virtual transactions make that hard to compute.\n\nI don't think they pose too much of a complexity. We basically only have\nto care about PGXACT->xmin here and virtual transactions don't change\nthe handling of that ...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sat, 7 Sep 2013 07:34:49 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Sat, Sep 7, 2013 at 07:34:49AM +0200, Andres Freund wrote:\n> > The idea of using RecentGlobalXmin to see how much _work_ has happened\n> > since the last vacuum is interesting, but it doesn't handle read-only\n> > transactions; I am not sure how they can be tracked. You make a good\n> > point that 5 minutes passing is meaningless --- you really want to know\n> > how many transactions have completed.\n> \n> So, what I was pondering went slightly into a different direction:\n> \n> (lets ignore anti wraparound vacuum for now)\n> \n> Currently we trigger autovacuums by the assumed number of dead\n> tuples. In the course of it's action it usually will find that it cannot\n> remove all dead rows and that it cannot mark everything as all\n> visible. That's because the xmin horizon hasn't advanced far enough. 
We\n> won't trigger another vacuum after that unless there are further dead\n> tuples in the relation...\n> One trick if we want to overcome that problem and that we do not handle\n> setting all visible nicely for INSERT only workloads would be to trigger\n> vacuum by the amount of pages that are not marked all visible in the vm.\n> \n> The problem there is that repeatedly scanning a relation that's only 50%\n> visible where the rest cannot be marked all visible because of a\n> longrunning pg_dump obivously isn't a good idea. So we need something to\n> notify us when there's work to be done. Using elapsed time seems like a\n> bad idea because it doesn't adapt to changing workloads very well and\n> doesn't work nicely for different relations.\n> \n> What I was thinking of was to keep track of the oldest xids on pages\n> that cannot be marked all visible. I haven't thought about the\n> statistics part much, but what if we binned the space between\n> [RecentGlobalXmin, ->nextXid) into 10 bins and counted the number of\n> pages falling into each bin. Then after the vacuum finished we could\n> compute how far RecentGlobalXmin would have to progress to make another\n> vacuum worthwile by counting the number of pages from the lowest bin\n> upwards and use the bin's upper limit as the triggering xid.\n> \n> Now, we'd definitely need to amend that scheme by something that handles\n> pages that are newly written to, but it seems something like that\n> wouldn't be too hard to implement and would make autovacuum more useful.\n\nThat seems very complicated. I think it would be enough to record the\ncurrent xid at the time of the vacuum, and when testing for later\nvacuums, if that saved xid is earlier than the RecentGlobalXmin, and\nthere have been no inserts/updates/deletes, we know that all of\nthe pages can now be marked as allvisible.\n\nWhat this doesn't handle is the insert case. What we could do there is\nto record the total free space map space, and if the FSM has not changed\nbetween the last vacuum, we can even vacuum if inserts happened in that\nperiod because we assume the inserts are on new pages. One problem\nthere is that the FSM is only updated if an insert will not fit on the\npage. We could record the table size and make sure the table size has\nincreased before we allow inserts to trigger a vm-set vacuum.\n\nNone of this is perfect, but it is better than what we have, and it\nwould eventually get the VM bits set.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sat, 7 Sep 2013 12:50:59 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "Hi,\n\nOn 2013-09-07 12:50:59 -0400, Bruce Momjian wrote:\n> That seems very complicated. I think it would be enough to record the\n> current xid at the time of the vacuum, and when testing for later\n> vacuums, if that saved xid is earlier than the RecentGlobalXmin, and\n> there have been no inserts/updates/deletes, we know that all of\n> the pages can now be marked as allvisible.\n\nBut that would constantly trigger vacuums, or am I missing something? 
Or\nwhat are you suggesting this xid to be used for?\n\nWhat I was talking about was how to evaluate the benefit of triggering\nan VACUUM even if there's not a significant amount of new dead rows. If\nwe know that for a certain xmin horizon there's N pages that potentially\ncan be cleaned and marked all visible we have a change of making\nsensible decisions.\nWe could just use one bin (i.e. use one cutoff xid as you propose) and\ncount the number of pages that would be affected. But that would mean\nwe'd only trigger vacuums very irregularly if you have a workload with\nseveral longrunning transactions. When the oldest of a set of\nlongrunning transactions finishes you possibly can already clean up a\ngood bit reducing the chance of further bloat. Otherwise you have to\nwait for all of them to finish.\n\n> What this doesn't handle is the insert case. What we could do there is\n> to record the total free space map space, and if the FSM has not changed\n> between the last vacuum, we can even vacuum if inserts happened in that\n> period because we assume the inserts are on new pages. One problem\n> there is that the FSM is only updated if an insert will not fit on the\n> page. We could record the table size and make sure the table size has\n> increased before we allow inserts to trigger a vm-set vacuum.\n\nNot sure why that's better than just counting the number of pages that\nhave unset vm bits?\nNote that you cannot rely on the FSM data to be correct all the time, we\ncan only use such tricks to trigger vacuums not for the actual operation\nin the vacuum.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 8 Sep 2013 00:47:35 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Sep 5, 2013 at 7:00 PM, Bruce Momjian <[email protected]> wrote:\n> On Thu, Sep 5, 2013 at 09:10:06PM -0400, Robert Haas wrote:\n>> On Thu, Sep 5, 2013 at 8:14 PM, Bruce Momjian <[email protected]> wrote:\n>> > Actually, I now realize it is more complex than that, and worse. There\n>> > are several questions to study to understand when pg_class.relallvisible\n>> > is updated (which is used to determine if index-only scans are a good\n>> > optimization choice), and when VM all-visible bits are set so heap pages\n>> > can be skipped during index-only scans:\n>> >\n>> > 1) When are VM bits set:\n>> > vacuum (non-full)\n>> > analyze (only some random pages)\n>>\n>> Analyze doesn't set visibility-map bits. It only updates statistics\n>> about how many are set.\n>\n> Sorry, yes you are correct.\n>\n>> > The calculus we should use to determine when we need to run vacuum has\n>> > changed with index-only scans, and I am not sure we ever fully addressed\n>> > this.\n>>\n>> Yeah, we didn't. I think the hard part is figuring out what behavior\n>> would be best. Counting inserts as well as updates and deletes would\n>> be a simple approach, but I don't have much confidence in it. My\n>> experience is that having vacuum or analyze kick in during a bulk-load\n>> operation is a disaster. 
We'd kinda like to come up with a way to\n>> make vacuum run after the bulk load is complete, maybe, but how would\n>> we identify that time, and there are probably cases where that's not\n>> right either.\n>\n> I am unsure how we have gone a year with index-only scans and I am just\n> now learning that it only works well with update/delete workloads or by\n> running vacuum manually. I only found this out going back over January\n> emails. Did other people know this? Was it not considered a serious\n> problem?\n\nI thought it was well known, but maybe I was overly optimistic. I've\nconsidered IOS to be mostly useful for data mining work on read-mostly\ntables, which you would probably vacuum manually after a bulk load.\n\nFor transactional tables, I think that trying to keep the vm set-bit\ndensity high enough would be a losing battle. If we redefined the\nnature of the vm so that doing a HOT update would not clear the\nvisibility bit, perhaps that would change the outcome of this battle.\n\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 8 Sep 2013 14:05:00 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Mon, Sep 9, 2013 at 2:35 AM, Jeff Janes <[email protected]> wrote:\n> On Thu, Sep 5, 2013 at 7:00 PM, Bruce Momjian <[email protected]> wrote:\n>> On Thu, Sep 5, 2013 at 09:10:06PM -0400, Robert Haas wrote:\n>>> On Thu, Sep 5, 2013 at 8:14 PM, Bruce Momjian <[email protected]> wrote:\n>>> > Actually, I now realize it is more complex than that, and worse. There\n>>> > are several questions to study to understand when pg_class.relallvisible\n>>> > is updated (which is used to determine if index-only scans are a good\n>>> > optimization choice), and when VM all-visible bits are set so heap pages\n>>> > can be skipped during index-only scans:\n>>> >\n>>> > 1) When are VM bits set:\n>>> > vacuum (non-full)\n>>> > analyze (only some random pages)\n>>>\n>>> Analyze doesn't set visibility-map bits. It only updates statistics\n>>> about how many are set.\n>>\n>> Sorry, yes you are correct.\n>>\n>>> > The calculus we should use to determine when we need to run vacuum has\n>>> > changed with index-only scans, and I am not sure we ever fully addressed\n>>> > this.\n>>>\n>>> Yeah, we didn't. I think the hard part is figuring out what behavior\n>>> would be best. Counting inserts as well as updates and deletes would\n>>> be a simple approach, but I don't have much confidence in it. My\n>>> experience is that having vacuum or analyze kick in during a bulk-load\n>>> operation is a disaster. We'd kinda like to come up with a way to\n>>> make vacuum run after the bulk load is complete, maybe, but how would\n>>> we identify that time, and there are probably cases where that's not\n>>> right either.\n>>\n>> I am unsure how we have gone a year with index-only scans and I am just\n>> now learning that it only works well with update/delete workloads or by\n>> running vacuum manually. I only found this out going back over January\n>> emails. Did other people know this? Was it not considered a serious\n>> problem?\n>\n> I thought it was well known, but maybe I was overly optimistic. 
I've\n> considered IOS to be mostly useful for data mining work on read-mostly\n> tables, which you would probably vacuum manually after a bulk load.\n>\n> For transactional tables, I think that trying to keep the vm set-bit\n> density high enough would be a losing battle. If we redefined the\n> nature of the vm so that doing a HOT update would not clear the\n> visibility bit, perhaps that would change the outcome of this battle.\n\nWouldn't it make the Vacuum bit in-efficient in the sense that it will\nskip some of the pages in which there are only\nHOT updates for cleaning dead rows.\n\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 9 Sep 2013 09:19:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Sun, Sep 8, 2013 at 8:49 PM, Amit Kapila <[email protected]> wrote:\n> On Mon, Sep 9, 2013 at 2:35 AM, Jeff Janes <[email protected]> wrote:\n>> I thought it was well known, but maybe I was overly optimistic. I've\n>> considered IOS to be mostly useful for data mining work on read-mostly\n>> tables, which you would probably vacuum manually after a bulk load.\n>>\n>> For transactional tables, I think that trying to keep the vm set-bit\n>> density high enough would be a losing battle. If we redefined the\n>> nature of the vm so that doing a HOT update would not clear the\n>> visibility bit, perhaps that would change the outcome of this battle.\n>\n> Wouldn't it make the Vacuum bit in-efficient in the sense that it will\n> skip some of the pages in which there are only\n> HOT updates for cleaning dead rows.\n\nMaybe. But anyone is competent to clean up dead rows from HOT\nupdates, it is not exclusively vacuum that can do it, like it is for\nnon-HOT tuples. So I think any inefficiency would be very small.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 9 Sep 2013 09:03:19 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Sun, Sep 8, 2013 at 12:47:35AM +0200, Andres Freund wrote:\n> Hi,\n> \n> On 2013-09-07 12:50:59 -0400, Bruce Momjian wrote:\n> > That seems very complicated. I think it would be enough to record the\n> > current xid at the time of the vacuum, and when testing for later\n> > vacuums, if that saved xid is earlier than the RecentGlobalXmin, and\n> > there have been no inserts/updates/deletes, we know that all of\n> > the pages can now be marked as allvisible.\n> \n> But that would constantly trigger vacuums, or am I missing something? Or\n> what are you suggesting this xid to be used for?\n\nOK, let me give some specifics. Let's suppose we run a vacuum, and at\nthe time the current xid counter is 200. If we later have autovacuum\ncheck if it should vacuum, and there have been no dead rows generated\n(no update/delete/abort), if the current RecentGlobalXmin is >200, then\nwe know that all the transactions that prevented all-visible marking the\nlast time we ran vacuum have completed. 
That leaves us with just\ninserts that could prevent all-visible.\n\nIf there have been no inserts, we can assume that we can vacuum just the\nnon-all-visible pages, and even if there are only 10, it just means we\nhave to read 10 8k blocks, not the entire table, because the all-visible\nis set for all the rest of the pages.\n\nNow, if there have been inserts, there are a few cases. If the inserts\nhappened in pages that were previously marked all-visible, then we now\nhave pages that lost all-visible, and we probably don't want to vacuum\nthose. Of course, we will not have recorded which pages changed, but\nany decrease in the all-visible table count perhaps should have us\navoiding vacuum just to set the visibility map. We should probably\nupdate our stored vm bit-set count and current xid value so we can check\nagain later to see if things have stabilized.\n\nIf the vm-set bit count is the same as the last time autovacuum checked\nthe table, then the inserts happened either in the vm-bit cleared pages,\nor in new data pages. If the table size is the same, the inserts\nhappened in existing pages, so we probably don't want to vacuum. If the\ntable size has increased, some inserts went into new pages, so we might\nwant to vacuum, but I am unclear how many new pages should force a\nvacuum.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 9 Sep 2013 13:55:51 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Mon, Sep 9, 2013 at 9:33 PM, Jeff Janes <[email protected]> wrote:\n> On Sun, Sep 8, 2013 at 8:49 PM, Amit Kapila <[email protected]> wrote:\n>> On Mon, Sep 9, 2013 at 2:35 AM, Jeff Janes <[email protected]> wrote:\n>>> I thought it was well known, but maybe I was overly optimistic. I've\n>>> considered IOS to be mostly useful for data mining work on read-mostly\n>>> tables, which you would probably vacuum manually after a bulk load.\n>>>\n>>> For transactional tables, I think that trying to keep the vm set-bit\n>>> density high enough would be a losing battle. If we redefined the\n>>> nature of the vm so that doing a HOT update would not clear the\n>>> visibility bit, perhaps that would change the outcome of this battle.\n>>\n>> Wouldn't it make the Vacuum bit in-efficient in the sense that it will\n>> skip some of the pages in which there are only\n>> HOT updates for cleaning dead rows.\n>\n> Maybe. But anyone is competent to clean up dead rows from HOT\n> updates, it is not exclusively vacuum that can do it, like it is for\n> non-HOT tuples.\n\nYes, that is right, but how about freezing of tuples, delaying that\nalso might not be good. 
Also it might not be good for all kind of\nscenarios that always foreground operations take care of cleaning up\ndead rows leaving very less chance for Vacuum (only when it has to\nscan all pages aka anti-wraparound vacuum) to cleanup dead rows.\n\nIf we are sure that Vacuum skipping pages in a database where there\nare less non-HOT updates and deletes (or mostly inserts and\nHot-updates) is not having any significant impact, then it can be\nquite useful for IOS.\n\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 10 Sep 2013 10:39:08 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 9/7/13 12:34 AM, Andres Freund wrote:\n> What I was thinking of was to keep track of the oldest xids on pages\n> that cannot be marked all visible. I haven't thought about the\n> statistics part much, but what if we binned the space between\n> [RecentGlobalXmin, ->nextXid) into 10 bins and counted the number of\n> pages falling into each bin. Then after the vacuum finished we could\n> compute how far RecentGlobalXmin would have to progress to make another\n> vacuum worthwile by counting the number of pages from the lowest bin\n> upwards and use the bin's upper limit as the triggering xid.\n>\n> Now, we'd definitely need to amend that scheme by something that handles\n> pages that are newly written to, but it seems something like that\n> wouldn't be too hard to implement and would make autovacuum more useful.\n\nIf we're binning by XID though you're still dependent on scanning to build that range. Anything that creates dead tuples will also be be problematic, because it's going to unset VM bits on you, and you won't know if it's due to INSERTS or dead tuples.\n\nWhat if we maintained XID stats for ranges of pages in a separate fork? Call it the XidStats fork. Presumably the interesting pieces would be min(xmin) and max(xmax) for pages that aren't all visible. If we did that at a granularity of, say, 1MB worth of pages[1] we're talking 8 bytes per MB, or 1 XidStats page per GB of heap. (Worst case alignment bumps that up to 2 XidStats pages per GB of heap.)\n\nHaving both min(xmin) and max(xmax) for a range of pages would allow for very granular operation of vacuum. Instead of hitting every heap page that's not all-visible, it would only hit those that are not visible and where min(xmin) or max(xmax) were less than RecentGlobalXmin.\n\nOne concern is maintaining this data. A key point is that we don't have to update it every time it changes; if the min/max are only off by a few hundred XIDs there's no point to updating the XidStats page. We'd obviously need the XidStats page to be read in, but even a 100GB heap would be either 100 or 200 XidStats pages.\n\n[1]: There's a trade-off between how much space we 'waste' on XidStats pages and how many heap pages we potentially have to scan in the range. We'd want to see what this looked like in a real system. The thing that helps here is that regardless of what the stats for a particular heap range are, you're not going to scan any pages in that range that are already all-visible.\n-- \nJim C. 
Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 17 Sep 2013 11:37:35 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-17 11:37:35 -0500, Jim Nasby wrote:\n> On 9/7/13 12:34 AM, Andres Freund wrote:\n> >What I was thinking of was to keep track of the oldest xids on pages\n> >that cannot be marked all visible. I haven't thought about the\n> >statistics part much, but what if we binned the space between\n> >[RecentGlobalXmin, ->nextXid) into 10 bins and counted the number of\n> >pages falling into each bin. Then after the vacuum finished we could\n> >compute how far RecentGlobalXmin would have to progress to make another\n> >vacuum worthwile by counting the number of pages from the lowest bin\n> >upwards and use the bin's upper limit as the triggering xid.\n> >\n> >Now, we'd definitely need to amend that scheme by something that handles\n> >pages that are newly written to, but it seems something like that\n> >wouldn't be too hard to implement and would make autovacuum more useful.\n> \n> If we're binning by XID though you're still dependent on scanning to\n> build that range. Anything that creates dead tuples will also be be\n> problematic, because it's going to unset VM bits on you, and you won't\n> know if it's due to INSERTS or dead tuples.\n\nI don't think that's all that much of a problem. In the end, it's a good\nidea to look at pages shortly after they have been filled/been\ntouched. Setting hint bits at that point avoid repetitive IO and in many\ncases we will already be able to mark them all-visible.\nThe binning idea was really about sensibly estimating whether a new scan\nalready makes sense which is currently very hard to judge.\n\nI generally think the current logic for triggering VACUUMs via\nautovacuum doesn't really make all that much sense in the days where we\nhave the visibility map.\n\n> What if we maintained XID stats for ranges of pages in a separate\n> fork? Call it the XidStats fork. Presumably the interesting pieces\n> would be min(xmin) and max(xmax) for pages that aren't all visible. If\n> we did that at a granularity of, say, 1MB worth of pages[1] we're\n> talking 8 bytes per MB, or 1 XidStats page per GB of heap. (Worst case\n> alignment bumps that up to 2 XidStats pages per GB of heap.)\n\nYes, I have thought about similar ideas as well, but I came to the\nconclusion that it's not worth it. 
If you want to make the boundaries\nprecise and the xidstats fork small, you're introducing new contention\npoints because every DML will need to make sure it's correct.\nAlso, the amount of code that would require seems to be bigger than\njustified by the increase of precision when to vacuum.\n\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 18 Sep 2013 01:10:50 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 9/17/13 6:10 PM, Andres Freund wrote:\n>> What if we maintained XID stats for ranges of pages in a separate\n>> >fork? Call it the XidStats fork. Presumably the interesting pieces\n>> >would be min(xmin) and max(xmax) for pages that aren't all visible. If\n>> >we did that at a granularity of, say, 1MB worth of pages[1] we're\n>> >talking 8 bytes per MB, or 1 XidStats page per GB of heap. (Worst case\n>> >alignment bumps that up to 2 XidStats pages per GB of heap.)\n\n> Yes, I have thought about similar ideas as well, but I came to the\n> conclusion that it's not worth it. If you want to make the boundaries\n> precise and the xidstats fork small, you're introducing new contention\n> points because every DML will need to make sure it's correct.\n\nActually, that's not true... the XidStats only need to be \"relatively\" precise. IE: within a few hundred or thousand XIDs.\n\nSo for example, you'd only need to attempt an update if the XID already stored was more than a few hundred/thousand/whatever XIDs away from your XID. If it's any closer don't even bother to update.\n\nThat still leaves potential for thundering herd on the fork buffer lock if you've got a ton of DML on one table across a bunch of backends, but there might be other ways around that. For example, if you know you can update the XID with a CPU-atomic instruction, you don't need to lock the page.\n\n> Also, the amount of code that would require seems to be bigger than\n> justified by the increase of precision when to vacuum.\n\nThat's very possibly true. I haven't had a chance to see how much VM bits help reduce vacuum overhead yet, so I don't have anything to add on this front. Perhaps others might.\n-- \nJim C. Nasby, Data Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 18 Sep 2013 15:28:53 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Sep 17, 2013 at 7:10 PM, Andres Freund <[email protected]> wrote:\n> I generally think the current logic for triggering VACUUMs via\n> autovacuum doesn't really make all that much sense in the days where we\n> have the visibility map.\n\nRight now, whether or not to autovacuum is the rest of a two-pronged\ntest. The first prong is based on number of updates and deletes\nrelative to table size; that triggers a regular autovacuum. 
The\nsecond prong is based on age(relfrozenxid) and triggers a\nnon-page-skipping vacuum (colloquially, an anti-wraparound vacuum).\n\nThe typical case in which this doesn't work out well is when the table\nhas a lot of inserts but few or no updates and deletes. So I propose\nthat we change the first prong to count inserts as well as updates and\ndeletes when deciding whether it needs to vacuum the table. We\nalready use that calculation to decide whether to auto-analyze, so it\nwouldn't be very novel. We know that the work of marking pages\nall-visible will need to be done at some point, and doing it sooner\nwill result in doing it in smaller batches, which seems generally\ngood.\n\nHowever, I do have one concern: it might lead to excessive\nindex-vacuuming. Right now, we skip the index vac step only if there\nZERO dead tuples are found during the heap scan. Even one dead tuple\n(or line pointer) will cause an index vac cycle, which may easily be\nexcessive. So I further propose that we introduce a threshold for\nindex-vac; so that we only do index vac cycle if the number of dead\ntuples exceeds, say 0.1% of the table size.\n\nThoughts? Let the hurling of rotten tomatoes begin.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 19 Sep 2013 14:39:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "Robert Haas <[email protected]> wrote:\n\n> Right now, whether or not to autovacuum is the rest of a two-pronged\n\n> test.  The first prong is based on number of updates and deletes\n> relative to table size; that triggers a regular autovacuum.  The\n> second prong is based on age(relfrozenxid) and triggers a\n> non-page-skipping vacuum (colloquially, an anti-wraparound vacuum).\n> \n> The typical case in which this doesn't work out well is when the table\n> has a lot of inserts but few or no updates and deletes.  So I propose\n> that we change the first prong to count inserts as well as updates and\n> deletes when deciding whether it needs to vacuum the table.  We\n> already use that calculation to decide whether to auto-analyze, so it\n> wouldn't be very novel.  We know that the work of marking pages\n> all-visible will need to be done at some point, and doing it sooner\n> will result in doing it in smaller batches, which seems generally\n> good.\n> \n> However, I do have one concern: it might lead to excessive\n> index-vacuuming.  Right now, we skip the index vac step only if there\n> ZERO dead tuples are found during the heap scan.  Even one dead tuple\n> (or line pointer) will cause an index vac cycle, which may easily be\n> excessive.  
So I further propose that we introduce a threshold for\n> index-vac; so that we only do index vac cycle if the number of dead\n> tuples exceeds, say 0.1% of the table size.\n\n+1  I've been thinking of suggesting something along the same lines,\nfor the same reasons.\n\n \n-- \nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 19 Sep 2013 14:36:53 -0700 (PDT)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-19 14:39:43 -0400, Robert Haas wrote:\n> On Tue, Sep 17, 2013 at 7:10 PM, Andres Freund <[email protected]> wrote:\n> > I generally think the current logic for triggering VACUUMs via\n> > autovacuum doesn't really make all that much sense in the days where we\n> > have the visibility map.\n> \n> Right now, whether or not to autovacuum is the rest of a two-pronged\n> test. The first prong is based on number of updates and deletes\n> relative to table size; that triggers a regular autovacuum. The\n> second prong is based on age(relfrozenxid) and triggers a\n> non-page-skipping vacuum (colloquially, an anti-wraparound vacuum).\n\nAnd I have some hopes we can get rid of that in 9.4 (that alone would be\nworth a bump to 10.0 ;)). I really like Heikki's patch, even if I am\nenvious that I didn't have the idea :P. Although it needs quite a bit of\nwork to be ready.\n\n> The typical case in which this doesn't work out well is when the table\n> has a lot of inserts but few or no updates and deletes. So I propose\n> that we change the first prong to count inserts as well as updates and\n> deletes when deciding whether it needs to vacuum the table. We\n> already use that calculation to decide whether to auto-analyze, so it\n> wouldn't be very novel. We know that the work of marking pages\n> all-visible will need to be done at some point, and doing it sooner\n> will result in doing it in smaller batches, which seems generally\n> good.\n\nYes, that's a desperately needed change.\n\nThe reason I suggested keeping track of the xids of unremovable tuples\nis that the current logic doesn't handle that at all. We just\nunconditionally set n_dead_tuples to zero after a vacuum even if not a\nsingle row could actually be cleaned out. Which has the effect that we\nwill not start a vacuum until enough bloat (or after changing this, new\ninserts) has collected to start vacuum anew. Which then will do twice\nthe work.\n\nResetting n_dead_tuples to the actual remaining dead tuples wouldn't do\nmuch good either - we would just immediately trigger a new vacuum the\nnext time we check, even if the xmin horizon is still the same.\n\n> However, I do have one concern: it might lead to excessive\n> index-vacuuming. Right now, we skip the index vac step only if there\n> ZERO dead tuples are found during the heap scan. Even one dead tuple\n> (or line pointer) will cause an index vac cycle, which may easily be\n> excessive. So I further propose that we introduce a threshold for\n> index-vac; so that we only do index vac cycle if the number of dead\n> tuples exceeds, say 0.1% of the table size.\n\nYes, that's a pretty valid concern. But we can't really do it that\neasily. a) We can only remove dead line pointers when we know there's no\nindex pointing to it anymore. 
Which we only know after the index has\nbeen removed. b) We cannot check the validity of an index pointer if\nthere's no heap tuple for it. Sure, we could check whether we're\npointing to a dead line pointer, but the random io costs of that are\nprohibitive.\nNow, we could just mark line pointers as dead and not mark that page as\nall-visible and pick it up again on the next vacuum cycle. But that\nwould suck long-term.\n\nI think the only real solution here is to store removed tuples tids\n(i.e. items where we've marked as dead) somewhere. Whenever we've found\nsufficient tuples to-be-removed from indexes we do phase 2.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 20 Sep 2013 00:59:29 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Sep 19, 2013 at 6:59 PM, Andres Freund <[email protected]> wrote:\n> The reason I suggested keeping track of the xids of unremovable tuples\n> is that the current logic doesn't handle that at all. We just\n> unconditionally set n_dead_tuples to zero after a vacuum even if not a\n> single row could actually be cleaned out. Which has the effect that we\n> will not start a vacuum until enough bloat (or after changing this, new\n> inserts) has collected to start vacuum anew. Which then will do twice\n> the work.\n>\n> Resetting n_dead_tuples to the actual remaining dead tuples wouldn't do\n> much good either - we would just immediately trigger a new vacuum the\n> next time we check, even if the xmin horizon is still the same.\n\nOne idea would be to store the xmin we used for the vacuum somewhere.\nCould we make that part of the pgstats infrastructure? Or store it in\na new pg_class column? Then we could avoid re-triggering until it\nadvances. Or, maybe better, we could remember the oldest XID that we\nweren't able to remove due to xmin considerations and re-trigger when\nthe horizon passes that point.\n\n>> However, I do have one concern: it might lead to excessive\n>> index-vacuuming. Right now, we skip the index vac step only if there\n>> ZERO dead tuples are found during the heap scan. Even one dead tuple\n>> (or line pointer) will cause an index vac cycle, which may easily be\n>> excessive. So I further propose that we introduce a threshold for\n>> index-vac; so that we only do index vac cycle if the number of dead\n>> tuples exceeds, say 0.1% of the table size.\n>\n> Yes, that's a pretty valid concern. But we can't really do it that\n> easily. a) We can only remove dead line pointers when we know there's no\n> index pointing to it anymore. Which we only know after the index has\n> been removed. b) We cannot check the validity of an index pointer if\n> there's no heap tuple for it. Sure, we could check whether we're\n> pointing to a dead line pointer, but the random io costs of that are\n> prohibitive.\n> Now, we could just mark line pointers as dead and not mark that page as\n> all-visible and pick it up again on the next vacuum cycle. But that\n> would suck long-term.\n>\n> I think the only real solution here is to store removed tuples tids\n> (i.e. items where we've marked as dead) somewhere. 
Whenever we've found\n> sufficient tuples to-be-removed from indexes we do phase 2.\n\nI don't really agree with that. Yes, we could make that change, and\nyes, it might be better than what we're doing today, but it would be\ncomplex and have its own costs. And it doesn't mean that lesser steps\nare without merit. A vacuum pass over the heap buys us a LOT of space\nfor reuse even without touching the indexes: we don't reclaim the line\npointers, but we do reclaim the space for the tuples themselves, which\nis a big deal. So being able to do that more frequently without\ncausing problems has a lot of value, I think. The fact that we get to\nset all-visible bits along the way makes future vacuums cheaper, and\nmakes index scans work better, so that's good too. And the first\nvacuum to find a dead tuple will dirty the page to truncate it to a\ndead line pointer, while any subsequent revisits prior to the index\nvac cycle will only examine the page without dirtying it. All in all,\njust leaving the page to be caught be a future vacuum doesn't seem\nthat bad to me, at least for a first cut.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 20 Sep 2013 11:30:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2013-09-20 11:30:26 -0400, Robert Haas wrote:\n> On Thu, Sep 19, 2013 at 6:59 PM, Andres Freund <[email protected]> wrote:\n> > The reason I suggested keeping track of the xids of unremovable tuples\n> > is that the current logic doesn't handle that at all. We just\n> > unconditionally set n_dead_tuples to zero after a vacuum even if not a\n> > single row could actually be cleaned out. Which has the effect that we\n> > will not start a vacuum until enough bloat (or after changing this, new\n> > inserts) has collected to start vacuum anew. Which then will do twice\n> > the work.\n> >\n> > Resetting n_dead_tuples to the actual remaining dead tuples wouldn't do\n> > much good either - we would just immediately trigger a new vacuum the\n> > next time we check, even if the xmin horizon is still the same.\n> \n> One idea would be to store the xmin we used for the vacuum somewhere.\n> Could we make that part of the pgstats infrastructure? Or store it in\n> a new pg_class column? Then we could avoid re-triggering until it\n> advances. 
Or, maybe better, we could remember the oldest XID that we\n> weren't able to remove due to xmin considerations and re-trigger when\n> the horizon passes that point.\n\nI suggested a slightly more complex variant of this upthread:\nhttp://archives.postgresql.org/message-id/20130907053449.GE626072%40alap2.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 20 Sep 2013 17:51:33 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Fri, Sep 20, 2013 at 11:51 AM, Andres Freund <[email protected]> wrote:\n> On 2013-09-20 11:30:26 -0400, Robert Haas wrote:\n>> On Thu, Sep 19, 2013 at 6:59 PM, Andres Freund <[email protected]> wrote:\n>> > The reason I suggested keeping track of the xids of unremovable tuples\n>> > is that the current logic doesn't handle that at all. We just\n>> > unconditionally set n_dead_tuples to zero after a vacuum even if not a\n>> > single row could actually be cleaned out. Which has the effect that we\n>> > will not start a vacuum until enough bloat (or after changing this, new\n>> > inserts) has collected to start vacuum anew. Which then will do twice\n>> > the work.\n>> >\n>> > Resetting n_dead_tuples to the actual remaining dead tuples wouldn't do\n>> > much good either - we would just immediately trigger a new vacuum the\n>> > next time we check, even if the xmin horizon is still the same.\n>>\n>> One idea would be to store the xmin we used for the vacuum somewhere.\n>> Could we make that part of the pgstats infrastructure? Or store it in\n>> a new pg_class column? Then we could avoid re-triggering until it\n>> advances. Or, maybe better, we could remember the oldest XID that we\n>> weren't able to remove due to xmin considerations and re-trigger when\n>> the horizon passes that point.\n>\n> I suggested a slightly more complex variant of this upthread:\n> http://archives.postgresql.org/message-id/20130907053449.GE626072%40alap2.anarazel.de\n\nAh, yeah. Sorry, I forgot about that.\n\nPersonally, I'd try the simpler version first. But I think whoever\ntakes the time to implement this will probably get to pick.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 20 Sep 2013 11:58:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Thu, Sep 19, 2013 at 02:39:43PM -0400, Robert Haas wrote:\n> Right now, whether or not to autovacuum is the rest of a two-pronged\n> test. The first prong is based on number of updates and deletes\n> relative to table size; that triggers a regular autovacuum. The\n> second prong is based on age(relfrozenxid) and triggers a\n> non-page-skipping vacuum (colloquially, an anti-wraparound vacuum).\n> \n> The typical case in which this doesn't work out well is when the table\n> has a lot of inserts but few or no updates and deletes. 
So I propose\n> that we change the first prong to count inserts as well as updates and\n> deletes when deciding whether it needs to vacuum the table. We\n> already use that calculation to decide whether to auto-analyze, so it\n> wouldn't be very novel. We know that the work of marking pages\n> all-visible will need to be done at some point, and doing it sooner\n> will result in doing it in smaller batches, which seems generally\n> good.\n> \n> However, I do have one concern: it might lead to excessive\n> index-vacuuming. Right now, we skip the index vac step only if there\n> ZERO dead tuples are found during the heap scan. Even one dead tuple\n> (or line pointer) will cause an index vac cycle, which may easily be\n> excessive. So I further propose that we introduce a threshold for\n> index-vac; so that we only do index vac cycle if the number of dead\n> tuples exceeds, say 0.1% of the table size.\n> \n> Thoughts? Let the hurling of rotten tomatoes begin.\n\nRobert, where are we on this? Should I post a patch?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 31 Jan 2014 22:22:07 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Fri, Jan 31, 2014 at 10:22 PM, Bruce Momjian <[email protected]> wrote:\n> On Thu, Sep 19, 2013 at 02:39:43PM -0400, Robert Haas wrote:\n>> Right now, whether or not to autovacuum is the rest of a two-pronged\n>> test. The first prong is based on number of updates and deletes\n>> relative to table size; that triggers a regular autovacuum. The\n>> second prong is based on age(relfrozenxid) and triggers a\n>> non-page-skipping vacuum (colloquially, an anti-wraparound vacuum).\n>>\n>> The typical case in which this doesn't work out well is when the table\n>> has a lot of inserts but few or no updates and deletes. So I propose\n>> that we change the first prong to count inserts as well as updates and\n>> deletes when deciding whether it needs to vacuum the table. We\n>> already use that calculation to decide whether to auto-analyze, so it\n>> wouldn't be very novel. We know that the work of marking pages\n>> all-visible will need to be done at some point, and doing it sooner\n>> will result in doing it in smaller batches, which seems generally\n>> good.\n>>\n>> However, I do have one concern: it might lead to excessive\n>> index-vacuuming. Right now, we skip the index vac step only if there\n>> ZERO dead tuples are found during the heap scan. Even one dead tuple\n>> (or line pointer) will cause an index vac cycle, which may easily be\n>> excessive. So I further propose that we introduce a threshold for\n>> index-vac; so that we only do index vac cycle if the number of dead\n>> tuples exceeds, say 0.1% of the table size.\n>>\n>> Thoughts? Let the hurling of rotten tomatoes begin.\n>\n> Robert, where are we on this? Should I post a patch?\n\nI started working on this at one point but didn't finish the\nimplementation, let alone the no-doubt-onerous performance testing\nthat will be needed to validate whatever we come up with. 
It would be\nreally easy to cause serious regressions with ill-considered changes\nin this area, and I don't think many people here have the bandwidth\nfor a detailed study of all the different workloads that might be\naffected here right this very minute. More generally, you're sending\nall these pings three weeks after the deadline for CF4. I don't think\nthat's a good time to encourage people to *start* revising old\npatches, or writing new ones.\n\nI've also had some further thoughts about the right way to drive\nvacuum scheduling. I think what we need to do is tightly couple the\nrate at which we're willing to do vacuuming to the rate at which we're\nincurring \"vacuum debt\". That is, if we're creating 100kB/s of pages\nneeding vacuum, we vacuum at 2-3MB/s (with default settings). If\nwe're creating 10MB/s of pages needing vacuum, we *still* vacuum at\n2-3MB/s. Not shockingly, vacuum gets behind, the database bloats, and\neverything goes to heck. The rate of vacuuming needs to be tied\nsomehow to the rate at which we're creating stuff that needs to be\nvacuumed. Right now we don't even have a way to measure that, let\nalone auto-regulate the aggressiveness of autovacuum on that basis.\n\nSimilarly, for marking of pages as all-visible, we currently make the\nsame decision whether the relation is getting index-scanned (in which\ncase the failure to mark those pages all-visible may be suppressing\nthe use of index scans or making them less effective) or whether it's\nnot being accessed at all (in which case vacuuming it won't help\nanything, and might hurt by pushing other pages out of cache). Again,\nif we had better statistics, we could measure this - counting heap\nfetches for actual index-only scans plus heap fetches for index scans\nthat might have been planned index-only scans but for the relation\nhaving too few all-visible pages doesn't sound like an impossible\nmetric to gather. And if we had that, we could use it to trigger\nvacuuming, instead of guessing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Mon, 3 Feb 2014 11:55:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Mon, Feb 3, 2014 at 8:55 AM, Robert Haas <[email protected]> wrote:\n\n\n> I've also had some further thoughts about the right way to drive\n> vacuum scheduling. I think what we need to do is tightly couple the\n> rate at which we're willing to do vacuuming to the rate at which we're\n> incurring \"vacuum debt\". That is, if we're creating 100kB/s of pages\n> needing vacuum, we vacuum at 2-3MB/s (with default settings).\n\n\nIf we can tolerate 2-3MB/s without adverse impact on other work, then we\ncan tolerate it. Do we gain anything substantial by sand-bagging it?\n\n\n\n> If\n> we're creating 10MB/s of pages needing vacuum, we *still* vacuum at\n> 2-3MB/s. Not shockingly, vacuum gets behind, the database bloats, and\n> everything goes to heck.\n\n\n(Your reference to bloat made be me think your comments here are about\nvacuuming in general, not specific to IOS. 
If that isn't the case, then\nplease ignore.)\n\nIf we can only vacuum at 2-3MB/s without adversely impacting other\nactivity, but we are creating 10MB/s of future vacuum need, then there are\nbasically two possibilities I can think of. Either the 10MB/s represents a\nspike, and vacuum should tolerate it and hope to catch up on the debt\nlater. Or it represents a new permanent condition, in which case I bought\ntoo few hard drives for the work load, and no scheduling decision that\nautovacuum can make will save me from my folly. Perhaps there is some\nmiddle ground between those possibilities, but I don't see room for much\nmiddle ground.\n\nI guess there might be entirely different possibilities not between those\ntwo; for example, I don't realize I'm doing something that is generating\n10MB/s of vacuum debt, and would like to have this thing I'm doing be\nautomatically throttled to the point it doesn't interfere with other\nprocesses (either directly, or indirectly by bloat)\n\n\n\n> The rate of vacuuming needs to be tied\n> somehow to the rate at which we're creating stuff that needs to be\n> vacuumed. Right now we don't even have a way to measure that, let\n> alone auto-regulate the aggressiveness of autovacuum on that basis.\n>\n\nThere is the formula used to decide when a table gets vacuumed. Isn't the\ntime delta in this formula a measure of how fast we are creating stuff that\nneeds to be vacuumed for bloat reasons? Is your objection that it doesn't\ninclude other reasons we might want to vacuum, or that it just doesn't work\nvery well, or that is not explicitly exposed?\n\n\n\n\n> Similarly, for marking of pages as all-visible, we currently make the\n> same decision whether the relation is getting index-scanned (in which\n> case the failure to mark those pages all-visible may be suppressing\n> the use of index scans or making them less effective) or whether it's\n> not being accessed at all (in which case vacuuming it won't help\n> anything, and might hurt by pushing other pages out of cache).\n\n\nIf it is not getting accessed at all because the database is not very\nactive right now, that would be the perfect time to vacuum it. Between \"I\ncan accurately project current patterns of (in)activity into the future\"\nand \"People don't build large tables just to ignore them forever\", I think\nthe latter is more likely to be true. If the system is busy but this\nparticular table is not, then that would be a better reason to\nde-prioritise vacuuming that table. But can this degree of reasoning\nreally be implemented in a practical way? In core?\n\n\n> Again,\n> if we had better statistics, we could measure this - counting heap\n> fetches for actual index-only scans plus heap fetches for index scans\n> that might have been planned index-only scans but for the relation\n> having too few all-visible pages doesn't sound like an impossible\n> metric to gather.\n\n\nMy experience has been that if too few pages are all visible, it generally\nswitches to a seq scan, not an index scan of a different index. But many\nthings that are semantically possible to be index-only-scans would never be\nplanned that way even if allvisible were 100%, so I think it would have to\ndo two planning passes, one with the real allvisible, and a hypothetical\none with allvisible set to 100%. 
And then there is the possibility that,\nwhile a high allvisible would be useful, the table is so active that no\namount of vacuuming could ever keep it high.\n\nCheers,\n\nJeff\n", "msg_date": "Tue, 4 Feb 2014 16:14:51 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" },
{ "msg_contents": "First, thanks for this thoughtful email.\n\nOn Tue, Feb 4, 2014 at 7:14 PM, Jeff Janes <[email protected]> wrote:\n> On Mon, Feb 3, 2014 at 8:55 AM, Robert Haas <[email protected]> wrote:\n>> I've also had some further thoughts about the right way to drive\n>> vacuum scheduling. I think what we need to do is tightly couple the\n>> rate at which we're willing to do vacuuming to the rate at which we're\n>> incurring \"vacuum debt\". That is, if we're creating 100kB/s of pages\n>> needing vacuum, we vacuum at 2-3MB/s (with default settings).\n>\n> If we can tolerate 2-3MB/s without adverse impact on other work, then we can\n> tolerate it. Do we gain anything substantial by sand-bagging it?\n\nNo. The problem is the other direction.\n\n>> If\n>> we're creating 10MB/s of pages needing vacuum, we *still* vacuum at\n>> 2-3MB/s. Not shockingly, vacuum gets behind, the database bloats, and\n>> everything goes to heck.\n>\n> (Your reference to bloat made be me think your comments here are about\n> vacuuming in general, not specific to IOS. If that isn't the case, then\n> please ignore.)\n>\n> If we can only vacuum at 2-3MB/s without adversely impacting other activity,\n> but we are creating 10MB/s of future vacuum need, then there are basically\n> two possibilities I can think of. Either the 10MB/s represents a spike, and\n> vacuum should tolerate it and hope to catch up on the debt later. Or it\n> represents a new permanent condition, in which case I bought too few hard\n> drives for the work load, and no scheduling decision that autovacuum can\n> make will save me from my folly. Perhaps there is some middle ground between\n> those possibilities, but I don't see room for much middle ground.\n>\n> I guess there might be entirely different possibilities not between those\n> two; for example, I don't realize I'm doing something that is generating\n> 10MB/s of vacuum debt, and would like to have this thing I'm doing be\n> automatically throttled to the point it doesn't interfere with other\n> processes (either directly, or indirectly by bloat)\n\nThe underlying issue here is that, in order for there not to be a\nproblem, a user needs to configure their autovacuum processes to\nvacuum at a rate which is greater than or equal to the average rate at\nwhich vacuum debt is being created. If they don't, they get runaway\nbloat. 
But to do that, they need to know at what rate they are\ncreating vacuum debt, which is almost impossible to figure out right\nnow; and even if they did know it, they'd then need to figure out what\nvacuum cost delay settings would allow vacuuming at a rate sufficient\nto keep up, which isn't quite as hard to estimate but certainly\ninvolves nontrivial math. So a lot of people have this set wrong, and\nit's not easy to get it right except by frobbing the settings until\nyou find something that works well in practice.\n\nAlso, a whole *lot* of problems in this area are caused by cases where\nthe rate at which vacuum debt is being created *changes*. Autovacuum\nis keeping up, but then you have either a load spike or just a gradual\nincrease in activity and it doesn't keep up any more. You don't\nnecessarily notice right away, and by the time you do there's no easy\nway to recover. If you've got a table with lots of dead tuples in it,\nbut it's also got enough internal freespace to satisfy as many inserts\nand updates as are happening, then it's possibly reasonable to put off\nvacuuming in the hopes that system load will be lower at some time in\nthe future. But if you've got a table with lots of dead tuples in it,\nand you're extending it to create internal freespace instead of\nvacuuming it, it is highly like that you are not doing what will make\nthe user most happy. Even if vacuuming that table slows down\nforeground activity quite badly, it is probably better than\naccumulating an arbitrary amount of bloat.\n\n>> The rate of vacuuming needs to be tied\n>> somehow to the rate at which we're creating stuff that needs to be\n>> vacuumed. Right now we don't even have a way to measure that, let\n>> alone auto-regulate the aggressiveness of autovacuum on that basis.\n>\n> There is the formula used to decide when a table gets vacuumed. Isn't the\n> time delta in this formula a measure of how fast we are creating stuff that\n> needs to be vacuumed for bloat reasons? Is your objection that it doesn't\n> include other reasons we might want to vacuum, or that it just doesn't work\n> very well, or that is not explicitly exposed?\n\nAFAICT, the problem isn't when the table gets vacuumed so much as *how\nfast* it gets vacuumed. The autovacuum algorithm does a fine job\nselecting tables for vacuuming, for the most part. There are problems\nwith insert-only tables and sometimes for large tables the default\nthreshold (0.20) is too high, but it's not terrible. However, the\nlimit on the overall rate of vacuuming activity to 2-3MB/s regardless\nof how fast we're creating vacuum debt is a big problem.\n\n>> Similarly, for marking of pages as all-visible, we currently make the\n>> same decision whether the relation is getting index-scanned (in which\n>> case the failure to mark those pages all-visible may be suppressing\n>> the use of index scans or making them less effective) or whether it's\n>> not being accessed at all (in which case vacuuming it won't help\n>> anything, and might hurt by pushing other pages out of cache).\n>\n> If it is not getting accessed at all because the database is not very active\n> right now, that would be the perfect time to vacuum it. Between \"I can\n> accurately project current patterns of (in)activity into the future\" and\n> \"People don't build large tables just to ignore them forever\", I think the\n> latter is more likely to be true. If the system is busy but this particular\n> table is not, then that would be a better reason to de-prioritise vacuuming\n> that table. 
But can this degree of reasoning really be implemented in a\n> practical way? In core?\n\nI don't know. But the algorithm for determining the rate at which we\nvacuum (2-3MB/s) could hardly be stupider than it is right now. It's\nalmost a constant, and to the extent that it's not a constant, it\ndepends on the wrong things. The fact that getting this perfectly\nright is unlikely to be easy, and may be altogether impossible,\nshouldn't discourage us from trying to come up with something better\nthan what we have now.\n\n> My experience has been that if too few pages are all visible, it generally\n> switches to a seq scan, not an index scan of a different index. But many\n> things that are semantically possible to be index-only-scans would never be\n> planned that way even if allvisible were 100%, so I think it would have to\n> do two planning passes, one with the real allvisible, and a hypothetical one\n> with allvisible set to 100%. And then there is the possibility that, while\n> a high allvisible would be useful, the table is so active that no amount of\n> vacuuming could ever keep it high.\n\nYeah, those are all good points.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Wed, 5 Feb 2014 16:19:20 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Mon, Feb 3, 2014 at 11:55:34AM -0500, Robert Haas wrote:\n> > Robert, where are we on this? Should I post a patch?\n> \n> I started working on this at one point but didn't finish the\n> implementation, let alone the no-doubt-onerous performance testing\n> that will be needed to validate whatever we come up with. It would be\n> really easy to cause serious regressions with ill-considered changes\n> in this area, and I don't think many people here have the bandwidth\n> for a detailed study of all the different workloads that might be\n> affected here right this very minute. More generally, you're sending\n> all these pings three weeks after the deadline for CF4. I don't think\n> that's a good time to encourage people to *start* revising old\n> patches, or writing new ones.\n> \n> I've also had some further thoughts about the right way to drive\n> vacuum scheduling. I think what we need to do is tightly couple the\n\nI understand the problems with vacuum scheduling, but I was trying to\naddress _just_ the insert-only workload problem for index-only scans.\n\nRight now, as I remember, only vacuum sets the visibility bits. If we\ndon't want to make vacuum trigger for insert-only workloads, can we set\npages all-visible more often? \n\nIs there a reason that a sequential scan, which does do page pruning,\ndoesn't set the visibility bits too? Or does it? Can an non-index-only\nindex scan that finds the heap tuple all-visible and the page not \nall-visible check the other items on the page to see if the page can be\nmarked all-visible? Does analyze set pages all-visible?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. 
+\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 10:56:07 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 10:56 AM, Bruce Momjian <[email protected]> wrote:\n> Right now, as I remember, only vacuum sets the visibility bits. If we\n> don't want to make vacuum trigger for insert-only workloads, can we set\n> pages all-visible more often?\n>\n> Is there a reason that a sequential scan, which does do page pruning,\n> doesn't set the visibility bits too? Or does it? Can an non-index-only\n> index scan that finds the heap tuple all-visible and the page not\n> all-visible check the other items on the page to see if the page can be\n> marked all-visible? Does analyze set pages all-visible?\n\nA sequential scan will set hint bits and will prune the page, but\npruning the page doesn't ever mark it all-visible; that logic is\nentirely in vacuum. If that could be made cheap enough to be\nnegligible, it might well be worth doing in heap_page_prune(). I\nthink there might be a way to do that, but it's a bit tricky because\nthe pruning logic iterates over the page in a somewhat complex way,\nnot just a straightforward scan of all the item pointers the way the\nexisting logic doesn't. It would be pretty cool if we could just use\na bit out of the heap-prune xlog record to indicate whether the\nall-visible bit should be set; then we'd gain the benefit of marking\nthings all-visible much more often without needing vacuum.\n\nThat doesn't help insert-only tables much, though, because those won't\nrequire pruning. We set hint bits (which dirties the page) but\ncurrently don't write WAL. We'd have to change that to set the\nall-visible bit when scanning such a table, and that would be\nexpensive. :-(\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 11:28:36 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 11:28:36AM -0500, Robert Haas wrote:\n> A sequential scan will set hint bits and will prune the page, but\n> pruning the page doesn't ever mark it all-visible; that logic is\n> entirely in vacuum. If that could be made cheap enough to be\n> negligible, it might well be worth doing in heap_page_prune(). I\n> think there might be a way to do that, but it's a bit tricky because\n> the pruning logic iterates over the page in a somewhat complex way,\n> not just a straightforward scan of all the item pointers the way the\n> existing logic doesn't. It would be pretty cool if we could just use\n> a bit out of the heap-prune xlog record to indicate whether the\n> all-visible bit should be set; then we'd gain the benefit of marking\n> things all-visible much more often without needing vacuum.\n> \n> That doesn't help insert-only tables much, though, because those won't\n> require pruning. We set hint bits (which dirties the page) but\n> currently don't write WAL. We'd have to change that to set the\n> all-visible bit when scanning such a table, and that would be\n> expensive. 
:-(\n\nYes, that pretty much sums it up. We introduced index-only scans in 9.2\n(2012) but they still seem to be not usable for insert-only workloads\ntwo years later. Based on current progress, it doesn't look like this\nwill be corrected until 9.5 (2015). I am kind of confused why this has\nnot generated more urgency.\n\nI guess my question is what approach do we want to take to fixing this? \nIf we are doing pruning, aren't we emitting WAL? You are right that for\nan insert-only workload, we aren't going to prune, but if pruning WAL\noverhead is acceptable for a sequential scan, isn't index-only\npage-all-visible WAL overhead acceptable?\n\nDo we want to track the number of inserts in statistics and trigger an\nauto-vacuum after a specified number of inserts? The problem there is\nthat we really don't need to do any index cleanup, which is what vacuum\ntypically does --- we just want to scan the table and set the\nall-visible bits, so that approach seems non-optimal.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 12:12:13 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2014-02-11 12:12:13 -0500, Bruce Momjian wrote:\n> Yes, that pretty much sums it up. We introduced index-only scans in 9.2\n> (2012) but they still seem to be not usable for insert-only workloads\n> two years later. Based on current progress, it doesn't look like this\n> will be corrected until 9.5 (2015). I am kind of confused why this has\n> not generated more urgency.\n\nI think this largely FUD. They are hugely beneficial in some scenarios\nand less so in others. Just like lots of other features we have.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 18:54:10 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 06:54:10PM +0100, Andres Freund wrote:\n> On 2014-02-11 12:12:13 -0500, Bruce Momjian wrote:\n> > Yes, that pretty much sums it up. We introduced index-only scans in 9.2\n> > (2012) but they still seem to be not usable for insert-only workloads\n> > two years later. Based on current progress, it doesn't look like this\n> > will be corrected until 9.5 (2015). I am kind of confused why this has\n> > not generated more urgency.\n> \n> I think this largely FUD. They are hugely beneficial in some scenarios\n> and less so in others. Just like lots of other features we have.\n\nI don't understand. Index-only scans are known to have benefits --- if\nan insert-only workload can't use that, why is that acceptable? What is\nfear-uncertainty-and-doubt about that? Please explain.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. 
+\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 13:23:19 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2014-02-11 13:23:19 -0500, Bruce Momjian wrote:\n> On Tue, Feb 11, 2014 at 06:54:10PM +0100, Andres Freund wrote:\n> > On 2014-02-11 12:12:13 -0500, Bruce Momjian wrote:\n> > > Yes, that pretty much sums it up. We introduced index-only scans in 9.2\n> > > (2012) but they still seem to be not usable for insert-only workloads\n> > > two years later. Based on current progress, it doesn't look like this\n> > > will be corrected until 9.5 (2015). I am kind of confused why this has\n> > > not generated more urgency.\n> > \n> > I think this largely FUD. They are hugely beneficial in some scenarios\n> > and less so in others. Just like lots of other features we have.\n> \n> I don't understand. Index-only scans are known to have benefits --- if\n> an insert-only workload can't use that, why is that acceptable? What is\n> fear-uncertainty-and-doubt about that? Please explain.\n\nUh, for one, insert only workloads certainly aren't the majority of\nusecases. Ergo there are plenty of cases where index only scans work out\nof the box.\nAlso, they *do* work for insert only workloads, you just either have to\nwait longer, or manually trigger VACUUMs. That's a far cry from not\nbeing usable.\n\nI am not saying it shouldn't be improved, I just don't see the point of\nbringing it up while everyone is busy with the last CF and claiming it\nis unusable and that stating that it is surprisising that nobody really\ncares.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 19:31:03 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 07:31:03PM +0100, Andres Freund wrote:\n> On 2014-02-11 13:23:19 -0500, Bruce Momjian wrote:\n> > On Tue, Feb 11, 2014 at 06:54:10PM +0100, Andres Freund wrote:\n> > > On 2014-02-11 12:12:13 -0500, Bruce Momjian wrote:\n> > > > Yes, that pretty much sums it up. We introduced index-only scans in 9.2\n> > > > (2012) but they still seem to be not usable for insert-only workloads\n> > > > two years later. Based on current progress, it doesn't look like this\n> > > > will be corrected until 9.5 (2015). I am kind of confused why this has\n> > > > not generated more urgency.\n> > > \n> > > I think this largely FUD. They are hugely beneficial in some scenarios\n> > > and less so in others. Just like lots of other features we have.\n> > \n> > I don't understand. Index-only scans are known to have benefits --- if\n> > an insert-only workload can't use that, why is that acceptable? What is\n> > fear-uncertainty-and-doubt about that? Please explain.\n> \n> Uh, for one, insert only workloads certainly aren't the majority of\n> usecases. 
Ergo there are plenty of cases where index only scans work out\n> of the box.\n\nTrue.\n\n> Also, they *do* work for insert only workloads, you just either have to\n> wait longer, or manually trigger VACUUMs. That's a far cry from not\n\nWait longer for what? Anti-xid-wraparound vacuum?\n\n> being usable.\n\nIs using VACUUM for these cases documented? Should it be?\n\n> I am not saying it shouldn't be improved, I just don't see the point of\n> bringing it up while everyone is busy with the last CF and claiming it\n> is unusable and that stating that it is surprisising that nobody really\n> cares.\n\nWell, I brought it up in September too. My point was not that it is a\nnew issue but that it has been such an ignored issue for two years. I\nam not asking for a fix, but right now we don't even have a plan on how\nto improve this.\n\nI still don't see how this is FUD, and you have not explained it to me. \nThis is a known limitation for two years, not documented (?), and with\nno TODO item and no plan on how to improve it. Do you want to declare\nsuch cases FUD and just ignore them? I don't see how that moves us\nforward.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 13:41:46 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Tue, Feb 11, 2014 at 07:31:03PM +0100, Andres Freund wrote:\n>> I am not saying it shouldn't be improved, I just don't see the point of\n>> bringing it up while everyone is busy with the last CF and claiming it\n>> is unusable and that stating that it is surprisising that nobody really\n>> cares.\n\n> Well, I brought it up in September too. My point was not that it is a\n> new issue but that it has been such an ignored issue for two years. I\n> am not asking for a fix, but right now we don't even have a plan on how\n> to improve this.\n\nIndeed, and considering that we're all busy with the CF, I think it's\nquite unreasonable of you to expect that we'll drop everything else\nto think about this problem right now. The reason it's like it is\nis that it's not easy to see how to make it better; so even if we did\ndrop everything else, it's not clear to me that any plan would emerge\nanytime soon.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 13:54:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On 2014-02-11 13:41:46 -0500, Bruce Momjian wrote:\n> Wait longer for what? Anti-xid-wraparound vacuum?\n\nYes.\n\n> Is using VACUUM for these cases documented? Should it be?\n\nNo idea, it seems to be part of at least part of the folkloric\nknowledge, from what I see at clients.\n\n> > I am not saying it shouldn't be improved, I just don't see the point of\n> > bringing it up while everyone is busy with the last CF and claiming it\n> > is unusable and that stating that it is surprisising that nobody really\n> > cares.\n\n> Well, I brought it up in September too. 
My point was not that it is a\n> new issue but that it has been such an ignored issue for two years. I\n> am not asking for a fix, but right now we don't even have a plan on how\n> to improve this.\n\nComing up with a plan for this takes time and discussion, not something\nwe seem to have aplenty of atm. And even if were to agree on a plan\nright now, we wouldn't incorporate it into 9.4, so what's the point of\nbringing it up now?\n\n> I still don't see how this is FUD, and you have not explained it to me. \n> This is a known limitation for two years, not documented (?), and with\n> no TODO item and no plan on how to improve it. Do you want to declare\n> such cases FUD and just ignore them? I don't see how that moves us\n> forward.\n\nClaiming something doesn't work while it just has manageable usability\nissues doesn't strike me as a reasonable starting point. If it bugs\nsomebody enough to come up with a rough proposal it will get fixed...\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 20:03:47 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 01:54:48PM -0500, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Tue, Feb 11, 2014 at 07:31:03PM +0100, Andres Freund wrote:\n> >> I am not saying it shouldn't be improved, I just don't see the point of\n> >> bringing it up while everyone is busy with the last CF and claiming it\n> >> is unusable and that stating that it is surprisising that nobody really\n> >> cares.\n> \n> > Well, I brought it up in September too. My point was not that it is a\n> > new issue but that it has been such an ignored issue for two years. I\n> > am not asking for a fix, but right now we don't even have a plan on how\n> > to improve this.\n> \n> Indeed, and considering that we're all busy with the CF, I think it's\n> quite unreasonable of you to expect that we'll drop everything else\n> to think about this problem right now. The reason it's like it is\n> is that it's not easy to see how to make it better; so even if we did\n> drop everything else, it's not clear to me that any plan would emerge\n> anytime soon.\n\nWell, documenting the VACUUM requirement and adding it to the TODO list\nare things we should consider for 9.4. If you think doing that after\nthe commit-fest is best, I can do that.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 14:04:35 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 9:12 AM, Bruce Momjian <[email protected]> wrote:\n\n> On Tue, Feb 11, 2014 at 11:28:36AM -0500, Robert Haas wrote:\n> > A sequential scan will set hint bits and will prune the page, but\n> > pruning the page doesn't ever mark it all-visible; that logic is\n> > entirely in vacuum. 
If that could be made cheap enough to be\n> > negligible, it might well be worth doing in heap_page_prune(). I\n> > think there might be a way to do that, but it's a bit tricky because\n> > the pruning logic iterates over the page in a somewhat complex way,\n> > not just a straightforward scan of all the item pointers the way the\n> > existing logic doesn't. It would be pretty cool if we could just use\n> > a bit out of the heap-prune xlog record to indicate whether the\n> > all-visible bit should be set; then we'd gain the benefit of marking\n> > things all-visible much more often without needing vacuum.\n> >\n> > That doesn't help insert-only tables much, though, because those won't\n> > require pruning. We set hint bits (which dirties the page) but\n> > currently don't write WAL. We'd have to change that to set the\n> > all-visible bit when scanning such a table, and that would be\n> > expensive. :-(\n>\n> Yes, that pretty much sums it up. We introduced index-only scans in 9.2\n> (2012) but they still seem to be not usable for insert-only workloads\n> two years later. Based on current progress, it doesn't look like this\n> will be corrected until 9.5 (2015). I am kind of confused why this has\n> not generated more urgency.\n>\n\n\nFor insert and select only, they are usable (if your queries are of the\ntype that could benefit from them), you just have to do some manual\nintervention. The list of features that sometimes require a DBA to do\nsomething to make maximum use of them under some circumstance would be a\nlong one. It would be nice if it were better, but I don't see why this\nfeature is particularly urgent compared to all the other things that could\nbe improved. In particular I think the Freezing without IO is much more\nimportant. Freezing is rather unimportant until suddenly it is is the most\nimportant thing in the universe. If we could stop worrying about that, I\nthink it would free up other aspects of vacuum scheduling to have more\nmeddling/optimization done to it.\n\n\n\n>\n> I guess my question is what approach do we want to take to fixing this?\n> If we are doing pruning, aren't we emitting WAL? You are right that for\n> an insert-only workload, we aren't going to prune, but if pruning WAL\n> overhead is acceptable for a sequential scan, isn't index-only\n> page-all-visible WAL overhead acceptable?\n>\n\n\nWe often don't find that pruning particularly acceptable in seq scans, and\nthere is a patch pending to conditionally turn it off for them.\n\n\n>\n> Do we want to track the number of inserts in statistics and trigger an\n> auto-vacuum after a specified number of inserts?\n\n\nWe track relpages and relallvisible, which seems like a more direct\nmeasure. Once analyze is done (which is already triggered by inserts) and\nsets those, it could fire a vacuum based on the ratio of those values, or\nthe autovac process could just look at the ratio after naptime. So just\nintroduce autovacuum_vacuum_visible_factor. A problem there is that it\nwould be a lot of work to aggressively keep the ratio high, and pointless\nif the types of queries done on that table don't benefit from IOS anyway,\nor if pages are dirtied so rapidly that no amount of vacuuming will keep\nthe ratio high. 
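(For what it's worth, that ratio is already easy to eyeball by hand, since
both VACUUM and ANALYZE maintain relpages and relallvisible in pg_class; a
rough, illustrative query only:

    SELECT relname, relpages, relallvisible,
           round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY relpages DESC
    LIMIT 20;

A hypothetical autovacuum_vacuum_visible_factor would presumably be compared
against something like that percentage.)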
Would we try to automatically tell which tables were\nwhich, or rely on the DBA setting per-table\nautovacuum_vacuum_visible_factor for tables that differ from the database\nnorm?\n\n\n> The problem there is\n> that we really don't need to do any index cleanup, which is what vacuum\n> typically does --- we just want to scan the table and set the\n> all-visible bits, so that approach seems non-optimal.\n>\n\nIn the case of no updates or deletes (or aborted inserts?), there would be\nnothing to clean up in the indexes and that step would be skipped (already\nin the current code). And if the indexes do need cleaning up, we certainly\ncan't set the page all visible without doing that clean up.\n\nCheers,\n\nJeff\n", "msg_date": "Tue, 11 Feb 2014 11:13:00 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" },
{ "msg_contents": "On Tue, Feb 11, 2014 at 4:13 PM, Jeff Janes <[email protected]> wrote:\n>>\n>> Do we want to track the number of inserts in statistics and trigger an\n>> auto-vacuum after a specified number of inserts?\n>\n>\n> We track relpages and relallvisible, which seems like a more direct measure.\n> Once analyze is done (which is already triggered by inserts) and sets those,\n> it could fire a vacuum based on the ratio of those values, or the autovac\n> process could just look at the ratio after naptime. So just introduce\n> autovacuum_vacuum_visible_factor. 
A problem there is that it would be a lot\n> of work to aggressively keep the ratio high, and pointless if the types of\n> queries done on that table don't benefit from IOS anyway, or if pages are\n> dirtied so rapidly that no amount of vacuuming will keep the ratio high.\n> Would we try to automatically tell which tables were which, or rely on the\n> DBA setting per-table autovacuum_vacuum_visible_factor for tables that\n> differ from the database norm?\n\n\nWhy not track how many times an IOS would be used but wasn't, or how\nmany heap fetches in IOS have to be performed?\n\nSeems like a more direct measure of whether allvisible needs an update.\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 17:51:36 -0200", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" }, { "msg_contents": "On Tue, Feb 11, 2014 at 05:51:36PM -0200, Claudio Freire wrote:\n> > We track relpages and relallvisible, which seems like a more direct measure.\n> > Once analyze is done (which is already triggered by inserts) and sets those,\n> > it could fire a vacuum based on the ratio of those values, or the autovac\n> > process could just look at the ratio after naptime. So just introduce\n> > autovacuum_vacuum_visible_factor. A problem there is that it would be a lot\n> > of work to aggressively keep the ratio high, and pointless if the types of\n> > queries done on that table don't benefit from IOS anyway, or if pages are\n> > dirtied so rapidly that no amount of vacuuming will keep the ratio high.\n> > Would we try to automatically tell which tables were which, or rely on the\n> > DBA setting per-table autovacuum_vacuum_visible_factor for tables that\n> > differ from the database norm?\n> \n> \n> Why not track how many times an IOS would be used but wasn't, or how\n> many heap fetches in IOS have to be performed?\n> \n> Seems like a more direct measure of whether allvisible needs an update.\n\nNow that is in interesting idea, and more direct. \n\nDo we need to adjust for the insert count, i.e. would the threadhold to\ntrigger an autovacuum after finding index lookups that had to check the\nheap page for visibility be higher if many inserts are happening,\nperhaps dirtying pages? (If we are dirtying via update/delete,\nautovacuum will already trigger.)\n\nWe are aggressive in clearing the page-all-visible flag (we have to be),\nbut I think we need a little more aggressiveness for setting it.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 11 Feb 2014 17:40:14 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] encouraging index-only scans" } ]
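Until something along those lines exists, the practical workaround discussed
in the thread above is to vacuum insert-mostly tables by hand (or from cron)
so the visibility map gets set, and to watch the Heap Fetches counter that
EXPLAIN (ANALYZE) already prints for index-only scans to see how stale the
map has become. A minimal sketch, using a hypothetical insert-only table
named measurements:

    -- sets visibility-map bits (and refreshes statistics)
    VACUUM (ANALYZE) measurements;

    -- then see how often an index-only scan still has to visit the heap
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM measurements WHERE id BETWEEN 1000 AND 2000;
    -- look for an "Index Only Scan" node and its "Heap Fetches: N" line;
    -- a large N relative to the rows returned means most pages are not yet
    -- marked all-visible and the heap is still being visited

Whether the planner offers an index-only scan at all still depends on the
index covering the query; the manual vacuum only addresses the
visibility-map side of it.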
[ { "msg_contents": "Huan Ruan wrote:\n\n> is a lot slower than a nested loop join.\n\nGiving actual numbers is more useful than terms like \"a lot\". Even\nbetter is to provide the output of EXPLAIN ANALYZZE rather than\njust EXPLAIN. This shows estimates against actual numbers, and give\ntimings. For more suggestions see this page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n> I don't understand why the optimiser chooses the hash join in\n> favor of the nested loop. What can I do to get the optimiser to\n> make a better decision (nested loop in this case)? I have run\n> analyze on both tables.\n\n> Config changes are\n> \n>  - shared_buffers = 6GB\n>  - effective_cache_size = 18GB\n>  - work_mem = 10MB\n>  - maintenance_work_mem = 3GB\n\nAs already suggested, there was a change made in 9.2 which may have\nover-penalized nested loops using index scans. This may be fixed in\nthe next minor release.\n\nAlso, as already suggested, you may want to reduce random_page\ncost, to bring it in line with the actual cost relative to\nseq_page_cost based on your cache hit ratio.\n\nAdditionally, I just routinely set cpu_tuple_cost higher than the\ndefault of 0.01. I find that 0.03 to 0.05 better models the actual\nrelative cost of processing a tuple.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Dec 2012 18:47:32 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "Hi Kevin\n\nOn 13 December 2012 10:47, Kevin Grittner <[email protected]> wrote:\n\n> Huan Ruan wrote:\n>\n> > is a lot slower than a nested loop join.\n>\n> Giving actual numbers is more useful than terms like \"a lot\". Even\n> better is to provide the output of EXPLAIN ANALYZZE rather than\n> just EXPLAIN. This shows estimates against actual numbers, and give\n> timings. For more suggestions see this page:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n\nYou are right. I realised my information wasn't accurate. Was a bit slack\nand canceled the slower one. 
The full outputs are\n\nHash 1st run\n\n\"QUERY PLAN\"\n\"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171) (actual\ntime=2182.450..88158.645 rows=48257 loops=1)\"\n\" Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\" Buffers: shared hit=3950 read=3046219\"\n\" -> Seq Scan on invtran bigtable (cost=0.00..4730787.28 rows=168121728\nwidth=108) (actual time=0.051..32581.052 rows=168121657 loops=1)\"\n\" Buffers: shared hit=3351 read=3046219\"\n\" -> Hash (cost=1078.61..1078.61 rows=48261 width=63) (actual\ntime=21.751..21.751 rows=48261 loops=1)\"\n\" Buckets: 8192 Batches: 1 Memory Usage: 4808kB\"\n\" Buffers: shared hit=596\"\n\" -> Seq Scan on im_match_table smalltable (cost=0.00..1078.61\nrows=48261 width=63) (actual time=0.007..8.299 rows=48261 loops=1)\"\n\" Buffers: shared hit=596\"\n\"Total runtime: 88162.417 ms\"\n\nHash 2nd run (after disconnect and reconnect)\n\n\n \"QUERY PLAN\"\n\"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171) (actual\ntime=2280.390..87934.540 rows=48257 loops=1)\"\n\" Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\" Buffers: shared hit=3982 read=3046187\"\n\" -> Seq Scan on invtran bigtable (cost=0.00..4730787.28 rows=168121728\nwidth=108) (actual time=0.052..32747.805 rows=168121657 loops=1)\"\n\" Buffers: shared hit=3383 read=3046187\"\n\" -> Hash (cost=1078.61..1078.61 rows=48261 width=63) (actual\ntime=62.161..62.161 rows=48261 loops=1)\"\n\" Buckets: 8192 Batches: 1 Memory Usage: 4808kB\"\n\" Buffers: shared hit=596\"\n\" -> Seq Scan on im_match_table smalltable (cost=0.00..1078.61\nrows=48261 width=63) (actual time=0.006..8.209 rows=48261 loops=1)\"\n\" Buffers: shared hit=596\"\n\"Total runtime: 87938.584 ms\"\n\nNL 1st run\n\n\"QUERY PLAN\"\n\"Nested Loop (cost=0.00..6451637.88 rows=48261 width=171) (actual\ntime=0.056..551.438 rows=48257 loops=1)\"\n\" Buffers: shared hit=242267\"\n\" -> Seq Scan on im_match_table smalltable (cost=0.00..1078.61\nrows=48261 width=63) (actual time=0.009..7.353 rows=48261 loops=1)\"\n\" Buffers: shared hit=596\"\n\" -> Index Scan using pk_invtran on invtran bigtable (cost=0.00..133.65\nrows=1 width=108) (actual time=0.010..0.010 rows=1 loops=48261)\"\n\" Index Cond: (invtranref = smalltable.invtranref)\"\n\" Buffers: shared hit=241671\"\n\"Total runtime: 555.336 ms\"\n\nNL 2nd run (after disconnect and reconnect)\n\n\"QUERY PLAN\"\n\"Nested Loop (cost=0.00..6451637.88 rows=48261 width=171) (actual\ntime=0.058..554.215 rows=48257 loops=1)\"\n\" Buffers: shared hit=242267\"\n\" -> Seq Scan on im_match_table smalltable (cost=0.00..1078.61\nrows=48261 width=63) (actual time=0.009..7.416 rows=48261 loops=1)\"\n\" Buffers: shared hit=596\"\n\" -> Index Scan using pk_invtran on invtran bigtable (cost=0.00..133.65\nrows=1 width=108) (actual time=0.010..0.010 rows=1 loops=48261)\"\n\" Index Cond: (invtranref = smalltable.invtranref)\"\n\" Buffers: shared hit=241671\"\n\"Total runtime: 558.095 ms\"\n\n\n\n>\n>\n> > I don't understand why the optimiser chooses the hash join in\n> > favor of the nested loop. What can I do to get the optimiser to\n> > make a better decision (nested loop in this case)? I have run\n> > analyze on both tables.\n>\n> > Config changes are\n> >\n> > - shared_buffers = 6GB\n> > - effective_cache_size = 18GB\n> > - work_mem = 10MB\n> > - maintenance_work_mem = 3GB\n>\n> As already suggested, there was a change made in 9.2 which may have\n> over-penalized nested loops using index scans. 
This may be fixed in\n> the next minor release.\n>\n\nWill keep this in mind.\n\n\n>\n> Also, as already suggested, you may want to reduce random_page\n> cost, to bring it in line with the actual cost relative to\n> seq_page_cost based on your cache hit ratio.\n>\n> Additionally, I just routinely set cpu_tuple_cost higher than the\n> default of 0.01. I find that 0.03 to 0.05 better models the actual\n> relative cost of processing a tuple.\n>\n\nI originally reduced random_page_cost to 2 to achieve the nested loop join.\nNow I set cpu_tuple_cost to 0.05 and reset random_page_cost back to 4, I\ncan also achieve a nested loop join.\n\nI'm still new in Postgres, but I'm worried about random_page_cost being 2\nis too low, so maybe increasing cpu_tuple_cost is a better choice. All\nthese tuning probably also depends on the above mentioned possible fix as\nwell. Can you see any obvious issues with the other memory settings I\nchanged?\n\nThanks for your help.\n\nCheers\nHuan\n\n\n> -Kevin\n>\n\nHi KevinOn 13 December 2012 10:47, Kevin Grittner <[email protected]> wrote:\nHuan Ruan wrote:\n\n> is a lot slower than a nested loop join.\n\nGiving actual numbers is more useful than terms like \"a lot\". Even\nbetter is to provide the output of EXPLAIN ANALYZZE rather than\njust EXPLAIN. This shows estimates against actual numbers, and give\ntimings. For more suggestions see this page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestionsYou are right. I realised my information wasn't accurate. Was a bit slack and canceled the slower one. The full outputs are\n\nHash 1st run\n\n\"QUERY PLAN\"\n\"Hash Join  (cost=1681.87..6414169.04 rows=48261 width=171) (actual time=2182.450..88158.645 rows=48257 loops=1)\"\n\"  Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\"  Buffers: shared hit=3950 read=3046219\"\n\"  ->  Seq Scan on invtran bigtable \n (cost=0.00..4730787.28 rows=168121728 width=108) (actual \ntime=0.051..32581.052 rows=168121657 loops=1)\"\n\"        Buffers: shared hit=3351 read=3046219\"\n\"  ->  Hash  (cost=1078.61..1078.61 rows=48261 width=63) (actual time=21.751..21.751 rows=48261 loops=1)\"\n\"        Buckets: 8192  Batches: 1  Memory Usage: 4808kB\"\n\"        Buffers: shared hit=596\"\n\"        ->  Seq Scan on im_match_table smalltable \n (cost=0.00..1078.61 rows=48261 width=63) (actual time=0.007..8.299 \nrows=48261 loops=1)\"\n\"              Buffers: shared hit=596\"\n\"Total runtime: 88162.417 ms\"\n\nHash 2nd run (after disconnect and reconnect)\n\n\n\n\"QUERY PLAN\"\n\"Hash Join  (cost=1681.87..6414169.04 rows=48261 width=171) (actual time=2280.390..87934.540 rows=48257 loops=1)\"\n\"  Hash Cond: (bigtable.invtranref = smalltable.invtranref)\"\n\"  Buffers: shared hit=3982 read=3046187\"\n\"  ->  Seq Scan on invtran bigtable \n (cost=0.00..4730787.28 rows=168121728 width=108) (actual \ntime=0.052..32747.805 rows=168121657 loops=1)\"\n\"        Buffers: shared hit=3383 read=3046187\"\n\"  ->  Hash  (cost=1078.61..1078.61 rows=48261 width=63) (actual time=62.161..62.161 rows=48261 loops=1)\"\n\"        Buckets: 8192  Batches: 1  Memory Usage: 4808kB\"\n\"        Buffers: shared hit=596\"\n\"        ->  Seq Scan on im_match_table smalltable \n (cost=0.00..1078.61 rows=48261 width=63) (actual time=0.006..8.209 \nrows=48261 loops=1)\"\n\"              Buffers: shared hit=596\"\n\"Total runtime: 87938.584 ms\"\n\nNL 1st run\n\n\"QUERY PLAN\"\n\"Nested Loop  (cost=0.00..6451637.88 rows=48261 width=171) (actual time=0.056..551.438 rows=48257 loops=1)\"\n\"  Buffers: shared 
hit=242267\"\n\"  ->  Seq Scan on im_match_table smalltable \n (cost=0.00..1078.61 rows=48261 width=63) (actual time=0.009..7.353 \nrows=48261 loops=1)\"\n\"        Buffers: shared hit=596\"\n\"  ->  Index Scan using pk_invtran on invtran bigtable \n (cost=0.00..133.65 rows=1 width=108) (actual time=0.010..0.010 rows=1 \nloops=48261)\"\n\"        Index Cond: (invtranref = smalltable.invtranref)\"\n\"        Buffers: shared hit=241671\"\n\"Total runtime: 555.336 ms\"\n\nNL 2nd run (after disconnect and reconnect)\n\n\"QUERY PLAN\"\n\"Nested Loop  (cost=0.00..6451637.88 rows=48261 width=171) (actual time=0.058..554.215 rows=48257 loops=1)\"\n\"  Buffers: shared hit=242267\"\n\"  ->  Seq Scan on im_match_table smalltable \n (cost=0.00..1078.61 rows=48261 width=63) (actual time=0.009..7.416 \nrows=48261 loops=1)\"\n\"        Buffers: shared hit=596\"\n\"  ->  Index Scan using pk_invtran on invtran bigtable \n (cost=0.00..133.65 rows=1 width=108) (actual time=0.010..0.010 rows=1 \nloops=48261)\"\n\"        Index Cond: (invtranref = smalltable.invtranref)\"\n\"        Buffers: shared hit=241671\"\n\"Total runtime: 558.095 ms\"\n\n \n\n> I don't understand why the optimiser chooses the hash join in\n> favor of the nested loop. What can I do to get the optimiser to\n> make a better decision (nested loop in this case)? I have run\n> analyze on both tables.\n\n> Config changes are\n>\n>  - shared_buffers = 6GB\n>  - effective_cache_size = 18GB\n>  - work_mem = 10MB\n>  - maintenance_work_mem = 3GB\n\nAs already suggested, there was a change made in 9.2 which may have\nover-penalized nested loops using index scans. This may be fixed in\nthe next minor release.Will keep this in mind.  \n\nAlso, as already suggested, you may want to reduce random_page\ncost, to bring it in line with the actual cost relative to\nseq_page_cost based on your cache hit ratio.\n\nAdditionally, I just routinely set cpu_tuple_cost higher than the\ndefault of 0.01. I find that 0.03 to 0.05 better models the actual\nrelative cost of processing a tuple.I originally reduced random_page_cost to 2 to achieve the nested loop join. Now I set cpu_tuple_cost to 0.05 and reset random_page_cost back to 4, I can also achieve a nested loop join.\nI'm still new in Postgres, but I'm worried about random_page_cost being 2 is too low, so maybe increasing cpu_tuple_cost is a better choice. All these tuning probably also depends on the above mentioned possible fix as well. Can you see any obvious issues with the other memory settings I changed?\nThanks for your help.CheersHuan\n\n-Kevin", "msg_date": "Thu, 13 Dec 2012 12:10:24 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
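A quick way to repeat the comparison in the thread above without touching postgresql.conf is to override the planner cost settings for a single session and re-run EXPLAIN under each combination. The join text below is only a sketch reconstructed from the posted plans (the original query is not shown in the thread), so the table aliases and join condition are assumptions; substitute the real statement.

    -- Sketch only: query text inferred from the plans above.
    SET cpu_tuple_cost = 0.05;      -- raised from the 0.01 default, as tried above
    SET random_page_cost = 4;       -- left at the default
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM im_match_table smalltable
    JOIN invtran bigtable ON bigtable.invtranref = smalltable.invtranref;

    -- Force the planner back to the hash join to compare timings directly:
    SET enable_nestloop = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM im_match_table smalltable
    JOIN invtran bigtable ON bigtable.invtranref = smalltable.invtranref;
    RESET enable_nestloop;
    RESET cpu_tuple_cost;
    RESET random_page_cost;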
[ { "msg_contents": "Hi\n\nOur scripts automatically add \"LIMIT ALL\" & \"OFFSET 0\" to every select\nquery if no values are passed on for these parameters. I remember reading\nthrough the mailing list that it's better not to pass them if they are not\nneeded as they add a cost to the query plan. Is this the case, or am I\nlooking at a very minor optimization?\n\n\nAmitabh\n\nP.S. I haven't checked my query plans to see if there is any actual effect\nof these keywords, as I am still working my way through reading the\n\"EXPLAIN\" output.", "msg_date": "Thu, 13 Dec 2012 09:38:09 +0530", "msg_from": "Amitabh Kant <[email protected]>", "msg_from_op": true, "msg_subject": "Limit & offset effect on query plans" }, { "msg_contents": "On Thu, Dec 13, 2012 at 9:38 AM, Amitabh Kant <[email protected]> wrote:\n> Hi\n>\n> Our scripts automatically add \"LIMIT ALL\" & \"OFFSET 0\" to every select query\n> if no values are passed on for these parameters. I remember reading through\n> the mailing list that it's better not to pass them if they are not needed as\n> they add a cost to the query plan. Is this the case, or am i looking at a\n> very minor optimization.\n>\n\nI would tend to think it is the latter. While a limit/offset clause will\nundoubtedly add another node during query planning and execution, AFAICS\nthe OFFSET 0 and LIMIT ALL cases are optimized to a good extent. So the\noverhead of having them will not be significant.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 10:13:26 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit & offset effect on query plans" } ]
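To see what the automatically appended clauses actually do to a plan, it is enough to EXPLAIN the same statement with and without them; the table name and filter below are placeholders, not anything from the thread.

    -- Placeholder table/filter; use any query your scripts generate.
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE id < 1000;
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE id < 1000 LIMIT ALL OFFSET 0;
    -- The second plan carries an extra Limit node on top of the same scan;
    -- the difference in "Total runtime" is the per-row overhead of that node.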
[ { "msg_contents": "One parent table, having 100 child tables. \nIn this scenario we observed that delete and update take more memory\nwhile preparing a plan. \n\nWhen the system is under peak load, I get an out of memory error. \n\ni.e. \nSelect on parent table is using memory - 331456 in message context \nDelete on parent table is using memory - 3746432 in message context \n\nDelete on single child table is using memory - 8800 in message context \nSelect on single child table is using memory - 9328 in message context \n\nFor 250 child tables \n Select on parent table is using memory - 810864 in message context\n\n Delete on parent table is using memory - 21273088 in message context\n\n\nI have seen that the plans for both delete & select on the parent table are\nalmost the same. \nWhy is there this much memory increase in the message context for delete\nand update operations? \n\nRegards, \nHari babu.", "msg_date": "Thu, 13 Dec 2012 10:22:12 +0530", "msg_from": "Hari Babu <[email protected]>", "msg_from_op": true, "msg_subject": "Memory issue for inheritance tables." } ]
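A minimal sketch for reproducing the effect, assuming a server new enough for DO blocks and format() (9.1 or later); the table and column names are made up. A SELECT on the parent is handled by a single Append over all children, while an UPDATE or DELETE on the parent is planned separately for each child by the inheritance planner, which would be consistent with the much larger memory numbers reported above.

    -- Hypothetical schema, for illustration only.
    CREATE TABLE parent_tbl (id bigint, payload text);
    DO $$
    BEGIN
      FOR i IN 1..100 LOOP
        EXECUTE format('CREATE TABLE child_%s () INHERITS (parent_tbl)', i);
      END LOOP;
    END;
    $$;

    EXPLAIN SELECT * FROM parent_tbl WHERE id = 1;  -- one Append over 101 relations
    EXPLAIN DELETE FROM parent_tbl WHERE id = 1;    -- compare planning memory/time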
[ { "msg_contents": "Huan Ruan wrote:\n\n> Hash 1st run\n\n> \"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171)\n> (actual time=2182.450..88158.645 rows=48257 loops=1)\"\n\n> \" -> Seq Scan on invtran bigtable (cost=0.00..4730787.28\n> rows=168121728 width=108) (actual time=0.051..32581.052\n> rows=168121657 loops=1)\"\n\n194 nanoseconds per row suggests 100% cache hits.\n\n> NL 1st run\n\n> \"Nested Loop (cost=0.00..6451637.88 rows=48261 width=171) (actual\n> time=0.056..551.438 rows=48257 loops=1)\"\n\n> \" -> Index Scan using pk_invtran on invtran bigtable\n> (cost=0.00..133.65 rows=1 width=108) (actual time=0.010..0.010\n> rows=1 loops=48261)\"\n\n10 microseconds per index scan (each index scan requiring multiple\n\"random\" accesses) also suggests 100% cache hits.\n\n> I originally reduced random_page_cost to 2 to achieve the nested\n> loop join. Now I set cpu_tuple_cost to 0.05 and reset\n> random_page_cost back to 4, I can also achieve a nested loop\n> join.\n> \n> I'm still new in Postgres, but I'm worried about random_page_cost\n> being 2 is too low, so maybe increasing cpu_tuple_cost is a\n> better choice.\n\nIf these are typical of what you would expect in production, then\nthe fact that with default cost factors the costs are barely\ndifferent (by 0.6%) for actual run times which differ by two orders\nof magnitude (the chosen plan is 160 times slower) means that the\nmodeling of cost factors is off by a lot.\n\nIf you expect the active portion of your database to be fully\ncached like this, it makes sense to reduce random_page_cost to be\nequal to seq_page_cost. But that only adjusts the costs by at most\na factor of four, and we've established that in the above query\nthey're off by a factor of 160. To help make up the difference, it\nmakes sense to de-emphasize page access compared to cpu-related\ncosts by reducing both page costs to 0.1. Combined, these\nadjustments still can't compensate for how far off the estimate\nwas.\n\nIn my experience default cpu_tuple_cost is understated compared to\nother cpu-related costs, so I would do the above *plus* a boost to\ncpu_tuple_cost. Personally, I have never seen a difference between\nplans chosen with that set to 0.03 and 0.05, so I can't say where\nin that range is the ideal value; you should feel free to\nexperiment if there is a query which seems to be choosing a bad\nplan. If the above results really do represent cache hit levels you\nexpect in production, the combination of the above changes should\ncome reasonably close to modeling costs realistically, resulting in\nbetter plan choice.\n\nIf you don't expect such high cache hit ratios in production, you\nprobably don't want to go so low with page costs.\n\n>>> - shared_buffers = 6GB\n>>> - effective_cache_size = 18GB\n>>> - work_mem = 10MB\n>>> - maintenance_work_mem = 3GB\n\n> Can you see any obvious issues with the other memory settings I\n> changed?\n\nI might bump up work_mem to 20MB to 60MB, as long as you're not\ngoing crazy with max_connections. I would probably take\nmaintenance_work_mem down to 1GB to 2GB -- you can have several of\nthese allocations at one time, and you don't want to blow away your\ncache. 
(I think it might actually be adjusted down to 2GB\ninternally anyway; but I would need to check.)\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 10:26:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "Hi Kevin\n\nAgain, many thanks for your time and help.\n\nOn 14 December 2012 02:26, Kevin Grittner <[email protected]> wrote:\n\n> Huan Ruan wrote:\n>\n> > Hash 1st run\n>\n> > \"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171)\n> > (actual time=2182.450..88158.645 rows=48257 loops=1)\"\n>\n> > \" -> Seq Scan on invtran bigtable (cost=0.00..4730787.28\n> > rows=168121728 width=108) (actual time=0.051..32581.052\n> > rows=168121657 loops=1)\"\n>\n> 194 nanoseconds per row suggests 100% cache hits.\n>\n> > NL 1st run\n>\n> > \"Nested Loop (cost=0.00..6451637.88 rows=48261 width=171) (actual\n> > time=0.056..551.438 rows=48257 loops=1)\"\n>\n> > \" -> Index Scan using pk_invtran on invtran bigtable\n> > (cost=0.00..133.65 rows=1 width=108) (actual time=0.010..0.010\n> > rows=1 loops=48261)\"\n>\n> 10 microseconds per index scan (each index scan requiring multiple\n> \"random\" accesses) also suggests 100% cache hits.\n>\n\nInteresting to see how you derived 100% cache hits. I assume by 'cache' you\nmean the pg shared buffer plus the OS cache? Because the table is 23GB but\nthe shared buffer is only 6GB. Even then, I'm not completely convinced\nbecause the total RAM is just 24GB, part of which will have to be used for\nother data and indexes.\n\nI read somewhere that a pg shared buffer that's too big can hurt the\nperformance and it's better just leave it to the OS cache. I'm not sure why\nbut for now, I just configured the shared buffer to be 1/4 of the total RAM.\n\n\n> > I originally reduced random_page_cost to 2 to achieve the nested\n> > loop join. Now I set cpu_tuple_cost to 0.05 and reset\n> > random_page_cost back to 4, I can also achieve a nested loop\n> > join.\n> >\n> > I'm still new in Postgres, but I'm worried about random_page_cost\n> > being 2 is too low, so maybe increasing cpu_tuple_cost is a\n> > better choice.\n>\n> If these are typical of what you would expect in production, then\n> the fact that with default cost factors the costs are barely\n> different (by 0.6%) for actual run times which differ by two orders\n> of magnitude (the chosen plan is 160 times slower) means that the\n> modeling of cost factors is off by a lot.\n>\n> If you expect the active portion of your database to be fully\n> cached like this, it makes sense to reduce random_page_cost to be\n> equal to seq_page_cost. But that only adjusts the costs by at most\n> a factor of four, and we've established that in the above query\n> they're off by a factor of 160. To help make up the difference, it\n> makes sense to de-emphasize page access compared to cpu-related\n> costs by reducing both page costs to 0.1. Combined, these\n> adjustments still can't compensate for how far off the estimate\n> was.\n\n\n> In my experience default cpu_tuple_cost is understated compared to\n> other cpu-related costs, so I would do the above *plus* a boost to\n> cpu_tuple_cost. 
Personally, I have never seen a difference between\n> plans chosen with that set to 0.03 and 0.05, so I can't say where\n> in that range is the ideal value; you should feel free to\n> experiment if there is a query which seems to be choosing a bad\n> plan. If the above results really do represent cache hit levels you\n> expect in production, the combination of the above changes should\n> come reasonably close to modeling costs realistically, resulting in\n> better plan choice.\n>\n\nIn production, 60% of the database would be able to fit in the RAM. But\nroughly, all the active data we need to use should be able to fit in 100%.\nOn the test server I'm playing with now, RAM is only 8% of the database\nsize. Nonetheless, I will play with these parameters like you suggested.\n\nI was wondering on our production server where the effetive_cache_size will\nbe much bigger, will pg then guess that probably most data is cached anyway\ntherefore leaning towards nested loop join rather than a scan for hash join?\n\nEven on a test server where the cache hit rate is much smaller, for a big\ntable like this, under what circumstances, will a hash join perform better\nthan nested loop join though?\n\n\n>\n> If you don't expect such high cache hit ratios in production, you\n> probably don't want to go so low with page costs.\n>\n> >>> - shared_buffers = 6GB\n> >>> - effective_cache_size = 18GB\n> >>> - work_mem = 10MB\n> >>> - maintenance_work_mem = 3GB\n>\n> > Can you see any obvious issues with the other memory settings I\n> > changed?\n>\n> I might bump up work_mem to 20MB to 60MB, as long as you're not\n> going crazy with max_connections. I would probably take\n> maintenance_work_mem down to 1GB to 2GB -- you can have several of\n> these allocations at one time, and you don't want to blow away your\n> cache. (I think it might actually be adjusted down to 2GB\n> internally anyway; but I would need to check.)\n>\n\nYes, I had bumped up work_mem yesterday to speed up another big group by\nquery. I used 80MB. I assumed this memory will only be used if the query\nneeds it and will be released as soon as it's finished, so it won't be too\nmuch an issue as long as I don't have too many concurrently sorting queries\nrunning (which is true in our production). Is this correct?\n\nI increased maintenance_work_mem initially to speed up the index creation\nwhen I first pump in the data. In production environment, we don't do run\ntime index creation, so I think only the vacuum and analyze will consume\nthis memory?\n\nThanks\nHuan\n\n\n>\n> -Kevin\n>\n\nHi KevinAgain, many thanks for your time and help.On 14 December 2012 02:26, Kevin Grittner <[email protected]> wrote:\nHuan Ruan wrote:\n\n> Hash 1st run\n\n> \"Hash Join (cost=1681.87..6414169.04 rows=48261 width=171)\n> (actual time=2182.450..88158.645 rows=48257 loops=1)\"\n\n> \" -> Seq Scan on invtran bigtable (cost=0.00..4730787.28\n> rows=168121728 width=108) (actual time=0.051..32581.052\n> rows=168121657 loops=1)\"\n\n194 nanoseconds per row suggests 100% cache hits.\n\n> NL 1st run\n\n> \"Nested Loop (cost=0.00..6451637.88 rows=48261 width=171) (actual\n> time=0.056..551.438 rows=48257 loops=1)\"\n\n> \" -> Index Scan using pk_invtran on invtran bigtable\n> (cost=0.00..133.65 rows=1 width=108) (actual time=0.010..0.010\n> rows=1 loops=48261)\"\n\n10 microseconds per index scan (each index scan requiring multiple\n\"random\" accesses) also suggests 100% cache hits.Interesting to see how you derived 100% cache hits. 
I assume by 'cache' you mean the pg shared buffer plus the OS cache? Because the table is 23GB but the shared buffer is only 6GB. Even then, I'm not completely convinced because the total RAM is just 24GB, part of which will have to be used for other data and indexes.\nI read somewhere that a pg shared buffer that's too big can hurt the performance and it's better just leave it to the OS cache. I'm not sure why but for now, I just configured the shared buffer to be 1/4 of the total RAM.\n\n\n> I originally reduced random_page_cost to 2 to achieve the nested\n> loop join. Now I set cpu_tuple_cost to 0.05 and reset\n> random_page_cost back to 4, I can also achieve a nested loop\n> join.\n>\n> I'm still new in Postgres, but I'm worried about random_page_cost\n> being 2 is too low, so maybe increasing cpu_tuple_cost is a\n> better choice.\n\nIf these are typical of what you would expect in production, then\nthe fact that with default cost factors the costs are barely\ndifferent (by 0.6%) for actual run times which differ by two orders\nof magnitude (the chosen plan is 160 times slower) means that the\nmodeling of cost factors is off by a lot.\n\nIf you expect the active portion of your database to be fully\ncached like this, it makes sense to reduce random_page_cost to be\nequal to seq_page_cost. But that only adjusts the costs by at most\na factor of four, and we've established that in the above query\nthey're off by a factor of 160. To help make up the difference, it\nmakes sense to de-emphasize page access compared to cpu-related\ncosts by reducing both page costs to 0.1. Combined, these\nadjustments still can't compensate for how far off the estimate\nwas.\n\nIn my experience default cpu_tuple_cost is understated compared to\nother cpu-related costs, so I would do the above *plus* a boost to\ncpu_tuple_cost. Personally, I have never seen a difference between\nplans chosen with that set to 0.03 and 0.05, so I can't say where\nin that range is the ideal value; you should feel free to\nexperiment if there is a query which seems to be choosing a bad\nplan. If the above results really do represent cache hit levels you\nexpect in production, the combination of the above changes should\ncome reasonably close to modeling costs realistically, resulting in\nbetter plan choice.In production, 60% of the database would be able to fit in the RAM. But roughly, all the active data we need to use should be able to fit in 100%. On the test server I'm playing with now, RAM is only 8% of the database size. Nonetheless, I will play with these parameters like you suggested.\nI was wondering on our production server where the effetive_cache_size will be much bigger, will pg then guess that probably most data is cached anyway therefore leaning towards nested loop join rather than a scan for hash join?\nEven on a test server where the cache hit rate is much smaller, for a big table like this, under what circumstances, will a hash join perform better than nested loop join though?  \n\nIf you don't expect such high cache hit ratios in production, you\nprobably don't want to go so low with page costs.\n\n>>> - shared_buffers = 6GB\n>>> - effective_cache_size = 18GB\n>>> - work_mem = 10MB\n>>> - maintenance_work_mem = 3GB\n\n> Can you see any obvious issues with the other memory settings I\n> changed?\n\nI might bump up work_mem to 20MB to 60MB, as long as you're not\ngoing crazy with max_connections. 
I would probably take\nmaintenance_work_mem down to 1GB to 2GB -- you can have several of\nthese allocations at one time, and you don't want to blow away your\ncache. (I think it might actually be adjusted down to 2GB\ninternally anyway; but I would need to check.)Yes, I had bumped up work_mem yesterday to speed up another big group by query. I used 80MB. I assumed this memory will only be used if the query needs it and will be released as soon as it's finished, so it won't be too much an issue as long as I don't have too many concurrently sorting queries running (which is true in our production). Is this correct?\nI increased maintenance_work_mem initially to speed up the index creation when I first pump in the data. In production environment, we don't do run time index creation, so I think only the vacuum and analyze will consume this memory?\nThanksHuan \n\n-Kevin", "msg_date": "Fri, 14 Dec 2012 10:51:27 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
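Put together, the advice in this exchange amounts to something like the following session-level experiment; the values are the ones discussed above for a mostly cached data set, not universal recommendations, and they can be moved into postgresql.conf once the plans look right.

    SET seq_page_cost = 0.1;        -- data expected to be (almost) fully cached
    SET random_page_cost = 0.1;     -- keep equal to seq_page_cost for cached data
    SET cpu_tuple_cost = 0.03;      -- anywhere in the suggested 0.03 - 0.05 range
    SET work_mem = '32MB';          -- within the suggested 20MB - 60MB
    -- maintenance_work_mem (1GB - 2GB suggested above) only affects VACUUM,
    -- CREATE INDEX and similar maintenance work, so set it in postgresql.conf;
    -- then re-run EXPLAIN (ANALYZE, BUFFERS) on the problem join.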
[ { "msg_contents": "Hi\n\nI have currently some trouble with inserts into a table\n\nINSERT INTO LPP (PPID, LID)\nSELECT DISTINCT PPid, LID FROM\n (SELECT * FROM PP WHERE s_id = sid) pp\n INNER JOIN\n has_protein hp1\n ON pp.p1id = hp1.pid\n INNER JOIN\n has_protein hp2\n ON pp.p2_id = hp2.pid\n INNER JOIN\n (SELECT * FROM L WHERE s_id = sid) l\n ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\npp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\npp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n ;\n\nIf I run only\n\nSELECT DISTINCT PPid, LID FROM\n (SELECT * FROM PP WHERE s_id = 708) pp\n INNER JOIN\n has_protein hp1\n ON pp.p1id = hp1.pid\n INNER JOIN\n has_protein hp2\n ON pp.p2_id = hp2.pid\n INNER JOIN\n (SELECT * FROM L WHERE s_id = 708) l\n ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\npp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\npp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n ;\n\nit returns 200620 rows in 170649 ms ( thats just under 3 minutes). I\nstopped the actual insert after about 8h.\n\nThe table that the insert happens to, is following:\nCREATE TABLE LPP\n(\n ppid bigint NOT NULL,\n lid bigint NOT NULL,\n CONSTRAINT pk_lpp PRIMARY KEY (ppid,lid)\n)\n\nI also tried without the primary key but that one is still running for\nmore that a day.\n\nCurrently the table LPP holds 471139 rows. Its linking the PP and the L\ntable.\n\nThere are no foreign keys referring to that table nor are there any\nother constraints on it.\nPreviously I had foreign keys on lid and ppid refering to the L and PP\ntable. But in a desperate try to get some speed up I deleted these. -\nBut still...\n\nI am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and\nthe database is on something like a raid 1+0 (actually a raid1e)\nconsisting of 3x4TB disks (limit of what could easily be fitted into the\nserver).\n\nAt the given time there were no concurrent access to any of the\ninvolved tables.\n\nHas anybody some idea why the insert takes so long and/or how to speed\nthings up a bit? I could live with something like half an hour - better\nwould be in minutes.\n\n\nThanks for any responds,\n\nLutz Fischer\n\n\n-- \nThe University of Edinburgh is a charitable body, registered in\nScotland, with registration number SC005336.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 15:37:33 +0000", "msg_from": "Lutz Fischer <[email protected]>", "msg_from_op": true, "msg_subject": "problem with large inserts" }, { "msg_contents": "I would strongly discourage you from droppping the referential integrity. You risk data corruption, which will cost you a good deal of time to sort it out properly, and corruption prevents you to apply the R.I. again. Also it has hardly any performance impact. \n\nAre the plans different? 
( i guess you've looked at http://wiki.postgresql.org/wiki/Slow_Query_Questions ?)\n\n> Date: Thu, 13 Dec 2012 15:37:33 +0000\n> From: [email protected]\n> To: [email protected]\n> Subject: [PERFORM] problem with large inserts\n> \n> Hi\n> \n> I have currently some trouble with inserts into a table\n> \n> INSERT INTO LPP (PPID, LID)\n> SELECT DISTINCT PPid, LID FROM\n> (SELECT * FROM PP WHERE s_id = sid) pp\n> INNER JOIN\n> has_protein hp1\n> ON pp.p1id = hp1.pid\n> INNER JOIN\n> has_protein hp2\n> ON pp.p2_id = hp2.pid\n> INNER JOIN\n> (SELECT * FROM L WHERE s_id = sid) l\n> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n> ;\n> \n> If I run only\n> \n> SELECT DISTINCT PPid, LID FROM\n> (SELECT * FROM PP WHERE s_id = 708) pp\n> INNER JOIN\n> has_protein hp1\n> ON pp.p1id = hp1.pid\n> INNER JOIN\n> has_protein hp2\n> ON pp.p2_id = hp2.pid\n> INNER JOIN\n> (SELECT * FROM L WHERE s_id = 708) l\n> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n> ;\n> \n> it returns 200620 rows in 170649 ms ( thats just under 3 minutes). I\n> stopped the actual insert after about 8h.\n> \n> The table that the insert happens to, is following:\n> CREATE TABLE LPP\n> (\n> ppid bigint NOT NULL,\n> lid bigint NOT NULL,\n> CONSTRAINT pk_lpp PRIMARY KEY (ppid,lid)\n> )\n> \n> I also tried without the primary key but that one is still running for\n> more that a day.\n> \n> Currently the table LPP holds 471139 rows. Its linking the PP and the L\n> table.\n> \n> There are no foreign keys referring to that table nor are there any\n> other constraints on it.\n> Previously I had foreign keys on lid and ppid refering to the L and PP\n> table. But in a desperate try to get some speed up I deleted these. -\n> But still...\n> \n> I am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and\n> the database is on something like a raid 1+0 (actually a raid1e)\n> consisting of 3x4TB disks (limit of what could easily be fitted into the\n> server).\n> \n> At the given time there were no concurrent access to any of the\n> involved tables.\n> \n> Has anybody some idea why the insert takes so long and/or how to speed\n> things up a bit? I could live with something like half an hour - better\n> would be in minutes.\n> \n> \n> Thanks for any responds,\n> \n> Lutz Fischer\n> \n> \n> -- \n> The University of Edinburgh is a charitable body, registered in\n> Scotland, with registration number SC005336.\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n\n\n\n\nI would strongly discourage you from droppping the referential integrity. You risk data corruption, which will cost you a good deal of time to sort it out properly, and corruption prevents you to apply the R.I. again. Also it has hardly any performance impact. Are the plans different? 
( i guess you've looked at http://wiki.postgresql.org/wiki/Slow_Query_Questions ?)> Date: Thu, 13 Dec 2012 15:37:33 +0000> From: [email protected]> To: [email protected]> Subject: [PERFORM] problem with large inserts> > Hi> > I have currently some trouble with inserts into a table> > INSERT INTO LPP (PPID, LID)> SELECT DISTINCT PPid, LID FROM> (SELECT * FROM PP WHERE s_id = sid) pp> INNER JOIN> has_protein hp1> ON pp.p1id = hp1.pid> INNER JOIN> has_protein hp2> ON pp.p2_id = hp2.pid> INNER JOIN> (SELECT * FROM L WHERE s_id = sid) l> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)> ;> > If I run only> > SELECT DISTINCT PPid, LID FROM> (SELECT * FROM PP WHERE s_id = 708) pp> INNER JOIN> has_protein hp1> ON pp.p1id = hp1.pid> INNER JOIN> has_protein hp2> ON pp.p2_id = hp2.pid> INNER JOIN> (SELECT * FROM L WHERE s_id = 708) l> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)> ;> > it returns 200620 rows in 170649 ms ( thats just under 3 minutes). I> stopped the actual insert after about 8h.> > The table that the insert happens to, is following:> CREATE TABLE LPP> (> ppid bigint NOT NULL,> lid bigint NOT NULL,> CONSTRAINT pk_lpp PRIMARY KEY (ppid,lid)> )> > I also tried without the primary key but that one is still running for> more that a day.> > Currently the table LPP holds 471139 rows. Its linking the PP and the L> table.> > There are no foreign keys referring to that table nor are there any> other constraints on it.> Previously I had foreign keys on lid and ppid refering to the L and PP> table. But in a desperate try to get some speed up I deleted these. -> But still...> > I am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and> the database is on something like a raid 1+0 (actually a raid1e)> consisting of 3x4TB disks (limit of what could easily be fitted into the> server).> > At the given time there were no concurrent access to any of the> involved tables.> > Has anybody some idea why the insert takes so long and/or how to speed> things up a bit? 
I could live with something like half an hour - better> would be in minutes.> > > Thanks for any responds,> > Lutz Fischer> > > -- > The University of Edinburgh is a charitable body, registered in> Scotland, with registration number SC005336.> > > > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 13 Dec 2012 15:49:44 +0000", "msg_from": "Willem Leenen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with large inserts" }, { "msg_contents": "Just an idea - how long does it take to run _only_\nCREATE TEMP TABLE foo AS <your SELECT here>\n\n\n\n\nOn Thu, Dec 13, 2012 at 4:37 PM, Lutz Fischer\n<[email protected]> wrote:\n> Hi\n>\n> I have currently some trouble with inserts into a table\n>\n> INSERT INTO LPP (PPID, LID)\n> SELECT DISTINCT PPid, LID FROM\n> (SELECT * FROM PP WHERE s_id = sid) pp\n> INNER JOIN\n> has_protein hp1\n> ON pp.p1id = hp1.pid\n> INNER JOIN\n> has_protein hp2\n> ON pp.p2_id = hp2.pid\n> INNER JOIN\n> (SELECT * FROM L WHERE s_id = sid) l\n> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n> ;\n>\n> If I run only\n>\n> SELECT DISTINCT PPid, LID FROM\n> (SELECT * FROM PP WHERE s_id = 708) pp\n> INNER JOIN\n> has_protein hp1\n> ON pp.p1id = hp1.pid\n> INNER JOIN\n> has_protein hp2\n> ON pp.p2_id = hp2.pid\n> INNER JOIN\n> (SELECT * FROM L WHERE s_id = 708) l\n> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n> ;\n>\n> it returns 200620 rows in 170649 ms ( thats just under 3 minutes). I\n> stopped the actual insert after about 8h.\n>\n> The table that the insert happens to, is following:\n> CREATE TABLE LPP\n> (\n> ppid bigint NOT NULL,\n> lid bigint NOT NULL,\n> CONSTRAINT pk_lpp PRIMARY KEY (ppid,lid)\n> )\n>\n> I also tried without the primary key but that one is still running for\n> more that a day.\n>\n> Currently the table LPP holds 471139 rows. Its linking the PP and the L\n> table.\n>\n> There are no foreign keys referring to that table nor are there any\n> other constraints on it.\n> Previously I had foreign keys on lid and ppid refering to the L and PP\n> table. But in a desperate try to get some speed up I deleted these. -\n> But still...\n>\n> I am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and\n> the database is on something like a raid 1+0 (actually a raid1e)\n> consisting of 3x4TB disks (limit of what could easily be fitted into the\n> server).\n>\n> At the given time there were no concurrent access to any of the\n> involved tables.\n>\n> Has anybody some idea why the insert takes so long and/or how to speed\n> things up a bit? 
I could live with something like half an hour - better\n> would be in minutes.\n>\n>\n> Thanks for any responds,\n>\n> Lutz Fischer\n>\n>\n> --\n> The University of Edinburgh is a charitable body, registered in\n> Scotland, with registration number SC005336.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 17:09:19 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with large inserts" }, { "msg_contents": "Lutz Fischer <[email protected]> writes:\n> I have currently some trouble with inserts into a table\n> If I run only [ the select part ]\n> it returns 200620 rows in 170649 ms ( thats just under 3 minutes). I\n> stopped the actual insert after about 8h.\n\nIt should not take 8h to insert 200k rows on any machine made this\ncentury. Frankly, I'm wondering if the insert is doing anything at all,\nor is blocked on a lock somewhere. You say there's no concurrent\nactivity, but how hard did you look? Did you check that, say, the\nphysical disk file for the table is growing?\n\n> I am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and\n> the database is on something like a raid 1+0 (actually a raid1e)\n> consisting of 3x4TB disks (limit of what could easily be fitted into the\n> server).\n\nA different line of thought is that there's something seriously broken\nabout the raid configuration. Have you tried basic disk-speed\nbenchmarks? (Perhaps there's an equivalent of bonnie++ for windows.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 11:10:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with large inserts" }, { "msg_contents": "Thanks a lot you saved my day\n\ncreate temp table foo AS SELECT DISTINCT ...\ndid take a mere 77464.744 ms\nAnd an additional\nInsert into LPP select * from foo;\nJust 576.909 ms\n\nI don't really understand why it's working via a temp table but not\ndirectly (or in any reasonable amount of time) - but at least I have a\nsolution I can work with.\n\n\nOn 13/12/12 16:09, Filip Rembiałkowski wrote:\n> Just an idea - how long does it take to run _only_\n> CREATE TEMP TABLE foo AS <your SELECT here>\n>\n>\n>\n>\n> On Thu, Dec 13, 2012 at 4:37 PM, Lutz Fischer\n> <[email protected]> wrote:\n>> Hi\n>>\n>> I have currently some trouble with inserts into a table\n>>\n>> INSERT INTO LPP (PPID, LID)\n>> SELECT DISTINCT PPid, LID FROM\n>> (SELECT * FROM PP WHERE s_id = sid) pp\n>> INNER JOIN\n>> has_protein hp1\n>> ON pp.p1id = hp1.pid\n>> INNER JOIN\n>> has_protein hp2\n>> ON pp.p2_id = hp2.pid\n>> INNER JOIN\n>> (SELECT * FROM L WHERE s_id = sid) l\n>> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n>> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n>> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n>> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n>> ;\n>>\n>> If I run only\n>>\n>> SELECT DISTINCT PPid, LID FROM\n>> (SELECT * FROM PP WHERE s_id = 708) pp\n>> INNER JOIN\n>> has_protein hp1\n>> ON pp.p1id = hp1.pid\n>> INNER 
JOIN\n>> has_protein hp2\n>> ON pp.p2_id = hp2.pid\n>> INNER JOIN\n>> (SELECT * FROM L WHERE s_id = 708) l\n>> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n>> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n>> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n>> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n>> ;\n>>\n>> it returns 200620 rows in 170649 ms ( thats just under 3 minutes). I\n>> stopped the actual insert after about 8h.\n>>\n>> The table that the insert happens to, is following:\n>> CREATE TABLE LPP\n>> (\n>> ppid bigint NOT NULL,\n>> lid bigint NOT NULL,\n>> CONSTRAINT pk_lpp PRIMARY KEY (ppid,lid)\n>> )\n>>\n>> I also tried without the primary key but that one is still running for\n>> more that a day.\n>>\n>> Currently the table LPP holds 471139 rows. Its linking the PP and the L\n>> table.\n>>\n>> There are no foreign keys referring to that table nor are there any\n>> other constraints on it.\n>> Previously I had foreign keys on lid and ppid refering to the L and PP\n>> table. But in a desperate try to get some speed up I deleted these. -\n>> But still...\n>>\n>> I am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and\n>> the database is on something like a raid 1+0 (actually a raid1e)\n>> consisting of 3x4TB disks (limit of what could easily be fitted into the\n>> server).\n>>\n>> At the given time there were no concurrent access to any of the\n>> involved tables.\n>>\n>> Has anybody some idea why the insert takes so long and/or how to speed\n>> things up a bit? I could live with something like half an hour - better\n>> would be in minutes.\n>>\n>>\n>> Thanks for any responds,\n>>\n>> Lutz Fischer\n>>\n>>\n>> --\n>> The University of Edinburgh is a charitable body, registered in\n>> Scotland, with registration number SC005336.\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nLutz Fischer\[email protected]\n+44 131 6517057\n\n\nThe University of Edinburgh is a charitable body, registered in\nScotland, with registration number SC005336.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 16:33:28 +0000", "msg_from": "Lutz Fischer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with large inserts" }, { "msg_contents": "Hmm, so it is some kind of file / table locking issue, not general IO\nsystem malfunction.\n\nIt would be interesting and useful to run this use case on other\npostgres instance (or several instances), including non-Windows ones.\n\nOTOH Pg on Windows housekeeping was always \"fun\" - I advise all my\nclients to avoid it for production purposes.\n\n\n\n\nOn Thu, Dec 13, 2012 at 5:33 PM, Lutz Fischer\n<[email protected]> wrote:\n> Thanks a lot you saved my day\n>\n> create temp table foo AS SELECT DISTINCT ...\n> did take a mere 77464.744 ms\n> And an additional\n> Insert into LPP select * from foo;\n> Just 576.909 ms\n>\n> I don't really understand why it's working via a temp table but not\n> directly (or in any reasonable amount of time) - but at least I have a\n> solution I can work with.\n>\n>\n> On 13/12/12 16:09, Filip Rembiałkowski wrote:\n>> Just an idea - how long does it take to run _only_\n>> CREATE TEMP TABLE foo AS <your SELECT here>\n>>\n>>\n>>\n>>\n>> On Thu, Dec 13, 2012 at 4:37 PM, Lutz Fischer\n>> <[email 
protected]> wrote:\n>>> Hi\n>>>\n>>> I have currently some trouble with inserts into a table\n>>>\n>>> INSERT INTO LPP (PPID, LID)\n>>> SELECT DISTINCT PPid, LID FROM\n>>> (SELECT * FROM PP WHERE s_id = sid) pp\n>>> INNER JOIN\n>>> has_protein hp1\n>>> ON pp.p1id = hp1.pid\n>>> INNER JOIN\n>>> has_protein hp2\n>>> ON pp.p2_id = hp2.pid\n>>> INNER JOIN\n>>> (SELECT * FROM L WHERE s_id = sid) l\n>>> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n>>> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n>>> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n>>> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n>>> ;\n>>>\n>>> If I run only\n>>>\n>>> SELECT DISTINCT PPid, LID FROM\n>>> (SELECT * FROM PP WHERE s_id = 708) pp\n>>> INNER JOIN\n>>> has_protein hp1\n>>> ON pp.p1id = hp1.pid\n>>> INNER JOIN\n>>> has_protein hp2\n>>> ON pp.p2_id = hp2.pid\n>>> INNER JOIN\n>>> (SELECT * FROM L WHERE s_id = 708) l\n>>> ON (hp1.pid = l.p1id AND hp2.pid = l.p2id AND hp1.ppos +\n>>> pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)\n>>> OR (hp1.pid = l.p2id AND hp2.pid = l.p1id AND hp1.ppos +\n>>> pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1)\n>>> ;\n>>>\n>>> it returns 200620 rows in 170649 ms ( thats just under 3 minutes). I\n>>> stopped the actual insert after about 8h.\n>>>\n>>> The table that the insert happens to, is following:\n>>> CREATE TABLE LPP\n>>> (\n>>> ppid bigint NOT NULL,\n>>> lid bigint NOT NULL,\n>>> CONSTRAINT pk_lpp PRIMARY KEY (ppid,lid)\n>>> )\n>>>\n>>> I also tried without the primary key but that one is still running for\n>>> more that a day.\n>>>\n>>> Currently the table LPP holds 471139 rows. Its linking the PP and the L\n>>> table.\n>>>\n>>> There are no foreign keys referring to that table nor are there any\n>>> other constraints on it.\n>>> Previously I had foreign keys on lid and ppid refering to the L and PP\n>>> table. But in a desperate try to get some speed up I deleted these. -\n>>> But still...\n>>>\n>>> I am running postgresql 9.2 on a windows 2008 R2 server with 256 GB and\n>>> the database is on something like a raid 1+0 (actually a raid1e)\n>>> consisting of 3x4TB disks (limit of what could easily be fitted into the\n>>> server).\n>>>\n>>> At the given time there were no concurrent access to any of the\n>>> involved tables.\n>>>\n>>> Has anybody some idea why the insert takes so long and/or how to speed\n>>> things up a bit? I could live with something like half an hour - better\n>>> would be in minutes.\n>>>\n>>>\n>>> Thanks for any responds,\n>>>\n>>> Lutz Fischer\n>>>\n>>>\n>>> --\n>>> The University of Edinburgh is a charitable body, registered in\n>>> Scotland, with registration number SC005336.\n>>>\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> --\n> Lutz Fischer\n> [email protected]\n> +44 131 6517057\n>\n>\n> The University of Edinburgh is a charitable body, registered in\n> Scotland, with registration number SC005336.\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 17:44:01 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with large inserts" } ]
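For reference, the workaround that finally worked here generalises to a simple stage-then-insert pattern. The query text below is copied from the messages above, so it assumes the same schema; ON COMMIT DROP is only there to avoid leaving the staging table around, and the ANALYZE is cheap insurance before the insert.

    BEGIN;
    CREATE TEMP TABLE lpp_stage ON COMMIT DROP AS
    SELECT DISTINCT PPid, LID
    FROM (SELECT * FROM PP WHERE s_id = 708) pp
    INNER JOIN has_protein hp1 ON pp.p1id = hp1.pid
    INNER JOIN has_protein hp2 ON pp.p2_id = hp2.pid
    INNER JOIN (SELECT * FROM L WHERE s_id = 708) l
            ON (hp1.pid = l.p1id AND hp2.pid = l.p2id
                AND hp1.ppos + pp.s1 = l.s1 AND hp2.ppos + pp.s2 = l.s2)
            OR (hp1.pid = l.p2id AND hp2.pid = l.p1id
                AND hp1.ppos + pp.s1 = l.s2 AND hp2.ppos + pp.s2 = l.s1);

    ANALYZE lpp_stage;
    INSERT INTO LPP (PPID, LID) SELECT ppid, lid FROM lpp_stage;
    COMMIT;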
[ { "msg_contents": "Pavan Deolasee wrote:\n> Amitabh Kant <[email protected]> wrote:\n\n>> Our scripts automatically add \"LIMIT ALL\" & \"OFFSET 0\" to every\n>> select query if no values are passed on for these parameters. I\n>> remember reading through the mailing list that it's better not\n>> to pass them if they are not needed as they add a cost to the\n>> query plan. Is this the case, or am i looking at a very minor\n>> optimization.\n>>\n> \n> I would tend to think that is the latter. While undoubtedly\n> limit/offset clause will add another node during query planning\n> and execution, AFAICS the OFFSET 0 and LIMIT ALL cases are\n> optimized to a good extent. So the overhead of having them will\n> not be significant.\n\nI ran some quick tests on my i7 under Linux. Plan time was\nincreased by about 40 microseconds (based on EXPLAIN runtime) and\nadded a limit node to the plan. Execution time on a SELECT * FROM\ntenk1 in the regression database went up by 1.35 ms on fully cached\nruns.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 10:47:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit & offset effect on query plans" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Pavan Deolasee wrote:\n>> I would tend to think that is the latter. While undoubtedly\n>> limit/offset clause will add another node during query planning\n>> and execution, AFAICS the OFFSET 0 and LIMIT ALL cases are\n>> optimized to a good extent. So the overhead of having them will\n>> not be significant.\n\n> I ran some quick tests on my i7 under Linux. Plan time was\n> increased by about 40 microseconds (based on EXPLAIN runtime) and\n> added a limit node to the plan. Execution time on a SELECT * FROM\n> tenk1 in the regression database went up by 1.35 ms on fully cached\n> runs.\n\n1.35ms out of what?\n\nFWIW, I've been considering teaching the planner to not bother with\nan actual Limit plan node if the limit clause is an obvious no-op.\nI wasn't thinking about applications that blindly insert such clauses,\nbut rather about not penalizing subqueries when someone uses one of\nthese as an optimization fence. (The clauses would still work as an\nopt fence, you'd just not see any Limit node in the final plan.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 11:20:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit & offset effect on query plans" } ]
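The measurement described here is easy to repeat against the regression-test database, where tenk1 is the standard 10,000-row table; the absolute numbers will of course differ by machine.

    \timing on
    -- warm the cache first by running each statement a few times
    EXPLAIN ANALYZE SELECT * FROM tenk1;
    EXPLAIN ANALYZE SELECT * FROM tenk1 LIMIT ALL OFFSET 0;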
[ { "msg_contents": "Tom Lane wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n\n>> I ran some quick tests on my i7 under Linux. Plan time was\n>> increased by about 40 microseconds (based on EXPLAIN runtime)\n>> and added a limit node to the plan. Execution time on a SELECT *\n>> FROM tenk1 in the regression database went up by 1.35 ms on\n>> fully cached runs.\n> \n> 1.35ms out of what?\n\nWithout the limit node the runtimes (after \"priming\" the cache)\nwere:\n\n1.805, 2.533\n1.805, 2.495\n1.800, 2.446\n1.818, 2.470\n1.804, 2.502\n\nThe first time for each run is \"Total runtime\" reported by EXPLAIN,\nthe second is what psql reported from having \\timing on.\n\nWith the limit node:\n\n3.237, 3.914\n3.243, 3.918\n3.263, 4.010\n3.265, 3.943\n3.272, 3.953\n\nI eyeballed that in the console window and said 1.35 based on rough\nin-my-head calculations, although with it laid out in a nicer\nformat, I think I was a little low.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 12:07:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit & offset effect on query plans" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane wrote:\n>> 1.35ms out of what?\n\n> Without the limit node the runtimes (after \"priming\" the cache)\n> were:\n\n> 1.805, 2.533\n> 1.805, 2.495\n> 1.800, 2.446\n> 1.818, 2.470\n> 1.804, 2.502\n\n> The first time for each run is \"Total runtime\" reported by EXPLAIN,\n> the second is what psql reported from having \\timing on.\n\n> With the limit node:\n\n> 3.237, 3.914\n> 3.243, 3.918\n> 3.263, 4.010\n> 3.265, 3.943\n> 3.272, 3.953\n\n> I eyeballed that in the console window and said 1.35 based on rough\n> in-my-head calculations, although with it laid out in a nicer\n> format, I think I was a little low.\n\nHuh, so on a percentage basis the Limit-node overhead is actually pretty\nsignificant, at least for a trivial seqscan plan like this case.\n(This is probably about the worst-case scenario, really, since it's\ntough to beat a simple seqscan for cost-per-emitted-row. Also I gather\nyou're not actually transmitting any data to the client ...)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 12:12:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit & offset effect on query plans" } ]
[ { "msg_contents": "Hello,\n\n\nI have a customer that experience a strange behaviour related to statictics.\n\nThrere is a vacuum analyze planned during the night.\nThe morning, 1 day out of 2, there are some extremely slow queries.\nThose queries lasts more than 5 minutes (never waited more and cancelled\nthem) whereas when everything is OK they last less than 300ms.\n\nIn order to resolve this i have to execute a least one analyze, sometimes\nmore.\n\nMy Configuration:\nWindows\nPostgreSQL 8.4.8\ndefault_statistics_target = 100\n\nIn addition to an increase for shared_buffers, work_mem, ..., i changed\nthe default_statistics_target to 500 with no effect.\nIt was even worse as i never managed to get rid of the slow queries after\nrunning many analyze.\nSo i fell back to default_statistics_target=100 in order to get rid of\nthose slow queries.\n\n\nI have no idea of what i can do to solve this issue.\nAny help would be greatly appreciated.\n\n\nCordialement,\n*Ghislain ROUVIGNAC*\nIngénieur R&D\[email protected]\n\n7 rue Marcel Dassault - Z.A. La Mouline - 81990 Cambon d'Albi - FRANCE\nTel : 05 63 53 08 18 - Fax : 05 63 53 07 42 - www.sylob.com\nSupport : 05 63 53 78 35 - [email protected]\nEntreprise certifiée ISO 9001 version 2008 par Bureau Veritas.\n\nHello,I have a customer that experience a strange behaviour related to statictics.Threre is a vacuum analyze planned during the night.The morning, 1 day out of 2, there are some extremely slow queries.\nThose queries lasts more than 5 minutes (never waited more and cancelled them) whereas when everything is OK they last less than 300ms.In order to resolve this i have to execute a least one analyze, sometimes more.\nMy Configuration:WindowsPostgreSQL 8.4.8default_statistics_target = 100In addition to an increase for shared_buffers, work_mem, ..., i changed the default_statistics_target to 500 with no effect.\nIt was even worse as i never managed to get rid of the slow queries after running many analyze.So i fell back to default_statistics_target=100 in order to get rid of those slow queries.\nI have no idea of what i can do to solve this issue.Any help would be greatly appreciated.Cordialement,\nGhislain ROUVIGNAC\nIngénieur R&[email protected]\n7 rue\nMarcel Dassault - Z.A. La Mouline - 81990 Cambon d'Albi - FRANCETel : 05 63 53 08 18 -\nFax : 05 63 53 07 42 - www.sylob.com\nSupport : 05 63 53 78\n35 - [email protected] certifiée\nISO 9001 version 2008 par Bureau Veritas.", "msg_date": "Thu, 13 Dec 2012 18:10:04 +0100", "msg_from": "Ghislain ROUVIGNAC <[email protected]>", "msg_from_op": true, "msg_subject": "Slow queries after vacuum analyze" } ]
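One way to attack this kind of plan instability, instead of moving default_statistics_target globally, is to raise the statistics target only on the columns whose estimates swing after each ANALYZE; the table and column names below are placeholders.

    -- Placeholders: substitute the tables/columns used by the slow queries.
    ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 500;
    ANALYZE some_table;
    -- then compare EXPLAIN ANALYZE output for one of the 5-minute queries
    -- before and after the nightly vacuum analyze run.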
[ { "msg_contents": "Tom Lane wrote:\n\n> Huh, so on a percentage basis the Limit-node overhead is actually\n> pretty significant, at least for a trivial seqscan plan like this\n> case. (This is probably about the worst-case scenario, really,\n> since it's tough to beat a simple seqscan for cost-per-emitted-\n> row. Also I gather you're not actually transmitting any data to\n> the client ...)\n\nRight, I was trying to isolate the cost, and in a more complex\nquery, or with results streaming back, that could easily be lost in\nthe noise. Assuming that the setup time for the node is trivial\ncompared to filtering 10,000 rows, the time per row which passes\nthrough the limit node seems to be (very roughly) 140 nanoseconds\non an i7. I don't know whether that will vary based on the number\nor types of columns.\n\nI just tried with returning the results rather than running EXPLAIN\nANALYZE, and any difference was lost in the noise with only five\nsamples each way. I wonder how much of the difference with EXPLAIN\nANALYZE might have been from the additional time checking. Maybe on\na normal run the difference would be less significant.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 13:14:32 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Limit & offset effect on query plans" } ]
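As a sanity check of the roughly 140 ns figure, the per-row cost can be backed out of the EXPLAIN timings posted earlier in this thread (about 1.805 ms without the Limit node and 3.263 ms with it, over 10,000 rows):

    SELECT round((3.263 - 1.805) / 10000 * 1000000) AS approx_ns_per_row;  -- about 146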
[ { "msg_contents": "Ghislain ROUVIGNAC wrote:\n\n> Threre is a vacuum analyze planned during the night.\n> The morning, 1 day out of 2, there are some extremely slow\n> queries. Those queries lasts more than 5 minutes (never waited\n> more and cancelled them) whereas when everything is OK they last\n> less than 300ms.\n> \n> In order to resolve this i have to execute a least one analyze,\n> sometimes more.\n> \n> My Configuration:\n> Windows\n> PostgreSQL 8.4.8\n> default_statistics_target = 100\n> \n> In addition to an increase for shared_buffers, work_mem, ..., i\n> changed the default_statistics_target to 500 with no effect.\n> It was even worse as i never managed to get rid of the slow\n> queries after running many analyze.\n> So i fell back to default_statistics_target=100 in order to get\n> rid of those slow queries.\n\nYou probably need to adjust your cost factors to more accurately\nreflect the actual costs of various activities on your system. What\nis probably happening is that there are two plans which are very\nclose together in estimated costs using the current values, while\nthe actual costs are very different.  The particular random sample\nchosen can push the balance one way or the other.\n\nPlease show the results from running the query on this page:\n\nhttp://wiki.postgresql.org/wiki/Server_Configuration\n\nAlso, a description of the run environment would help.\n\nOther information listed on this page would help, although cores,\nRAM, and storage system information would probably be most\nimportant.\n\nhttp://wiki.postgresql.org/wiki/Server_Configuration\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 15:42:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow queries after vacuum analyze" } ]
[ { "msg_contents": "Why does the number of rows are different in actual and estimated?\nThe default_statistics_target is set to 100.\n\n\nexplain analyze\nselect *\nFROM (\nSELECT\nentity.id AS \"con_s_id\", entity.setype AS \"con_s_setype\" ,\ncon_details.salutation AS \"con_s_salutationtype\", con_details.firstname AS\n\"con_s_firstname\",\ncon_details.phone AS \"con_s_phone\", con_details.lastname AS\n\"con_s_lastname\",\ncon_details.accountid AS \"con_s_account_id_entityid\", con_details.mobile AS\n\"con_s_mobile\",\ncon_details.title AS \"con_s_title\", con_details.donotcall AS\n\"con_s_donotcall\",\ncon_details.fax AS \"con_s_fax\", con_details.department AS\n\"con_s_department\",\ncon_details.email AS \"con_s_email\", con_details.yahooid AS \"con_s_yahooid\",\ncon_details.emailoptout AS \"con_s_emailoptout\", con_details.reportsto AS\n\"con_s_con__id_entityid\",\ncon_details.reference AS \"con_s_reference\", entity.smownerid AS\n\"con_s_assigned_user_id_entityid\",\nCASE WHEN entity.owner_type='U' THEN users.user_name ELSE groups.groupname\nEND AS \"con_s_assigned_user_id_name\",\nCASE WHEN entity.owner_type='U' THEN users.first_name || ' ' ||\nusers.last_name ELSE groups.groupname END AS \"con_s_assigned_user_id\",\nCASE WHEN entity.owner_type='U' THEN 'Users' ELSE 'Groups' END AS\n\"con_s_assigned_user_id_linkmodule\",\nentity.modifiedtime AS \"con_s_modifiedtime\", con_details.notify_owner AS\n\"con_s_notify_owner\",\nentity.createdtime AS \"con_s_createdtime\", entity.description AS\n\"con_s_description\",\ncon_details.imagename AS \"con_s_imagename\"\nFROM con_details\nINNER JOIN entity ON con_details.con_id=entity.id\nLEFT JOIN groups ON groups.groupid = entity.smownerid\nLEFT join users ON entity.smownerid= users.id\nWHERE entity.setype='con_s' AND entity.deleted=0\nAND (((con_details.email ILIKE '%@%')))\n) con_base\nINNER JOIN con_scf ON con_s_base.\"con_s_id\"=con_scf.con_id\nINNER JOIN con_subdetails ON\ncon_s_base.\"con_s_id\"=con_subdetails.con_subscriptionid\nINNER JOIN customerdetails ON\ncon_s_base.\"con_s_id\"=customerdetails.customerid\nINNER JOIN con_address ON con_s_base.\"con_s_id\"=con_address.con_addressid\n\n\nNested Loop (cost=18560.97..26864.83 rows=24871 width=535) (actual\ntime=1335.157..8492.414 rows=157953 loops=1)\n -> Hash Left Join (cost=18560.97..26518.91 rows=116 width=454) (actual\ntime=1335.117..6996.585 rows=205418 loops=1)\n Hash Cond: (entity.smownerid = users.id)\n -> Hash Left Join (cost=18547.22..26503.57 rows=116 width=419)\n(actual time=1334.354..6671.442 rows=205418 loops=1)\n Hash Cond: (entity.smownerid = groups.groupid)\n -> Nested Loop (cost=18546.83..26502.72 rows=116\nwidth=398) (actual time=1334.314..6385.664 rows=205418 loops=1)\n -> Nested Loop (cost=18546.83..26273.40 rows=774\nwidth=319) (actual time=1334.272..5025.175 rows=205418 loops=1)\n -> Hash Join (cost=18546.83..24775.02\nrows=5213 width=273) (actual time=1334.238..3666.748 rows=205420 loops=1)\n Hash Cond:\n(con_subdetails.con_subscriptionid = entity.id)\n -> Index Scan using con_subdetails_pkey\non con_subdetails (cost=0.00..4953.41 rows=326040 width=29) (actual\ntime=0.019..350\n.736 rows=327328 loops=1)\n -> Hash (cost=18115.71..18115.71\nrows=34489 width=244) (actual time=1334.147..1334.147 rows=205420 loops=1)\n Buckets: 4096 Batches: 1 Memory\nUsage: 19417kB\n -> Hash Join\n (cost=9337.97..18115.71 rows=34489 width=244) (actual\ntime=418.054..1156.453 rows=205420 loops=1)\n Hash Cond:\n(customerdetails.customerid = entity.id)\n -> Seq Scan on\ncustomerdetails 
(cost=0.00..4752.46 rows=327146 width=13) (actual\ntime=0.021..176.389 rows=327328 loops=1)\n -> Hash\n (cost=6495.65..6495.65 rows=227386 width=231) (actual\ntime=417.839..417.839 rows=205420 loops=1)\n Buckets: 32768 Batches:\n1 Memory Usage: 16056kB\n -> Index Scan using\nentity_setype_idx on entity (cost=0.00..6495.65 rows=227386 width=231)\n(actual time=0.033..2\n53.880 rows=205420 loops=1)\n Index Cond:\n((setype)::text = 'con_s'::text)\n -> Index Scan using con_address_pkey on\ncon_address (cost=0.00..0.27 rows=1 width=46) (actual time=0.003..0.004\nrows=1 loops=2054\n20)\n Index Cond: (con_addressid = entity.id)\n -> Index Scan using con_scf_pkey on con_scf\n (cost=0.00..0.28 rows=1 width=79) (actual time=0.003..0.004 rows=1\nloops=205418)\n Index Cond: (con_id = entity.id)\n -> Hash (cost=0.34..0.34 rows=4 width=25) (actual\ntime=0.016..0.016 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using groups_pkey on groups\n (cost=0.00..0.34 rows=4 width=25) (actual time=0.008..0.012 rows=4 loops=1)\n -> Hash (cost=9.00..9.00 rows=380 width=39) (actual\ntime=0.746..0.746 rows=380 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 28kB\n -> Index Scan using users_pkey on users (cost=0.00..9.00\nrows=380 width=39) (actual time=0.014..0.440 rows=380 loops=1)\n -> Index Scan using con_details_pkey on con_details (cost=0.00..0.29\nrows=1 width=85) (actual time=0.004..0.004 rows=1 loops=205418)\n Index Cond: (con_id = entity.id)\n Filter: ((email)::text ~~* '%@%'::text)\n Total runtime: 8573.237 ms\n\nWhy does the number of rows are different in actual and estimated?The default_statistics_target is set to 100.explain analyzeselect *FROM ( \nSELECT entity.id AS \"con_s_id\", entity.setype AS \"con_s_setype\" ,  con_details.salutation AS \"con_s_salutationtype\", con_details.firstname AS \"con_s_firstname\", \n con_details.phone AS \"con_s_phone\", con_details.lastname AS \"con_s_lastname\",  con_details.accountid AS \"con_s_account_id_entityid\", con_details.mobile AS \"con_s_mobile\", \n con_details.title AS \"con_s_title\", con_details.donotcall AS \"con_s_donotcall\",  con_details.fax AS \"con_s_fax\", con_details.department AS \"con_s_department\", \n con_details.email AS \"con_s_email\", con_details.yahooid AS \"con_s_yahooid\",  con_details.emailoptout AS \"con_s_emailoptout\", con_details.reportsto AS \"con_s_con__id_entityid\", \n con_details.reference AS \"con_s_reference\", entity.smownerid AS \"con_s_assigned_user_id_entityid\",  CASE WHEN entity.owner_type='U' THEN users.user_name ELSE groups.groupname END AS \"con_s_assigned_user_id_name\", \n CASE WHEN entity.owner_type='U' THEN users.first_name || ' ' || users.last_name ELSE groups.groupname END AS \"con_s_assigned_user_id\", \n CASE WHEN entity.owner_type='U' THEN 'Users' ELSE 'Groups' END AS \"con_s_assigned_user_id_linkmodule\",  entity.modifiedtime AS \"con_s_modifiedtime\", con_details.notify_owner AS \"con_s_notify_owner\", \n entity.createdtime AS \"con_s_createdtime\", entity.description AS \"con_s_description\",  con_details.imagename AS \"con_s_imagename\" \nFROM con_details INNER JOIN entity ON con_details.con_id=entity.id LEFT JOIN groups ON groups.groupid = entity.smownerid LEFT join users ON entity.smownerid= users.id \nWHERE entity.setype='con_s' AND entity.deleted=0 AND (((con_details.email ILIKE '%@%'))) ) con_base INNER JOIN con_scf ON con_s_base.\"con_s_id\"=con_scf.con_id \nINNER JOIN con_subdetails ON con_s_base.\"con_s_id\"=con_subdetails.con_subscriptionid INNER JOIN 
customerdetails ON con_s_base.\"con_s_id\"=customerdetails.customerid INNER JOIN con_address ON con_s_base.\"con_s_id\"=con_address.con_addressid \nNested Loop  (cost=18560.97..26864.83 rows=24871 width=535) (actual time=1335.157..8492.414 rows=157953 loops=1)   ->  Hash Left Join  (cost=18560.97..26518.91 rows=116 width=454) (actual time=1335.117..6996.585 rows=205418 loops=1)\n         Hash Cond: (entity.smownerid = users.id)         ->  Hash Left Join  (cost=18547.22..26503.57 rows=116 width=419) (actual time=1334.354..6671.442 rows=205418 loops=1)\n               Hash Cond: (entity.smownerid = groups.groupid)               ->  Nested Loop  (cost=18546.83..26502.72 rows=116 width=398) (actual time=1334.314..6385.664 rows=205418 loops=1)                     ->  Nested Loop  (cost=18546.83..26273.40 rows=774 width=319) (actual time=1334.272..5025.175 rows=205418 loops=1)\n                           ->  Hash Join  (cost=18546.83..24775.02 rows=5213 width=273) (actual time=1334.238..3666.748 rows=205420 loops=1)                                 Hash Cond: (con_subdetails.con_subscriptionid = entity.id)\n                                 ->  Index Scan using con_subdetails_pkey on con_subdetails  (cost=0.00..4953.41 rows=326040 width=29) (actual time=0.019..350.736 rows=327328 loops=1)                                 ->  Hash  (cost=18115.71..18115.71 rows=34489 width=244) (actual time=1334.147..1334.147 rows=205420 loops=1)\n                                       Buckets: 4096  Batches: 1  Memory Usage: 19417kB                                       ->  Hash Join  (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)\n                                             Hash Cond: (customerdetails.customerid = entity.id)                                             ->  Seq Scan on customerdetails  (cost=0.00..4752.46 rows=327146 width=13) (actual time=0.021..176.389 rows=327328 loops=1)\n                                             ->  Hash  (cost=6495.65..6495.65 rows=227386 width=231) (actual time=417.839..417.839 rows=205420 loops=1)                                                   Buckets: 32768  Batches: 1  Memory Usage: 16056kB\n                                                   ->  Index Scan using entity_setype_idx on entity  (cost=0.00..6495.65 rows=227386 width=231) (actual time=0.033..253.880 rows=205420 loops=1)\n                                                         Index Cond: ((setype)::text = 'con_s'::text)                           ->  Index Scan using con_address_pkey on con_address  (cost=0.00..0.27 rows=1 width=46) (actual time=0.003..0.004 rows=1 loops=2054\n20)                                 Index Cond: (con_addressid = entity.id)                     ->  Index Scan using con_scf_pkey on con_scf  (cost=0.00..0.28 rows=1 width=79) (actual time=0.003..0.004 rows=1 loops=205418)\n                           Index Cond: (con_id = entity.id)               ->  Hash  (cost=0.34..0.34 rows=4 width=25) (actual time=0.016..0.016 rows=4 loops=1)                     Buckets: 1024  Batches: 1  Memory Usage: 1kB\n                     ->  Index Scan using groups_pkey on groups  (cost=0.00..0.34 rows=4 width=25) (actual time=0.008..0.012 rows=4 loops=1)         ->  Hash  (cost=9.00..9.00 rows=380 width=39) (actual time=0.746..0.746 rows=380 loops=1)\n               Buckets: 1024  Batches: 1  Memory Usage: 28kB               ->  Index Scan using users_pkey on users  (cost=0.00..9.00 rows=380 width=39) (actual time=0.014..0.440 rows=380 loops=1)\n   ->  
Index Scan using con_details_pkey on con_details  (cost=0.00..0.29 rows=1 width=85) (actual time=0.004..0.004 rows=1 loops=205418)         Index Cond: (con_id = entity.id)\n         Filter: ((email)::text ~~* '%@%'::text) Total runtime: 8573.237 ms", "msg_date": "Thu, 13 Dec 2012 17:12:22 -0500", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "Why does the number of rows are different in actual and estimated." }, { "msg_contents": "\nOn 12/13/2012 05:12 PM, AI Rumman wrote:\n> Why does the number of rows are different in actual and estimated?\n>\n\n\nIsn't that in the nature of estimates? An estimate is a heuristic guess \nat the number of rows it will find for the given query or part of a \nquery. It's not uncommon for estimates to be out by several orders of \nmagnitude. Guaranteeing estimates within bounded accuracy and in a given \nshort amount of time (you don't want your planning time to overwhelm \nyour execution time) isn't possible.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 17:36:26 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "On Dec 14, 2012, at 2:36 AM, Andrew Dunstan <[email protected]> wrote:\n\n> \n> On 12/13/2012 05:12 PM, AI Rumman wrote:\n>> Why does the number of rows are different in actual and estimated?\n>> \n> \n> \n> Isn't that in the nature of estimates? An estimate is a heuristic guess at the number of rows it will find for the given query or part of a query. It's not uncommon for estimates to be out by several orders of magnitude. Guaranteeing estimates within bounded accuracy and in a given short amount of time (you don't want your planning time to overwhelm your execution time) isn't possible.\n> \n\nThe main question i think is what to do with it.\n\nThe problem starts here\n\n -> Hash Join (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)\n Hash Cond: (customerdetails.customerid = entity.id)\n -> Seq Scan on customerdetails (cost=0.00..4752.46 rows=327146 width=13) (actual time=0.021..176.389 rows=327328 loops=1)\n -> Hash (cost=6495.65..6495.65 rows=227386 width=231) (actual time=417.839..417.839 rows=205420 loops=1)\n Buckets: 32768 Batches: 1 Memory Usage: 16056kB\n -> Index Scan using entity_setype_idx on entity (cost=0.00..6495.65 rows=227386 width=231) (actual time=0.033..2\n53.880 rows=205420 loops=1)\n Index Cond: ((setype)::text = 'con_s'::text)\n -> Index Scan using con_address_pkey on con_address (cost=0.00..0.27 rows=1 width=46) (actual time=0.003..0.004 rows=1 loops=2054\n20)\n\nAs you see access methods estimates are ok, it is join result set which is wrong.\n\nHow to deal with it?\n\nMay be a hack with CTE can help, but is there a way to improve statistics correlation?\n\n> cheers\n> \n> andrew\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nOn Dec 14, 2012, at 2:36 AM, Andrew Dunstan <[email protected]> wrote:On 12/13/2012 05:12 PM, AI Rumman wrote:Why does the number of rows are different in actual and estimated?Isn't that in the nature of estimates? 
An estimate is a heuristic guess at the number of rows it will find for the given query or part of a query. It's not uncommon for estimates to be out by several orders of magnitude. Guaranteeing estimates within bounded accuracy and in a given short amount of time (you don't want your planning time to overwhelm your execution time) isn't possible.The main question i think is what to do with it.The problem starts here  ->  Hash Join  (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)                                             Hash Cond: (customerdetails.customerid = entity.id)                                             ->  Seq Scan on customerdetails  (cost=0.00..4752.46 rows=327146 width=13) (actual time=0.021..176.389 rows=327328 loops=1)                                             ->  Hash  (cost=6495.65..6495.65 rows=227386 width=231) (actual time=417.839..417.839 rows=205420 loops=1)                                                   Buckets: 32768  Batches: 1  Memory Usage: 16056kB                                                   ->  Index Scan using entity_setype_idx on entity  (cost=0.00..6495.65 rows=227386 width=231) (actual time=0.033..253.880 rows=205420 loops=1)                                                         Index Cond: ((setype)::text = 'con_s'::text)                           ->  Index Scan using con_address_pkey on con_address  (cost=0.00..0.27 rows=1 width=46) (actual time=0.003..0.004 rows=1 loops=205420)As you see access methods estimates are ok, it is join result set which is wrong.How to deal with it?May be a hack with CTE can help, but is there a way to improve statistics correlation?cheersandrew-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 14 Dec 2012 02:40:35 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "On Thu, Dec 13, 2012 at 7:36 PM, Andrew Dunstan <[email protected]> wrote:\n> On 12/13/2012 05:12 PM, AI Rumman wrote:\n>>\n>> Why does the number of rows are different in actual and estimated?\n>>\n>\n>\n> Isn't that in the nature of estimates? An estimate is a heuristic guess at\n> the number of rows it will find for the given query or part of a query. 
It's\n> not uncommon for estimates to be out by several orders of magnitude.\n> Guaranteeing estimates within bounded accuracy and in a given short amount\n> of time (you don't want your planning time to overwhelm your execution time)\n> isn't possible.\n\nAlthough this kind of difference could be indeed a problem:\n> Nested Loop (cost=18560.97..26864.83 rows=24871 width=535) (actual time=1335.157..8492.414 rows=157953 loops=1)\n> -> Hash Left Join (cost=18560.97..26518.91 rows=116 width=454) (actual time=1335.117..6996.585 rows=205418 loops=1)\n\nIt usually is due to some unrecognized correlation between the joined tables.\n\nAnd it looks like it all may be starting to go south here:\n> -> Hash Join (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)\n> Hash Cond: (customerdetails.customerid = entity.id)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 19:42:05 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "\nOn 12/13/2012 05:42 PM, Claudio Freire wrote:\n> And it looks like it all may be starting to go south here:\n>> -> Hash Join (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)\n>> Hash Cond: (customerdetails.customerid = entity.id)\n\n\nWell, it looks like it's choosing a join order that's quite a bit \ndifferent from the way the query is expressed, so the OP might need to \nplay around with forcing the join order some.\n\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 18:09:42 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "\nOn Dec 14, 2012, at 3:09 AM, Andrew Dunstan <[email protected]> wrote:\n\n> \n> On 12/13/2012 05:42 PM, Claudio Freire wrote:\n>> And it looks like it all may be starting to go south here:\n>>> -> Hash Join (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)\n>>> Hash Cond: (customerdetails.customerid = entity.id)\n> \n> \n> Well, it looks like it's choosing a join order that's quite a bit different from the way the query is expressed, so the OP might need to play around with forcing the join order some.\n> \n> \n\nOP joins 8 tables, and i suppose join collapse limit is set to default 8. I thought postgresql's optimiser is not mysql's.\n\n> cheers\n> \n> andrew\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 03:13:29 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." 
}, { "msg_contents": "Evgeny Shishkin <[email protected]> writes:\n> On Dec 14, 2012, at 3:09 AM, Andrew Dunstan <[email protected]> wrote:\n>> Well, it looks like it's choosing a join order that's quite a bit different from the way the query is expressed, so the OP might need to play around with forcing the join order some.\n\n> OP joins 8 tables, and i suppose join collapse limit is set to default 8. I thought postgresql's optimiser is not mysql's.\n\nIt's not obvious to me that there's anything very wrong with the plan.\nAn 8-way join that produces 150K rows is unlikely to run in milliseconds\nno matter what the plan. The planner would possibly have done the last\njoin step differently if it had had a better rowcount estimate, but even\nif that were free the query would still have been 7 seconds (vs 8.5).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 18:36:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "\nOn Dec 14, 2012, at 3:36 AM, Tom Lane <[email protected]> wrote:\n\n> Evgeny Shishkin <[email protected]> writes:\n>> On Dec 14, 2012, at 3:09 AM, Andrew Dunstan <[email protected]> wrote:\n>>> Well, it looks like it's choosing a join order that's quite a bit different from the way the query is expressed, so the OP might need to play around with forcing the join order some.\n> \n>> OP joins 8 tables, and i suppose join collapse limit is set to default 8. I thought postgresql's optimiser is not mysql's.\n> \n> It's not obvious to me that there's anything very wrong with the plan.\n> An 8-way join that produces 150K rows is unlikely to run in milliseconds\n> no matter what the plan. The planner would possibly have done the last\n> join step differently if it had had a better rowcount estimate, but even\n> if that were free the query would still have been 7 seconds (vs 8.5).\n> \n\nMay be in this case it is. I once wrote to this list regarding similar problem - joining 4 tables, result set are off by 2257 times - 750ms vs less then 1ms. Unfortunately the question was not accepted to the list.\n\nI spoke to Bruce Momjian about that problem on one local conference, he said shit happens :) \n\n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 03:50:19 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "On Thu, Dec 13, 2012 at 8:50 PM, Evgeny Shishkin <[email protected]> wrote:\n>>> OP joins 8 tables, and i suppose join collapse limit is set to default 8. I thought postgresql's optimiser is not mysql's.\n>>\n>> It's not obvious to me that there's anything very wrong with the plan.\n>> An 8-way join that produces 150K rows is unlikely to run in milliseconds\n>> no matter what the plan. The planner would possibly have done the last\n>> join step differently if it had had a better rowcount estimate, but even\n>> if that were free the query would still have been 7 seconds (vs 8.5).\n>>\n>\n> May be in this case it is. 
I once wrote to this list regarding similar problem - joining 4 tables, result set are off by 2257 times - 750ms vs less then 1ms. Unfortunately the question was not accepted to the list.\n>\n> I spoke to Bruce Momjian about that problem on one local conference, he said shit happens :)\n\nI think it's more likely a missing FK constraint.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 21:38:53 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "Does FK Constraint help to improve performance? Or it is only\nfor maintaining data integrity?\n\nOn Thu, Dec 13, 2012 at 7:38 PM, Claudio Freire <[email protected]>wrote:\n\n> On Thu, Dec 13, 2012 at 8:50 PM, Evgeny Shishkin <[email protected]>\n> wrote:\n> >>> OP joins 8 tables, and i suppose join collapse limit is set to default\n> 8. I thought postgresql's optimiser is not mysql's.\n> >>\n> >> It's not obvious to me that there's anything very wrong with the plan.\n> >> An 8-way join that produces 150K rows is unlikely to run in milliseconds\n> >> no matter what the plan. The planner would possibly have done the last\n> >> join step differently if it had had a better rowcount estimate, but even\n> >> if that were free the query would still have been 7 seconds (vs 8.5).\n> >>\n> >\n> > May be in this case it is. I once wrote to this list regarding similar\n> problem - joining 4 tables, result set are off by 2257 times - 750ms vs\n> less then 1ms. Unfortunately the question was not accepted to the list.\n> >\n> > I spoke to Bruce Momjian about that problem on one local conference, he\n> said shit happens :)\n>\n> I think it's more likely a missing FK constraint.\n>\n\nDoes FK Constraint help to improve performance? Or it is only for maintaining data integrity?On Thu, Dec 13, 2012 at 7:38 PM, Claudio Freire <[email protected]> wrote:\nOn Thu, Dec 13, 2012 at 8:50 PM, Evgeny Shishkin <[email protected]> wrote:\n\n>>> OP joins 8 tables, and i suppose join collapse limit is set to default 8. I thought postgresql's optimiser is not mysql's.\n>>\n>> It's not obvious to me that there's anything very wrong with the plan.\n>> An 8-way join that produces 150K rows is unlikely to run in milliseconds\n>> no matter what the plan.  The planner would possibly have done the last\n>> join step differently if it had had a better rowcount estimate, but even\n>> if that were free the query would still have been 7 seconds (vs 8.5).\n>>\n>\n> May be in this case it is. I once wrote to this list regarding similar problem - joining 4 tables, result set are off by 2257 times - 750ms vs less then 1ms. Unfortunately the question was not accepted to the list.\n\n>\n> I spoke to Bruce Momjian about that problem on one local conference, he said shit happens :)\n\nI think it's more likely a missing FK constraint.", "msg_date": "Fri, 14 Dec 2012 14:01:50 -0500", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "On Fri, Dec 14, 2012 at 4:01 PM, AI Rumman <[email protected]> wrote:\n> Does FK Constraint help to improve performance? 
Or it is only for\n> maintaining data integrity?\n\nI'm not entirely sure it's taken into account, I think it is, but a FK\nwould tell the planner that every non-null value will produce a row.\nIt seems to think there are a large portion of non-null values that\ndon't.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 16:12:56 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." } ]
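Since the thread above traces the problem to the row estimate coming out of the join on entity, a concrete next step is to look at the statistics the planner holds for the join keys and raise the statistics target only where they are clearly off. A sketch using column names taken from the query earlier in the thread:

    -- What the planner believes about the join keys
    SELECT tablename, attname, n_distinct, null_frac
    FROM pg_stats
    WHERE (tablename, attname) IN (('entity', 'id'),
                                   ('entity', 'setype'),
                                   ('customerdetails', 'customerid'));

    -- If a per-column estimate is badly off, raising the target for that
    -- column alone is cheaper than raising default_statistics_target.
    ALTER TABLE customerdetails ALTER COLUMN customerid SET STATISTICS 500;
    ANALYZE customerdetails;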
[ { "msg_contents": "Huan Ruan wrote:\n\n> Interesting to see how you derived 100% cache hits. I assume by 'cache' you\n> mean the pg shared buffer plus the OS cache? Because the table is 23GB but\n> the shared buffer is only 6GB. Even then, I'm not completely convinced\n> because the total RAM is just 24GB, part of which will have to be used for\n> other data and indexes.\n\nWell, you can't have more than a few disk hits, which typically\ntake something like 10 ms each, and still get an average less than 200\nnanoseconds.\n\n> I read somewhere that a pg shared buffer that's too big can hurt the\n> performance and it's better just leave it to the OS cache. I'm not sure why\n> but for now, I just configured the shared buffer to be 1/4 of the total RAM.\n\nPostgreSQL goes through the OS and its filesystems, unlike some\nother products. The right balance of memory in the PostgreSQL\nshared buffers versus what is left for a combination of OS caching\nand other memory allocations can be hard to determine. 25% is a\ngood starting point, but your best performance might be higher or\nlower. It's a good idea to try incremental adjustments using your\nactual workload. Just remember you need to allow enough for several\nmaintenance_work_mem allocations, about one work_mem allocation per\nmax_connections setting, plus a reasonable OS cache size.\n\n> I was wondering on our production server where the effetive_cache_size will\n> be much bigger, will pg then guess that probably most data is cached anyway\n> therefore leaning towards nested loop join rather than a scan for hash join?\n\nOnce effective_cache_size is larger than your largest index, its\nexact value doesn't matter all that much.\n\n> Even on a test server where the cache hit rate is much smaller, for a big\n> table like this, under what circumstances, will a hash join perform better\n> than nested loop join though?\n\nWith a low cache hit rate, that would generally be when the number\nof lookups into the table exceeds about 10% of the table's rows.\n\n> Yes, I had bumped up work_mem yesterday to speed up another big group by\n> query. I used 80MB. I assumed this memory will only be used if the query\n> needs it and will be released as soon as it's finished, so it won't be too\n> much an issue as long as I don't have too many concurrently sorting queries\n> running (which is true in our production). Is this correct?\n\nEach connection running a query can allocate one work_mem\nallocation per plan node (depending on node type), which will be\nfreed after the query completes. A common \"rule of thumb\" is to\nplan on peaks of max_conncetions allocations of work_mem.\n\n> I increased maintenance_work_mem initially to speed up the index creation\n> when I first pump in the data. In production environment, we don't do run\n> time index creation, so I think only the vacuum and analyze will consume\n> this memory?\n\nYou'll probably be creating indexes from time to time. 
Figure an\noccasional one of those plus up to one allocation per autovacuum\nworker (and you probably shouldn't go below three of those).\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 13 Dec 2012 19:54:43 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": ">\n> With a low cache hit rate, that would generally be when the number\n> of lookups into the table exceeds about 10% of the table's rows.\n>\n>\n> So far, my main performance issue comes down to this pattern where\nPostgres chooses hash join that's slower than a nest loop indexed join. By\nchanging those cost parameters, this query works as expected now, but there\nare others fall into the same category and appear to be harder to convince\nthe optimiser.\n\nI'm still a bit worried about this query as Postgres gets the record count\nright, and knows the index is a primary key index, therefore it knows it's\n0.05m out of 170m records (0.03%) but still chooses the sequential scan.\nHopefully this is just related to that big index penalty bug introduced in\n9.2.\n\n\n\nWith a low cache hit rate, that would generally be when the number\nof lookups into the table exceeds about 10% of the table's rows.\n\n\nSo far, my main performance issue comes down to this pattern where Postgres chooses hash join that's slower than a nest loop indexed join. By changing those cost parameters, this query works as expected now, but there are others fall into the same category and appear to be harder to convince the optimiser.\nI'm still a bit worried about this query as Postgres gets the record count right, and knows the index is a primary key index, therefore it knows it's 0.05m out of 170m records (0.03%) but still chooses the sequential scan. Hopefully this is just related to that big index penalty bug introduced in 9.2.", "msg_date": "Fri, 14 Dec 2012 15:46:44 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
[ { "msg_contents": "Huan Ruan wrote:\n> Kevin Grittner wrote:\n\n>> With a low cache hit rate, that would generally be when the number\n>> of lookups into the table exceeds about 10% of the table's rows.\n>\n> So far, my main performance issue comes down to this pattern where\n> Postgres chooses hash join that's slower than a nest loop indexed join. By\n> changing those cost parameters, this query works as expected now, but there\n> are others fall into the same category and appear to be harder to convince\n> the optimiser.\n> \n> I'm still a bit worried about this query as Postgres gets the record count\n> right, and knows the index is a primary key index, therefore it knows it's\n> 0.05m out of 170m records (0.03%) but still chooses the sequential scan.\n> Hopefully this is just related to that big index penalty bug introduced in\n> 9.2.\n\nQuite possibly, but it could be any of a number of other things,\nlike a type mismatch. It might be best to rule out other causes. If\nyou post the new query and EXPLAIN ANALYZE output, along with the\nsettings you have now adopted, someone may be able to spot\nsomething. It wouldn't hurt to repeat OS and hardware info with it\nso people have it handy for reference.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 08:48:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "> Quite possibly, but it could be any of a number of other things,\n> like a type mismatch. It might be best to rule out other causes. If\n> you post the new query and EXPLAIN ANALYZE output, along with the\n> settings you have now adopted, someone may be able to spot\n> something. It wouldn't hurt to repeat OS and hardware info with it\n> so people have it handy for reference.\n>\n>\nSorry for the late reply. To summarise,\n\nThe version is PostgreSQL 9.2.0 on x86_64-unknown-linux-gnu, compiled by\ngcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit. Server specs are:\n\n - Centos, ext4\n - 24GB memory\n - 6 cores hyper-threaded (Intel(R) Xeon(R) CPU E5645).\n - raid 10 on 4 sata disks\n\nConfig changes are\n\n\n - shared_buffers = 6GB\n - work_mem = 80MB\n - maintenance_work_mem = 3GB\n - effective_cache_size = 22GB\n - seq_page_cost = 0.1\n - random_page_cost = 0.1\n - cpu_tuple_cost = 0.05\n - geqo = off\n\nThe query is,\n\nexplain (analyze, buffers)\nSELECT\n *\nFROM IM_Match_Table smalltable\n inner join invtran bigtable on bigtable.invtranref = smalltable.invtranref\n\nThe result is,\n\n\"QUERY PLAN\"\n\"Nested Loop (cost=0.00..341698.92 rows=48261 width=171) (actual\ntime=0.042..567.980 rows=48257 loops=1)\"\n\" Buffers: shared hit=242267\"\n\" -> Seq Scan on im_match_table smalltable (cost=0.00..2472.65\nrows=48261 width=63) (actual time=0.006..8.230 rows=48261 loops=1)\"\n\" Buffers: shared hit=596\"\n\" -> Index Scan using pk_invtran on invtran bigtable (cost=0.00..6.98\nrows=1 width=108) (actual time=0.010..0.011 rows=1 loops=48261)\"\n\" Index Cond: (invtranref = smalltable.invtranref)\"\n\" Buffers: shared hit=241671\"\n\"Total runtime: 571.662 ms\"\n\nQuite possibly, but it could be any of a number of other things,\n\nlike a type mismatch. It might be best to rule out other causes. 
If\nyou post the new query and EXPLAIN ANALYZE output, along with the\nsettings you have now adopted, someone may be able to spot\nsomething. It wouldn't hurt to repeat OS and hardware info with it\nso people have it handy for reference.Sorry for the late reply. To summarise,The version is PostgreSQL 9.2.0 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit. Server specs are:\n\nCentos, ext4\n24GB memory \n6 cores hyper-threaded (Intel(R) Xeon(R) CPU E5645).\nraid 10 on 4 sata disks\nConfig changes are\nshared_buffers = 6GB\nwork_mem = 80MB\nmaintenance_work_mem = 3GB\neffective_cache_size = 22GB\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\ncpu_tuple_cost = 0.05\ngeqo = off\nThe query is,\nexplain (analyze, buffers)\nSELECT\n  *\nFROM IM_Match_Table smalltable\n  inner join invtran bigtable on bigtable.invtranref = smalltable.invtranref\n\nThe result is,\n\"QUERY PLAN\"\n\"Nested Loop  (cost=0.00..341698.92 rows=48261 width=171) (actual time=0.042..567.980 rows=48257 loops=1)\"\n\"  Buffers: shared hit=242267\"\n\"  ->  Seq Scan on im_match_table smalltable  \n(cost=0.00..2472.65 rows=48261 width=63) (actual time=0.006..8.230 \nrows=48261 loops=1)\"\n\"        Buffers: shared hit=596\"\n\"  ->  Index Scan using pk_invtran on invtran bigtable  \n(cost=0.00..6.98 rows=1 width=108) (actual time=0.010..0.011 rows=1 \nloops=48261)\"\n\"        Index Cond: (invtranref = smalltable.invtranref)\"\n\"        Buffers: shared hit=241671\"\n\"Total runtime: 571.662 ms\"", "msg_date": "Wed, 19 Dec 2012 12:55:04 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
[ { "msg_contents": "One of my clients has an odd problem. Every so often a backend will \nsuddenly become very slow. The odd thing is that once this has happened \nit remains slowed down, for all subsequent queries. Zone reclaim is off. \nThere is no IO or CPU spike, no checkpoint issues or stats timeouts, no \nother symptom that we can see. The problem was a lot worse that it is \nnow, but two steps have alleviated it mostly, but not completely: much \nless aggressive autovacuuming and reducing the maximum lifetime of \nbackends in the connection pooler to 30 minutes.\n\nIt's got us rather puzzled. Has anyone seen anything like this?\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 13:40:04 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "backend suddenly becomes slow, then remains slow" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> One of my clients has an odd problem. Every so often a backend will \n> suddenly become very slow. The odd thing is that once this has happened \n> it remains slowed down, for all subsequent queries. Zone reclaim is off. \n> There is no IO or CPU spike, no checkpoint issues or stats timeouts, no \n> other symptom that we can see. The problem was a lot worse that it is \n> now, but two steps have alleviated it mostly, but not completely: much \n> less aggressive autovacuuming and reducing the maximum lifetime of \n> backends in the connection pooler to 30 minutes.\n\n> It's got us rather puzzled. Has anyone seen anything like this?\n\nMaybe the kernel is auto-nice'ing the process once it's accumulated X\namount of CPU time?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 14:56:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend suddenly becomes slow, then remains slow" }, { "msg_contents": "On 12/14/2012 02:56 PM, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> One of my clients has an odd problem. Every so often a backend will\n>> suddenly become very slow. The odd thing is that once this has happened\n>> it remains slowed down, for all subsequent queries. Zone reclaim is off.\n>> There is no IO or CPU spike, no checkpoint issues or stats timeouts, no\n>> other symptom that we can see. The problem was a lot worse that it is\n>> now, but two steps have alleviated it mostly, but not completely: much\n>> less aggressive autovacuuming and reducing the maximum lifetime of\n>> backends in the connection pooler to 30 minutes.\n>> It's got us rather puzzled. Has anyone seen anything like this?\n> Maybe the kernel is auto-nice'ing the process once it's accumulated X\n> amount of CPU time?\n>\n> \t\t\t\n\n\nThat was my initial thought, but the client said not. 
We'll check again.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 15:16:19 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend suddenly becomes slow, then remains slow" }, { "msg_contents": "On Fri, Dec 14, 2012 at 10:40 AM, Andrew Dunstan <\[email protected]> wrote:\n> One of my clients has an odd problem. Every so often a backend will\nsuddenly\n> become very slow. The odd thing is that once this has happened it remains\n> slowed down, for all subsequent queries. Zone reclaim is off. There is no\nIO\n> or CPU spike, no checkpoint issues or stats timeouts, no other symptom\nthat\n> we can see.\n\nBy \"no spike\", do you mean that the system as a whole is not using an\nunusual amount of IO or CPU, or that this specific slow back-end is not\nusing an unusual amount?\n\nCould you strace is and see what it is doing?\n\n> The problem was a lot worse that it is now, but two steps have\n> alleviated it mostly, but not completely: much less aggressive\nautovacuuming\n> and reducing the maximum lifetime of backends in the connection pooler to\n30\n> minutes.\n\nDo you have a huge number of tables? Maybe over the course of a long-lived\nconnection, it touches enough tables to bloat the relcache / syscache. I\ndon't know how the autovac would be involved in that, though.\n\n\nCheers,\n\nJeff\n\nOn Fri, Dec 14, 2012 at 10:40 AM, Andrew Dunstan <[email protected]> wrote:\n> One of my clients has an odd problem. Every so often a backend will suddenly\n> become very slow. The odd thing is that once this has happened it remains\n> slowed down, for all subsequent queries. Zone reclaim is off. There is no IO\n> or CPU spike, no checkpoint issues or stats timeouts, no other symptom that\n> we can see.\n\nBy \"no spike\", do you mean that the system as a whole is not using an unusual amount of IO or CPU, or that this specific slow back-end is not using an unusual amount?\n\nCould you strace is and see what it is doing?\n\n> The problem was a lot worse that it is now, but two steps have\n> alleviated it mostly, but not completely: much less aggressive autovacuuming\n> and reducing the maximum lifetime of backends in the connection pooler to 30\n> minutes.Do you have a huge number of tables?  Maybe over the course of a long-lived connection, it touches enough tables to bloat the relcache / syscache.  I don't know how the autovac would be involved in that, though.\nCheers,Jeff", "msg_date": "Wed, 26 Dec 2012 23:03:33 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "backend suddenly becomes slow, then remains slow" }, { "msg_contents": "On 12/26/2012 11:03 PM, Jeff Janes wrote:\n> On Fri, Dec 14, 2012 at 10:40 AM, Andrew Dunstan \n> <[email protected]> wrote:\n> > One of my clients has an odd problem. Every so often a backend will \n> suddenly\n> > become very slow. The odd thing is that once this has happened it \n> remains\n> > slowed down, for all subsequent queries. Zone reclaim is off. 
There \n> is no IO\n> > or CPU spike, no checkpoint issues or stats timeouts, no other \n> symptom that\n> > we can see.\n>\n> By \"no spike\", do you mean that the system as a whole is not using an \n> unusual amount of IO or CPU, or that this specific slow back-end is \n> not using an unusual amount?\n\n\nboth, really.\n\n>\n> Could you strace is and see what it is doing?\n\n\nNot very easily, because it's a pool connection and we've lowered the \npool session lifetime as part of the amelioration :-) So it's not \nhappening very much any more.\n\n>\n> > The problem was a lot worse that it is now, but two steps have\n> > alleviated it mostly, but not completely: much less aggressive \n> autovacuuming\n> > and reducing the maximum lifetime of backends in the connection \n> pooler to 30\n> > minutes.\n>\n> Do you have a huge number of tables? Maybe over the course of a \n> long-lived connection, it touches enough tables to bloat the relcache \n> / syscache. I don't know how the autovac would be involved in that, \n> though.\n>\n>\n\nYes, we do indeed have a huge number of tables. This seems a plausible \nthesis.\n\ncheers\n\nandrew\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 12:43:31 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backend suddenly becomes slow, then remains slow" }, { "msg_contents": "On Thursday, December 27, 2012, Andrew Dunstan wrote:\n\n> On 12/26/2012 11:03 PM, Jeff Janes wrote:\n>\n>>\n>> Do you have a huge number of tables? Maybe over the course of a\n>> long-lived connection, it touches enough tables to bloat the relcache /\n>> syscache. I don't know how the autovac would be involved in that, though.\n>>\n>>\n>>\n> Yes, we do indeed have a huge number of tables. This seems a plausible\n> thesis.\n>\n\nAll of the syscache things have compiled hard-coded numbers of buckets, at\nmost 2048, and once those are exceeded the resulting collision resolution\nbecomes essentially linear. It is not hard to exceed 2048 tables by a\nsubstantial multiple, and even less hard to exceed 2048 columns (summed\nover all tables).\n\nI don't know why syscache doesn't use dynahash; whether it is older than\ndynahash is and was never converted out of inertia, or if there are extra\nfeatures that don't fit the dynahash API. If the former, then converting\nthem to use dynahash should give automatic resizing for free. Maybe that\nconversion should be a To Do item?\n\n\n\nCheers,\n\nJeff\n\nOn Thursday, December 27, 2012, Andrew Dunstan wrote:On 12/26/2012 11:03 PM, Jeff Janes wrote:\n\n\nDo you have a huge number of tables?  Maybe over the course of a long-lived connection, it touches enough tables to bloat the relcache / syscache.  I don't know how the autovac would be involved in that, though.\n\n\n\n\nYes, we do indeed have a huge number of tables. This seems a plausible thesis.All of the syscache things have compiled hard-coded numbers of buckets, at most 2048, and once those are exceeded the resulting collision resolution becomes essentially linear.  It is not hard to exceed 2048 tables by a substantial multiple, and even less hard to exceed 2048 columns (summed over all tables).\nI don't know why syscache doesn't use dynahash; whether it is older than dynahash is and was never converted out of inertia, or if there are extra features that don't fit the dynahash API.  
If the former, then converting them to use dynahash should give automatic resizing for free.  Maybe that conversion should be a To Do item?\nCheers,Jeff", "msg_date": "Thu, 27 Dec 2012 19:11:23 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend suddenly becomes slow, then remains slow" } ]
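Jeff's relcache/syscache theory is easy to gauge roughly: the hard-coded bucket counts he mentions are per cache, so counting how many relations and columns a long-lived backend could end up touching gives a feel for how long the collision chains might get. A sketch:

    -- How much catalog could a long-lived connection end up caching?
    SELECT (SELECT count(*) FROM pg_class     WHERE relkind IN ('r', 'i')) AS tables_and_indexes,
           (SELECT count(*) FROM pg_attribute WHERE attnum > 0)            AS user_columns;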
[ { "msg_contents": "AI Rumman wrote:\n> Claudio Freire <[email protected]>wrote:\n>> I think it's more likely a missing FK constraint.\n\n> Does FK Constraint help to improve performance? Or it is only\n> for maintaining data integrity?\n\nI'm not aware of any situation where adding a foreign key\nconstraint would improve performance.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 14:14:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> AI Rumman wrote:\n>> Does FK Constraint help to improve performance? Or it is only\n>> for maintaining data integrity?\n\n> I'm not aware of any situation where adding a foreign key\n> constraint would improve performance.\n\nThere's been talk of teaching the planner to use the existence of FK\nconstraints to improve plans, but I don't believe any such thing is\nin the code today.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 14:22:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "So I am going to change\njoin_collapse_limit\nand\nfrom_collapse_limit\nto 20.\n\nDo I need to set geqo_threshold to greater than 20. Now it is 12 ( default).\n\nAnd could you let me know why geqo_optimizer is not working good in this\ncase?\n\n\n\nOn Fri, Dec 14, 2012 at 2:22 PM, Tom Lane <[email protected]> wrote:\n\n> \"Kevin Grittner\" <[email protected]> writes:\n> > AI Rumman wrote:\n> >> Does FK Constraint help to improve performance? Or it is only\n> >> for maintaining data integrity?\n>\n> > I'm not aware of any situation where adding a foreign key\n> > constraint would improve performance.\n>\n> There's been talk of teaching the planner to use the existence of FK\n> constraints to improve plans, but I don't believe any such thing is\n> in the code today.\n>\n> regards, tom lane\n>\n\nSo I am going to change join_collapse_limit and from_collapse_limit to 20.Do I need to set geqo_threshold to greater than 20. Now it is 12 ( default).\nAnd could you let me know why geqo_optimizer is not working good in this case?On Fri, Dec 14, 2012 at 2:22 PM, Tom Lane <[email protected]> wrote:\n\"Kevin Grittner\" <[email protected]> writes:\n> AI Rumman wrote:\n>> Does FK Constraint help to improve performance? Or it is only\n>> for maintaining data integrity?\n\n> I'm not aware of any situation where adding a foreign key\n> constraint would improve performance.\n\nThere's been talk of teaching the planner to use the existence of FK\nconstraints to improve plans, but I don't believe any such thing is\nin the code today.\n\n                        regards, tom lane", "msg_date": "Fri, 14 Dec 2012 14:28:50 -0500", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." 
}, { "msg_contents": "On Fri, Dec 14, 2012 at 4:22 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> AI Rumman wrote:\n>>> Does FK Constraint help to improve performance? Or it is only\n>>> for maintaining data integrity?\n>\n>> I'm not aware of any situation where adding a foreign key\n>> constraint would improve performance.\n>\n> There's been talk of teaching the planner to use the existence of FK\n> constraints to improve plans, but I don't believe any such thing is\n> in the code today.\n\nThat made me look the code.\n\nSo, eqjoinsel_inner in selfuncs.c would need those smarts. Cool.\n\nAnyway, reading the code, I think I can now spot the possible issue\nbehind all of this.\n\nSelectivity is decided based on the number of distinct values on both\nsides, and the table's name \"entity\" makes me think it's a table that\nis reused for several things. That could be a problem, since that\ninflates distinct values, feeding misinformation to the planner.\n\nRather than a generic \"entity\" table, perhaps it would be best to\nseparate them different entities into different tables. Failing that,\nmaybe if you have an \"entity type\" kind of column, you could try\nrefining the join condition to filter by that kind, hopefully there's\nan index over entity kind and the planner can use more accurate MCV\ndata.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 17:10:18 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "Yes, I do have a column in entity table like\nsetype where the values are 'Contacts', 'Candidate' etc.\nI have an index on it also.\nAre you suggesting to make different table for Contacts, Candidate etc.\n\nOn Fri, Dec 14, 2012 at 3:10 PM, Claudio Freire <[email protected]>wrote:\n\n> On Fri, Dec 14, 2012 at 4:22 PM, Tom Lane <[email protected]> wrote:\n> > \"Kevin Grittner\" <[email protected]> writes:\n> >> AI Rumman wrote:\n> >>> Does FK Constraint help to improve performance? Or it is only\n> >>> for maintaining data integrity?\n> >\n> >> I'm not aware of any situation where adding a foreign key\n> >> constraint would improve performance.\n> >\n> > There's been talk of teaching the planner to use the existence of FK\n> > constraints to improve plans, but I don't believe any such thing is\n> > in the code today.\n>\n> That made me look the code.\n>\n> So, eqjoinsel_inner in selfuncs.c would need those smarts. Cool.\n>\n> Anyway, reading the code, I think I can now spot the possible issue\n> behind all of this.\n>\n> Selectivity is decided based on the number of distinct values on both\n> sides, and the table's name \"entity\" makes me think it's a table that\n> is reused for several things. That could be a problem, since that\n> inflates distinct values, feeding misinformation to the planner.\n>\n> Rather than a generic \"entity\" table, perhaps it would be best to\n> separate them different entities into different tables. 
Failing that,\n> maybe if you have an \"entity type\" kind of column, you could try\n> refining the join condition to filter by that kind, hopefully there's\n> an index over entity kind and the planner can use more accurate MCV\n> data.\n>\n\nYes, I do have a column in entity table likesetype where the values are 'Contacts', 'Candidate' etc.I have an index on it also.Are you suggesting to make different table for Contacts, Candidate etc.\nOn Fri, Dec 14, 2012 at 3:10 PM, Claudio Freire <[email protected]> wrote:\nOn Fri, Dec 14, 2012 at 4:22 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> AI Rumman wrote:\n>>> Does FK Constraint help to improve performance? Or it is only\n>>> for maintaining data integrity?\n>\n>> I'm not aware of any situation where adding a foreign key\n>> constraint would improve performance.\n>\n> There's been talk of teaching the planner to use the existence of FK\n> constraints to improve plans, but I don't believe any such thing is\n> in the code today.\n\nThat made me look the code.\n\nSo, eqjoinsel_inner in selfuncs.c would need those smarts. Cool.\n\nAnyway, reading the code, I think I can now spot the possible issue\nbehind all of this.\n\nSelectivity is decided based on the number of distinct values on both\nsides, and the table's name \"entity\" makes me think it's a table that\nis reused for several things. That could be a problem, since that\ninflates distinct values, feeding misinformation to the planner.\n\nRather than a generic \"entity\" table, perhaps it would be best to\nseparate them different entities into different tables. Failing that,\nmaybe if you have an \"entity type\" kind of column, you could try\nrefining the join condition to filter by that kind, hopefully there's\nan index over entity kind and the planner can use more accurate MCV\ndata.", "msg_date": "Fri, 14 Dec 2012 15:25:42 -0500", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "On Fri, Dec 14, 2012 at 5:25 PM, AI Rumman <[email protected]> wrote:\n> Are you suggesting to make different table for Contacts, Candidate etc.\n\nYes\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 17:32:07 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." } ]
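Short of splitting entity into per-type tables, one incremental way to act on the "filter by that kind" suggestion is a partial index covering only the rows this workload can match; whether it actually changes the plan is something to verify with EXPLAIN, not a given (the index name below is made up):

    -- Partial index restricted to the rows the query can actually return
    CREATE INDEX entity_con_s_live_idx ON entity (id)
    WHERE setype = 'con_s' AND deleted = 0;
    ANALYZE entity;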
[ { "msg_contents": "Claudio Freire wrote:\n\n> Selectivity is decided based on the number of distinct values on\n> both sides, and the table's name \"entity\" makes me think it's a\n> table that is reused for several things. That could be a problem,\n> since that inflates distinct values, feeding misinformation to\n> the planner.\n> \n> Rather than a generic \"entity\" table, perhaps it would be best to\n> separate them different entities into different tables.\n\nI missed that; good catch. Good advice.\n\nDon't try to build a \"database within a database\" by having one\ntable for different types of data, with a code to sort them out.\nEAV is a seriously bad approach for every situation where I've seen\nsomeone try to use it. I was about to say it's like trying to drive\na nail with a pipe wrench, then realized it's more like putting a\nbunch of hammers in a bag and swinging the bag at the nail.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 15:34:03 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." }, { "msg_contents": "On Fri, Dec 14, 2012 at 3:34 PM, Kevin Grittner <[email protected]> wrote:\n\n> Claudio Freire wrote:\n>\n> > Selectivity is decided based on the number of distinct values on\n> > both sides, and the table's name \"entity\" makes me think it's a\n> > table that is reused for several things. That could be a problem,\n> > since that inflates distinct values, feeding misinformation to\n> > the planner.\n> >\n> > Rather than a generic \"entity\" table, perhaps it would be best to\n> > separate them different entities into different tables.\n>\n> I missed that; good catch. Good advice.\n>\n> Don't try to build a \"database within a database\" by having one\n> table for different types of data, with a code to sort them out.\n> EAV is a seriously bad approach for every situation where I've seen\n> someone try to use it. I was about to say it's like trying to drive\n> a nail with a pipe wrench, then realized it's more like putting a\n> bunch of hammers in a bag and swinging the bag at the nail.\n>\n> -Kevin\n>\n\nThe ENTITY table has 2164493 rows with data as follows:\n\n type | count\n-----------------------+--------\n Contacts | 327352\n Candidate | 34668\n Emailst | 33604\n Calendar | 493956\n Contacts Image | 7\n PriceBooks | 2\n Notes Attachment | 17\n SalesOrder | 6\n Acc | 306832\n...\n..\n(29 rows)\n\nDo you think partitioning will improve the overall performance of the\napplication where all the queries have join with this table?\n\nOn Fri, Dec 14, 2012 at 3:34 PM, Kevin Grittner <[email protected]> wrote:\nClaudio Freire wrote:\n\n> Selectivity is decided based on the number of distinct values on\n> both sides, and the table's name \"entity\" makes me think it's a\n> table that is reused for several things. That could be a problem,\n> since that inflates distinct values, feeding misinformation to\n> the planner.\n>\n> Rather than a generic \"entity\" table, perhaps it would be best to\n> separate them different entities into different tables.\n\nI missed that; good catch. 
Good advice.\n\nDon't try to build a \"database within a database\" by having one\ntable for different types of data, with a code to sort them out.\nEAV is a seriously bad approach for every situation where I've seen\nsomeone try to use it. I was about to say it's like trying to drive\na nail with a pipe wrench, then realized it's more like putting a\nbunch of hammers in a bag and swinging the bag at the nail.\n\n-Kevin\nThe ENTITY table has 2164493 rows with data as follows:        type         | count  -----------------------+-------- Contacts              | 327352\n Candidate            |  34668 Emailst     |  33604 Calendar              | 493956 Contacts Image        |      7 PriceBooks            |      2 Notes Attachment      |     17\n SalesOrder            |      6 Acc              | 306832.....(29 rows)Do you think partitioning will improve the overall performance of the application where all the queries have join with this table?", "msg_date": "Fri, 14 Dec 2012 17:12:39 -0500", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." } ]
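For reference, a per-type breakdown like the one above, together with what the planner currently believes about the shared table's columns, can be pulled with two quick queries (the column name setype is taken from earlier in the thread):

    SELECT setype, count(*) AS row_count
    FROM entity
    GROUP BY setype
    ORDER BY row_count DESC;

    SELECT attname, n_distinct, null_frac
    FROM pg_stats
    WHERE tablename = 'entity';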
[ { "msg_contents": "AI Rumman wrote:\n\n> The ENTITY table has 2164493 rows with data as follows:\n> \n>  type | count\n> -----------------------+--------\n>  Contacts | 327352\n>  Candidate | 34668\n>  Emailst | 33604\n>  Calendar | 493956\n>  Contacts Image | 7\n>  PriceBooks | 2\n>  Notes Attachment | 17\n>  SalesOrder | 6\n>  Acc | 306832\n> ...\n> ..\n> (29 rows)\n> \n> Do you think partitioning will improve the overall performance of\n> the application where all the queries have join with this table?\n\nI would not consider putting contacts, calendars, and sales orders\nin separate tables as \"partitioning\". It is normalizing. That will\nbe useful if you happen to discover, for instance, that the data\nelements needed or relationships to other types of data for a\ncalendar don't exactly match those for a contact image or a sales\norder.\n\nAnd yes, I would expect that using separate tables for\nfundamentally different types of data would improve performance. If\nsome of these objects (like contacts and candidates) have common\nelements, you might want to either have both inherit from a common\nPerson table, or (usually better, IMO) have both reference rows in\na Person table. The latter is especially important if a contact can\nbe a candidate and you want to be able to associate them.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Dec 2012 17:47:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the number of rows are different in actual and\n estimated." } ]
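A small sketch of the second design Kevin describes, with separate tables referencing a common person table; every name and column here is illustrative rather than taken from the original schema:

    CREATE TABLE person (
        id        bigserial PRIMARY KEY,
        firstname text,
        lastname  text
    );

    CREATE TABLE contact (
        id        bigserial PRIMARY KEY,
        person_id bigint NOT NULL REFERENCES person(id),
        email     text
    );

    CREATE TABLE candidate (
        id         bigserial PRIMARY KEY,
        person_id  bigint NOT NULL REFERENCES person(id),
        applied_on date
    );

    -- A contact who is also a candidate simply has a row in both tables
    -- pointing at the same person_id.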
[ { "msg_contents": "There is a tool called pg Fouine . I am sure this will help you..\n\nhttp://pgfouine.projects.pgfoundry.org/tutorial.html\n\nRgrds\nSuhas\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/How-can-i-find-out-top-high-load-sql-queries-in-PostgreSQL-tp5736854p5736865.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Dec 2012 01:21:43 -0800 (PST)", "msg_from": "\"suhas.basavaraj12\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can i find out top high load sql queries in PostgreSQL." }, { "msg_contents": "\nOn Dec 17, 2012, at 3:21 AM, suhas.basavaraj12 <[email protected]> wrote:\n\n> There is a tool called pg Fouine . I am sure this will help you..\n> \n> http://pgfouine.projects.pgfoundry.org/tutorial.html\n\n+1\n\nYou can also use pgbadger, which seemed more flexible than pgFouine.\nhttp://dalibo.github.com/pgbadger/\n\nThanks & Regards,\nVibhor Kumar\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\nBlog:http://vibhork.blogspot.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Dec 2012 10:33:40 -0600", "msg_from": "Vibhor Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can i find out top high load sql queries in PostgreSQL." }, { "msg_contents": "> -----Original Message-----\n> From: Vibhor Kumar [mailto:[email protected]]\n> Sent: Monday, December 17, 2012 11:34 AM\n> To: suhas.basavaraj12\n> Cc: [email protected]\n> Subject: Re: How can i find out top high load sql queries in\n> PostgreSQL.\n> \n> \n> On Dec 17, 2012, at 3:21 AM, suhas.basavaraj12 <[email protected]>\n> wrote:\n> \n> > There is a tool called pg Fouine . I am sure this will help you..\n> >\n> > http://pgfouine.projects.pgfoundry.org/tutorial.html\n> \n> +1\n> \n> You can also use pgbadger, which seemed more flexible than pgFouine.\n> http://dalibo.github.com/pgbadger/\n> \n> Thanks & Regards,\n> Vibhor Kumar\n> EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n> Blog:http://vibhork.blogspot.com\n> \n\nPg_stat_statements extension tracks SQL statements execution statistics.\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 19:13:42 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can i find out top high load sql queries in PostgreSQL." } ]
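For completeness, a minimal sketch of the pg_stat_statements route Igor mentions (shipped as a contrib extension in 9.x); the log-based tools above instead need log_min_duration_statement set low enough to capture the statements of interest:

    -- postgresql.conf (requires a server restart):
    --   shared_preload_libraries = 'pg_stat_statements'

    CREATE EXTENSION pg_stat_statements;

    -- Top statements by total execution time:
    SELECT calls, total_time, rows, query
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;

    -- Start a fresh measurement window:
    SELECT pg_stat_statements_reset();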
[ { "msg_contents": "Ghislain ROUVIGNAC wrote:\n\n> Memory : In use 4 Go, Free 15Go, cache 5 Go.\n\nIf the active portion of your database is actually small enough\nthat it fits in the OS cache, I recommend:\n\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\ncpu_tuple_cost = 0.05\n\n> I plan to increase various parameters as follow:\n> shared_buffers = 512MB\n> temp_buffers = 16MB\n> work_mem = 32MB\n> wal_buffers = 16MB\n> checkpoint_segments = 32\n> effective_cache_size = 2560MB\n> default_statistics_target = 500\n> autovacuum_vacuum_scale_factor = 0.05\n> autovacuum_analyze_scale_factor = 0.025\n\nYou could probably go a little higher on work_mem and\neffective_cache_size. I would leave default_statistics_target alone\nunless you see a lot of estimates which are off by more than an\norder of magnitude. Even then, it is often better to set a higher\nvalue for a few individual columns than for everything. Remember\nthat this setting has no effect until you reload the configuration\nand then VACUUM.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Dec 2012 15:09:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow queries after vacuum analyze" }, { "msg_contents": "Hello Kevin,\n\n\nI solved the issue.\nI reproduced it immediatly after installing PostgreSQL 8.4.1.\nI thougth they were using PostgreSQL 8.4.8 but was never able to reproduce\nit with that version.\nSo something was changed related to my problem, but i didn't see explicitly\nwhat in the change notes.\nNevermind.\n\nYou wrote:\n\n> I would leave default_statistics_target alone unless you see a lot of\n> estimates which are off by more than an order of magnitude. Even then, it\n> is often better to set a higher value for a few individual columns than for\n> everything.\n\n\nWe had an issue with a customer where we had to increase the statistics\nparameter for a primary key.\nSo I'd like to know if there is a way to identify for which column we have\nto change the statistics.\n\n\n*Ghislain ROUVIGNAC*\n\n\n2012/12/18 Kevin Grittner <[email protected]>\n\n> Ghislain ROUVIGNAC wrote:\n>\n> > Memory : In use 4 Go, Free 15Go, cache 5 Go.\n>\n> If the active portion of your database is actually small enough\n> that it fits in the OS cache, I recommend:\n>\n> seq_page_cost = 0.1\n> random_page_cost = 0.1\n> cpu_tuple_cost = 0.05\n>\n> > I plan to increase various parameters as follow:\n> > shared_buffers = 512MB\n> > temp_buffers = 16MB\n> > work_mem = 32MB\n> > wal_buffers = 16MB\n> > checkpoint_segments = 32\n> > effective_cache_size = 2560MB\n> > default_statistics_target = 500\n> > autovacuum_vacuum_scale_factor = 0.05\n> > autovacuum_analyze_scale_factor = 0.025\n>\n> You could probably go a little higher on work_mem and\n> effective_cache_size. I would leave default_statistics_target alone\n> unless you see a lot of estimates which are off by more than an\n> order of magnitude. Even then, it is often better to set a higher\n> value for a few individual columns than for everything. 
Remember\n> that this setting has no effect until you reload the configuration\n> and then VACUUM.\n>\n> -Kevin\n>\n\nHello Kevin,I solved the issue.I reproduced it immediatly after installing PostgreSQL 8.4.1.I thougth they were using PostgreSQL 8.4.8 but was never able to reproduce it with that version.\nSo something was changed related to my problem, but i didn't see explicitly what in the change notes.Nevermind.You wrote:\nI would leave default_statistics_target alone unless you see a lot of estimates which are off by more than an order of magnitude. Even then, it is often better to set a higher value for a few individual columns than for everything.\nWe had an issue with a customer where we had to increase the statistics parameter for a primary key.So I'd like to know if there is a way to identify for which column we have to change the statistics.\nGhislain ROUVIGNAC\n2012/12/18 Kevin Grittner <[email protected]>\nGhislain ROUVIGNAC wrote:\n\n> Memory : In use 4 Go, Free 15Go, cache 5 Go.\n\nIf the active portion of your database is actually small enough\nthat it fits in the OS cache, I recommend:\n\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\ncpu_tuple_cost = 0.05\n\n> I plan to increase various parameters as follow:\n> shared_buffers = 512MB\n> temp_buffers = 16MB\n> work_mem = 32MB\n> wal_buffers = 16MB\n> checkpoint_segments = 32\n> effective_cache_size = 2560MB\n> default_statistics_target = 500\n> autovacuum_vacuum_scale_factor = 0.05\n> autovacuum_analyze_scale_factor = 0.025\n\nYou could probably go a little higher on work_mem and\neffective_cache_size. I would leave default_statistics_target alone\nunless you see a lot of estimates which are off by more than an\norder of magnitude. Even then, it is often better to set a higher\nvalue for a few individual columns than for everything. Remember\nthat this setting has no effect until you reload the configuration\nand then VACUUM.\n\n-Kevin", "msg_date": "Fri, 21 Dec 2012 11:48:58 +0100", "msg_from": "Ghislain ROUVIGNAC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow queries after vacuum analyze" } ]
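On finding which columns need a higher statistics target: the usual approach is to compare estimated and actual row counts in EXPLAIN ANALYZE output and raise the target only where the estimates are off by an order of magnitude or more. A sketch with placeholder names:

    -- Raise the target for one column instead of default_statistics_target
    -- ("orders" and "customer_id" are placeholders):
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
    ANALYZE orders;

    -- Re-check how close the estimate now is to the actual row count:
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;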
[ { "msg_contents": "\"Huan Ruan\" <[email protected]> wrote:\n\n> explain (analyze, buffers)\n> SELECT\n>  *\n> FROM IM_Match_Table smalltable\n>  inner join invtran bigtable on bigtable.invtranref = smalltable.invtranref\n\nWell, one table or the other will need to be read in full, and you\nwould normally want that one to be the small table. When there is\nno ORDER BY clause, the fastest way to do that will normally be a\nseqscan. So that part of the query is as it should be. The only\nquestion is whether the access to the big table is using the\nfastest technique.\n\nIf you want to see what the planner's second choice would have\nbeen, you could run:\n\nSET enable_indexscan = off;\n\non a connection and try the explain again. If you don't like that\none, you might be able to disable another node type and see what\nyou get. If one of the other alternatives is faster, that would\nsuggest that adjustments are needed to the costing factors;\notherwise, it just takes that long to read hundreds of thousands of\nrows in one table and look for related data for each of them in\nanother table.\n\n> \"Nested Loop (cost=0.00..341698.92 rows=48261 width=171) (actual\n> time=0.042..567.980 rows=48257 loops=1)\"\n\nFrankly, at 12 microseconds per matched pair of rows, I think\nyou're doing OK.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 08:16:37 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "Frankly, at 12 microseconds per matched pair of rows, I think\n> you're doing OK.\n>\n> This plan is the good one, I want the indexscan nested loop join and this\nis only achieved after making all these costing factors change. Before\nthat, it was hash join and was very slow.\n\nHowever, I'm worried about the config changes being too 'extreme', i.e.\nboth sequential I/O and random I/O have the same cost and being only 0.1.\nSo, I was more wondering why I have to make such dramatic changes to\nconvince the optimiser to use NL join instead of hash join. And also, I'm\nnot sure what impact will these changes have on other queries yet. e.g.\nwill a query that's fine with hash join now choose NL join and runs slower?\n\n\nFrankly, at 12 microseconds per matched pair of rows, I think\nyou're doing OK.This plan is the good one, I want the indexscan nested loop join and this is only achieved after making all these costing factors change. Before that, it was hash join and was very slow.\nHowever, I'm worried about the config changes being too 'extreme', i.e. both sequential I/O and random I/O have the same cost and being only 0.1. So, I was more wondering why I have to make such dramatic changes to convince the optimiser to use NL join instead of hash join. And also, I'm not sure what impact will these changes have on other queries yet. e.g. will a query that's fine with hash join now choose NL join and runs slower?", "msg_date": "Thu, 20 Dec 2012 12:02:17 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
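A session-local way to run the experiment Kevin describes, and to probe Huan's worry about other queries, without touching the global cost settings; the table and column names are the ones from the query quoted in the thread:

    BEGIN;
    SET LOCAL enable_hashjoin = off;   -- or enable_indexscan = off, as suggested
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM im_match_table smalltable
    JOIN invtran bigtable
      ON bigtable.invtranref = smalltable.invtranref;
    ROLLBACK;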
[ { "msg_contents": "Trying again since I couldn't post without being subscribed. The message \ngot stalled and was never sent, so I just subscribed and I'm trying \nagain. Original message I tried to send two days ago:\n\n----------------------------------\n\nI've explained a bit of how my application works in this thread as well \nas the reasons why I couldn't use PG 9.2.1 due to performance issues and \nhad to rollback to 9.1.\n\nhttp://postgresql.1045698.n5.nabble.com/Query-completed-in-lt-1s-in-PG-9-1-and-700s-in-PG-9-2-td5730899.html\n\nNow I found that 9.2.2 was released while 9.1 is performing worse for \nthe same db schema, but the data is now different.\n\nSo, here are the output of some explain analyze:\n\nOld DB dump, PG 9.1: http://explain.depesz.com/s/mvf (0.2s)\nNew DB dump, PG 9.1: http://explain.depesz.com/s/vT2k (4.3s)\nNew DB dump, PG 9.2.2: http://explain.depesz.com/s/uu0 (0.04s)\n\nI've already upgraded back to PG 9.2.2 but I thought you might be \ninterested on backporting that improvement to 9.1 as well and I'm not \neven sure if the bug above was fixed intentionally or by chance so I'd \nlike to be sure about that...\n\nThe query I used was:\n\nSELECT t.id as tid,\n t.acquiror_company_name || ' / ' || t.target_company_name as tname,\n exists(select id from condition_document_excerpt where \ncondition_id=c1726.id) as v1726_has_reference,\n l1726.value as v1726\n FROM company_transaction t\n left outer join\ncondition_option_value v1726\n inner join transaction_condition c1726\n on c1726.id=v1726.condition_id and type_id=1726\n inner join condition_option_label l1726\n on l1726.id=v1726.value_id\n on c1726.transaction_id = t.id\n WHERE t.edit_status = 'Finished' and\n (t.id in (select transaction_id from\ncondition_option_value v1726\n inner join transaction_condition c1726\n on c1726.id=v1726.condition_id and type_id=1726\n inner join condition_option_label l1726\n on l1726.id=v1726.value_id\n AND (v1726.value_id = 278)\n)\n)\n ORDER BY\n t.acquiror_company_name, t.target_company_name\n\n\n\nIf I simplify the WHERE condition it performs much better in 9.1 for \nthis particular case (but I can't do that as the queries are generated \ndynamically, please see first mentioned link to understand the reason):\n\n WHERE t.edit_status = 'Finished' and v1726.value_id = 278\n\nNew DB dump, 9.1, simplified query: http://explain.depesz.com/s/oj1 (0.03s)\n\nThe inner query (for the \"in\" clause) alone takes 44ms:\n\nselect transaction_id from\ncondition_option_value v1726\n inner join transaction_condition c1726\n on c1726.id=v1726.condition_id and type_id=1726\n inner join condition_option_label l1726\n on l1726.id=v1726.value_id\n AND (v1726.value_id = 278)\n\n\nSo, what would be the reason for the full original query to take over 4s \nin PG 9.1?\n\nBest,\n\nRodrigo.\n\n\n\n\n\n\n\n\n Trying again since I couldn't post without being subscribed. The\n message got stalled and was never sent, so I just subscribed and I'm\n trying again. 
Original message I tried to send two days ago:\n\n ----------------------------------\n\n I've explained a bit of how my application works in this thread as\n well as the reasons why I couldn't use PG 9.2.1 due to performance\n issues and had to rollback to 9.1.\n\nhttp://postgresql.1045698.n5.nabble.com/Query-completed-in-lt-1s-in-PG-9-1-and-700s-in-PG-9-2-td5730899.html\n\n Now I found that 9.2.2 was released while 9.1 is performing worse\n for the same db schema, but the data is now different.\n\n So, here are the output of some explain analyze:\n\n Old DB dump, PG 9.1:\n \nhttp://explain.depesz.com/s/mvf\n (0.2s)\n New DB dump, PG 9.1:\n \nhttp://explain.depesz.com/s/vT2k\n (4.3s)\n New DB dump, PG 9.2.2:\n \nhttp://explain.depesz.com/s/uu0\n (0.04s)\n\n I've already upgraded back to PG 9.2.2 but I thought you might be\n interested on backporting that improvement to 9.1 as well and I'm\n not even sure if the bug above was fixed intentionally or by chance\n so I'd like to be sure about that...\n\n The query I used was:\n\n SELECT t.id as tid,\n   t.acquiror_company_name || ' / ' || t.target_company_name as\n tname,\n   exists(select id from condition_document_excerpt where\n condition_id=c1726.id) as v1726_has_reference,\n   l1726.value as v1726\n  FROM company_transaction t\n  left outer join\n condition_option_value v1726\n  inner join transaction_condition c1726\n   on c1726.id=v1726.condition_id and type_id=1726\n   inner join condition_option_label l1726\n    on l1726.id=v1726.value_id\n  on c1726.transaction_id = t.id\n  WHERE t.edit_status = 'Finished' and\n  (t.id in (select transaction_id from\n condition_option_value v1726\n  inner join transaction_condition c1726\n   on c1726.id=v1726.condition_id and type_id=1726\n   inner join condition_option_label l1726\n    on l1726.id=v1726.value_id\n  AND (v1726.value_id = 278)\n )\n )\n  ORDER BY\n  t.acquiror_company_name, t.target_company_name\n\n\n\n If I simplify the WHERE condition it performs much better in 9.1 for\n this particular case (but I can't do that as the queries are\n generated dynamically, please see first mentioned link to understand\n the reason):\n\n  WHERE t.edit_status = 'Finished' and v1726.value_id = 278\n\n New DB dump, 9.1, simplified query:\n \nhttp://explain.depesz.com/s/oj1\n (0.03s)\n\n The inner query (for the \"in\" clause) alone takes 44ms:\n\n select transaction_id from\n condition_option_value v1726\n  inner join transaction_condition c1726\n   on c1726.id=v1726.condition_id and type_id=1726\n   inner join condition_option_label l1726\n    on l1726.id=v1726.value_id\n  AND (v1726.value_id = 278)\n\n\n So, what would be the reason for the full original query to take\n over 4s in PG 9.1?\n\n Best,\n\n Rodrigo.", "msg_date": "Wed, 19 Dec 2012 17:35:57 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "PG 9.1 performance loss due to query plan being changed depending\n\ton db data (4s vs 200ms)" } ]
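Not an explanation of the 9.1 regression, but one workaround worth testing there is rewriting the IN (...) clause as a correlated EXISTS (...); a sketch against the tables from the query above, assuming type_id belongs to transaction_condition, with no guarantee the 9.1 planner picks a better plan:

    SELECT t.id AS tid,
           t.acquiror_company_name || ' / ' || t.target_company_name AS tname
    FROM company_transaction t
    WHERE t.edit_status = 'Finished'
      AND EXISTS (
            SELECT 1
            FROM condition_option_value v
            JOIN transaction_condition c
              ON c.id = v.condition_id
             AND c.type_id = 1726
            WHERE c.transaction_id = t.id
              AND v.value_id = 278)
    ORDER BY t.acquiror_company_name, t.target_company_name;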
[ { "msg_contents": "Dear All,\n\nI've just joined this list, and I'd like to request some advice.\n\nI have a table (1 GB in size) with 24 columns, and 5.6 million rows. Of \nthese, we're interested in two columns, parcel_id_code, and exit_state.\n\n\tparcel_id_code has a fairly uniform distribution of integers \t\n\tfrom 1-99999, it's never null.\n\n\texit_state has 3 possible values, 1,2 and null.\n\tAlmost all the rows are 1, about 0.1% have the value 2, and\n\tonly 153 rows are null\n\n\nThe query I'm trying to optimise looks like this:\n\n\tSELECT * from tbl_tracker\n\tWHERE parcel_id_code='53030' AND exit_state IS NULL;\n\nSo, I have a partial index:\n\n\t\"tbl_tracker_performance_1_idx\" btree (parcel_id_code) WHERE\n\texit_state IS NULL\n\nwhich works fine if it's the only index.\n\n\nBUT, for other queries (unrelated to this question), I also have to have \nfull indexes on these columns:\n\n \"tbl_tracker_exit_state_idx\" btree (exit_state)\n \"tbl_tracker_parcel_id_code_idx\" btree (parcel_id_code)\n\n\nThe problem is, when I now run my query, the planner ignores the \ndedicated index \"tbl_tracker_performance_1_idx\", and instead uses both \nof the full indexes... resulting in a much much slower query (9ms vs \n0.08ms).\n\nA psql session is below. This shows that, if I force the planner to use \nthe partial index, by dropping the others, then it's fast. But as soon \nas I put the full indexes back (which I need for other queries), the \nquery planner chooses them instead, and is slow.\n\n\nThanks very much for your help,\n\nRichard\n\n\n\n\n\n\n\n\n\n\nfsc_log => \\d tbl_tracker\n\n Column | Type | Modifiers\n---------------------+--------------------------+------------------\n id | bigint | not null default \nnextval('master_id_seq'::regclass)\n dreq_timestamp_1 | timestamp with time zone |\n barcode_1 | character varying(13) |\n barcode_2 | character varying(13) |\n barcode_best | character varying(13) |\n entrance_point | character varying(13) |\n induct | character varying(5) |\n entrance_state_x | integer |\n dreq_count | integer |\n parcel_id_code | integer |\n host_id_code | bigint |\n original_dest | integer |\n drep_timestamp_n | timestamp with time zone |\n actual_dest | integer |\n exit_state | integer |\n chute | integer |\n original_dest_state | integer |\n srep_timestamp | timestamp with time zone |\n asn | character varying(9) |\n is_asn_token | boolean |\n track_state | integer |\n warning | boolean |\nIndexes:\n \"tbl_tracker_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"tbl_tracker_barcode_best_idx\" btree (barcode_best)\n \"tbl_tracker_chute_idx\" btree (chute)\n \"tbl_tracker_drep_timestamp_n_idx\" btree (drep_timestamp_n) WHERE \ndrep_timestamp_n IS NOT NULL\n \"tbl_tracker_dreq_timestamp_1_idx\" btree (dreq_timestamp_1) WHERE \ndreq_timestamp_1 IS NOT NULL\n \"tbl_tracker_exit_state_idx\" btree (exit_state)\n \"tbl_tracker_parcel_id_code_idx\" btree (parcel_id_code)\n \"tbl_tracker_performance_1_idx\" btree (parcel_id_code) WHERE \nexit_state IS NULL\n \"tbl_tracker_performance_2_idx\" btree (host_id_code, id)\n \"tbl_tracker_performance_3_idx\" btree (srep_timestamp) WHERE \nexit_state = 1 AND srep_timestamp IS NOT NULL\n \"tbl_tracker_srep_timestamp_idx\" btree (srep_timestamp) WHERE \nsrep_timestamp IS NOT NULL\n\n\n\n\nfsc_log=> explain analyse select * from tbl_tracker where \nparcel_id_code='53030' AND exit_state IS NULL;\n\nQUERY PLAN\n-----------------------------------------------------------------------\n Bitmap Heap Scan on tbl_tracker 
(cost=8.32..10.84 rows=1 width=174) \n(actual time=9.334..9.334 rows=0 loops=1)\n Recheck Cond: ((parcel_id_code = 53030) AND (exit_state IS NULL))\n -> BitmapAnd (cost=8.32..8.32 rows=1 width=0) (actual \ntime=9.329..9.329 rows=0 loops=1)\n -> Bitmap Index Scan on tbl_tracker_parcel_id_code_idx \n(cost=0.00..3.67 rows=57 width=0) (actual time=0.026..0.026 rows=65 loops=1)\n Index Cond: (parcel_id_code = 53030)\n -> Bitmap Index Scan on tbl_tracker_exit_state_idx \n(cost=0.00..4.40 rows=150 width=0) (actual time=9.289..9.289 rows=93744 \nloops=1)\n Index Cond: (exit_state IS NULL)\n Total runtime: 9.366 ms\n(8 rows)\n\n\n\nfsc_log=> drop index tbl_tracker_exit_state_idx;\nDROP INDEX\n\nfsc_log=> explain analyse select * from tbl_tracker where \nparcel_id_code='53030' AND exit_state IS NULL;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl_tracker (cost=3.67..145.16 rows=1 width=174) \n(actual time=0.646..0.646 rows=0 loops=1)\n Recheck Cond: (parcel_id_code = 53030)\n Filter: (exit_state IS NULL)\n -> Bitmap Index Scan on tbl_tracker_parcel_id_code_idx \n(cost=0.00..3.67 rows=57 width=0) (actual time=0.024..0.024 rows=65 loops=1)\n Index Cond: (parcel_id_code = 53030)\n Total runtime: 0.677 ms\n(6 rows)\n\n\n\n\nfsc_log=> drop index tbl_tracker_parcel_id_code_idx;\nDROP INDEX\n\nfsc_log=> explain analyse select * from tbl_tracker where \nparcel_id_code='53030' AND exit_state IS NULL;\n \nQUERY PLAN\n--------------------------------------------------------------------------\n Index Scan using tbl_tracker_performance_1_idx on tbl_tracker \n(cost=0.00..5440.83 rows=1 width=174) (actual time=0.052..0.052 rows=0 \nloops=1)\n Index Cond: (parcel_id_code = 53030)\n Total runtime: 0.080 ms\n(3 rows)\n\n\n\nServer hardware: 8 core, 2.5 GHz, 24 GB, SSD in RAID-1.\n\nPostgresql config (non-default):\n\n version | PostgreSQL 9.1.6 on x86_64\n checkpoint_segments | 128\n client_encoding | UTF8\n commit_delay | 50000\n commit_siblings | 5\n default_statistics_target | 5000\n effective_cache_size | 12000MB\n lc_collate | en_GB.UTF-8\n lc_ctype | en_GB.UTF-8\n log_line_prefix | %t\n log_min_duration_statement | 50\n maintenance_work_mem | 2GB\n max_connections | 100\n max_stack_depth | 4MB\n port | 5432\n random_page_cost | 2.5\n server_encoding | UTF8\n shared_buffers | 6000MB\n ssl | on\n standard_conforming_strings | off\n temp_buffers | 128MB\n TimeZone | GB\n wal_buffers | 16MB\n work_mem | 256MB\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 21:13:06 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Why does the query planner use two full indexes, when a dedicated\n\tpartial index exists?" }, { "msg_contents": "Hi,\n\nOn Wed, Dec 19, 2012 at 1:13 PM, Richard Neill <[email protected]> wrote:\n> Index Scan using tbl_tracker_performance_1_idx on tbl_tracker\n> (cost=0.00..5440.83 rows=1 width=174) (actual time=0.052..0.052 rows=0\n> loops=1)\n> Index Cond: (parcel_id_code = 53030)\n\nIt looks like your index is bloated. 
Have you had a lot of\nupdates/deletes on rows with exit_state is null?\n\nTry to reindex tbl_tracker_performance_1_idx.\n\nTo reindex it without locks create a new index with temporary name\nconcurrently, delete the old one and rename the new one using the old\nname.\n\n--\nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 14:59:39 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "\n\nOn 19/12/12 22:59, Sergey Konoplev wrote:\n\n> On Wed, Dec 19, 2012 at 1:13 PM, Richard Neill <[email protected]> wrote:\n>> Index Scan using tbl_tracker_performance_1_idx on tbl_tracker\n>> (cost=0.00..5440.83 rows=1 width=174) (actual time=0.052..0.052 rows=0\n>> loops=1)\n>> Index Cond: (parcel_id_code = 53030)\n>\n> It looks like your index is bloated. Have you had a lot of\n> updates/deletes on rows with exit_state is null?\n>\n> Try to reindex tbl_tracker_performance_1_idx.\n>\n> To reindex it without locks create a new index with temporary name\n> concurrently, delete the old one and rename the new one using the old\n> name.\n>\n\nHi Sergey,\n\nThanks for your suggestion. Yes, I can see what you mean: over the 3 \nweeks during which we deployed the system, every single row has at one \npoint had the exit_state as null, before being updated.\n\nEssentially, as time moves on, new rows are added, initially with \nexit_state null, then a few minutes later we update them to exit_state \n1, then a few weeks later, they are removed.\n\n[Explanation: the system tracks books around a physical sortation \nmachine; the sorter uses a \"parcel_id_code\" which (for some really daft \nreason suffers wraparound at 99999, i.e. about every 3 hours), books \nwhose exit_state is null are those which are still on the sortation \nmachine; once they exit, the state is either 1 (successful delivery) or \n2 (collision, and down the dump chute).]\n\nBUT....\n\n* The reindex solution doesn't work. I just tried it, and the query \nplanner is still using the wrong indexes.\n\n* If the tbl_tracker_performance_1_idx had indeed become bloated, \nwouldn't that have meant that when the query planner was forced to use \nit (by deleting the alternative indexes), it would have been slow?\n\nAlso, I thought that reindex wasn't supposed to be needed in normal \noperation.\n\nBest wishes,\n\nRichard\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 23:49:34 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "On Wed, Dec 19, 2012 at 3:49 PM, Richard Neill <[email protected]> wrote:\n> * The reindex solution doesn't work. I just tried it, and the query planner\n> is still using the wrong indexes.\n\nCan you show the explain analyze with tbl_tracker_performance_1_idx\nstraight after reindex (eg. 
before it has been bloated again)?\n\n> * If the tbl_tracker_performance_1_idx had indeed become bloated, wouldn't\n> that have meant that when the query planner was forced to use it (by\n> deleting the alternative indexes), it would have been slow?\n\nIt is hard to say. There might be a bloating threshold after with it\nwill be slow. Also it depends on the index column values.\n\n--\nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 16:08:58 -0800", "msg_from": "Sergey Konoplev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "Thanks for your help,\n\nOn 20/12/12 00:08, Sergey Konoplev wrote:\n> On Wed, Dec 19, 2012 at 3:49 PM, Richard Neill <[email protected]> wrote:\n>> * The reindex solution doesn't work. I just tried it, and the query planner\n>> is still using the wrong indexes.\n>\n> Can you show the explain analyze with tbl_tracker_performance_1_idx\n> straight after reindex (eg. before it has been bloated again)?\n\nSure. Just done it now... the system has been fairly lightly loaded for \nthe last few hours - though I did have to change the specific number of \nthe parcel_id_code in the query.\n\n\n\nfsc_log=> explain analyse select * from tbl_tracker where \nparcel_id_code=92223 and exit_state is null;\n\nQUERY PLAN\n-----------------------------------------------------------\n Index Scan using tbl_tracker_exit_state_idx on tbl_tracker \n(cost=0.00..6.34 rows=1 width=174) (actual time=0.321..1.871 rows=1 loops=1)\n Index Cond: (exit_state IS NULL)\n Filter: (parcel_id_code = 92223)\n Total runtime: 1.905 ms\n(4 rows)\n\n\n\nAnd now, force it, by dropping the other index (temporarily):\n\nfsc_log=> drop index tbl_tracker_exit_state_idx;\nDROP INDEX\n\n\nfsc_log=> explain analyse select * from tbl_tracker where \nparcel_id_code=92223 and exit_state is null;\n\nQUERY PLAN\n---------------------------------------------------------------------\n Index Scan using tbl_tracker_performance_1_idx on tbl_tracker \n(cost=0.00..7.78 rows=1 width=174) (actual time=0.040..0.041 rows=1 loops=1)\n Index Cond: (parcel_id_code = 92223)\n Total runtime: 0.077 ms\n(3 rows)\n\n\n\nAs far as I can tell, the query planner really is just getting it wrong.\n\nBTW, there is a significant effect on speed caused by running the same \nquery twice (it pulls stuff from disk into the OS disk-cache), but I've \nalready accounted for this.\n\n\nRichard\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 00:22:49 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "Richard Neill <[email protected]> writes:\n> The problem is, when I now run my query, the planner ignores the \n> dedicated index \"tbl_tracker_performance_1_idx\", and instead uses both \n> of the full indexes... 
resulting in a much much slower query (9ms vs \n> 0.08ms).\n\n> A psql session is below. This shows that, if I force the planner to use \n> the partial index, by dropping the others, then it's fast. But as soon \n> as I put the full indexes back (which I need for other queries), the \n> query planner chooses them instead, and is slow.\n\n[ experiments with a similar test case ... ] I think the reason why the\nplanner is overestimating the cost of using the partial index is that\n9.1 and earlier fail to account for the partial-index predicate when\nestimating the number of index rows that will be visited. Because the\npartial-index predicate is so highly selective in this case, that\nresults in a significant overestimate of how much of the index will be\ntraversed.\n\nWe fixed this for 9.2 in\nhttp://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=21a39de5809cd3050a37d2554323cc1d0cbeed9d\nbut did not want to risk back-patching such a behavioral change. If\nyou're stuck on 9.1 you might want to think about applying that as a\nlocal patch though.\n\n(BTW, the \"fudge factor\" change in that patch has been criticized\nrecently; we've changed it again already for 9.3 and might choose to\nback-patch that into 9.2.3. But it's the rest of it that you care about\nanyway.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Dec 2012 22:06:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes,\n\twhen a dedicated partial index exists?" }, { "msg_contents": "On Wednesday, December 19, 2012, Richard Neill wrote:\n\n> Thanks for your help,\n>\n> On 20/12/12 00:08, Sergey Konoplev wrote:\n>\n>> On Wed, Dec 19, 2012 at 3:49 PM, Richard Neill <[email protected]>\n>> wrote:\n>>\n>>> * The reindex solution doesn't work. I just tried it, and the query\n>>> planner\n>>> is still using the wrong indexes.\n>>>\n>>\n\nIt switched to a better one of the wrong indices, though, and got several\ntimes faster.\n\nHow did it get so bloated in the first place? Is the table being updated\nso rapidly that the statistics might be wrong even immediately after\nanalyze finishes?\n\nIn any case, I can't get it to prefer the full index in 9.1.6 at all. The\npartial index wins hands down unless the table is physically clustered by\nthe parcel_id_code column. In which that case, the partial index wins by\nonly a little bit.\n\nThis is what I did for the table:\n\ncreate table tbl_tracker as select case when random()<0.001 then 2 else\ncase when random()< 0.00003 then NULL else 1 end end as exit_state,\n(random()*99999)::int as parcel_id_code from generate_series(1,5000000) ;\n\nCheers,\n\nJeff\n\n\n>\n\nOn Wednesday, December 19, 2012, Richard Neill wrote:Thanks for your help,\n\nOn 20/12/12 00:08, Sergey Konoplev wrote:\n\nOn Wed, Dec 19, 2012 at 3:49 PM, Richard Neill <[email protected]> wrote:\n\n* The reindex solution doesn't work. I just tried it, and the query planner\nis still using the wrong indexes.It switched to a better one of the wrong indices, though, and got several times faster.\nHow did it get so bloated in the first place?  Is the table being updated so rapidly that the statistics might be wrong even immediately after analyze finishes?In any case, I can't get it to prefer the full index in 9.1.6 at all.  
The partial index wins hands down unless the table is physically clustered by the parcel_id_code column.  In which that case, the partial index wins by only a little bit.\nThis is what I did for the table:create table tbl_tracker as select case when random()<0.001 then 2 else case when random()< 0.00003 then NULL else 1 end end as exit_state, (random()*99999)::int as parcel_id_code from generate_series(1,5000000) ;\nCheers,Jeff", "msg_date": "Wed, 19 Dec 2012 21:12:14 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "Dear Tom,\n\nThanks very much for your advice.\n\n>> A psql session is below. This shows that, if I force the planner to use\n>> the partial index, by dropping the others, then it's fast. But as soon\n>> as I put the full indexes back (which I need for other queries), the\n>> query planner chooses them instead, and is slow.\n>\n> [ experiments with a similar test case ... ] I think the reason why the\n> planner is overestimating the cost of using the partial index is that\n> 9.1 and earlier fail to account for the partial-index predicate when\n> estimating the number of index rows that will be visited. Because the\n> partial-index predicate is so highly selective in this case, that\n> results in a significant overestimate of how much of the index will be\n> traversed.\n\nI think that seems likely to me.\n\nI'll try out 9.2 and see if it helps. As it's a production server, I \nhave to wait for some downtime, probably Friday night before I can find \nout - will report back.\n\nBest wishes,\n\nRichard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 05:51:57 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "Dear Jeff,\n\nThanks for your help,\n\n> * The reindex solution doesn't work. I just tried it, and\n> the query planner\n> is still using the wrong indexes.\n>\n>\n> It switched to a better one of the wrong indices, though, and got\n> several times faster.\n>\n\nI think that this is a red herring. The switching between the two \n\"wrong\" indices seems to be caused by non-uniformity in the \nparcel_id_code: although it's distributed fairly well across 1-99999, \nit's not perfect.\n\nAs for the speed-up, I think that's mostly caused by the fact that \nrunning \"Analyse\" is pulling the entire table (and the relevant index) \ninto RAM and flushing other things out of that cache.\n\n> How did it get so bloated in the first place? Is the table being\n> updated so rapidly that the statistics might be wrong even immediately\n> after analyze finishes?\n\nI don't think it is. We're doing about 10 inserts and 20 updates per \nsecond on that table. But when I tested it, production had stopped for \nthe night - so the system was quiescent between the analyse and the select.\n\n> In any case, I can't get it to prefer the full index in 9.1.6 at all.\n> The partial index wins hands down unless the table is physically\n> clustered by the parcel_id_code column. In which that case, the partial\n> index wins by only a little bit.\n\nInteresting that you should say that... 
the original setup script did \nchoose to cluster the table on that column.\n\nAlso, I wonder whether it matters which order the indexes are created in?\n\n\nBest wishes,\n\nRichard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 05:57:14 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "Richard Neill <[email protected]> writes:\n> Also, I wonder whether it matters which order the indexes are created in?\n\nIIRC, if the estimated costs of using two different indexes come out the\nsame (to within 1% or so), then the planner keeps the first-generated\npath, which will result in preferring the index with smaller OID. This\neffect doesn't apply to your problem query though, since we can see from\nthe drop-experiments that the estimated costs are quite a bit different.\n\nA more likely explanation if you see some effect that looks like order\ndependency is that the more recently created index has accumulated less\nbloat, and thus has a perfectly justifiable cost advantage.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 10:43:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes,\n\twhen a dedicated partial index exists?" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> In any case, I can't get it to prefer the full index in 9.1.6 at all. The\n> partial index wins hands down unless the table is physically clustered by\n> the parcel_id_code column. In which that case, the partial index wins by\n> only a little bit.\n\n> This is what I did for the table:\n\n> create table tbl_tracker as select case when random()<0.001 then 2 else\n> case when random()< 0.00003 then NULL else 1 end end as exit_state,\n> (random()*99999)::int as parcel_id_code from generate_series(1,5000000) ;\n\nWhat I did to try to duplicate Richard's situation was to create a test\ntable in which all the exit_state values were NULL, then build the\nindex, then UPDATE all but a small random fraction of the rows to 1,\nthen vacuum. This results in a rather bloated partial index, but I\nthink that's probably what he's got given that every record initially\nenters the table with NULL exit_state. It would take extremely frequent\nvacuuming to keep the partial index from accumulating a lot of dead\nentries.\n\nIn this scenario, with 9.1, I got overly large estimates for the cost of\nusing the partial index; which matches up with his reports.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 10:49:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes,\n\twhen a dedicated partial index exists?" 
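A sketch of the reproduction Tom describes, reusing the data shape Jeff posted earlier in the thread; the table name, row count and fractions are illustrative:

    -- Every row starts "in flight" (exit_state NULL), the partial index is
    -- built, then all but a small random fraction are flipped to 1:
    CREATE TABLE tbl_tracker_test AS
    SELECT (random() * 99999)::int AS parcel_id_code,
           NULL::int               AS exit_state
    FROM generate_series(1, 5000000);

    CREATE INDEX tbl_tracker_test_partial_idx
        ON tbl_tracker_test (parcel_id_code)
        WHERE exit_state IS NULL;

    UPDATE tbl_tracker_test SET exit_state = 1 WHERE random() < 0.9999;
    VACUUM ANALYZE tbl_tracker_test;

    EXPLAIN ANALYZE
    SELECT * FROM tbl_tracker_test
    WHERE parcel_id_code = 53030 AND exit_state IS NULL;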
}, { "msg_contents": "Dear Tom,\n\nThanks againg for your help on this.\n\nOn 20/12/12 03:06, Tom Lane wrote:\n> Richard Neill <[email protected]> writes:\n>> The problem is, when I now run my query, the planner ignores the\n>> dedicated index \"tbl_tracker_performance_1_idx\", and instead uses both\n>> of the full indexes... resulting in a much much slower query (9ms vs\n>> 0.08ms).\n>\n\nI've now installed 9.2. As you said, thanks to the change in 9.2 it \ninitially prefers the partial index.\n\nBUT, after 1 cycle of inserting 500k rows, then deleting them all, then \nstarting to insert again, I find that the planner has reverted to the \nformer bad behaviour.\n\nReindexing only takes a couple of seconds, and restores correctness.\n\nWhat's going on? Do I need to run reindex in a cron-job? I thought that \nreindex wasn't \"normally\" needed, and that index bloat happened only \nafter every row had changed value hundreds of times.\n\nThanks,\n\nRichard\n\n\n---------------------\nHere's the same session again.\n\n[Please ignore the dreq_1_timestamp check - I mistakenly failed to \nsimplify it out of the query, and now that I reindexed, I can't redo the \nexperiment. I don't think it makes any difference.]\n\n\nfsc_log=> explain analyse select * from tbl_tracker WHERE\nparcel_id_code='90820' AND exit_state IS NULL AND (dreq_timestamp_1 > \ntimestamp '2012-12-20 13:02:36.652' - INTERVAL '36 hours');\n\nQUERY PLAN\n---------------------------------------------------------------\n Bitmap Heap Scan on tbl_tracker (cost=17.35..19.86 rows=1 width=174) \n(actual time=8.056..8.056 rows=0 loops=1)\n Recheck Cond: ((exit_state IS NULL) AND (parcel_id_code = 90820))\n Filter: (dreq_timestamp_1 > '2012-12-19 01:02:36.652'::timestamp \nwithout time zone)\n -> BitmapAnd (cost=17.35..17.35 rows=1 width=0) (actual \ntime=8.053..8.053 rows=0 loops=1)\n -> Bitmap Index Scan on tbl_tracker_exit_state_idx \n(cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277 \nloops=1)\n Index Cond: (exit_state IS NULL)\n -> Bitmap Index Scan on tbl_tracker_parcel_id_code_idx \n(cost=0.00..8.73 rows=58 width=0) (actual time=0.025..0.025 rows=72 loops=1)\n Index Cond: (parcel_id_code = 90820)\n Total runtime: 8.090 ms\n(9 rows)\n\n\nfsc_log=> REINDEX index tbl_tracker_performance_1_idx;\n#This only took a couple of seconds to do.\n\nfsc_log=> explain analyse select * from tbl_tracker WHERE \nparcel_id_code='90820' AND exit_state IS NULL AND (dreq_timestamp_1 > \ntimestamp '2012-12-20 13:02:36.652' - INTERVAL '36 hours');\n\nQUERY PLAN\n---------------------------------------------------------------\n\n\n Index Scan using tbl_tracker_performance_1_idx on tbl_tracker \n(cost=0.00..5.27 rows=1 width=174) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (parcel_id_code = 90820)\n Filter: (dreq_timestamp_1 > '2012-12-19 01:02:36.652'::timestamp \nwithout time zone)\n Total runtime: 0.047 ms\n(4 rows)\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Dec 2012 02:34:44 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" 
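If the partial index keeps degrading between rebuilds, the lock-free rebuild Sergey described earlier in the thread can be scripted (for example from cron); the index and table names are the ones from this thread, the temporary name is made up:

    CREATE INDEX CONCURRENTLY tbl_tracker_performance_1_new_idx
        ON tbl_tracker (parcel_id_code)
        WHERE exit_state IS NULL;

    DROP INDEX tbl_tracker_performance_1_idx;

    ALTER INDEX tbl_tracker_performance_1_new_idx
        RENAME TO tbl_tracker_performance_1_idx;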
}, { "msg_contents": "\n\nOn 21/12/12 02:34, Richard Neill wrote:\n>\n> Reindexing only takes a couple of seconds, and restores correctness.\n>\n\nInterestingly, the partial index (after reindexing) is only 16kB in \nsize; whereas the table is 1.1 GB, and the normal single-column indexes \nare about 250MB in size.\n\nIn terms of what's physically happening in reality,\n\n- tbl_tracker keeps a record of all books that move through the system\n over a period of one month (at a rate of about 20/second, or 1\n million/day), after which they are deleted.\n\n- the partial index, tbl_tracker_performance_1_idx tracks only those\n books which are currently \"in flight\" - books remain in flight for\n about 200 seconds as they go round the machine.\n (While in flight, these have exit_state = NULL)\n\n- the partial index is used to overcome a design defect(*) in the\n sorter machine, namely that it doesn't number each book uniquely,\n but wraps the parcel_id_code every few hours. Worse, some books can\n remain on the sorter for several days (if they jam), so the numbering\n isn't a clean \"wraparound\", but more like a fragmented (and\n occasionally lossy) filesystem.\n\n- What I'm trying to do is trace the history of the books\n through the system and assign each one a proper unique id.\n So, if I see a book with \"parcel_id_code = 37\",\n is it a new book (after pid wrap), or is it the same book I saw 1\n minute ago, that hasn't exited the sorter?\n\n\nSo... is there some way to, for example, set a trigger that will reindex \nevery time the index exceeds 1000 rows?\n\n\nRichard\n\n\n\n(*)Readers of The Daily WTF might appreciate another curious anomaly: \nthis machine originally had an RS-232 port; it now uses ethernet, but \nTxD and RxD use different TCP sockets on different network ports!\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Dec 2012 03:16:20 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "On Thursday, December 20, 2012, Tom Lane wrote:\n\n> Jeff Janes <[email protected] <javascript:;>> writes:\n> > In any case, I can't get it to prefer the full index in 9.1.6 at all.\n> The\n> > partial index wins hands down unless the table is physically clustered by\n> > the parcel_id_code column. In which that case, the partial index wins by\n> > only a little bit.\n>\n> > This is what I did for the table:\n>\n> > create table tbl_tracker as select case when random()<0.001 then 2 else\n> > case when random()< 0.00003 then NULL else 1 end end as exit_state,\n> > (random()*99999)::int as parcel_id_code from generate_series(1,5000000) ;\n>\n> What I did to try to duplicate Richard's situation was to create a test\n> table in which all the exit_state values were NULL, then build the\n> index, then UPDATE all but a small random fraction of the rows to 1,\n> then vacuum. This results in a rather bloated partial index, but I\n> think that's probably what he's got given that every record initially\n> enters the table with NULL exit_state. 
It would take extremely frequent\n> vacuuming to keep the partial index from accumulating a lot of dead\n> entries.\n>\n\nI played with this scenario too, but still the only way I could get it to\nspurn the partial index is if I rebuild the full one (to remove the bloat)\nbut not the partial one. But still, the cost were always in the 8 to 11\nrange for either index with default cost settings. It is hard to imagine\nthe amount of bloat needed to drive it up to 5000, like in his initial\nreport before he rebuilt it.\n\nCheers,\n\nJeff\n\n>\n>\n\nOn Thursday, December 20, 2012, Tom Lane wrote:Jeff Janes <[email protected]> writes:\n\n> In any case, I can't get it to prefer the full index in 9.1.6 at all.  The\n> partial index wins hands down unless the table is physically clustered by\n> the parcel_id_code column.  In which that case, the partial index wins by\n> only a little bit.\n\n> This is what I did for the table:\n\n> create table tbl_tracker as select case when random()<0.001 then 2 else\n> case when random()< 0.00003 then NULL else 1 end end as exit_state,\n> (random()*99999)::int as parcel_id_code from generate_series(1,5000000) ;\n\nWhat I did to try to duplicate Richard's situation was to create a test\ntable in which all the exit_state values were NULL, then build the\nindex, then UPDATE all but a small random fraction of the rows to 1,\nthen vacuum.  This results in a rather bloated partial index, but I\nthink that's probably what he's got given that every record initially\nenters the table with NULL exit_state.  It would take extremely frequent\nvacuuming to keep the partial index from accumulating a lot of dead\nentries.I played with this scenario too, but still the only way I could get it to spurn the partial index is if I rebuild the full one (to remove the bloat) but not the partial one.  But still, the cost were always in the 8 to 11 range for either index with default cost settings.  It is hard to imagine the amount of bloat needed to drive it up to 5000, like in his initial report before he rebuilt it.\nCheers,Jeff", "msg_date": "Thu, 20 Dec 2012 19:40:11 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "On Thursday, December 20, 2012, Richard Neill wrote:\n\n> Dear Tom,\n>\n> Thanks againg for your help on this.\n>\n> On 20/12/12 03:06, Tom Lane wrote:\n>\n>> Richard Neill <[email protected]> writes:\n>>\n>>> The problem is, when I now run my query, the planner ignores the\n>>> dedicated index \"tbl_tracker_performance_1_**idx\", and instead uses both\n>>> of the full indexes... resulting in a much much slower query (9ms vs\n>>> 0.08ms).\n>>>\n>>\n>>\n> I've now installed 9.2. As you said, thanks to the change in 9.2 it\n> initially prefers the partial index.\n>\n> BUT, after 1 cycle of inserting 500k rows, then deleting them all, then\n> starting to insert again, I find that the planner has reverted to the\n> former bad behaviour.\n>\n\nPresumably the real work load has this type of turn over happen one row at\na time, rather than all in one giant mass update transaction, right? That\nmakes a big difference in the way space is re-used.\n\n\n>\n> Reindexing only takes a couple of seconds, and restores correctness.\n>\n\nEven your slow query is pretty fast. If you can't afford that, can you\nafford to take an exclusive lock for a couple of seconds every few minutes?\n\n\n>\n> What's going on? 
Do I need to run reindex in a cron-job? I thought that\n> reindex wasn't \"normally\" needed, and that index bloat happened only after\n> every row had changed value hundreds of times.\n>\n\nThe partial index is highly leveraged. If every tuple in the table is\nupdated once, that amounts to every tuple in the index being updated 25,000\ntimes.\n\nFor the same reason, it is probably not getting vacuum often enough. The\ndefault settings have the table vacuumed once 20% of its rows turns over,\nbut that means the partial index has been turned over many many times. You\ncould crank down the auto-vacuum settings for that table, or run manual\nvacuum with a cron job.\n\nVacuum will not unbloat the index, but if you run it often enough it will\nkeep the bloat from getting to bad in the first place.\n\nBut what I think I'd do is change one of your full indexes to contain the\nother column as well, and get rid of the partial index. It might not be\nquite as efficient as the partial index might theoretically be, but it\nshould be pretty good and also be less fragile.\n\n\n>\n>\n> -> Bitmap Index Scan on tbl_tracker_exit_state_idx\n> (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277\n> loops=1)\n>\n\nThis is finding 100 times more rows than it thinks it will. If that could\nbe fixed, surely this plan would not look as good. But then, it would\nprobably just switch to another plan that is not the one you want, either.\n\n\nCheers,\n\nJeff\n\n\n>\n\nOn Thursday, December 20, 2012, Richard Neill wrote:Dear Tom,\n\nThanks againg for your help on this.\n\nOn 20/12/12 03:06, Tom Lane wrote:\n\nRichard Neill <[email protected]> writes:\n\nThe problem is, when I now run my query, the planner ignores the\ndedicated index \"tbl_tracker_performance_1_idx\", and instead uses both\nof the full indexes... resulting in a much much slower query (9ms vs\n0.08ms).\n\n\n\n\nI've now installed 9.2. As you said, thanks to the change in 9.2 it initially prefers the partial index.\n\nBUT, after 1 cycle of inserting 500k rows, then deleting them all, then starting to insert again, I find that the planner has reverted to the former bad behaviour.Presumably the real work load has this type of turn over happen one row at a time, rather than all in one giant mass update transaction, right?  That makes a big difference in the way space is re-used. \n \n\nReindexing only takes a couple of seconds, and restores correctness.Even your slow query is pretty fast.  If you can't afford that, can you afford to take an exclusive lock for a couple of seconds every few minutes?\n \nWhat's going on? Do I need to run reindex in a cron-job? I thought that reindex wasn't \"normally\" needed, and that index bloat happened only after every row had changed value hundreds of times.\nThe partial index is highly leveraged.  If every tuple in the table is updated once, that amounts to every tuple in the index being updated 25,000 times.For the same reason, it is probably not getting vacuum often enough.  The default settings have the table vacuumed once 20% of its rows turns over, but that means the partial index has been turned over many many times.  You could crank down the auto-vacuum settings for that table, or run manual vacuum with a cron job.\nVacuum will not unbloat the index, but if you run it often enough it will keep the bloat from getting to bad in the first place.But what I think I'd do is change one of your full indexes to contain the other column as well, and get rid of the partial index.  
It might not be quite as efficient as the partial index might theoretically be, but it should be pretty good and also be less fragile.\n \n\n         ->  Bitmap Index Scan on tbl_tracker_exit_state_idx (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277 loops=1)This is finding 100 times more rows than it thinks it will.  If that could be fixed, surely this plan would not look as good.  But then, it would probably just switch to another plan that is not the one you want, either.\nCheers,Jeff", "msg_date": "Thu, 20 Dec 2012 21:15:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "On Thursday, December 20, 2012, Richard Neill wrote:\n\n>\n>\n>\n> - What I'm trying to do is trace the history of the books\n> through the system and assign each one a proper unique id.\n> So, if I see a book with \"parcel_id_code = 37\",\n> is it a new book (after pid wrap), or is it the same book I saw 1\n> minute ago, that hasn't exited the sorter?\n>\n\nI'm not sure how you are implementing this goal, but I don't think it is\nbest done by looping over all books (presumably from some other table?) and\nissuing an individual query for each one, if that is what you are doing.\n Some kind of bulk join would probably be more efficient.\n\nCheers,\n\nJeff\n\n>\n>\n\nOn Thursday, December 20, 2012, Richard Neill wrote:\n\n- What I'm trying to do is trace the history of the books\n  through the system and assign each one a proper unique id.\n  So, if I see a book with \"parcel_id_code = 37\",\n  is it a new book (after pid wrap), or is it the same book I saw 1\n  minute ago, that hasn't exited the sorter?I'm not sure how you are implementing this goal, but I don't think it is best done by looping over all books (presumably from some other table?) and issuing an individual query for each one, if that is what you are doing.  Some kind of bulk join would probably be more efficient.\nCheers,Jeff", "msg_date": "Thu, 20 Dec 2012 21:15:17 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "\n> I've now installed 9.2. As you said, thanks to the change in 9.2 it\n> initially prefers the partial index.\n>\n> BUT, after 1 cycle of inserting 500k rows, then deleting them all,\n> then starting to insert again, I find that the planner has reverted\n> to the former bad behaviour.\n>\n>\n> Presumably the real work load has this type of turn over happen one row\n> at a time, rather than all in one giant mass update transaction, right?\n> That makes a big difference in the way space is re-used.\n\nSorry - I meant a \"real\" workload here. I replayed a whole day's worth \nof real data into the DB, and that's what I meant by a cycle. Everything \nwas row-at-a-time.\n(It currently takes about an hour to do this)\n\n>\n> Reindexing only takes a couple of seconds, and restores correctness.\n>\n>\n> Even your slow query is pretty fast. If you can't afford that, can you\n> afford to take an exclusive lock for a couple of seconds every few minutes?\n\nYes, I can. If that's the root cause, I'll do that. 
But it seems to me \nthat I've stumbled upon some rather awkward behaviour that I need to \nunderstand fully, and if the index is bloating that badly and that \nquickly, then perhaps it's a PG bug (or at least cause for a logfile \nwarning).\n\nBTW, The index has gone from 16kB to 4.5MB in 6 hours of runtime today. \nIt still only has 252 matching rows.\n\n\n> What's going on? Do I need to run reindex in a cron-job? I thought\n> that reindex wasn't \"normally\" needed, and that index bloat happened\n> only after every row had changed value hundreds of times.\n>\n>\n> The partial index is highly leveraged. If every tuple in the table is\n> updated once, that amounts to every tuple in the index being updated\n> 25,000 times.\n\nHow so? That sounds like O(n_2) behaviour.\n\n\n>\n> For the same reason, it is probably not getting vacuum often enough.\n> The default settings have the table vacuumed once 20% of its rows\n> turns over, but that means the partial index has been turned over many\n> many times. You could crank down the auto-vacuum settings for that\n> table, or run manual vacuum with a cron job.\n>\n> Vacuum will not unbloat the index, but if you run it often enough it\n> will keep the bloat from getting too bad in the first place.\n\nThanks. I've reduced autovacuum_vacuum_scale_factor from 0.2 to 0.05\n(and set autovacuum_analyze_scale_factor = 0.05 for good measure)\n\nAs I understand it, both of these can run in parallel, and I have 7 \ncores usually idle, while the other is maxed out.\n\n> But what I think I'd do is change one of your full indexes to contain\n> the other column as well, and get rid of the partial index. It might\n> not be quite as efficient as the partial index might theoretically be,\n> but it should be pretty good and also be less fragile.\n\nI'll try that.\n\nThanks,\n\nRichard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 22 Dec 2012 17:29:03 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "\n\nOn 21/12/12 05:15, Jeff Janes wrote:\n>\n>\n> - What I'm trying to do is trace the history of the books\n> through the system and assign each one a proper unique id.\n> So, if I see a book with \"parcel_id_code = 37\",\n> is it a new book (after pid wrap), or is it the same book I saw 1\n> minute ago, that hasn't exited the sorter?\n>\n> I'm not sure how you are implementing this goal, but I don't think it is\n> best done by looping over all books (presumably from some other table?)\n> and issuing an individual query for each one, if that is what you are\n> doing. Some kind of bulk join would probably be more efficient.\n\nIt would be nice to do a bulk join, but it's not possible: the query is \ntime sensitive. Consider:\n\nid/pkey pid timestamp exit_state \tdestination\n\n1 77\t-24 hours\t1\t\t212\n2\t77\t-18 hours\t1\t\t213\n3\t77\t-12 hours\t1\t\t45\n4\t77\t-6 hours\t1\t\t443\n5\t77\t0 hours\t\tnull\t\t\n\n[in future...]\n5\t77\t0 hours\t\t1\t\t92\n6\t77\t4 hours\t\tnull\n\n\nAt time +5 minutes, I receive a report that a book with parcel_id 77 has \nsuccessfully been delivered to destination 92. So, what I have to do is:\n\n* First, find the id of the most recent book which had pid=77 and where \nthe exit state is null. 
(hopefully, but not always, this yields exactly \none row, which in this case is id=5)\n\n* Then update the table to set the destination to 92, where the id=5.\n\n\nIt's a rather cursed query, because:\n - the sorter machine doesn't give me full info in each message, only\n deltas, and I have to reconstruct the global state.\n - pids are reused within hours, but don't increase monotonically,\n (more like drawing repeatedly from a shuffled deck, where cards\n are only returned to the deck sporadically.\n - some pids get double-reported\n - 1% of books fall off the machine, or get stuck on it.\n - occasionally, messages are lost.\n - the sorter state isn't self-consistent (it can be restarted)\n\n\nThe tracker table is my attempt to consistently combine all the state we \nknow, and merge in the deltas as we receive messages from the sorter \nmachine. It ends up reflecting reality about 99% of the time.\n\n\nRichard\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 22 Dec 2012 17:46:30 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "Dear All,\n\nI think periodic reindex may be the solution. Even after reducing the \nautovacuum fraction to 0.05, the index still seems to bloat.\n\nAfter another couple of days runtime, the index is using 11MB, and\nI get a query that takes 2.448ms. Then I reindex (takes about 3 sec), \nand the index falls to 16kB, and the query takes 0.035ms.\n\nSo... problem solved for me: I just have to reindex every few hours.\nBUT, this suggests a few remaining things:\n\n\n1. The documentation still suggests that reindex should not be needed in \n\"normal\" operation... is this true? Or are the docs wrong? Or have I \ngot such an edge case? Does this suggest that an auto-reindexer would be \na useful feature?\n\n\n2. 
Is there any way to force the planner to use (or ignore) a specific \nindex, for testing purposes, short of actually dropping the index?\nThis would be very useful for debugging, especially given that query \nplans can only really be fully tested on production systems, and that \ndropping indexes is rather a bad thing to do when live operation is \nsimultaneously happening on that server!\n\nThanks again for your help.\n\nBest wishes,\n\nRichard\n\n\n\n\n\nfsc_log=> explain analyse select * from tbl_tracker WHERE \nparcel_id_code='32453' AND exit_state IS NULL;\n \nQUERY PLAN\n------------------------------------------------------\n Bitmap Heap Scan on tbl_tracker (cost=20.81..23.32 rows=1 width=174) \n(actual time=2.408..2.408 rows=1 loops=1)\n Recheck Cond: ((exit_state IS NULL) AND (parcel_id_code = 32453))\n -> BitmapAnd (cost=20.81..20.81 rows=1 width=0) (actual \ntime=2.403..2.403 rows=0 loops=1)\n -> Bitmap Index Scan on tbl_tracker_exit_state_idx \n(cost=0.00..9.25 rows=132 width=0) (actual time=2.378..2.378 rows=5 loops=1)\n Index Cond: (exit_state IS NULL)\n -> Bitmap Index Scan on tbl_tracker_parcel_id_code_idx \n(cost=0.00..11.30 rows=62 width=0) (actual time=0.022..0.022 rows=65 \nloops=1)\n Index Cond: (parcel_id_code = 32453)\n Total runtime: 2.448 ms\n\n\nfsc_log => REINDEX;\n\n\nfsc_log=> explain analyse select * from tbl_tracker WHERE \nparcel_id_code='32453' AND exit_state IS NULL;\n\nQUERY PLAN\n-------------------------------------------------\nIndex Scan using tbl_tracker_performance_1_idx on tbl_tracker \n(cost=0.00..5.27 rows=1 width=174) (actual time=0.007..0.008 rows=1 loops=1)\n Index Cond: (parcel_id_code = 32453)\n Total runtime: 0.035 ms\n(3 rows)\n\n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 18:37:11 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists? (solved?)" }, { "msg_contents": "On Mon, Dec 24, 2012 at 06:37:11PM +0000, Richard Neill wrote:\n> [...]\n> So... problem solved for me: I just have to reindex every few hours.\n> BUT, this suggests a few remaining things:\n> [...]\n> 2. Is there any way to force the planner to use (or ignore) a\n> specific index, for testing purposes, short of actually dropping the\n> index?\n> This would be very useful for debugging, especially given that query\n> plans can only really be fully tested on production systems, and\n> that dropping indexes is rather a bad thing to do when live\n> operation is simultaneously happening on that server!\n\nI believe that:\n\n BEGIN;\n drop index ....\n explain analyze ...\n explain analyze ...\n ROLLBACK;\n\nwill do what you want. 
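A minimal sketch of that test transaction, filled in with the index and query names that appear earlier in this thread (names assumed from those messages; adjust as needed — the ROLLBACK puts both full indexes back):

    BEGIN;
    DROP INDEX tbl_tracker_exit_state_idx;
    DROP INDEX tbl_tracker_parcel_id_code_idx;
    EXPLAIN ANALYZE
        SELECT * FROM tbl_tracker
        WHERE parcel_id_code = 32453 AND exit_state IS NULL;
    ROLLBACK;
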
IIUC Postgres allows DDL within transactions\nand thus be rolled back and the operations within the transaction\naren't visible to your other queries running outside the transaction.\n\n http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis\n\nand\n\n http://www.postgresql.org/docs/9.2/static/sql-dropindex.html\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 20:42:52 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes,\n\twhen a dedicated partial index exists? (solved?)" }, { "msg_contents": "On Saturday, December 22, 2012, Richard Neill wrote:\n\n>\n>\n>> The partial index is highly leveraged. If every tuple in the table is\n>> updated once, that amounts to every tuple in the index being updated\n>> 25,000 times.\n>>\n>\n> How so? That sounds like O(n_2) behaviour.\n>\n\n\nIf the table has 5 million rows while the index has 200 (active) rows at\nany given time, then to update every row in the table to null and back\nagain would be 100% turn over of the table. But each such change would\nlead to an addition and then a deletion from the index. So 100% turnover\nof the table would be a 5 million / 200 = 25,000 fold turn of the index.\n\nThere is some code that allows a btree index entry to get killed (and so\nthe slot to be reused) without any vacuum, if a scan follows that entry and\nfinds the corresponding tuple in the table no longer visible to anyone. I\nhave not examined this code, and don't know whether it is doing its job but\njust isn't enough to prevent the bloat, or if for some reason it is not\napplicable to your situation.\n\n\n Cheers,\n\nJeff\n\nOn Saturday, December 22, 2012, Richard Neill wrote:\n\n\nThe partial index is highly leveraged.  If every tuple in the table is\nupdated once, that amounts to every tuple in the index being updated\n25,000 times.\n\n\nHow so? That sounds like O(n_2) behaviour.If the table has 5 million rows while the index has 200 (active) rows at any given time, then to update every row in the table to null and back again would be 100% turn over of the table.  But each such change would lead to an addition and then a deletion from the index.  So 100% turnover of the table would be a 5 million / 200 = 25,000 fold turn of the index.\nThere is some code that allows a btree index entry to get killed (and so the slot to be reused) without any vacuum, if a scan follows that entry and finds the corresponding tuple in the table no longer visible to anyone.  I have not examined this code, and don't know whether it is doing its job but just isn't enough to prevent the bloat, or if for some reason it is not applicable to your situation.\n Cheers,Jeff", "msg_date": "Wed, 26 Dec 2012 23:03:32 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" }, { "msg_contents": "On Monday, December 24, 2012, John Rouillard wrote:\n\n> On Mon, Dec 24, 2012 at 06:37:11PM +0000, Richard Neill wrote:\n> > [...]\n> > So... problem solved for me: I just have to reindex every few hours.\n> > BUT, this suggests a few remaining things:\n> > [...]\n> > 2. 
Is there any way to force the planner to use (or ignore) a\n> > specific index, for testing purposes, short of actually dropping the\n> > index?\n> > This would be very useful for debugging, especially given that query\n> > plans can only really be fully tested on production systems, and\n> > that dropping indexes is rather a bad thing to do when live\n> > operation is simultaneously happening on that server!\n>\n> I believe that:\n>\n> BEGIN;\n> drop index ....\n> explain analyze ...\n> explain analyze ...\n> ROLLBACK;\n>\n\nThere are two cautions here. One is that doing the drop index takes an\naccess exclusive lock on the table, and so brings all other connections to\na screeching halt. That is not much nicer to do on a production system\nthan actually dropping the index, so don't dilly-dally around before doing\nthe rollback. rollback first, then ruminate on the results of the explain.\n\nAlso, this will forcibly cancel any autovacuums occurring on the table. I\nthink one of the reasons he needs to reindex so much is that he is already\ndesperately short of vacuuming behavior.\n\nCheers,\n\nJeff\n\n>\n>\n\nOn Monday, December 24, 2012, John Rouillard wrote:On Mon, Dec 24, 2012 at 06:37:11PM +0000, Richard Neill wrote:\n\n> [...]\n> So... problem solved for me: I just have to reindex every few hours.\n> BUT, this suggests a few remaining things:\n> [...]\n> 2. Is there any way to force the planner to use (or ignore) a\n> specific index, for testing purposes, short of actually dropping the\n> index?\n> This would be very useful for debugging, especially given that query\n> plans can only really be fully tested on production systems, and\n> that dropping indexes is rather a bad thing to do when live\n> operation is simultaneously happening on that server!\n\nI believe that:\n\n  BEGIN;\n  drop index ....\n  explain analyze ...\n  explain analyze ...\n  ROLLBACK;There are two cautions here.  One is that doing the drop index takes an access exclusive lock on the table, and so brings all other connections to a screeching halt.  That is not much nicer to do on a production system than actually dropping the index, so don't dilly-dally around before doing the rollback.  rollback first, then ruminate on the results of the explain.\nAlso, this will forcibly cancel any autovacuums occurring on the table.  I think one of the reasons he needs to reindex so much is that he is already desperately short of vacuuming behavior.\nCheers,Jeff", "msg_date": "Wed, 26 Dec 2012 23:03:35 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists? (solved?)" }, { "msg_contents": "On Monday, December 24, 2012, Richard Neill wrote:\n\n> Dear All,\n>\n> I think periodic reindex may be the solution. Even after reducing the\n> autovacuum fraction to 0.05, the index still seems to bloat.\n>\n\n\nSince your index is so tiny compared to the table, I'd lower it even more.\n0.0001, maybe.\n\nHowever, vacuums can't overlap a table, so you can't get the table to be\nvacuumed more often than the time it takes to run the vacuum (which took\naround 4 minutes at default vacuum cost settings in a toy system) and so\nyou may need to lower the autovacuum_vacuum_cost_delay to 0 for this table.\n (I suspect it is all in cache anyway, so the default settings are too\npessimistic.) 
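As a sketch (values as suggested above, not tested against this workload), both of those settings can be applied to just this table via per-table storage parameters:

    ALTER TABLE tbl_tracker SET (
        autovacuum_vacuum_scale_factor = 0.0001,
        autovacuum_vacuum_cost_delay   = 0
    );
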
Finally, you might need to lower autovacuum_naptime, because\nthe usually table also won't be vacuumed any more often than that.\n\n\n\n>\n> 1. The documentation still suggests that reindex should not be needed in\n> \"normal\" operation... is this true? Or are the docs wrong? Or have I got\n> such an edge case?\n>\n\nYour case seems pretty far out there to me.\n\nCheers,\n\nJeff\n\nOn Monday, December 24, 2012, Richard Neill wrote:Dear All,\n\nI think periodic reindex may be the solution. Even after reducing the autovacuum fraction to 0.05, the index still seems to bloat.Since your index is so tiny compared to the table, I'd lower it even more. 0.0001, maybe.\nHowever, vacuums can't overlap a table, so you can't get the table to be vacuumed more often than the time it takes to run the vacuum (which took around 4 minutes at default vacuum cost settings in a toy system) and so you may need to lower the autovacuum_vacuum_cost_delay to 0 for this table.  (I suspect it is all in cache anyway, so the default settings are too pessimistic.)  Finally, you might need to lower autovacuum_naptime, because the usually table also won't be vacuumed any more often than that.\n \n\n1. The documentation still suggests that reindex should not be needed in \"normal\" operation...  is this true? Or are the docs wrong? Or have I got such an edge case?Your case seems pretty far out there to me.  \n Cheers,Jeff", "msg_date": "Wed, 26 Dec 2012 23:03:36 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists? (solved?)" }, { "msg_contents": "\n> The partial index is highly leveraged. If every tuple in the\n> table is updated once, that amounts to every tuple in the index\n > being updated 25,000 times.\n>\n> How so? That sounds like O(n_2) behaviour.\n>\n> If the table has 5 million rows while the index has 200 (active) rows at\n> any given time, then to update every row in the table to null and back\n> again would be 100% turn over of the table. But each such change would\n> lead to an addition and then a deletion from the index. So 100%\n> turnover of the table would be a 5 million / 200 = 25,000 fold turn of\n> the index.\n\nSorry, I was being dense. I misread that as:\n \"every time a single tuple in the table is updated, the entire index\n (every row) is updated\".\nYes, of course your explanation makes sense.\n\n>\n> There is some code that allows a btree index entry to get killed (and so\n> the slot to be reused) without any vacuum, if a scan follows that entry\n> and finds the corresponding tuple in the table no longer visible to\n> anyone. I have not examined this code, and don't know whether it is\n> doing its job but just isn't enough to prevent the bloat, or if for some\n> reason it is not applicable to your situation.\n>\n\nIt looks like my solution is going to be a REINDEX invoked from cron, or \nmaybe just every 100k inserts.\n\n\nIn terms of trying to improve this behaviour for other PG users in the \nfuture, are there any more diagnostics I can do for you? 
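As an aside on the cron approach mentioned above (a sketch only — the 1 MB threshold is arbitrary), the job could first check whether the partial index has actually grown before taking the REINDEX lock:

    SELECT pg_relation_size('tbl_tracker_performance_1_idx') > 1024 * 1024
           AS needs_reindex;
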
Having found a \nspecial case, I'd like to help permanently resolve it if I can.\n\n\nThanks very much again.\n\nBest wishes,\n\nRichard\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 10:49:58 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "\n> The partial index is highly leveraged. If every tuple in the\n> table is updated once, that amounts to every tuple in the index\n> being updated 25,000 times.\n>\n> How so? That sounds like O(n_2) behaviour.\n>\n> If the table has 5 million rows while the index has 200 (active) rows at\n> any given time, then to update every row in the table to null and back\n> again would be 100% turn over of the table. But each such change would\n> lead to an addition and then a deletion from the index. So 100%\n> turnover of the table would be a 5 million / 200 = 25,000 fold turn of\n> the index.\n\nSorry, I was being dense. I misread that as:\n \"every time a single tuple in the table is updated, the entire index\n (every row) is updated\".\nYes, of course your explanation makes sense.\n\n>\n> There is some code that allows a btree index entry to get killed (and so\n> the slot to be reused) without any vacuum, if a scan follows that entry\n> and finds the corresponding tuple in the table no longer visible to\n> anyone. I have not examined this code, and don't know whether it is\n> doing its job but just isn't enough to prevent the bloat, or if for some\n> reason it is not applicable to your situation.\n>\n\nIt looks like my solution is going to be a REINDEX invoked from cron, or \nmaybe just every 100k inserts.\n\n\nIn terms of trying to improve this behaviour for other PG users in the \nfuture, are there any more diagnostics I can do for you? Having found a \nspecial case, I'd like to help permanently resolve it if I can.\n\n\nThanks very much again.\n\nBest wishes,\n\nRichard\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 10:52:01 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "On Thursday, December 20, 2012, Jeff Janes wrote:\n\n> On Thursday, December 20, 2012, Richard Neill wrote:\n>\n>>\n>>\n>> -> Bitmap Index Scan on tbl_tracker_exit_state_idx\n>> (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277\n>> loops=1)\n>>\n>\n> This is finding 100 times more rows than it thinks it will. If that could\n> be fixed, surely this plan would not look as good. But then, it would\n> probably just switch to another plan that is not the one you want, either.\n>\n\nI guess the issue here is that the histogram postgres uses to estimate the\nnumber of rows that will be found is based on visible rows, and it is\ncorrectly estimating the number of visible rows that will be found. 
And\nthat is the relevant thing to pass up to a higher join for its estimation.\n But for estimating the number of blocks a given index scan will access,\nthe right thing would be the number of tuples visited, not the number of\nthem found to be visible. So that is where this plan goes systematically\nwrong.\n\nI guess the correct thing would be for postgres to keep two histograms, one\nof all tuples and one of all visible tuples, and to produce different\nselectivity estimates for different purposes. But I don't see that change\ngetting made. It is only meaningful in cases where there is a fundamental\nskew in distribution between visible tuples and\ninvisible-but-as-yet-unvacuumed tuples.\n\nI think that that fundamental skew is the source of both the\nunderestimation of the bitmap scan cost, and overestimation of the partial\nindex scan (although I can't get it to overestimate that be anywhere near\nthe amount you were seeing).\n\nI still think your best bet is to get rid of the partial index and trade\nthe full one on (parcel_id_code) for one on (parcel_id_code,exit_state). I\nthink that will be much less fragile than reindexing in a cron job.\n\nCheers,\n\nJeff\n\n>\n\nOn Thursday, December 20, 2012, Jeff Janes wrote:On Thursday, December 20, 2012, Richard Neill wrote:\n\n         ->  Bitmap Index Scan on tbl_tracker_exit_state_idx (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277 loops=1)This is finding 100 times more rows than it thinks it will.  If that could be fixed, surely this plan would not look as good.  But then, it would probably just switch to another plan that is not the one you want, either.\nI guess the issue here is that the histogram postgres uses to estimate the number of rows that will be found is based on visible rows, and it is correctly estimating the number of visible rows that will be found.  And that is the relevant thing to pass up to a higher join for its estimation.  But for estimating the number of blocks a given index scan will access, the right thing would be the number of tuples visited, not the number of them found to be visible.  So that is where this plan goes systematically wrong.  \nI guess the correct thing would be for postgres to keep two histograms, one of all tuples and one of all visible tuples, and to produce different selectivity estimates for different purposes.  But I don't see that change getting made.  It is only meaningful in cases where there is a fundamental skew in distribution between visible tuples and invisible-but-as-yet-unvacuumed tuples.\nI think that that fundamental skew is the source of both the underestimation of the bitmap scan cost, and overestimation of the partial index scan (although I can't get it to overestimate that be anywhere near the amount you were seeing).\nI still think your best bet is to get rid of the partial index and trade the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).  I think that will be much less fragile than reindexing in a cron job.\nCheers,Jeff", "msg_date": "Thu, 27 Dec 2012 11:17:16 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" 
}, { "msg_contents": "\n\nOn 27/12/12 16:17, Jeff Janes wrote:\n>\n> I still think your best bet is to get rid of the partial index and trade\n> the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).\n> I think that will be much less fragile than reindexing in a cron job.\n>\n\nSo, at the moment, I have 3 indexes:\n full: parcel_id_code\n full: exit_state\n full: parcel_id_code where exit state is null\n\nAm I right that when you suggest just a single, joint index\n (parcel_id_code,exit_state)\ninstead of all 3 of the others,\n\nit will allow me to optimally run all of the following? :\n\n1. SELECT * from tbl_tracker WHERE parcel_id_code=22345 AND exit_state \nIS NULL\n\n(this is the one we've been discussing)\n\n\n2. SELECT * from tbl_tracker where parcel_id_code=44533\n\n3. SELECT * from tbl_tracker where exit_code = 2\n\n(2 and 3 are examples of queries I need to run for other purposes, \nunrelated to this thread, but which use the other indexes.).\n\n\nThanks,\n\nRichard\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 16:43:38 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does the query planner use two full indexes, when\n\ta dedicated partial index exists?" }, { "msg_contents": "Richard Neill <[email protected]> writes:\n> So, at the moment, I have 3 indexes:\n> full: parcel_id_code\n> full: exit_state\n> full: parcel_id_code where exit state is null\n\n> Am I right that when you suggest just a single, joint index\n> (parcel_id_code,exit_state)\n> instead of all 3 of the others,\n\nI think he was just recommending replacing the first and third indexes.\n\n> it will allow me to optimally run all of the following? :\n> 1. SELECT * from tbl_tracker WHERE parcel_id_code=22345 AND exit_state \n> IS NULL\n> 2. SELECT * from tbl_tracker where parcel_id_code=44533\n> 3. SELECT * from tbl_tracker where exit_code = 2\n\nYou will need an index with exit_state as the first column to make #3\nperform well --- at least, assuming that an index is going to help at\nall anyway. The rule of thumb is that if a query is going to fetch\nmore than a few percent of a table, an index is not useful because\nit's going to be touching most table pages anyway, so a seqscan will\nwin. I've forgotten now what you said the stats for exit_code values\nother than null were, but it's quite possible that an index is useless\nfor #3.\n\nThese considerations are mostly covered in the manual:\nhttp://www.postgresql.org/docs/9.2/static/indexes.html\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 12:05:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes,\n\twhen a dedicated partial index exists?" 
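One way to check those stats (a sketch; assuming the column is exit_state, as in the earlier messages) is to look at what the planner already knows about the value distribution:

    SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'tbl_tracker' AND attname = 'exit_state';

If each non-null value covers more than a few percent of the table, an index led by exit_state is unlikely to help query 3.
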
}, { "msg_contents": "On Thursday, December 27, 2012, Richard Neill wrote:\n\n>\n>\n> On 27/12/12 16:17, Jeff Janes wrote:\n>\n>>\n>> I still think your best bet is to get rid of the partial index and trade\n>> the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).\n>> I think that will be much less fragile than reindexing in a cron job.\n>>\n>>\n> So, at the moment, I have 3 indexes:\n> full: parcel_id_code\n> full: exit_state\n> full: parcel_id_code where exit state is null\n>\n> Am I right that when you suggest just a single, joint index\n> (parcel_id_code,exit_state)\n> instead of all 3 of the others,\n>\n\nNo, just instead of 1 and 3. You still need an index on (exit_state) in\norder to efficiently satisfy query 3 below.\n\nAlternative, you could keep index 1, and replace 2 and 3 with one on\n(exit_state, parcel_id_code). And in fact this might be the better way to\ngo, because a big problem you are facing is that the (exit_state) index is\nlooking falsely attractive, and the easiest way to overcome that is to get\nrid of that index and replace it with one that can do everything that it\ncan do, but more.\n\nTheoretically there is technique called \"loose scan\" or \"skip scan\" which\ncould allow you to make one index, (exit_state, parcel_id_code) to replace\nall 3 of the above, but postgres does not yet implement that technique. I\nthink there is a way to achieve the same thing using recursive sql. But I\ndoubt it would be worth it, as too much index maintenance is not your root\nproblem.\n\n\n\n> 3. SELECT * from tbl_tracker where exit_code = 2\n>\n\nCheers,\n\nJeff\n\nOn Thursday, December 27, 2012, Richard Neill wrote:\n\nOn 27/12/12 16:17, Jeff Janes wrote:\n\n\nI still think your best bet is to get rid of the partial index and trade\nthe full one on (parcel_id_code) for one on (parcel_id_code,exit_state).\n  I think that will be much less fragile than reindexing in a cron job.\n\n\n\nSo, at the moment, I have 3 indexes:\n  full:     parcel_id_code\n  full:     exit_state\n  full:     parcel_id_code where exit state is null\n\nAm I right that when you suggest just a single, joint index\n    (parcel_id_code,exit_state)\ninstead of all 3 of the others,No, just instead of 1 and 3.  You still need an index on (exit_state) in order to efficiently satisfy query 3 below.Alternative, you could keep index 1, and replace 2 and 3 with one on (exit_state, parcel_id_code).  And in fact this might be the better way to go, because a big problem you are facing is that the (exit_state) index is looking falsely attractive, and the easiest way to overcome that is to get rid of that index and replace it with one that can do everything that it can do, but more.\nTheoretically there is technique called \"loose scan\" or \"skip scan\" which could allow you to make one index, (exit_state, parcel_id_code) to replace all 3 of the above, but postgres does not yet implement that technique.  I think there is a way to achieve the same thing using recursive sql.  But I doubt it would be worth it, as too much index maintenance is not your root problem.\n \n3.  SELECT * from tbl_tracker where exit_code = 2Cheers,Jeff", "msg_date": "Thu, 27 Dec 2012 13:07:17 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" 
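A sketch of that alternative arrangement in DDL (the new index name here is made up; CONCURRENTLY avoids blocking the live insert stream, at the cost of a slower build, and each DROP INDEX still takes a brief exclusive lock):

    -- keep the existing index on (parcel_id_code), then:
    CREATE INDEX CONCURRENTLY tbl_tracker_exit_state_parcel_idx
        ON tbl_tracker (exit_state, parcel_id_code);
    DROP INDEX tbl_tracker_exit_state_idx;
    DROP INDEX tbl_tracker_performance_1_idx;
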
}, { "msg_contents": "On Thursday, December 20, 2012, Jeff Janes wrote:\n\n> On Thursday, December 20, 2012, Tom Lane wrote:\n>\n>>\n>> What I did to try to duplicate Richard's situation was to create a test\n>> table in which all the exit_state values were NULL, then build the\n>> index, then UPDATE all but a small random fraction of the rows to 1,\n>> then vacuum. This results in a rather bloated partial index, but I\n>> think that's probably what he's got given that every record initially\n>> enters the table with NULL exit_state. It would take extremely frequent\n>> vacuuming to keep the partial index from accumulating a lot of dead\n>> entries.\n>>\n>\n>\nOnce I cranked up default_statistics_target, I could start reproducing the\nvery high estimates (5000) for the partial index in 9.1.\n\nAs you say, switching to 9.2 or above lowers it quite a bit, I still get\nsome pretty high estimates, ~100 when 8 would be more accurate.\n\nThe problem is in genericcostestimate\n\n if (index->pages > 1 && index->tuples > 1)\n numIndexPages = ceil(numIndexTuples * index->pages / index->tuples);\n\nThe index->pages should probably not include index pages which are empty.\n Even with aggressive vacuuming, most of the pages in the partial index\nseem to be empty at any given time.\n\nHowever, I don't know if that number is exposed readily. And it seems to\nbe updated too slowly to be useful, if pg_freespace is to be believed.\n\nBut I wonder if it couldn't be clamped to so that we there can be no more\npages than there are tuples.\n\n numIndexPages = ceil(numIndexTuples * Min(1,index->pages /\nindex->tuples));\n\n\n Cheers,\n\nJeff\n\nOn Thursday, December 20, 2012, Jeff Janes wrote:On Thursday, December 20, 2012, Tom Lane wrote:\n\nWhat I did to try to duplicate Richard's situation was to create a test\ntable in which all the exit_state values were NULL, then build the\nindex, then UPDATE all but a small random fraction of the rows to 1,\nthen vacuum.  This results in a rather bloated partial index, but I\nthink that's probably what he's got given that every record initially\nenters the table with NULL exit_state.  It would take extremely frequent\nvacuuming to keep the partial index from accumulating a lot of dead\nentries.Once I cranked up default_statistics_target, I could start reproducing the very high estimates (5000) for the partial index in 9.1.\nAs you say, switching to 9.2 or above lowers it quite a bit, I still get some pretty high estimates, ~100 when 8 would be more accurate.The problem is in genericcostestimate\n    if (index->pages > 1 && index->tuples > 1)        numIndexPages = ceil(numIndexTuples * index->pages / index->tuples);The index->pages should probably not include index pages which are empty.  Even with aggressive vacuuming, most of the pages in the partial index seem to be empty at any given time.\nHowever, I don't know if that number is exposed readily.  And it seems to be updated too slowly to be useful, if pg_freespace is to be believed.But I wonder if it couldn't be clamped to so that we there can be no more pages than there are tuples.\n        numIndexPages = ceil(numIndexTuples * Min(1,index->pages / index->tuples)); Cheers,Jeff", "msg_date": "Thu, 27 Dec 2012 21:35:31 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does the query planner use two full indexes, when a\n\tdedicated partial index exists?" } ]
[ { "msg_contents": "Huan Ruan wrote:\n> Kevin Grittner wrote:\n\n>> Frankly, at 12 microseconds per matched pair of rows, I think\n>> you're doing OK.\n>\n> This plan is the good one, I want the indexscan nested loop join and this\n> is only achieved after making all these costing factors change. Before\n> that, it was hash join and was very slow.\n> \n> However, I'm worried about the config changes being too 'extreme', i.e.\n> both sequential I/O and random I/O have the same cost and being only 0.1.\n> So, I was more wondering why I have to make such dramatic changes to\n> convince the optimiser to use NL join instead of hash join. And also, I'm\n> not sure what impact will these changes have on other queries yet. e.g.\n> will a query that's fine with hash join now choose NL join and runs slower?\n\nI understand the concern, but PostgreSQL doesn't yet have a knob to\nturn for \"cache hit ratio\". You essentially need to build that into\nthe page costs. Since your cache hit ratio (between shared buffers\nand the OS) is so high, the cost of page access relative to CPU\ncosts has declined and there isn't any effective difference between\nsequential and random access. As the level of caching changes, you\nmay need to adjust. In one production environment where there was\nsignificant caching, but far enough from 100% to matter, we tested\nvarious configurations and found the fastest plans being chosen\nwith seq_page_cost = 0.3 and random_page_cost = 0.5. Tune to your\nworkload.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 09:06:29 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hash join vs nested loop join" }, { "msg_contents": "On 21 December 2012 01:06, Kevin Grittner <[email protected]> wrote:\n\n> Huan Ruan wrote:\n> > Kevin Grittner wrote:\n>\n> >> Frankly, at 12 microseconds per matched pair of rows, I think\n> >> you're doing OK.\n> >\n> > This plan is the good one, I want the indexscan nested loop join and this\n> > is only achieved after making all these costing factors change. Before\n> > that, it was hash join and was very slow.\n> >\n> > However, I'm worried about the config changes being too 'extreme', i.e.\n> > both sequential I/O and random I/O have the same cost and being only 0.1.\n> > So, I was more wondering why I have to make such dramatic changes to\n> > convince the optimiser to use NL join instead of hash join. And also, I'm\n> > not sure what impact will these changes have on other queries yet. e.g.\n> > will a query that's fine with hash join now choose NL join and runs\n> slower?\n>\n> I understand the concern, but PostgreSQL doesn't yet have a knob to\n> turn for \"cache hit ratio\". You essentially need to build that into\n> the page costs. Since your cache hit ratio (between shared buffers\n> and the OS) is so high, the cost of page access relative to CPU\n> costs has declined and there isn't any effective difference between\n> sequential and random access. As the level of caching changes, you\n> may need to adjust. In one production environment where there was\n> significant caching, but far enough from 100% to matter, we tested\n> various configurations and found the fastest plans being chosen\n> with seq_page_cost = 0.3 and random_page_cost = 0.5. Tune to your\n> workload.\n>\n>\n> Thanks Kevin. 
I think I get some ideas now that I can try on the\nproduction server when we switch.\n\nOn 21 December 2012 01:06, Kevin Grittner <[email protected]> wrote:\nHuan Ruan wrote:\n> Kevin Grittner wrote:\n\n>> Frankly, at 12 microseconds per matched pair of rows, I think\n>> you're doing OK.\n>\n> This plan is the good one, I want the indexscan nested loop join and this\n> is only achieved after making all these costing factors change. Before\n> that, it was hash join and was very slow.\n>\n> However, I'm worried about the config changes being too 'extreme', i.e.\n> both sequential I/O and random I/O have the same cost and being only 0.1.\n> So, I was more wondering why I have to make such dramatic changes to\n> convince the optimiser to use NL join instead of hash join. And also, I'm\n> not sure what impact will these changes have on other queries yet. e.g.\n> will a query that's fine with hash join now choose NL join and runs slower?\n\nI understand the concern, but PostgreSQL doesn't yet have a knob to\nturn for \"cache hit ratio\". You essentially need to build that into\nthe page costs. Since your cache hit ratio (between shared buffers\nand the OS) is so high, the cost of page access relative to CPU\ncosts has declined and there isn't any effective difference between\nsequential and random access. As the level of caching changes, you\nmay need to adjust. In one production environment where there was\nsignificant caching, but far enough from 100% to matter, we tested\nvarious configurations and found the fastest plans being chosen\nwith seq_page_cost = 0.3 and random_page_cost = 0.5. Tune to your\nworkload.\n\nThanks Kevin. I think I get some ideas now that I can try on the production server when we switch.", "msg_date": "Fri, 21 Dec 2012 08:16:24 +1100", "msg_from": "Huan Ruan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash join vs nested loop join" } ]
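As a sketch of the kind of tuning Kevin describes above (0.3/0.5 are his example figures, and mydb is a placeholder), the page costs can be tried per-session first and then persisted per-database once a good combination is found:

    -- experiment in one session:
    SET seq_page_cost = 0.3;
    SET random_page_cost = 0.5;
    -- ... re-run EXPLAIN ANALYZE on the queries of interest ...

    -- persist for the whole database once satisfied:
    ALTER DATABASE mydb SET seq_page_cost = 0.3;
    ALTER DATABASE mydb SET random_page_cost = 0.5;
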
[ { "msg_contents": "Hello guys\n\n \n\nI’m doing 1.2 Billion inserts into a table partitioned in\n15.\n\n \n\nWhen I target the MASTER table on all the inserts and let\nthe trigger decide what partition to choose from it takes 4 hours.\n\nIf I target the partitioned table directly during the\ninsert I can get 4 times better performance. It takes 1 hour.\n\n \n\n \n\nI’m trying to get more performance while still using the\ntrigger to choose the table, so partitions can be changed without changing the\napplication that inserts the data.\n\n \n\nWhat I noticed that iostat is not showing an I/O bottle\nneck.\n\niostat –xN 1\n\nDevice:        \nrrqm/s   wrqm/s     r/s    \nw/s   rsec/s   wsec/s avgrq-sz avgqu-sz  \nawait  svctm  %util\n\nPgresql--data    \n0.00     0.00    0.00\n8288.00     0.00 66304.00    \n8.00    60.92    7.35  \n0.01   4.30\n\n \n\niostat –m 1\n\nDevice:           \ntps    MB_read/s    MB_wrtn/s   \nMB_read    MB_wrtn\n\ndm-2          \n4096.00        \n0.00       \n16.00         \n0         16\n\n \n\nI also don’t see a CPU bottleneck or context switching\nbottle neck.\n\n \n\nPostgresql does not seem to write more than 16MB/s or 4K\ntransactions per second unless I target each individual partition.\n\n \n\nDid anybody have done some studies on partitioning bulk\ninsert performance? \n\n \n\nAny suggestions on a way to accelerate it ?\n\n \n\n \n\nRunning pgsql 9.2.2 on RHEL 6.3\n\n \n\nMy trigger is pretty straight forward:\n\nCREATE OR REPLACE FUNCTION quotes_insert_trigger()\n\nRETURNS trigger AS $$\n\nDECLARE\n\ntablename varchar(24);\n\nbdate varchar(10);\n\nedate varchar(10);\n\nBEGIN\n\ntablename = 'quotes_' ||\nto_char(new.received_time,'YYYY_MM_DD');\n\nEXECUTE 'INSERT INTO '|| tablename ||' VALUES (($1).*)'\nUSING NEW ;\n\nRETURN NULL;\n\nEND;\n\n$$\n\nLANGUAGE plpgsql;\n\n \n\nCREATE TRIGGER quotes_insert_trigger\n\nBEFORE INSERT ON quotes\n\nFOR EACH ROW EXECUTE PROCEDURE quotes_insert_trigger();\n\n \n\n \n\nThanks\n\nCharles \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 12:29:19 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thu, Dec 20, 2012 at 10:29 AM, Charles Gomes <[email protected]> wrote:\n> Hello guys\n>\n> I’m doing 1.2 Billion inserts into a table partitioned in\n> 15.\n>\n> When I target the MASTER table on all the inserts and let\n> the trigger decide what partition to choose from it takes 4 hours.\n>\n> If I target the partitioned table directly during the\n> insert I can get 4 times better performance. It takes 1 hour.\n>\n> I’m trying to get more performance while still using the\n> trigger to choose the table, so partitions can be changed without changing the\n> application that inserts the data.\n>\n> What I noticed that iostat is not showing an I/O bottle\n> neck.\n\nSNIP\n\n> I also don’t see a CPU bottleneck or context switching\n> bottle neck.\n\nAre you sure? How are you measuring CPU usage? 
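Regarding the dynamic-SQL trigger quoted above: a commonly suggested variant (a sketch only, with made-up example dates, reusing the original function name for illustration) replaces the per-row EXECUTE, whose statement is rebuilt and re-planned for every insert, with static INSERTs whose plans plpgsql can cache. One branch per partition is needed, and whether this closes much of the 4x gap here is untested.

    CREATE OR REPLACE FUNCTION quotes_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        -- one branch per day partition; static SQL lets plpgsql cache the plan
        IF NEW.received_time >= DATE '2012-12-01' AND
           NEW.received_time <  DATE '2012-12-02' THEN
            INSERT INTO quotes_2012_12_01 VALUES (NEW.*);
        ELSIF NEW.received_time >= DATE '2012-12-02' AND
              NEW.received_time <  DATE '2012-12-03' THEN
            INSERT INTO quotes_2012_12_02 VALUES (NEW.*);
        -- ... one ELSIF per remaining partition ...
        ELSE
            RAISE EXCEPTION 'received_time % out of range', NEW.received_time;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;
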
If you've got > 1\ncore, you might need to look at individual cores in which case you\nshould see a single core maxed out.\n\nWithout writing your trigger in C you're not likely to do much better\nthan you're doing now.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 10:39:25 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "None of the cores went to 100%. Looking at top during the inserts I can see several cores working, but never more than 60% busy. The machine has 8 cores (16 in HT).\n The load is spread through the cores, didn't have a single maxed out. However with HT on, technically it is overloaded.\n\n top - 13:14:07 up 7 days, 3:10, 3 users, load average: 0.25, 0.12, 0.10\n Tasks: 871 total, 13 running, 858 sleeping, 0 stopped, 0 zombie\n Cpu(s): 60.6%us, 5.0%sy, 0.0%ni, 34.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\n Mem: 49282716k total, 9311612k used, 39971104k free, 231116k buffers\n Swap: 44354416k total, 171308k used, 44183108k free, 2439608k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 21832 postgres 20 0 22.7g 93m 90m S 15 0.2 0:19.91 postgres: cgomes historical_ticks 10.254.109.10(44093) COPY\n 21817 postgres 20 0 22.7g 92m 89m S 15 0.2 0:20.24 postgres: cgomes historical_ticks 10.254.109.10(44077) idle\n 21842 postgres 20 0 22.7g 96m 93m S 15 0.2 0:20.39 postgres: cgomes historical_ticks 10.254.109.10(44103) COPY\n 21792 postgres 20 0 22.7g 93m 90m R 15 0.2 0:20.34 postgres: cgomes historical_ticks 10.254.109.10(44045) COPY\n 21793 postgres 20 0 22.7g 90m 88m S 15 0.2 0:20.13 postgres: cgomes historical_ticks 10.254.109.10(44048) COPY\n 21806 postgres 20 0 22.7g 94m 91m S 15 0.2 0:20.14 postgres: cgomes historical_ticks 10.254.109.10(44066) COPY\n 21809 postgres 20 0 22.7g 92m 89m S 15 0.2 0:19.82 postgres: cgomes historical_ticks 10.254.109.10(44069) COPY\n 21813 postgres 20 0 22.7g 92m 89m S 15 0.2 0:19.98 postgres: cgomes historical_ticks 10.254.109.10(44073) COPY\n 21843 postgres 20 0 22.7g 95m 92m S 15 0.2 0:20.56 postgres: cgomes historical_ticks 10.254.109.10(44104) COPY\n 21854 postgres 20 0 22.7g 91m 88m S 15 0.2 0:20.08 postgres: cgomes historical_ticks 10.254.109.10(44114) COPY\n 21796 postgres 20 0 22.7g 89m 86m S 14 0.2 0:20.03 postgres: cgomes historical_ticks 10.254.109.10(44056) COPY\n 21797 postgres 20 0 22.7g 92m 90m R 14 0.2 0:20.18 postgres: cgomes historical_ticks 10.254.109.10(44057) COPY\n 21804 postgres 20 0 22.7g 95m 92m S 14 0.2 0:20.28 postgres: cgomes historical_ticks 10.254.109.10(44064) COPY\n 21807 postgres 20 0 22.7g 94m 91m S 14 0.2 0:20.15 postgres: cgomes historical_ticks 10.254.109.10(44067) COPY\n 21808 postgres 20 0 22.7g 92m 89m S 14 0.2 0:20.05 postgres: cgomes historical_ticks 10.254.109.10(44068) COPY\n 21815 postgres 20 0 22.7g 90m 88m S 14 0.2 0:20.13 postgres: cgomes historical_ticks 10.254.109.10(44075) COPY\n 21818 postgres 20 0 22.7g 91m 88m S 14 0.2 0:20.01 postgres: cgomes historical_ticks 10.254.109.10(44078) COPY\n 21825 postgres 20 0 22.7g 92m 89m S 14 0.2 0:20.00 postgres: cgomes historical_ticks 10.254.109.10(44085) COPY\n 21836 postgres 20 0 22.7g 91m 88m R 14 0.2 0:20.22 postgres: cgomes historical_ticks 10.254.109.10(44097) COPY\n 21857 postgres 20 0 22.7g 89m 86m R 14 0.2 0:19.92 postgres: cgomes historical_ticks 10.254.109.10(44118) 
COPY\n 21858 postgres 20 0 22.7g 95m 93m S 14 0.2 0:20.36 postgres: cgomes historical_ticks 10.254.109.10(44119) COPY\n 21789 postgres 20 0 22.7g 92m 89m S 14 0.2 0:20.05 postgres: cgomes historical_ticks 10.254.109.10(44044) COPY\n 21795 postgres 20 0 22.7g 93m 90m S 14 0.2 0:20.27 postgres: cgomes historical_ticks 10.254.109.10(44055) COPY\n 21798 postgres 20 0 22.7g 89m 86m S 14 0.2 0:20.06 postgres: cgomes historical_ticks 10.254.109.10(44058) COPY\n 21800 postgres 20 0 22.7g 93m 90m S 14 0.2 0:20.04 postgres: cgomes historical_ticks 10.254.109.10(44060) COPY\n 21802 postgres 20 0 22.7g 89m 87m S 14 0.2 0:20.10 postgres: cgomes historical_ticks 10.254.109.10(44062) COPY\n\n\n Looks like I will have to disable HT.\n\n\n I've been looking at converting the trigger to C, but could not find\n a good example trigger for partitions written in C to start from. Have\n you heard of anyone implementing the partitioning trigger in C ?\n\n----------------------------------------\n> Date: Thu, 20 Dec 2012 10:39:25 -0700\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n>\n> On Thu, Dec 20, 2012 at 10:29 AM, Charles Gomes <[email protected]> wrote:\n> > Hello guys\n> >\n> > I’m doing 1.2 Billion inserts into a table partitioned in\n> > 15.\n> >\n> > When I target the MASTER table on all the inserts and let\n> > the trigger decide what partition to choose from it takes 4 hours.\n> >\n> > If I target the partitioned table directly during the\n> > insert I can get 4 times better performance. It takes 1 hour.\n> >\n> > I’m trying to get more performance while still using the\n> > trigger to choose the table, so partitions can be changed without changing the\n> > application that inserts the data.\n> >\n> > What I noticed that iostat is not showing an I/O bottle\n> > neck.\n>\n> SNIP\n>\n> > I also don’t see a CPU bottleneck or context switching\n> > bottle neck.\n>\n> Are you sure? How are you measuring CPU usage? If you've got > 1\n> core, you might need to look at individual cores in which case you\n> should see a single core maxed out.\n>\n> Without writing your trigger in C you're not likely to do much better\n> than you're doing now.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 13:55:29 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Charles,\n\n* Charles Gomes ([email protected]) wrote:\n> I’m doing 1.2 Billion inserts into a table partitioned in\n> 15.\n\nDo you end up having multiple threads writing to the same, underlying,\ntables..? If so, I've seen that problem before. Look at pg_locks while\nthings are running and see if there are 'extend' locks that aren't being\nimmediately granted.\n\nBasically, there's a lock that PG has on a per-relation basis to extend\nthe relation (by a mere 8K..) which will block other writers. If\nthere's a lot of contention around that lock, you'll get poor\nperformance and it'll be faster to have independent threads writing\ndirectly to the underlying tables. 
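A sketch of the pg_locks check described above — run it repeatedly while the load is underway; any rows returned are backends waiting on a relation extension lock:

    SELECT locktype, relation::regclass AS rel, mode, pid, granted
    FROM pg_locks
    WHERE locktype = 'extend'
      AND NOT granted;
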
I doubt rewriting the trigger in C\nwill help if the problem is the extent lock.\n\nIf you do get this working well, I'd love to hear what you did to\naccomplish that. Note also that you can get bottle-necked on the WAL\ndata, unless you've taken steps to avoid that WAL.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 20 Dec 2012 15:02:34 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Without hyperthreading CPU still not a bottleneck, while I/O is only 10% utilization.\n\ntop - 14:55:01 up 27 min,  2 users,  load average: 0.17, 0.19, 0.14\nTasks: 614 total,  17 running, 597 sleeping,   0 stopped,   0 zombie\nCpu(s): 73.8%us,  4.3%sy,  0.0%ni, 21.6%id,  0.1%wa,  0.0%hi,  0.1%si,  0.0%st\nMem:  49282716k total,  5855492k used, 43427224k free,    37400k buffers\nSwap: 44354416k total,        0k used, 44354416k free,  1124900k cached\n\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n19903 postgres  20   0 22.7g  34m  32m S  9.6  0.1   0:02.66 postgres: cgomes historical_ticks 10.254.109.10(46103) COPY\n19934 postgres  20   0 22.7g  34m  32m S  9.6  0.1   0:02.61 postgres: cgomes historical_ticks 10.254.109.10(46134) COPY\n19947 postgres  20   0 22.7g  34m  31m S  9.6  0.1   0:02.64 postgres: cgomes historical_ticks 10.254.109.10(46147) COPY\n19910 postgres  20   0 22.7g  34m  32m S  9.2  0.1   0:02.67 postgres: cgomes historical_ticks 10.254.109.10(46110) COPY\n19924 postgres  20   0 22.7g  33m  31m S  9.2  0.1   0:02.65 postgres: cgomes historical_ticks 10.254.109.10(46124) COPY\n19952 postgres  20   0 22.7g  34m  32m R  9.2  0.1   0:02.71 postgres: cgomes historical_ticks 10.254.109.10(46152) COPY\n19964 postgres  20   0 22.7g  34m  32m R  9.2  0.1   0:02.59 postgres: cgomes historical_ticks 10.254.109.10(46164) COPY\n19901 postgres  20   0 22.7g  35m  32m S  8.9  0.1   0:02.66 postgres: cgomes historical_ticks 10.254.109.10(46101) COPY\n19914 postgres  20   0 22.7g  34m  31m S  8.9  0.1   0:02.62 postgres: cgomes historical_ticks 10.254.109.10(46114) COPY\n19923 postgres  20   0 22.7g  34m  31m S  8.9  0.1   0:02.74 postgres: cgomes historical_ticks 10.254.109.10(46123) COPY\n19925 postgres  20   0 22.7g  34m  31m R  8.9  0.1   0:02.65 postgres: cgomes historical_ticks 10.254.109.10(46125) COPY\n19926 postgres  20   0 22.7g  34m  32m S  8.9  0.1   0:02.79 postgres: cgomes historical_ticks 10.254.109.10(46126) COPY\n19929 postgres  20   0 22.7g  34m  31m S  8.9  0.1   0:02.64 postgres: cgomes historical_ticks 10.254.109.10(46129) COPY\n19936 postgres  20   0 22.7g  34m  32m S  8.9  0.1   0:02.72 postgres: cgomes historical_ticks 10.254.109.10(46136) COPY\n\nI believe the bottleneck may be that pgsql has fight with it's siblings to update the indexes. Is there a way good way to add probes to check where things are slowing down ?\n\n\n----------------------------------------\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> Date: Thu, 20 Dec 2012 13:55:29 -0500\n>\n> None of the cores went to 100%. Looking at top during the inserts I can see several cores working, but never more than 60% busy. The machine has 8 cores (16 in HT).\n> The load is spread through the cores, didn't have a single maxed out. 
However with HT on, technically it is overloaded.\n>\n> top - 13:14:07 up 7 days, 3:10, 3 users, load average: 0.25, 0.12, 0.10\n> Tasks: 871 total, 13 running, 858 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 60.6%us, 5.0%sy, 0.0%ni, 34.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\n> Mem: 49282716k total, 9311612k used, 39971104k free, 231116k buffers\n> Swap: 44354416k total, 171308k used, 44183108k free, 2439608k cached\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 21832 postgres 20 0 22.7g 93m 90m S 15 0.2 0:19.91 postgres: cgomes historical_ticks 10.254.109.10(44093) COPY\n> 21817 postgres 20 0 22.7g 92m 89m S 15 0.2 0:20.24 postgres: cgomes historical_ticks 10.254.109.10(44077) idle\n> 21842 postgres 20 0 22.7g 96m 93m S 15 0.2 0:20.39 postgres: cgomes historical_ticks 10.254.109.10(44103) COPY\n> 21792 postgres 20 0 22.7g 93m 90m R 15 0.2 0:20.34 postgres: cgomes historical_ticks 10.254.109.10(44045) COPY\n> 21793 postgres 20 0 22.7g 90m 88m S 15 0.2 0:20.13 postgres: cgomes historical_ticks 10.254.109.10(44048) COPY\n> 21806 postgres 20 0 22.7g 94m 91m S 15 0.2 0:20.14 postgres: cgomes historical_ticks 10.254.109.10(44066) COPY\n> 21809 postgres 20 0 22.7g 92m 89m S 15 0.2 0:19.82 postgres: cgomes historical_ticks 10.254.109.10(44069) COPY\n> 21813 postgres 20 0 22.7g 92m 89m S 15 0.2 0:19.98 postgres: cgomes historical_ticks 10.254.109.10(44073) COPY\n> 21843 postgres 20 0 22.7g 95m 92m S 15 0.2 0:20.56 postgres: cgomes historical_ticks 10.254.109.10(44104) COPY\n> 21854 postgres 20 0 22.7g 91m 88m S 15 0.2 0:20.08 postgres: cgomes historical_ticks 10.254.109.10(44114) COPY\n> 21796 postgres 20 0 22.7g 89m 86m S 14 0.2 0:20.03 postgres: cgomes historical_ticks 10.254.109.10(44056) COPY\n> 21797 postgres 20 0 22.7g 92m 90m R 14 0.2 0:20.18 postgres: cgomes historical_ticks 10.254.109.10(44057) COPY\n> 21804 postgres 20 0 22.7g 95m 92m S 14 0.2 0:20.28 postgres: cgomes historical_ticks 10.254.109.10(44064) COPY\n> 21807 postgres 20 0 22.7g 94m 91m S 14 0.2 0:20.15 postgres: cgomes historical_ticks 10.254.109.10(44067) COPY\n> 21808 postgres 20 0 22.7g 92m 89m S 14 0.2 0:20.05 postgres: cgomes historical_ticks 10.254.109.10(44068) COPY\n> 21815 postgres 20 0 22.7g 90m 88m S 14 0.2 0:20.13 postgres: cgomes historical_ticks 10.254.109.10(44075) COPY\n> 21818 postgres 20 0 22.7g 91m 88m S 14 0.2 0:20.01 postgres: cgomes historical_ticks 10.254.109.10(44078) COPY\n> 21825 postgres 20 0 22.7g 92m 89m S 14 0.2 0:20.00 postgres: cgomes historical_ticks 10.254.109.10(44085) COPY\n> 21836 postgres 20 0 22.7g 91m 88m R 14 0.2 0:20.22 postgres: cgomes historical_ticks 10.254.109.10(44097) COPY\n> 21857 postgres 20 0 22.7g 89m 86m R 14 0.2 0:19.92 postgres: cgomes historical_ticks 10.254.109.10(44118) COPY\n> 21858 postgres 20 0 22.7g 95m 93m S 14 0.2 0:20.36 postgres: cgomes historical_ticks 10.254.109.10(44119) COPY\n> 21789 postgres 20 0 22.7g 92m 89m S 14 0.2 0:20.05 postgres: cgomes historical_ticks 10.254.109.10(44044) COPY\n> 21795 postgres 20 0 22.7g 93m 90m S 14 0.2 0:20.27 postgres: cgomes historical_ticks 10.254.109.10(44055) COPY\n> 21798 postgres 20 0 22.7g 89m 86m S 14 0.2 0:20.06 postgres: cgomes historical_ticks 10.254.109.10(44058) COPY\n> 21800 postgres 20 0 22.7g 93m 90m S 14 0.2 0:20.04 postgres: cgomes historical_ticks 10.254.109.10(44060) COPY\n> 21802 postgres 20 0 22.7g 89m 87m S 14 0.2 0:20.10 postgres: cgomes historical_ticks 10.254.109.10(44062) COPY\n>\n>\n> Looks like I will have to disable HT.\n>\n>\n> I've been looking at converting the trigger to C, but could 
not find\n> a good example trigger for partitions written in C to start from. Have\n> you heard of anyone implementing the partitioning trigger in C ?\n>\n> ----------------------------------------\n> > Date: Thu, 20 Dec 2012 10:39:25 -0700\n> > Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> > From: [email protected]\n> > To: [email protected]\n> > CC: [email protected]\n> >\n> > On Thu, Dec 20, 2012 at 10:29 AM, Charles Gomes <[email protected]> wrote:\n> > > Hello guys\n> > >\n> > > I’m doing 1.2 Billion inserts into a table partitioned in\n> > > 15.\n> > >\n> > > When I target the MASTER table on all the inserts and let\n> > > the trigger decide what partition to choose from it takes 4 hours.\n> > >\n> > > If I target the partitioned table directly during the\n> > > insert I can get 4 times better performance. It takes 1 hour.\n> > >\n> > > I’m trying to get more performance while still using the\n> > > trigger to choose the table, so partitions can be changed without changing the\n> > > application that inserts the data.\n> > >\n> > > What I noticed that iostat is not showing an I/O bottle\n> > > neck.\n> >\n> > SNIP\n> >\n> > > I also don’t see a CPU bottleneck or context switching\n> > > bottle neck.\n> >\n> > Are you sure? How are you measuring CPU usage? If you've got > 1\n> > core, you might need to look at individual cores in which case you\n> > should see a single core maxed out.\n> >\n> > Without writing your trigger in C you're not likely to do much better\n> > than you're doing now.\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 15:08:33 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thu, Dec 20, 2012 at 9:29 AM, Charles Gomes <[email protected]> wrote:\n> Hello guys\n>\n>\n>\n> I’m doing 1.2 Billion inserts into a table partitioned in\n> 15.\n>\n>\n>\n> When I target the MASTER table on all the inserts and let\n> the trigger decide what partition to choose from it takes 4 hours.\n>\n> If I target the partitioned table directly during the\n> insert I can get 4 times better performance. It takes 1 hour.\n\nHow do you target them directly? By implementing the\n\"trigger-equivalent-code\" in the application code tuple by tuple, or\nby pre-segregating the tuples and then bulk loading each segment to\nits partition?\n\nWhat if you get rid of the partitioning and just load data to the\nmaster, is that closer to 4 hours or to 1 hour?\n\n...\n>\n>\n> What I noticed that iostat is not showing an I/O bottle\n> neck.\n>\n> iostat –xN 1\n>\n> Device:\n> rrqm/s wrqm/s r/s\n> w/s rsec/s wsec/s avgrq-sz avgqu-sz\n> await svctm %util\n>\n> Pgresql--data\n> 0.00 0.00 0.00\n> 8288.00 0.00 66304.00\n> 8.00 60.92 7.35\n> 0.01 4.30\n\n8288 randomly scattered writes per second sound like enough to\nbottleneck a pretty impressive RAID. 
Or am I misreading that?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 14:31:44 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Yes, I'm doing multiple threads inserting to the same tables.\nI don't think the WAL is the issue as I even tried going ASYNC (non acid), disabled sync after writes, however still didn't got able to push full performance.\n\nI've checked the locks and I see lots of ExclusiveLock's with:\nselect  * from pg_locks order by mode\n\n\n   locktype    | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |           mode           | granted | fastpath \n---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+--------------------------+---------+----------\n relation      |    16385 |    19295 |      |       |            |               |         |       |          | 72/18              | 19879 | AccessShareLock          | t       | t\n relation      |    16385 |    11069 |      |       |            |               |         |       |          | 76/32              | 19881 | AccessShareLock          | t       | t\n virtualxid    |          |          |      |       | 56/34      |               |         |       |          | 56/34              | 17952 | ExclusiveLock            | t       | t\n virtualxid    |          |          |      |       | 27/33      |               |         |       |          | 27/33              | 17923 | ExclusiveLock            | t       | t\n virtualxid    |          |          |      |       | 6/830      |               |         |       |          | 6/830              | 17902 | ExclusiveLock            | t       | t\n virtualxid    |          |          |      |       | 62/34      |               |         |       |          | 62/34              | 17959 | ExclusiveLock            | t       | t\n virtualxid    |          |          |      |       | 51/34      |               |         |       |          | 51/34              | 17947 | ExclusiveLock            | t       | t\n virtualxid    |          |          |      |       | 36/34      |               |         |       |          | 36/34              | 17932 | ExclusiveLock            | t       | t\n virtualxid    |          |          |      |       | 10/830     |               |         |       |          | 10/830             | 17906 | \n.................(about 56 of those)\nExclusiveLock            | t       | t\n transactionid |          |          |      |       |            |         30321 |         |       |          | 55/33              | 17951 | ExclusiveLock            | t       | f\n transactionid |          |          |      |       |            |         30344 |         |       |          | 19/34              | 17912 | ExclusiveLock            | t       | f\n transactionid |          |          |      |       |            |         30354 |         |       |          | 3/834              | 17898 | ExclusiveLock            | t       | f\n transactionid |          |          |      |       |            |         30359 |         |       |          | 50/34              | 17946 | ExclusiveLock            | t       | f\n transactionid |          |        
  |      |       |            |         30332 |         |       |          | 9/830              | 17905 | ExclusiveLock            | t       | f\n transactionid |          |          |      |       |            |         30294 |         |       |          | 37/33              | 17933 | ExclusiveLock            | t       | f\n transactionid |          |          |      |       |            |         30351 |         |       |          | 38/34              | 17934 | ExclusiveLock            | t       | f\n transactionid |          |          |      |       |            |         30326 |         |       |          | 26/33              | 17922 | ExclusiveLock            | t       | f\n.................(about 52 of those)\n relation      |    16385 |    19291 |      |       |            |               |         |       |          | 72/18              | 19879 | ShareUpdateExclusiveLock | t       | f\n(3 of those)\n relation      |    16385 |    19313 |      |       |            |               |         |       |          | 33/758             | 17929 | RowExclusiveLock         | t       | t\n(211 of those)\n\n\nHowever I don't see any of the EXTEND locks mentioned.\n\nI would give a try translating the trigger to C but I can't code it without a good sample to start from, if anyone has one and would like to share I would love to start from it and share with other people so everyone can benefit.\n\n----------------------------------------\n> Date: Thu, 20 Dec 2012 15:02:34 -0500\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>\n> Charles,\n>\n> * Charles Gomes ([email protected]) wrote:\n> > I’m doing 1.2 Billion inserts into a table partitioned in\n> > 15.\n>\n> Do you end up having multiple threads writing to the same, underlying,\n> tables..? If so, I've seen that problem before. Look at pg_locks while\n> things are running and see if there are 'extend' locks that aren't being\n> immediately granted.\n>\n> Basically, there's a lock that PG has on a per-relation basis to extend\n> the relation (by a mere 8K..) which will block other writers. If\n> there's a lot of contention around that lock, you'll get poor\n> performance and it'll be faster to have independent threads writing\n> directly to the underlying tables. I doubt rewriting the trigger in C\n> will help if the problem is the extent lock.\n>\n> If you do get this working well, I'd love to hear what you did to\n> accomplish that. Note also that you can get bottle-necked on the WAL\n> data, unless you've taken steps to avoid that WAL.\n>\n> Thanks,\n>\n> Stephen \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 17:43:22 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Hi,\n\nOn 21 December 2012 04:29, Charles Gomes <[email protected]> wrote:\n> When I target the MASTER table on all the inserts and let\n> the trigger decide what partition to choose from it takes 4 hours.\n>\n> If I target the partitioned table directly during the\n> insert I can get 4 times better performance. It takes 1 hour.\n\nYes, that's my experience as well. 
Triggers are the slowest.\nPerformance of \"DO INSTEAD\" rule is close to direct inserts but rule\nsetup is complex (each partition needs one):\n\n create or replace rule <master_table>_insert_<partition_name> as on\ninsert to <master_table>\n where new.<part_column> >= ... and\nnew.<part_column> < ....\n do instead\n insert into <master_table>_<partition_name>\nvalues (new.*)\n\nThe best is used to direct inserts (into partition) if you can.\n\n--\nOndrej Ivanic\n(http://www.linkedin.com/in/ondrejivanic)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Dec 2012 09:50:49 +1100", "msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Jeff, \n\nThe 8288 writes are fine, as the array has a BBU, it's fine. You see about 4% of the utilization.\n\nTo target directly instead of doing :\nINSERT INTO TABLE VALUES ()\nI use:\nINSERT INTO TABLE_PARTITION_01 VALUES()\n\nBy targeting it I see a huge performance increase.\n\nI haven't tested using 1Billion rows in a single table. The issue is that in the future it will grow to more than 1 billion rows, it will get to about 4Billion rows and that's when I believe partition would be a major improvement.\n\n\n----------------------------------------\n> Date: Thu, 20 Dec 2012 14:31:44 -0800\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n>\n> On Thu, Dec 20, 2012 at 9:29 AM, Charles Gomes <[email protected]> wrote:\n> > Hello guys\n> >\n> >\n> >\n> > I’m doing 1.2 Billion inserts into a table partitioned in\n> > 15.\n> >\n> >\n> >\n> > When I target the MASTER table on all the inserts and let\n> > the trigger decide what partition to choose from it takes 4 hours.\n> >\n> > If I target the partitioned table directly during the\n> > insert I can get 4 times better performance. It takes 1 hour.\n>\n> How do you target them directly? By implementing the\n> \"trigger-equivalent-code\" in the application code tuple by tuple, or\n> by pre-segregating the tuples and then bulk loading each segment to\n> its partition?\n>\n> What if you get rid of the partitioning and just load data to the\n> master, is that closer to 4 hours or to 1 hour?\n>\n> ...\n> >\n> >\n> > What I noticed that iostat is not showing an I/O bottle\n> > neck.\n> >\n> > iostat –xN 1\n> >\n> > Device:\n> > rrqm/s wrqm/s r/s\n> > w/s rsec/s wsec/s avgrq-sz avgqu-sz\n> > await svctm %util\n> >\n> > Pgresql--data\n> > 0.00 0.00 0.00\n> > 8288.00 0.00 66304.00\n> > 8.00 60.92 7.35\n> > 0.01 4.30\n>\n> 8288 randomly scattered writes per second sound like enough to\n> bottleneck a pretty impressive RAID. 
Or am I misreading that?\n>\n> Cheers,\n>\n> Jeff\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 17:56:24 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "True, that's the same I feel, I will be looking to translate the trigger to C if I can find good examples, that should accelerate. \nUsing rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.\n\n----------------------------------------\n> Date: Fri, 21 Dec 2012 09:50:49 +1100\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n>\n> Hi,\n>\n> On 21 December 2012 04:29, Charles Gomes <[email protected]> wrote:\n> > When I target the MASTER table on all the inserts and let\n> > the trigger decide what partition to choose from it takes 4 hours.\n> >\n> > If I target the partitioned table directly during the\n> > insert I can get 4 times better performance. It takes 1 hour.\n>\n> Yes, that's my experience as well. Triggers are the slowest.\n> Performance of \"DO INSTEAD\" rule is close to direct inserts but rule\n> setup is complex (each partition needs one):\n>\n> create or replace rule <master_table>_insert_<partition_name> as on\n> insert to <master_table>\n> where new.<part_column> >= ... and\n> new.<part_column> < ....\n> do instead\n> insert into <master_table>_<partition_name>\n> values (new.*)\n>\n> The best is used to direct inserts (into partition) if you can.\n>\n> --\n> Ondrej Ivanic\n> (http://www.linkedin.com/in/ondrejivanic)\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 17:59:18 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Charles Gomes <[email protected]> writes:\n> Using rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.\n\nYou should probably rethink that plan anyway. 
The existing support for\npartitioning is not meant to support hundreds of partitions; you're\ngoing to be bleeding performance in a lot of places if you insist on\ndoing that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 18:39:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thu, Dec 20, 2012 at 4:39 PM, Tom Lane <[email protected]> wrote:\n> Charles Gomes <[email protected]> writes:\n>> Using rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.\n>\n> You should probably rethink that plan anyway. The existing support for\n> partitioning is not meant to support hundreds of partitions; you're\n> going to be bleeding performance in a lot of places if you insist on\n> doing that.\n\nA couple of points:\n\n1: In my experience hundreds is OK performance wise, but as you\napproach thousands you fall off a cliff, and performance is terrible.\nSo at the 3 to 4 year mark daily partition tables will definitely be\nhaving problems.\n\n2: A good way around this is to have partitions for the last x days,\nlast x weeks or months before that, and x years going however far\nback. This keeps the number of partitions low. Just dump the oldest\nday into a weekly partition, til the next week starts, then dump the\noldest week into monthly etc. As long as you have lower traffic times\nof day or enough bandwidth it works pretty well. Or you can just use\ndaily partitions til things start going boom and fix it all at a later\ndate. It's probably better to be proactive tho.\n\n3: Someone above mentioned rules being faster than triggers. In my\nexperience they're WAY slower than triggers but maybe that was just on\nthe older pg versions (8.3 and lower) we were doing this on. I'd be\ninterested in seeing some benchmarks if rules have gotten faster or I\nwas just doing it wrong.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 20 Dec 2012 17:19:52 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thursday, December 20, 2012, Charles Gomes wrote:\n\n> Jeff,\n>\n> The 8288 writes are fine, as the array has a BBU, it's fine. You see about\n> 4% of the utilization.\n>\n\nBBU is great for latency, but it doesn't do much for throughput, unless it\nis doing write combining behind the scenes. Is it HDD or SSD behind the\nBBU? Have you bench-marked it on randomly scattered 8k writes?\n\nI've seen %util reports that were low while watching a strace showed\nobvious IO freezes. So I don't know how much faith to put into low %util.\n\n\n\n>\n> To target directly instead of doing :\n> INSERT INTO TABLE VALUES ()\n> I use:\n> INSERT INTO TABLE_PARTITION_01 VALUES()\n>\n\nBut how is it deciding what partition to use? 
Does it have to re-decide\nfor every row, or does each thread serve only one partition throughout its\nlife and so makes the decision only once?\n\n\n\n>\n> By targeting it I see a huge performance increase.\n>\n\nBut is that because by targeting you are by-passing the the over-head of\ntriggers, or is it because you are loading the rows in an order which leads\nto more efficient index maintenance?\n\n\n\n> I haven't tested using 1Billion rows in a single table. The issue is that\n> in the future it will grow to more than 1 billion rows, it will get to\n> about 4Billion rows and that's when I believe partition would be a major\n> improvement.\n>\n\nThe way that partitioning gives you performance improvements is by you\nembracing the partitioning, for example by targeting the loading to just\none partition without any indexes, creating indexes, and then atomically\nattaching it to the table. If you wish to have partitions, but want to use\ntriggers to hide that partitioning from you, then I don't think you can\nexpect to get much of a speed up through using partitions.\n\nAny way, the way I would approach it would be to load to a single\nun-partitioned table, and also load to a single dummy-partitioned table\nwhich uses a trigger that looks like the one you want to use for real, but\ndirects all rows to a single partition. If these loads take the same time,\nyou know it is not the trigger which is limiting.\n\nCheers,\n\nJeff\n\nOn Thursday, December 20, 2012, Charles Gomes wrote:Jeff,\n\nThe 8288 writes are fine, as the array has a BBU, it's fine. You see about 4% of the utilization.BBU is great for latency, but it doesn't do much for throughput, unless it is doing write combining behind the scenes.  Is it HDD or SSD behind the BBU?  Have you bench-marked it on randomly scattered 8k writes?\nI've seen %util reports that were low while watching a strace showed obvious IO freezes.  So I don't know how much faith to put into low %util.   \n\nTo target directly instead of doing :\nINSERT INTO TABLE VALUES ()\nI use:\nINSERT INTO TABLE_PARTITION_01 VALUES()But how is it deciding what partition to use?  Does it have to re-decide for every row, or does each thread serve only one partition throughout its life and so makes the decision only once?\n \n\nBy targeting it I see a huge performance increase.But is that because by targeting you are by-passing the the over-head of triggers, or is it because you are loading the rows in an order which leads to more efficient index maintenance?\n \nI haven't tested using 1Billion rows in a single table. The issue is that in the future it will grow to more than 1 billion rows, it will get to about 4Billion rows and that's when I believe partition would be a major improvement.\nThe way that partitioning gives you performance improvements is by you embracing the partitioning, for example by targeting the loading to just one partition without any indexes, creating indexes, and then atomically attaching it to the table.  If you wish to have partitions, but want to use triggers to hide that partitioning from you, then I don't think you can expect to get much of a speed up through using partitions.\nAny way, the way I would approach it would be to load to a single un-partitioned table, and also load to a single dummy-partitioned table which uses a trigger that looks like the one you want to use for real, but directs all rows to a single partition.  
If these loads take the same time, you know it is not the trigger which is limiting.\nCheers,Jeff", "msg_date": "Thu, 20 Dec 2012 19:24:09 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Tom, I may have to rethink it, so I'm going to have about 100 Million rows per day (5 days a week) 2 Billion per month. My point on partitioning was to be able to store 6 months of data in a single machine. About 132 partitions in a total of 66 billion rows.\n\n\n----------------------------------------\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> Date: Thu, 20 Dec 2012 18:39:07 -0500\n>\n> Charles Gomes <[email protected]> writes:\n> > Using rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.\n>\n> You should probably rethink that plan anyway. The existing support for\n> partitioning is not meant to support hundreds of partitions; you're\n> going to be bleeding performance in a lot of places if you insist on\n> doing that.\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Dec 2012 09:14:46 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "The BBU does combines the writes.\n\nI've benchmarked using a single table and it took 1:34:21.549959 to insert 1188000000 rows. (70 writers to a single table)\n\nI've also benchmarked having writers targeting individual partitions and they get the same job done in 1 Hour.\n\nI/O is definitely not the botleneck.\n\nWithout changing hardware it accelerates things almost 4 times, looks like to be a delay on the way Postgresql handles the partitions or the time taking for the trigger to select what partition to insert.\n\n\nWhen targeting I issue commands that insert directly into the partition \"INSERT INTO quotes_DATE VALUES() ..,..,...,.., \" 10k rows at time.\nWhen not targeting I leave to the trigger to decide:\n\n\n\nCREATE OR REPLACE FUNCTION quotes_insert_trigger()RETURNS trigger AS $$\n\nDECLARE\n\ntablename varchar(24);\n\nbdate varchar(10);\n\nedate varchar(10);\n\nBEGIN\n\ntablename = 'quotes_' || to_char(new.received_time,'YYYY_MM_DD');\n\nEXECUTE 'INSERT INTO '|| tablename ||' VALUES (($1).*)'\nUSING NEW ;\n\nRETURN NULL;\n\nEND;\n\n$$\n\nLANGUAGE plpgsql;\n\n\nMaybe translating this trigger to C could help. But I haven't heart anyone that did use partitioning with a trigger in C and I don't have the know how on it without examples.\n\n________________________________\n> Date: Thu, 20 Dec 2012 19:24:09 -0800 \n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table \n> From: [email protected] \n> To: [email protected] \n> CC: [email protected] \n> \n> \n> \n> On Thursday, December 20, 2012, Charles Gomes wrote: \n> Jeff, \n> \n> The 8288 writes are fine, as the array has a BBU, it's fine. You see \n> about 4% of the utilization. 
\n> \n> BBU is great for latency, but it doesn't do much for throughput, unless \n> it is doing write combining behind the scenes. Is it HDD or SSD behind \n> the BBU? Have you bench-marked it on randomly scattered 8k writes? \n> \n> I've seen %util reports that were low while watching a strace showed \n> obvious IO freezes. So I don't know how much faith to put into low \n> %util. \n> \n> \n> \n> To target directly instead of doing : \n> INSERT INTO TABLE VALUES () \n> I use: \n> INSERT INTO TABLE_PARTITION_01 VALUES() \n> \n> But how is it deciding what partition to use? Does it have to \n> re-decide for every row, or does each thread serve only one partition \n> throughout its life and so makes the decision only once? \n> \n> \n> \n> By targeting it I see a huge performance increase. \n> \n> But is that because by targeting you are by-passing the the over-head \n> of triggers, or is it because you are loading the rows in an order \n> which leads to more efficient index maintenance? \n> \n> \n> I haven't tested using 1Billion rows in a single table. The issue is \n> that in the future it will grow to more than 1 billion rows, it will \n> get to about 4Billion rows and that's when I believe partition would be \n> a major improvement. \n> \n> The way that partitioning gives you performance improvements is by you \n> embracing the partitioning, for example by targeting the loading to \n> just one partition without any indexes, creating indexes, and then \n> atomically attaching it to the table. If you wish to have partitions, \n> but want to use triggers to hide that partitioning from you, then I \n> don't think you can expect to get much of a speed up through using \n> partitions. \n> \n> Any way, the way I would approach it would be to load to a single \n> un-partitioned table, and also load to a single dummy-partitioned table \n> which uses a trigger that looks like the one you want to use for real, \n> but directs all rows to a single partition. If these loads take the \n> same time, you know it is not the trigger which is limiting. \n> \n> Cheers, \n> \n> Jeff \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Dec 2012 09:30:07 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thursday, December 20, 2012, Charles Gomes wrote:\n\n> Without hyperthreading CPU still not a bottleneck, while I/O is only 10%\n> utilization.\n>\n> top - 14:55:01 up 27 min, 2 users, load average: 0.17, 0.19, 0.14\n> Tasks: 614 total, 17 running, 597 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 73.8%us, 4.3%sy, 0.0%ni, 21.6%id, 0.1%wa, 0.0%hi, 0.1%si,\n> 0.0%st\n>\n\n\n...\n\n\n> I believe the bottleneck may be that pgsql has fight with it's siblings to\n> update the indexes.\n\n\nI think that should mostly show up as idle or wait, not as user time.\n Since your at 75% user already, you couldn't expect more than a 33%\nimprovement by fixing that, assuming that that were the problem.\n\n\n> Is there a way good way to add probes to check where things are slowing\ndown ?\n\nWhat happens if you just drop the indexes as an experiment? That should\nput a hard limit on the amount the indexes can be slowing you down.\n\nI like oprofile to give you good bottom level profile of where CPU time is\ngoing. 
Unfortunately, it will probably just show you \"AllocSetAlloc\".\n Also, I don't trust it on virtualized systems, if you are on one of those.\n\nThere are many other ways of approaching it, but mostly you have to already\nhave a good theory about what is going on in order know which one to use or\nto interpret the results, and many of them require you to make custom\ncompiles of the postgres server code.\n\n\nCheers,\n\nJeff\n\nOn Thursday, December 20, 2012, Charles Gomes wrote:Without hyperthreading CPU still not a bottleneck, while I/O is only 10% utilization.\n\ntop - 14:55:01 up 27 min,  2 users,  load average: 0.17, 0.19, 0.14\nTasks: 614 total,  17 running, 597 sleeping,   0 stopped,   0 zombie\nCpu(s): 73.8%us,  4.3%sy,  0.0%ni, 21.6%id,  0.1%wa,  0.0%hi,  0.1%si,  0.0%st... \n\nI believe the bottleneck may be that pgsql has fight with it's siblings to update the indexes. I think that should mostly show up as idle or wait, not as user time.  Since your at 75% user already, you couldn't expect more than a 33% improvement by fixing that, assuming that that were the problem.\n> Is there a way good way to add probes to check where things are slowing down ?What happens if you just drop the indexes as an experiment?  That should put a hard limit on the amount the indexes can be slowing you down.\nI like oprofile to give you good bottom level profile of where CPU time is going.  Unfortunately, it will probably just show you \"AllocSetAlloc\".  Also, I don't trust it on virtualized systems, if you are on one of those.\nThere are many other ways of approaching it, but mostly you have to already have a good theory about what is going on in order know which one to use or to interpret the results, and many of them require you to make custom compiles of the postgres server code.\n Cheers,Jeff", "msg_date": "Sun, 23 Dec 2012 14:55:15 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thursday, December 20, 2012, Charles Gomes wrote:\n\n> True, that's the same I feel, I will be looking to translate the trigger\n> to C if I can find good examples, that should accelerate.\n>\n\nI think your performance bottleneck is almost certainly the dynamic SQL.\n Using C to generate that dynamic SQL isn't going to help much, because it\nis still the SQL engine that has to parse, plan, and execute it.\n\nAre the vast majority if your inserts done on any given day for records\nfrom that same day or the one before; or are they evenly spread over the\npreceding year? If the former, you could use static SQL in IF and ELSIF\nfor those days, and fall back on the dynamic SQL for the exceptions in the\nELSE block. Of course that means you have to update the trigger every day.\n\n\n\n> Using rules would be totally bad as I'm partitioning daily and after one\n> year having 365 lines of IF won't be fun to maintain.\n>\n\nMaintaining 365 lines of IF is what Perl was invented for. That goes for\ntriggers w/ static SQL as well as for rules.\n\nIf you do the static SQL in a trigger and the dates of the records are\nevenly scattered over the preceding year, make sure your IFs are nested\nlike a binary search, not a linear search. 
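\n\nUntested sketch of what I mean, using just the last four of your daily\npartitions (table names taken from your trigger, cut-over dates made up\nfor illustration):\n\n    IF new.received_time >= DATE '2012-09-23' THEN\n        IF new.received_time >= DATE '2012-09-24' THEN\n            INSERT INTO quotes_2012_09_24 VALUES (NEW.*);\n        ELSE\n            INSERT INTO quotes_2012_09_23 VALUES (NEW.*);\n        END IF;\n    ELSE\n        IF new.received_time >= DATE '2012-09-22' THEN\n            INSERT INTO quotes_2012_09_22 VALUES (NEW.*);\n        ELSE\n            INSERT INTO quotes_2012_09_21 VALUES (NEW.*);\n        END IF;\n    END IF;\n    RETURN NULL;\n\nEach extra level costs only one more comparison, so even a full year of\ndaily partitions is only nine or so comparisons deep.\n\n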
And if they are mostly for\n\"today's\" date, then make sure you search backwards.\n\nCheers,\n\nJeff\n\nOn Thursday, December 20, 2012, Charles Gomes wrote:True, that's the same I feel, I will be looking to translate the trigger to C if I can find good examples, that should accelerate.\nI think your performance bottleneck is almost certainly the dynamic SQL.  Using C to generate that dynamic SQL isn't going to help much, because it is still the SQL engine that has to parse, plan, and execute it.\nAre the vast majority if your inserts done on any given day for records from that same day or the one before; or are they evenly spread over the preceding year?  If the former, you could use static SQL in IF and ELSIF for those days, and fall back on the dynamic SQL for the exceptions in the ELSE block.  Of course that means you have to update the trigger every day.\n \nUsing rules would be totally bad as I'm partitioning daily and after one year having 365 lines of IF won't be fun to maintain.Maintaining 365 lines of IF is what Perl was invented for.  That goes for triggers w/ static SQL as well as for rules.\nIf you do the static SQL in a trigger and the dates of the records are evenly scattered over the preceding year, make sure your IFs are nested like a binary search, not a linear search.  And if they are mostly for \"today's\" date, then make sure you search backwards.\nCheers,Jeff", "msg_date": "Sun, 23 Dec 2012 14:55:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "________________________________\n> Date: Sun, 23 Dec 2012 14:55:16 -0800 \n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table \n> From: [email protected] \n> To: [email protected] \n> CC: [email protected]; [email protected] \n> \n> On Thursday, December 20, 2012, Charles Gomes wrote: \n> True, that's the same I feel, I will be looking to translate the \n> trigger to C if I can find good examples, that should accelerate. \n> \n> I think your performance bottleneck is almost certainly the dynamic \n> SQL. Using C to generate that dynamic SQL isn't going to help much, \n> because it is still the SQL engine that has to parse, plan, and execute \n> it. \n> \n> Are the vast majority if your inserts done on any given day for records \n> from that same day or the one before; or are they evenly spread over \n> the preceding year? If the former, you could use static SQL in IF and \n> ELSIF for those days, and fall back on the dynamic SQL for the \n> exceptions in the ELSE block. Of course that means you have to update \n> the trigger every day. \n> \n> \n> Using rules would be totally bad as I'm partitioning daily and after \n> one year having 365 lines of IF won't be fun to maintain. \n> \n> Maintaining 365 lines of IF is what Perl was invented for. That goes \n> for triggers w/ static SQL as well as for rules. \n> \n> If you do the static SQL in a trigger and the dates of the records are \n> evenly scattered over the preceding year, make sure your IFs are nested \n> like a binary search, not a linear search. And if they are mostly for \n> \"today's\" date, then make sure you search backwards. 
\n> \n> Cheers, \n> \n> Jeff\n\nJeff, I've changed the code from dynamic to:\n\nCREATE OR REPLACE FUNCTION quotes_insert_trigger()\nRETURNS trigger AS $$\nDECLARE\nr_date text;\nBEGIN\nr_date = to_char(new.received_time, 'YYYY_MM_DD');\ncase r_date\n    when '2012_09_10' then \n        insert into quotes_2012_09_10 values (NEW.*) using new;\n        return;\n    when '2012_09_11' then\n        insert into quotes_2012_09_11 values (NEW.*) using new;\n        return;\n    when '2012_09_12' then\n        insert into quotes_2012_09_12 values (NEW.*) using new;\n        return;\n    when '2012_09_13' then\n        insert into quotes_2012_09_13 values (NEW.*) using new;\n        return;\n    when '2012_09_14' then\n        insert into quotes_2012_09_14 values (NEW.*) using new;\n        return;\n    when '2012_09_15' then\n        insert into quotes_2012_09_15 values (NEW.*) using new;\n        return;\n    when '2012_09_16' then\n        insert into quotes_2012_09_16 values (NEW.*) using new;\n        return;\n    when '2012_09_17' then\n        insert into quotes_2012_09_17 values (NEW.*) using new;\n        return;\n    when '2012_09_18' then\n        insert into quotes_2012_09_18 values (NEW.*) using new;\n        return;\n    when '2012_09_19' then\n        insert into quotes_2012_09_19 values (NEW.*) using new;\n        return;\n    when '2012_09_20' then\n        insert into quotes_2012_09_20 values (NEW.*) using new;\n        return;\n    when '2012_09_21' then\n        insert into quotes_2012_09_21 values (NEW.*) using new;\n        return;\n    when '2012_09_22' then\n        insert into quotes_2012_09_22 values (NEW.*) using new;\n        return;\n    when '2012_09_23' then\n        insert into quotes_2012_09_23 values (NEW.*) using new;\n        return;\n    when '2012_09_24' then\n        insert into quotes_2012_09_24 values (NEW.*) using new;\n        return;\nend case\nRETURN NULL;\nEND;\n$$\nLANGUAGE plpgsql;\n\n\nHowever I've got no speed improvement.\nI need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.\nWish postgres could automate the partition process natively like the other sql db.\n\nThank you guys for your help. 
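\n\nPS: if anyone copies the function above, note that USING only applies to\nEXECUTE'd dynamic SQL, END CASE needs its own semicolon, and a trigger\nfunction has to return NULL (or NEW) rather than a bare return, so each\nbranch should really just read something like:\n\n    when '2012_09_10' then\n        insert into quotes_2012_09_10 values (NEW.*);\n        return null;\n\n(and the same for the other days).\n\n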
\t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 10:51:12 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "By the way, I've just re-wrote the code to target the partitions individually and I've got almost 4 times improvement.\nShouldn't it be faster to process the trigger, I would understand if there was no CPU left, but there is lots of cpu to chew.\nIt seems that there will be no other way to speedup unless the insert code is partition aware.\n\n----------------------------------------\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> Date: Mon, 24 Dec 2012 10:51:12 -0500\n>\n> ________________________________\n> > Date: Sun, 23 Dec 2012 14:55:16 -0800\n> > Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> > From: [email protected]\n> > To: [email protected]\n> > CC: [email protected]; [email protected]\n> >\n> > On Thursday, December 20, 2012, Charles Gomes wrote:\n> > True, that's the same I feel, I will be looking to translate the\n> > trigger to C if I can find good examples, that should accelerate.\n> >\n> > I think your performance bottleneck is almost certainly the dynamic\n> > SQL. Using C to generate that dynamic SQL isn't going to help much,\n> > because it is still the SQL engine that has to parse, plan, and execute\n> > it.\n> >\n> > Are the vast majority if your inserts done on any given day for records\n> > from that same day or the one before; or are they evenly spread over\n> > the preceding year? If the former, you could use static SQL in IF and\n> > ELSIF for those days, and fall back on the dynamic SQL for the\n> > exceptions in the ELSE block. Of course that means you have to update\n> > the trigger every day.\n> >\n> >\n> > Using rules would be totally bad as I'm partitioning daily and after\n> > one year having 365 lines of IF won't be fun to maintain.\n> >\n> > Maintaining 365 lines of IF is what Perl was invented for. That goes\n> > for triggers w/ static SQL as well as for rules.\n> >\n> > If you do the static SQL in a trigger and the dates of the records are\n> > evenly scattered over the preceding year, make sure your IFs are nested\n> > like a binary search, not a linear search. 
And if they are mostly for\n> > \"today's\" date, then make sure you search backwards.\n> >\n> > Cheers,\n> >\n> > Jeff\n>\n> Jeff, I've changed the code from dynamic to:\n>\n> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n> RETURNS trigger AS $$\n> DECLARE\n> r_date text;\n> BEGIN\n> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n> case r_date\n> when '2012_09_10' then\n> insert into quotes_2012_09_10 values (NEW.*) using new;\n> return;\n> when '2012_09_11' then\n> insert into quotes_2012_09_11 values (NEW.*) using new;\n> return;\n> when '2012_09_12' then\n> insert into quotes_2012_09_12 values (NEW.*) using new;\n> return;\n> when '2012_09_13' then\n> insert into quotes_2012_09_13 values (NEW.*) using new;\n> return;\n> when '2012_09_14' then\n> insert into quotes_2012_09_14 values (NEW.*) using new;\n> return;\n> when '2012_09_15' then\n> insert into quotes_2012_09_15 values (NEW.*) using new;\n> return;\n> when '2012_09_16' then\n> insert into quotes_2012_09_16 values (NEW.*) using new;\n> return;\n> when '2012_09_17' then\n> insert into quotes_2012_09_17 values (NEW.*) using new;\n> return;\n> when '2012_09_18' then\n> insert into quotes_2012_09_18 values (NEW.*) using new;\n> return;\n> when '2012_09_19' then\n> insert into quotes_2012_09_19 values (NEW.*) using new;\n> return;\n> when '2012_09_20' then\n> insert into quotes_2012_09_20 values (NEW.*) using new;\n> return;\n> when '2012_09_21' then\n> insert into quotes_2012_09_21 values (NEW.*) using new;\n> return;\n> when '2012_09_22' then\n> insert into quotes_2012_09_22 values (NEW.*) using new;\n> return;\n> when '2012_09_23' then\n> insert into quotes_2012_09_23 values (NEW.*) using new;\n> return;\n> when '2012_09_24' then\n> insert into quotes_2012_09_24 values (NEW.*) using new;\n> return;\n> end case\n> RETURN NULL;\n> END;\n> $$\n> LANGUAGE plpgsql;\n>\n>\n> However I've got no speed improvement.\n> I need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.\n> Wish postgres could automate the partition process natively like the other sql db.\n>\n> Thank you guys for your help.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 12:07:09 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "\nOn Dec 24, 2012, at 9:07 PM, Charles Gomes <[email protected]> wrote:\n\n> By the way, I've just re-wrote the code to target the partitions individually and I've got almost 4 times improvement.\n> Shouldn't it be faster to process the trigger, I would understand if there was no CPU left, but there is lots of cpu to chew.\n\nI saw your 20% idle cpu and raise eyebrows.\n\n> It seems that there will be no other way to speedup unless the insert code is partition aware.\n> \n> ----------------------------------------\n>> From: [email protected]\n>> To: [email protected]\n>> CC: [email protected]; [email protected]\n>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>> Date: Mon, 24 Dec 2012 10:51:12 -0500\n>> \n>> ________________________________\n>>> Date: Sun, 23 Dec 2012 14:55:16 -0800\n>>> Subject: Re: [PERFORM] Performance on Bulk 
Insert to Partitioned Table\n>>> From: [email protected]\n>>> To: [email protected]\n>>> CC: [email protected]; [email protected]\n>>> \n>>> On Thursday, December 20, 2012, Charles Gomes wrote:\n>>> True, that's the same I feel, I will be looking to translate the\n>>> trigger to C if I can find good examples, that should accelerate.\n>>> \n>>> I think your performance bottleneck is almost certainly the dynamic\n>>> SQL. Using C to generate that dynamic SQL isn't going to help much,\n>>> because it is still the SQL engine that has to parse, plan, and execute\n>>> it.\n>>> \n>>> Are the vast majority if your inserts done on any given day for records\n>>> from that same day or the one before; or are they evenly spread over\n>>> the preceding year? If the former, you could use static SQL in IF and\n>>> ELSIF for those days, and fall back on the dynamic SQL for the\n>>> exceptions in the ELSE block. Of course that means you have to update\n>>> the trigger every day.\n>>> \n>>> \n>>> Using rules would be totally bad as I'm partitioning daily and after\n>>> one year having 365 lines of IF won't be fun to maintain.\n>>> \n>>> Maintaining 365 lines of IF is what Perl was invented for. That goes\n>>> for triggers w/ static SQL as well as for rules.\n>>> \n>>> If you do the static SQL in a trigger and the dates of the records are\n>>> evenly scattered over the preceding year, make sure your IFs are nested\n>>> like a binary search, not a linear search. And if they are mostly for\n>>> \"today's\" date, then make sure you search backwards.\n>>> \n>>> Cheers,\n>>> \n>>> Jeff\n>> \n>> Jeff, I've changed the code from dynamic to:\n>> \n>> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n>> RETURNS trigger AS $$\n>> DECLARE\n>> r_date text;\n>> BEGIN\n>> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n>> case r_date\n>> when '2012_09_10' then\n>> insert into quotes_2012_09_10 values (NEW.*) using new;\n>> return;\n>> when '2012_09_11' then\n>> insert into quotes_2012_09_11 values (NEW.*) using new;\n>> return;\n>> when '2012_09_12' then\n>> insert into quotes_2012_09_12 values (NEW.*) using new;\n>> return;\n>> when '2012_09_13' then\n>> insert into quotes_2012_09_13 values (NEW.*) using new;\n>> return;\n>> when '2012_09_14' then\n>> insert into quotes_2012_09_14 values (NEW.*) using new;\n>> return;\n>> when '2012_09_15' then\n>> insert into quotes_2012_09_15 values (NEW.*) using new;\n>> return;\n>> when '2012_09_16' then\n>> insert into quotes_2012_09_16 values (NEW.*) using new;\n>> return;\n>> when '2012_09_17' then\n>> insert into quotes_2012_09_17 values (NEW.*) using new;\n>> return;\n>> when '2012_09_18' then\n>> insert into quotes_2012_09_18 values (NEW.*) using new;\n>> return;\n>> when '2012_09_19' then\n>> insert into quotes_2012_09_19 values (NEW.*) using new;\n>> return;\n>> when '2012_09_20' then\n>> insert into quotes_2012_09_20 values (NEW.*) using new;\n>> return;\n>> when '2012_09_21' then\n>> insert into quotes_2012_09_21 values (NEW.*) using new;\n>> return;\n>> when '2012_09_22' then\n>> insert into quotes_2012_09_22 values (NEW.*) using new;\n>> return;\n>> when '2012_09_23' then\n>> insert into quotes_2012_09_23 values (NEW.*) using new;\n>> return;\n>> when '2012_09_24' then\n>> insert into quotes_2012_09_24 values (NEW.*) using new;\n>> return;\n>> end case\n>> RETURN NULL;\n>> END;\n>> $$\n>> LANGUAGE plpgsql;\n>> \n>> \n>> However I've got no speed improvement.\n>> I need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.\n>> Wish 
postgres could automate the partition process natively like the other sql db.\n>> \n>> Thank you guys for your help.\n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 21:11:07 +0400", "msg_from": "Evgeny Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "I've just found this:\nFrom: \nhttp://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php\n\n\"initial tests to insert 140k rows are as follows:\n\n- direct inserts in a child table: 2 seconds\n\n- pgplsql trigger (IF ... ELSE IF ... blocks) : 14.5 seconds.\n\n- C trigger: 4 seconds (actually the overhead is in the constraint check)\n\n\"\n\nThis is from 2008 and looks like at that time those folks where already having performance issues with partitions.\n\nGoing to copy some folks from the old thread, hopefully 4 years later they may have found a solution. Maybe they've moved on into something more exciting, maybe He is in another world where we don't have database servers. In special the brave Emmanuel for posting his trigger code that I will hack into my own :P\nThanks Emmanuel.\n\n\n\n\n\n\n----------------------------------------\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> From: [email protected]\n> Date: Mon, 24 Dec 2012 21:11:07 +0400\n> CC: [email protected]; [email protected]; [email protected]\n> To: [email protected]\n>\n>\n> On Dec 24, 2012, at 9:07 PM, Charles Gomes <[email protected]> wrote:\n>\n> > By the way, I've just re-wrote the code to target the partitions individually and I've got almost 4 times improvement.\n> > Shouldn't it be faster to process the trigger, I would understand if there was no CPU left, but there is lots of cpu to chew.\n>\n> I saw your 20% idle cpu and raise eyebrows.\n>\n> > It seems that there will be no other way to speedup unless the insert code is partition aware.\n> >\n> > ----------------------------------------\n> >> From: [email protected]\n> >> To: [email protected]\n> >> CC: [email protected]; [email protected]\n> >> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> >> Date: Mon, 24 Dec 2012 10:51:12 -0500\n> >>\n> >> ________________________________\n> >>> Date: Sun, 23 Dec 2012 14:55:16 -0800\n> >>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> >>> From: [email protected]\n> >>> To: [email protected]\n> >>> CC: [email protected]; [email protected]\n> >>>\n> >>> On Thursday, December 20, 2012, Charles Gomes wrote:\n> >>> True, that's the same I feel, I will be looking to translate the\n> >>> trigger to C if I can find good examples, that should accelerate.\n> >>>\n> >>> I think your performance bottleneck is almost certainly the dynamic\n> >>> SQL. 
Using C to generate that dynamic SQL isn't going to help much,\n> >>> because it is still the SQL engine that has to parse, plan, and execute\n> >>> it.\n> >>>\n> >>> Are the vast majority if your inserts done on any given day for records\n> >>> from that same day or the one before; or are they evenly spread over\n> >>> the preceding year? If the former, you could use static SQL in IF and\n> >>> ELSIF for those days, and fall back on the dynamic SQL for the\n> >>> exceptions in the ELSE block. Of course that means you have to update\n> >>> the trigger every day.\n> >>>\n> >>>\n> >>> Using rules would be totally bad as I'm partitioning daily and after\n> >>> one year having 365 lines of IF won't be fun to maintain.\n> >>>\n> >>> Maintaining 365 lines of IF is what Perl was invented for. That goes\n> >>> for triggers w/ static SQL as well as for rules.\n> >>>\n> >>> If you do the static SQL in a trigger and the dates of the records are\n> >>> evenly scattered over the preceding year, make sure your IFs are nested\n> >>> like a binary search, not a linear search. And if they are mostly for\n> >>> \"today's\" date, then make sure you search backwards.\n> >>>\n> >>> Cheers,\n> >>>\n> >>> Jeff\n> >>\n> >> Jeff, I've changed the code from dynamic to:\n> >>\n> >> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n> >> RETURNS trigger AS $$\n> >> DECLARE\n> >> r_date text;\n> >> BEGIN\n> >> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n> >> case r_date\n> >> when '2012_09_10' then\n> >> insert into quotes_2012_09_10 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_11' then\n> >> insert into quotes_2012_09_11 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_12' then\n> >> insert into quotes_2012_09_12 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_13' then\n> >> insert into quotes_2012_09_13 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_14' then\n> >> insert into quotes_2012_09_14 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_15' then\n> >> insert into quotes_2012_09_15 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_16' then\n> >> insert into quotes_2012_09_16 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_17' then\n> >> insert into quotes_2012_09_17 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_18' then\n> >> insert into quotes_2012_09_18 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_19' then\n> >> insert into quotes_2012_09_19 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_20' then\n> >> insert into quotes_2012_09_20 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_21' then\n> >> insert into quotes_2012_09_21 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_22' then\n> >> insert into quotes_2012_09_22 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_23' then\n> >> insert into quotes_2012_09_23 values (NEW.*) using new;\n> >> return;\n> >> when '2012_09_24' then\n> >> insert into quotes_2012_09_24 values (NEW.*) using new;\n> >> return;\n> >> end case\n> >> RETURN NULL;\n> >> END;\n> >> $$\n> >> LANGUAGE plpgsql;\n> >>\n> >>\n> >> However I've got no speed improvement.\n> >> I need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.\n> >> Wish postgres could automate the partition process natively like the other sql db.\n> >>\n> >> Thank you guys for your help.\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list ([email protected])\n> >> To make changes to your subscription:\n> >> 
http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 13:36:17 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Hi Charles,\n\nI am not working on Postgres anymore and none of our patches were ever \naccepted by the community.\nThe list of development I made can still be found at \nhttp://wiki.postgresql.org/wiki/Aster%27s_Development_Projects\n\nAll the code related to these improvements must still be accessible in \nthe archive. If you can't find something, let me know, I'll try to find \nit in my backups!\n\nHappy holidays\nEmmanuel\n\n\nOn 12/24/2012 13:36, Charles Gomes wrote:\n> I've just found this:\n> From:\n> http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php\n>\n> \"initial tests to insert 140k rows are as follows:\n>\n> - direct inserts in a child table: 2 seconds\n>\n> - pgplsql trigger (IF ... ELSE IF ... blocks) : 14.5 seconds.\n>\n> - C trigger: 4 seconds (actually the overhead is in the constraint check)\n>\n> \"\n>\n> This is from 2008 and looks like at that time those folks where already having performance issues with partitions.\n>\n> Going to copy some folks from the old thread, hopefully 4 years later they may have found a solution. Maybe they've moved on into something more exciting, maybe He is in another world where we don't have database servers. 
In special the brave Emmanuel for posting his trigger code that I will hack into my own :P\n> Thanks Emmanuel.\n>\n>\n>\n>\n>\n>\n> ----------------------------------------\n>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>> From: [email protected]\n>> Date: Mon, 24 Dec 2012 21:11:07 +0400\n>> CC: [email protected]; [email protected]; [email protected]\n>> To: [email protected]\n>>\n>>\n>> On Dec 24, 2012, at 9:07 PM, Charles Gomes <[email protected]> wrote:\n>>\n>>> By the way, I've just re-wrote the code to target the partitions individually and I've got almost 4 times improvement.\n>>> Shouldn't it be faster to process the trigger, I would understand if there was no CPU left, but there is lots of cpu to chew.\n>> I saw your 20% idle cpu and raise eyebrows.\n>>\n>>> It seems that there will be no other way to speedup unless the insert code is partition aware.\n>>>\n>>> ----------------------------------------\n>>>> From: [email protected]\n>>>> To: [email protected]\n>>>> CC: [email protected]; [email protected]\n>>>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>>>> Date: Mon, 24 Dec 2012 10:51:12 -0500\n>>>>\n>>>> ________________________________\n>>>>> Date: Sun, 23 Dec 2012 14:55:16 -0800\n>>>>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>>>>> From: [email protected]\n>>>>> To: [email protected]\n>>>>> CC: [email protected]; [email protected]\n>>>>>\n>>>>> On Thursday, December 20, 2012, Charles Gomes wrote:\n>>>>> True, that's the same I feel, I will be looking to translate the\n>>>>> trigger to C if I can find good examples, that should accelerate.\n>>>>>\n>>>>> I think your performance bottleneck is almost certainly the dynamic\n>>>>> SQL. Using C to generate that dynamic SQL isn't going to help much,\n>>>>> because it is still the SQL engine that has to parse, plan, and execute\n>>>>> it.\n>>>>>\n>>>>> Are the vast majority if your inserts done on any given day for records\n>>>>> from that same day or the one before; or are they evenly spread over\n>>>>> the preceding year? If the former, you could use static SQL in IF and\n>>>>> ELSIF for those days, and fall back on the dynamic SQL for the\n>>>>> exceptions in the ELSE block. Of course that means you have to update\n>>>>> the trigger every day.\n>>>>>\n>>>>>\n>>>>> Using rules would be totally bad as I'm partitioning daily and after\n>>>>> one year having 365 lines of IF won't be fun to maintain.\n>>>>>\n>>>>> Maintaining 365 lines of IF is what Perl was invented for. That goes\n>>>>> for triggers w/ static SQL as well as for rules.\n>>>>>\n>>>>> If you do the static SQL in a trigger and the dates of the records are\n>>>>> evenly scattered over the preceding year, make sure your IFs are nested\n>>>>> like a binary search, not a linear search. 
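(Concretely, for daily partitions that "binary search" nesting might look like the sketch below; the four dates are illustrative only, and the ladder would have to be regenerated as partitions roll over:)

    CREATE OR REPLACE FUNCTION quotes_insert_trigger()
    RETURNS trigger AS $$
    DECLARE
        d date;
    BEGIN
        d := NEW.received_time::date;
        IF d >= DATE '2012-09-12' THEN           -- newer half
            IF d >= DATE '2012-09-13' THEN
                INSERT INTO quotes_2012_09_13 VALUES (NEW.*);
            ELSE
                INSERT INTO quotes_2012_09_12 VALUES (NEW.*);
            END IF;
        ELSE                                     -- older half
            IF d >= DATE '2012-09-11' THEN
                INSERT INTO quotes_2012_09_11 VALUES (NEW.*);
            ELSE
                INSERT INTO quotes_2012_09_10 VALUES (NEW.*);
            END IF;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

With N daily partitions this costs roughly log2(N) comparisons per row instead of up to N.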
And if they are mostly for\n>>>>> \"today's\" date, then make sure you search backwards.\n>>>>>\n>>>>> Cheers,\n>>>>>\n>>>>> Jeff\n>>>> Jeff, I've changed the code from dynamic to:\n>>>>\n>>>> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n>>>> RETURNS trigger AS $$\n>>>> DECLARE\n>>>> r_date text;\n>>>> BEGIN\n>>>> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n>>>> case r_date\n>>>> when '2012_09_10' then\n>>>> insert into quotes_2012_09_10 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_11' then\n>>>> insert into quotes_2012_09_11 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_12' then\n>>>> insert into quotes_2012_09_12 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_13' then\n>>>> insert into quotes_2012_09_13 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_14' then\n>>>> insert into quotes_2012_09_14 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_15' then\n>>>> insert into quotes_2012_09_15 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_16' then\n>>>> insert into quotes_2012_09_16 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_17' then\n>>>> insert into quotes_2012_09_17 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_18' then\n>>>> insert into quotes_2012_09_18 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_19' then\n>>>> insert into quotes_2012_09_19 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_20' then\n>>>> insert into quotes_2012_09_20 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_21' then\n>>>> insert into quotes_2012_09_21 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_22' then\n>>>> insert into quotes_2012_09_22 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_23' then\n>>>> insert into quotes_2012_09_23 values (NEW.*) using new;\n>>>> return;\n>>>> when '2012_09_24' then\n>>>> insert into quotes_2012_09_24 values (NEW.*) using new;\n>>>> return;\n>>>> end case\n>>>> RETURN NULL;\n>>>> END;\n>>>> $$\n>>>> LANGUAGE plpgsql;\n>>>>\n>>>>\n>>>> However I've got no speed improvement.\n>>>> I need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.\n>>>> Wish postgres could automate the partition process natively like the other sql db.\n>>>>\n>>>> Thank you guys for your help.\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 13:47:12 -0500", "msg_from": "Emmanuel Cecchet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Emmanuel, I really appreciate you getting back on this old old topic.\nWish you a very very happy Christmas and happy new year.\n\nI'm\n kinda disappointed to see that since 2008 pgsql has not evolved to support native \npartitioning. 
Partitioning with Triggers is so slow.\nLooks like pgsql lost some momentum after departure of contributors with initiative like you.\n\nThe code I've copied from your post and I'm modifying it for 9.2 and will post it back here.\n\nThank you very much,\nCharles\n\n----------------------------------------\n> Date: Mon, 24 Dec 2012 13:47:12 -0500\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>\n> Hi Charles,\n>\n> I am not working on Postgres anymore and none of our patches were ever\n> accepted by the community.\n> The list of development I made can still be found at\n> http://wiki.postgresql.org/wiki/Aster%27s_Development_Projects\n>\n> All the code related to these improvements must still be accessible in\n> the archive. If you can't find something, let me know, I'll try to find\n> it in my backups!\n>\n> Happy holidays\n> Emmanuel\n>\n>\n> On 12/24/2012 13:36, Charles Gomes wrote:\n> > I've just found this:\n> > From:\n> > http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php\n> >\n> > \"initial tests to insert 140k rows are as follows:\n> >\n> > - direct inserts in a child table: 2 seconds\n> >\n> > - pgplsql trigger (IF ... ELSE IF ... blocks) : 14.5 seconds.\n> >\n> > - C trigger: 4 seconds (actually the overhead is in the constraint check)\n> >\n> > \"\n> >\n> > This is from 2008 and looks like at that time those folks where already having performance issues with partitions.\n> >\n> > Going to copy some folks from the old thread, hopefully 4 years later they may have found a solution. Maybe they've moved on into something more exciting, maybe He is in another world where we don't have database servers. 
In special the brave Emmanuel for posting his trigger code that I will hack into my own :P\n> > Thanks Emmanuel.\n> >\n> >\n> >\n> >\n> >\n> >\n> > ----------------------------------------\n> >> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> >> From: [email protected]\n> >> Date: Mon, 24 Dec 2012 21:11:07 +0400\n> >> CC: [email protected]; [email protected]; [email protected]\n> >> To: [email protected]\n> >>\n> >>\n> >> On Dec 24, 2012, at 9:07 PM, Charles Gomes <[email protected]> wrote:\n> >>\n> >>> By the way, I've just re-wrote the code to target the partitions individually and I've got almost 4 times improvement.\n> >>> Shouldn't it be faster to process the trigger, I would understand if there was no CPU left, but there is lots of cpu to chew.\n> >> I saw your 20% idle cpu and raise eyebrows.\n> >>\n> >>> It seems that there will be no other way to speedup unless the insert code is partition aware.\n> >>>\n> >>> ----------------------------------------\n> >>>> From: [email protected]\n> >>>> To: [email protected]\n> >>>> CC: [email protected]; [email protected]\n> >>>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> >>>> Date: Mon, 24 Dec 2012 10:51:12 -0500\n> >>>>\n> >>>> ________________________________\n> >>>>> Date: Sun, 23 Dec 2012 14:55:16 -0800\n> >>>>> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> >>>>> From: [email protected]\n> >>>>> To: [email protected]\n> >>>>> CC: [email protected]; [email protected]\n> >>>>>\n> >>>>> On Thursday, December 20, 2012, Charles Gomes wrote:\n> >>>>> True, that's the same I feel, I will be looking to translate the\n> >>>>> trigger to C if I can find good examples, that should accelerate.\n> >>>>>\n> >>>>> I think your performance bottleneck is almost certainly the dynamic\n> >>>>> SQL. Using C to generate that dynamic SQL isn't going to help much,\n> >>>>> because it is still the SQL engine that has to parse, plan, and execute\n> >>>>> it.\n> >>>>>\n> >>>>> Are the vast majority if your inserts done on any given day for records\n> >>>>> from that same day or the one before; or are they evenly spread over\n> >>>>> the preceding year? If the former, you could use static SQL in IF and\n> >>>>> ELSIF for those days, and fall back on the dynamic SQL for the\n> >>>>> exceptions in the ELSE block. Of course that means you have to update\n> >>>>> the trigger every day.\n> >>>>>\n> >>>>>\n> >>>>> Using rules would be totally bad as I'm partitioning daily and after\n> >>>>> one year having 365 lines of IF won't be fun to maintain.\n> >>>>>\n> >>>>> Maintaining 365 lines of IF is what Perl was invented for. That goes\n> >>>>> for triggers w/ static SQL as well as for rules.\n> >>>>>\n> >>>>> If you do the static SQL in a trigger and the dates of the records are\n> >>>>> evenly scattered over the preceding year, make sure your IFs are nested\n> >>>>> like a binary search, not a linear search. 
And if they are mostly for\n> >>>>> \"today's\" date, then make sure you search backwards.\n> >>>>>\n> >>>>> Cheers,\n> >>>>>\n> >>>>> Jeff\n> >>>> Jeff, I've changed the code from dynamic to:\n> >>>>\n> >>>> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n> >>>> RETURNS trigger AS $$\n> >>>> DECLARE\n> >>>> r_date text;\n> >>>> BEGIN\n> >>>> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n> >>>> case r_date\n> >>>> when '2012_09_10' then\n> >>>> insert into quotes_2012_09_10 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_11' then\n> >>>> insert into quotes_2012_09_11 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_12' then\n> >>>> insert into quotes_2012_09_12 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_13' then\n> >>>> insert into quotes_2012_09_13 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_14' then\n> >>>> insert into quotes_2012_09_14 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_15' then\n> >>>> insert into quotes_2012_09_15 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_16' then\n> >>>> insert into quotes_2012_09_16 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_17' then\n> >>>> insert into quotes_2012_09_17 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_18' then\n> >>>> insert into quotes_2012_09_18 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_19' then\n> >>>> insert into quotes_2012_09_19 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_20' then\n> >>>> insert into quotes_2012_09_20 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_21' then\n> >>>> insert into quotes_2012_09_21 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_22' then\n> >>>> insert into quotes_2012_09_22 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_23' then\n> >>>> insert into quotes_2012_09_23 values (NEW.*) using new;\n> >>>> return;\n> >>>> when '2012_09_24' then\n> >>>> insert into quotes_2012_09_24 values (NEW.*) using new;\n> >>>> return;\n> >>>> end case\n> >>>> RETURN NULL;\n> >>>> END;\n> >>>> $$\n> >>>> LANGUAGE plpgsql;\n> >>>>\n> >>>>\n> >>>> However I've got no speed improvement.\n> >>>> I need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.\n> >>>> Wish postgres could automate the partition process natively like the other sql db.\n> >>>>\n> >>>> Thank you guys for your help.\n> >>>>\n> >>>> --\n> >>>> Sent via pgsql-performance mailing list ([email protected])\n> >>>> To make changes to your subscription:\n> >>>> http://www.postgresql.org/mailpref/pgsql-performance\n> >>> --\n> >>> Sent via pgsql-performance mailing list ([email protected])\n> >>> To make changes to your subscription:\n> >>> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list ([email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Dec 2012 14:29:35 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Monday, December 24, 2012, Charles Gomes wrote:\n\n> ________________________________\n>\n\n\n> >\n> > I think your 
performance bottleneck is almost certainly the dynamic\n> > SQL. Using C to generate that dynamic SQL isn't going to help much,\n> > because it is still the SQL engine that has to parse, plan, and execute\n> > it.\n> >\n>\n> Jeff, I've changed the code from dynamic to:\n>\n> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n> RETURNS trigger AS $$\n> DECLARE\n> r_date text;\n> BEGIN\n> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n> case r_date\n> when '2012_09_10' then\n> insert into quotes_2012_09_10 values (NEW.*) using new;\n> return;\n> ...\n\n\n\n> However I've got no speed improvement.\n> I need to keep two weeks worth of partitions at a time, that's why all the\n> WHEN statements.\n>\n\nThe 'using new' and return without argument are syntax errors.\n\nWhen I do a model system with those fixed, I get about 2 fold improvement\nover the dynamic SQL performance. Even if your performance did not go up,\ndid your CPU usage go down? Perhaps you have multiple bottlenecks all\nsitting at about the same place, and so tackling any one of them at a time\ndoesn't get you anywhere.\n\nHow does both the dynamic and the CASE scale with the number of threads? I\nthink you said you had something like 70 sessions, but only 8 CPUs. That\nprobably will do bad things with contention, and I don't see how using more\nconnections than CPUs is going to help you here. If the CASE starts out\nfaster in single thread but then flat lines and the EXECUTE catches up,\nthat suggests a different avenue of investigation than they are always the\nsame.\n\n\n\n> Wish postgres could automate the partition process natively like the other\n> sql db.\n>\n\nMore automated would be nice (i.e. one operation to make both the check\nconstraints and the trigger, so they can't get out of sync), but would not\nnecessarily mean faster. I don't know what you mean about other db. Last\ntime I looked at partitioning in mysql, it was only about breaking up the\nunderlying storage into separate files (without regards to contents of the\nrows), so that is the same as what postgres does automatically. And in\nOracle, their partitioning seemed about the same as postgres's as far\nas administrative tedium was concerned. I'm not familiar with how the MS\nproduct handles it, and maybe me experience with the other two are out of\ndate.\n\nCheers,\n\nJeff\n\nOn Monday, December 24, 2012, Charles Gomes wrote:________________________________ \n>\n> I think your performance bottleneck is almost certainly the dynamic\n> SQL.  Using C to generate that dynamic SQL isn't going to help much,\n> because it is still the SQL engine that has to parse, plan, and execute\n> it.\n>\nJeff, I've changed the code from dynamic to:\n\nCREATE OR REPLACE FUNCTION quotes_insert_trigger()\nRETURNS trigger AS $$\nDECLARE\nr_date text;\nBEGIN\nr_date = to_char(new.received_time, 'YYYY_MM_DD');\ncase r_date\n    when '2012_09_10' then\n        insert into quotes_2012_09_10 values (NEW.*) using new;\n        return;...\nHowever I've got no speed improvement.\nI need to keep two weeks worth of partitions at a time, that's why all the WHEN statements.The 'using new' and return without argument are syntax errors.\nWhen I do a model system with those fixed, I get about 2 fold improvement over the dynamic SQL performance.  Even if your performance did not go up, did your CPU usage go down?  
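(For reference, "with those fixed" means essentially this: VALUES (NEW.*) instead of the stray USING new, RETURN NULL instead of a bare RETURN, and a closing END CASE;. The sketch below is truncated to two days, and the ELSE branch is an addition so that unmatched dates fail loudly:)

    CREATE OR REPLACE FUNCTION quotes_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        CASE to_char(NEW.received_time, 'YYYY_MM_DD')
            WHEN '2012_09_10' THEN
                INSERT INTO quotes_2012_09_10 VALUES (NEW.*);
            WHEN '2012_09_11' THEN
                INSERT INTO quotes_2012_09_11 VALUES (NEW.*);
            -- ... one WHEN per remaining day in the two-week window ...
            ELSE
                RAISE EXCEPTION 'no partition for received_time %', NEW.received_time;
        END CASE;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;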
Perhaps you have multiple bottlenecks all sitting at about the same place, and so tackling any one of them at a time doesn't get you anywhere.\nHow does both the dynamic and the CASE scale with the number of threads?  I think you said you had something like 70 sessions, but only 8 CPUs.  That probably will do bad things with contention, and I don't see how using more connections than CPUs is going to help you here.  If the CASE starts out faster in single thread but then flat lines and the EXECUTE catches up, that suggests a different avenue of investigation than they are always the same.\n \nWish postgres could automate the partition process natively like the other sql db.More automated would be nice (i.e. one operation to make both the check constraints and the trigger, so they can't get out of sync), but would not necessarily mean faster.  I don't know what you mean about other db.  Last time I looked at partitioning in mysql, it was only about breaking up the underlying storage into separate files (without regards to contents of the rows), so that is the same as what postgres does automatically.  And in Oracle, their partitioning seemed about the same as postgres's as far as administrative tedium was concerned.  I'm not familiar with how the MS product handles it, and maybe me experience with the other two are out of date.  \n Cheers,Jeff", "msg_date": "Wed, 26 Dec 2012 23:03:33 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/27 Jeff Janes <[email protected]>:\n> On Monday, December 24, 2012, Charles Gomes wrote:\n>>\n>> ________________________________\n>\n>\n>>\n>> >\n>> > I think your performance bottleneck is almost certainly the dynamic\n>> > SQL. Using C to generate that dynamic SQL isn't going to help much,\n>> > because it is still the SQL engine that has to parse, plan, and execute\n>> > it.\n>> >\n>>\n>> Jeff, I've changed the code from dynamic to:\n>>\n>> CREATE OR REPLACE FUNCTION quotes_insert_trigger()\n>> RETURNS trigger AS $$\n>> DECLARE\n>> r_date text;\n>> BEGIN\n>> r_date = to_char(new.received_time, 'YYYY_MM_DD');\n>> case r_date\n>> when '2012_09_10' then\n>> insert into quotes_2012_09_10 values (NEW.*) using new;\n>> return;\n>> ...\n>\n>\n>>\n>> However I've got no speed improvement.\n>> I need to keep two weeks worth of partitions at a time, that's why all the\n>> WHEN statements.\n>\n>\n> The 'using new' and return without argument are syntax errors.\n>\n> When I do a model system with those fixed, I get about 2 fold improvement\n> over the dynamic SQL performance. Even if your performance did not go up,\n> did your CPU usage go down? Perhaps you have multiple bottlenecks all\n> sitting at about the same place, and so tackling any one of them at a time\n> doesn't get you anywhere.\n>\n> How does both the dynamic and the CASE scale with the number of threads? I\n> think you said you had something like 70 sessions, but only 8 CPUs. That\n> probably will do bad things with contention, and I don't see how using more\n> connections than CPUs is going to help you here. If the CASE starts out\n> faster in single thread but then flat lines and the EXECUTE catches up, that\n> suggests a different avenue of investigation than they are always the same.\n>\n>\n>>\n>> Wish postgres could automate the partition process natively like the other\n>> sql db.\n>\n>\n> More automated would be nice (i.e. 
one operation to make both the check\n> constraints and the trigger, so they can't get out of sync), but would not\n> necessarily mean faster. I don't know what you mean about other db. Last\n> time I looked at partitioning in mysql, it was only about breaking up the\n> underlying storage into separate files (without regards to contents of the\n> rows), so that is the same as what postgres does automatically. And in\n> Oracle, their partitioning seemed about the same as postgres's as far as\n> administrative tedium was concerned. I'm not familiar with how the MS\n> product handles it, and maybe me experience with the other two are out of\n> date.\n\nI did simple test - not too precious (just for first orientation) -\ntested on 9.3 - compiled without assertions\n\ninsert 0.5M rows into empty target table with one trivial trigger and\none index is about 4 sec\n\nsame with little bit complex trigger - one IF statement and two assign\nstatements is about 5 sec\n\nsimple forwarding two two tables - 8 sec\n\nusing dynamic SQL is significantly slower - 18 sec - probably due\noverhead with cached plans\n\na overhead depends on number of partitions, number of indexes, but I\nexpect so overhead of redistributed triggers should be about 50-100%\n(less on large tables, higher on small tables).\n\nNative implementation should significantly effective evaluate\nexpressions, mainly simple expressions - (this is significant for\nlarge number of partitions) and probably can do tuple forwarding\nfaster than is heavy INSERT statement (is question if is possible\ndecrease some overhead with more sophisticate syntax (by removing\nrecord expand).\n\nSo native implementation can carry significant speed up - mainly if we\ncan distribute tuples without expression evaluating (evaluated by\nexecutor)\n\nRegards\n\nPavel\n\n\n\n\n>\n> Cheers,\n>\n> Jeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 06:40:23 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "________________________________\n> Date: Wed, 26 Dec 2012 23:03:33 -0500 \n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table \n> From: [email protected] \n> To: [email protected] \n> CC: [email protected]; [email protected] \n> \n> On Monday, December 24, 2012, Charles Gomes wrote: \n> ________________________________ \n> \n> > \n> > I think your performance bottleneck is almost certainly the dynamic \n> > SQL. Using C to generate that dynamic SQL isn't going to help much, \n> > because it is still the SQL engine that has to parse, plan, and execute \n> > it. \n> > \n> \n> Jeff, I've changed the code from dynamic to: \n> \n> CREATE OR REPLACE FUNCTION quotes_insert_trigger() \n> RETURNS trigger AS $$ \n> DECLARE \n> r_date text; \n> BEGIN \n> r_date = to_char(new.received_time, 'YYYY_MM_DD'); \n> case r_date \n> when '2012_09_10' then \n> insert into quotes_2012_09_10 values (NEW.*) using new; \n> return; \n> ... \n> \n> \n> However I've got no speed improvement. \n> I need to keep two weeks worth of partitions at a time, that's why all \n> the WHEN statements. \n> \n> The 'using new' and return without argument are syntax errors. \n> \n> When I do a model system with those fixed, I get about 2 fold \n> improvement over the dynamic SQL performance. 
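(A "model system" of that sort can be thrown together in a few lines; the column list, row count and single-day spread below are assumptions for illustration, not the setup that was actually benchmarked:)

    -- parent plus one child; repeat the child and its CHECK for each day
    CREATE TABLE quotes (received_time timestamp NOT NULL, symbol text, price numeric);
    CREATE TABLE quotes_2012_09_10 (
        CHECK (received_time >= '2012-09-10' AND received_time < '2012-09-11')
    ) INHERITS (quotes);

    CREATE TRIGGER quotes_insert_before
        BEFORE INSERT ON quotes
        FOR EACH ROW EXECUTE PROCEDURE quotes_insert_trigger();

    -- synthetic load: half a million rows, all within the covered day
    INSERT INTO quotes
    SELECT timestamp '2012-09-10' + (i % 86400) * interval '1 second', 'SYM', 1.0
    FROM generate_series(1, 500000) AS g(i);

Timing that INSERT (\timing in psql) while swapping trigger implementations gives a rough comparison on one's own hardware.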
Even if your performance \n> did not go up, did your CPU usage go down? Perhaps you have multiple \n> bottlenecks all sitting at about the same place, and so tackling any \n> one of them at a time doesn't get you anywhere. \n\n\n\nI’ve run a small test with the fixes you mentioned and it changed from\n1H:20M to, 1H:30M to insert 396000000 rows.\n\n\n \n\nIf there was another bottleneck, performance when targeting the\npartitions directly would not be twice as fast. I’ve run another long insert\ntest and it takes 4H:15M to complete using triggers to distribute the inserts.  When targeting It completes in 1H:55M. That’s\nboth for 70 simultaneous workers with the same data and 1188000000 rows.\n\n\n \n\nThe tests that Emmanuel did translating the trigger to C have great\nperformance improvement. While His code is very general and could work for\nanyone using CHECK’s for triggers. I’m still working on fixing it so it’s\ncompatible with 9.2\n\nSo far I’m having a hard time using the C triggers anyway,:\n\nERROR:  could not load library\n\"/var/lib/pgsql/pg_trigger_example.so\":\n/var/lib/pgsql/pg_trigger_example.so: failed to map segment from shared object:\nOperation not permitted\n\nI will do more reading on it.\n\nI think having it to work again can bring some value so more people can\nbe aware of the performance improvement using C instead of PLSQL.\n\n\n> \n> How does both the dynamic and the CASE scale with the number of \n> threads? I think you said you had something like 70 sessions, but only \n> 8 CPUs. That probably will do bad things with contention, and I don't \n> see how using more connections than CPUs is going to help you here. If \n> the CASE starts out faster in single thread but then flat lines and the \n> EXECUTE catches up, that suggests a different avenue of investigation \n> than they are always the same. \n> \n\n\nI didn’t see a significant change in CPU utilization, it seems to be a\nbit less, but not that much, however IO is still idling.\n\n\n> \n> Wish postgres could automate the partition process natively like the \n> other sql db. \n> \n> More automated would be nice (i.e. one operation to make both the check \n> constraints and the trigger, so they can't get out of sync), but would \n> not necessarily mean faster. I don't know what you mean about other \n> db. Last time I looked at partitioning in mysql, it was only about \n> breaking up the underlying storage into separate files (without regards \n> to contents of the rows), so that is the same as what postgres does \n> automatically. And in Oracle, their partitioning seemed about the same \n> as postgres's as far as administrative tedium was concerned. I'm not \n> familiar with how the MS product handles it, and maybe me experience \n> with the other two are out of date. 
\n\n\n\nThe other free sql DB supports a more elaborated scheme, for example: \n\nCREATE\nTABLE ti (id INT, amount DECIMAL(7,2), tr_date DATE)\n\n    ENGINE=INNODB\n\n    PARTITION BY HASH( MONTH(tr_date) )\n\n    PARTITIONS 6;\n\nIt\nalso supports partitioning by RANGE, LIST or KEY.\n\n\nThe paid one uses a very similar style:CREATE TABLE dept (deptno NUMBER, deptname VARCHAR(32))\n PARTITION BY HASH(deptno) PARTITIONS 16;\n\nAlso:\nCREATE TABLE sales\n ( prod_id NUMBER(6)\n , cust_id NUMBER\n , time_id DATE\n , quantity_sold NUMBER(3)\n )\n PARTITION BY RANGE (time_id)\n ( PARTITION sales_q1_2006 VALUES LESS THAN (TO_DATE('01-APR-2006','dd-MON-yyyy'))\n TABLESPACE tsa\n , PARTITION sales_q2_2006 VALUES LESS THAN (TO_DATE('01-JUL-2006','dd-MON-yyyy'))\n TABLESPACE tsb\n...\n\n\n\n\n> \n> Cheers, \n> \n> Jeff \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 11:16:26 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Wednesday, December 26, 2012, Pavel Stehule wrote:\n\n> 2012/12/27 Jeff Janes <[email protected]>:\n> >\n> > More automated would be nice (i.e. one operation to make both the check\n> > constraints and the trigger, so they can't get out of sync), but would\n> not\n> > necessarily mean faster.\n>\n>\n<snip some benchmarking>\n\nNative implementation should significantly effective evaluate\n\n expressions, mainly simple expressions - (this is significant for\n> large number of partitions) and probably can do tuple forwarding\n> faster than is heavy INSERT statement (is question if is possible\n> decrease some overhead with more sophisticate syntax (by removing\n> record expand).\n>\n\nIf the main goal is to make it faster, I'd rather see all of plpgsql get\nfaster, rather than just a special case of partitioning triggers. For\nexample, right now a CASE <expression> statement with 100 branches is about\nthe same speed as an equivalent list of 100 elsif. So it seems to be doing\na linear search, when it could be doing a hash that should be a lot faster.\n\n\n\n>\n> So native implementation can carry significant speed up - mainly if we\n> can distribute tuples without expression evaluating (evaluated by\n> executor)\n>\n\nMaking partitioning inserts native does open up other opportunities to make\nit faster, and also to make it administratively easier; but do we want to\ntry to tackle both of those goals simultaneously? I think the\nadministrative aspects would come first. (But I doubt I will be the one to\nimplement either, so my vote doesn't count for much here.)\n\n\nCheers,\n\nJeff\n\n>\n>\n\nOn Wednesday, December 26, 2012, Pavel Stehule wrote:2012/12/27 Jeff Janes <[email protected]>:\n\n\n>\n> More automated would be nice (i.e. 
one operation to make both the check\n> constraints and the trigger, so they can't get out of sync), but would not\n> necessarily mean faster.\n <snip some benchmarking>Native implementation should significantly effective evaluate\n\nexpressions, mainly simple expressions - (this is significant for\nlarge number of partitions) and probably can do tuple forwarding\nfaster than is heavy INSERT statement (is question if is possible\ndecrease some overhead with more sophisticate syntax (by removing\nrecord expand).If the main goal is to make it faster, I'd rather see all of plpgsql get faster, rather than just a special case of partitioning triggers.  For example, right now a CASE <expression> statement with 100 branches is about the same speed as an equivalent list of 100 elsif.  So it seems to be doing a linear search, when it could be doing a hash that should be a lot faster. \n \n\nSo native implementation can carry significant speed up - mainly if we\ncan distribute tuples without expression evaluating (evaluated by\nexecutor)Making partitioning inserts native does open up other opportunities to make it faster, and also to make it administratively easier; but do we want to try to tackle both of those goals simultaneously?  I think the administrative aspects would come first.  (But I doubt I will be the one to implement either, so my vote doesn't count for much here.)\nCheers,Jeff", "msg_date": "Thu, 27 Dec 2012 13:11:49 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Monday, December 24, 2012, Charles Gomes wrote:\n\n> By the way, I've just re-wrote the code to target the partitions\n> individually and I've got almost 4 times improvement.\n> Shouldn't it be faster to process the trigger, I would understand if there\n> was no CPU left, but there is lots of cpu to chew.\n>\n\nOnce you turned off hyperthreading, it was reporting 75% CPU usage.\n Assuming that that accounting is perfect, that means you could only get\n33% faster if you were to somehow start using all of the CPU. So I don't\nthink I'd call that a lot of CPU left. And if you have 70 processes\nfighting for 8 cores, I'm not surprised you can't get above that CPU usage.\n\n\n\n> It seems that there will be no other way to speedup unless the insert code\n> is partition aware.\n>\n\n\nThere may be other ways, but that one will probably get you the most gain,\nespecially if you use COPY or \\copy. Since the main goal of partitioning\nis to allow your physical storage layout to conspire with your bulk\noperations, it is hard to see how you can get the benefits of partitioning\nwithout having your bulk loading participate in that conspiracy.\n\n\nCheers,\n\nJeff\n\nOn Monday, December 24, 2012, Charles Gomes wrote:By the way, I've just re-wrote the code to target the partitions individually and I've got almost 4 times improvement.\n\nShouldn't it be faster to process the trigger, I would understand if there was no CPU left, but there is lots of cpu to chew.Once you turned off hyperthreading, it was reporting 75% CPU usage.  Assuming that that accounting is perfect, that means you could only get 33% faster if you were to somehow start using all of the CPU.  So I don't think I'd call that a lot of CPU left.  
And if you have 70 processes fighting for 8 cores, I'm not surprised you can't get above that CPU usage.\n \nIt seems that there will be no other way to speedup unless the insert code is partition aware.There may be other ways, but that one will probably get you the most gain, especially if you use COPY or \\copy.  Since the main goal of partitioning is to allow your physical storage layout to conspire with your bulk operations, it is hard to see how you can get the benefits of partitioning without having your bulk loading participate in that conspiracy.\n Cheers,Jeff", "msg_date": "Thu, 27 Dec 2012 13:24:15 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/27 Jeff Janes <[email protected]>:\n> On Wednesday, December 26, 2012, Pavel Stehule wrote:\n>>\n>> 2012/12/27 Jeff Janes <[email protected]>:\n>> >\n>> > More automated would be nice (i.e. one operation to make both the check\n>> > constraints and the trigger, so they can't get out of sync), but would\n>> > not\n>> > necessarily mean faster.\n>>\n>\n> <snip some benchmarking>\n>\n>> Native implementation should significantly effective evaluate\n>>\n>> expressions, mainly simple expressions - (this is significant for\n>> large number of partitions) and probably can do tuple forwarding\n>> faster than is heavy INSERT statement (is question if is possible\n>> decrease some overhead with more sophisticate syntax (by removing\n>> record expand).\n>\n>\n> If the main goal is to make it faster, I'd rather see all of plpgsql get\n> faster, rather than just a special case of partitioning triggers. For\n> example, right now a CASE <expression> statement with 100 branches is about\n> the same speed as an equivalent list of 100 elsif. So it seems to be doing\n> a linear search, when it could be doing a hash that should be a lot faster.\n\na bottleneck is not in PL/pgSQL directly. It is in PostgreSQL\nexpression executor. Personally I don't see any simple optimization -\nmaybe some variant of JIT (for expression executor) should to improve\nperformance.\n\nAny other optimization require significant redesign PL/pgSQL what is\njob what I don't would do now - personally, it is not work what I\nwould to start by self, because using plpgsql triggers for\npartitioning is bad usage of plpgsql - and I believe so after native\nimplementation any this work will be useless. Design some generic C\ntrigger or really full implementation is better work.\n\nMore, there is still expensive INSERT statement - forwarding tuple on\nC level should be significantly faster - because it don't be generic.\n\n>\n>\n>>\n>>\n>> So native implementation can carry significant speed up - mainly if we\n>> can distribute tuples without expression evaluating (evaluated by\n>> executor)\n>\n>\n> Making partitioning inserts native does open up other opportunities to make\n> it faster, and also to make it administratively easier; but do we want to\n> try to tackle both of those goals simultaneously? I think the\n> administrative aspects would come first. (But I doubt I will be the one to\n> implement either, so my vote doesn't count for much here.)\n\nAnybody who starts work on native implementation will have my support\n(it is feature that lot of customers needs). I have customers that can\nsupport development and I believe so there are others. 
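(To make the earlier COPY point concrete: partition-aware bulk loading simply targets the child tables directly. The file path and the staging table name here are hypothetical:)

    -- load each day's file straight into its child table
    COPY quotes_2012_09_10 FROM '/data/quotes/2012-09-10.csv' WITH (FORMAT csv);

    -- or route already-staged rows with one statement per day
    INSERT INTO quotes_2012_09_11
    SELECT * FROM quotes_staging
    WHERE received_time >= '2012-09-11' AND received_time < '2012-09-12';

Either way the per-row trigger is bypassed entirely, which is where the "almost 4 times" improvement reported earlier in the thread came from.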
Actually It\nneeds only one tenacious man, because it is work for two years.\n\nRegards\n\nPavel\n\n>\n>\n> Cheers,\n>\n> Jeff\n>>\n>>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 19:46:12 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Pavel, \n\nI've been trying to port the work of Emmanuel \nhttp://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php\n\n\nHis implementation is pretty straight forward. Simple trigger doing constrain checks with caching for bulk inserts.\nSo far that's what I got http://www.widesol.com/~charles/pgsql/partition.c\nI had some issues as He uses HeapTuples and on 9.2 I see a Slot.\n\n\n----------------------------------------\n> From: [email protected]\n> Date: Thu, 27 Dec 2012 19:46:12 +0100\n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n> To: [email protected]\n> CC: [email protected]; [email protected]; [email protected]\n>\n> 2012/12/27 Jeff Janes <[email protected]>:\n> > On Wednesday, December 26, 2012, Pavel Stehule wrote:\n> >>\n> >> 2012/12/27 Jeff Janes <[email protected]>:\n> >> >\n> >> > More automated would be nice (i.e. one operation to make both the check\n> >> > constraints and the trigger, so they can't get out of sync), but would\n> >> > not\n> >> > necessarily mean faster.\n> >>\n> >\n> > <snip some benchmarking>\n> >\n> >> Native implementation should significantly effective evaluate\n> >>\n> >> expressions, mainly simple expressions - (this is significant for\n> >> large number of partitions) and probably can do tuple forwarding\n> >> faster than is heavy INSERT statement (is question if is possible\n> >> decrease some overhead with more sophisticate syntax (by removing\n> >> record expand).\n> >\n> >\n> > If the main goal is to make it faster, I'd rather see all of plpgsql get\n> > faster, rather than just a special case of partitioning triggers. For\n> > example, right now a CASE <expression> statement with 100 branches is about\n> > the same speed as an equivalent list of 100 elsif. So it seems to be doing\n> > a linear search, when it could be doing a hash that should be a lot faster.\n>\n> a bottleneck is not in PL/pgSQL directly. It is in PostgreSQL\n> expression executor. Personally I don't see any simple optimization -\n> maybe some variant of JIT (for expression executor) should to improve\n> performance.\n>\n> Any other optimization require significant redesign PL/pgSQL what is\n> job what I don't would do now - personally, it is not work what I\n> would to start by self, because using plpgsql triggers for\n> partitioning is bad usage of plpgsql - and I believe so after native\n> implementation any this work will be useless. 
Design some generic C\n> trigger or really full implementation is better work.\n>\n> More, there is still expensive INSERT statement - forwarding tuple on\n> C level should be significantly faster - because it don't be generic.\n>\n> >\n> >\n> >>\n> >>\n> >> So native implementation can carry significant speed up - mainly if we\n> >> can distribute tuples without expression evaluating (evaluated by\n> >> executor)\n> >\n> >\n> > Making partitioning inserts native does open up other opportunities to make\n> > it faster, and also to make it administratively easier; but do we want to\n> > try to tackle both of those goals simultaneously? I think the\n> > administrative aspects would come first. (But I doubt I will be the one to\n> > implement either, so my vote doesn't count for much here.)\n>\n> Anybody who starts work on native implementation will have my support\n> (it is feature that lot of customers needs). I have customers that can\n> support development and I believe so there are others. Actually It\n> needs only one tenacious man, because it is work for two years.\n>\n> Regards\n>\n> Pavel\n>\n> >\n> >\n> > Cheers,\n> >\n> > Jeff\n> >>\n> >>\n> >\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 14:00:02 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "* Jeff Janes ([email protected]) wrote:\n> If the main goal is to make it faster, I'd rather see all of plpgsql get\n> faster, rather than just a special case of partitioning triggers. For\n> example, right now a CASE <expression> statement with 100 branches is about\n> the same speed as an equivalent list of 100 elsif. So it seems to be doing\n> a linear search, when it could be doing a hash that should be a lot faster.\n\nThat's a nice thought, but I'm not sure that it'd really be practical.\nCASE statements in plpgsql are completely general and really behave more\nlike an if/elsif tree than a C-style switch() statement or similar. For\none thing, the expression need not use the same variables, could be\ncomplex multi-variable conditionals, etc.\n\nFiguring out that you could build a dispatch table for a given CASE\nstatement and then building it, storing it, and remembering to use it,\nwouldn't be cheap.\n\nOn the other hand, I've actually *wanted* a simpler syntax on occation.\nI have no idea if there'd be a way to make it work, but this would be\nkind of nice:\n\nCASE OF x -- or whatever\n WHEN 1 THEN blah blah\n WHEN 2 THEN blah blah\n WHEN 3 THEN blah blah\nEND\n\nwhich would be possible to build into a dispatch table by looking at the\ntype of x and the literals used in the overall CASE statement. 
Even so,\nthere would likely be some number of WHEN conditions required before\nit'd actually be more efficient to use, though perhaps getting rid of\nthe expression evaluation (if that'd be possible) would make up for it.\n\n\tThanks,\n\t\t\n\t\tStephen", "msg_date": "Thu, 27 Dec 2012 14:00:19 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/27 Stephen Frost <[email protected]>:\n> * Jeff Janes ([email protected]) wrote:\n>> If the main goal is to make it faster, I'd rather see all of plpgsql get\n>> faster, rather than just a special case of partitioning triggers. For\n>> example, right now a CASE <expression> statement with 100 branches is about\n>> the same speed as an equivalent list of 100 elsif. So it seems to be doing\n>> a linear search, when it could be doing a hash that should be a lot faster.\n>\n> That's a nice thought, but I'm not sure that it'd really be practical.\n> CASE statements in plpgsql are completely general and really behave more\n> like an if/elsif tree than a C-style switch() statement or similar. For\n> one thing, the expression need not use the same variables, could be\n> complex multi-variable conditionals, etc.\n>\n> Figuring out that you could build a dispatch table for a given CASE\n> statement and then building it, storing it, and remembering to use it,\n> wouldn't be cheap.\n>\n> On the other hand, I've actually *wanted* a simpler syntax on occation.\n> I have no idea if there'd be a way to make it work, but this would be\n> kind of nice:\n>\n> CASE OF x -- or whatever\n> WHEN 1 THEN blah blah\n> WHEN 2 THEN blah blah\n> WHEN 3 THEN blah blah\n> END\n>\n> which would be possible to build into a dispatch table by looking at the\n> type of x and the literals used in the overall CASE statement. Even so,\n> there would likely be some number of WHEN conditions required before\n> it'd actually be more efficient to use, though perhaps getting rid of\n> the expression evaluation (if that'd be possible) would make up for it.\n\nI understand, but I am not happy with it. CASE is relative complex.\nThere is SQL CASE too, and this is third variant of CASE. Maybe some\nsimple CASE statements can be supported by parser and there should be\nlocal optimization (probably only for numeric - without casting) But\nit needs relative lot of new code? Will be this code accepted?\n\nRegards\n\nPavel\n\n>\n> Thanks,\n>\n> Stephen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 20:21:25 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "There is switch-like sql case:\n39.6.2.4. Simple CASE\n\nCASE search-expression\n WHEN expression [, expression [ ... ]] THEN\n statements\n [ WHEN expression [, expression [ ... ]] THEN\n statements\n ... 
]\n [ ELSE\n statements ]\nEND CASE;\n\nIt should work like C switch statement.\n\nAlso, for bulk insert, have you tried \"for each statement\" triggers instead\nof \"for each row\"?\nThis would look like a lot of inserts and would not be fast in\nsingle-row-insert case, but can give you benefit for huge inserts.\nIt should look like\ninsert into quotes_2012_09_10 select * from new where\ncast(new.received_time as date) = '2012-09-10' ;\ninsert into quotes_2012_09_11 select * from new where\ncast(new.received_time as date) = '2012-09-11' ;\n...\n\n2012/12/27 Stephen Frost <[email protected]>\n\n> * Jeff Janes ([email protected]) wrote:\n> > If the main goal is to make it faster, I'd rather see all of plpgsql get\n> > faster, rather than just a special case of partitioning triggers. For\n> > example, right now a CASE <expression> statement with 100 branches is\n> about\n> > the same speed as an equivalent list of 100 elsif. So it seems to be\n> doing\n> > a linear search, when it could be doing a hash that should be a lot\n> faster.\n>\n> That's a nice thought, but I'm not sure that it'd really be practical.\n> CASE statements in plpgsql are completely general and really behave more\n> like an if/elsif tree than a C-style switch() statement or similar. For\n> one thing, the expression need not use the same variables, could be\n> complex multi-variable conditionals, etc.\n>\n> Figuring out that you could build a dispatch table for a given CASE\n> statement and then building it, storing it, and remembering to use it,\n> wouldn't be cheap.\n>\n> On the other hand, I've actually *wanted* a simpler syntax on occation.\n> I have no idea if there'd be a way to make it work, but this would be\n> kind of nice:\n>\n> CASE OF x -- or whatever\n> WHEN 1 THEN blah blah\n> WHEN 2 THEN blah blah\n> WHEN 3 THEN blah blah\n> END\n>\n> which would be possible to build into a dispatch table by looking at the\n> type of x and the literals used in the overall CASE statement. Even so,\n> there would likely be some number of WHEN conditions required before\n> it'd actually be more efficient to use, though perhaps getting rid of\n> the expression evaluation (if that'd be possible) would make up for it.\n>\n> Thanks,\n>\n> Stephen\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nThere is switch-like sql case:39.6.2.4. Simple CASE\n\nCASE search-expression\n WHEN expression [, expression [ ... ]] THEN\n statements\n [ WHEN expression [, expression [ ... ]] THEN\n statements\n ... ]\n [ ELSE\n statements ]\nEND CASE;It should work like C switch statement.Also, for bulk insert, have you tried \"for each statement\" triggers instead of \"for each row\"?\nThis would look like a lot of inserts and would not be fast in single-row-insert case, but can give you benefit for huge inserts.It should look likeinsert into quotes_2012_09_10 select * from new where cast(new.received_time as date) = '2012-09-10' ;\ninsert into quotes_2012_09_11 select * from new where cast(new.received_time as date) = '2012-09-11' ;\n...2012/12/27 Stephen Frost <[email protected]>\n* Jeff Janes ([email protected]) wrote:\n\n> If the main goal is to make it faster, I'd rather see all of plpgsql get\n> faster, rather than just a special case of partitioning triggers.  For\n> example, right now a CASE <expression> statement with 100 branches is about\n> the same speed as an equivalent list of 100 elsif.  
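(For illustration, the two forms being compared are, say, the following; this is a self-contained toy, with "part" standing in for whatever value a routing trigger would derive from the row:)

    DO $$
    DECLARE
        part int := 2;
    BEGIN
        -- simple CASE ...
        CASE part
            WHEN 1 THEN RAISE NOTICE 'quotes_2012_09_10';
            WHEN 2 THEN RAISE NOTICE 'quotes_2012_09_11';
        END CASE;

        -- ... versus the equivalent ELSIF ladder
        IF part = 1 THEN
            RAISE NOTICE 'quotes_2012_09_10';
        ELSIF part = 2 THEN
            RAISE NOTICE 'quotes_2012_09_11';
        END IF;
    END $$;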
So it seems to be doing\n> a linear search, when it could be doing a hash that should be a lot faster.\n\nThat's a nice thought, but I'm not sure that it'd really be practical.\nCASE statements in plpgsql are completely general and really behave more\nlike an if/elsif tree than a C-style switch() statement or similar.  For\none thing, the expression need not use the same variables, could be\ncomplex multi-variable conditionals, etc.\n\nFiguring out that you could build a dispatch table for a given CASE\nstatement and then building it, storing it, and remembering to use it,\nwouldn't be cheap.\n\nOn the other hand, I've actually *wanted* a simpler syntax on occation.\nI have no idea if there'd be a way to make it work, but this would be\nkind of nice:\n\nCASE OF x -- or whatever\n  WHEN 1 THEN blah blah\n  WHEN 2 THEN blah blah\n  WHEN 3 THEN blah blah\nEND\n\nwhich would be possible to build into a dispatch table by looking at the\ntype of x and the literals used in the overall CASE statement.  Even so,\nthere would likely be some number of WHEN conditions required before\nit'd actually be more efficient to use, though perhaps getting rid of\nthe expression evaluation (if that'd be possible) would make up for it.\n\n        Thanks,\n\n                Stephen\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 28 Dec 2012 14:35:43 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "BTW: If \"select count(*) from new\" is fast, you can even choose the\nstrategy in trigger depending on insert size.\n\n\n2012/12/28 Vitalii Tymchyshyn <[email protected]>\n\n> There is switch-like sql case:\n> 39.6.2.4. Simple CASE\n>\n> CASE search-expression\n> WHEN expression [, expression [ ... ]] THEN\n> statements\n> [ WHEN expression [, expression [ ... ]] THEN\n> statements\n> ... ]\n> [ ELSE\n> statements ]\n> END CASE;\n>\n> It should work like C switch statement.\n>\n> Also, for bulk insert, have you tried \"for each statement\" triggers\n> instead of \"for each row\"?\n> This would look like a lot of inserts and would not be fast in\n> single-row-insert case, but can give you benefit for huge inserts.\n> It should look like\n> insert into quotes_2012_09_10 select * from new where\n> cast(new.received_time as date) = '2012-09-10' ;\n> insert into quotes_2012_09_11 select * from new where\n> cast(new.received_time as date) = '2012-09-11' ;\n> ...\n>\n> 2012/12/27 Stephen Frost <[email protected]>\n>\n>> * Jeff Janes ([email protected]) wrote:\n>> > If the main goal is to make it faster, I'd rather see all of plpgsql get\n>> > faster, rather than just a special case of partitioning triggers. For\n>> > example, right now a CASE <expression> statement with 100 branches is\n>> about\n>> > the same speed as an equivalent list of 100 elsif. So it seems to be\n>> doing\n>> > a linear search, when it could be doing a hash that should be a lot\n>> faster.\n>>\n>> That's a nice thought, but I'm not sure that it'd really be practical.\n>> CASE statements in plpgsql are completely general and really behave more\n>> like an if/elsif tree than a C-style switch() statement or similar. 
For\n>> one thing, the expression need not use the same variables, could be\n>> complex multi-variable conditionals, etc.\n>>\n>> Figuring out that you could build a dispatch table for a given CASE\n>> statement and then building it, storing it, and remembering to use it,\n>> wouldn't be cheap.\n>>\n>> On the other hand, I've actually *wanted* a simpler syntax on occation.\n>> I have no idea if there'd be a way to make it work, but this would be\n>> kind of nice:\n>>\n>> CASE OF x -- or whatever\n>> WHEN 1 THEN blah blah\n>> WHEN 2 THEN blah blah\n>> WHEN 3 THEN blah blah\n>> END\n>>\n>> which would be possible to build into a dispatch table by looking at the\n>> type of x and the literals used in the overall CASE statement. Even so,\n>> there would likely be some number of WHEN conditions required before\n>> it'd actually be more efficient to use, though perhaps getting rid of\n>> the expression evaluation (if that'd be possible) would make up for it.\n>>\n>> Thanks,\n>>\n>> Stephen\n>>\n>\n>\n>\n> --\n> Best regards,\n> Vitalii Tymchyshyn\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nBTW: If \"select count(*) from new\" is fast, you can even choose the strategy in trigger depending on insert size.2012/12/28 Vitalii Tymchyshyn <[email protected]>\nThere is switch-like sql case:\n39.6.2.4. Simple CASE\n\nCASE search-expression\n WHEN expression [, expression [ ... ]] THEN\n statements\n [ WHEN expression [, expression [ ... ]] THEN\n statements\n ... ]\n [ ELSE\n statements ]\nEND CASE;It should work like C switch statement.Also, for bulk insert, have you tried \"for each statement\" triggers instead of \"for each row\"?\nThis would look like a lot of inserts and would not be fast in single-row-insert case, but can give you benefit for huge inserts.It should look likeinsert into quotes_2012_09_10 select * from new where cast(new.received_time as date) = '2012-09-10' ;\ninsert into quotes_2012_09_11 select * from new where cast(new.received_time as date) = '2012-09-11' ;\n...2012/12/27 Stephen Frost <[email protected]>\n* Jeff Janes ([email protected]) wrote:\n\n\n> If the main goal is to make it faster, I'd rather see all of plpgsql get\n> faster, rather than just a special case of partitioning triggers.  For\n> example, right now a CASE <expression> statement with 100 branches is about\n> the same speed as an equivalent list of 100 elsif.  So it seems to be doing\n> a linear search, when it could be doing a hash that should be a lot faster.\n\nThat's a nice thought, but I'm not sure that it'd really be practical.\nCASE statements in plpgsql are completely general and really behave more\nlike an if/elsif tree than a C-style switch() statement or similar.  For\none thing, the expression need not use the same variables, could be\ncomplex multi-variable conditionals, etc.\n\nFiguring out that you could build a dispatch table for a given CASE\nstatement and then building it, storing it, and remembering to use it,\nwouldn't be cheap.\n\nOn the other hand, I've actually *wanted* a simpler syntax on occation.\nI have no idea if there'd be a way to make it work, but this would be\nkind of nice:\n\nCASE OF x -- or whatever\n  WHEN 1 THEN blah blah\n  WHEN 2 THEN blah blah\n  WHEN 3 THEN blah blah\nEND\n\nwhich would be possible to build into a dispatch table by looking at the\ntype of x and the literals used in the overall CASE statement.  
Even so,\nthere would likely be some number of WHEN conditions required before\nit'd actually be more efficient to use, though perhaps getting rid of\nthe expression evaluation (if that'd be possible) would make up for it.\n\n        Thanks,\n\n                Stephen\n-- Best regards, Vitalii Tymchyshyn\n\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 28 Dec 2012 14:39:28 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Vitalii,\n\n* Vitalii Tymchyshyn ([email protected]) wrote:\n> There is switch-like sql case:\n[...]\n> It should work like C switch statement.\n\nIt does and it doesn't. It behaves generally like a C switch statement,\nbut is much more flexible and therefore can't be optimized like a C\nswitch statement can be.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 28 Dec 2012 07:41:41 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Hello\n\n>\n> Also, for bulk insert, have you tried \"for each statement\" triggers instead\n> of \"for each row\"?\n> This would look like a lot of inserts and would not be fast in\n> single-row-insert case, but can give you benefit for huge inserts.\n> It should look like\n> insert into quotes_2012_09_10 select * from new where cast(new.received_time\n> as date) = '2012-09-10' ;\n> insert into quotes_2012_09_11 select * from new where cast(new.received_time\n> as date) = '2012-09-11' ;\n> ...\n\nIt has only one problem - PostgreSQL has not relations NEW and OLD for\nstatements triggers.\n\nRegards\n\nPavel\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Dec 2012 13:48:19 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Why so? Basic form \"case lvalue when rvalue then out ... end\" is much like\nswitch.\nThe \"case when condition then out ... end\" is different, more complex\nbeast, but first one is essentially a switch. If it is now trnasformed into\n\"case when lvalue = rvalue1 then out1 when lvalue=rvalue2 then out2 ...\nend\" then this can be optimized and this would benefit many users, not only\nones that use partitioning.\n\n\n2012/12/28 Stephen Frost <[email protected]>\n\n> Vitalii,\n>\n> * Vitalii Tymchyshyn ([email protected]) wrote:\n> > There is switch-like sql case:\n> [...]\n> > It should work like C switch statement.\n>\n> It does and it doesn't. It behaves generally like a C switch statement,\n> but is much more flexible and therefore can't be optimized like a C\n> switch statement can be.\n>\n> Thanks,\n>\n> Stephen\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nWhy so? Basic form \"case lvalue when rvalue then out ... end\" is much like switch. The \"case when condition then out ... end\" is different, more complex beast, but first one is essentially a switch. If it is now trnasformed into \n\"case when lvalue = rvalue1 then out1 when lvalue=rvalue2 then out2 ... 
end\" then this can be optimized and this would benefit many users, not only ones that use partitioning.\n2012/12/28 Stephen Frost <[email protected]>\nVitalii,\n\n* Vitalii Tymchyshyn ([email protected]) wrote:\n> There is switch-like sql case:\n[...]\n> It should work like C switch statement.\n\nIt does and it doesn't.  It behaves generally like a C switch statement,\nbut is much more flexible and therefore can't be optimized like a C\nswitch statement can be.\n\n        Thanks,\n\n                Stephen\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 28 Dec 2012 15:18:38 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "It's a pity. Why does not it listed in \"Compatibility\" section of create\ntrigger documentation? I think, this makes \"for each statement\" triggers\nnot compatible with SQL99.\n\n\n2012/12/28 Pavel Stehule <[email protected]>\n\n> Hello\n>\n> >\n> > Also, for bulk insert, have you tried \"for each statement\" triggers\n> instead\n> > of \"for each row\"?\n> > This would look like a lot of inserts and would not be fast in\n> > single-row-insert case, but can give you benefit for huge inserts.\n> > It should look like\n> > insert into quotes_2012_09_10 select * from new where\n> cast(new.received_time\n> > as date) = '2012-09-10' ;\n> > insert into quotes_2012_09_11 select * from new where\n> cast(new.received_time\n> > as date) = '2012-09-11' ;\n> > ...\n>\n> It has only one problem - PostgreSQL has not relations NEW and OLD for\n> statements triggers.\n>\n> Regards\n>\n> Pavel\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nIt's a pity. Why does not it listed in \"Compatibility\" section of create trigger documentation? I think, this makes \"for each statement\" triggers not compatible with SQL99.\n2012/12/28 Pavel Stehule <[email protected]>\nHello\n\n>\n> Also, for bulk insert, have you tried \"for each statement\" triggers instead\n> of \"for each row\"?\n> This would look like a lot of inserts and would not be fast in\n> single-row-insert case, but can give you benefit for huge inserts.\n> It should look like\n> insert into quotes_2012_09_10 select * from new where cast(new.received_time\n> as date) = '2012-09-10' ;\n> insert into quotes_2012_09_11 select * from new where cast(new.received_time\n> as date) = '2012-09-11' ;\n> ...\n\nIt has only one problem - PostgreSQL has not relations NEW and OLD for\nstatements triggers.\n\nRegards\n\nPavel\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 28 Dec 2012 15:25:41 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/28 Vitalii Tymchyshyn <[email protected]>:\n> Why so? Basic form \"case lvalue when rvalue then out ... end\" is much like\n> switch.\n> The \"case when condition then out ... end\" is different, more complex beast,\n> but first one is essentially a switch. If it is now trnasformed into\n> \"case when lvalue = rvalue1 then out1 when lvalue=rvalue2 then out2 ... end\"\n> then this can be optimized and this would benefit many users, not only ones\n> that use partitioning.\n\nplease, look to plpgsql source code. PL/pgSQL is too simply and has\nnot own arithmetic unit - all is transformed to SELECTs, has not any\noptimization. 
But is really short and maintainable.\n\nThese SELECTs are evaluated only when it is necessary - but it is\nevaluated by PostgreSQL expression executor - not by PL/pgSQL directly\n- PL/pgSQL cannot process constant by self.\n\nSo any enhancing needs PL/pgSQL redesign and I am not sure, so this\nuse case has accurate benefit, because expression bottleneck is only\none part of partitioning triggers bottleneck. More - if you need\nreally fast code, you can use own code in C - and it be 10x times\nfaster than any optimized PL/pgSQL code. And using C triggers in\nPostgreSQL is not terrible work.\n\nUsing plpgsql row triggers for partitioning is not good idea - it is\njust work around from my perspective, and we should to solve source of\nproblem - missing native support.\n\nRegards\n\nPavel Stehule\n\n\n\n>\n>\n> 2012/12/28 Stephen Frost <[email protected]>\n>>\n>> Vitalii,\n>>\n>> * Vitalii Tymchyshyn ([email protected]) wrote:\n>> > There is switch-like sql case:\n>> [...]\n>> > It should work like C switch statement.\n>>\n>> It does and it doesn't. It behaves generally like a C switch statement,\n>> but is much more flexible and therefore can't be optimized like a C\n>> switch statement can be.\n>>\n>> Thanks,\n>>\n>> Stephen\n>\n>\n>\n>\n> --\n> Best regards,\n> Vitalii Tymchyshyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Dec 2012 14:41:23 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Friday, December 28, 2012, Vitalii Tymchyshyn wrote:\n\n> There is switch-like sql case:\n> 39.6.2.4. Simple CASE\n>\n> CASE search-expression\n> WHEN expression [, expression [ ... ]] THEN\n> statements\n> [ WHEN expression [, expression [ ... ]] THEN\n> statements\n> ... ]\n> [ ELSE\n> statements ]\n> END CASE;\n>\n> It should work like C switch statement.\n>\n>\nI had thought that too, but the catch is that the target expressions do not\nneed to be constants when the function is created. Indeed, they can even\nbe volatile.\n\nCREATE OR REPLACE FUNCTION foo(x integer)\nRETURNS integer AS $$\nBEGIN\ncase x\nwhen 0 then return -5;\nwhen (random()*10)::integer then return 1;\nwhen (random()*10)::integer then return 2;\nwhen (random()*10)::integer then return 3;\nwhen (random()*10)::integer then return 4;\nwhen (random()*10)::integer then return 5;\nwhen (random()*10)::integer then return 6;\nwhen (random()*10)::integer then return 7;\nwhen (random()*10)::integer then return 8;\nwhen (random()*10)::integer then return 9;\nwhen (random()*10)::integer then return 10;\nelse return -6;\n\nCheers,\n\nJeff\n\n>\n\nOn Friday, December 28, 2012, Vitalii Tymchyshyn wrote:There is switch-like sql case:\n39.6.2.4. Simple CASE\n\nCASE search-expression\n WHEN expression [, expression [ ... ]] THEN\n statements\n [ WHEN expression [, expression [ ... ]] THEN\n statements\n ... ]\n [ ELSE\n statements ]\nEND CASE;It should work like C switch statement.I had thought that too, but the catch is that the target expressions do not need to be constants when the function is created.  
Indeed, they can even be volatile.\nCREATE OR REPLACE FUNCTION foo(x integer)RETURNS integer AS $$BEGINcase xwhen 0 then return -5; when (random()*10)::integer then return 1; \nwhen (random()*10)::integer then return 2; when (random()*10)::integer then return 3; when (random()*10)::integer then return 4; when (random()*10)::integer then return 5; \nwhen (random()*10)::integer then return 6; when (random()*10)::integer then return 7; when (random()*10)::integer then return 8; when (random()*10)::integer then return 9; when (random()*10)::integer then return 10; \nelse return -6;Cheers,Jeff", "msg_date": "Fri, 28 Dec 2012 06:05:17 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/28 Vitalii Tymchyshyn <[email protected]>:\n> Why so? Basic form \"case lvalue when rvalue then out ... end\" is much like\n> switch.\n\nSorry, to be honest, I missed that distinction and didn't expect that to\nwork as-is, yet apparently it does. Does it currently perform the same\nas an if/elsif tree or is it implemented to actually use a table lookup?\n\n* Pavel Stehule ([email protected]) wrote:\n> please, look to plpgsql source code. PL/pgSQL is too simply and has\n> not own arithmetic unit - all is transformed to SELECTs, has not any\n> optimization. But is really short and maintainable.\n\nI was thinking we'd actually do this for all CASE statements, those in\nplpgsql and those in regular SQL, if it's possible to do. Hopefully\nit'd be possible to do easily in plpgsql once the SQL-level CASE is\ndone.\n\n> These SELECTs are evaluated only when it is necessary - but it is\n> evaluated by PostgreSQL expression executor - not by PL/pgSQL directly\n> - PL/pgSQL cannot process constant by self.\n\nRight, but I wonder if we could pass the entire CASE tree to the\nexecutor, with essentially pointers to the code blocks which will be\nexecuted, and get back a function which we can call over and over that\ntakes whatever the parameter is and returns the 'right' pointer?\n\n> So any enhancing needs PL/pgSQL redesign and I am not sure, so this\n> use case has accurate benefit, because expression bottleneck is only\n> one part of partitioning triggers bottleneck. More - if you need\n> really fast code, you can use own code in C - and it be 10x times\n> faster than any optimized PL/pgSQL code. And using C triggers in\n> PostgreSQL is not terrible work.\n\nIt's quite a bit of work for people who don't know C or are\n(understandably) concerned about writing things which can easily\nsegfault the entire backend.\n\n> Using plpgsql row triggers for partitioning is not good idea - it is\n> just work around from my perspective, and we should to solve source of\n> problem - missing native support.\n\nI agree that native partitioning would certainly be nice. I was really\nhoping that was going to happen for 9.3, but it seems unlikely now\n(unless I've missed something).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 28 Dec 2012 09:10:29 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "* Jeff Janes ([email protected]) wrote:\n> I had thought that too, but the catch is that the target expressions do not\n> need to be constants when the function is created. 
Indeed, they can even\n> be volatile.\n\nRight, any optimization in this regard would only work in certain\ninstances- eg: when the 'WHEN' components are all constants and the data\ntype is something we can manage, etc, etc.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 28 Dec 2012 09:11:53 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/28 Stephen Frost <[email protected]>:\n> 2012/12/28 Vitalii Tymchyshyn <[email protected]>:\n>> Why so? Basic form \"case lvalue when rvalue then out ... end\" is much like\n>> switch.\n>\n> Sorry, to be honest, I missed that distinction and didn't expect that to\n> work as-is, yet apparently it does. Does it currently perform the same\n> as an if/elsif tree or is it implemented to actually use a table lookup?\n\nboth IF and CASE has very similar implementation - table lookup is not\nused - there are not special path for searching constants\n\n>\n> * Pavel Stehule ([email protected]) wrote:\n>> please, look to plpgsql source code. PL/pgSQL is too simply and has\n>> not own arithmetic unit - all is transformed to SELECTs, has not any\n>> optimization. But is really short and maintainable.\n>\n> I was thinking we'd actually do this for all CASE statements, those in\n> plpgsql and those in regular SQL, if it's possible to do. Hopefully\n> it'd be possible to do easily in plpgsql once the SQL-level CASE is\n> done.\n>\n\nI am not sure - SQL case is not heavy specially optimized too :(\n\nI see only one possible way, do almost work when CASE statement is\nparsed and bypass executor - this can work, but I afraid so it can\nslowdown first start and some use cases where is not too much paths,\nbecause we have to check all paths before executions.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Dec 2012 15:44:17 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "On Thursday, December 20, 2012, Scott Marlowe wrote:\n\n>\n> 3: Someone above mentioned rules being faster than triggers. In my\n> experience they're WAY slower than triggers but maybe that was just on\n> the older pg versions (8.3 and lower) we were doing this on. I'd be\n> interested in seeing some benchmarks if rules have gotten faster or I\n> was just doing it wrong.\n>\n\nIt apparently depends on how you use them.\n\nTo load 1e6 rows into the parent, redistributing to 100 partitions (rows\nevenly distributed over partitions) using RULEs, it took 14.5 seconds using\na \"insert into foo select * from foo_tmp\" (not counting the time it took to\nprepopulate the foo_tmp via \\copy).\n\nThis is about 25% faster than the 18.4 seconds it took to load the same\ndata via \\copy using a plpgsql trigger which was structured with nested IF\n... 
ELSE...END IF that do a binary search over the partitions.\n\nHowever if I didn't use \\copy or \"insert into...select\", but rather used a\nPerl loop invoking normal single-row inserts (but all in a single\ntransaction) with DBD::Pg, then the RULEs took 596 seconds, an astonishing\nseven times slower than the 83 seconds it took the previously mentioned\nplpgsql trigger to do the same thing.\n\nThis was under 9.1.7.\n\nIn 9.2.2, it seems to get 3 times worse yet for RULEs in the insert loop.\n But that result seems hard to believe, so I am repeating it.\n\nCheers\n\nJeff\n\n>\n>\n\nOn Thursday, December 20, 2012, Scott Marlowe wrote:\n3: Someone above mentioned rules being faster than triggers.  In my\nexperience they're WAY slower than triggers but maybe that was just on\nthe older pg versions (8.3 and lower) we were doing this on.  I'd be\ninterested in seeing some benchmarks if rules have gotten faster or I\nwas just doing it wrong.It apparently depends on how you use them.To load 1e6 rows into the parent, redistributing to 100 partitions (rows evenly distributed over partitions) using RULEs, it took 14.5 seconds using a \"insert into foo select * from foo_tmp\" (not counting the time it took to prepopulate the foo_tmp via \\copy).\nThis is about 25% faster than the 18.4 seconds it took to load the same data via \\copy using a plpgsql trigger which was structured with nested IF ... ELSE...END IF that do a binary search over the partitions.\n However if I didn't use \\copy or \"insert into...select\", but rather used a Perl loop invoking normal single-row inserts (but all in a single transaction) with DBD::Pg, then the RULEs took 596 seconds, an astonishing seven times slower than the 83 seconds it took the previously mentioned plpgsql trigger to do the same thing.\nThis was under 9.1.7.  In 9.2.2, it seems to get 3 times worse yet for RULEs in the insert loop.  But that result seems hard to believe, so I am repeating it.\nCheersJeff", "msg_date": "Fri, 28 Dec 2012 08:30:30 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "UNSUBSCRIBE\r\n\r\n \r\n\r\nDe: [email protected] [mailto:[email protected]] Em nome de Jeff Janes\r\nEnviada em: sexta-feira, 28 de dezembro de 2012 14:31\r\nPara: Scott Marlowe\r\nCc: Tom Lane; Charles Gomes; Ondrej Ivanič; [email protected]\r\nAssunto: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\r\n\r\n \r\n\r\n\r\n\r\nOn Thursday, December 20, 2012, Scott Marlowe wrote:\r\n\r\n\r\n3: Someone above mentioned rules being faster than triggers. In my\r\nexperience they're WAY slower than triggers but maybe that was just on\r\nthe older pg versions (8.3 and lower) we were doing this on. I'd be\r\ninterested in seeing some benchmarks if rules have gotten faster or I\r\nwas just doing it wrong.\r\n\r\n \r\n\r\nIt apparently depends on how you use them.\r\n\r\n \r\n\r\nTo load 1e6 rows into the parent, redistributing to 100 partitions (rows evenly distributed over partitions) using RULEs, it took 14.5 seconds using a \"insert into foo select * from foo_tmp\" (not counting the time it took to prepopulate the foo_tmp via \\copy).\r\n\r\n \r\n\r\nThis is about 25% faster than the 18.4 seconds it took to load the same data via \\copy using a plpgsql trigger which was structured with nested IF ... 
ELSE...END IF that do a binary search over the partitions.\r\n\r\nHowever if I didn't use \\copy or \"insert into...select\", but rather used a Perl loop invoking normal single-row inserts (but all in a single transaction) with DBD::Pg, then the RULEs took 596 seconds, an astonishing seven times slower than the 83 seconds it took the previously mentioned plpgsql trigger to do the same thing.\r\n\r\n \r\n\r\nThis was under 9.1.7. \r\n\r\n \r\n\r\nIn 9.2.2, it seems to get 3 times worse yet for RULEs in the insert loop. But that result seems hard to believe, so I am repeating it.\r\n\r\n \r\n\r\nCheers\r\n\r\n \r\n\r\nJeff\r\n\r\n\t \r\n\r\n\nUNSUBSCRIBE De: [email protected] [mailto:[email protected]] Em nome de Jeff JanesEnviada em: sexta-feira, 28 de dezembro de 2012 14:31Para: Scott MarloweCc: Tom Lane; Charles Gomes; Ondrej Ivanič; [email protected]: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table On Thursday, December 20, 2012, Scott Marlowe wrote:3: Someone above mentioned rules being faster than triggers. In myexperience they're WAY slower than triggers but maybe that was just onthe older pg versions (8.3 and lower) we were doing this on. I'd beinterested in seeing some benchmarks if rules have gotten faster or Iwas just doing it wrong. It apparently depends on how you use them. To load 1e6 rows into the parent, redistributing to 100 partitions (rows evenly distributed over partitions) using RULEs, it took 14.5 seconds using a \"insert into foo select * from foo_tmp\" (not counting the time it took to prepopulate the foo_tmp via \\copy). This is about 25% faster than the 18.4 seconds it took to load the same data via \\copy using a plpgsql trigger which was structured with nested IF ... ELSE...END IF that do a binary search over the partitions.However if I didn't use \\copy or \"insert into...select\", but rather used a Perl loop invoking normal single-row inserts (but all in a single transaction) with DBD::Pg, then the RULEs took 596 seconds, an astonishing seven times slower than the 83 seconds it took the previously mentioned plpgsql trigger to do the same thing. This was under 9.1.7.  In 9.2.2, it seems to get 3 times worse yet for RULEs in the insert loop. But that result seems hard to believe, so I am repeating it. Cheers Jeff", "msg_date": "Fri, 28 Dec 2012 16:00:29 -0200", "msg_from": "\"Luciano Ernesto da Silva\" <[email protected]>", "msg_from_op": false, "msg_subject": "RES: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Hello\n\n2012/12/28 Luciano Ernesto da Silva <[email protected]>:\n> UNSUBSCRIBE\n>\n>\n>\n> De: [email protected]\n> [mailto:[email protected]] Em nome de Jeff Janes\n> Enviada em: sexta-feira, 28 de dezembro de 2012 14:31\n> Para: Scott Marlowe\n> Cc: Tom Lane; Charles Gomes; Ondrej Ivanič; [email protected]\n> Assunto: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table\n>\n>\n>\n>\n>\n> On Thursday, December 20, 2012, Scott Marlowe wrote:\n>\n>\n> 3: Someone above mentioned rules being faster than triggers. In my\n> experience they're WAY slower than triggers but maybe that was just on\n> the older pg versions (8.3 and lower) we were doing this on. I'd be\n> interested in seeing some benchmarks if rules have gotten faster or I\n> was just doing it wrong.\n>\n>\n\nI am not sure, but I expect so speed or slowness of rules depends\nprimary on number of partitions. 
More significantly than triggers.\n\nRegards\n\nPavel\n\n>\n> It apparently depends on how you use them.\n>\n>\n>\n> To load 1e6 rows into the parent, redistributing to 100 partitions (rows\n> evenly distributed over partitions) using RULEs, it took 14.5 seconds using\n> a \"insert into foo select * from foo_tmp\" (not counting the time it took to\n> prepopulate the foo_tmp via \\copy).\n>\n>\n>\n> This is about 25% faster than the 18.4 seconds it took to load the same data\n> via \\copy using a plpgsql trigger which was structured with nested IF ...\n> ELSE...END IF that do a binary search over the partitions.\n>\n> However if I didn't use \\copy or \"insert into...select\", but rather used a\n> Perl loop invoking normal single-row inserts (but all in a single\n> transaction) with DBD::Pg, then the RULEs took 596 seconds, an astonishing\n> seven times slower than the 83 seconds it took the previously mentioned\n> plpgsql trigger to do the same thing.\n>\n>\n>\n> This was under 9.1.7.\n>\n>\n>\n> In 9.2.2, it seems to get 3 times worse yet for RULEs in the insert loop.\n> But that result seems hard to believe, so I am repeating it.\n>\n>\n>\n> Cheers\n>\n>\n>\n> Jeff\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Dec 2012 19:05:04 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "2012/12/27 Charles Gomes <[email protected]>\n\n> So far that's what I got http://www.widesol.com/~charles/pgsql/partition.c\n> I had some issues as He uses HeapTuples and on 9.2 I see a Slot.\n>\n\nHi Charles,\nI copied your C code partition.c and am trying to test it.\n\nFor compiling you suggest :\n\n...\ngcc -I \"./\" -fpic -c trigger.c\n...\n\nWhere comes the file *trigger.c* from ? Is that the one you find in the\nsource directory\n./src/backend/commands/ ?\n\nThanks a lot\nBest regards\nAli\n\n2012/12/27 Charles Gomes <[email protected]>\nSo far that's what I got http://www.widesol.com/~charles/pgsql/partition.c\nI had some issues as He uses HeapTuples and on 9.2 I see a Slot.Hi Charles,I copied your C code partition.c and am trying to test it.For compiling you suggest :...gcc -I \"./\" -fpic -c trigger.c\n\n\n...Where comes the file trigger.c from ? Is that the one you find in the source directory ./src/backend/commands/    ?Thanks a lot\nBest regards\nAli", "msg_date": "Thu, 17 Jan 2013 15:38:14 +0100", "msg_from": "Ali Pouya <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "________________________________\n> Date: Thu, 17 Jan 2013 15:38:14 +0100 \n> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table \n> From: [email protected] \n> To: [email protected] \n> CC: [email protected] \n> \n> \n> 2012/12/27 Charles Gomes \n> <[email protected]<mailto:[email protected]>> \n> So far that's what I got http://www.widesol.com/~charles/pgsql/partition.c \n> I had some issues as He uses HeapTuples and on 9.2 I see a Slot. \n> \n> Hi Charles, \n> I copied your C code partition.c and am trying to test it. \n> \n> For compiling you suggest : \n> \n> ... \n> gcc -I \"./\" -fpic -c trigger.c \n> ... \n> \n> Where comes the file trigger.c from ? Is that the one you find in the \n> source directory \n> ./src/backend/commands/ ? 
\n> \n> Thanks a lot \n> Best regards \n> Ali \n> \n> \n>\n\nAli,\nYou can save the source as partition.c and use:\n\ngcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -DREFINT_VERBOSE -I. -I. -I\"/usr/pgsql-9.2/include/server/\" -D_GNU_SOURCE   -c -o partition.o partition.c\n \ngcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags -L/usr/pgsql-9.2/lib -lpgport  -shared -o /usr/pgsql-9.2/lib/partition.so\n \nTo Compile you must have postgresql-devel packages.\n\nI've added everything to github:\nhttps://github.com/charlesrg/pgsql_partition/blob/master/partition.c\n\nFor more info check\nhttp://www.charlesrg.com/linux/71-postgresql-partitioning-the-database-the-fastest-way \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Jan 2013 10:01:31 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" }, { "msg_contents": "Ali,\n\n> You can save the source as partition.c and use:\n>\n> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute\n> -Wformat-security -fno-strict-aliasing -fwrapv -fpic -DREFINT_VERBOSE -I.\n> -I. -I\"/usr/pgsql-9.2/include/server/\" -D_GNU_SOURCE -c -o partition.o\n> partition.c\n>\n> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute\n> -Wformat-security -fno-strict-aliasing -fwrapv -fpic -Wl,--as-needed\n> -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags -L/usr/pgsql-9.2/lib\n> -lpgport -shared -o /usr/pgsql-9.2/lib/partition.so\n>\n> To Compile you must have postgresql-devel packages.\n>\n> I've added everything to github:\n> https://github.com/charlesrg/pgsql_partition/blob/master/partition.c\n>\n> For more info check\n>\n> http://www.charlesrg.com/linux/71-postgresql-partitioning-the-database-the-fastest-way\n\nThanks Charles,\nNow the compilation is OK.\nI'll test and feed back more information if any.\nbest regards\nAli\n\nAli,\nYou can save the source as partition.c and use:\n\ngcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -DREFINT_VERBOSE -I. -I. 
-I\"/usr/pgsql-9.2/include/server/\" -D_GNU_SOURCE   -c -o partition.o partition.c\n\n \ngcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags -L/usr/pgsql-9.2/lib -lpgport  -shared -o /usr/pgsql-9.2/lib/partition.so\n\n \nTo Compile you must have postgresql-devel packages.\n\nI've added everything to github:\nhttps://github.com/charlesrg/pgsql_partition/blob/master/partition.c\n\nFor more info check\nhttp://www.charlesrg.com/linux/71-postgresql-partitioning-the-database-the-fastest-way                                    \nThanks Charles,Now the compilation is OK.I'll test and feed back more information if any.best regardsAli", "msg_date": "Thu, 17 Jan 2013 17:03:50 +0100", "msg_from": "Ali Pouya <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Bulk Insert to Partitioned Table" } ]
[ { "msg_contents": "Ghislain ROUVIGNAC wrote:\n\n>> I would leave default_statistics_target alone unless you see a lot of\n>> estimates which are off by more than an order of magnitude. Even then, it\n>> is often better to set a higher value for a few individual columns than for\n>> everything.\n> \n> \n> We had an issue with a customer where we had to increase the statistics\n> parameter for a primary key.\n> So I'd like to know if there is a way to identify for which column we have\n> to change the statistics.\n\nI don't know a better way than to investigate queries which seem to\nbe running longer than you would expect, and look for cases where\nEXPLAIN ANALYZE shows an estimated row count which is off from\nactual by enough to cause a problem. Sometimes this is caused by\ncorrelations between values in different columns, in which case a\nhigher target is not likely to help; but sometimes it's a matter\nthat there is an uneven distribution among values not included in\nthe \"most common values\", in which case boosting the target to\nstore more values and finer-grained information on ranges will be\nexactly what you need.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 21 Dec 2012 10:34:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow queries after vacuum analyze" } ]
[ { "msg_contents": "Hello\n\nplease do not consider this email as an yet another question how to speed up writing.\n\nThe situation is different:\n\nMy algorithm stores after the computation the result as tuples in a DB.\nThe tuples in addition to normal values (e.g. a,b) , contains sql statements that fetch values (for instance the geometry attribute) from another table (e.g. orig_table).\n\ne.g. \n\nINSERT INTO dest_table (\n Select a,b, s.geometry,s.length from orig_table s where s.id=?\n)\n\nThe number of inserts depends on the size of the result and vary from 10,000 to 1,000,000.\n\nMy question is: how can I speed up such inserts?\n\nOnly COPY statements want work, since I need additional values\nInsert statements takes long time (even if using Bulk)\n\nWhat do you suggest me in such a situation?\n\nWould it be better to perform?\n- first use COPY to store values in new table\n- second update the new table with values from origin table\n\n\nthanks for your hints / suggestions\n\ncheers Markus\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 14:10:40 +0100", "msg_from": "Markus Innerebner <[email protected]>", "msg_from_op": true, "msg_subject": "Improve performance for writing " }, { "msg_contents": "Markus,\n\nHave you looked over here:\nhttp://www.postgresql.org/docs/9.2/static/populate.html\n\n\n----------------------------------------\n> From: [email protected]\n> Subject: [PERFORM] Improve performance for writing\n> Date: Thu, 27 Dec 2012 14:10:40 +0100\n> To: [email protected]\n>\n> Hello\n>\n> please do not consider this email as an yet another question how to speed up writing.\n>\n> The situation is different:\n>\n> My algorithm stores after the computation the result as tuples in a DB.\n> The tuples in addition to normal values (e.g. a,b) , contains sql statements that fetch values (for instance the geometry attribute) from another table (e.g. orig_table).\n>\n> e.g.\n>\n> INSERT INTO dest_table (\n> Select a,b, s.geometry,s.length from orig_table s where s.id=?\n> )\n>\n> The number of inserts depends on the size of the result and vary from 10,000 to 1,000,000.\n>\n> My question is: how can I speed up such inserts?\n>\n> Only COPY statements want work, since I need additional values\n> Insert statements takes long time (even if using Bulk)\n>\n> What do you suggest me in such a situation?\n>\n> Would it be better to perform?\n> - first use COPY to store values in new table\n> - second update the new table with values from origin table\n>\n>\n> thanks for your hints / suggestions\n>\n> cheers Markus\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 09:45:38 -0500", "msg_from": "Charles Gomes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve performance for writing" } ]
[ { "msg_contents": "We just upgraded from 8.3 to 9.1 and we're seeing some performance\nproblems. When we EXPLAIN ANALYZE our queries the explain result claim\nthat the queries are reasonably fast but the wall clock time is way way\nlonger. Does anyone know why this might happen?\n\nLike so:\ndb=>\\timing\ndb=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n\nThe plan is sensible. The estimates are sensible. The actual DB time\nreads like it is very sensible. But the wall clock time is like 11 seconds\nand the \\timing report confirms it.\n\nAny ideas?\n\nThanks!\n\nNik\n\nWe just upgraded from 8.3 to 9.1 and we're seeing some performance problems.  When we EXPLAIN ANALYZE our queries the explain result claim that the queries are reasonably fast but the wall clock time is way way longer.  Does anyone know why this might happen?\nLike so:db=>\\timingdb=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;The plan is sensible.  The estimates are sensible.  The actual DB time reads like it is very sensible.  But the wall clock time is like 11 seconds and the \\timing report confirms it.\nAny ideas?Thanks!Nik", "msg_date": "Thu, 27 Dec 2012 12:10:11 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "explain analyze reports that my queries are fast but they run very\n\tslowly" }, { "msg_contents": "\nLe 2012-12-27 à 12:10, Nikolas Everett a écrit :\n\n> We just upgraded from 8.3 to 9.1 and we're seeing some performance problems. When we EXPLAIN ANALYZE our queries the explain result claim that the queries are reasonably fast but the wall clock time is way way longer. Does anyone know why this might happen?\n> \n> Like so:\n> db=>\\timing\n> db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n> \n> The plan is sensible. The estimates are sensible. The actual DB time reads like it is very sensible. But the wall clock time is like 11 seconds and the \\timing report confirms it.\n> \n> Any ideas?\n\nCould you post the actual plans? On both versions? That would help a lot.\n\nAlso, http://explain.depesz.com/ helps readability.\n\nBye,\nFrançois\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 12:21:18 -0500", "msg_from": "=?iso-8859-1?Q?Fran=E7ois_Beausoleil?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports that my queries are fast but they run\n\tvery slowly" }, { "msg_contents": "On 27/12/12 17:21, François Beausoleil wrote:\n>\n> Le 2012-12-27 à 12:10, Nikolas Everett a écrit :\n>\n>> We just upgraded from 8.3 to 9.1 and we're seeing some performance problems. When we EXPLAIN ANALYZE our queries the explain result claim that the queries are reasonably fast but the wall clock time is way way longer. Does anyone know why this might happen?\n>>\n\nIs it possible you missed an optimisation setting in the migration \nprocess? 
I made that mistake, and much later found performance was \nsomewhat degraded (but surprisingly not as much as I'd expected) by my \nhaving failed to set effective_cache_size.\n\nAlso, if you just did a dump/restore, it might help to run Analyse once\n(it seems that Analyse is normally run automatically via vacuum, but the \nfirst time you insert the data, it may not happen).\n\nA side-effect of Analyse it that it will pull all the tables into the OS \nmemory cache (or try to) - which may give significantly faster results \n(try running the same query twice in succession: it's often 5x faster \nthe 2nd time).\n\nHTH,\n\nRichard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 17:35:39 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "Thanks!\n\nhttp://explain.depesz.com/s/yfs\n\nLooks like we're running a load of about 6. The machines have two physical\ncores hyperthreaded to 32 cores.\n\nInteresting - the data is stored on nfs on a netapp. We don't seem to have\na ton of nfs traffic.\n\nAlso we've got shared memory set to 48 gigs which is comfortably less than\nthe 146 gigs in the machine.\n\n\nOn Thu, Dec 27, 2012 at 12:21 PM, François Beausoleil\n<[email protected]>wrote:\n\n>\n> Le 2012-12-27 à 12:10, Nikolas Everett a écrit :\n>\n> > We just upgraded from 8.3 to 9.1 and we're seeing some performance\n> problems. When we EXPLAIN ANALYZE our queries the explain result claim\n> that the queries are reasonably fast but the wall clock time is way way\n> longer. Does anyone know why this might happen?\n> >\n> > Like so:\n> > db=>\\timing\n> > db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n> >\n> > The plan is sensible. The estimates are sensible. The actual DB time\n> reads like it is very sensible. But the wall clock time is like 11 seconds\n> and the \\timing report confirms it.\n> >\n> > Any ideas?\n>\n> Could you post the actual plans? On both versions? That would help a lot.\n>\n> Also, http://explain.depesz.com/ helps readability.\n>\n> Bye,\n> François\n>\n>\n\nThanks!http://explain.depesz.com/s/yfsLooks like we're running a load of about 6.  The machines have two physical cores hyperthreaded to 32 cores.\nInteresting - the data is stored on nfs on a netapp.  We don't seem to have a ton of nfs traffic.Also we've got shared memory set to 48 gigs which is comfortably less than the 146 gigs in the machine.\nOn Thu, Dec 27, 2012 at 12:21 PM, François Beausoleil <[email protected]> wrote:\n\nLe 2012-12-27 à 12:10, Nikolas Everett a écrit :\n\n> We just upgraded from 8.3 to 9.1 and we're seeing some performance problems.  When we EXPLAIN ANALYZE our queries the explain result claim that the queries are reasonably fast but the wall clock time is way way longer.  Does anyone know why this might happen?\n\n\n>\n> Like so:\n> db=>\\timing\n> db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n>\n> The plan is sensible.  The estimates are sensible.  The actual DB time reads like it is very sensible.  But the wall clock time is like 11 seconds and the \\timing report confirms it.\n>\n> Any ideas?\n\nCould you post the actual plans? On both versions? 
That would help a lot.\n\nAlso, http://explain.depesz.com/ helps readability.\n\nBye,\nFrançois", "msg_date": "Thu, 27 Dec 2012 12:45:59 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "New news - the hot slave seems to be performing as expected with no long\npauses.\n\nIt looks like we're using an archive_timeout of 60 seconds and default\ncheckout_timeout and checkpoint_completion_target. I didn't do any of the\nresearch on this. It seems like we're asking postgres to clear all of the\ndirty buffers every 60 seconds. With 48 gigs of shared buffers we could\nhave quite a few buffers to clear. Is there some place I could check on\nhow all that is going?\n\nOn Thu, Dec 27, 2012 at 12:45 PM, Nikolas Everett <[email protected]> wrote:\n\n> p\n\nNew news - the hot slave seems to be performing as expected with no long pauses.It looks like we're using an archive_timeout of 60 seconds and default checkout_timeout and checkpoint_completion_target.  I didn't do any of the research on this.  It seems like we're asking postgres to clear all of the dirty buffers every 60 seconds.  With 48 gigs of shared buffers we could have quite a few buffers to clear.  Is there some place I could check on how all that is going?\nOn Thu, Dec 27, 2012 at 12:45 PM, Nikolas Everett <[email protected]> wrote:\n\np", "msg_date": "Thu, 27 Dec 2012 12:58:33 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "Actually that last paragraph doesn't make much sense. Please ignore it.\n\n\nOn Thu, Dec 27, 2012 at 12:58 PM, Nikolas Everett <[email protected]> wrote:\n\n> New news - the hot slave seems to be performing as expected with no long\n> pauses.\n>\n> It looks like we're using an archive_timeout of 60 seconds and default\n> checkout_timeout and checkpoint_completion_target. I didn't do any of the\n> research on this. It seems like we're asking postgres to clear all of the\n> dirty buffers every 60 seconds. With 48 gigs of shared buffers we could\n> have quite a few buffers to clear. Is there some place I could check on\n> how all that is going?\n>\n> On Thu, Dec 27, 2012 at 12:45 PM, Nikolas Everett <[email protected]>wrote:\n>\n>> p\n>\n>\n>\n\nActually that last paragraph doesn't make much sense.  Please ignore it.On Thu, Dec 27, 2012 at 12:58 PM, Nikolas Everett <[email protected]> wrote:\nNew news - the hot slave seems to be performing as expected with no long pauses.\nIt looks like we're using an archive_timeout of 60 seconds and default checkout_timeout and checkpoint_completion_target.  I didn't do any of the research on this.  It seems like we're asking postgres to clear all of the dirty buffers every 60 seconds.  With 48 gigs of shared buffers we could have quite a few buffers to clear.  Is there some place I could check on how all that is going?\nOn Thu, Dec 27, 2012 at 12:45 PM, Nikolas Everett <[email protected]> wrote:\n\n\np", "msg_date": "Thu, 27 Dec 2012 13:00:24 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "Nikolas Everett <[email protected]> writes:\n> We just upgraded from 8.3 to 9.1 and we're seeing some performance\n> problems. 
When we EXPLAIN ANALYZE our queries the explain result claim\n> that the queries are reasonably fast but the wall clock time is way way\n> longer. Does anyone know why this might happen?\n\n> Like so:\n> db=>\\timing\n> db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n\n> The plan is sensible. The estimates are sensible. The actual DB time\n> reads like it is very sensible. But the wall clock time is like 11 seconds\n> and the \\timing report confirms it.\n\nSeems like the extra time would have to be in parsing/planning, or in\nwaiting to acquire AccessShareLock on the table. It's hard to believe\nthe former for such a simple query, unless the table has got thousands\nof indexes or something silly like that. Lock waits are surely possible\nif there is something else contending for exclusive lock on the table,\nbut it's hard to see how the wait time would be so consistent.\n\nBTW, the explain.depesz.com link you posted clearly does not correspond\nto the above query (it's not doing a MAX), so another possibility is\nconfusion about what query is really causing trouble. We've seen people\nremove essential details before while trying to anonymize their query.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 14:21:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports that my queries are fast but they run\n\tvery slowly" }, { "msg_contents": "Sorry for the confusion around the queries. Both queries are causing\ntrouble. We've noticed that just EXPLAINING the very simple queries takes\nforever.\n\nAfter more digging it looks like this table has an inordinate number\nof indices (10 ish). There a whole buch of conditional indicies for other\ncolumns that we're not checking. The particular column that is causing us\ntrouble exists in both a regular (con_id) and a composite index (con_id,\nsomthing_else).\n\nWe checked on locks and don't see any ungranted locks. Would waiting on\nthe AccessShareLock not appear in pg_locks?\n\nThanks!\n\nNik\n\n\nOn Thu, Dec 27, 2012 at 2:21 PM, Tom Lane <[email protected]> wrote:\n\n> Nikolas Everett <[email protected]> writes:\n> > We just upgraded from 8.3 to 9.1 and we're seeing some performance\n> > problems. When we EXPLAIN ANALYZE our queries the explain result claim\n> > that the queries are reasonably fast but the wall clock time is way way\n> > longer. Does anyone know why this might happen?\n>\n> > Like so:\n> > db=>\\timing\n> > db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n>\n> > The plan is sensible. The estimates are sensible. The actual DB time\n> > reads like it is very sensible. But the wall clock time is like 11\n> seconds\n> > and the \\timing report confirms it.\n>\n> Seems like the extra time would have to be in parsing/planning, or in\n> waiting to acquire AccessShareLock on the table. It's hard to believe\n> the former for such a simple query, unless the table has got thousands\n> of indexes or something silly like that. 
Lock waits are surely possible\n> if there is something else contending for exclusive lock on the table,\n> but it's hard to see how the wait time would be so consistent.\n>\n> BTW, the explain.depesz.com link you posted clearly does not correspond\n> to the above query (it's not doing a MAX), so another possibility is\n> confusion about what query is really causing trouble. We've seen people\n> remove essential details before while trying to anonymize their query.\n>\n> regards, tom lane\n>\n\nSorry for the confusion around the queries.  Both queries are causing trouble.  We've noticed that just EXPLAINING the very simple queries takes forever.After more digging it looks like this table has an inordinate number of indices (10 ish).  There a whole buch of conditional indicies for other columns that we're not checking.  The particular column that is causing us trouble exists in both a regular (con_id) and a composite index (con_id, somthing_else).\nWe checked on locks and don't see any ungranted locks.  Would waiting on the AccessShareLock not appear in pg_locks?Thanks!\nNikOn Thu, Dec 27, 2012 at 2:21 PM, Tom Lane <[email protected]> wrote:\nNikolas Everett <[email protected]> writes:\n\n\n> We just upgraded from 8.3 to 9.1 and we're seeing some performance\n> problems.  When we EXPLAIN ANALYZE our queries the explain result claim\n> that the queries are reasonably fast but the wall clock time is way way\n> longer.  Does anyone know why this might happen?\n\n> Like so:\n> db=>\\timing\n> db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n\n> The plan is sensible.  The estimates are sensible.  The actual DB time\n> reads like it is very sensible.  But the wall clock time is like 11 seconds\n> and the \\timing report confirms it.\n\nSeems like the extra time would have to be in parsing/planning, or in\nwaiting to acquire AccessShareLock on the table.  It's hard to believe\nthe former for such a simple query, unless the table has got thousands\nof indexes or something silly like that.  Lock waits are surely possible\nif there is something else contending for exclusive lock on the table,\nbut it's hard to see how the wait time would be so consistent.\n\nBTW, the explain.depesz.com link you posted clearly does not correspond\nto the above query (it's not doing a MAX), so another possibility is\nconfusion about what query is really causing trouble.  We've seen people\nremove essential details before while trying to anonymize their query.\n\n                        regards, tom lane", "msg_date": "Thu, 27 Dec 2012 14:28:11 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "Another other thing - the query seems to get faster after the first time we\nplan it. I'm not sure that this is the case but I think it might be.\n\n\nOn Thu, Dec 27, 2012 at 2:28 PM, Nikolas Everett <[email protected]> wrote:\n\n> Sorry for the confusion around the queries. Both queries are causing\n> trouble. We've noticed that just EXPLAINING the very simple queries takes\n> forever.\n>\n> After more digging it looks like this table has an inordinate number\n> of indices (10 ish). There a whole buch of conditional indicies for other\n> columns that we're not checking. 
The particular column that is causing us\n> trouble exists in both a regular (con_id) and a composite index (con_id,\n> somthing_else).\n>\n> We checked on locks and don't see any ungranted locks. Would waiting on\n> the AccessShareLock not appear in pg_locks?\n>\n> Thanks!\n>\n> Nik\n>\n>\n> On Thu, Dec 27, 2012 at 2:21 PM, Tom Lane <[email protected]> wrote:\n>\n>> Nikolas Everett <[email protected]> writes:\n>> > We just upgraded from 8.3 to 9.1 and we're seeing some performance\n>> > problems. When we EXPLAIN ANALYZE our queries the explain result claim\n>> > that the queries are reasonably fast but the wall clock time is way way\n>> > longer. Does anyone know why this might happen?\n>>\n>> > Like so:\n>> > db=>\\timing\n>> > db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n>>\n>> > The plan is sensible. The estimates are sensible. The actual DB time\n>> > reads like it is very sensible. But the wall clock time is like 11\n>> seconds\n>> > and the \\timing report confirms it.\n>>\n>> Seems like the extra time would have to be in parsing/planning, or in\n>> waiting to acquire AccessShareLock on the table. It's hard to believe\n>> the former for such a simple query, unless the table has got thousands\n>> of indexes or something silly like that. Lock waits are surely possible\n>> if there is something else contending for exclusive lock on the table,\n>> but it's hard to see how the wait time would be so consistent.\n>>\n>> BTW, the explain.depesz.com link you posted clearly does not correspond\n>> to the above query (it's not doing a MAX), so another possibility is\n>> confusion about what query is really causing trouble. We've seen people\n>> remove essential details before while trying to anonymize their query.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nAnother other thing - the query seems to get faster after the first time we plan it.  I'm not sure that this is the case but I think it might be.\n\nOn Thu, Dec 27, 2012 at 2:28 PM, Nikolas Everett <[email protected]> wrote:\nSorry for the confusion around the queries.  Both queries are causing trouble.  We've noticed that just EXPLAINING the very simple queries takes forever.After more digging it looks like this table has an inordinate number of indices (10 ish).  There a whole buch of conditional indicies for other columns that we're not checking.  The particular column that is causing us trouble exists in both a regular (con_id) and a composite index (con_id, somthing_else).\nWe checked on locks and don't see any ungranted locks.  Would waiting on the AccessShareLock not appear in pg_locks?Thanks!\nNikOn Thu, Dec 27, 2012 at 2:21 PM, Tom Lane <[email protected]> wrote:\nNikolas Everett <[email protected]> writes:\n\n\n\n> We just upgraded from 8.3 to 9.1 and we're seeing some performance\n> problems.  When we EXPLAIN ANALYZE our queries the explain result claim\n> that the queries are reasonably fast but the wall clock time is way way\n> longer.  Does anyone know why this might happen?\n\n> Like so:\n> db=>\\timing\n> db=>EXPLAIN ANALYZE SELECT max(id) FROM foo WHERE blah_id = 1209123;\n\n> The plan is sensible.  The estimates are sensible.  The actual DB time\n> reads like it is very sensible.  But the wall clock time is like 11 seconds\n> and the \\timing report confirms it.\n\nSeems like the extra time would have to be in parsing/planning, or in\nwaiting to acquire AccessShareLock on the table.  
It's hard to believe\nthe former for such a simple query, unless the table has got thousands\nof indexes or something silly like that.  Lock waits are surely possible\nif there is something else contending for exclusive lock on the table,\nbut it's hard to see how the wait time would be so consistent.\n\nBTW, the explain.depesz.com link you posted clearly does not correspond\nto the above query (it's not doing a MAX), so another possibility is\nconfusion about what query is really causing trouble.  We've seen people\nremove essential details before while trying to anonymize their query.\n\n                        regards, tom lane", "msg_date": "Thu, 27 Dec 2012 14:29:29 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "Nikolas Everett <[email protected]> writes:\n> After more digging it looks like this table has an inordinate number\n> of indices (10 ish).\n\n10 doesn't sound like a lot.\n\n> There a whole buch of conditional indicies for other\n> columns that we're not checking. The particular column that is causing us\n> trouble exists in both a regular (con_id) and a composite index (con_id,\n> somthing_else).\n\nYou're not being at all clear here. Are you trying to say that only\nqueries involving \"WHERE col = constant\" for a particular column seem\nto be slow? If so, maybe the column has a weird datatype or a wildly\nout of line statistics target? (Still hard to see how you get to 11-ish\nseconds in planning, though.)\n\nOne thing you might do is watch the backend process in \"top\" or local\nequivalent, and see if it's spending most of the 11 seconds sleeping, or\naccumulating CPU time, or accumulating I/O. That info would eliminate\na lot of possibilities.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 14:42:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports that my queries are fast but they run\n\tvery slowly" }, { "msg_contents": "Nikolas Everett <[email protected]> writes:\n> We straced the backend during the explain and it looked like the open\n> commands were taking several seconds each.\n\nKind of makes me wonder if you have a whole lot of tables (\"whole lot\"\nin this context probably means tens of thousands) and are storing the\ndatabase on a filesystem that doesn't scale well to lots of files in one\ndirectory. 
If that's the explanation, the reason the 8.3 installation\nwas okay was likely that it was stored on a more modern filesystem.\n\nBTW, please keep the list cc'd on replies.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 16:33:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports that my queries are fast but they run\n\tvery slowly" }, { "msg_contents": "On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane <[email protected]> wrote:\n\n> Nikolas Everett <[email protected]> writes:\n> > We straced the backend during the explain and it looked like the open\n> > commands were taking several seconds each.\n>\n> Kind of makes me wonder if you have a whole lot of tables (\"whole lot\"\n> in this context probably means tens of thousands) and are storing the\n> database on a filesystem that doesn't scale well to lots of files in one\n> directory. If that's the explanation, the reason the 8.3 installation\n> was okay was likely that it was stored on a more modern filesystem.\n>\n\nWe have 1897 files for our largest database which really isn't a whole lot.\n The old servers were EXT3 over FC to a NetApp running RHEL5 PPC. The new\nservers are on NFS to the same NetApp running RHEL5 Intel. We've failed\nfrom our physical primary to a virtual secondary both of which seem to have\nthe same problem. We're in the process of rebuilding the a hot slave on\nEXT3 over iSCSI. We'll fail over to it as soon as we can.\n\nWe never tried stracing the PPC infrastructure but it obviously didn't have\nthis problem.\n\nWe also have another cluster running with an identical setup which doesn't\nseem to have the problem. In fact, the problem never came up durring\ncorrectness testing for this problem either - it specifically required load\nbefore it came up.\n\n\n>\n> BTW, please keep the list cc'd on replies.\n>\n\nItchy reply finger.\n\n\n>\n> regards, tom lane\n>\n\nOn Thu, Dec 27, 2012 at 4:33 PM, Tom Lane <[email protected]> wrote:\nNikolas Everett <[email protected]> writes:\n\n\n> We straced the backend during the explain and it looked like the open\n> commands were taking several seconds each.\n\nKind of makes me wonder if you have a whole lot of tables (\"whole lot\"\nin this context probably means tens of thousands) and are storing the\ndatabase on a filesystem that doesn't scale well to lots of files in one\ndirectory.  If that's the explanation, the reason the 8.3 installation\nwas okay was likely that it was stored on a more modern filesystem.We have 1897 files for our largest database which really isn't a whole lot.  The old servers were EXT3 over FC to a NetApp running RHEL5 PPC.  The new servers are on NFS to the same NetApp running RHEL5 Intel.  We've failed from our physical primary to a virtual secondary both of which seem to have the same problem.  We're in the process of rebuilding the a hot slave on EXT3 over iSCSI.  We'll fail over to it as soon as we can.\nWe never tried stracing the PPC infrastructure but it obviously didn't have this problem.We also have another cluster running with an identical setup which doesn't seem to have the problem.  In fact, the problem never came up durring correctness testing for this problem either - it specifically required load before it came up.\n \n\nBTW, please keep the list cc'd on replies.Itchy reply finger. 
\n\n                        regards, tom lane", "msg_date": "Thu, 27 Dec 2012 17:01:42 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" }, { "msg_contents": "Nikolas Everett <[email protected]> writes:\n> On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane <[email protected]> wrote:\n>> Nikolas Everett <[email protected]> writes:\n>>> We straced the backend during the explain and it looked like the open\n>>> commands were taking several seconds each.\n\n>> Kind of makes me wonder if you have a whole lot of tables (\"whole lot\"\n>> in this context probably means tens of thousands) and are storing the\n>> database on a filesystem that doesn't scale well to lots of files in one\n>> directory. If that's the explanation, the reason the 8.3 installation\n>> was okay was likely that it was stored on a more modern filesystem.\n\n> We have 1897 files for our largest database which really isn't a whole lot.\n\nOK...\n\n> The old servers were EXT3 over FC to a NetApp running RHEL5 PPC. The new\n> servers are on NFS to the same NetApp running RHEL5 Intel.\n\nNow I'm wondering about network glitches or NFS configuration problems.\nThis is a bit outside my expertise unfortunately, but it seems clear\nthat your performance issue is somewhere in that area.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 18:12:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain analyze reports that my queries are fast but they run\n\tvery slowly" }, { "msg_contents": "It looks like it was a problem with NFS. We're not really sure what was\nwrong with it but once we failed over to an iSCSI mount for the data\neverything is running just fine.\n\n\nOn Thu, Dec 27, 2012 at 6:12 PM, Tom Lane <[email protected]> wrote:\n\n> Nikolas Everett <[email protected]> writes:\n> > On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane <[email protected]> wrote:\n> >> Nikolas Everett <[email protected]> writes:\n> >>> We straced the backend during the explain and it looked like the open\n> >>> commands were taking several seconds each.\n>\n> >> Kind of makes me wonder if you have a whole lot of tables (\"whole lot\"\n> >> in this context probably means tens of thousands) and are storing the\n> >> database on a filesystem that doesn't scale well to lots of files in one\n> >> directory. If that's the explanation, the reason the 8.3 installation\n> >> was okay was likely that it was stored on a more modern filesystem.\n>\n> > We have 1897 files for our largest database which really isn't a whole\n> lot.\n>\n> OK...\n>\n> > The old servers were EXT3 over FC to a NetApp running RHEL5 PPC. The\n> new\n> > servers are on NFS to the same NetApp running RHEL5 Intel.\n>\n> Now I'm wondering about network glitches or NFS configuration problems.\n> This is a bit outside my expertise unfortunately, but it seems clear\n> that your performance issue is somewhere in that area.\n>\n> regards, tom lane\n>\n\nIt looks like it was a problem with NFS.  
We're not really sure what was wrong with it but once we failed over to an iSCSI mount for the data everything is running just fine.\nOn Thu, Dec 27, 2012 at 6:12 PM, Tom Lane <[email protected]> wrote:\nNikolas Everett <[email protected]> writes:\n> On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane <[email protected]> wrote:\n>> Nikolas Everett <[email protected]> writes:\n>>> We straced the backend during the explain and it looked like the open\n>>> commands were taking several seconds each.\n\n>> Kind of makes me wonder if you have a whole lot of tables (\"whole lot\"\n>> in this context probably means tens of thousands) and are storing the\n>> database on a filesystem that doesn't scale well to lots of files in one\n>> directory.  If that's the explanation, the reason the 8.3 installation\n>> was okay was likely that it was stored on a more modern filesystem.\n\n> We have 1897 files for our largest database which really isn't a whole lot.\n\nOK...\n\n>  The old servers were EXT3 over FC to a NetApp running RHEL5 PPC.  The new\n> servers are on NFS to the same NetApp running RHEL5 Intel.\n\nNow I'm wondering about network glitches or NFS configuration problems.\nThis is a bit outside my expertise unfortunately, but it seems clear\nthat your performance issue is somewhere in that area.\n\n                        regards, tom lane", "msg_date": "Thu, 27 Dec 2012 18:35:49 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain analyze reports that my queries are fast but\n\tthey run very slowly" } ]
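Nik's question upthread — whether a wait on AccessShareLock would show up in pg_locks — can be answered directly: it does, as a row with granted = false for the waiting backend. A minimal check along those lines, written against the 9.1 catalog names used in this thread (pg_stat_activity.procpid and current_query; 9.2 renames them to pid and query), might look like this:

    SELECT l.locktype,
           l.relation::regclass AS relation,
           l.mode,
           a.procpid,
           a.waiting,
           a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

An empty result while the EXPLAIN is stalled is itself useful information: it points away from lock contention and toward something outside the database, which is where the slow open() calls over NFS were eventually found.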
[ { "msg_contents": "Hey guys,\n\nI recently stumbled over a Linux scheduler setting that has outright \nshocked me. So, after reading through this:\n\nhttp://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html\n\nit became readily apparent we were hitting the same wall. I could do a \npgbench and increase the connection count by 100 every iteration, and \neventually performance just fell off a proverbial cliff and never recovered.\n\nFor our particular systems, this barrier is somewhere around 800 \nprocesses. Select-only performance on a 3600-scale pgbench database in \ncache falls from about 70k TPS to about 12k TPS after crossing that \nline. Worse, sar shows over 70% CPU dedicated to system overhead.\n\nAfter some fiddling around, I changed sched_migration_cost from its \ndefault of 500000 to 5000000 and performance returned to linear scaling \nimmediately. It's literally night and day. Setting it back to 500000 \nreverts to the terrible performance.\n\nIn addition, setting the migration cost to a higher value does not \nnegatively affect any other performance metric I've checked. This is on \nan Ubuntu 12.04 system, and I'd love if someone out there could \nindependently verify this, because frankly, I find it difficult to believe.\n\nIf legit, high-connection systems would benefit greatly.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Dec 2012 13:57:09 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "sched_migration_cost for high-connection counts" } ]
[ { "msg_contents": "I have encountered serious under-estimations of distinct values when \nvalues are not evenly distributed but clustered within a column. I think \nthis problem might be relevant to many real-world use cases and I wonder \nif there is a good workaround or possibly a programmatic solution that \ncould be implemented.\n\nThanks for your help!\nStefan\n\n\n*The Long Story:*\n\nWhen Postgres collects statistics, it estimates the number of distinct \nvalues for every column (see pg_stats.n_distinct). This is one important \nsource for the planner to determine the selectivity and hence can have \ngreat influence on the resulting query plan.\n\n\nMy Problem:\n\nWhen I collected statistics on some columns that have rather high \nselectivity but not anything like unique values, I consistently got \nn_distinct values that are far too low (easily by some orders of \nmagnitude). Worse still, the estimates did not really improve until I \nanalyzed the whole table.\n\nI tested this for Postgres 9.1 and 9.2. An artificial test-case is \ndescribed at the end of this mail.\n\n\nSome Analysis of the Problem:\n\nUnfortunately it is not trivial to estimate the total number of \ndifferent values based on a sample. As far as I found out, Postgres \nuses an algorithm that is based on the number of values that are found \nonly once in the sample used for ANALYZE. I found references to \nGood-Turing frequency estimation \n(http://encodestatistics.org/publications/statistics_and_postgres.pdf) \nand to a paper from Haas & Stokes, Computer Science, 1996 \n(http://researcher.ibm.com/researcher/files/us-phaas/jasa3rj.pdf). The \nlatter source is from Josh Berkus in a 2005 discussion on the Postgres \nPerformance List (see e.g. \nhttp://grokbase.com/t/postgresql/pgsql-performance/054kztf8pf/bad-n-distinct-estimation-hacks-suggested \nfor a look on the whole discussion there). The formula given there for \nthe total number of distinct values is:\n\n n*d / (n - f1 + f1*n/N)\n\nwhere f1 is the number of values that occurred only once in the sample. \nn is the number of rows sampled, d the number of distincts found and N \nthe total number of rows in the table.\n\nNow, the 2005 discussion goes into great detail on the advantages and \ndisadvantages of this algorithm, particularly when using small sample \nsizes, and several alternatives are discussed. I do not know whether \nanything has been changed after that, but I know that the very distinct \nproblem, which I will focus on here, still persists.\n\nWhen the number of values that are found only once in the sample (f1) \nbecomes zero, the whole term equals d, that is, n_distinct is estimated \nto be just the number of distincts found in the sample.\n\nThis is basically fine as it should only happen when the sample has \nreally covered more or less all distinct values. However, we have a \nsampling problem here: for maximum efficiency Postgres samples not \nrandom rows but random pages. If the distribution of the values is not \nrandom but clustered (that is, the same values tend to be close \ntogether) we run into problems. The probability that any value from a \nclustered distribution is sampled only once, when any page covers \nmultiple adjacent rows, is very low.\n\nSo, under these circumstances, the estimate for n_distinct will always \nbe close to the number of distincts found in the sample. 
Even if every \nvalue would in fact only appear a few times in the table.\n\n\nRelevance:\n\nI think this is not just an unfortunate border case, but a very common \nsituation. Imagine two tables that are filled continually over time \nwhere the second table references the first - some objects and multiple \nproperties for each for example. Now the foreign key column of the \nsecond table will have many distinct values but a highly clustered \ndistribution. It is probably not helpful, if the planner significantly \nunderestimates the high selectivity of the foreign key column.\n\n\nWorkarounds:\n\nThere are workarounds: manually setting table column statistics or using \nan extremely high statistics target, so that the whole table gets \nanalyzed. However, these workarounds do not seem elegant and may be \nimpractical.\n\n\nQuestions:\n\nA) Did I find the correct reason for my problem? Specifically, does \nPostgres really estimate n_distinct as described above?\n\nB) Are there any elegant workarounds?\n\nC) What could be a programmatic solution to this problem? I think, it \nmight be possible to use the number of values that are found in only one \npage (vs. found only once at all) for f1. Or the number of distincts \ncould be calculated using some completely different approach?\n\n\nTest Case:\n\nFor an artificial test-case let's create a table and fill it with 10 \nmillion rows (appr. 1,300 MB required). There is an ID column featuring \nunique values and 4 groups of 3 columns each that have selectivities of:\n- 5 (x_2000k = 2,000,000 distinct values)\n- 25 (x_400k = 400,000 distinct values)\n- 125 (x_80k = 80,000 distinct values).\n\nThe 4 groups of columns show different distributions:\n- clustered and ordered (e.g. 1,1,1,2,2,2,3,3,3): clustered_ordered_x\n- clustered but random values (e.g. 2,2,2,7,7,7,4,4,4): clustered_random_x\n- uniform (e.g. 1,2,3,1,2,3,1,2,3): uniform_x\n- random (e.g. 
well random, you know random_x\n\nHere we go:\n\n CREATE UNLOGGED TABLE test_1\n (id BIGINT,\n clustered_ordered_2000k BIGINT, clustered_ordered_400k BIGINT, \nclustered_ordered_80k BIGINT,\n clustered_random_2000k BIGINT, clustered_random_400k BIGINT, \nclustered_random_80k BIGINT,\n uniform_2000k BIGINT, uniform_400k BIGINT, uniform_80k BIGINT,\n random_2000k BIGINT, random_400k BIGINT, random_80k BIGINT);\n\n WITH q1 AS (SELECT generate_series(1,10000000) AS i, random() AS r),\n q AS (SELECT q1.i, q1.r, trunc(sub_2000k.r * 10000000) AS r_2000k, \ntrunc(sub_400k.r * 10000000) AS r_400k, trunc(sub_80k.r * 10000000) AS \nr_80k FROM q1\n JOIN q1 AS sub_2000k ON sub_2000k.i - 1 = trunc((q1.i - 1) \n/ 5)\n JOIN q1 AS sub_400k ON sub_400k.i - 1 = trunc((q1.i - 1) / 25)\n JOIN q1 AS sub_80k ON sub_80k.i - 1 = trunc((q1.i - 1) / 125)\n ORDER BY q1.i)\n INSERT INTO test_1\n SELECT q.i,\n trunc((q.i + 4) / 5), trunc((q.i + 24) / 25), trunc((q.i + \n124) / 125),\n q.r_2000k, q.r_400k, q.r_80k,\n trunc(q.i % 2000000), trunc(q.i % 400000), trunc(q.i % 80000),\n trunc(q.r * 2000000), trunc(q.r * 400000), trunc(q.r * 80000)\n FROM q;\n\n\nNow let's query the real numbers of distinct values:\n\n SELECT colname, distincts FROM\n (SELECT 'id' AS colname, COUNT(DISTINCT id) AS distincts FROM \ntest_1 UNION\n SELECT 'clustered_ordered_2000k' AS colname, COUNT(DISTINCT \nclustered_ordered_2000k) AS distincts FROM test_1 UNION\n SELECT 'clustered_ordered_400k' AS colname, COUNT(DISTINCT \nclustered_ordered_400k) AS distincts FROM test_1 UNION\n SELECT 'clustered_ordered_80k' AS colname, COUNT(DISTINCT \nclustered_ordered_80k) AS distincts FROM test_1 UNION\n SELECT 'clustered_random_2000k' AS colname, COUNT(DISTINCT \nclustered_random_2000k) AS distincts FROM test_1 UNION\n SELECT 'clustered_random_400k' AS colname, COUNT(DISTINCT \nclustered_random_400k) AS distincts FROM test_1 UNION\n SELECT 'clustered_random_80k' AS colname, COUNT(DISTINCT \nclustered_random_80k) AS distincts FROM test_1 UNION\n SELECT 'uniform_2000k' AS colname, COUNT(DISTINCT uniform_2000k) \nAS distincts FROM test_1 UNION\n SELECT 'uniform_400k' AS colname, COUNT(DISTINCT uniform_400k) AS \ndistincts FROM test_1 UNION\n SELECT 'uniform_80k' AS colname, COUNT(DISTINCT uniform_80k) AS \ndistincts FROM test_1 UNION\n SELECT 'random_2000k' AS colname, COUNT(DISTINCT random_2000k) AS \ndistincts FROM test_1 UNION\n SELECT 'random_400k' AS colname, COUNT(DISTINCT random_400k) AS \ndistincts FROM test_1 UNION\n SELECT 'random_80k' AS colname, COUNT(DISTINCT random_80k) AS \ndistincts FROM test_1) AS sub\n ORDER BY colname;\n\n colname | distincts\n-------------------------+-----------\n clustered_ordered_2000k | 2000000\n clustered_ordered_400k | 400000\n clustered_ordered_80k | 80000\n clustered_random_2000k | 1811948\n clustered_random_400k | 391881\n clustered_random_80k | 79681\n id | 10000000\n random_2000k | 1986619\n random_400k | 400000\n random_80k | 80000\n uniform_2000k | 2000000\n uniform_400k | 400000\n uniform_80k | 80000\n\n-> So we got what we asked for.\n\n\nAs the row length of the table is not very large, we decrease the \nstatistics target. 
Otherwise a quarter of the table will get sampled and \nthe effect is less clear:\n\n SET default_statistics_target = 10;\n ANALYZE VERBOSE test_1;\n SELECT attname, n_distinct, correlation FROM pg_stats WHERE tablename \n= 'test_1'\n ORDER BY attname;\n\n attname | n_distinct | correlation\n-------------------------+------------+-------------\n clustered_ordered_2000k | 51487 | 1\n clustered_ordered_400k | 9991 | 1\n clustered_ordered_80k | 3752 | 1\n clustered_random_2000k | 51487 | 0.00938534\n clustered_random_400k | 9991 | 0.00373309\n clustered_random_80k | 3748 | -0.0461863\n id | -1 | 1\n random_2000k | -0.310305 | 0.00140735\n random_400k | 289890 | 0.00140921\n random_80k | 71763 | 0.00142101\n uniform_2000k | -0.310305 | 0.209842\n uniform_400k | -0.101016 | 0.0259991\n uniform_80k | 74227 | 0.0154193\n\n-> estimates for random and uniform distributions are really good. But \nfor clustered distributions, estimates are off by a factor of 20 to 40.\n\n\nAnd clean up\n DROP TABLE test_1;\n\n\n\n\n\n\n\n I have encountered serious under-estimations of distinct values when\n values are not evenly distributed but clustered within a column. I\n think this problem might be relevant to many real-world use cases\n and I wonder if there is a good workaround or possibly a\n programmatic solution that could be implemented.\n\n Thanks for your help!\n Stefan\n\n\nThe Long Story:\n\n When Postgres collects statistics, it estimates the number of\n distinct values for every column (see pg_stats.n_distinct). This is\n one important source for the planner to determine the selectivity\n and hence can have great influence on the resulting query plan.\n\n\n My Problem:\n\n When I collected statistics on some columns that have rather high\n selectivity but not anything like unique values, I consistently got\n n_distinct values that are far too low (easily by some orders of\n magnitude). Worse still, the estimates did not really improve until\n I analyzed the whole table.\n\n I tested this for Postgres 9.1 and 9.2. An artificial test-case is\n described at the end of this mail.\n\n\n Some Analysis of the Problem:\n\n Unfortunately it is not trivial to estimate the total number of\n different values based on  a sample. As far as I found out, Postgres\n uses an algorithm that is based on the number of values that are\n found only once in the sample used for ANALYZE. I found references\n to Good-Turing frequency estimation\n (http://encodestatistics.org/publications/statistics_and_postgres.pdf)\n and to a paper from Haas & Stokes, Computer Science, 1996\n (http://researcher.ibm.com/researcher/files/us-phaas/jasa3rj.pdf).\n The latter source is from Josh Berkus in a 2005 discussion on the\n Postgres Performance List (see e.g.\n http://grokbase.com/t/postgresql/pgsql-performance/054kztf8pf/bad-n-distinct-estimation-hacks-suggested\n for a look on the whole discussion there). The formula given there\n for the total number of distinct values is:\n\n  n*d / (n - f1 + f1*n/N)\n\n where f1 is the number of values that occurred only once in the\n sample. n is the number of rows sampled, d the number of distincts\n found and N the total number of rows in the table.\n\n Now, the 2005 discussion goes into great detail on the advantages\n and disadvantages of this algorithm, particularly when using small\n sample sizes, and several alternatives are discussed. 
I do not know\n whether anything has been changed after that, but I know that the\n very distinct problem, which I will focus on here, still persists.\n\n When the number of values that are found only once in the sample\n (f1) becomes zero, the whole term equals d, that is, n_distinct is\n estimated to be just the number of distincts found in the sample.\n\n This is basically fine as it should only happen when the sample has\n really covered more or less all distinct values. However, we have a\n sampling problem here: for maximum efficiency Postgres samples not\n random rows but random pages. If the distribution of the values is\n not random but clustered (that is, the same values tend to be close\n together) we run into problems. The probability that any value from\n a clustered distribution is sampled only once, when any page covers\n multiple adjacent rows, is very low.\n\n So, under these circumstances, the estimate for n_distinct will\n always be close to the number of distincts found in the sample. Even\n if every value would in fact only appear a few times in the table.\n\n\n Relevance:\n\n I think this is not just an unfortunate border case, but a very\n common situation. Imagine two tables that are filled continually\n over time where the second table references the first - some objects\n and multiple properties for each for example. Now the foreign key\n column of the second table will have many distinct values but a\n highly clustered distribution. It is probably not helpful, if the\n planner significantly underestimates the high selectivity of the\n foreign key column.\n\n\n Workarounds:\n\n There are workarounds: manually setting table column statistics or\n using an extremely high statistics target, so that the whole table\n gets analyzed. However, these workarounds do not seem elegant and\n may be impractical.\n\n\n Questions:\n\n A) Did I find the correct reason for my problem? Specifically, does\n Postgres really estimate n_distinct as described above?\n\n B) Are there any elegant workarounds?\n\n C) What could be a programmatic solution to this problem? I think,\n it might be possible to use the number of values that are found in\n only one page (vs. found only once at all) for f1. Or the number of\n distincts could be calculated using some completely different\n approach?\n\n\n Test Case:\n\n For an artificial test-case let's create a table and fill it with 10\n million rows (appr. 1,300 MB required). There is an ID column\n featuring unique values and 4 groups of 3 columns each that have\n selectivities of:\n - 5 (x_2000k = 2,000,000 distinct values)\n - 25 (x_400k = 400,000 distinct values)\n - 125 (x_80k = 80,000 distinct values).\n\n The 4 groups of columns show different distributions:\n - clustered and ordered (e.g. 1,1,1,2,2,2,3,3,3):\n clustered_ordered_x\n - clustered but random values (e.g. 2,2,2,7,7,7,4,4,4):\n clustered_random_x\n - uniform (e.g. 1,2,3,1,2,3,1,2,3): uniform_x\n - random (e.g. 
well random, you know random_x\n\n Here we go:\n\n CREATE UNLOGGED TABLE test_1\n     (id BIGINT,\n     clustered_ordered_2000k BIGINT, clustered_ordered_400k\n BIGINT, clustered_ordered_80k BIGINT,\n     clustered_random_2000k BIGINT, clustered_random_400k\n BIGINT, clustered_random_80k BIGINT,\n     uniform_2000k BIGINT, uniform_400k BIGINT, uniform_80k\n BIGINT,\n     random_2000k BIGINT, random_400k BIGINT, random_80k\n BIGINT);\n\n WITH q1 AS (SELECT generate_series(1,10000000) AS i,\n random() AS r),\n     q AS (SELECT q1.i, q1.r, trunc(sub_2000k.r * 10000000)\n AS r_2000k, trunc(sub_400k.r * 10000000) AS r_400k,\n trunc(sub_80k.r * 10000000) AS r_80k FROM q1\n             JOIN q1 AS sub_2000k ON sub_2000k.i - 1 =\n trunc((q1.i - 1) / 5)\n             JOIN q1 AS sub_400k ON sub_400k.i - 1 =\n trunc((q1.i - 1) / 25)\n             JOIN q1 AS sub_80k ON sub_80k.i - 1 =\n trunc((q1.i - 1) / 125)\n             ORDER BY q1.i)\n INSERT INTO test_1\n     SELECT q.i,\n         trunc((q.i + 4) / 5), trunc((q.i + 24) / 25),\n trunc((q.i + 124) / 125),\n         q.r_2000k, q.r_400k, q.r_80k,\n         trunc(q.i % 2000000), trunc(q.i % 400000),\n trunc(q.i % 80000),\n         trunc(q.r * 2000000), trunc(q.r * 400000),\n trunc(q.r * 80000)\n     FROM q;\n\n\n Now let's query the real numbers of distinct values:\n\n SELECT colname, distincts FROM\n     (SELECT 'id' AS colname, COUNT(DISTINCT id) AS\n distincts FROM test_1 UNION\n     SELECT 'clustered_ordered_2000k' AS colname,\n COUNT(DISTINCT clustered_ordered_2000k) AS distincts FROM test_1\n UNION\n     SELECT 'clustered_ordered_400k' AS colname,\n COUNT(DISTINCT clustered_ordered_400k) AS distincts FROM test_1\n UNION\n     SELECT 'clustered_ordered_80k' AS colname,\n COUNT(DISTINCT clustered_ordered_80k) AS distincts FROM test_1\n UNION\n     SELECT 'clustered_random_2000k' AS colname,\n COUNT(DISTINCT clustered_random_2000k) AS distincts FROM test_1\n UNION\n     SELECT 'clustered_random_400k' AS colname,\n COUNT(DISTINCT clustered_random_400k) AS distincts FROM test_1\n UNION\n     SELECT 'clustered_random_80k' AS colname,\n COUNT(DISTINCT clustered_random_80k) AS distincts FROM test_1\n UNION\n     SELECT 'uniform_2000k' AS colname, COUNT(DISTINCT\n uniform_2000k) AS distincts FROM test_1 UNION\n     SELECT 'uniform_400k' AS colname, COUNT(DISTINCT\n uniform_400k) AS distincts FROM test_1 UNION\n     SELECT 'uniform_80k' AS colname, COUNT(DISTINCT\n uniform_80k) AS distincts FROM test_1 UNION\n     SELECT 'random_2000k' AS colname, COUNT(DISTINCT\n random_2000k) AS distincts FROM test_1 UNION\n     SELECT 'random_400k' AS colname, COUNT(DISTINCT\n random_400k) AS distincts FROM test_1 UNION\n     SELECT 'random_80k' AS colname, COUNT(DISTINCT\n random_80k) AS distincts FROM test_1) AS sub\n ORDER BY colname;\n\n         colname         | distincts\n-------------------------+-----------\n clustered_ordered_2000k |   2000000\n clustered_ordered_400k  |    400000\n clustered_ordered_80k   |     80000\n clustered_random_2000k  |   1811948\n clustered_random_400k   |    391881\n clustered_random_80k    |     79681\n id                      |  10000000\n random_2000k            |   1986619\n random_400k             |    400000\n random_80k              |     80000\n uniform_2000k           |   2000000\n uniform_400k            |    400000\n uniform_80k             |     80000\n\n -> So we got what we asked for.\n\n\n As the row length of the table is not very large, we decrease the\n statistics target. 
Otherwise a quarter of the table will get sampled\n and the effect is less clear:\n\n SET default_statistics_target = 10;\n ANALYZE VERBOSE test_1;\n SELECT attname, n_distinct, correlation FROM pg_stats\n WHERE tablename = 'test_1'\n     ORDER BY attname;\n\n         attname         | n_distinct | correlation\n-------------------------+------------+-------------\n clustered_ordered_2000k |      51487 |           1\n clustered_ordered_400k  |       9991 |           1\n clustered_ordered_80k   |       3752 |           1\n clustered_random_2000k  |      51487 |  0.00938534\n clustered_random_400k   |       9991 |  0.00373309\n clustered_random_80k    |       3748 |  -0.0461863\n id                      |         -1 |           1\n random_2000k            |  -0.310305 |  0.00140735\n random_400k             |     289890 |  0.00140921\n random_80k              |      71763 |  0.00142101\n uniform_2000k           |  -0.310305 |    0.209842\n uniform_400k            |  -0.101016 |   0.0259991\n uniform_80k             |      74227 |   0.0154193\n\n -> estimates for random and uniform distributions are really\n good. But for clustered distributions, estimates are off by a factor\n of 20 to 40.\n\n\n And clean up\n DROP TABLE test_1;", "msg_date": "Sat, 29 Dec 2012 21:57:04 +0100", "msg_from": "Stefan Andreatta <[email protected]>", "msg_from_op": true, "msg_subject": "serious under-estimation of n_distinct for clustered distributions" }, { "msg_contents": "On 29 December 2012 20:57, Stefan Andreatta <[email protected]> wrote:\n> Now, the 2005 discussion goes into great detail on the advantages and\n> disadvantages of this algorithm, particularly when using small sample sizes,\n> and several alternatives are discussed. I do not know whether anything has\n> been changed after that, but I know that the very distinct problem, which I\n> will focus on here, still persists.\n\nIt's a really hard problem to solve satisfactorily. It's a problem\nthat has been studied in much detail. Yes, the algorithm used is still\nthe same. See the comments within src/backend/commands/analyze.c (IBM\nResearch Report RJ 10025 is referenced there).\n\nThe general advice here is:\n\n1) Increase default_statistics_target for the column.\n\n2) If that doesn't help, consider using the following DDL:\n\nalter table foo alter column bar set ( n_distinct = 5.0);\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 29 Dec 2012 21:57:04 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: serious under-estimation of n_distinct for clustered\n distributions" }, { "msg_contents": "On 12/29/2012 10:57 PM, Peter Geoghegan wrote:\n> On 29 December 2012 20:57, Stefan Andreatta <[email protected]> wrote:\n>> Now, the 2005 discussion goes into great detail on the advantages and\n>> disadvantages of this algorithm, particularly when using small sample sizes,\n>> and several alternatives are discussed. I do not know whether anything has\n>> been changed after that, but I know that the very distinct problem, which I\n>> will focus on here, still persists.\n>\n> It's a really hard problem to solve satisfactorily. It's a problem\n> that has been studied in much detail. Yes, the algorithm used is still\n> the same. 
See the comments within src/backend/commands/analyze.c (IBM\n> Research Report RJ 10025 is referenced there).\n\nThanks a lot for this information! I looked through the code a bit. The \nHaas & Stokes Formula is fine. The problem really lies with the two \nphase random selection procedure:\n\nStarting from line 1039, there is a comment:\n * As of May 2004 we use a new two-stage method: Stage one selects up\n * to targrows random blocks (or all blocks, if there aren't so many).\n * Stage two scans these blocks and uses the Vitter algorithm to create\n * a random sample of targrows rows (or less, if there are less in the\n * sample of blocks). The two stages are executed simultaneously: each\n * block is processed as soon as stage one returns its number and while\n * the rows are read stage two controls which ones are to be inserted\n * into the sample.\n *\n * Although every row has an equal chance of ending up in the final\n * sample, this sampling method is not perfect: not every possible\n * sample has an equal chance of being selected. For large relations\n * the number of different blocks represented by the sample tends to be\n * too small. We can live with that for now. Improvements are welcome.\n\n\nNow the problem with clustered data is, that the probability of sampling \na value twice is much higher when the same page is repeatedly sampled. \nAs stage one takes a random sample of pages, and stage two samples rows \nfrom these pages, the probability of visiting the same page twice (or \nmore often) is much higher than if random rows were selected from the \nwhole table. Hence we get a lot more multiple values for clustered data \nand we end up with the severe under-estimation we can see in those cases.\n\nProbabilities do my brain in, as usual, but I tested the procedure for \nmy test data with a simple python script. There is absolutely nothing \nwrong with the implementation. It seems to be a purely statistical problem.\n\nNot everything may be hopeless though ;-) The problem could \ntheoretically be avoided if random rows were selected from the whole \ntable. Again, that may not be feasible - the two phase approach was \nprobably not implemented for nothing.\n\nAnother possible solution would be to avoid much of the resampling (not \nall) in phase two. For that - in theory - every page visited would have \nto get a lower weight, so that revisiting this page is not any more \nlikely as rows were selected from the whole column. That does not sound \neasy or elegant to implement. 
But perhaps there is some clever algorithm \n- unfortunately I do not know.\n\n\n> The general advice here is:\n>\n> 1) Increase default_statistics_target for the column.\n>\n> 2) If that doesn't help, consider using the following DDL:\n>\n> alter table foo alter column bar set ( n_distinct = 5.0);\n\nYes, I will probably have to live with that for now - I will come back \nto these workarounds with one or two questions.\n\nThanks again & Regards,\nSeefan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 30 Dec 2012 19:02:44 +0100", "msg_from": "Stefan Andreatta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: serious under-estimation of n_distinct for clustered\n distributions" }, { "msg_contents": "On Sat, Dec 29, 2012 at 5:57 PM, Stefan Andreatta\n<[email protected]> wrote:\n> n*d / (n - f1 + f1*n/N)\n>\n> where f1 is the number of values that occurred only once in the sample. n is\n> the number of rows sampled, d the number of distincts found and N the total\n> number of rows in the table.\n>\n...\n>\n> When the number of values that are found only once in the sample (f1)\n> becomes zero, the whole term equals d, that is, n_distinct is estimated to\n> be just the number of distincts found in the sample.\n\nI think the problem lies in the assumption that if there are no\nsingly-sampled values, then the sample must have included all distinct\nvalues. This is clearly not true even on a fully random sample, it\nonly means those sampled distinct values appear frequently enough to\nbe excluded from the sample.\n\nIn the clustered case, the error would be evident if the\nrandomly-sampled pages were split into two samples, and considered\nseparately. The distinct set in one would not match the distinct set\nin the other, and the intersection's size would say something about\nthe real number of distinct values in the whole population.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 30 Dec 2012 16:24:21 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: serious under-estimation of n_distinct for clustered\n distributions" }, { "msg_contents": "On 12/29/2012 10:57 PM, Peter Geoghegan wrote:\n> On 29 December 2012 20:57, Stefan Andreatta <[email protected]> wrote:\n...\n\n> The general advice here is:\n>\n> 1) Increase default_statistics_target for the column.\n\nI tried that, but to get good estimates under these circumstances, I \nneed to set the statistics_target so high that the whole table gets \nanalyzed. As this problem matters most for all of our large tables, I \nwould have to set default_statistics_target to something like 100000 - \nthat's a bit scary for production systems with tables of appr. 100GB, I \nfind.\n\n\n> 2) If that doesn't help, consider using the following DDL:\n>\n> alter table foo alter column bar set ( n_distinct = 5.0);\n>\n\nYes, that's probably best - even if it means quite some maintenance \nwork. I do it like that:\n\n ALTER TABLE test_1 ALTER COLUMN clustered_random_2000k SET (n_distinct \n= -0.05);\n\nbtw: Postgres will never set relative n_distinct values for anything \nlarger than -0.1. 
If I determine (or know) it to be a constant but lower \nfraction, could it be a problem to explicitly set this value to between \n-0.1 and 0?\n\n\nTo activate that setting, however, an ANALYZE has to be run. That was \nnot clear to me from the documentation:\n\n ANALYZE verbose test_1;\n\n\nTo check column options and statistics values:\n\n SELECT pg_class.relname AS table_name,\n pg_attribute.attname AS column_name, pg_attribute.attoptions\n FROM pg_attribute\n JOIN pg_class ON pg_attribute.attrelid = pg_class.oid\n WHERE pg_attribute.attnum > 0\n AND pg_class.relname = 'test_1'\n AND pg_attribute.attname = 'clustered_random_2000k';\n\n SELECT tablename AS table_name, attname AS column_name,\n null_frac, avg_width, n_distinct, correlation\n FROM pg_stats\n WHERE tablename = 'test_1' and attname = 'clustered_random_2000k';\n\n\nAnd finally, we can undo the whole thing, if necessary:\n\n ALTER TABLE test_1 ALTER COLUMN clustered_random_2000k RESET (n_distinct);\n ANALYZE VERBOSE test_1;\n\n\nRegards,\nStefan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 04 Jan 2013 06:14:34 +0100", "msg_from": "Stefan Andreatta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: serious under-estimation of n_distinct for clustered\n distributions" }, { "msg_contents": "A status update on this problem:\n\n1.) Workarounds (setting n_distinct manually) are tested and - as far as \nworkarounds go - OK.\n\n2.) Source of the problem and possible solution:\n\nThe source of these troubles is the sampling method employed in \nsrc/backend/commands/analyze.c. Judging from Tom Lane's comment for the \noriginal implementation in 2004 this has never been thought to be \nperfect. Does anybody see a chance to improve that part? Should this \ndiscussion be taken elsewhere? Is there any input from my side that \ncould help?\n\n\nbtw: I do find this problem to be very frequent in our databases. And \nconsidering the commonplace conditions leading to it, I would expect \nmany systems to be affected. But searching the forums and the web I \nhardly found any references to it - which amazes me to no end.\n\n\nBest Regards,\nStefan\n\n\n\nOn 12/30/2012 07:02 PM, Stefan Andreatta wrote:\n> On 12/29/2012 10:57 PM, Peter Geoghegan wrote:\n>> On 29 December 2012 20:57, Stefan Andreatta <[email protected]> \n>> wrote:\n>>> Now, the 2005 discussion goes into great detail on the advantages and\n>>> disadvantages of this algorithm, particularly when using small \n>>> sample sizes,\n>>> and several alternatives are discussed. I do not know whether \n>>> anything has\n>>> been changed after that, but I know that the very distinct problem, \n>>> which I\n>>> will focus on here, still persists.\n>>\n>> It's a really hard problem to solve satisfactorily. It's a problem\n>> that has been studied in much detail. Yes, the algorithm used is still\n>> the same. See the comments within src/backend/commands/analyze.c (IBM\n>> Research Report RJ 10025 is referenced there).\n>\n> Thanks a lot for this information! I looked through the code a bit. \n> The Haas & Stokes Formula is fine. 
The problem really lies with the \n> two phase random selection procedure:\n>\n> Starting from line 1039, there is a comment:\n> * As of May 2004 we use a new two-stage method: Stage one selects up\n> * to targrows random blocks (or all blocks, if there aren't so many).\n> * Stage two scans these blocks and uses the Vitter algorithm to create\n> * a random sample of targrows rows (or less, if there are less in the\n> * sample of blocks). The two stages are executed simultaneously: each\n> * block is processed as soon as stage one returns its number and while\n> * the rows are read stage two controls which ones are to be inserted\n> * into the sample.\n> *\n> * Although every row has an equal chance of ending up in the final\n> * sample, this sampling method is not perfect: not every possible\n> * sample has an equal chance of being selected. For large relations\n> * the number of different blocks represented by the sample tends to be\n> * too small. We can live with that for now. Improvements are welcome.\n>\n>\n> Now the problem with clustered data is, that the probability of \n> sampling a value twice is much higher when the same page is repeatedly \n> sampled. As stage one takes a random sample of pages, and stage two \n> samples rows from these pages, the probability of visiting the same \n> page twice (or more often) is much higher than if random rows were \n> selected from the whole table. Hence we get a lot more multiple values \n> for clustered data and we end up with the severe under-estimation we \n> can see in those cases.\n>\n> Probabilities do my brain in, as usual, but I tested the procedure for \n> my test data with a simple python script. There is absolutely nothing \n> wrong with the implementation. It seems to be a purely statistical \n> problem.\n>\n> Not everything may be hopeless though ;-) The problem could \n> theoretically be avoided if random rows were selected from the whole \n> table. Again, that may not be feasible - the two phase approach was \n> probably not implemented for nothing.\n>\n> Another possible solution would be to avoid much of the resampling \n> (not all) in phase two. For that - in theory - every page visited \n> would have to get a lower weight, so that revisiting this page is not \n> any more likely as rows were selected from the whole column. That does \n> not sound easy or elegant to implement. But perhaps there is some \n> clever algorithm - unfortunately I do not know.\n>\n>\n>> The general advice here is:\n>>\n>> 1) Increase default_statistics_target for the column.\n>>\n>> 2) If that doesn't help, consider using the following DDL:\n>>\n>> alter table foo alter column bar set ( n_distinct = 5.0);\n>\n> Yes, I will probably have to live with that for now - I will come back \n> to these workarounds with one or two questions.\n>\n> Thanks again & Regards,\n> Seefan\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Jan 2013 08:35:31 +0100", "msg_from": "Stefan Andreatta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: serious under-estimation of n_distinct for clustered\n distributions" }, { "msg_contents": "On 14 January 2013 07:35, Stefan Andreatta <[email protected]> wrote:\n> The source of these troubles is the sampling method employed in\n> src/backend/commands/analyze.c. 
Judging from Tom Lane's comment for the\n> original implementation in 2004 this has never been thought to be perfect.\n> Does anybody see a chance to improve that part? Should this discussion be\n> taken elsewhere? Is there any input from my side that could help?\n\nNumerous alternative algorithms exist, as this has been an area of\ngreat interest for researchers for some time. Some alternatives may\neven be objectively better than Haas & Stokes. A quick peruse through\nthe archives shows that Simon Riggs once attempted to introduce an\nalgorithm described in the paper \"A Block Sampling Approach to\nDistinct Value Estimation\":\n\nhttp://www.stat.washington.edu/research/reports/1999/tr355.pdf\n\nHowever, the word on the street is that it may be worth pursuing some\nof the ideas described by the literature in just the last few years.\nI've often thought that this would be an interesting problem to work\non. I haven't had time to pursue it, though. You may wish to propose a\npatch on the pgsql-hackers mailing list.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Jan 2013 11:34:12 +0000", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: serious under-estimation of n_distinct for clustered\n distributions" } ]
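One way to make the sampling argument in this thread concrete is to apply the Haas & Stokes formula from Stefan's first mail to a sample that is random at the row level instead of the block level. The SQL below is only an illustration — it assumes the test_1 table from the test case has been recreated, and ORDER BY random() is far too expensive to be a real substitute for ANALYZE's block sampling:

    WITH sample AS (
        SELECT clustered_ordered_400k AS v
        FROM test_1
        ORDER BY random()
        LIMIT 30000
    ), counts AS (
        SELECT v, count(*) AS c
        FROM sample
        GROUP BY v
    ), stats AS (
        SELECT (SELECT count(*) FROM sample)::numeric             AS n,
               (SELECT count(*) FROM counts)::numeric             AS d,
               (SELECT count(*) FROM counts WHERE c = 1)::numeric AS f1,
               (SELECT count(*) FROM test_1)::numeric             AS big_n
    )
    SELECT n, d, f1,
           round(n * d / (n - f1 + f1 * n / big_n)) AS estimated_n_distinct
    FROM stats;

Because every row is picked independently, most sampled values of a clustered column are still seen only once, f1 stays close to d, and the estimate should come out near the true 400,000. With the page-level sample that ANALYZE takes, the same pages are read repeatedly, f1 collapses, and the formula degenerates to d — which is the under-estimation described at the start of the thread.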
[ { "msg_contents": "I have been running Postgresql 9.2 under VMWare/WinXP-32bit, and it works\nreally well.\n\nI finally decided to move to my host - Win 7 Ultimate 64bit - and installed\nthe 64bit version with same config as the 32bit. When I tried to run it,\nit was extremely slow connecting to a 4 table database I use as a test when\nstarting a server. In pgAdminIII, any attempt to open the W7 hosted\ndatabase takes 15-20 seconds. Ditto to open a table.\n\nI created a connection to the server in the VMWare/XP system, and\nperformance was close to immediate.\n\nI then removed the 64bit version, and installed the 32bit version on W7.\n\nSame results.\n\nSo this looks like I've got something messed up on W7.\n\nAny ideas?\n\nTIA.\n\n\n-- \nJohn Kasarda, CFPIM, Jonah\nValid & Robust, Inc\n\nI have been running Postgresql 9.2 under VMWare/WinXP-32bit, and it works really well.I finally decided to move to my host - Win 7 Ultimate 64bit - and installed the 64bit version with same config as the 32bit.  When I tried to run it, it was extremely slow connecting to a 4 table database I use as a test when starting a server.  In pgAdminIII, any attempt to open the W7 hosted database takes 15-20 seconds.  Ditto to open a table.\nI created a connection to the server in the VMWare/XP system, and performance was close to immediate.I then removed the 64bit version, and installed the 32bit version on W7.\nSame results.So this looks like I've got something messed up on W7.Any ideas?TIA.\n-- John Kasarda, CFPIM, JonahValid & Robust, Inc", "msg_date": "Sun, 30 Dec 2012 18:03:06 -0800", "msg_from": "John Kasarda <[email protected]>", "msg_from_op": true, "msg_subject": "Slow connections on Win 7" }, { "msg_contents": "I know nothing about running PostgreSQL in windows, but slowness in establishing connections on a particular host that isn't replicable elsewhere is very often the result of something doing reverse DNS lookups on a server when accepting new connections. Perhaps the client you are connecting from isn't set up to be resolvable via reverse dns? Maybe look in the docs for a way to disable reverse lookups on new connections? This is pure guesswork but may give you something to investigate while you wait for a response from someone more knowledgable on your specific problem\n\nSent from my iPhone\n\nOn Dec 30, 2012, at 6:03 PM, John Kasarda <[email protected]> wrote:\n\n> I have been running Postgresql 9.2 under VMWare/WinXP-32bit, and it works really well.\n> \n> I finally decided to move to my host - Win 7 Ultimate 64bit - and installed the 64bit version with same config as the 32bit. When I tried to run it, it was extremely slow connecting to a 4 table database I use as a test when starting a server. In pgAdminIII, any attempt to open the W7 hosted database takes 15-20 seconds. Ditto to open a table.\n> \n> I created a connection to the server in the VMWare/XP system, and performance was close to immediate.\n> \n> I then removed the 64bit version, and installed the 32bit version on W7.\n> \n> Same results.\n> \n> So this looks like I've got something messed up on W7.\n> \n> Any ideas?\n> \n> TIA.\n> \n> \n> -- \n> John Kasarda, CFPIM, Jonah\n> Valid & Robust, Inc\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 30 Dec 2012 19:00:25 -0800", "msg_from": "Sam Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow connections on Win 7" } ]
[ { "msg_contents": "Hey everyone!\n\nAfter much testing and hair-pulling, we've confirmed two kernel settings \nthat should always be modified in production Linux systems. Especially \nnew ones with the completely fair scheduler (CFS) as opposed to the O(1) \nscheduler.\n\nIf you want to follow along, these are:\n\n/proc/sys/kernel/sched_migration_cost\n/proc/sys/kernel/sched_autogroup_enabled\n\nWhich correspond to sysctl settings:\n\nkernel.sched_migration_cost\nkernel.sched_autogroup_enabled\n\nWhat do these settings do?\n--------------------------\n\n* sched_migration_cost\n\nThe migration cost is the total time the scheduler will consider a \nmigrated process \"cache hot\" and thus less likely to be re-migrated. By \ndefault, this is 0.5ms (500000 ns), and as the size of the process table \nincreases, eventually causes the scheduler to break down. On our \nsystems, after a smooth degradation with increasing connection count, \nsystem CPU spiked from 20 to 70% sustained and TPS was cut by 5-10x once \nwe crossed some invisible connection count threshold. For us, that was a \npgbench with 900 or more clients.\n\nThe migration cost should be increased, almost universally on server \nsystems with many processes. This means systems like PostgreSQL or \nApache would benefit from having higher migration costs. We've had good \nluck with a setting of 5ms (5000000 ns) instead.\n\nWhen the breakdown occurs, system CPU (as obtained from sar) increases \nfrom 20% on a heavy pgbench (scale 3500 on a 72GB system) to over 70%, \nand %nice/%user is cut by half or more. A higher migration cost \nessentially eliminates this artificial throttle.\n\n* sched_autogroup_enabled\n\nThis is a relatively new patch which Linus lauded back in late 2010. It \nbasically groups tasks by TTY so perceived responsiveness is improved. \nBut on server systems, large daemons like PostgreSQL are going to be \nlaunched from the same pseudo-TTY, and be effectively choked out of CPU \ncycles in favor of less important tasks.\n\nThe default setting is 1 (enabled) on some platforms. By setting this to \n0 (disabled), we saw an outright 30% performance boost on the same \npgbench test. A fully cached scale 3500 database on a 72GB system went \nfrom 67k TPS to 82k TPS with 900 client connections.\n\nTotal Benefit\n-------------\n\nAt higher connections counts, such as systems that can't use pooling or \nmake extensive use of prepared queries, these can massively affect \nperformance. At 900 connections, our test systems were at 17k TPS \nunaltered, but 85k TPS after these two modifications. Even with this \nperformance boost, we still had 40% CPU free instead of 0%. In effect, \nthe logarithmic performance of the new scheduler is returned to normal \nunder large process tables.\n\nSome systems will have a higher \"cracking\" point than others. The effect \nis amplified when a system is under high memory pressure, hence a lot of \nexpensive queries on a high number of concurrent connections is the \neasiest way to replicate these results.\n\nAdmins migrating from older systems (RHEL 5.x) may find this especially \nshocking, because the old O(1) scheduler was too \"stupid\" to have these \nadvanced features, hence it was impossible to cause this kind of behavior.\n\nThere's probably still a little room for improvement here, since 30-40% \nCPU is still unclaimed in our larger tests. I'd like to see the total \nperformance drop (175k ideal TPS at 24-connections) decreased. 
But these \nkernel tweaks are rarely discussed anywhere, it seems. There doesn't \nseem to be any consensus on how these (and other) scheduler settings \nshould be modified under different usage scenarios.\n\nI just figured I'd share, since we found this info so beneficial.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Jan 2013 15:46:25 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "Dear Shaun,\n\nThanks for that - it's really interesting to know.\n\nOn 02/01/13 21:46, Shaun Thomas wrote:\n> Hey everyone!\n>\n> After much testing and hair-pulling, we've confirmed two kernel\n> settings that should always be modified in production Linux systems.\n> Especially new ones with the completely fair scheduler (CFS) as\n> opposed to the O(1) scheduler.\n\nDoes it apply to all types of production system, or just to certain \nworkloads?\n\nFor example, what happens when there are only one or two concurrent\nprocesses? (i.e. there are always several more CPU cores than there are\nactual connections).\n\n\n> * sched_autogroup_enabled\n>\n> This is a relatively new patch which Linus lauded back in late 2010.\n> It basically groups tasks by TTY so perceived responsiveness is\n> improved. But on server systems, large daemons like PostgreSQL are\n> going to be launched from the same pseudo-TTY, and be effectively\n> choked out of CPU cycles in favor of less important tasks.\n\n\nI've got several production servers using Postgres: I'd like to squeeze \na bit more performance out of them, but in all cases, one (sometimes \ntwo) CPU cores are (sometimes) maxed out, but there are always several \ncores permanently idling. So does this apply here?\n\nThanks for your advice,\n\nRichard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 03 Jan 2013 00:47:22 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On Wed, Jan 2, 2013 at 3:46 PM, Shaun Thomas <[email protected]> wrote:\n> Hey everyone!\n>\n> After much testing and hair-pulling, we've confirmed two kernel settings\n> that should always be modified in production Linux systems. Especially new\n> ones with the completely fair scheduler (CFS) as opposed to the O(1)\n> scheduler.\n>\n> If you want to follow along, these are:\n>\n> /proc/sys/kernel/sched_migration_cost\n> /proc/sys/kernel/sched_autogroup_enabled\n>\n> Which correspond to sysctl settings:\n>\n> kernel.sched_migration_cost\n> kernel.sched_autogroup_enabled\n>\n> What do these settings do?\n> --------------------------\n>\n> * sched_migration_cost\n>\n> The migration cost is the total time the scheduler will consider a migrated\n> process \"cache hot\" and thus less likely to be re-migrated. By default, this\n> is 0.5ms (500000 ns), and as the size of the process table increases,\n> eventually causes the scheduler to break down. 
On our systems, after a\n> smooth degradation with increasing connection count, system CPU spiked from\n> 20 to 70% sustained and TPS was cut by 5-10x once we crossed some invisible\n> connection count threshold. For us, that was a pgbench with 900 or more\n> clients.\n>\n> The migration cost should be increased, almost universally on server systems\n> with many processes. This means systems like PostgreSQL or Apache would\n> benefit from having higher migration costs. We've had good luck with a\n> setting of 5ms (5000000 ns) instead.\n>\n> When the breakdown occurs, system CPU (as obtained from sar) increases from\n> 20% on a heavy pgbench (scale 3500 on a 72GB system) to over 70%, and\n> %nice/%user is cut by half or more. A higher migration cost essentially\n> eliminates this artificial throttle.\n>\n> * sched_autogroup_enabled\n>\n> This is a relatively new patch which Linus lauded back in late 2010. It\n> basically groups tasks by TTY so perceived responsiveness is improved. But\n> on server systems, large daemons like PostgreSQL are going to be launched\n> from the same pseudo-TTY, and be effectively choked out of CPU cycles in\n> favor of less important tasks.\n>\n> The default setting is 1 (enabled) on some platforms. By setting this to 0\n> (disabled), we saw an outright 30% performance boost on the same pgbench\n> test. A fully cached scale 3500 database on a 72GB system went from 67k TPS\n> to 82k TPS with 900 client connections.\n>\n> Total Benefit\n> -------------\n>\n> At higher connections counts, such as systems that can't use pooling or make\n> extensive use of prepared queries, these can massively affect performance.\n> At 900 connections, our test systems were at 17k TPS unaltered, but 85k TPS\n> after these two modifications. Even with this performance boost, we still\n> had 40% CPU free instead of 0%. In effect, the logarithmic performance of\n> the new scheduler is returned to normal under large process tables.\n>\n> Some systems will have a higher \"cracking\" point than others. The effect is\n> amplified when a system is under high memory pressure, hence a lot of\n> expensive queries on a high number of concurrent connections is the easiest\n> way to replicate these results.\n>\n> Admins migrating from older systems (RHEL 5.x) may find this especially\n> shocking, because the old O(1) scheduler was too \"stupid\" to have these\n> advanced features, hence it was impossible to cause this kind of behavior.\n>\n> There's probably still a little room for improvement here, since 30-40% CPU\n> is still unclaimed in our larger tests. I'd like to see the total\n> performance drop (175k ideal TPS at 24-connections) decreased. But these\n> kernel tweaks are rarely discussed anywhere, it seems. 
There doesn't seem to\n> be any consensus on how these (and other) scheduler settings should be\n> modified under different usage scenarios.\n>\n> I just figured I'd share, since we found this info so beneficial.\n\nThis is fantastic info.\n\nVlad, you might want to check this out and see if it has any impact in\nyour high cpu case...via:\nhttp://postgresql.1045698.n5.nabble.com/High-SYS-CPU-need-advise-td5732045.html\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 7 Jan 2013 13:22:13 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/02/2013 10:46 PM, Shaun Thomas wrote:\n> Hey everyone!\n>\n> After much testing and hair-pulling, we've confirmed two kernel settings that\n > should always be modified in production Linux systems. Especially new ones with\n> the completely fair scheduler (CFS) as opposed to the O(1) scheduler.\n\n[cut]\n\n> I just figured I'd share, since we found this info so beneficial.\n\nI just want to confirm that on our relatively small\ntest server that tweaks give us a 25% performance boost!\n\nReally appreciated Shaun.\n\nthanks\nAndrea\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 09:29:59 +0100", "msg_from": "Andrea Suisani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/08/2013 09:29 AM, Andrea Suisani wrote:\n> On 01/02/2013 10:46 PM, Shaun Thomas wrote:\n>> Hey everyone!\n>>\n>> After much testing and hair-pulling, we've confirmed two kernel settings that\n> > should always be modified in production Linux systems. Especially new ones with\n>> the completely fair scheduler (CFS) as opposed to the O(1) scheduler.\n>\n> [cut]\n>\n>> I just figured I'd share, since we found this info so beneficial.\n>\n> I just want to confirm that on our relatively small\n> test server that tweaks give us a 25% performance boost!\n\n12.5% sorry for the typo...\n\n\n> Really appreciated Shaun.\n>\n> thanks\n> Andrea\n>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 16:16:12 +0100", "msg_from": "Andrea Suisani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "The kernel on our Linux system doesn't appear to have these two settings according to the list provided by sysctl -a. Please pardon my ignorance, but should I add them? \n\nWe have Postgresql 9.0 on Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n\nThanks,\nMidge\n\n ----- Original Message ----- \n From: Shaun Thomas \n To: [email protected] \n Sent: Wednesday, January 02, 2013 1:46 PM\n Subject: [PERFORM] Two Necessary Kernel Tweaks for Linux Systems\n\n\n Hey everyone!\n\n After much testing and hair-pulling, we've confirmed two kernel settings \n that should always be modified in production Linux systems. 
Especially \n new ones with the completely fair scheduler (CFS) as opposed to the O(1) \n scheduler.\n\n If you want to follow along, these are:\n\n /proc/sys/kernel/sched_migration_cost\n /proc/sys/kernel/sched_autogroup_enabled\n\n Which correspond to sysctl settings:\n\n kernel.sched_migration_cost\n kernel.sched_autogroup_enabled\n\n What do these settings do?\n --------------------------\n\n * sched_migration_cost\n\n The migration cost is the total time the scheduler will consider a \n migrated process \"cache hot\" and thus less likely to be re-migrated. By \n default, this is 0.5ms (500000 ns), and as the size of the process table \n increases, eventually causes the scheduler to break down. On our \n systems, after a smooth degradation with increasing connection count, \n system CPU spiked from 20 to 70% sustained and TPS was cut by 5-10x once \n we crossed some invisible connection count threshold. For us, that was a \n pgbench with 900 or more clients.\n\n The migration cost should be increased, almost universally on server \n systems with many processes. This means systems like PostgreSQL or \n Apache would benefit from having higher migration costs. We've had good \n luck with a setting of 5ms (5000000 ns) instead.\n\n When the breakdown occurs, system CPU (as obtained from sar) increases \n from 20% on a heavy pgbench (scale 3500 on a 72GB system) to over 70%, \n and %nice/%user is cut by half or more. A higher migration cost \n essentially eliminates this artificial throttle.\n\n * sched_autogroup_enabled\n\n This is a relatively new patch which Linus lauded back in late 2010. It \n basically groups tasks by TTY so perceived responsiveness is improved. \n But on server systems, large daemons like PostgreSQL are going to be \n launched from the same pseudo-TTY, and be effectively choked out of CPU \n cycles in favor of less important tasks.\n\n The default setting is 1 (enabled) on some platforms. By setting this to \n 0 (disabled), we saw an outright 30% performance boost on the same \n pgbench test. A fully cached scale 3500 database on a 72GB system went \n from 67k TPS to 82k TPS with 900 client connections.\n\n Total Benefit\n -------------\n\n At higher connections counts, such as systems that can't use pooling or \n make extensive use of prepared queries, these can massively affect \n performance. At 900 connections, our test systems were at 17k TPS \n unaltered, but 85k TPS after these two modifications. Even with this \n performance boost, we still had 40% CPU free instead of 0%. In effect, \n the logarithmic performance of the new scheduler is returned to normal \n under large process tables.\n\n Some systems will have a higher \"cracking\" point than others. The effect \n is amplified when a system is under high memory pressure, hence a lot of \n expensive queries on a high number of concurrent connections is the \n easiest way to replicate these results.\n\n Admins migrating from older systems (RHEL 5.x) may find this especially \n shocking, because the old O(1) scheduler was too \"stupid\" to have these \n advanced features, hence it was impossible to cause this kind of behavior.\n\n There's probably still a little room for improvement here, since 30-40% \n CPU is still unclaimed in our larger tests. I'd like to see the total \n performance drop (175k ideal TPS at 24-connections) decreased. But these \n kernel tweaks are rarely discussed anywhere, it seems. 
There doesn't \n seem to be any consensus on how these (and other) scheduler settings \n should be modified under different usage scenarios.\n\n I just figured I'd share, since we found this info so beneficial.\n\n -- \n Shaun Thomas\n OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n 312-444-8534\n [email protected]\n\n ______________________________________________\n\n See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n -- \n Sent via pgsql-performance mailing list ([email protected])\n To make changes to your subscription:\n http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\nThe kernel on our Linux system doesn't appear to \nhave these two settings according to the list provided by sysctl -a. Please \npardon my ignorance, but should I add them? \n \nWe have Postgresql 9.0 on Linux 2.6.18-164.el5 #1 \nSMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n \nThanks,\nMidge\n \n\n----- Original Message ----- \nFrom:\nShaun \n Thomas \nTo: [email protected]\n\nSent: Wednesday, January 02, 2013 1:46 \n PM\nSubject: [PERFORM] Two Necessary Kernel \n Tweaks for Linux Systems\nHey everyone!After much testing and hair-pulling, we've \n confirmed two kernel settings that should always be modified in production \n Linux systems. Especially new ones with the completely fair scheduler \n (CFS) as opposed to the O(1) scheduler.If you want to follow \n along, these \n are:/proc/sys/kernel/sched_migration_cost/proc/sys/kernel/sched_autogroup_enabledWhich \n correspond to sysctl \n settings:kernel.sched_migration_costkernel.sched_autogroup_enabledWhat \n do these settings do?--------------------------* \n sched_migration_costThe migration cost is the total time the scheduler \n will consider a migrated process \"cache hot\" and thus less likely to be \n re-migrated. By default, this is 0.5ms (500000 ns), and as the size of the \n process table increases, eventually causes the scheduler to break down. On \n our systems, after a smooth degradation with increasing connection count, \n system CPU spiked from 20 to 70% sustained and TPS was cut by 5-10x once \n we crossed some invisible connection count threshold. For us, that was a \n pgbench with 900 or more clients.The migration cost should be \n increased, almost universally on server systems with many processes. This \n means systems like PostgreSQL or Apache would benefit from having higher \n migration costs. We've had good luck with a setting of 5ms (5000000 ns) \n instead.When the breakdown occurs, system CPU (as obtained from sar) \n increases from 20% on a heavy pgbench (scale 3500 on a 72GB system) to \n over 70%, and %nice/%user is cut by half or more. A higher migration cost \n essentially eliminates this artificial throttle.* \n sched_autogroup_enabledThis is a relatively new patch which Linus \n lauded back in late 2010. It basically groups tasks by TTY so perceived \n responsiveness is improved. But on server systems, large daemons like \n PostgreSQL are going to be launched from the same pseudo-TTY, and be \n effectively choked out of CPU cycles in favor of less important \n tasks.The default setting is 1 (enabled) on some platforms. By setting \n this to 0 (disabled), we saw an outright 30% performance boost on the same \n pgbench test. 
A fully cached scale 3500 database on a 72GB system went \n from 67k TPS to 82k TPS with 900 client connections.Total \n Benefit-------------At higher connections counts, such as systems \n that can't use pooling or make extensive use of prepared queries, these \n can massively affect performance. At 900 connections, our test systems \n were at 17k TPS unaltered, but 85k TPS after these two modifications. Even \n with this performance boost, we still had 40% CPU free instead of 0%. In \n effect, the logarithmic performance of the new scheduler is returned to \n normal under large process tables.Some systems will have a higher \n \"cracking\" point than others. The effect is amplified when a system is \n under high memory pressure, hence a lot of expensive queries on a high \n number of concurrent connections is the easiest way to replicate these \n results.Admins migrating from older systems (RHEL 5.x) may find this \n especially shocking, because the old O(1) scheduler was too \"stupid\" to \n have these advanced features, hence it was impossible to cause this kind \n of behavior.There's probably still a little room for improvement here, \n since 30-40% CPU is still unclaimed in our larger tests. I'd like to see \n the total performance drop (175k ideal TPS at 24-connections) decreased. \n But these kernel tweaks are rarely discussed anywhere, it seems. There \n doesn't seem to be any consensus on how these (and other) scheduler \n settings should be modified under different usage scenarios.I just \n figured I'd share, since we found this info so beneficial.-- Shaun \n ThomasOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, \n [email protected]______________________________________________See \n http://www.peak6.com/email_disclaimer/ \n for terms and conditions related to this email-- Sent via \n pgsql-performance mailing list ([email protected])To \n make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 8 Jan 2013 10:25:52 -0800", "msg_from": "\"Midge Brown\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/08/2013 12:25 PM, Midge Brown wrote:\n\n> The kernel on our Linux system doesn't appear to have these two\n> settings according to the list provided by sysctl -a. Please pardon\n> my ignorance, but should I add them?\n\nSorry if I wasn't more clear. These only apply to Linux systems with the \nCompletely Fair Scheduler, as opposed to the O(1) scheduler. For all \nintents and purposes, this means 3.0 kernels and above.\n\nWith a 2.6 kernel, you're fine.\n\nEffectively these changes fix what is basically a performance regression \ncompared to older kernels.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 12:28:25 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On Tue, Jan 8, 2013 at 11:28 AM, Shaun Thomas <[email protected]> wrote:\n> On 01/08/2013 12:25 PM, Midge Brown wrote:\n>\n>> The kernel on our Linux system doesn't appear to have these two\n>> settings according to the list provided by sysctl -a. Please pardon\n>> my ignorance, but should I add them?\n>\n>\n> Sorry if I wasn't more clear. These only apply to Linux systems with the\n> Completely Fair Scheduler, as opposed to the O(1) scheduler. For all intents\n> and purposes, this means 3.0 kernels and above.\n>\n> With a 2.6 kernel, you're fine.\n>\n> Effectively these changes fix what is basically a performance regression\n> compared to older kernels.\n\nWhat's the comparison of these settings versus say going to the NOP scheduler?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 11:31:13 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/08/2013 12:31 PM, Scott Marlowe wrote:\n\n> What's the comparison of these settings versus say going to the NOP\n> scheduler?\n\nAssuming you actually meant NOP and not the NOOP I/O scheduler, I don't \nknow. These CPU scheduler tweaks are all I could dig up, and googling \nfor NOP by itself or combined with Linux terms is tremendously unhelpful.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 12:36:50 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On Tue, Jan 8, 2013 at 11:36 AM, Shaun Thomas <[email protected]> wrote:\n> On 01/08/2013 12:31 PM, Scott Marlowe wrote:\n>\n>> What's the comparison of these settings versus say going to the NOP\n>> scheduler?\n>\n>\n> Assuming you actually meant NOP and not the NOOP I/O scheduler, I don't\n> know. These CPU scheduler tweaks are all I could dig up, and googling for\n> NOP by itself or combined with Linux terms is tremendously unhelpful.\n\nAssembly language on the brain. 
of course I meant NOOP.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 12:04:36 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/08/2013 01:04 PM, Scott Marlowe wrote:\n\n> Assembly language on the brain. of course I meant NOOP.\n\nOk, in that case, these are completely separate things. For IO \nscheduling, there's the Completely Fair Queue (CFQ), NOOP, Deadline, and \nso on.\n\nFor process scheduling, at least recently, there's Completely Fair \nScheduler or nothing. So far as I can tell, there is no alternative \nprocess scheduler. Just as I can't find an alternative memory manager \nthat I can tell to stop flushing my freaking active file cache due to \nphantom memory pressure. ;)\n\nThe tweaks I was discussing in this thread effectively do two things:\n\n1. Stop process grouping by TTY.\n\nOn servers, this really is a net performance loss. Especially on heavily \nforked apps like PG. System % is about 5% lower since the scheduler is \ndoing less work, but at the cost of less spreading across available \nCPUs. Our systems see a 30% performance hit with grouping enabled, \nothers may see more or less.\n\n2. Less aggressive process scheduling.\n\nThe O(log N) scheduler heuristics collapse at high process counts for \nsome reason, causing the scheduler to spend more and more time planning \nCPU assignments until it spirals completely out of control. I've seen \nthis behavior on 3.0 kernels straight to 3.5, so it looks like an \ninherent weakness of CFS. By increasing migration cost, we make the \nscheduler do less work less often, so that weird 70+% system CPU spike \nvanishes.\n\nMy guess is the increased migration cost basically offsets the point at \nwhich the scheduler would freak out. I've tested up to 2000 connections, \nand it responds fine, whereas before we were seeing flaky results as \nearly as 700 connections.\n\nMy guess as to why this is? I think it's due to VSZ as perceived by the \nscheduler. To swap processes, it also has to preload L2 and L3 cache for \nthe assigned process. As the number of PG connections increase, all with \ntheir own VSZ/RSS allocations, the scheduler has more thinking to do. At \na point when the sum of VSZ/RSS eclipses the amount of available RAM, \nthe scheduler loses nearly all decision-making ability and craps its pants.\n\nThis would also explain why I'm seeing something similar with memory. At \nhigh connection counts, even though %used is fine, and we have over 40GB \nfree for caching. VSZ/RSS are both way bigger than available cache, so \nmemory pressure causes kswapd to continuously purge the active cache \npool into inactive, and inactive into free, all while the device \nattempts to fill the active pool. It's an IO feedback loop, and around \nthe same number of connections that used to make the process scheduler \ndie. Too much of a coincidence, in my opinion.\n\nBut unlike the process scheduler, there are no good knobs to turn that \nwill fix the memory manager's behavior. At least, not in 3.0, 3.2, or \n3.4 kernels.\n\nBut I freely admit I'm just speculating based on observed behavior. I \nknow neither jack, nor squat about internal kernel mechanics. Anyone who \nactually *isn't* talking out of his ass is free to interject. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. 
Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 13:32:14 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "When I checked these, both of these settings exist on my CentOS 6.x host \n(2.6.32-279.5.1.el6.x86_64).\n\nHowever, the autogroup_enabled was already set to 0. (The \nmigration_cost was set to the 0.5ms, default noted in the OP.) So I \ndon't know if this is strictly limited to kernel 3.0.\n\nIs there an \"easy\" way to tell what scheduler my OS is using?\n\n-AJ\n\n\nOn 1/8/2013 2:32 PM, Shaun Thomas wrote:\n> On 01/08/2013 01:04 PM, Scott Marlowe wrote:\n>\n>> Assembly language on the brain. of course I meant NOOP.\n>\n> Ok, in that case, these are completely separate things. For IO \n> scheduling, there's the Completely Fair Queue (CFQ), NOOP, Deadline, \n> and so on.\n>\n> For process scheduling, at least recently, there's Completely Fair \n> Scheduler or nothing. So far as I can tell, there is no alternative \n> process scheduler. Just as I can't find an alternative memory manager \n> that I can tell to stop flushing my freaking active file cache due to \n> phantom memory pressure. ;)\n>\n> The tweaks I was discussing in this thread effectively do two things:\n>\n> 1. Stop process grouping by TTY.\n>\n> On servers, this really is a net performance loss. Especially on \n> heavily forked apps like PG. System % is about 5% lower since the \n> scheduler is doing less work, but at the cost of less spreading across \n> available CPUs. Our systems see a 30% performance hit with grouping \n> enabled, others may see more or less.\n>\n> 2. Less aggressive process scheduling.\n>\n> The O(log N) scheduler heuristics collapse at high process counts for \n> some reason, causing the scheduler to spend more and more time \n> planning CPU assignments until it spirals completely out of control. \n> I've seen this behavior on 3.0 kernels straight to 3.5, so it looks \n> like an inherent weakness of CFS. By increasing migration cost, we \n> make the scheduler do less work less often, so that weird 70+% system \n> CPU spike vanishes.\n>\n> My guess is the increased migration cost basically offsets the point \n> at which the scheduler would freak out. I've tested up to 2000 \n> connections, and it responds fine, whereas before we were seeing flaky \n> results as early as 700 connections.\n>\n> My guess as to why this is? I think it's due to VSZ as perceived by \n> the scheduler. To swap processes, it also has to preload L2 and L3 \n> cache for the assigned process. As the number of PG connections \n> increase, all with their own VSZ/RSS allocations, the scheduler has \n> more thinking to do. At a point when the sum of VSZ/RSS eclipses the \n> amount of available RAM, the scheduler loses nearly all \n> decision-making ability and craps its pants.\n>\n> This would also explain why I'm seeing something similar with memory. \n> At high connection counts, even though %used is fine, and we have over \n> 40GB free for caching. 
VSZ/RSS are both way bigger than available \n> cache, so memory pressure causes kswapd to continuously purge the \n> active cache pool into inactive, and inactive into free, all while the \n> device attempts to fill the active pool. It's an IO feedback loop, and \n> around the same number of connections that used to make the process \n> scheduler die. Too much of a coincidence, in my opinion.\n>\n> But unlike the process scheduler, there are no good knobs to turn that \n> will fix the memory manager's behavior. At least, not in 3.0, 3.2, or \n> 3.4 kernels.\n>\n> But I freely admit I'm just speculating based on observed behavior. I \n> know neither jack, nor squat about internal kernel mechanics. Anyone \n> who actually *isn't* talking out of his ass is free to interject. :)\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 15:05:56 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/08/2013 02:05 PM, AJ Weber wrote:\n\n> Is there an \"easy\" way to tell what scheduler my OS is using?\n\nUnfortunately not. I looked again, and it seems that CFS was merged into \n2.6.23. Anything before that is probably safe, but the vendor may have \nbackported it. If you don't see the settings I described, you probably \ndon't have it.\n\nSo I guess Midge had 2.6.18, which predates the merge in 2.6.23.\n\nI honestly don't understand the Linux kernel sometimes. A process \nscheduler swap is a *gigantic* functional change, and it's in a dot \nrelease. I vastly prefer PostgreSQL's approach...\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 15:48:38 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On Tuesday, January 08, 2013 03:48:38 PM Shaun Thomas wrote:\n> On 01/08/2013 02:05 PM, AJ Weber wrote:\n> > Is there an \"easy\" way to tell what scheduler my OS is using?\n> \n> Unfortunately not. I looked again, and it seems that CFS was merged into\n> 2.6.23. Anything before that is probably safe, but the vendor may have\n> backported it. If you don't see the settings I described, you probably\n> don't have it.\n> \n> So I guess Midge had 2.6.18, which predates the merge in 2.6.23.\n> \n> I honestly don't understand the Linux kernel sometimes. A process\n> scheduler swap is a *gigantic* functional change, and it's in a dot\n> release. I vastly prefer PostgreSQL's approach...\n\nRed Hat also selectively backports major functionality into their enterprise \nkernels. 
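A quick, version-independent check (just a sketch) is to look for the
tunables themselves rather than trusting the kernel version string:

  # if these don't show up, the settings discussed in this thread aren't there
  ls /proc/sys/kernel/sched_migration_cost /proc/sys/kernel/sched_autogroup_enabled
  sysctl -a 2>/dev/null | grep -E 'sched_(migration_cost|autogroup_enabled)'
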
If you're running RHEL or a clone like CentOS, the reported kernel \nversion has little bearing on what may nor may not be in your kernel.\n\nThey're very well tested and stable, so there's nothing wrong with them, per \nse, but you can't just say oh, you have version xxx, you don't have this \nfunctionality.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 15:24:33 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "Hi,\n\nwe also hit this performance barrier a while ago, when migrating a\ndatabase on a big server (48 core Opteron, 512GB RAM) from Kernel\n2.6.32 to 3.2 (both kernels from Debian packages). The system load was\ngetting very high, as you also observed (don't know the exact numbers\nright now).\n\nAfter some investigation I found out, that the reason for the high\nsystem load was that the postgresql processes were migrating from core\nto core at very high rates. So the behaviour of the CFS scheduler must\nhave changed in this regard between 2.6.32 and 3.2 kernels.\n\nYou can easily see this, if you have a look how much time the\nmigration kernel threads spend in the CPU (ps ax | grep migration). A\nlook into /proc/sched_debug also can give you some more insight into\nthe scheduler behaviour.\n\n On NUMA systems the scheduler tries to migrate processes to the nodes\non which they have the best memory-locality. But on a big database one\nprocess is typically reading randomly from a dataset which is spread\nabove all nodes. On newer kernels the CFS scheduler seems to try more\naggressively to migrate processes to other cores. I don't know if it\nis for better load balancing or for better memory locality. But\nprocess migrations are consuming a lot of resources.\n\nI had to change sched_migration_costs from 500000 (0.5ms) to 100000000\n(100ms). 
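In sysctl form that change is roughly the following (a sketch, using the
kernel.sched_migration_cost name from earlier in this thread):

  # 100ms instead of the 0.5ms default
  sysctl -w kernel.sched_migration_cost=100000000
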
This means, the scheduler is only considering a task for\nmigration if the task was running at least for 100ms instead of 0.5ms.\nThis solved the problem for us - the migration kernel threads didn't\nhave to do much work anymore and thus the system load was going down\nagain.\n\nA general problem is, that the CFS scheduler has a lot of changes\nbetween all kernel versions, so it is really hard to predict which\nregressions you can hit when going to another kernel version.\nScheduling on NUMA systems is also very complex.\n\nAn interesting dissertations showing the inconsistent behaviour of the\nCFS scheduler:\nhttp://research.cs.wisc.edu/adsl/Publications/meehean-thesis11.pdf\n\nSome parameters, which also could be considered for systematic benchmarking are\n\nsched_latency_ns\nsched_min_granularity_ns\n\nI guess that higher numbers could improve performance too on systems\nwith many cores and many connections.\n\nThanks for starting this interesting thread!\n\nHenri\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Jan 2013 09:51:26 +0100", "msg_from": "Henri Philipps <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "On 01/10/2013 02:51 AM, Henri Philipps wrote:\n\n> http://research.cs.wisc.edu/adsl/Publications/meehean-thesis11.pdf\n\nWow, that was pretty interesting. It looks like for servers, the O(1) \nscheduler is much better even with the assignment bug he identified, and \nBFS responds better to varying load than CFS.\n\nIt's too bad the paper is so old and only considers the 2.6 kernel. I'd \nlove to see this type of research applied to the latest.\n\n> sched_latency_ns\n> sched_min_granularity_ns\n>\n> I guess that higher numbers could improve performance too on systems\n> with many cores and many connections.\n\nI messed around with these a bit. Settings 10x smaller and 10x larger \ndidn't do anything appreciable that I noticed. Performance metrics were \nwithin variance of my earlier tests. Only autogrouping and migration \ncost had any appreciable effect.\n\nI'm glad we weren't the only ones who ran into this, too. You settled on \na much higher setting than we did, but the end result was the same. I \nwonder how prevalent this will become as more servers are switched over \nto newer kernels in the next couple of years. Hopefully more people \nstart complaining so they fix it. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 10 Jan 2013 09:53:25 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "2013-01-08 22:48 keltezéssel, Shaun Thomas írta:\n> On 01/08/2013 02:05 PM, AJ Weber wrote:\n>\n>> Is there an \"easy\" way to tell what scheduler my OS is using?\n>\n> Unfortunately not. I looked again, and it seems that CFS was merged into 2.6.23. \n> Anything before that is probably safe, but the vendor may have backported it. 
If you \n> don't see the settings I described, you probably don't have it.\n>\n> So I guess Midge had 2.6.18, which predates the merge in 2.6.23.\n>\n> I honestly don't understand the Linux kernel sometimes. A process scheduler swap is a \n> *gigantic* functional change, and it's in a dot release. I vastly prefer PostgreSQL's \n> approach...\n\nThe kernel version numbering is different.\nA point release in 2.6.x is 2.6.x.y.\nThis has changed in 3.x, a point release is 3.x.y.\n\nBest regards,\nZoltán Böszörményi\n\n-- \n----------------------------------\nZoltán Böszörményi\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt, Austria\nWeb: http://www.postgresql-support.de\n http://www.postgresql.at/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Jan 2013 15:28:48 +0100", "msg_from": "Boszormenyi Zoltan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two Necessary Kernel Tweaks for Linux Systems" }, { "msg_contents": "I have a server that is IO-bound right now (it's 4 cores, and top \nindicates the use rarely hits 25%, but the Wait spikes above 25-40% \nregularly). The server is running postgresql 9.0 and tomcat 6. As I \nhave mentioned in a previous thread, I can't alter the hardware to add \ndisks unfortunately, so I'm going to try and move postgresql off this \napplication server to its own host, but this is a production \nenvironment, so in the meantime...\n\nIs it possible that some spikes in IO could be attributable to the \nautovacuum process? Is there a way to check this theory?\n\nWould it be advisable (or even permissible to try/test) to disable \nautovacuum, and schedule a manual vacuumdb in the middle of the night, \nwhen this server is mostly-idle?\n\nThanks for any tips. I'm in a bit of a jam with my limited hardware.\n\n-AJ\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Jan 2013 11:53:57 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "autovacuum fringe case?" }, { "msg_contents": "\n\n\n\nOn 23.01.2013, at 20:53, AJ Weber <[email protected]> wrote:\n\n> I have a server that is IO-bound right now (it's 4 cores, and top indicates the use rarely hits 25%, but the Wait spikes above 25-40% regularly). The server is running postgresql 9.0 and tomcat 6. As I have mentioned in a previous thread, I can't alter the hardware to add disks unfortunately, so I'm going to try and move postgresql off this application server to its own host, but this is a production environment, so in the meantime...\n> \n> Is it possible that some spikes in IO could be attributable to the autovacuum process? Is there a way to check this theory?\n> \n\nTry iotop\n\n> Would it be advisable (or even permissible to try/test) to disable autovacuum, and schedule a manual vacuumdb in the middle of the night, when this server is mostly-idle?\n> \n> Thanks for any tips. 
I'm in a bit of a jam with my limited hardware.\n> \n> -AJ\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Jan 2013 23:03:25 +0400", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum fringe case?" }, { "msg_contents": "On Wed, Jan 23, 2013 at 8:53 AM, AJ Weber <[email protected]> wrote:\n> I have a server that is IO-bound right now (it's 4 cores, and top indicates\n> the use rarely hits 25%, but the Wait spikes above 25-40% regularly).\n\nHow long do the spikes last?\n\n> The\n> server is running postgresql 9.0 and tomcat 6. As I have mentioned in a\n> previous thread, I can't alter the hardware to add disks unfortunately, so\n> I'm going to try and move postgresql off this application server to its own\n> host, but this is a production environment, so in the meantime...\n>\n> Is it possible that some spikes in IO could be attributable to the\n> autovacuum process? Is there a way to check this theory?\n\nset log_autovacuum_min_duration to 0 or some positive number, and see\nif the vacuums correlate with periods of io stress (from sar or\nvmstat, for example--the problem is that sar only takes snapshots\nevery 10 minutes, which is too coarse if the spikes are short).\n\n> Would it be advisable (or even permissible to try/test) to disable\n> autovacuum, and schedule a manual vacuumdb in the middle of the night, when\n> this server is mostly-idle?\n\nScheduling a manual vacuum should be fine (but keep in mind that\nvacuum has very different default cost_delay settings than autovacuum\ndoes. If the server is completely idle that shouldn't matter, but if\nit is only mostly idle, you might want to throttle the IO a bit). But\nI certainly would not disable autovacuum without further evidence. If\na table only needs to be vacuumed once a day and you preemptively do\nit at 3a.m., then autovac won't bother to do it itself during the day.\n So there is no point, but much risk, in also turning autovac off.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Jan 2013 11:13:48 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum fringe case?" }, { "msg_contents": "\n\nOn 1/23/2013 2:13 PM, Jeff Janes wrote:\n> On Wed, Jan 23, 2013 at 8:53 AM, AJ Weber<[email protected]> wrote:\n>> I have a server that is IO-bound right now (it's 4 cores, and top indicates\n>> the use rarely hits 25%, but the Wait spikes above 25-40% regularly).\n> How long do the spikes last?\n From what I can gather, a few seconds to a few minutes.\n>\n>> The\n>> server is running postgresql 9.0 and tomcat 6. As I have mentioned in a\n>> previous thread, I can't alter the hardware to add disks unfortunately, so\n>> I'm going to try and move postgresql off this application server to its own\n>> host, but this is a production environment, so in the meantime...\n>>\n>> Is it possible that some spikes in IO could be attributable to the\n>> autovacuum process? 
Is there a way to check this theory?\n> set log_autovacuum_min_duration to 0 or some positive number, and see\n> if the vacuums correlate with periods of io stress (from sar or\n> vmstat, for example--the problem is that sar only takes snapshots\n> every 10 minutes, which is too coarse if the spikes are short).\nI used iotop last time it was going crazy, and there were 5 postgres \nprocs at the top of the list (and virtually nothing else) all doing a \nSELECT. So I'm also going to restart the DB this weekend with \nlog-min-duration enabled. Could also be some misbehaving queries...\n\nIs there a skinny set of instructions on loading pg_stat_statements? Or \nshould I just log them and review them from there?\n\n>\n>> Would it be advisable (or even permissible to try/test) to disable\n>> autovacuum, and schedule a manual vacuumdb in the middle of the night, when\n>> this server is mostly-idle?\n> Scheduling a manual vacuum should be fine (but keep in mind that\n> vacuum has very different default cost_delay settings than autovacuum\n> does. If the server is completely idle that shouldn't matter, but if\n> it is only mostly idle, you might want to throttle the IO a bit). But\n> I certainly would not disable autovacuum without further evidence. If\n> a table only needs to be vacuumed once a day and you preemptively do\n> it at 3a.m., then autovac won't bother to do it itself during the day.\n> So there is no point, but much risk, in also turning autovac off.\nIf I set autovacuum_max_workers = 1, will that effectively single-thread \nit so I don't have two running at once? Maybe that'll mitigate disk \ncontention a little at least?\n>\n> Cheers,\n>\n> Jeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Jan 2013 17:48:10 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum fringe case?" }, { "msg_contents": "AJ Weber escribió:\n\n> On 1/23/2013 2:13 PM, Jeff Janes wrote:\n\n> >Scheduling a manual vacuum should be fine (but keep in mind that\n> >vacuum has very different default cost_delay settings than autovacuum\n> >does. If the server is completely idle that shouldn't matter, but if\n> >it is only mostly idle, you might want to throttle the IO a bit). But\n> >I certainly would not disable autovacuum without further evidence. If\n> >a table only needs to be vacuumed once a day and you preemptively do\n> >it at 3a.m., then autovac won't bother to do it itself during the day.\n> > So there is no point, but much risk, in also turning autovac off.\n> If I set autovacuum_max_workers = 1, will that effectively\n> single-thread it so I don't have two running at once? Maybe that'll\n> mitigate disk contention a little at least?\n\nIf you have a single one, it will go three times as fast. If you want\nto make the whole thing go slower (i.e. cause less impact on your I/O\nsystem when running), crank up autovacuum_vacuum_cost_delay.\n\n-- \nÁlvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 23 Jan 2013 23:03:04 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum fringe case?" 
}, { "msg_contents": "On Wednesday, January 23, 2013, AJ Weber wrote:\n\n>\n>\n> Is there a skinny set of instructions on loading pg_stat_statements? Or\n> should I just log them and review them from there?\n>\n\nMake sure you have installed contrib. (How you do that depends on how you\ninstalled PostgreSQL in the first place. If you installed from source, then\njust follow \"sudo make install\" with \"cd contrib; sudo make install\")\n\n\nThen, just change postgresql.conf so that\n\nshared_preload_libraries = 'pg_stat_statements'\n\nAnd restart the server.\n\nThen in psql run\n\ncreate extension pg_stat_statements ;\n\nCheers,\n\nJeff\n\nOn Wednesday, January 23, 2013, AJ Weber wrote:\nIs there a skinny set of instructions on loading pg_stat_statements?  Or should I just log them and review them from there?Make sure you have installed contrib.  (How you do that depends on how you installed PostgreSQL in the first place. If you installed from source, then just follow \"sudo make install\" with \"cd contrib; sudo make install\")\n Then, just change postgresql.conf so thatshared_preload_libraries = 'pg_stat_statements'And restart the server.\nThen in psql runcreate extension pg_stat_statements ;Cheers,Jeff", "msg_date": "Wed, 23 Jan 2013 19:06:14 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum fringe case?" } ]
[ { "msg_contents": "Hi Listers,\n\nwe migrated an oracle datawarehouse to postgresql 9.1 ( ppas 9.1.7.12 ) and are facing massive issues with response times in postgres when compared to the oracle system. Both database run on the same hardware and storage ( rhel5.8 64bit ).\n\nOracle memory parameters are:\nSGA=1gb\nPGA=200mb\n\nPostgres currently runs with 15gb of shared buffers ( that’s because the big table in question is around 2.5gb in size and one suggestion was to increase that much so postgresql will cache the complete table. and this is the case now ).\n\nexplain (analyze,buffers) SELECT test1.slsales_batch\n , test1.slsales_checksum\n , test1.slsales_reg_id\n , test1.slsales_prod_id\n , test1.slsales_date_id\n , test1.slsales_pos_id\n , test1.slsales_amt_sales_gross\n , test1.slsales_amt_sales_discount\n , test1.slsales_units_sales_gross\n , test1.slsales_amt_returns\n , test1.slsales_amt_returns_discount\n , test1.slsales_units_returns\n , (test1.slsales_amt_sales_gross - test1.slsales_amt_returns)\n * mgmt_fact_winratio.winratio_ratio AS slsales_amt_est_winnings\n , mgmt_fact_winratio.winratio_ratio AS slsales_ratio\n FROM mgmtt_own.test1\n LEFT JOIN mgmtt_own.mgmt_fact_winratio\n ON mgmt_fact_winratio.winratio_date_id = test1.slsales_date_id\n\nOracle’s explain plan looks like this:\n\n----------------------------------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |\n----------------------------------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 25M| 1527M| | 115K (3)| 00:23:10 |\n|* 1 | HASH JOIN RIGHT OUTER| | 25M| 1527M| 4376K| 115K (3)| 00:23:10 |\n| 2 | TABLE ACCESS FULL | MGMT_FACT_WINRATIO | 159K| 2498K| | 167 (5)| 00:00:03 |\n| 3 | TABLE ACCESS FULL | TEST1 | 25M| 1139M| | 43435 (5)| 00:08:42 |\n----------------------------------------------------------------------------------------------------\nPredicate Information (identified by operation id):\n---------------------------------------------------\n 1 - access(\"MGMT_FACT_WINRATIO\".\"WINRATIO_PROD_ID\"(+)=\"TEST1\".\"SLSALES_PROD_ID\" AND\n \"MGMT_FACT_WINRATIO\".\"WINRATIO_DATE_ID\"(+)=\"TEST1\".\"SLSALES_DATE_ID\")\n\nSomehow oracle seems to know that a right join is the better way to go.\n\nPostgres’s explain plan:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\nHash Left Join (cost=3948.52..13646089.21 rows=25262160 width=61) (actual time=260.642..81240.692 rows=25262549 loops=1)\n Hash Cond: ((test1.slsales_date_id = mgmt_fact_winratio.winratio_date_id) AND (test1.slsales_prod_id = mgmt_fact_winratio.winratio_prod_id))\n Buffers: shared hit=306590\n -> Seq Scan on test1 (cost=0.00..254148.75 rows=25262160 width=56) (actual time=0.009..15674.535 rows=25262161 loops=1)\n Buffers: shared hit=305430\n -> Hash (cost=1582.89..1582.89 rows=157709 width=19) (actual time=260.564..260.564 rows=157709 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 7855kB\n Buffers: shared hit=1160\n -> Seq Scan on mgmt_fact_winratio (cost=0.00..1582.89 rows=157709 width=19) (actual time=0.008..114.406 rows=157709 loops=1)\n Buffers: shared hit=1160\nTotal runtime: 95762.025 ms\n(11 rows)\n\nTried to modify the statement according to oracle’s plan, but this did not help:\n\nexplain (analyze,buffers) SELECT test1.slsales_batch\n , test1.slsales_checksum\n , test1.slsales_reg_id\n , 
test1.slsales_prod_id\n , test1.slsales_date_id\n , test1.slsales_pos_id\n , test1.slsales_amt_sales_gross\n , test1.slsales_amt_sales_discount\n , test1.slsales_units_sales_gross\n , test1.slsales_amt_returns\n , test1.slsales_amt_returns_discount\n , test1.slsales_units_returns\n , (test1.slsales_amt_sales_gross - test1.slsales_amt_returns)\n * mgmt_fact_winratio.winratio_ratio AS slsales_amt_est_winnings\n , mgmt_fact_winratio.winratio_ratio AS slsales_ratio\n FROM mgmtt_own.test1\n , mgmtt_own.mgmt_fact_winratio\n WHERE mgmt_fact_winratio.winratio_prod_id(+) = test1.slsales_prod_id\n AND mgmt_fact_winratio.winratio_date_id(+) = test1.slsales_date_id\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\nHash Left Join (cost=3948.52..13646089.21 rows=25262160 width=61) (actual time=276.605..80629.400 rows=25262549 loops=1)\n Hash Cond: ((test1.slsales_prod_id = mgmt_fact_winratio.winratio_prod_id) AND (test1.slsales_date_id = mgmt_fact_winratio.winratio_date_id))\n Buffers: shared hit=306590\n -> Seq Scan on test1 (cost=0.00..254148.75 rows=25262160 width=56) (actual time=0.009..15495.167 rows=25262161 loops=1)\n Buffers: shared hit=305430\n -> Hash (cost=1582.89..1582.89 rows=157709 width=19) (actual time=276.515..276.515 rows=157709 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 7855kB\n Buffers: shared hit=1160\n -> Seq Scan on mgmt_fact_winratio (cost=0.00..1582.89 rows=157709 width=19) (actual time=0.009..119.930 rows=157709 loops=1)\n Buffers: shared hit=1160\nTotal runtime: 95011.401 ms\n\nParameters changed:\ndefault_statistics_target =1000\nenable_mergejoin=false ( when enabled query takes even longer )\nseq_page_cost=1\nrandom_page_cost=2\n\nvacuumed the whole database and currently there is no data coming in, so everything is up to date.\n\nWhat additionally makes me wonder is, that the same table in oracle is taking much less space than in postgresql:\n\nSQL> select sum(bytes) from dba_extents where segment_name = 'TEST1';\nSUM(BYTES)\n----------\n1610612736\n\nselect pg_relation_size('mgmtt_own.test1');\npg_relation_size\n------------------\n 2502082560\n(1 row)\n\n(sysdba@[local]:7777) [bi_dwht] > \\d+ mgmtt_own.test1\n Table \"mgmtt_own.test1\"\n Column | Type | Modifiers | Storage | Description\n------------------------------+---------------+-----------+---------+-------------\nslsales_batch | numeric(8,0) | | main |\nslsales_checksum | numeric(8,0) | | main |\nslsales_reg_id | numeric(8,0) | | main |\nslsales_prod_id | numeric(8,0) | | main |\nslsales_date_id | numeric(8,0) | | main |\nslsales_pos_id | numeric(8,0) | | main |\nslsales_amt_sales_gross | numeric(16,6) | | main |\nslsales_amt_sales_discount | numeric(16,6) | | main |\nslsales_units_sales_gross | numeric(8,0) | | main |\nslsales_amt_returns | numeric(16,6) | | main |\nslsales_amt_returns_discount | numeric(16,6) | | main |\nslsales_units_returns | numeric(8,0) | | main |\nslsales_amt_est_winnings | numeric(16,6) | | main |\nIndexes:\n \"itest1\" btree (slsales_date_id) CLUSTER, tablespace \"mgmtt_idx\"\n \"itest2\" btree (slsales_prod_id), tablespace \"mgmtt_idx\"\nHas OIDs: no\nTablespace: \"mgmtt_dat\"\n\nAlthough the plan seems to be ok because most of the table must be read 95 secs compared to 23 secs will be a killer for the project.\n\nAny hints what else could be checked/done ?\n\nKind Regards\nDaniel\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Listers,\n \nwe migrated an oracle datawarehouse to 
postgresql 9.1 ( ppas 9.1.7.12 ) and are facing massive issues with response times in postgres when compared to the oracle system. Both database run on the same hardware and storage ( rhel5.8 64bit\n ).\n \nOracle memory parameters are:\nSGA=1gb\nPGA=200mb\n \nPostgres currently runs with 15gb of shared buffers ( that’s because the big table in question is around 2.5gb in size and one suggestion was to increase that much so postgresql will cache the complete table. and this is the case now ).\n \nexplain (analyze,buffers) SELECT test1.slsales_batch\n     , test1.slsales_checksum\n     , test1.slsales_reg_id\n     , test1.slsales_prod_id\n     , test1.slsales_date_id\n     , test1.slsales_pos_id\n     , test1.slsales_amt_sales_gross\n     , test1.slsales_amt_sales_discount\n     , test1.slsales_units_sales_gross\n     , test1.slsales_amt_returns\n     , test1.slsales_amt_returns_discount\n     , test1.slsales_units_returns\n     , (test1.slsales_amt_sales_gross - test1.slsales_amt_returns)\n\n         * mgmt_fact_winratio.winratio_ratio AS slsales_amt_est_winnings\n     , mgmt_fact_winratio.winratio_ratio AS slsales_ratio\n  FROM mgmtt_own.test1\n   LEFT JOIN mgmtt_own.mgmt_fact_winratio \n             ON mgmt_fact_winratio.winratio_date_id = test1.slsales_date_id\n \nOracle’s explain plan looks like this:\n \n----------------------------------------------------------------------------------------------------\n| Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |\n----------------------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT      |                    |    25M|  1527M|       |   115K  (3)| 00:23:10 |\n|*  1 |  HASH JOIN RIGHT OUTER|                    |    25M|  1527M|  4376K|   115K  (3)| 00:23:10 |\n|   2 |   TABLE ACCESS FULL   | MGMT_FACT_WINRATIO |   159K|  2498K|       |   167   (5)| 00:00:03 |\n|   3 |   TABLE ACCESS FULL   | TEST1              |    25M|  1139M|       | 43435   (5)| 00:08:42 |\n----------------------------------------------------------------------------------------------------\nPredicate Information (identified by operation id):\n---------------------------------------------------\n   1 - access(\"MGMT_FACT_WINRATIO\".\"WINRATIO_PROD_ID\"(+)=\"TEST1\".\"SLSALES_PROD_ID\" AND\n              \"MGMT_FACT_WINRATIO\".\"WINRATIO_DATE_ID\"(+)=\"TEST1\".\"SLSALES_DATE_ID\")\n \nSomehow oracle seems to know that a right join is the better way to go.\n \nPostgres’s explain plan:\n \n                                                                   QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\nHash Left Join  (cost=3948.52..13646089.21 rows=25262160 width=61) (actual time=260.642..81240.692 rows=25262549 loops=1)\n   Hash Cond: ((test1.slsales_date_id = mgmt_fact_winratio.winratio_date_id) AND (test1.slsales_prod_id = mgmt_fact_winratio.winratio_prod_id))\n   Buffers: shared hit=306590\n   ->  Seq Scan on test1  (cost=0.00..254148.75 rows=25262160 width=56) (actual time=0.009..15674.535 rows=25262161 loops=1)\n         Buffers: shared hit=305430\n   ->  Hash  (cost=1582.89..1582.89 rows=157709 width=19) (actual time=260.564..260.564 rows=157709 loops=1)\n         Buckets: 16384  Batches: 1  Memory Usage: 7855kB\n         Buffers: shared hit=1160\n         ->  Seq Scan on mgmt_fact_winratio  (cost=0.00..1582.89 rows=157709 width=19) (actual 
time=0.008..114.406 rows=157709 loops=1)\n               Buffers: shared hit=1160\nTotal runtime: 95762.025 ms\n(11 rows)\n \nTried to modify the statement according to oracle’s plan, but this did not help:\n \nexplain (analyze,buffers) SELECT test1.slsales_batch\n     , test1.slsales_checksum\n     , test1.slsales_reg_id\n     , test1.slsales_prod_id\n     , test1.slsales_date_id\n     , test1.slsales_pos_id\n     , test1.slsales_amt_sales_gross\n     , test1.slsales_amt_sales_discount\n     , test1.slsales_units_sales_gross\n     , test1.slsales_amt_returns\n     , test1.slsales_amt_returns_discount\n     , test1.slsales_units_returns\n     , (test1.slsales_amt_sales_gross - test1.slsales_amt_returns)\n\n         * mgmt_fact_winratio.winratio_ratio AS slsales_amt_est_winnings\n     , mgmt_fact_winratio.winratio_ratio AS slsales_ratio\n  FROM mgmtt_own.test1\n     , mgmtt_own.mgmt_fact_winratio \n WHERE mgmt_fact_winratio.winratio_prod_id(+) = test1.slsales_prod_id\n   AND mgmt_fact_winratio.winratio_date_id(+) = test1.slsales_date_id\n\n;\n                                                                  QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\nHash Left Join  (cost=3948.52..13646089.21 rows=25262160 width=61) (actual time=276.605..80629.400 rows=25262549 loops=1)\n   Hash Cond: ((test1.slsales_prod_id = mgmt_fact_winratio.winratio_prod_id) AND (test1.slsales_date_id = mgmt_fact_winratio.winratio_date_id))\n   Buffers: shared hit=306590\n   ->  Seq Scan on test1  (cost=0.00..254148.75 rows=25262160 width=56) (actual time=0.009..15495.167 rows=25262161 loops=1)\n         Buffers: shared hit=305430\n   ->  Hash  (cost=1582.89..1582.89 rows=157709 width=19) (actual time=276.515..276.515 rows=157709 loops=1)\n         Buckets: 16384  Batches: 1  Memory Usage: 7855kB\n         Buffers: shared hit=1160\n         ->  Seq Scan on mgmt_fact_winratio  (cost=0.00..1582.89 rows=157709 width=19) (actual time=0.009..119.930 rows=157709 loops=1)\n               Buffers: shared hit=1160\nTotal runtime: 95011.401 ms\n \nParameters changed:\ndefault_statistics_target =1000\nenable_mergejoin=false  ( when enabled query takes even longer )\nseq_page_cost=1\nrandom_page_cost=2\n \nvacuumed the whole database and currently there is no data coming in, so everything is up to date.\n\n \nWhat additionally makes me wonder is, that the same table in oracle is taking much less space than in postgresql:\n \nSQL> select  sum(bytes) from dba_extents where segment_name = 'TEST1';\nSUM(BYTES)\n----------\n1610612736\n \nselect pg_relation_size('mgmtt_own.test1');\npg_relation_size\n------------------\n       2502082560\n(1 row)\n \n(sysdba@[local]:7777) [bi_dwht] > \\d+ mgmtt_own.test1\n                             Table \"mgmtt_own.test1\"\n            Column            |     Type      | Modifiers | Storage | Description\n------------------------------+---------------+-----------+---------+-------------\nslsales_batch                | numeric(8,0)  |           | main    |\nslsales_checksum             | numeric(8,0)  |           | main    |\nslsales_reg_id               | numeric(8,0)  |           | main    |\nslsales_prod_id              | numeric(8,0)  |           | main    |\nslsales_date_id              | numeric(8,0)  |           | main    |\nslsales_pos_id               | numeric(8,0)  |           | main    |\nslsales_amt_sales_gross      | numeric(16,6) |           | main    
|\nslsales_amt_sales_discount   | numeric(16,6) |           | main    |\nslsales_units_sales_gross    | numeric(8,0)  |           | main    |\nslsales_amt_returns          | numeric(16,6) |           | main    |\nslsales_amt_returns_discount | numeric(16,6) |           | main    |\nslsales_units_returns        | numeric(8,0)  |           | main    |\nslsales_amt_est_winnings     | numeric(16,6) |           | main    |\nIndexes:\n    \"itest1\" btree (slsales_date_id) CLUSTER, tablespace \"mgmtt_idx\"\n    \"itest2\" btree (slsales_prod_id), tablespace \"mgmtt_idx\"\nHas OIDs: no\nTablespace: \"mgmtt_dat\"\n \nAlthough the plan seems to be ok because most of the table must be read 95 secs compared to 23 secs will be a killer for the project.\n \nAny hints what else could be checked/done ?\n \nKind Regards\nDaniel", "msg_date": "Thu, 3 Jan 2013 13:30:42 +0000", "msg_from": "Daniel Westermann <[email protected]>", "msg_from_op": true, "msg_subject": "FW: performance issue with a 2.5gb joinded table" }, { "msg_contents": "On 03.01.2013 15:30, Daniel Westermann wrote:\n> What additionally makes me wonder is, that the same table in oracle is taking much less space than in postgresql:\n>\n> SQL> select sum(bytes) from dba_extents where segment_name = 'TEST1';\n> SUM(BYTES)\n> ----------\n> 1610612736\n>\n> select pg_relation_size('mgmtt_own.test1');\n> pg_relation_size\n> ------------------\n> 2502082560\n> (1 row)\n>\n> (sysdba@[local]:7777) [bi_dwht]> \\d+ mgmtt_own.test1\n> Table \"mgmtt_own.test1\"\n> Column | Type | Modifiers | Storage | Description\n> ------------------------------+---------------+-----------+---------+-------------\n> slsales_batch | numeric(8,0) | | main |\n> slsales_checksum | numeric(8,0) | | main |\n> slsales_reg_id | numeric(8,0) | | main |\n> slsales_prod_id | numeric(8,0) | | main |\n> slsales_date_id | numeric(8,0) | | main |\n> slsales_pos_id | numeric(8,0) | | main |\n> slsales_amt_sales_gross | numeric(16,6) | | main |\n> slsales_amt_sales_discount | numeric(16,6) | | main |\n> slsales_units_sales_gross | numeric(8,0) | | main |\n> slsales_amt_returns | numeric(16,6) | | main |\n> slsales_amt_returns_discount | numeric(16,6) | | main |\n> slsales_units_returns | numeric(8,0) | | main |\n> slsales_amt_est_winnings | numeric(16,6) | | main |\n> Indexes:\n> \"itest1\" btree (slsales_date_id) CLUSTER, tablespace \"mgmtt_idx\"\n> \"itest2\" btree (slsales_prod_id), tablespace \"mgmtt_idx\"\n> Has OIDs: no\n> Tablespace: \"mgmtt_dat\"\n\nOne difference is that numerics are stored more tightly packed on \nOracle. Which is particularly good for Oracle as they don't have other \nnumeric data types than number. On PostgreSQL, you'll want to use int4 \nfor ID-fields, where possible. An int4 always takes up 4 bytes, while a \nnumeric holding an integer value in the same range is typically 5-9 bytes.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 03 Jan 2013 19:02:08 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: performance issue with a 2.5gb joinded table" }, { "msg_contents": "-----Original Message-----\nFrom: Heikki Linnakangas [mailto:[email protected]] \nSent: Donnerstag, 3. 
Januar 2013 18:02\nTo: Daniel Westermann\nCc: '[email protected]'\nSubject: Re: [PERFORM] FW: performance issue with a 2.5gb joinded table\n\nOn 03.01.2013 15:30, Daniel Westermann wrote:\n> What additionally makes me wonder is, that the same table in oracle is taking much less space than in postgresql:\n>\n> SQL> select sum(bytes) from dba_extents where segment_name = \n> SQL> 'TEST1';\n> SUM(BYTES)\n> ----------\n> 1610612736\n>\n> select pg_relation_size('mgmtt_own.test1');\n> pg_relation_size\n> ------------------\n> 2502082560\n> (1 row)\n>\n> (sysdba@[local]:7777) [bi_dwht]> \\d+ mgmtt_own.test1\n> Table \"mgmtt_own.test1\"\n> Column | Type | Modifiers | Storage | Description\n> ------------------------------+---------------+-----------+---------+-------------\n> slsales_batch | numeric(8,0) | | main |\n> slsales_checksum | numeric(8,0) | | main |\n> slsales_reg_id | numeric(8,0) | | main |\n> slsales_prod_id | numeric(8,0) | | main |\n> slsales_date_id | numeric(8,0) | | main |\n> slsales_pos_id | numeric(8,0) | | main |\n> slsales_amt_sales_gross | numeric(16,6) | | main |\n> slsales_amt_sales_discount | numeric(16,6) | | main |\n> slsales_units_sales_gross | numeric(8,0) | | main |\n> slsales_amt_returns | numeric(16,6) | | main |\n> slsales_amt_returns_discount | numeric(16,6) | | main |\n> slsales_units_returns | numeric(8,0) | | main |\n> slsales_amt_est_winnings | numeric(16,6) | | main |\n> Indexes:\n> \"itest1\" btree (slsales_date_id) CLUSTER, tablespace \"mgmtt_idx\"\n> \"itest2\" btree (slsales_prod_id), tablespace \"mgmtt_idx\"\n> Has OIDs: no\n> Tablespace: \"mgmtt_dat\"\n\nOne difference is that numerics are stored more tightly packed on Oracle. Which is particularly good for Oracle as they don't have other numeric data types than number. On PostgreSQL, you'll want to use int4 for ID-fields, where possible. An int4 always takes up 4 bytes, while a numeric holding an integer value in the same range is typically 5-9 bytes.\n\n- Heikki\n\nThanks for pointing that out, Heikki.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Jan 2013 18:34:21 +0000", "msg_from": "Daniel Westermann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: performance issue with a 2.5gb joinded table" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> One difference is that numerics are stored more tightly packed on \n> Oracle. Which is particularly good for Oracle as they don't have other \n> numeric data types than number. 
On PostgreSQL, you'll want to use int4 \n> for ID-fields, where possible. An int4 always takes up 4 bytes, while a \n> numeric holding an integer value in the same range is typically 5-9 bytes.\n\nReplacing those numeric(8) and numeric(16) fields with int4 and int8\nwould be greatly beneficial to comparison and hashing performance,\nnot just table size.  I'm a bit surprised that EDB's porting tools\nevidently don't do this automatically (I infer from the reference to\nPPAS that the OP is using EDB ...)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 04 Jan 2013 15:40:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: performance issue with a 2.5gb joinded table" }, { "msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Freitag, 4. Januar 2013 21:41\nTo: Heikki Linnakangas\nCc: Daniel Westermann; '[email protected]'\nSubject: Re: [PERFORM] FW: performance issue with a 2.5gb joinded table\n\nHeikki Linnakangas <[email protected]> writes:\n> One difference is that numerics are stored more tightly packed on \n> Oracle. Which is particularly good for Oracle as they don't have other \n> numeric data types than number. On PostgreSQL, you'll want to use int4 \n> for ID-fields, where possible. An int4 always takes up 4 bytes, while \n> a numeric holding an integer value in the same range is typically 5-9 bytes.\n\n>> Replacing those numeric(8) and numeric(16) fields with int4 and int8 would be greatly beneficial to comparison and hashing performance, not just table size. I'm a bit surprised that EDB's porting tools evidently don't do this automatically (I infer from the reference to PPAS that the OP is using EDB ...)\n>>\n>>\t\t\tregards, tom lane\n\nThanks, Tom. Any clue where the remaining difference of around 500mb comes from? Converting all the numeric(8) columns to int saved around 380mb of storage and around 10 secs of execution time... both databases have their files on standard ext3, same fs options. Given that the table has around 25'000'000 rows this is still approx. 20 bytes more per row on average.\n\nRegards\nDaniel\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Jan 2013 21:29:57 +0000", "msg_from": "Daniel Westermann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: performance issue with a 2.5gb joinded table" }, { "msg_contents": "Daniel,\n\n>>Somehow oracle seems to know that a right join is the better way to go.\nIn fact, PostgreSQL is just doing the same thing: it hashes smaller table\nand scans the bigger one.\n\nCould you please clarify how do you consume 25M rows?\nIt could be the difference of response times comes not from the PostgreSQL\nitself, but from the client code.\n\nCould you please add the following information?\n1) Execution time of simple query that selects MAX of all the required\ncolumns \"select max(test1.slsales_batch) , max(test1.slsales_checksum),\n...\".\nI mean not explain (analyze, buffers), but simple execution.\nThe purpose of MAX is to split overhead of consuming of the resultset from\nthe overhead of producing it.\n\n2) explain (analyze, buffers) for the same query with maxes. 
That should\nreveal the overhead of explain analyze itself.\n\n3) The output of the following SQLPlus script (from Oracle):\n set linesize 1000 pagesize 10000 trimout on trimspool on time on timing on\n spool slow_query.lst\n select /*+ gather_plan_statistics */ max(test1.slsales_batch) ,\nmax(test1.slsales_checksum), ..;\n select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS\nLAST'));\n spool off\n\n That would display detailed statistics on execution time similar to the\nexplain (analyze, buffers).\n\n4) Could you please clarify how did you migrate test1 table?\nI guess the order of rows in that table might affect overall execution time.\nSorted table would be more CPU cache friendly, thus giving speedup. (see\n[1] for similar example).\nAs far as I understand, simple create table as select * from test1 order by\nslsales_date_id, slsales_prod_id should improve cache locality.\n\n\n[1]:\nhttp://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array\n\n-- \nRegards,\nVladimir Sitnikov\n\nDaniel,>>Somehow oracle seems to know that a right join is the better way to go.In fact, PostgreSQL is just doing the same thing: it hashes smaller table and scans the bigger one.\nCould you please clarify how do you consume 25M rows?It could be the difference of response times comes not from the PostgreSQL itself, but from the client code.\nCould you please add the following information?1) Execution time of simple query that selects MAX of all the required columns \"select max(test1.slsales_batch) , max(test1.slsales_checksum), ...\". \nI mean not explain (analyze, buffers), but simple execution.The purpose of MAX is to split overhead of consuming of the resultset from the overhead of producing it.\n2) explain (analyze, buffers) for the same query with maxes. That should reveal the overhead of explain analyze itself.\n3) The output of the following SQLPlus script (from Oracle):  set linesize 1000 pagesize 10000 trimout on trimspool on time on timing on\n  spool slow_query.lst  select /*+ gather_plan_statistics */ max(test1.slsales_batch) , max(test1.slsales_checksum), ..;\n  select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));  spool off\n  That would display detailed statistics on execution time similar to the explain (analyze, buffers).\n4) Could you please clarify how did you migrate test1 table?I guess the order of rows in that table might affect overall execution time.\nSorted table would be more CPU cache friendly, thus giving speedup. (see [1] for similar example).As far as I understand, simple create table as select * from test1 order by slsales_date_id, slsales_prod_id should improve cache locality.\n[1]: http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array\n-- Regards,Vladimir Sitnikov", "msg_date": "Wed, 9 Jan 2013 13:30:05 +0400", "msg_from": "Vladimir Sitnikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: performance issue with a 2.5gb joinded table" } ]
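A minimal sketch of the numeric-to-integer conversion recommended by Heikki and Tom in this thread (and which reportedly saved around 380mb and roughly 10 seconds here) might look like the following. The column list is taken from the \d+ output earlier in the thread, but the exact statements are an illustration, not the commands that were actually run:

    -- Rewrite the integer-valued numeric(8,0) columns as int4.
    -- Grouping the changes in one ALTER TABLE means the table is rewritten only once.
    ALTER TABLE mgmtt_own.test1
        ALTER COLUMN slsales_batch             TYPE integer,
        ALTER COLUMN slsales_checksum          TYPE integer,
        ALTER COLUMN slsales_reg_id            TYPE integer,
        ALTER COLUMN slsales_prod_id           TYPE integer,
        ALTER COLUMN slsales_date_id           TYPE integer,
        ALTER COLUMN slsales_pos_id            TYPE integer,
        ALTER COLUMN slsales_units_sales_gross TYPE integer,
        ALTER COLUMN slsales_units_returns     TYPE integer;

    -- Refresh planner statistics and check the new on-disk size.
    ANALYZE mgmtt_own.test1;
    SELECT pg_size_pretty(pg_relation_size('mgmtt_own.test1'));

The numeric(16,6) amount columns are left alone in this sketch because they carry fractional values; they would either stay numeric or need a deliberately chosen replacement type.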
[ { "msg_contents": "Hi everybody,\n\nI have implemented my first app using PG DB and thought for a minute(may be\ntwo) that I know something about PG but below problem totally destroyed my\nconfidence :). Please help me to restore it.\n\nHere is simple join query. It runs just fine on MS SQL 2008 and uses\nall available indexes using even bigger overall dataset.\n\nselect visits.id, views.id\nfrom visits join views on visits.id = views.visit_id\nwhere visits.created_at >= '11/15/2012' and visits.created_at <\n'11/16/2012'\n\nQuick performance stat\n\nMS SQL: 1 second, 264K rows\nPG: 158 seconds, 264K rows\n\nExplain plan from both DBs\n\nPG QUERY PLAN\nHash Join (cost=12716.17..1101820.09 rows=248494 width=8)\n Hash Cond: (views.visit_id = visits.id)\n -> Seq Scan on views (cost=0.00..819136.56 rows=17434456 width=8)\n -> Hash (cost=10549.16..10549.16 rows=132081 width=4)\n -> Index Scan using visits_created_at_index on visits\n (cost=0.00..10549.16 rows=132081 width=4)\n Index Cond: ((created_at >= '2012-11-15 00:00:00'::timestamp\nwithout time zone) AND (created_at < '2012-11-16 00:00:00'::timestamp\nwithout time zone))\n\nschemaname | tablename | indexname | tablespace |\n indexdef\n\n------------+-----------+---------------------------------+------------+------------------------------------------------------------------------------------------\n public | views | views_pkey | |\nCREATE UNIQUE INDEX views_pkey ON views USING btree (id)\n public | views | views_visit_id_index | |\nCREATE INDEX views_visit_id_index ON views USING btree (visit_id)\n\nMS SQL Query plan\n'11/16/2012'\n |--Parallelism(Gather Streams)\n |--Nested Loops(Inner Join, OUTER REFERENCES:([visits].[id],\n[Expr1006]) OPTIMIZED WITH UNORDERED PREFETCH)\n |--Index Seek(OBJECT:([visits].[test]),\nSEEK:([visits].[created_at] >= '2012-11-15 00:00:00.000' AND\n[visits].[created_at] < '2012-11-16 00:00:00.000') ORDERED FORWARD)\n |--Index Seek(OBJECT:([views].[views_visit_id_index]),\nSEEK:([views].[visit_id]=[raw_visits].[id]) ORDERED FORWARD)\n\nIt is clear that PG does full table scan \"Seq Scan on views\n (cost=0.00..819136.56 rows=17434456 width=8)\"\n\nDon't understand why PG doesn't use views_visit_id_index in that query but\nrather scans whole table. One explanation I have found that when resulting\ndataset constitutes ~15% of total number of rows in the table then seq scan\nis used. In this case resulting dataset is just 1.5% of total number of\nrows. So it must be something different. Any reason why it happens and how\nto fix it?\n\nPostgres 9.2\nUbuntu 12.04.1 LTS\nshared_buffers = 4GB the rest of the settings are default ones\n\nThanks\n-Alex\n\nHi everybody,I have implemented my first app using PG DB and thought for a minute(may be two) that I know something about PG but below problem totally destroyed my confidence :). Please help me to restore it. \nHere is simple join query. It runs just fine on MS SQL 2008 and uses all available indexes using even bigger overall dataset. 
select visits.id, views.id\nfrom visits join views on visits.id = views.visit_idwhere visits.created_at >= '11/15/2012' and visits.created_at < '11/16/2012' \nQuick performance statMS SQL: 1 second, 264K rowsPG: 158 seconds,  264K rowsExplain plan from both DBsPG QUERY PLAN\nHash Join  (cost=12716.17..1101820.09 rows=248494 width=8)  Hash Cond: (views.visit_id = visits.id)  ->  Seq Scan on views  (cost=0.00..819136.56 rows=17434456 width=8)\n  ->  Hash  (cost=10549.16..10549.16 rows=132081 width=4)        ->  Index Scan using visits_created_at_index on visits  (cost=0.00..10549.16 rows=132081 width=4)              Index Cond: ((created_at >= '2012-11-15 00:00:00'::timestamp without time zone) AND (created_at < '2012-11-16 00:00:00'::timestamp without time zone))\nschemaname | tablename |            indexname            | tablespace |                                         indexdef                                         ------------+-----------+---------------------------------+------------+------------------------------------------------------------------------------------------\n public     | views     | views_pkey                      |            | CREATE UNIQUE INDEX views_pkey ON views USING btree (id) public     | views     | views_visit_id_index            |            | CREATE INDEX views_visit_id_index ON views USING btree (visit_id)\nMS SQL Query plan'11/16/2012'  |--Parallelism(Gather Streams)       |--Nested Loops(Inner Join, OUTER REFERENCES:([visits].[id], [Expr1006]) OPTIMIZED WITH UNORDERED PREFETCH)\n            |--Index Seek(OBJECT:([visits].[test]), SEEK:([visits].[created_at] >= '2012-11-15 00:00:00.000' AND [visits].[created_at] < '2012-11-16 00:00:00.000') ORDERED FORWARD)            |--Index Seek(OBJECT:([views].[views_visit_id_index]), SEEK:([views].[visit_id]=[raw_visits].[id]) ORDERED FORWARD)\nIt is clear that PG does full table scan \"Seq Scan on views  (cost=0.00..819136.56 rows=17434456 width=8)\"Don't understand why PG doesn't use views_visit_id_index in that query but rather scans whole table. One explanation I have found that when resulting dataset constitutes ~15% of total number of rows in the table then seq scan is used. In this case resulting dataset is just 1.5% of total number of rows. So it must be something different. Any reason why it happens and how to fix it?\nPostgres 9.2Ubuntu 12.04.1 LTSshared_buffers = 4GB the rest of the settings are default onesThanks-Alex", "msg_date": "Thu, 3 Jan 2013 16:54:10 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Simple join doesn't use index" }, { "msg_contents": "On 01/03/2013 10:54 PM, Alex Vinnik wrote:\n> I have implemented my first app using PG DB and thought for a minute(may be\n> two) that I know something about PG but below problem totally destroyed my\n> confidence :). Please help me to restore it.\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n-- \nJeremy\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 03 Jan 2013 23:11:40 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On 01/03/2013 11:54 PM, Alex Vinnik wrote:\n> Don't understand why PG doesn't use views_visit_id_index in that query\n> but rather scans whole table. 
One explanation I have found that when\n> resulting dataset constitutes ~15% of total number of rows in the table\n> then seq scan is used. In this case resulting dataset is just 1.5% of\n> total number of rows. So it must be something different. Any reason why\n> it happens and how to fix it?\n\nBut does the query planner know the same? If you added the EXPLAIN \nANALYZE output of the query and something like:\n\n SELECT tablename AS table_name, attname AS column_name,\n null_frac, avg_width, n_distinct, correlation\n FROM pg_stats\n WHERE tablename in ('views', 'visits');\n\n.. one could possibly tell a bit more.\n\n> Postgres 9.2\n> Ubuntu 12.04.1 LTS\n> shared_buffers = 4GB the rest of the settings are default ones\n\nThere are more than just this one memory related value, that need to be \nchanged for optimal performance. E.g. effective_cache_size can have a \ndirect effect on use of nested loops. See:\n\nhttp://www.postgresql.org/docs/9.2/static/runtime-config-query.html\n\nRegards,\nStefan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 04 Jan 2013 05:33:09 +0100", "msg_from": "Stefan Andreatta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "Hi all,\n\nI have a table that has about 73mm rows in it and growing. Running \n9.0.x on a server that unfortunately is a little I/O constrained. Some \n(maybe) pertinent settings:\ndefault_statistics_target = 50\nmaintenance_work_mem = 512MB\nconstraint_exclusion = on\neffective_cache_size = 5GB\nwork_mem = 18MB\nwal_buffers = 8MB\ncheckpoint_segments = 32\nshared_buffers = 2GB\n\nThe server has 12GB RAM, 4 cores, but is shared with a big webapp \nrunning in Tomcat -- and I only have a RAID1 disk to work on. Woes me...\n\nAnyway, this table is going to continue to grow, and it's used \nfrequently (Read and Write). From what I read, this table is a \ncandidate to be partitioned for performance and scalability. I have \ntested some scripts to build the \"inherits\" tables with their \nconstraints and the trigger/function to perform the work.\n\nAm I doing the right thing by partitioning this? If so, and I can \nafford some downtime, is dumping the table via pg_dump and then loading \nit back in the best way to do this?\n\nShould I run a cluster or vacuum full after all is done?\n\nIs there a major benefit if I can upgrade to 9.2.x in some way that I \nhaven't realized?\n\nFinally, if anyone has any comments about my settings listed above that \nmight help improve performance, I thank you in advance.\n\n-AJ\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 04 Jan 2013 16:31:31 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "Partition table in 9.0.x?" }, { "msg_contents": "On Friday, January 4, 2013, AJ Weber wrote:\n\n> Hi all,\n>\n> I have a table that has about 73mm rows in it and growing.\n\n\nHow big is the table in MB? Its indexes?\n\n...\n>\n\n\n> The server has 12GB RAM, 4 cores, but is shared with a big webapp running\n> in Tomcat -- and I only have a RAID1 disk to work on. 
Woes me...\n>\n>\nBy a RAID1 disk, do you mean two disks in a RAID1 configuration, or a\nsingle RAID1 composed of an unspecified number of disks?\n\nOften spending many thousands of dollars in DBA time can save you from\nhaving to buy many hundreds of dollars in hard drives. :) On the other\nhand, often you end up having to buy the extra disks anyway afterall.\n\n\n\n\n> Anyway, this table is going to continue to grow, and it's used frequently\n> (Read and Write).\n\n\nAre all rows in the table read and written with equal vigor, or are there\nhot rows and cold rows that can be recognized based on the row's values?\n\n\n> From what I read, this table is a candidate to be partitioned for\n> performance and scalability. I have tested some scripts to build the\n> \"inherits\" tables with their constraints and the trigger/function to\n> perform the work.\n>\n> Am I doing the right thing by partitioning this?\n\n\nProbably not. Or at least, you haven't given us the information to know.\n Very broadly speaking, well-implemented partitioning makes bulk loading\nand removal operations take less IO, but makes normal operations take more\nIO, or if lucky leaves it unchanged. There are exceptions, but unless you\ncan identify a very specific reason to think you might have one of those\nexceptions, then you probably don't.\n\nDo you have a natural partitioning key? That is, is there a column (or\nexpression) which occurs as a selective component in the where clause of\nalmost all of your most io consuming SQL and DML? If so, you might benefit\nfrom partitioning on it. (But in that case, you might be able to get most\nof the benefits of partitioning, without the headaches of it, just by\nrevamping your indexes to include that column/expression as their leading\nfield).\n\nIf you don't have a good candidate partitioning key, then partitioning will\nalmost surely make things worse.\n\n If so, and I can afford some downtime, is dumping the table via pg_dump\n> and then loading it back in the best way to do this?\n>\n\nTo do efficient bulk loading into a partitioned table, you need to\nspecifically target each partition, rather than targeting with a trigger.\n That pretty much rules out pg_dump, AFAIK, unless you are going to parse\nthe dump file(s) and rewrite them.\n\n\n> Should I run a cluster or vacuum full after all is done?\n>\n\nProbably not. If a cluster after the partitioning would be beneficial,\nthere would be a pretty good chance you could do a cluster *instead* of the\npartitioning and get the same benefit.\n\nIf you do some massive deletes from the parent table as part of populating\nthe children, then a vacuum full of the parent could be useful. But if you\ndump the parent table, truncate it, and reload it as partitioned tables,\nthen vacuum full would probably not be useful.\n\nReally, you need to identify your most resource-intensive queries before\nyou can make any reasonable decisions.\n\n\n\n>\n> Is there a major benefit if I can upgrade to 9.2.x in some way that I\n> haven't realized?\n>\n\nIf you have specific queries that are misoptimized and so are generating\nmore IO than they need to, then upgrading could help. On the other hand,\nit could also make things worse, if a currently well optimized query\nbecomes worse.\n\nBut, instrumentation has improved in 9.2 from 9.0, so upgrading would make\nit easier to figure out just which queries are really bad and have the most\nopportunity for improvement. 
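One way to get that list, for example, is the contrib module pg_stat_statements. A rough sketch, assuming the module has been installed and loaded through shared_preload_libraries (which needs a server restart), and keeping in mind that the exact set of columns the view exposes varies a little between releases:

    -- postgresql.conf:
    --   shared_preload_libraries = 'pg_stat_statements'

    -- In the database (9.1+; on 9.0 run the SQL install script shipped in contrib instead):
    CREATE EXTENSION pg_stat_statements;

    -- The statements that have consumed the most cumulative time so far.
    SELECT calls, total_time, rows, query
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 10;
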
A little well informed optimization might\nobviate the need for either partitioning or more hard drives.\n\n\n> Finally, if anyone has any comments about my settings listed above that\n> might help improve performance, I thank you in advance.\n>\n\nYour default statistics target seemed low. Without knowing the nature of\nyour most resource intensive queries or how much memory tomcat is using, it\nis hard to say more.\n\nCheers,\n\nJeff\n\nOn Friday, January 4, 2013, AJ Weber wrote:Hi all,\n\nI have a table that has about 73mm rows in it and growing.  How big is the table in MB?  Its indexes?\n... \nThe server has 12GB RAM, 4 cores, but is shared with a big webapp running in Tomcat -- and I only have a RAID1 disk to work on.  Woes me...\nBy a RAID1 disk, do you mean two disks in a RAID1 configuration, or a single RAID1 composed of an unspecified number of disks?Often spending many thousands of dollars in DBA time can save you from having to buy many hundreds of dollars in hard drives. :)  On the other hand, often you end up having to buy the extra disks anyway afterall.\n \nAnyway, this table is going to continue to grow, and it's used frequently (Read and Write). Are all rows in the table read and written with equal vigor, or are there hot rows and cold rows that can be recognized based on the row's values?\n  From what I read, this table is a candidate to be partitioned for performance and scalability.  I have tested some scripts to build the \"inherits\" tables with their constraints and the trigger/function to perform the work.\n\nAm I doing the right thing by partitioning this? Probably not.  Or at least, you haven't given us the information to know.  Very broadly speaking, well-implemented partitioning makes bulk loading and removal operations take less IO, but makes normal operations take more IO,  or if lucky leaves it unchanged.  There are exceptions, but unless you can identify a very specific reason to think you might have one of those exceptions, then you probably don't.\nDo you have a natural partitioning key?  That is, is there a column (or expression) which occurs as a selective component in the where clause of almost all of your most io consuming SQL and DML?  If so, you might benefit from partitioning on it.  (But in that case, you might be able to get most of the benefits of partitioning, without the headaches of it, just by revamping your indexes to include that column/expression as their leading field).\nIf you don't have a good candidate partitioning key, then partitioning will almost surely make things worse.\n If so, and I can afford some downtime, is dumping the table via pg_dump and then loading it back in the best way to do this?To do efficient bulk loading into a partitioned table, you need to specifically target each partition, rather than targeting with a trigger.  That pretty much rules out pg_dump, AFAIK, unless you are going to parse the dump file(s) and rewrite them.\n\nShould I run a cluster or vacuum full after all is done?Probably not.  If a cluster after the partitioning would be beneficial, there would be a pretty good chance you could do a cluster *instead* of the partitioning and get the same benefit.  \nIf you do some massive deletes from the parent table as part of populating the children, then a vacuum full of the parent could be useful.  
But if you dump the parent table, truncate it, and reload it as partitioned tables, then vacuum full would probably not be useful.\nReally, you need to identify your most resource-intensive queries before you can make any reasonable decisions. \n\nIs there a major benefit if I can upgrade to 9.2.x in some way that I haven't realized?If you have specific queries that are misoptimized and so are generating more IO than they need to, then upgrading could help.  On the other hand, it could also make things worse, if a currently well optimized query becomes worse. \nBut, instrumentation has improved in 9.2 from 9.0, so upgrading would make it easier to figure out just which queries are really bad and have the most opportunity for improvement.  A little well informed optimization might obviate the need for either partitioning or more hard drives.\n\n\nFinally, if anyone has any comments about my settings listed above that might help improve performance, I thank you in advance.Your default statistics target seemed low.  Without knowing the nature of your most resource intensive queries or how much memory tomcat is using, it is hard to say more.\nCheers,Jeff", "msg_date": "Fri, 4 Jan 2013 20:03:03 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "All fair questions...\n\nThank you for your detailed response!\n\n\nOn 1/4/2013 11:03 PM, Jeff Janes wrote:\n> On Friday, January 4, 2013, AJ Weber wrote:\n>\n> Hi all,\n>\n> I have a table that has about 73mm rows in it and growing. \n>\n>\n> How big is the table in MB? Its indexes?\nNot sure on this. Will see if pgAdmin tells me.\n\n>\n> ...\n>\n> The server has 12GB RAM, 4 cores, but is shared with a big webapp\n> running in Tomcat -- and I only have a RAID1 disk to work on.\n> Woes me...\n>\n>\n> By a RAID1 disk, do you mean two disks in a RAID1 configuration, or a \n> single RAID1 composed of an unspecified number of disks?\n>\n> Often spending many thousands of dollars in DBA time can save you from \n> having to buy many hundreds of dollars in hard drives. :) On the \n> other hand, often you end up having to buy the extra disks anyway \n> afterall.\n>\nI mean I have two disks in a RAID1 configuration. The server is \ncurrently in a whitebox datacenter and I have zero control over the \nhardware, so adding disks is unfortunately out of the question. I \ncompletely understand the comment, and would love to have a larger SAN \navailable to me that I could configure...I just don't and have no way of \ngetting one anytime soon.\n\n>\n> Anyway, this table is going to continue to grow, and it's used\n> frequently (Read and Write). \n>\n>\n> Are all rows in the table read and written with equal vigor, or are \n> there hot rows and cold rows that can be recognized based on the row's \n> values?\nNo, I could probably figure out a way to setup an \"archive\" or \"older\" \nsection of the data that is updated much less frequently. Deletes are \nrare. Inserts/Updates \"yes\". Select on existing rows -- very frequent.\n\n> From what I read, this table is a candidate to be partitioned for\n> performance and scalability. I have tested some scripts to build\n> the \"inherits\" tables with their constraints and the\n> trigger/function to perform the work.\n>\n> Am I doing the right thing by partitioning this? \n>\n>\n> Probably not. Or at least, you haven't given us the information to \n> know. 
Very broadly speaking, well-implemented partitioning makes bulk \n> loading and removal operations take less IO, but makes normal \n> operations take more IO, or if lucky leaves it unchanged. There are \n> exceptions, but unless you can identify a very specific reason to \n> think you might have one of those exceptions, then you probably don't.\nI know you can't believe everything you read, but I thought I saw some \nmetrics about when a table's size exceeds some fraction of available \nRAM, or when it approaches 100mm rows, it's a big candidate for \npartitioning.\n\n>\n> Do you have a natural partitioning key? That is, is there a column \n> (or expression) which occurs as a selective component in the where \n> clause of almost all of your most io consuming SQL and DML? If so, \n> you might benefit from partitioning on it. (But in that case, you \n> might be able to get most of the benefits of partitioning, without the \n> headaches of it, just by revamping your indexes to include that \n> column/expression as their leading field).\n>\n> If you don't have a good candidate partitioning key, then partitioning \n> will almost surely make things worse.\n>\nThe table is a \"detail table\" to its master records. That is, it's like \nan order-details table where it will have a 1-n rows joined to the \nmaster (\"order\") table on the order-id. So I can partition it based on \nthe order number pretty easily (which is a bigint, btw).\n\n> If so, and I can afford some downtime, is dumping the table via\n> pg_dump and then loading it back in the best way to do this?\n>\n>\n> To do efficient bulk loading into a partitioned table, you need to \n> specifically target each partition, rather than targeting with a \n> trigger. That pretty much rules out pg_dump, AFAIK, unless you are \n> going to parse the dump file(s) and rewrite them.\n>\n>\n> Should I run a cluster or vacuum full after all is done?\n>\n>\n> Probably not. If a cluster after the partitioning would be \n> beneficial, there would be a pretty good chance you could do a cluster \n> *instead* of the partitioning and get the same benefit.\n>\nI did try clustering the table on the PK (which is actually 4 columns), \nand it appeared to help a bit. I was hoping partitioning was going to \nhelp me even more.\n\n> If you do some massive deletes from the parent table as part of \n> populating the children, then a vacuum full of the parent could be \n> useful. But if you dump the parent table, truncate it, and reload it \n> as partitioned tables, then vacuum full would probably not be useful.\n>\n> Really, you need to identify your most resource-intensive queries \n> before you can make any reasonable decisions.\n>\n>\n> Is there a major benefit if I can upgrade to 9.2.x in some way\n> that I haven't realized?\n>\n>\n> If you have specific queries that are misoptimized and so are \n> generating more IO than they need to, then upgrading could help. On \n> the other hand, it could also make things worse, if a currently well \n> optimized query becomes worse.\n>\nIs there some new feature or optimization you're thinking about with \nthis comment? If so, could you please just send me a link and/or \nfeature name and I'll google it myself?\n\n> But, instrumentation has improved in 9.2 from 9.0, so upgrading would \n> make it easier to figure out just which queries are really bad and \n> have the most opportunity for improvement. 
A little well informed \n> optimization might obviate the need for either partitioning or more \n> hard drives.\n>\nThis is interesting too. I obviously would like the best available \noptions to tune the database and the application. Is this detailed in \nthe release notes somewhere, and what tools could I use to take \nadvantage of this? (Are there new/improved details included in the \nEXPLAIN statement or something?)\n>\n>\n> Finally, if anyone has any comments about my settings listed above\n> that might help improve performance, I thank you in advance.\n>\n>\n> Your default statistics target seemed low. Without knowing the nature \n> of your most resource intensive queries or how much memory tomcat is \n> using, it is hard to say more.\nTomcat uses 4G of RAM, plus we have nginx in front using a little and \nsome other, smaller services running on the server in addition to the \nusual Linux gamut of processes.\n\n>\n> Cheers,\n>\n> Jeff\n\n\n\n\n\n\n All fair questions...\n\n Thank you for your detailed response!\n\n\n On 1/4/2013 11:03 PM, Jeff Janes wrote:\n On Friday, January 4, 2013, AJ Weber wrote:\nHi all,\n\n I have a table that has about 73mm rows in it and growing.  \n\n\nHow big is the table in MB?  Its indexes?\n\n Not sure on this.  Will see if pgAdmin tells me.\n\n\n\n\n\n ...\n\n \n\n The server has 12GB RAM, 4 cores, but is shared with a big\n webapp running in Tomcat -- and I only have a RAID1 disk to work\n on.  Woes me...\n\n\n\n\nBy a RAID1 disk, do you mean two disks in a RAID1\n configuration, or a single RAID1 composed of an unspecified\n number of disks?\n\n\nOften spending many thousands of dollars in DBA time can save\n you from having to buy many hundreds of dollars in hard drives.\n :)  On the other hand, often you end up having to buy the extra\n disks anyway afterall.\n\n\n\n I mean I have two disks in a RAID1 configuration.  The server is\n currently in a whitebox datacenter and I have zero control over the\n hardware, so adding disks is unfortunately out of the question.  I\n completely understand the comment, and would love to have a larger\n SAN available to me that I could configure...I just don't and have\n no way of getting one anytime soon.\n\n\n\n\n \n\n Anyway, this table is going to continue to grow, and it's used\n frequently (Read and Write). \n\n\nAre all rows in the table read and written with equal vigor,\n or are there hot rows and cold rows that can be recognized based\n on the row's values?\n\n No, I could probably figure out a way to setup an \"archive\" or\n \"older\" section of the data that is updated much less frequently. \n Deletes are rare.  Inserts/Updates \"yes\".  Select on existing rows\n -- very frequent.\n\n\n \n From what I\n read, this table is a candidate to be partitioned for\n performance and scalability.  I have tested some scripts to\n build the \"inherits\" tables with their constraints and the\n trigger/function to perform the work.\n\n Am I doing the right thing by partitioning this? \n\n\nProbably not.  Or at least, you haven't given us the\n information to know.  Very broadly speaking, well-implemented\n partitioning makes bulk loading and removal operations take less\n IO, but makes normal operations take more IO,  or if lucky\n leaves it unchanged.  
There are exceptions, but unless you can\n identify a very specific reason to think you might have one of\n those exceptions, then you probably don't.\n\n I know you can't believe everything you read, but I thought I saw\n some metrics about when a table's size exceeds some fraction of\n available RAM, or when it approaches 100mm rows, it's a big\n candidate for partitioning.\n\n\n\n\nDo you have a natural partitioning key?  That is, is there a\n column (or expression) which occurs as a selective component in\n the where clause of almost all of your most io consuming SQL and\n DML?  If so, you might benefit from partitioning on it.  (But in\n that case, you might be able to get most of the benefits of\n partitioning, without the headaches of it, just by revamping\n your indexes to include that column/expression as their leading\n field).\n\n\nIf you don't have a good candidate partitioning key, then\n partitioning will almost surely make things worse.\n\n\n\n The table is a \"detail table\" to its master records.  That is, it's\n like an order-details table where it will have a 1-n rows joined to\n the master (\"order\") table on the order-id.  So I can partition it\n based on the order number pretty easily (which is a bigint, btw).\n\n\n\n  If so, and I can afford some downtime, is dumping the table via\n pg_dump and then loading it back in the best way to do this?\n\n\n\nTo do efficient bulk loading into a partitioned table, you\n need to specifically target each partition, rather than\n targeting with a trigger.  That pretty much rules out pg_dump,\n AFAIK, unless you are going to parse the dump file(s) and\n rewrite them.\n\n\n\n Should I run a cluster or vacuum full after all is done?\n\n\n\nProbably not.  If a cluster after the partitioning would be\n beneficial, there would be a pretty good chance you could do a\n cluster *instead* of the partitioning and get the same benefit.\n  \n\n\n\n I did try clustering the table on the PK (which is actually 4\n columns), and it appeared to help a bit.  I was hoping partitioning\n was going to help me even more.\n\n\nIf you do some massive deletes from the parent table as part\n of populating the children, then a vacuum full of the parent\n could be useful.  But if you dump the parent table, truncate it,\n and reload it as partitioned tables, then vacuum full would\n probably not be useful.\n\n\nReally, you need to identify your most resource-intensive\n queries before you can make any reasonable decisions.\n\n\n \n\n\n Is there a major benefit if I can upgrade to 9.2.x in some way\n that I haven't realized?\n\n\n\nIf you have specific queries that are misoptimized and so are\n generating more IO than they need to, then upgrading could help.\n  On the other hand, it could also make things worse, if a\n currently well optimized query becomes worse. \n\n\n\n Is there some new feature or optimization you're thinking about with\n this comment?  If so, could you please just send me a link and/or\n feature name and I'll google it myself?\n\n\nBut, instrumentation has improved in 9.2 from 9.0, so\n upgrading would make it easier to figure out just which queries\n are really bad and have the most opportunity for improvement.  A\n little well informed optimization might obviate the need for\n either partitioning or more hard drives.\n\n\n\n This is interesting too.  I obviously would like the best available\n options to tune the database and the application.  
Is this detailed\n in the release notes somewhere, and what tools could I use to take\n advantage of this?  (Are there new/improved details included in the\n EXPLAIN statement or something?)\n\n\n\n Finally, if anyone has any comments about my settings listed\n above that might help improve performance, I thank you in\n advance.\n\n\n\nYour default statistics target seemed low.  Without knowing\n the nature of your most resource intensive queries or how much\n memory tomcat is using, it is hard to say more.\n\n Tomcat uses 4G of RAM, plus we have nginx in front using a little\n and some other, smaller services running on the server in addition\n to the usual Linux gamut of processes.\n\n\n\n\nCheers,\n\n\nJeff", "msg_date": "Sun, 06 Jan 2013 10:27:01 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "On Thu, Jan 3, 2013 at 4:54 PM, Alex Vinnik <[email protected]> wrote:\n> Don't understand why PG doesn't use views_visit_id_index in that query but\n> rather scans whole table. One explanation I have found that when resulting\n> dataset constitutes ~15% of total number of rows in the table then seq scan\n> is used. In this case resulting dataset is just 1.5% of total number of\n> rows. So it must be something different. Any reason why it happens and how\n> to fix it?\n>\n> Postgres 9.2\n> Ubuntu 12.04.1 LTS\n> shared_buffers = 4GB the rest of the settings are default ones\n<snip>\n\nIt happens because you lied to the database...heh. In particular, the\n'effective_cache_size' setting which defaults to 128mb. That probably\nneeds to be much, much larger. Basically postgres is figuring the\ncache is much smaller than the data and starts to favor sequential\nplans once you hit a certain threshold. If you had a server with only\nsay 256mb ram, it probably *would* be faster.\n\nSQL server probably uses all kinds of crazy native unportable kernel\ncalls to avoid having to make a similar .conf setting. Or maybe it\njust assumes infinite cache size...dunno.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 7 Jan 2013 18:13:26 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Sunday, January 6, 2013, AJ Weber wrote:\n\n> All fair questions...\n>\n> Thank you for your detailed response!\n>\n>\n> On 1/4/2013 11:03 PM, Jeff Janes wrote:\n>\n> On Friday, January 4, 2013, AJ Weber wrote:\n>\n>> Hi all,\n>>\n>> I have a table that has about 73mm rows in it and growing.\n>\n>\n> How big is the table in MB? Its indexes?\n>\n> Not sure on this. Will see if pgAdmin tells me.\n>\n\nIt probably does, but from psql command line, you can do \\d+ and \\di+\n\n\n>\n>> Anyway, this table is going to continue to grow, and it's used frequently\n>> (Read and Write).\n>\n>\n> Are all rows in the table read and written with equal vigor, or are\n> there hot rows and cold rows that can be recognized based on the row's\n> values?\n>\n> No, I could probably figure out a way to setup an \"archive\" or \"older\"\n> section of the data that is updated much less frequently.\n>\n\nSo the data that deliniates this does not exist in that table, but it does\nexist someplace, either just in your head, or in the column of a higher\nlevel table?\n\n\n> Deletes are rare. Inserts/Updates \"yes\". 
Select on existing rows -- very\n> frequent.\n>\n\n\nIf you have little control over your storage and are already IO bound, and\nthe tables are growing rapidly, you may need to rethink that \"deletes are\nrare\" bit. So the inserts and updates do target a hot part, while the\nselects are evenly spread?\n\nIn that case, it is very important to know if the slow part are the\nselects, or the insert and deletes. If the selects are slow, and the hot\nrows for selects can't be gathered together into a hot partition, then\nafter clustering they will still be slow as the disk will still have to\nseek all over the place (massive data-mining type selects might be an\nexception to that, but I wouldn't count on it).\n\n\n>\n>\n>\n>> From what I read, this table is a candidate to be partitioned for\n>> performance and scalability. I have tested some scripts to build the\n>> \"inherits\" tables with their constraints and the trigger/function to\n>> perform the work.\n>>\n>> Am I doing the right thing by partitioning this?\n>\n>\n> Probably not. Or at least, you haven't given us the information to\n> know. Very broadly speaking, well-implemented partitioning makes bulk\n> loading and removal operations take less IO, but makes normal operations\n> take more IO, or if lucky leaves it unchanged. There are exceptions, but\n> unless you can identify a very specific reason to think you might have one\n> of those exceptions, then you probably don't.\n>\n> I know you can't believe everything you read, but I thought I saw some\n> metrics about when a table's size exceeds some fraction of available RAM,\n> or when it approaches 100mm rows, it's a big candidate for partitioning.\n>\n\nI think it is a matter of semantics. A small table is poor candidate for\npartitioning even if it has an excellent key to use for partitioning. A\nlarge table could be a good candidate up until you realize it doesn't have\na good key to use, at which point it stops being a good candidate (in my\nopinion).\n\n\n\n>\n>> Should I run a cluster or vacuum full after all is done?\n>>\n>\n> Probably not. If a cluster after the partitioning would be beneficial,\n> there would be a pretty good chance you could do a cluster *instead* of the\n> partitioning and get the same benefit.\n>\n> I did try clustering the table on the PK (which is actually 4 columns),\n> and it appeared to help a bit. I was hoping partitioning was going to help\n> me even more.\n>\n\nWas the order_num (from the parent table) the leading field of the 4 column\nPK? If not, you might want to reorder the PK so that it is the leading\nfield and cluster again. Or if reordering the PK columns is not\nconvenient, make a new index on the order_num and cluster on that (perhaps\ndropping the index after the cluster, if it no longer serves a purpose)\n\n\n>\n>> Is there a major benefit if I can upgrade to 9.2.x in some way that I\n>> haven't realized?\n>>\n>\n> If you have specific queries that are misoptimized and so are generating\n> more IO than they need to, then upgrading could help. On the other hand,\n> it could also make things worse, if a currently well optimized query\n> becomes worse.\n>\n> Is there some new feature or optimization you're thinking about with\n> this comment? 
If so, could you please just send me a link and/or feature\n> name and I'll google it myself?\n>\n\n\nThe main things I am thinking of are the \"fudge factor\" for large indexes,\nwhich is currently being discussed in both performance and hackers mailing\nlists, which was made overly aggressive in 9.2 and so can make it choose\nworse plans, and the \"allow the planner to generate custom plans for\nspecific parameter values even when using prepared statements\" from the 9.2\nrelease notes, which can allow it to choose better plans. But, surely\nthere are other changes as well, which amount to corner cases and so are\nhard to discuss in the abstract. Which is why instrumentation is\nimportant. There isn't much point in worrying about possible changed plans\nuntil you've identified the queries that are important to worry about.\n\n\n>\n> But, instrumentation has improved in 9.2 from 9.0, so upgrading would\n> make it easier to figure out just which queries are really bad and have the\n> most opportunity for improvement. A little well informed optimization\n> might obviate the need for either partitioning or more hard drives.\n>\n> This is interesting too. I obviously would like the best available\n> options to tune the database and the application. Is this detailed in the\n> release notes somewhere, and what tools could I use to take advantage of\n> this? (Are there new/improved details included in the EXPLAIN statement or\n> something?)\n>\n\ntrack_io_timing is new, and it exposes new data into EXPLAIN (ANALYZE,\nBUFFERS) as well as into other places. You might not want to turn this on\npermanently, as it can affect performance (but you can test with\npg_test_timing <https://mail.google.com/mail/mu/mp/635/pgtesttiming.html>as\noutlined in the docs to see how large probable affect it). Also,\nEXPLAIN displays the number row removed by filters, which may or may not be\nuseful to you.\n\nMost exciting I think are the improvements to the contrib module\npg_stat_statements. That would be my first recourse, to find out which of\nyour statements are taking the most time (and/or IO). I try to install and\nconfigure this for all of my databases now as a matter of course.\n\nSee the 9.2 release notes (with links therein to the rest of the\ndocumentation) for discussion of these.\n\nCheers,\n\nJeff\n\n>\n\nOn Sunday, January 6, 2013, AJ Weber wrote:\n\n All fair questions...\n\n Thank you for your detailed response!\n\n\n On 1/4/2013 11:03 PM, Jeff Janes wrote:\n On Friday, January 4, 2013, AJ Weber wrote:\nHi all,\n\n I have a table that has about 73mm rows in it and growing.  \n\n\nHow big is the table in MB?  Its indexes?\n\n Not sure on this.  Will see if pgAdmin tells me.It probably does, but from psql command line, you can do \\d+ and \\di+\n \n\n Anyway, this table is going to continue to grow, and it's used\n frequently (Read and Write). \n\n\nAre all rows in the table read and written with equal vigor,\n or are there hot rows and cold rows that can be recognized based\n on the row's values?\n\n No, I could probably figure out a way to setup an \"archive\" or\n \"older\" section of the data that is updated much less frequently. So the data that deliniates this does not exist in that table, but it does exist someplace, either just in your head, or in the column of a higher level table?\n \n Deletes are rare.  Inserts/Updates \"yes\".  
Select on existing rows\n -- very frequent.If you have little control over your storage and are already IO bound, and the tables are growing rapidly, you may need to rethink that \"deletes are rare\" bit.  So the inserts and updates do target a hot part, while the selects are evenly spread?\nIn that case, it is very important to know if the slow part are the selects, or the insert and deletes.  If the selects are slow, and the hot rows for selects can't be gathered together into a hot partition, then after clustering they will still be slow as the disk will still have to seek all over the place (massive data-mining type selects might be an exception to that, but I wouldn't count on it).\n \n\n\n \n From what I\n read, this table is a candidate to be partitioned for\n performance and scalability.  I have tested some scripts to\n build the \"inherits\" tables with their constraints and the\n trigger/function to perform the work.\n\n Am I doing the right thing by partitioning this? \n\n\nProbably not.  Or at least, you haven't given us the\n information to know.  Very broadly speaking, well-implemented\n partitioning makes bulk loading and removal operations take less\n IO, but makes normal operations take more IO,  or if lucky\n leaves it unchanged.  There are exceptions, but unless you can\n identify a very specific reason to think you might have one of\n those exceptions, then you probably don't.\n\n I know you can't believe everything you read, but I thought I saw\n some metrics about when a table's size exceeds some fraction of\n available RAM, or when it approaches 100mm rows, it's a big\n candidate for partitioning.I think it is a matter of semantics. A small table is poor candidate for partitioning even if it has an excellent key to use for partitioning.  A large table could be a good candidate up until you realize it doesn't have a good key to use, at which point it stops being a good candidate (in my opinion).\n\n\n\n Should I run a cluster or vacuum full after all is done?\n\n\n\nProbably not.  If a cluster after the partitioning would be\n beneficial, there would be a pretty good chance you could do a\n cluster *instead* of the partitioning and get the same benefit.\n  \n\n\n\n I did try clustering the table on the PK (which is actually 4\n columns), and it appeared to help a bit.  I was hoping partitioning\n was going to help me even more.Was the order_num (from the parent table) the leading field of the 4 column PK?  If not, you might want to reorder the PK so that it is the leading field and cluster again.  Or if reordering the PK columns is not convenient, make a new index on the order_num and cluster on that (perhaps dropping the index after the cluster, if it no longer serves a purpose)\n  \n\n\n Is there a major benefit if I can upgrade to 9.2.x in some way\n that I haven't realized?\n\n\n\nIf you have specific queries that are misoptimized and so are\n generating more IO than they need to, then upgrading could help.\n  On the other hand, it could also make things worse, if a\n currently well optimized query becomes worse. \n\n\n\n Is there some new feature or optimization you're thinking about with\n this comment?  
If so, could you please just send me a link and/or\n feature name and I'll google it myself?The main things I am thinking of are the \"fudge factor\" for large indexes, which is currently being discussed in both performance and hackers mailing lists, which was made overly aggressive in 9.2 and so can make it choose worse plans, and the \"allow the planner to generate custom plans for specific parameter values even when using prepared statements\" from the 9.2 release notes, which can allow it to choose better plans.  But, surely there are other changes as well, which amount to corner cases and so are hard to discuss in the abstract.  Which is why instrumentation is important.  There isn't much point in worrying about possible changed plans until you've identified the queries that are important to worry about.\n \n\n\nBut, instrumentation has improved in 9.2 from 9.0, so\n upgrading would make it easier to figure out just which queries\n are really bad and have the most opportunity for improvement.  A\n little well informed optimization might obviate the need for\n either partitioning or more hard drives.\n\n\n\n This is interesting too.  I obviously would like the best available\n options to tune the database and the application.  Is this detailed\n in the release notes somewhere, and what tools could I use to take\n advantage of this?  (Are there new/improved details included in the\n EXPLAIN statement or something?)track_io_timing is new, and it exposes new data into EXPLAIN (ANALYZE, BUFFERS) as well as into other places. You might not want to turn this on permanently, as it can affect performance (but you can test with pg_test_timing as outlined in the docs to see how large probable affect it).  Also, EXPLAIN displays the number row removed by filters, which may or may not be useful to you.  \nMost exciting I think are the improvements to the contrib module pg_stat_statements.  That would be my first recourse, to find out which of your statements are taking the most time (and/or IO).  I try to install and configure this for all of my databases now as a matter of course.  \nSee the 9.2 release notes (with links therein to the rest of the documentation) for discussion of these. Cheers,Jeff", "msg_date": "Tue, 8 Jan 2013 07:26:06 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "\n>\n> It probably does, but from psql command line, you can do \\d+ and \\di+\n\\d+ doesn't appear to display any size information.\n\n>\n> If you have little control over your storage and are already IO bound, \n> and the tables are growing rapidly, you may need to rethink that \n> \"deletes are rare\" bit. So the inserts and updates do target a hot \n> part, while the selects are evenly spread?\n>\n> In that case, it is very important to know if the slow part are the \n> selects, or the insert and deletes. If the selects are slow, and the \n> hot rows for selects can't be gathered together into a hot partition, \n> then after clustering they will still be slow as the disk will still \n> have to seek all over the place (massive data-mining type selects \n> might be an exception to that, but I wouldn't count on it).\nSince order_num is sequential, I could partition on it in broad \n(sequential) ranges. That would put all recent/new rows in one \ntable-partition that would be a fraction of the size of the overall \n(unpartitioned) table. 
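Roughly what I have in mind is the usual 9.0 inheritance-plus-trigger setup, just keyed on broad order_num ranges. A sketch only -- the table/column names and the range boundary below are made up, not our real schema:

    -- Parent stays empty; the children hold the rows.
    CREATE TABLE line_item (
        order_num  bigint  NOT NULL,
        line_no    integer NOT NULL,
        item_data  text
    );

    -- "Older" orders, rarely written.
    CREATE TABLE line_item_hist (
        CHECK (order_num < 50000000)
    ) INHERITS (line_item);

    -- "Recent" orders, where most inserts/updates land.
    CREATE TABLE line_item_cur (
        CHECK (order_num >= 50000000)
    ) INHERITS (line_item);

    CREATE INDEX line_item_hist_order_idx ON line_item_hist (order_num, line_no);
    CREATE INDEX line_item_cur_order_idx  ON line_item_cur  (order_num, line_no);

    -- Route inserts on the parent to the right child.
    CREATE OR REPLACE FUNCTION line_item_insert() RETURNS trigger AS $$
    BEGIN
        IF NEW.order_num < 50000000 THEN
            INSERT INTO line_item_hist VALUES (NEW.*);
        ELSE
            INSERT INTO line_item_cur VALUES (NEW.*);
        END IF;
        RETURN NULL;  -- cancel the insert into the empty parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER line_item_insert_trg
        BEFORE INSERT ON line_item
        FOR EACH ROW EXECUTE PROCEDURE line_item_insert();

And since the checks are plain ranges rather than mod(), constraint exclusion should be able to skip the cold partition for any query that filters on order_num.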
I guess that would require manual maintenance \nover-time (to switch to another, new partition as each grows).\n\n>\n> I think it is a matter of semantics. A small table is poor candidate \n> for partitioning even if it has an excellent key to use for \n> partitioning. A large table could be a good candidate up until you \n> realize it doesn't have a good key to use, at which point it stops \n> being a good candidate (in my opinion).\n>\nMy first idea to evenly-partition the table was to use the order_num and \ndo a \"mod\" on it with the number of tables I wanted to use. That would \nyield a partition-table number of 0-mod, and all rows for the same order \nwould stay within the same partition-table. However, you're right in \nthinking that a search for orders could -- really WOULD -- require \nretrieving details from multiple partitions, probably increasing IO. So \nmaybe the sequential partitioning (if at all) is better, just more \nmaintenance down-the-road.\n>\n> Was the order_num (from the parent table) the leading field of the 4 \n> column PK? If not, you might want to reorder the PK so that it is the \n> leading field and cluster again. Or if reordering the PK columns is \n> not convenient, make a new index on the order_num and cluster on that \n> (perhaps dropping the index after the cluster, if it no longer serves \n> a purpose)\n>\nYes, the order_num is the first column in the PK, and our main browse \nqueries use, at a minimum, the first 2-3 columns in that PK in their \nwhere-clause.\n\nMany thanks again for all the input!\n-AJ\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 11:45:49 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "On Tue, Jan 8, 2013 at 8:45 AM, AJ Weber <[email protected]> wrote:\n>\n>>\n>> It probably does, but from psql command line, you can do \\d+ and \\di+\n>\n> \\d+ doesn't appear to display any size information.\n\nIt does if you use it without an argument, to display all the tables\nin the search path:\n\njjanes=# \\d+\n List of relations\n Schema | Name | Type | Owner | Size | Description\n--------+------------------+-------+--------+---------+-------------\n public | pgbench_accounts | table | jjanes | 128 MB |\n public | pgbench_branches | table | jjanes | 40 kB |\n public | pgbench_history | table | jjanes | 0 bytes |\n public | pgbench_tellers | table | jjanes | 40 kB |\n(4 rows)\n\nIt rather annoys me that you actually get less information (no size,\nno owner) when you use \\d+ on a named table. I don't know if there is\na reason for that feature, or if it was just an oversight.\n\n\n\n\n\n>\n>\n>>\n>> If you have little control over your storage and are already IO bound, and\n>> the tables are growing rapidly, you may need to rethink that \"deletes are\n>> rare\" bit. So the inserts and updates do target a hot part, while the\n>> selects are evenly spread?\n>>\n>> In that case, it is very important to know if the slow part are the\n>> selects, or the insert and deletes. 
If the selects are slow, and the hot\n>> rows for selects can't be gathered together into a hot partition, then after\n>> clustering they will still be slow as the disk will still have to seek all\n>> over the place (massive data-mining type selects might be an exception to\n>> that, but I wouldn't count on it).\n>\n> Since order_num is sequential, I could partition on it in broad (sequential)\n> ranges. That would put all recent/new rows in one table-partition that\n> would be a fraction of the size of the overall (unpartitioned) table. I\n> guess that would require manual maintenance over-time (to switch to another,\n> new partition as each grows).\n\nYep. If your selects are concentrated in those recent/new, this could\nbe very useful. But, if your selects are not concentrated on the\nrecent/new rows, the benefit would be small.\n\n>\n>\n>>\n>> I think it is a matter of semantics. A small table is poor candidate for\n>> partitioning even if it has an excellent key to use for partitioning. A\n>> large table could be a good candidate up until you realize it doesn't have a\n>> good key to use, at which point it stops being a good candidate (in my\n>> opinion).\n>>\n> My first idea to evenly-partition the table was to use the order_num and do\n> a \"mod\" on it with the number of tables I wanted to use. That would yield a\n> partition-table number of 0-mod,\n\nThe current constraint exclusion code is quite simple-minded and\ndoesn't know how to make use of check constraints that use the mod\nfunction, so the indexes of all partitions would have to be searched\nfor each order_num-driven query, even though we know the data could\nonly exist in one of them. The constraint exclusion codes does\nunderstand check constraints that involve ranges.\n\nThere could still be some benefit as the table data would be\nconcentrated, even if the index data is not.\n\n> and all rows for the same order would stay\n> within the same partition-table.\n\nBut usually a given order_num would only be of interest for a fraction\nof a second before moving on to some other order_num of interest, so\nby the time the relevant partition become fully cached, it would no\nlonger be hot. Or, if the partitions were small enough, you could\nassume that all rows would be dragged into memory when the first one\nwas requested because they lay so close to each other. But it is not\nfeasible to have a large enough number of partitions to make that\nhappen. But if the table is clustered, this is exactly what you would\nget--the trouble would be keeping it clustered. If most of the\nline-items are inserted at the same time as each other, they probably\nshould be fairly well clustered to start with.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 8 Jan 2013 09:21:42 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" 
}, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Tue, Jan 8, 2013 at 8:45 AM, AJ Weber <[email protected]> wrote:\n>> \n>> \\d+ doesn't appear to display any size information.\n\n> It does if you use it without an argument, to display all the tables\n> in the search path:\n\n> jjanes=# \\d+\n> List of relations\n> Schema | Name | Type | Owner | Size | Description\n> --------+------------------+-------+--------+---------+-------------\n> public | pgbench_accounts | table | jjanes | 128 MB |\n> public | pgbench_branches | table | jjanes | 40 kB |\n> public | pgbench_history | table | jjanes | 0 bytes |\n> public | pgbench_tellers | table | jjanes | 40 kB |\n> (4 rows)\n\n> It rather annoys me that you actually get less information (no size,\n> no owner) when you use \\d+ on a named table. I don't know if there is\n> a reason for that feature, or if it was just an oversight.\n\nThis is actually an abbreviation for \\dtisv+, which is a completely\ndifferent command from \"\\d table\". You can use something like\n\"\\dt+ table-pattern\" to get a display of the above form for a subset\nof tables. I agree it ain't too consistent.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 12:51:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "\n> It does if you use it without an argument, to display all the tables\n> in the search path:\n>\n> jjanes=# \\d+\n> List of relations\n> Schema | Name | Type | Owner | Size | Description\n> --------+------------------+-------+--------+---------+-------------\n> public | pgbench_accounts | table | jjanes | 128 MB |\n> public | pgbench_branches | table | jjanes | 40 kB |\n> public | pgbench_history | table | jjanes | 0 bytes |\n> public | pgbench_tellers | table | jjanes | 40 kB |\n> (4 rows)\n>\n> It rather annoys me that you actually get less information (no size,\n> no owner) when you use \\d+ on a named table. I don't know if there is\n> a reason for that feature, or if it was just an oversight.\nThat is rather peculiar. Sorry for that.\nTable in question is 9284MB\n(Parent table is 621MB)\n\n>\n> The current constraint exclusion code is quite simple-minded and\n> doesn't know how to make use of check constraints that use the mod\n> function, so the indexes of all partitions would have to be searched\n> for each order_num-driven query, even though we know the data could\n> only exist in one of them. The constraint exclusion codes does\n> understand check constraints that involve ranges.\nHmm. That's a bit of a limitation I didn't know about. I assume it \ndoesn't understand the percent (mod operator) just the same as not \nunderstanding the MOD() function? Either way, I guess this strategy \ndoes not pan-out.\n> There could still be some benefit as the table data would be\n> concentrated, even if the index data is not.\nI'm reaching way, way back in my head, but I think _some_ RDBMS I worked \nwith previously had a way to \"cluster\" the rows around a single one of \nthe indexes on the table, thus putting the index and the row-data \n\"together\" and reducing the number of IO's to retrieve the row if that \nindex was used. 
Am I understanding that PG's \"cluster\" is strictly to \ngroup like rows together logically -- table data only, not to coordinate \nthe table row with the index upon which you clustered them?\n\n>\n>> and all rows for the same order would stay\n>> within the same partition-table.\n> But usually a given order_num would only be of interest for a fraction\n> of a second before moving on to some other order_num of interest, so\n> by the time the relevant partition become fully cached, it would no\n> longer be hot. Or, if the partitions were small enough, you could\n> assume that all rows would be dragged into memory when the first one\n> was requested because they lay so close to each other. But it is not\n> feasible to have a large enough number of partitions to make that\n> happen. But if the table is clustered, this is exactly what you would\n> get--the trouble would be keeping it clustered. If most of the\n> line-items are inserted at the same time as each other, they probably\n> should be fairly well clustered to start with.\nDoes decreasing the fill to like 90 help keep it clustered in-between \ntimes that I could shutdown the app and perform a (re-) cluster on the \noverall table? Problem is, with a table that size, and the hardware I'm \n\"blessed with\", the cluster takes quite a bit of time. :(\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 08 Jan 2013 13:04:55 -0500", "msg_from": "AJ Weber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "On Thursday, January 3, 2013, Alex Vinnik wrote:\n\n> Hi everybody,\n>\n> I have implemented my first app using PG DB and thought for a minute(may\n> be two) that I know something about PG but below\n> problem totally destroyed my confidence :). Please help me to restore it.\n>\n> Here is simple join query. It runs just fine on MS SQL 2008 and uses\n> all available indexes using even bigger overall dataset.\n>\n> select visits.id, views.id\n> from visits join views on visits.id = views.visit_id\n> where visits.created_at >= '11/15/2012' and visits.created_at <\n> '11/16/2012'\n>\n> Quick performance stat\n>\n> MS SQL: 1 second, 264K rows\n>\n\nIf it returns 264K rows in 1 second, then it must have all data in memory.\n Which prompts a few questions:\n\nIs *all* data in memory, or is it just the data needed for this particular\nquery because you already ran it recently with the same date range?\n\n\n\n> PG: 158 seconds, 264K rows\n>\n\nDoes the PG machine have enough memory to hold all the data in RAM? If so,\ndoes it actually have all the data in RAM? That is, is the cache already\nwarm? Starting from scratch it can take a long time for the cache to warm\nup naturally. And finally, if all the data is in RAM, does PG know this?\n\nFor the last question, the answer is \"no\", since you are using default\nsettings. You need to lower random_page_cost and probably also\nseq_page_cost in order to trick PG to think the data is in RAM. Of course\nif you do this when the data is in fact not in RAM, the result could be\ncatastrophically bad plans. 
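If you do experiment with that, doing it per-session keeps the blast radius small. A sketch only -- the numbers are placeholders, not recommendations (effective_cache_size should reflect shared_buffers plus however much of the data the OS is actually caching on your box):

    -- Session-only settings; nothing is written to postgresql.conf.
    SET effective_cache_size = '12GB';
    SET random_page_cost = 1.5;   -- closer to seq_page_cost when reads rarely hit disk
    SET seq_page_cost = 1.0;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT visits.id, views.id
    FROM visits
    JOIN views ON visits.id = views.visit_id
    WHERE visits.created_at >= '2012-11-15'
      AND visits.created_at <  '2012-11-16';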
(And when I tried to replicate your situation\non anemic hardware, indeed the hash join often was faster than the nested\nloop on both indexes.)\n\n\n\n\n>\n> Explain plan from both DBs\n>\n> PG QUERY PLAN\n> Hash Join (cost=12716.17..1101820.09 rows=248494 width=8)\n> Hash Cond: (views.visit_id = visits.id)\n> -> Seq Scan on views (cost=0.00..819136.56 rows=17434456 width=8)\n>\n\nThis cost estimate is probably due mostly to seg_page_cost and\ncpu_tuple_cost, which at their defaults means the table has 645,000 blocks\n(819136 - 17434456/100) blocks and each block has ~30 rows.\n\nBut you are returning 248,494 rows, or roughly 1 / 2.5 of a row per block.\n Let's say you need to fetch 200,000 blocks (in random order) to get those\nrows. Since at default settings fetching 200,000 random blocks is\nconsidered as expensive as fetching 800,000 sequential blocks, the index\nscan you want already looks more expensive than the sequential scan. But,\n if you want to use the index scan, you also have to fetch the index\nblocks, which a sequential scan does not need to do. There are probably\nabout 50,000 index blocks, but each one has to be fetched about 5 times\n(248,494/50,000). Because your effective_cache_size is so low, PG assumes\nthe next time it needs to fetch the same block, it will no longer be in\nmemory and so needs to be fetched again at full random_page_cost.\n\n\n\n> It is clear that PG does full table scan \"Seq Scan on views\n> (cost=0.00..819136.56 rows=17434456 width=8)\"\n>\n> Don't understand why PG doesn't use views_visit_id_index in that query but\n> rather scans whole table. One explanation I have found that when resulting\n> dataset constitutes ~15% of total number of rows in the table then seq scan\n> is used.\n>\n\nI don't know where you found that rule of thumb, but it would probably more\naccurate if it was given in in terms of the percentage of the table's\n*blocks* scanned, rather than *rows*.\n\n\n\n\n\n> In this case resulting dataset is just 1.5% of total number of rows.\n>\n\nSince there are about 30 rows per block, scanning 1.5% of the rows means\nscanning somewhat less than 45% of the blocks, assuming the rows are\nrandomly distributed over the blocks. And they are scanned in a less\nefficient way.\n\n\n> Postgres 9.2\n>\n\nYou are probably getting hit hard by the overly-large \"fudge factor\"\npenalty for scans of large indexes, of much discussion recently in regards\nto 9.2.\n\n\n> Ubuntu 12.04.1 LTS\n> shared_buffers = 4GB the rest of the settings are default ones\n>\n\n\nThe default effective_cache_size is almost certainly wrong, and if the\nanalogy to MSSQL to is correct, then random_page_cost almost certainly is\nas well.\n\nAnother poster referred you to the wiki page for suggestion on how to\nreport slow queries, particularly using EXPLAIN (analyze, buffers) rather\nthan merely EXPLAIN. In this case, I would also try setting\nenable_hashjoin=off and enable_mergejoin=off in the session, in order to\nforce the planner to use the plan you think you want, so we can see what PG\nthinks of that one.\n\nCheers,\n\nJeff\n\n>\n\nOn Thursday, January 3, 2013, Alex Vinnik wrote:Hi everybody,\nI have implemented my first app using PG DB and thought for a minute(may be two) that I know something about PG but below problem totally destroyed my confidence :). Please help me to restore it. \nHere is simple join query. It runs just fine on MS SQL 2008 and uses all available indexes using even bigger overall dataset. 
select visits.id, views.id\nfrom visits join views on visits.id = views.visit_idwhere visits.created_at >= '11/15/2012' and visits.created_at < '11/16/2012' \n\nQuick performance statMS SQL: 1 second, 264K rowsIf it returns 264K rows in 1 second, then it must have all data in memory.  Which prompts a few questions:\nIs *all* data in memory, or is it just the data needed for this particular query because you already ran it recently with the same date range? \nPG: 158 seconds,  264K rowsDoes the PG machine have enough memory to hold all the data in RAM?  If so, does it actually have all the data in RAM? That is, is the cache already warm?  Starting from scratch it can take a long time for the cache to warm up naturally.  And finally, if all the data is in RAM, does PG know this?  \nFor the last question, the answer is \"no\", since you are using default settings.  You need to lower random_page_cost and probably also seq_page_cost in order to trick PG to think the data is in RAM.  Of course if you do this when the data is in fact not in RAM, the result could be catastrophically bad plans.  (And when I tried to replicate your situation on anemic hardware, indeed the hash join often was faster than the nested loop on both indexes.)\n Explain plan from both DBs\nPG QUERY PLAN\nHash Join  (cost=12716.17..1101820.09 rows=248494 width=8)  Hash Cond: (views.visit_id = visits.id)  ->  Seq Scan on views  (cost=0.00..819136.56 rows=17434456 width=8)\nThis cost estimate is probably due mostly to seg_page_cost and cpu_tuple_cost, which at their defaults means the table has 645,000 blocks (819136 - 17434456/100) blocks and each block has ~30 rows.\nBut you are returning 248,494 rows, or roughly 1 / 2.5 of a row per block.  Let's say you need to fetch 200,000 blocks (in random order) to get those rows.  Since at default settings fetching 200,000 random blocks is considered as expensive as fetching 800,000 sequential blocks, the index scan you want already looks more expensive than the sequential scan.  But,  if you want to use the index scan, you also have to fetch the index blocks, which a sequential scan does not need to do.  There are probably about 50,000 index blocks, but each one has to be fetched about 5 times (248,494/50,000).  Because your effective_cache_size is so low, PG assumes the next time it needs to fetch the same block, it will no longer be in memory and so needs to be fetched again at full random_page_cost.\n It is clear that PG does full table scan \"Seq Scan on views  (cost=0.00..819136.56 rows=17434456 width=8)\"\nDon't understand why PG doesn't use views_visit_id_index in that query but rather scans whole table. One explanation I have found that when resulting dataset constitutes ~15% of total number of rows in the table then seq scan is used.\nI don't know where you found that rule of thumb, but it would probably more accurate if it was given in in terms of the percentage of the  table's *blocks* scanned, rather than *rows*.\n  In this case resulting dataset is just 1.5% of total number of rows. \nSince there are about 30 rows per block, scanning 1.5% of the rows means scanning somewhat less than 45% of the blocks, assuming the rows are randomly distributed over the blocks.  And they are scanned in a less efficient way.\nPostgres 9.2\nYou are probably getting hit hard by the overly-large \"fudge factor\" penalty for scans of large indexes, of much discussion recently in regards to 9.2. 
\nUbuntu 12.04.1 LTSshared_buffers = 4GB the rest of the settings are default ones\nThe default effective_cache_size is almost certainly wrong, and if the analogy to MSSQL to is correct, then random_page_cost almost certainly is as well.Another poster referred you to the wiki page for suggestion on how to report slow queries, particularly using EXPLAIN (analyze, buffers) rather than merely EXPLAIN.  In this case, I would also try setting enable_hashjoin=off and enable_mergejoin=off in the session, in order to force the planner to use the plan you think you want, so we can see what PG thinks of that one.\nCheers,Jeff", "msg_date": "Tue, 8 Jan 2013 20:34:11 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "Guys, thanks a lot for your input. It is very valuable for us. We plan to\nfix a separate dev server similar to production one, copy all data there\nand try you suggestions as we really don't want to do it on production\nserver. I also noticed that IOPS jumps to 100% when running this query. So\nit is a bit scary to make those changes in production directly. Will report\nback on the progress and findings.\n\nGuys, thanks a lot for your input. It is very valuable for us. We plan to fix a separate dev server similar to production one, copy all data there and try you suggestions as we really don't want to do it on production server. I also noticed that IOPS jumps to 100% when running this query. So it is a bit scary to make those changes in production directly. Will report back on the progress and findings.", "msg_date": "Wed, 9 Jan 2013 09:49:43 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Wed, Jan 9, 2013 at 9:49 AM, Alex Vinnik <[email protected]> wrote:\n> Guys, thanks a lot for your input. It is very valuable for us. We plan to\n> fix a separate dev server similar to production one, copy all data there and\n> try you suggestions as we really don't want to do it on production server. I\n> also noticed that IOPS jumps to 100% when running this query. So it is a bit\n> scary to make those changes in production directly. Will report back on the\n> progress and findings.\n\nnothing wrong with that, but keep in mind you can tweak\n'effective_cache_size' for a single session with 'set' command;\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 9 Jan 2013 09:53:08 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Tue, Jan 8, 2013 at 10:04 AM, AJ Weber <[email protected]> wrote:\n>>\n>> The current constraint exclusion code is quite simple-minded and\n>> doesn't know how to make use of check constraints that use the mod\n>> function, so the indexes of all partitions would have to be searched\n>> for each order_num-driven query, even though we know the data could\n>> only exist in one of them. The constraint exclusion codes does\n>> understand check constraints that involve ranges.\n>\n> Hmm. That's a bit of a limitation I didn't know about. I assume it doesn't\n> understand the percent (mod operator) just the same as not understanding the\n> MOD() function? Either way, I guess this strategy does not pan-out.\n\nYes, it doesn't detect either. 
It would use it if you formulate to\nevery equality query with an extra restriction: \"where id=1234567 and\nmod(id,100)=67\" or whatever.\n\n(But I was surprised that % and mod() are not recognized as being\nequivalent. If you specify it one way in the check constraint, you\nneed to use the same \"spelling\" in the where clause)\n\n>> There could still be some benefit as the table data would be\n>> concentrated, even if the index data is not.\n>\n> I'm reaching way, way back in my head, but I think _some_ RDBMS I worked\n> with previously had a way to \"cluster\" the rows around a single one of the\n> indexes on the table, thus putting the index and the row-data \"together\" and\n> reducing the number of IO's to retrieve the row if that index was used.\n\nIn Oracle this is called in \"index organized table\" or IOT (or it was\nat one point, they have the habit of rename most of their features\nwith each release). I don't know what other RDBMS call it.\nSupporting secondary indexes when the table data could move around was\nquite intricate/weird.\n\nPG doesn't have this index-organized-table feature--it has been\ndiscussed but I don't of any currently active effort to add it.\n\nThere is another feature, sometimes called clustering, in which the\nrows from different tables can be mingled together in the same block.\nSo both the parent order and the child order_line_item that have the\nsame order_num (i.e. the join column) would be in the same block. So\nonce you query for a specific order and did the necessary IO, the\ncorresponding order_line_item rows would already be in memory. I\nthought this was interesting, but I don't know how often it was\nactually used.\n\n> Am\n> I understanding that PG's \"cluster\" is strictly to group like rows together\n> logically -- table data only, not to coordinate the table row with the index\n> upon which you clustered them?\n\nThey are coordinated in a sense. Not as one single structure, but as\ntwo structures in parallel.\n\n\n\n>>> and all rows for the same order would stay\n>>> within the same partition-table.\n>>\n>> But usually a given order_num would only be of interest for a fraction\n>> of a second before moving on to some other order_num of interest, so\n>> by the time the relevant partition become fully cached, it would no\n>> longer be hot. Or, if the partitions were small enough, you could\n>> assume that all rows would be dragged into memory when the first one\n>> was requested because they lay so close to each other. But it is not\n>> feasible to have a large enough number of partitions to make that\n>> happen. But if the table is clustered, this is exactly what you would\n>> get--the trouble would be keeping it clustered. If most of the\n>> line-items are inserted at the same time as each other, they probably\n>> should be fairly well clustered to start with.\n>\n> Does decreasing the fill to like 90 help keep it clustered in-between times\n> that I could shutdown the app and perform a (re-) cluster on the overall\n> table? Problem is, with a table that size, and the hardware I'm \"blessed\n> with\", the cluster takes quite a bit of time. :(\n\nProbably not. If the data starts out clustered and gets updated a\nlot, lowering the fill factor might be able to prevent some\nde-clustering due to row migration. But when you insert new rows, PG\nmakes no effort to put them near existing rows with the same key. 
(In\na hypothetical future in which that did happen, lowering the fill\nfactor would then probably help)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Jan 2013 10:24:51 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition table in 9.0.x?" }, { "msg_contents": "It sure turned out that default settings are not a good fit. Setting\nrandom_page_cost\nto 1.0 made query to run in 2.6 seconds and I clearly see that indexes are\nbeing used in explain plan and IO utilization is close to 0.\n\nQUERY PLAN\nSort (cost=969787.23..970288.67 rows=200575 width=8) (actual\ntime=2176.045..2418.162 rows=241238 loops=1)\n Sort Key: visits.id, views.id\n Sort Method: external sort Disk: 4248kB\n -> Nested Loop (cost=0.00..950554.81 rows=200575 width=8) (actual\ntime=0.048..1735.357 rows=241238 loops=1)\n -> Index Scan using visits_created_at_index on visits\n (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..178.591\nrows=136021 loops=1)\n Index Cond: ((created_at >= '2012-12-15 00:00:00'::timestamp\nwithout time zone) AND (created_at < '2012-12-16 00:00:00'::timestamp\nwithout time zone))\n -> Index Scan using views_visit_id_index on views\n (cost=0.00..11.33 rows=12 width=8) (actual time=0.004..0.006 rows=2\nloops=136021)\n Index Cond: (visit_id = visits.id)\nTotal runtime: 2635.169 ms\n\nHowever I noticed that sorting is done using disk(\"external sort Disk:\n4248kB\") which prompted me to take a look at work_mem. But it turned out\nthat small increase to 4MB from default 1MB turns off index usage and query\ngets x10 slower. IO utilization jumped to 100% from literally nothing. so\nback to square one...\n\nQUERY PLAN\nSort (cost=936642.75..937144.19 rows=200575 width=8) (actual\ntime=33200.762..33474.443 rows=241238 loops=1)\n Sort Key: visits.id, views.id\n Sort Method: external merge Disk: 4248kB\n -> Hash Join (cost=6491.17..917410.33 rows=200575 width=8) (actual\ntime=7156.498..32723.221 rows=241238 loops=1)\n Hash Cond: (views.visit_id = visits.id)\n -> Seq Scan on views (cost=0.00..832189.95 rows=8768395 width=8)\n(actual time=0.100..12126.342 rows=8200704 loops=1)\n -> Hash (cost=5459.16..5459.16 rows=82561 width=4) (actual\ntime=353.683..353.683 rows=136021 loops=1)\n Buckets: 16384 Batches: 2 (originally 1) Memory Usage:\n4097kB\n -> Index Scan using visits_created_at_index on visits\n (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..175.051\nrows=136021 loops=1)\n Index Cond: ((created_at >= '2012-12-15\n00:00:00'::timestamp without time zone) AND (created_at < '2012-12-16\n00:00:00'::timestamp without time zone))\nTotal runtime: 33698.000 ms\n\nBasically PG is going through all views again and not using \"Index Scan\nusing views_visit_id_index on views\". Looks like setting work_mem confuses\nplanner somehow. Any idea what can be done to do sorting in memory. I\nsuspect it should make query even more faster. Thanks -Alex\n\n\n\n> nothing wrong with that, but keep in mind you can tweak\n> 'effective_cache_size' for a single session with 'set' command;\n>\n> merlin\n>\n\nIt sure turned out that default settings are not a good fit. 
Setting random_page_cost to 1.0 made query to run in 2.6 seconds and I clearly see that indexes are being used in explain plan and IO utilization is close to 0.\nQUERY PLANSort  (cost=969787.23..970288.67 rows=200575 width=8) (actual time=2176.045..2418.162 rows=241238 loops=1)\n  Sort Key: visits.id, views.id  Sort Method: external sort  Disk: 4248kB  ->  Nested Loop  (cost=0.00..950554.81 rows=200575 width=8) (actual time=0.048..1735.357 rows=241238 loops=1)\n        ->  Index Scan using visits_created_at_index on visits  (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..178.591 rows=136021 loops=1)              Index Cond: ((created_at >= '2012-12-15 00:00:00'::timestamp without time zone) AND (created_at < '2012-12-16 00:00:00'::timestamp without time zone))\n        ->  Index Scan using views_visit_id_index on views  (cost=0.00..11.33 rows=12 width=8) (actual time=0.004..0.006 rows=2 loops=136021)              Index Cond: (visit_id = visits.id)\nTotal runtime: 2635.169 msHowever I noticed that sorting is done using disk(\"external sort  Disk: 4248kB\") which prompted me to take a look at work_mem. But it turned out that small increase to 4MB from default 1MB turns off index usage and query gets x10 slower. IO utilization jumped to 100% from literally nothing. so back to square one...\nQUERY PLANSort  (cost=936642.75..937144.19 rows=200575 width=8) (actual time=33200.762..33474.443 rows=241238 loops=1)\n  Sort Key: visits.id, views.id  Sort Method: external merge  Disk: 4248kB  ->  Hash Join  (cost=6491.17..917410.33 rows=200575 width=8) (actual time=7156.498..32723.221 rows=241238 loops=1)\n        Hash Cond: (views.visit_id = visits.id)        ->  Seq Scan on views  (cost=0.00..832189.95 rows=8768395 width=8) (actual time=0.100..12126.342 rows=8200704 loops=1)\n        ->  Hash  (cost=5459.16..5459.16 rows=82561 width=4) (actual time=353.683..353.683 rows=136021 loops=1)              Buckets: 16384  Batches: 2 (originally 1)  Memory Usage: 4097kB              ->  Index Scan using visits_created_at_index on visits  (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..175.051 rows=136021 loops=1)\n                    Index Cond: ((created_at >= '2012-12-15 00:00:00'::timestamp without time zone) AND (created_at < '2012-12-16 00:00:00'::timestamp without time zone))Total runtime: 33698.000 ms\nBasically PG is going through all views again and not using \"Index Scan using views_visit_id_index on views\". Looks like setting work_mem confuses planner somehow. Any idea what can be done to do sorting in memory. I suspect it should make query even more faster. Thanks -Alex\n\nnothing wrong with that, but keep in mind you can tweak\n'effective_cache_size' for a single session with 'set' command;\n\nmerlin", "msg_date": "Mon, 28 Jan 2013 17:43:51 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\n\n> It sure turned out that default settings are not a good fit.\n>\n\ndo you know pgtune?\nit's a good tool for starters, if you want a fast postgres and don't really\nwant to learn what's behind the scenes.\n\nrandom_page_cost=1 might be not what you really want.\nit would mean that random reads are as fast as as sequential reads, which\nprobably is true only for SSD\n\n\nFilip\n\nOn Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\nIt sure turned out that default settings are not a good fit. 
do you know pgtune?it's a good tool for starters, if you want a fast postgres and don't really want to learn what's behind the scenes.\nrandom_page_cost=1 might be not what you really want. it would mean that random reads are as fast as as sequential reads, which probably is true only for SSDFilip", "msg_date": "Mon, 28 Jan 2013 18:55:10 -0600", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\n> It sure turned out that default settings are not a good fit. Setting\n> random_page_cost to 1.0 made query to run in 2.6 seconds and I clearly see\n> that indexes are being used in explain plan and IO utilization is close to\n> 0.\n>\n> QUERY PLAN\n> Sort (cost=969787.23..970288.67 rows=200575 width=8) (actual\n> time=2176.045..2418.162 rows=241238 loops=1)\n> Sort Key: visits.id, views.id\n> Sort Method: external sort Disk: 4248kB\n> -> Nested Loop (cost=0.00..950554.81 rows=200575 width=8) (actual\n> time=0.048..1735.357 rows=241238 loops=1)\n> -> Index Scan using visits_created_at_index on visits\n> (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..178.591\n> rows=136021 loops=1)\n> Index Cond: ((created_at >= '2012-12-15 00:00:00'::timestamp\n> without time zone) AND (created_at < '2012-12-16 00:00:00'::timestamp\n> without time zone))\n> -> Index Scan using views_visit_id_index on views\n> (cost=0.00..11.33 rows=12 width=8) (actual time=0.004..0.006 rows=2\n> loops=136021)\n> Index Cond: (visit_id = visits.id)\n> Total runtime: 2635.169 ms\n>\n> However I noticed that sorting is done using disk(\"external sort Disk:\n> 4248kB\") which prompted me to take a look at work_mem. But it turned out\n> that small increase to 4MB from default 1MB turns off index usage and query\n> gets x10 slower. IO utilization jumped to 100% from literally nothing. so\n> back to square one...\n>\n> QUERY PLAN\n> Sort (cost=936642.75..937144.19 rows=200575 width=8) (actual\n> time=33200.762..33474.443 rows=241238 loops=1)\n> Sort Key: visits.id, views.id\n> Sort Method: external merge Disk: 4248kB\n> -> Hash Join (cost=6491.17..917410.33 rows=200575 width=8) (actual\n> time=7156.498..32723.221 rows=241238 loops=1)\n> Hash Cond: (views.visit_id = visits.id)\n> -> Seq Scan on views (cost=0.00..832189.95 rows=8768395 width=8)\n> (actual time=0.100..12126.342 rows=8200704 loops=1)\n> -> Hash (cost=5459.16..5459.16 rows=82561 width=4) (actual\n> time=353.683..353.683 rows=136021 loops=1)\n> Buckets: 16384 Batches: 2 (originally 1) Memory Usage:\n> 4097kB\n> -> Index Scan using visits_created_at_index on visits\n> (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..175.051\n> rows=136021 loops=1)\n> Index Cond: ((created_at >= '2012-12-15\n> 00:00:00'::timestamp without time zone) AND (created_at < '2012-12-16\n> 00:00:00'::timestamp without time zone))\n> Total runtime: 33698.000 ms\n>\n> Basically PG is going through all views again and not using \"Index Scan\n> using views_visit_id_index on views\". Looks like setting work_mem confuses\n> planner somehow. Any idea what can be done to do sorting in memory. I\n> suspect it should make query even more faster. Thanks -Alex\n\nhm, what happens when you set work_mem a fair amount higher? (say,\n64mb). 
You can set it for one session by going \"set work_mem='64mb';\n\" as opposed to the entire server in postgresql.conf.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Jan 2013 20:31:32 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Mon, Jan 28, 2013 at 6:55 PM, Filip Rembiałkowski <[email protected]>wrote:\n\n>\n> On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\n>\n>> It sure turned out that default settings are not a good fit.\n>>\n>\n> do you know pgtune?\n> it's a good tool for starters, if you want a fast postgres and don't\n> really want to learn what's behind the scenes.\n>\nYeah.. I came across pgtune but noticed that latest version dated\n2009-10-29 http://pgfoundry.org/frs/?group_id=1000416 which is kind of\noutdated. Tar file has settings for pg 8.3. Is still relevant?\n\n\n>\n> random_page_cost=1 might be not what you really want.\n> it would mean that random reads are as fast as as sequential reads, which\n> probably is true only for SSD\n>\nWhat randon_page_cost would be more appropriate for EC2 EBS Provisioned\nvolume that can handle 2,000 IOPS?\n\n>\n>\n>\n> Filip\n>\n>\n\nOn Mon, Jan 28, 2013 at 6:55 PM, Filip Rembiałkowski <[email protected]> wrote:\n\nOn Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\nIt sure turned out that default settings are not a good fit. do you know pgtune?it's a good tool for starters, if you want a fast postgres and don't really want to learn what's behind the scenes.\nYeah.. I came across pgtune but noticed that latest version dated 2009-10-29 http://pgfoundry.org/frs/?group_id=1000416 which is kind of outdated. Tar file has settings for pg 8.3. Is still relevant?\n \n\nrandom_page_cost=1 might be not what you really want. it would mean that random reads are as fast as as sequential reads, which probably is true only for SSDWhat randon_page_cost would be more appropriate for EC2 EBS Provisioned volume that can handle 2,000 IOPS? \n\nFilip", "msg_date": "Tue, 29 Jan 2013 08:24:10 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "Setting work_mem to 64MB triggers in memory sort but look what happens with\nviews look up. PG goes through all records there \"Seq Scan on views\"\ninstead of using visitor_id index and I have only subset of real data to\nplay around. Can imagine what cost would be running it against bigger\ndataset. Something else is in play here that makes planner to take this\nroute. 
Any ideas how to gain more insight into planner's inner workings?\n\nQUERY PLAN\nSort (cost=960280.46..960844.00 rows=225414 width=8) (actual\ntime=23328.040..23537.126 rows=209401 loops=1)\n Sort Key: visits.id, views.id\n Sort Method: quicksort Memory: 15960kB\n -> Hash Join (cost=8089.16..940238.66 rows=225414 width=8) (actual\ntime=6622.072..22995.890 rows=209401 loops=1)\n Hash Cond: (views.visit_id = visits.id)\n -> Seq Scan on views (cost=0.00..831748.05 rows=8724205 width=8)\n(actual time=0.093..10552.306 rows=6995893 loops=1)\n -> Hash (cost=6645.51..6645.51 rows=115492 width=4) (actual\ntime=307.389..307.389 rows=131311 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 4617kB\n -> Index Scan using visits_created_at_index on visits\n (cost=0.00..6645.51 rows=115492 width=4) (actual time=0.040..163.151\nrows=131311 loops=1)\n Index Cond: ((created_at >= '2013-01-15\n00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16\n00:00:00'::timestamp without time zone))\nTotal runtime: 23733.045 ms\n\n\nOn Mon, Jan 28, 2013 at 8:31 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\n> > It sure turned out that default settings are not a good fit. Setting\n> > random_page_cost to 1.0 made query to run in 2.6 seconds and I clearly\n> see\n> > that indexes are being used in explain plan and IO utilization is close\n> to\n> > 0.\n> >\n> > QUERY PLAN\n> > Sort (cost=969787.23..970288.67 rows=200575 width=8) (actual\n> > time=2176.045..2418.162 rows=241238 loops=1)\n> > Sort Key: visits.id, views.id\n> > Sort Method: external sort Disk: 4248kB\n> > -> Nested Loop (cost=0.00..950554.81 rows=200575 width=8) (actual\n> > time=0.048..1735.357 rows=241238 loops=1)\n> > -> Index Scan using visits_created_at_index on visits\n> > (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..178.591\n> > rows=136021 loops=1)\n> > Index Cond: ((created_at >= '2012-12-15\n> 00:00:00'::timestamp\n> > without time zone) AND (created_at < '2012-12-16 00:00:00'::timestamp\n> > without time zone))\n> > -> Index Scan using views_visit_id_index on views\n> > (cost=0.00..11.33 rows=12 width=8) (actual time=0.004..0.006 rows=2\n> > loops=136021)\n> > Index Cond: (visit_id = visits.id)\n> > Total runtime: 2635.169 ms\n> >\n> > However I noticed that sorting is done using disk(\"external sort Disk:\n> > 4248kB\") which prompted me to take a look at work_mem. But it turned out\n> > that small increase to 4MB from default 1MB turns off index usage and\n> query\n> > gets x10 slower. IO utilization jumped to 100% from literally nothing. 
so\n> > back to square one...\n> >\n> > QUERY PLAN\n> > Sort (cost=936642.75..937144.19 rows=200575 width=8) (actual\n> > time=33200.762..33474.443 rows=241238 loops=1)\n> > Sort Key: visits.id, views.id\n> > Sort Method: external merge Disk: 4248kB\n> > -> Hash Join (cost=6491.17..917410.33 rows=200575 width=8) (actual\n> > time=7156.498..32723.221 rows=241238 loops=1)\n> > Hash Cond: (views.visit_id = visits.id)\n> > -> Seq Scan on views (cost=0.00..832189.95 rows=8768395\n> width=8)\n> > (actual time=0.100..12126.342 rows=8200704 loops=1)\n> > -> Hash (cost=5459.16..5459.16 rows=82561 width=4) (actual\n> > time=353.683..353.683 rows=136021 loops=1)\n> > Buckets: 16384 Batches: 2 (originally 1) Memory Usage:\n> > 4097kB\n> > -> Index Scan using visits_created_at_index on visits\n> > (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..175.051\n> > rows=136021 loops=1)\n> > Index Cond: ((created_at >= '2012-12-15\n> > 00:00:00'::timestamp without time zone) AND (created_at < '2012-12-16\n> > 00:00:00'::timestamp without time zone))\n> > Total runtime: 33698.000 ms\n> >\n> > Basically PG is going through all views again and not using \"Index Scan\n> > using views_visit_id_index on views\". Looks like setting work_mem\n> confuses\n> > planner somehow. Any idea what can be done to do sorting in memory. I\n> > suspect it should make query even more faster. Thanks -Alex\n>\n> hm, what happens when you set work_mem a fair amount higher? (say,\n> 64mb). You can set it for one session by going \"set work_mem='64mb';\n> \" as opposed to the entire server in postgresql.conf.\n>\n> merlin\n>\n\nSetting work_mem to 64MB triggers in memory sort but look what happens with views look up. PG goes through all records there \"Seq Scan on views\" instead of using visitor_id index and I have only subset of real data to play around. Can imagine what cost would be running it against bigger dataset. Something else is in play here that makes planner to take this route. Any ideas how to gain more insight into planner's inner workings?\nQUERY PLANSort  (cost=960280.46..960844.00 rows=225414 width=8) (actual time=23328.040..23537.126 rows=209401 loops=1)\n  Sort Key: visits.id, views.id  Sort Method: quicksort  Memory: 15960kB  ->  Hash Join  (cost=8089.16..940238.66 rows=225414 width=8) (actual time=6622.072..22995.890 rows=209401 loops=1)\n        Hash Cond: (views.visit_id = visits.id)        ->  Seq Scan on views  (cost=0.00..831748.05 rows=8724205 width=8) (actual time=0.093..10552.306 rows=6995893 loops=1)\n        ->  Hash  (cost=6645.51..6645.51 rows=115492 width=4) (actual time=307.389..307.389 rows=131311 loops=1)              Buckets: 16384  Batches: 1  Memory Usage: 4617kB              ->  Index Scan using visits_created_at_index on visits  (cost=0.00..6645.51 rows=115492 width=4) (actual time=0.040..163.151 rows=131311 loops=1)\n                    Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp without time zone))Total runtime: 23733.045 ms\nOn Mon, Jan 28, 2013 at 8:31 PM, Merlin Moncure <[email protected]> wrote:\nOn Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\n\n> It sure turned out that default settings are not a good fit. 
Setting\n> random_page_cost to 1.0 made query to run in 2.6 seconds and I clearly see\n> that indexes are being used in explain plan and IO utilization is close to\n> 0.\n>\n> QUERY PLAN\n> Sort  (cost=969787.23..970288.67 rows=200575 width=8) (actual\n> time=2176.045..2418.162 rows=241238 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: external sort  Disk: 4248kB\n>   ->  Nested Loop  (cost=0.00..950554.81 rows=200575 width=8) (actual\n> time=0.048..1735.357 rows=241238 loops=1)\n>         ->  Index Scan using visits_created_at_index on visits\n> (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..178.591\n> rows=136021 loops=1)\n>               Index Cond: ((created_at >= '2012-12-15 00:00:00'::timestamp\n> without time zone) AND (created_at < '2012-12-16 00:00:00'::timestamp\n> without time zone))\n>         ->  Index Scan using views_visit_id_index on views\n> (cost=0.00..11.33 rows=12 width=8) (actual time=0.004..0.006 rows=2\n> loops=136021)\n>               Index Cond: (visit_id = visits.id)\n> Total runtime: 2635.169 ms\n>\n> However I noticed that sorting is done using disk(\"external sort  Disk:\n> 4248kB\") which prompted me to take a look at work_mem. But it turned out\n> that small increase to 4MB from default 1MB turns off index usage and query\n> gets x10 slower. IO utilization jumped to 100% from literally nothing. so\n> back to square one...\n>\n> QUERY PLAN\n> Sort  (cost=936642.75..937144.19 rows=200575 width=8) (actual\n> time=33200.762..33474.443 rows=241238 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: external merge  Disk: 4248kB\n>   ->  Hash Join  (cost=6491.17..917410.33 rows=200575 width=8) (actual\n> time=7156.498..32723.221 rows=241238 loops=1)\n>         Hash Cond: (views.visit_id = visits.id)\n>         ->  Seq Scan on views  (cost=0.00..832189.95 rows=8768395 width=8)\n> (actual time=0.100..12126.342 rows=8200704 loops=1)\n>         ->  Hash  (cost=5459.16..5459.16 rows=82561 width=4) (actual\n> time=353.683..353.683 rows=136021 loops=1)\n>               Buckets: 16384  Batches: 2 (originally 1)  Memory Usage:\n> 4097kB\n>               ->  Index Scan using visits_created_at_index on visits\n> (cost=0.00..5459.16 rows=82561 width=4) (actual time=0.032..175.051\n> rows=136021 loops=1)\n>                     Index Cond: ((created_at >= '2012-12-15\n> 00:00:00'::timestamp without time zone) AND (created_at < '2012-12-16\n> 00:00:00'::timestamp without time zone))\n> Total runtime: 33698.000 ms\n>\n> Basically PG is going through all views again and not using \"Index Scan\n> using views_visit_id_index on views\". Looks like setting work_mem confuses\n> planner somehow. Any idea what can be done to do sorting in memory. I\n> suspect it should make query even more faster. Thanks -Alex\n\nhm, what happens when you set work_mem a fair amount higher? (say,\n64mb).   You can set it for one session by going \"set work_mem='64mb';\n\" as opposed to the entire server in postgresql.conf.\n\nmerlin", "msg_date": "Tue, 29 Jan 2013 08:41:50 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Tue, Jan 29, 2013 at 8:24 AM, Alex Vinnik <[email protected]> wrote:\n\n> On Mon, Jan 28, 2013 at 6:55 PM, Filip Rembiałkowski <[email protected]>wrote:\n>\n>>\n>> do you know pgtune?\n>> it's a good tool for starters, if you want a fast postgres and don't\n>> really want to learn what's behind the scenes.\n>>\n> Yeah.. 
I came across pgtune but noticed that latest version dated\n> 2009-10-29 http://pgfoundry.org/frs/?group_id=1000416 which is kind of\n> outdated. Tar file has settings for pg 8.3. Is still relevant?\n>\n\nYes, I'm sure it will not do anything bad to your config.\n\n\n>\n>> random_page_cost=1 might be not what you really want.\n>> it would mean that random reads are as fast as as sequential reads, which\n>> probably is true only for SSD\n>>\n> What randon_page_cost would be more appropriate for EC2 EBS Provisioned\n> volume that can handle 2,000 IOPS?\n>\n>>\n>>\nI'd say: don't guess. Measure.\nUse any tool that can test sequential disk block reads versus random disk\nblock reads.\nbonnie++ is quite popular.\n\n\n\nFilip\n\nOn Tue, Jan 29, 2013 at 8:24 AM, Alex Vinnik <[email protected]> wrote:\nOn Mon, Jan 28, 2013 at 6:55 PM, Filip Rembiałkowski <[email protected]> wrote:\n\ndo you know pgtune?it's a good tool for starters, if you want a fast postgres and don't really want to learn what's behind the scenes.\nYeah.. I came across pgtune but noticed that latest version dated 2009-10-29 http://pgfoundry.org/frs/?group_id=1000416 which is kind of outdated. Tar file has settings for pg 8.3. Is still relevant?\nYes, I'm sure it will not do anything bad to your config.  \n\n\n\nrandom_page_cost=1 might be not what you really want. it would mean that random reads are as fast as as sequential reads, which probably is true only for SSDWhat randon_page_cost would be more appropriate for EC2 EBS Provisioned volume that can handle 2,000 IOPS? \n\nI'd say: don't guess. Measure. Use any tool that can test sequential disk block reads versus random disk block reads.\n\nbonnie++ is quite popular.Filip", "msg_date": "Tue, 29 Jan 2013 10:19:19 -0600", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Tue, Jan 29, 2013 at 8:41 AM, Alex Vinnik <[email protected]> wrote:\n> Setting work_mem to 64MB triggers in memory sort but look what happens with\n> views look up. PG goes through all records there \"Seq Scan on views\" instead\n> of using visitor_id index and I have only subset of real data to play\n> around. Can imagine what cost would be running it against bigger dataset.\n> Something else is in play here that makes planner to take this route. Any\n> ideas how to gain more insight into planner's inner workings?\n\ndid you set effective_cache_seize as noted upthread?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 29 Jan 2013 10:41:47 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" }, { "msg_contents": "On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:\n\n> random_page_cost=1 might be not what you really want. \n> it would mean that random reads are as fast as as sequential reads, which probably is true only for SSD\n> What randon_page_cost would be more appropriate for EC2 EBS Provisioned volume that can handle 2,000 IOPS? \n\nFor EC2 Provisioned IOPS volumes - not standard EBS - random_page_cost=1 is exactly what you want.\n\n\nOn Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:random_page_cost=1 might be not what you really want. 
{ "msg_contents": "On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:\n\n> random_page_cost=1 might be not what you really want.\n> it would mean that random reads are as fast as sequential reads, which probably is true only for SSD\n> What random_page_cost would be more appropriate for EC2 EBS Provisioned volume that can handle 2,000 IOPS?\n\nFor EC2 Provisioned IOPS volumes - not standard EBS - random_page_cost=1 is exactly what you want.", "msg_date": "Tue, 29 Jan 2013 09:39:07 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" },
{ "msg_contents": "On Tue, Jan 29, 2013 at 11:39 AM, Ben Chobot <[email protected]> wrote:\n\n> On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:\n>\n>> random_page_cost=1 might be not what you really want.\n>> it would mean that random reads are as fast as sequential reads, which\n>> probably is true only for SSD\n>\n> What random_page_cost would be more appropriate for EC2 EBS Provisioned\n> volume that can handle 2,000 IOPS?\n>\n> For EC2 Provisioned IOPS volumes - not standard EBS - random_page_cost=1\n> is exactly what you want.\n>\nWell... after some experimentation it turned out that random_page_cost=0.6\ngives me a fast query\n\nQUERY PLAN\nSort  (cost=754114.96..754510.46 rows=158199 width=8) (actual time=1839.324..2035.405 rows=209401 loops=1)\n  Sort Key: visits.id, views.id\n  Sort Method: quicksort  Memory: 15960kB\n  ->  Nested Loop  (cost=0.00..740453.38 rows=158199 width=8) (actual time=0.048..1531.592 rows=209401 loops=1)\n        ->  Index Scan using visits_created_at_index on visits  (cost=0.00..5929.82 rows=115492 width=4) (actual time=0.032..161.488 rows=131311 loops=1)\n              Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp without time zone))\n        ->  Index Scan using views_visit_id_index on views  (cost=0.00..6.26 rows=10 width=8) (actual time=0.003..0.005 rows=2 loops=131311)\n              Index Cond: (visit_id = visits.id)\nTotal runtime: 2234.142 ms\n\nrandom_page_cost=0.7 slows it down 16 times\n\nSort  (cost=804548.42..804943.92 rows=158199 width=8) (actual time=37011.337..37205.449 rows=209401 loops=1)\n  Sort Key: visits.id, views.id\n  Sort Method: quicksort  Memory: 15960kB\n  ->  Merge Join  (cost=15871.37..790886.85 rows=158199 width=8) (actual time=35673.602..36714.056 rows=209401 loops=1)\n        Merge Cond: (visits.id = views.visit_id)\n        ->  Sort  (cost=15824.44..16113.17 rows=115492 width=4) (actual time=335.486..463.085 rows=131311 loops=1)\n              Sort Key: visits.id\n              Sort Method: quicksort  Memory: 12300kB\n              ->  Index Scan using visits_created_at_index on visits  (cost=0.00..6113.04 rows=115492 width=4) (actual time=0.034..159.326 rows=131311 loops=1)\n                    Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp without time zone))\n        ->  Index Scan using views_visit_id_visit_buoy_index on views  (cost=0.00..757596.22 rows=6122770 width=8) (actual time=0.017..30765.316 rows=5145902 loops=1)\nTotal runtime: 37407.174 ms\n\nI am totally puzzled now...", "msg_date": "Tue, 29 Jan 2013 12:59:10 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" },
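One way to sanity-check the 0.6-versus-0.7 result above without driving random_page_cost below 1.0 is to rule the merge join out explicitly for a single session and compare the two plans directly. A sketch of that diagnostic, using the query behind the plans in this thread; it is a troubleshooting step, not a setting to keep in production:

    BEGIN;
    SET LOCAL enable_mergejoin = off;   -- diagnostic only; reverted automatically at ROLLBACK
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT visits.id, views.id
    FROM visits JOIN views ON visits.id = views.visit_id
    WHERE visits.created_at >= '2013-01-15'
      AND visits.created_at < '2013-01-16';
    ROLLBACK;
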
{ "msg_contents": "On Tue, Jan 29, 2013 at 12:59 PM, Alex Vinnik <[email protected]> wrote:\n>\n> On Tue, Jan 29, 2013 at 11:39 AM, Ben Chobot <[email protected]> wrote:\n>>\n>> On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:\n>>\n>>> random_page_cost=1 might be not what you really want.\n>>> it would mean that random reads are as fast as sequential reads, which\n>>> probably is true only for SSD\n>>\n>> What random_page_cost would be more appropriate for EC2 EBS Provisioned\n>> volume that can handle 2,000 IOPS?\n>>\n>> For EC2 Provisioned IOPS volumes - not standard EBS - random_page_cost=1\n>> is exactly what you want.\n>>\n> Well... after some experimentation it turned out that random_page_cost=0.6\n> gives me a fast query\n>\n> QUERY PLAN\n> Sort  (cost=754114.96..754510.46 rows=158199 width=8) (actual time=1839.324..2035.405 rows=209401 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: quicksort  Memory: 15960kB\n>   ->  Nested Loop  (cost=0.00..740453.38 rows=158199 width=8) (actual time=0.048..1531.592 rows=209401 loops=1)\n>         ->  Index Scan using visits_created_at_index on visits  (cost=0.00..5929.82 rows=115492 width=4) (actual time=0.032..161.488 rows=131311 loops=1)\n>               Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp without time zone))\n>         ->  Index Scan using views_visit_id_index on views  (cost=0.00..6.26 rows=10 width=8) (actual time=0.003..0.005 rows=2 loops=131311)\n>               Index Cond: (visit_id = visits.id)\n> Total runtime: 2234.142 ms\n>\n> random_page_cost=0.7 slows it down 16 times\n>\n> Sort  (cost=804548.42..804943.92 rows=158199 width=8) (actual time=37011.337..37205.449 rows=209401 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: quicksort  Memory: 15960kB\n>   ->  Merge Join  (cost=15871.37..790886.85 rows=158199 width=8) (actual time=35673.602..36714.056 rows=209401 loops=1)\n>         Merge Cond: (visits.id = views.visit_id)\n>         ->  Sort  (cost=15824.44..16113.17 rows=115492 width=4) (actual time=335.486..463.085 rows=131311 loops=1)\n>               Sort Key: visits.id\n>               Sort Method: quicksort  Memory: 12300kB\n>               ->  Index Scan using visits_created_at_index on visits  (cost=0.00..6113.04 rows=115492 width=4) (actual time=0.034..159.326 rows=131311 loops=1)\n>                     Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp without time zone))\n\n>         ->  Index Scan using views_visit_id_visit_buoy_index on views  (cost=0.00..757596.22 rows=6122770 width=8) (actual time=0.017..30765.316 rows=5145902 loops=1)\n\nSomething is awry here. pg is doing an index scan via\nviews_visit_id_visit_buoy_index with no matching condition. What's\nthe definition of that index? The reason why the random_page_cost\nadjustment is working is that you are highly penalizing sequential\ntype scans so that the database is avoiding the merge (sort A, sort B,\nstepwise compare).\n\nSQL server is doing a nestloop/index scan, just like the faster pg\nplan, but is a bit faster because it's parallelizing.\n\nmerlin", "msg_date": "Tue, 29 Jan 2013 13:35:13 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" },
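For reference, Merlin's question about the index definition can be answered straight from the catalogs (psql's \d views shows the same information). A minimal sketch using the index name from this thread:

    -- both return the CREATE INDEX statement behind the index in question
    SELECT pg_get_indexdef('views_visit_id_visit_buoy_index'::regclass);

    SELECT indexdef
    FROM pg_indexes
    WHERE indexname = 'views_visit_id_visit_buoy_index';
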
{ "msg_contents": "On Mon, Jan 28, 2013 at 3:43 PM, Alex Vinnik <[email protected]> wrote:\n> It sure turned out that default settings are not a good fit. Setting\n> random_page_cost to 1.0 made query to run in 2.6 seconds and I clearly see\n> that indexes are being used in explain plan and IO utilization is close to\n> 0.\n>\n> QUERY PLAN\n> Sort  (cost=969787.23..970288.67 rows=200575 width=8) (actual time=2176.045..2418.162 rows=241238 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: external sort  Disk: 4248kB\n\nWhat query are you running? The query you originally showed us should\nnot be doing this sort in the first place.\n\nCheers,\n\nJeff", "msg_date": "Tue, 29 Jan 2013 12:06:50 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" },
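Since the "external sort Disk: 4248kB" in the quoted plan is what triggered the work_mem experiments upthread, one low-risk check is to raise work_mem for a single session and see whether the sort switches to an in-memory quicksort. A sketch using the query Alex posts in the following message; 16MB is an illustrative value just above the reported spill, not a tuning recommendation:

    -- session-only experiment: is work_mem the reason for the on-disk sort?
    SET work_mem = '16MB';
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT visits.id, views.id
    FROM visits JOIN views ON visits.id = views.visit_id
    WHERE visits.created_at >= '2013-01-15'
      AND visits.created_at < '2013-01-16'
    ORDER BY visits.id, views.id;
    RESET work_mem;
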
{ "msg_contents": "On Tue, Jan 29, 2013 at 2:06 PM, Jeff Janes <[email protected]> wrote:\n\n> >   Sort Key: visits.id, views.id\n> >   Sort Method: external sort  Disk: 4248kB\n>\n> What query are you running? The query you originally showed us should\n> not be doing this sort in the first place.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHere is the query:\n\nselect visits.id, views.id\nfrom visits join views on visits.id = views.visit_id\nwhere visits.created_at >= '01/15/2013' and visits.created_at < '01/16/2013'\norder by visits.id, views.id;\n\nThe original query didn't have the order by clause.\n\nHere is the query plan without the order by:\nMerge Join  (cost=18213.46..802113.80 rows=182579 width=8) (actual time=144443.693..145469.499 rows=209401 loops=1)\n  Merge Cond: (visits.id = views.visit_id)\n  ->  Sort  (cost=18195.47..18523.91 rows=131373 width=4) (actual time=335.496..464.929 rows=131311 loops=1)\n        Sort Key: visits.id\n        Sort Method: quicksort  Memory: 12300kB\n        ->  Index Scan using visits_created_at_index on visits  (cost=0.00..7026.59 rows=131373 width=4) (actual time=0.037..162.047 rows=131311 loops=1)\n              Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp without time zone))\n  ->  Index Scan using views_visit_id_visit_buoy_index on views  (cost=0.00..766120.99 rows=6126002 width=8) (actual time=18.960..140565.130 rows=4014837 loops=1)\nTotal runtime: 145664.274 ms", "msg_date": "Tue, 29 Jan 2013 14:43:17 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" },
{ "msg_contents": "index definition\nCREATE INDEX views_visit_id_visit_buoy_index ON views USING btree\n(visit_id, visit_buoy)\n\nOn Tue, Jan 29, 2013 at 1:35 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Jan 29, 2013 at 12:59 PM, Alex Vinnik <[email protected]>\n> wrote:\n> >\n> > On Tue, Jan 29, 2013 at 11:39 AM, Ben Chobot <[email protected]>\n> wrote:\n> >>\n> >> On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:\n> >>\n> >>> random_page_cost=1 might be not what you really want.\n> >>> it would mean that random reads are as fast as sequential reads, which\n> >>> probably is true only for SSD\n> >>\n> >> What random_page_cost would be more appropriate for EC2 EBS Provisioned\n> >> volume that can handle 2,000 IOPS?\n> >>\n> >> For EC2 Provisioned IOPS volumes - not standard EBS - random_page_cost=1\n> >> is exactly what you want.\n> >>\n> > Well... 
after some experimentation it turned out that\n> random_page_cost=0.6\n> > gives me fast query\n> >\n> > QUERY PLAN\n> > Sort (cost=754114.96..754510.46 rows=158199 width=8) (actual\n> > time=1839.324..2035.405 rows=209401 loops=1)\n> > Sort Key: visits.id, views.id\n> > Sort Method: quicksort Memory: 15960kB\n> > -> Nested Loop (cost=0.00..740453.38 rows=158199 width=8) (actual\n> > time=0.048..1531.592 rows=209401 loops=1)\n> > -> Index Scan using visits_created_at_index on visits\n> > (cost=0.00..5929.82 rows=115492 width=4) (actual time=0.032..161.488\n> > rows=131311 loops=1)\n> > Index Cond: ((created_at >= '2013-01-15\n> 00:00:00'::timestamp\n> > without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp\n> > without time zone))\n> > -> Index Scan using views_visit_id_index on views\n> (cost=0.00..6.26\n> > rows=10 width=8) (actual time=0.003..0.005 rows=2 loops=131311)\n> > Index Cond: (visit_id = visits.id)\n> > Total runtime: 2234.142 ms\n> >\n> > random_page_cost=0.7 slows it down 16 times\n> >\n> > Sort (cost=804548.42..804943.92 rows=158199 width=8) (actual\n> > time=37011.337..37205.449 rows=209401 loops=1)\n> > Sort Key: visits.id, views.id\n> > Sort Method: quicksort Memory: 15960kB\n> > -> Merge Join (cost=15871.37..790886.85 rows=158199 width=8) (actual\n> > time=35673.602..36714.056 rows=209401 loops=1)\n> > Merge Cond: (visits.id = views.visit_id)\n> > -> Sort (cost=15824.44..16113.17 rows=115492 width=4) (actual\n> > time=335.486..463.085 rows=131311 loops=1)\n> > Sort Key: visits.id\n> > Sort Method: quicksort Memory: 12300kB\n> > -> Index Scan using visits_created_at_index on visits\n> > (cost=0.00..6113.04 rows=115492 width=4) (actual time=0.034..159.326\n> > rows=131311 loops=1)\n> > Index Cond: ((created_at >= '2013-01-15\n> > 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16\n> > 00:00:00'::timestamp without time zone))\n>\n> > -> Index Scan using views_visit_id_visit_buoy_index on views\n> > (cost=0.00..757596.22 rows=6122770 width=8) (actual time=0.017..30765.316\n> > rows=5145902 loops=1)\n>\n> Something is awry here. pg is doing an index scan via\n> views_visit_id_visit_buoy_index with no matching condition. What's\n> the definition of that index? The reason why the random_page_cost\n> adjustment is working is that you are highly penalizing sequential\n> type scans so that the database is avoiding the merge (sort A, sort B,\n> stepwise compare).\n>\n> SQL server is doing a nestloop/index scan, just like the faster pg\n> plan, but is a bit faster because it's parallelizing.\n>\n> merlin\n>\n\nindex definitionCREATE INDEX views_visit_id_visit_buoy_index ON views USING btree (visit_id, visit_buoy)\nOn Tue, Jan 29, 2013 at 1:35 PM, Merlin Moncure <[email protected]> wrote:\nOn Tue, Jan 29, 2013 at 12:59 PM, Alex Vinnik <[email protected]> wrote:\n\n>\n>\n>\n> On Tue, Jan 29, 2013 at 11:39 AM, Ben Chobot <[email protected]> wrote:\n>>\n>> On Jan 29, 2013, at 6:24 AM, Alex Vinnik wrote:\n>>\n>>> random_page_cost=1 might be not what you really want.\n>>> it would mean that random reads are as fast as as sequential reads, which\n>>> probably is true only for SSD\n>>\n>> What randon_page_cost would be more appropriate for EC2 EBS Provisioned\n>> volume that can handle 2,000 IOPS?\n>>\n>>\n>> For EC2 Provisioned IOPS volumes - not standard EBS - random_page_cost=1\n>> is exactly what you want.\n>>\n> Well... 
after some experimentation it turned out that random_page_cost=0.6\n> gives me fast query\n>\n> QUERY PLAN\n> Sort  (cost=754114.96..754510.46 rows=158199 width=8) (actual\n> time=1839.324..2035.405 rows=209401 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: quicksort  Memory: 15960kB\n>   ->  Nested Loop  (cost=0.00..740453.38 rows=158199 width=8) (actual\n> time=0.048..1531.592 rows=209401 loops=1)\n>         ->  Index Scan using visits_created_at_index on visits\n> (cost=0.00..5929.82 rows=115492 width=4) (actual time=0.032..161.488\n> rows=131311 loops=1)\n>               Index Cond: ((created_at >= '2013-01-15 00:00:00'::timestamp\n> without time zone) AND (created_at < '2013-01-16 00:00:00'::timestamp\n> without time zone))\n>         ->  Index Scan using views_visit_id_index on views  (cost=0.00..6.26\n> rows=10 width=8) (actual time=0.003..0.005 rows=2 loops=131311)\n>               Index Cond: (visit_id = visits.id)\n> Total runtime: 2234.142 ms\n>\n> random_page_cost=0.7 slows it down 16 times\n>\n> Sort  (cost=804548.42..804943.92 rows=158199 width=8) (actual\n> time=37011.337..37205.449 rows=209401 loops=1)\n>   Sort Key: visits.id, views.id\n>   Sort Method: quicksort  Memory: 15960kB\n>   ->  Merge Join  (cost=15871.37..790886.85 rows=158199 width=8) (actual\n> time=35673.602..36714.056 rows=209401 loops=1)\n>         Merge Cond: (visits.id = views.visit_id)\n>         ->  Sort  (cost=15824.44..16113.17 rows=115492 width=4) (actual\n> time=335.486..463.085 rows=131311 loops=1)\n>               Sort Key: visits.id\n>               Sort Method: quicksort  Memory: 12300kB\n>               ->  Index Scan using visits_created_at_index on visits\n> (cost=0.00..6113.04 rows=115492 width=4) (actual time=0.034..159.326\n> rows=131311 loops=1)\n>                     Index Cond: ((created_at >= '2013-01-15\n> 00:00:00'::timestamp without time zone) AND (created_at < '2013-01-16\n> 00:00:00'::timestamp without time zone))\n\n>         ->  Index Scan using views_visit_id_visit_buoy_index on views\n> (cost=0.00..757596.22 rows=6122770 width=8) (actual time=0.017..30765.316\n> rows=5145902 loops=1)\n\nSomething is awry here. pg is doing an index scan via\nviews_visit_id_visit_buoy_index with no matching condition.  What's\nthe definition of that index? 
The reason why the random_page_cost\nadjustment is working is that you are highly penalizing sequential\ntype scans so that the database is avoiding the merge (sort A, sort B,\nstepwise compare).\n\nSQL server is doing a nestloop/index scan, just like the faster pg\nplan, but is a bit faster because it's parallelizing.\n\nmerlin", "msg_date": "Tue, 29 Jan 2013 14:48:50 -0600", "msg_from": "Alex Vinnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple join doesn't use index" },
{ "msg_contents": "On Mon, Jan 28, 2013 at 4:55 PM, Filip Rembiałkowski\n<[email protected]> wrote:\n>\n> On Mon, Jan 28, 2013 at 5:43 PM, Alex Vinnik <[email protected]> wrote:\n>>\n>> It sure turned out that default settings are not a good fit.\n>\n> do you know pgtune?\n> it's a good tool for starters, if you want a fast postgres and don't really\n> want to learn what's behind the scenes.\n>\n> random_page_cost=1 might be not what you really want.\n> it would mean that random reads are as fast as sequential reads, which\n> probably is true only for SSD\n\nOr that the \"reads\" are cached and coming from RAM, which is almost\nsurely the case here.\n\nCheers,\n\nJeff", "msg_date": "Tue, 29 Jan 2013 15:15:36 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" },
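A rough way to test Jeff's point that the "random" reads here are really coming from cache is to compare the size of the two tables (plus their indexes) with the memory available to the server. A sketch using the table names from this thread; interpret the totals against shared_buffers plus whatever the OS page cache can realistically hold:

    SELECT relname,
           pg_size_pretty(pg_relation_size(oid))       AS table_size,
           pg_size_pretty(pg_total_relation_size(oid)) AS table_plus_indexes
    FROM pg_class
    WHERE relname IN ('visits', 'views');

    SHOW shared_buffers;
    SHOW effective_cache_size;
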
{ "msg_contents": "On Monday, January 28, 2013, Alex Vinnik wrote:\n\n> It sure turned out that default settings are not a good fit. Setting random_page_cost\n> to 1.0 made query to run in 2.6 seconds and I clearly see that indexes are\n> being used in explain plan and IO utilization is close to 0.\n>\n\nThis is not surprising. The default settings are aimed at quite small\nservers, while you seem to have a rather substantial one. Have you done\nanything yet to fix effective_cache_size?\n\n> QUERY PLAN\n> Sort  (cost=969787.23..970288.67 rows=200575 width=8) (actual time=2176.045..2418.162 rows=241238 loops=1)\n>\n> However I noticed that sorting is done using disk (\"external sort  Disk:\n> 4248kB\")\n>\n\nAs far as pgsql is concerned, it is using the disk. But the kernel is\nprobably buffering that to an extent that the disk is probably being\ntouched rather little. So I wouldn't worry about it.\n\n> which prompted me to take a look at work_mem. But it turned out that\n> small increase to 4MB from default 1MB turns off index usage and query gets\n> x10 slower. IO utilization jumped to 100% from literally nothing. so back\n> to square one...\n>\n> QUERY PLAN\n> Sort  (cost=936642.75..937144.19 rows=200575 width=8) (actual time=33200.762..33474.443 rows=241238 loops=1)\n>\n\nAnd why should the IO utilization have jumped? Is everything in memory, or\nis it not? You should run your EXPLAINs with (analyze, buffers), and also\nyou should turn on track_io_timing, at least in the local session; that\nwill give us some insights.\n\nIf everything is in memory, then why is the seq scan taking so long? If\nnot, then why is the nested loop such a good idea? (In my hands, when\neverything does *not* fit in memory, the nested loop is very very bad.)\n\nYou seem to have a bit of an infatuation with Dec 15th, running that one query\nover and over and over. Why? If the real live query is not just for that one\nday repeatedly, then you should test with different days, not just one day\nrepeatedly. (And if your real query really is like the movie \"Groundhog Day\",\nyou should probably cluster or partition with that in mind.)\n\nAnyway, there was an issue introduced in 9.2.0 and to be removed in 9.2.3\nwhich over-penalized nested loops that had large indexes on the inner side.\nSince your different plans are so close to each other in estimated cost, I\nthink this issue would be enough to tip it into the seq scan. Also, your\npoor setting of effective_cache_size might also be enough to tip it. And\nboth combined, almost certainly are.\n\nBut ultimately, I think you are optimizing for a case that does not\nactually exist.\n\nCheers,\n\nJeff", "msg_date": "Sat, 2 Feb 2013 08:39:42 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" },
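A sketch of the evidence Jeff is asking for, all gathered in one session: confirm the server version (the nested-loop costing issue he mentions is due to be fixed in 9.2.3), enable I/O timing, and repeat the query from this thread with buffer statistics. Changing track_io_timing normally requires superuser rights:

    SELECT version();

    SET track_io_timing = on;   -- adds I/O Timings lines to EXPLAIN (ANALYZE, BUFFERS) output

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT visits.id, views.id
    FROM visits JOIN views ON visits.id = views.visit_id
    WHERE visits.created_at >= '2013-01-15'
      AND visits.created_at < '2013-01-16'
    ORDER BY visits.id, views.id;
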
{ "msg_contents": "> Yeah.. I came across pgtune but noticed that latest version dated 2009-10-29 http://pgfoundry.org/frs/?group_id=1000416 which is kind of outdated. Tar file has settings for pg 8.3. Is it still relevant?\n>\n> Yes, I'm sure it will not do anything bad to your config.\n>\n\nApologies for leaping in a little late, but I note the version on Github has been updated much more recently:\n\n  https://github.com/gregs1104/pgtune\n\nCheers,\nDan\n--\nDan Fairs | [email protected] | @danfairs | secondsync.com", "msg_date": "Mon, 4 Feb 2013 21:14:58 +0000", "msg_from": "Dan Fairs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple join doesn't use index" } ]