threads
[
{
"msg_contents": "Hi all.\n\nI’m running PostgreSQL 9.3.4 and doing stress test of the database with writing only load. The test plan does 1000 transactions per second (each of them does several updates/inserts). The problem is that checkpoint is not distributed over time well. When the checkpoint finishes, the db gets lots of I/O operations and response timings grows strongly.\n\nMy checkpoint settings looks like that:\n\npostgres=# select name, setting from pg_catalog.pg_settings where name like 'checkpoint%' and boot_val != reset_val;\n name | setting \n------------------------------+---------\n checkpoint_completion_target | 0.9\n checkpoint_segments | 100500\n checkpoint_timeout | 600\n(3 rows)\n\npostgres=#\n\nBut in the log I see that checkpoint continues less than 600*0.9 = 540 seconds:\n\n2014-04-14 12:54:41.479 MSK,,,10517,,53468da6.2915,433,,2014-04-10 16:25:10 MSK,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n2014-04-14 12:57:06.107 MSK,,,10517,,53468da6.2915,434,,2014-04-10 16:25:10 MSK,,0,LOG,00000,\"checkpoint complete: wrote 65140 buffers (24.8%); 0 transaction log file(s) added, 0 removed, 327 recycled; write=134.217 s, sync=10.292 s, total=144.627 s; sync files=31, longest=3.332 s, average=0.331 s\",,,,,,,,,»\"\n\n\nWhen the checkpoint starts (12:54:41.479) dstat says that I/O load increases:\n\n----system---- -dsk/total- --io/total-\n date/time | read writ| read writ\n14-04 12:54:39| 0 15M| 0 2562 \n14-04 12:54:40| 0 13M| 0 2330 \n14-04 12:54:41| 0 97M| 0 5981 \n14-04 12:54:42| 0 95M| 0 8869 \n14-04 12:54:43| 0 147M| 0 8493 \n14-04 12:54:44| 0 144M| 0 8316 \n14-04 12:54:45| 0 176M| 0 8189 \n14-04 12:54:46| 0 141M| 0 8221 \n14-04 12:54:47| 0 143M| 0 8260 \n14-04 12:54:48| 0 141M| 0 7576 \n14-04 12:54:49| 0 173M| 0 8171\n\nBut when it finishes (12:57:06.107) the I/O load is much higher than the hardware can do:\n\n----system---- -dsk/total- --io/total-\n date/time | read writ| read writ\n14-04 12:56:52| 0 33M| 0 5185 \n14-04 12:56:53| 0 
64M| 0 5271 \n14-04 12:56:54| 0 65M| 0 5256 \n14-04 12:56:55| 0 153M| 0 15.8k\n14-04 12:56:56| 0 758M| 0 18.6k\n14-04 12:56:57| 0 823M| 0 4164 \n14-04 12:56:58| 0 843M| 0 8186 \n14-04 12:56:59| 0 794M| 0 15.0k\n14-04 12:57:00| 0 880M| 0 5954 \n14-04 12:57:01| 0 862M| 0 4608 \n14-04 12:57:02| 0 804M| 0 7387 \n14-04 12:57:03| 0 849M| 0 4878 \n14-04 12:57:04| 0 788M| 0 20.0k\n14-04 12:57:05| 0 805M| 0 6004 \n14-04 12:57:06| 0 143M| 0 6932 \n14-04 12:57:07| 0 108M| 0 6150 \n14-04 12:57:08| 0 42M| 0 6233 \n14-04 12:57:09| 0 73M| 0 6248\n\nResponse timings of the application at this moment look like that:\n\n\nThe hardware is quite good to handle this load (PGDATA lives on soft raid10 array of 8 ssd drives). I’ve done the same test with 3000 tps - the result was exactly the same. The only difference was that I/O spikes had been stronger.\n\nSo my question is why the checkpoint is not spread for 540 seconds? Is there any way to understand why I/O spike happens when the checkpoint finishes but does not happen during all of the checkpoint process? Any help would be really appropriate.\n\n--\nVladimir",
"msg_date": "Mon, 14 Apr 2014 13:46:42 +0400",
"msg_from": "Borodin Vladimir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Checkpoint distribution"
},
{
"msg_contents": "On Mon, Apr 14, 2014 at 2:46 AM, Borodin Vladimir <[email protected]> wrote:\n\n> Hi all.\n>\n> I’m running PostgreSQL 9.3.4 and doing stress test of the database with\n> writing only load. The test plan does 1000 transactions per second (each of\n> them does several updates/inserts). The problem is that checkpoint is not\n> distributed over time well. When the checkpoint finishes, the db gets lots\n> of I/O operations and response timings grows strongly.\n>\n> My checkpoint settings looks like that:\n>\n> postgres=# select name, setting from pg_catalog.pg_settings where name\n> like 'checkpoint%' and boot_val != reset_val;\n> name | setting\n> ------------------------------+---------\n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 100500\n> checkpoint_timeout | 600\n> (3 rows)\n>\n> postgres=#\n>\n> But in the log I see that checkpoint continues less than 600*0.9 = 540\n> seconds:\n>\n> 2014-04-14 12:54:41.479 MSK,,,10517,,53468da6.2915,433,,2014-04-10\n> 16:25:10 MSK,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n> 2014-04-14 12:57:06.107 MSK,,,10517,,53468da6.2915,434,,2014-04-10\n> 16:25:10 MSK,,0,LOG,00000,\"checkpoint complete: wrote 65140 buffers\n> (24.8%); 0 transaction log file(s) added, 0 removed, 327 recycled;\n> write=134.217 s, sync=10.292 s, total=144.627 s; sync files=31,\n> longest=3.332 s, average=0.331 s\",,,,,,,,,»\"\n>\n\nWhen a checkpoint starts, the checkpointer process counts up all the\nbuffers that need to be written. Then it goes through and writes them. It\npaces itself by comparing how many buffers it itself has written to how\nmany need to be written. But if a buffer that needs to be checkpointed\nhappens to get written by some other process (the background writer, or a\nbackend, because they need a clean buffer to read different data into), the\ncheckpointer is not notified of this and doesn't count that buffer as being\nwritten when it computes whether it is on track. 
This causes it to finish\nearly. This can be confusing, but probably doesn't cause any real\nproblems. The reason for checkpoint_completion_target is to spread the IO\nout over a longer time, but if much of the checkpoint IO is really being\ndone by the background writer, then it is already getting spread out fairly\nwell.\n\nWhen the checkpoint starts (12:54:41.479) dstat says that I/O load\n> increases:\n>\n\n...\n\nBut when it finishes (12:57:06.107) the I/O load is much higher than the\n> hardware can do:\n>\n\nDuring the writing phase of the checkpoint, PostgreSQL passes the dirty\ndata to the OS. At the end, it then tells the OS to make sure that that\ndata has actually reached disk. If your OS stored up too much dirty data\nin memory then it kind of freaks out once it is notified it needs to\nactually write that data to disk. The best solution for this may be to\nlower dirty_background_bytes or dirty_background_ratio so the OS doesn't\nstore up so much trouble for itself.\n\nCheers\n\nJeff",
"msg_date": "Mon, 14 Apr 2014 08:11:23 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoint distribution"
},
{
"msg_contents": "On Apr 14, 2014, at 19:11, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Apr 14, 2014 at 2:46 AM, Borodin Vladimir <[email protected]> wrote:\n> Hi all.\n> \n> I’m running PostgreSQL 9.3.4 and doing stress test of the database with writing only load. The test plan does 1000 transactions per second (each of them does several updates/inserts). The problem is that checkpoint is not distributed over time well. When the checkpoint finishes, the db gets lots of I/O operations and response timings grows strongly.\n> \n> My checkpoint settings looks like that:\n> \n> postgres=# select name, setting from pg_catalog.pg_settings where name like 'checkpoint%' and boot_val != reset_val;\n> name | setting \n> ------------------------------+---------\n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 100500\n> checkpoint_timeout | 600\n> (3 rows)\n> \n> postgres=#\n> \n> But in the log I see that checkpoint continues less than 600*0.9 = 540 seconds:\n> \n> 2014-04-14 12:54:41.479 MSK,,,10517,,53468da6.2915,433,,2014-04-10 16:25:10 MSK,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n> 2014-04-14 12:57:06.107 MSK,,,10517,,53468da6.2915,434,,2014-04-10 16:25:10 MSK,,0,LOG,00000,\"checkpoint complete: wrote 65140 buffers (24.8%); 0 transaction log file(s) added, 0 removed, 327 recycled; write=134.217 s, sync=10.292 s, total=144.627 s; sync files=31, longest=3.332 s, average=0.331 s\",,,,,,,,,»\"\n> \n> When a checkpoint starts, the checkpointer process counts up all the buffers that need to be written. Then it goes through and writes them. It paces itself by comparing how many buffers it itself has written to how many need to be written. 
But if a buffer that needs to be checkpointed happens to get written by some other process (the background writer, or a backend, because they need a clean buffer to read different data into), the checkpointer is not notified of this and doesn't count that buffer as being written when it computes whether it is on track. This causes it to finish early. This can be confusing, but probably doesn't cause any real problems. The reason for checkpoint_completion_target is to spread the IO out over a longer time, but if much of the checkpoint IO is really being done by the background writer, then it is already getting spread out fairly well.\n\nI didn’t know that, thanks. Seems that I have quite small shared buffers size. I will investigate this problem.\n\n> \n> When the checkpoint starts (12:54:41.479) dstat says that I/O load increases:\n> \n> ...\n> \n> But when it finishes (12:57:06.107) the I/O load is much higher than the hardware can do:\n> \n> During the writing phase of the checkpoint, PostgreSQL passes the dirty data to the OS. At the end, it then tells the OS to make sure that that data has actually reached disk. If your OS stored up too much dirty data in memory then it kind of freaks out once it is notified it needs to actually write that data to disk. The best solution for this may be to lower dirty_background_bytes or dirty_background_ratio so the OS doesn't store up so much trouble for itself.\n> \n\nActually, I have already tuned them to different values. Test results above have been obtained with such settings for page cache:\n\nvm.dirty_background_ratio = 5\nvm.dirty_ratio = 40\nvm.dirty_expire_centisecs = 100\nvm.dirty_writeback_centisecs = 100\n\nTogether with previous point I will try to tune os and postgres settings. Thanks.\n\n> Cheers\n> \n> Jeff\n\n\n--\nVladimir",
"msg_date": "Mon, 14 Apr 2014 20:42:08 +0400",
"msg_from": "Borodin Vladimir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpoint distribution"
},
{
"msg_contents": "On Mon, Apr 14, 2014 at 9:42 AM, Borodin Vladimir <[email protected]> wrote:\n\n> On Apr 14, 2014, at 19:11, Jeff Janes <[email protected]> wrote:\n>\n>\n> During the writing phase of the checkpoint, PostgreSQL passes the dirty\n> data to the OS. At the end, it then tells the OS to make sure that that\n> data has actually reached disk. If your OS stored up too much dirty data\n> in memory then it kind of freaks out once it is notified it needs to\n> actually write that data to disk. The best solution for this may be to\n> lower dirty_background_bytes or dirty_background_ratio so the OS doesn't\n> store up so much trouble for itself.\n>\n>\n> Actually, I have already tuned them to different values. Test results\n> above have been obtained with such settings for page cache:\n>\n> vm.dirty_background_ratio = 5\n>\n\nIf you have 64GB of RAM, that is 3.2GB of allowed dirty data, which is\nprobably too much. But I think I've heard rumors that the kernel ignores\nsettings below 5, so probably switch to dirty_background_bytes.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 14 Apr 2014 11:09:31 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoint distribution"
},
{
"msg_contents": "On Apr 14, 2014, at 22:09, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Apr 14, 2014 at 9:42 AM, Borodin Vladimir <[email protected]> wrote:\n> On Apr 14, 2014, at 19:11, Jeff Janes <[email protected]> wrote:\n> \n>> \n> \n>> During the writing phase of the checkpoint, PostgreSQL passes the dirty data to the OS. At the end, it then tells the OS to make sure that that data has actually reached disk. If your OS stored up too much dirty data in memory then it kind of freaks out once it is notified it needs to actually write that data to disk. The best solution for this may be to lower dirty_background_bytes or dirty_background_ratio so the OS doesn't store up so much trouble for itself.\n>> \n> \n> Actually, I have already tuned them to different values. Test results above have been obtained with such settings for page cache:\n> \n> vm.dirty_background_ratio = 5\n> \n> If you have 64GB of RAM, that is 3.2GB of allowed dirty data, which is probably too much. But I think I've heard rumors that the kernel ignores settings below 5, so probably switch to dirty_background_bytes.\n> \n\nActually, I have even more :) 128 GB of RAM. I’ve set such settings for page cache:\n\n# 100 MB\nvm.dirty_background_bytes = 104857600\nvm.dirty_ratio = 40\nvm.dirty_expire_centisecs = 100\nvm.dirty_writeback_centisecs = 100\n\nAnd tried 2 GB, 4 GB, 8 GB for shared_buffers size (when I wrote first letter, it was 2 GB). Shared buffers size does not matter with above page cache settings. But it really affects the distribution of checkpoint over time. \nRight now test results (for 1000 tps and checkpoint every 5 minutes) look like that:\n\n\nThank you very much, Jeff.\n\n> Cheers,\n> \n> Jeff\n\n\n--\nVladimir",
"msg_date": "Tue, 15 Apr 2014 14:16:34 +0400",
"msg_from": "Borodin Vladimir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpoint distribution"
}
]
[
{
"msg_contents": "Hi all,\n\n(Referred here from pgsql-performance)\n\ntl;dr: every time I shut down a database and bring it back up, SSI seems \nto go slower. In order to avoid thousands of SSI aborts due to running \nout of shared memory, I've had to set max_predicate_locks to several \nthousand (2000 is tolerable, 8000 required to avoid all errors); this \nseems excessively high considering how short TPC-C transactions are, and \nhow aggressively SSI reclaims storage. I've also found what appears to \nbe a bug, where the SSI SLRU storage (pg_serial) sometimes jumps from \n~200kB to 8GB within a matter of seconds. The 8GB persists through later \nruns and seems to be part of the performance problem; deleting the \npg_serial directory after each database shutdown seems to resolve most \nof the problem.\n\nExcruciatingly long and detailed version follows...\n\nThis is with pgsql-9.3.4, x86_64-linux, home-built with `./configure \n--prefix=...' and gcc-4.7.\n24-core Intel box with hyperthreading (so 48 contexts).\nTPC-C courtesy of oltpbenchmark.com. 12WH TPC-C, 24 clients.\n\nI get a strange behavior across repeated runs: each 100-second run is a \nbit slower than the one preceding it, when run with SSI (SERIALIZABLE). \nSwitching to SI (REPEATABLE_READ) removes the problem, so it's \napparently not due to the database growing. The database is completely \nshut down (pg_ctl stop) between runs, but the data lives in tmpfs, so \nthere's no I/O problem here. 64GB RAM, so no paging, either.\n\nNote that this slowdown is in addition to the 30% performance gap \nbetween SI and SSI on my 24-core machine. I understand that the latter \nis a known bottleneck [1]; my question is why the bottleneck should get \nworse over time:\n\nWith SI, I get ~4.4ktps, consistently.\nWith SSI, I get 3.9, 3.8, 3.4. 3.3, 3.1, 2.9, ...\n\nSo the question: what should I look for to diagnose/triage this problem? 
\nI've done some legwork already, but have no idea where to go next.\n\nLooking through the logs, abort rates due to SSI aren't changing in any \nobvious way. I've been hacking on SSI for over a month now as part of a \nresearch project, and am fairly familiar with predicate.c, but I don't \nsee any obvious reason this behavior should arise (in particular, SLRU \nstorage seems to be re-initialized every time the postmaster restarts, \nso there shouldn't be any particular memory effect due to SIREAD locks). \nI'm also familiar with both Cahill's and Ports/Grittner's published \ndescriptions of SSI, but again, nothing obvious jumps out.\n\nTop reports only 50-60% CPU usage for most clients, and linux perf shows \n5-10% of time going to various lwlock calls. Compiling with \n-DLWLOCK_STATS and comparing results for SI vs. SSI shows that the \nWALInsertLock (7) is white-hot in all cases, followed by almost equally \nwhite-hot SSI lwlocks (28,30,29) and the 16 PredicateLockMgrLocks \n(65-80). Other than the log bottleneck, this aligns precisely with \nprevious results reported by others [1]; unfortunately, I don't see \nanything obvious in the lock stats to say why the problem is getting \nworse over time.\n\nIn my experience this sort of behavior indicates a bug, where fixing it \ncould have a significant impact on performance (because the early \n\"damage\" is done so quickly after start-up that even the very first run \ndoesn't live up to its true potential).\n\nI also strongly suspect a bug because the SLRU storage (pg_serial) \noccasionally jumps from the 100-200kB range to 8GB. It's rare, but when \nit happens, performance tanks to tens of tps for the rest of the run, \nand the 8GB persists into subsequent runs. I saw some code comments in \npredicate.c suggesting that SLRU pages which fall out of range would not \nbe reclaimed until the next SLRU wraparound. 
Deleting pg_serial/* before \neach `pg_ctl start' seems to remove most of the problem (and, from my \nunderstanding of SSI, should be harmless, because no serial conflicts \ncan persist across a database shutdown).\n\nI tried to repro, and a series of 30-second runs gave the following \nthroughputs (tps):\n*4615\n3155 3149 3115 3206 3162 3069 3005 2978 2953 **308\n2871 2876 2838 2853 2817 2768 2736 2782 2732 2833\n2749 2675 2771 2700 2675 2682 2647 2572 2626 2567\n*4394\n\nThat ** entry was an 8GB blow-up. All files in the directory had been \ncreated at the same time (= not during a previous run), and persisted \nthrough the runs that followed. There was also a run where abort rates \njumped through the roof (~40k aborts rather than the usual 2000 or so), \nwith a huge number of \"out of shared memory\" errors; apparently \nmax_predicate_locks=2000 wasn't high enough.\n\n$ cat pgsql.conf\nshared_buffers = 8GB\nsynchronous_commit = off\ncheckpoint_segments = 64\nmax_pred_locks_per_transaction = 2000\ndefault_statistics_target = 100\nmaintenance_work_mem = 2GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 40GB\nwork_mem = 1920MB\nwal_buffers = 16MB\n\n\n[1] \nhttp://www.postgresql.org/message-id/CA+TgmoYAiSM2jWEndReY5PL0sKbhgg7dbDH6r=oXKYzi9B7KJA@mail.gmail.com\n\nThoughts?\nRyan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Apr 2014 08:58:14 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSI slows down over time"
},
{
"msg_contents": "Ryan Johnson <[email protected]> wrote:\n\n> every time I shut down a database and bring it back up, SSI seems\n> to go slower.\n\nThere's one thing to rule out up front -- that would be a\nlong-lived prepared transaction.\n\nPlease post the output of these queries:\n\nselect version();\nshow max_prepared_transactions;\nselect * from pg_prepared_xacts;\n\nThanks.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 14 Apr 2014 07:14:24 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSI slows down over time"
},
{
"msg_contents": "On 14/04/2014 10:14 AM, Kevin Grittner wrote:\n> Ryan Johnson <[email protected]> wrote:\n>\n>> every time I shut down a database and bring it back up, SSI seems\n>> to go slower.\n> There's one thing to rule out up front -- that would be a\n> long-lived prepared transaction.\n>\n> Please post the output of these queries:\n>\n> select version();\n> show max_prepared_transactions;\n> select * from pg_prepared_xacts;\nHmm. My machine was rebooted over the weekend for Heartbleed patches, so \nI'll have to re-build the database and fire off enough runs to repro. \nThere are some disadvantages to keeping it in tmpfs...\n\nMeanwhile, a quick question: what factors might cause a prepared \ntransaction to exist in the first place? I'm running a single-node db, \nand I've had only normal database shutdowns, so I wouldn't have expected \nany.\n\nThoughts?\nRyan\n",
"msg_date": "Mon, 14 Apr 2014 10:22:42 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSI slows down over time"
},
{
"msg_contents": "Ryan Johnson <[email protected]> wrote:\n\n> what factors might cause a prepared transaction to exist in the\n> first place?\n\nAs part of a \"distributed transaction\" using \"two phase commit\" a\nPREPARE TRANSACTION statement would have had to run against\nPostgreSQL:\n\nhttp://www.postgresql.org/docs/current/interactive/sql-prepare-transaction.html\n\n> I'm running a single-node db, and I've had only normal database\n> shutdowns, so I wouldn't have expected any.\n\nPrepared transactions survive shutdowns, normal or not, so that\ndoesn't matter; but prepared transactions are normally used with a\ntransaction manager coordinating transactions among multiple data\nstores. On the other hand, I have seen cases where a developer\n\"playing around\" with database features has created one. And using\nthem with a \"home-grown\" transaction manager rather than a mature\nproduct is risky; there are some non-obvious pitfalls to avoid.\n\nAnyway, you may have found a bug, but most of what you're seeing\ncould be caused by a prepared transaction sitting around\nindefinitely, so it's something to check before looking at other\npossible causes. If you have a way to reproduce this from a new\ncluster, please share it. That always makes diagnosis much easier.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 14 Apr 2014 08:54:03 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSI slows down over time"
},
{
"msg_contents": "On 14/04/2014 10:14 AM, Kevin Grittner wrote:\n> Ryan Johnson <[email protected]> wrote:\n>\n>> every time I shut down a database and bring it back up, SSI seems\n>> to go slower.\n> There's one thing to rule out up front -- that would be a\n> long-lived prepared transaction.\n>\n> Please post the output of these queries:\n>\n> select version();\n> show max_prepared_transactions;\n> select * from pg_prepared_xacts;\n version\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc \n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n(1 row)\n\n max_prepared_transactions\n---------------------------\n 0\n(1 row)\n\n transaction | gid | prepared | owner | database\n-------------+-----+----------+-------+----------\n(0 rows)\n\nFYI, here's a plot of performance over time. Each point in the graph is \nthroughput (in tps) over a 10-second measurement (~20 minutes total), \nagainst a 12 WH TPC-C dataset with 24 clients banging on it. I issued a \npg_ctl stop/start pair between each run:\n\n\nThe downward trend is clearly visible, almost 30% performance loss by \nthe end. The data directory went from 1.4GB to 3.8GB over the lifetime \nof the experiment. Final pg_serial size was 144kB, so the 8GB pg_serial \nanomaly was not responsible for the drop in performance over time (this \ntime). I forgot to do an SI run at the beginning, but historically SI \nperformance has remained pretty steady over time. I don't know what \ncauses those big dips in performance, but it happens with SI as well so \nI assume it's checkpointing or some such.\n\nNow that I have a degraded database, any suggestions what should I look \nfor or what experiments I should run? 
I'm currently re-running the same \nexperiment, but deleting pg_serial/* in between runs; there was some \nindication last week that this prevents the performance drop, but that \nwas nowhere near a rigorous analysis.\n\nBTW, this is actually a TPC-C++ implementation I created, based on the \ndescription in Cahill's thesis (and starting from the oltpbenchmark \nTPC-C code). It turns out that normal TPC-C never spills to pg_serial \n(at least, not that I ever saw). If you want to put hands on the code, I \ncan tar it up and post it somewhere.\n\nThoughts?\nRyan",
"msg_date": "Mon, 14 Apr 2014 16:30:56 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSI slows down over time"
},
{
"msg_contents": "On 14/04/2014 4:30 PM, Ryan Johnson wrote:\n> FYI, here's a plot of performance over time. Each point in the graph \n> is throughput (in tps) over a 10-second measurement (~20 minutes \n> total), against a 12 WH TPC-C dataset with 24 clients banging on it. I \n> issued a pg_ctl stop/start pair between each run:\n\n\nUpdated result: SI definitely does not suffer any performance loss over \ntime, but it's not clear what is wrong with SSI: deleting pg_serial/* \nhad exactly zero impact on performance.\n\nThe two near-zero results for SSI are both cases where pg_serial/ \nballooned to 8GB during the run (I saw two others tonight during \ntesting). The one big dip for SI was a hang (the second such hang tonight).\n\n>\n> Thoughts?\n> Ryan\n>",
"msg_date": "Mon, 14 Apr 2014 21:57:43 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSI slows down over time"
}
] |
[
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n\n>\n>> these issues tend to get solved through optimization fences.\n>>> Reorganize a query into a CTE, or use the (gross) OFFSET 0 trick.\n>>> How are these nothing other than unofficial hints?\n>>>\n>> Yeah, the cognitive dissonance levels get pretty high around this\n>> issue. Some of the same people who argue strenuously against\n>> adding hints about what plan should be chosen also argue against\n>> having clearly equivalent queries optimize to the same plan because\n>> they find the fact that they don't useful for coercing a decent\n>> plan sometimes. That amounts to a hint, but obscure and\n>> undocumented. (The OP may be wondering what this \"OFFSET 0 trick\"\n>> is, and how he can use it.)\n>>\n>\n+1. I've said this or something like it at least a half-dozen times.\nPostgres DOES have hints, they're just obscure, undocumented and hard to\nuse. If a developer chooses to use them, they become embedded in the app\nand forgotten. They're hard to find because there's nothing explicit in the\nSQL to look for. You have to know to look for things like \"OFFSET\" or \"SET\n...\". Five years down the road when the developer is long gone, who's going\nto know why \"... OFFSET 0\" was put in the code unless the developer made\ncareful comments?\n\n\n> With explicit, documented hints, one could search for hints of a\n>> particular type should the optimizer improve to the point where\n>> they are no longer needed. It is harder to do that with subtle\n>> differences in syntax choice. Figuring out which CTEs or LIMITs\n>> were chosen because they caused optimization barriers rather than\n>> for their semantic merit takes some effort.\n>\n>\nExactly.\n\nI'll make a bet here. 
I'll bet that the majority of large Postgres\ninstallations have at least one, probably several, SQL statements that have\nbeen \"hinted\" in some way, either with CTEs or LIMITs, or by using SET to\ndisable a particular query type, and that these \"hints\" are critical to the\nsystem's performance.\n\nThe question is not whether to have hints. The question is how to expose\nhints to users.\n\nCraig",
"msg_date": "Mon, 14 Apr 2014 08:35:45 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Getting query plan alternatives from query planner?"
},
{
"msg_contents": "Hi Craig and Shawn\n\nI fully agree with your argumentation.\nWho's the elephant in the room who is reluctant to introduce explicit hints?\n\n-S.\n\n\n2014-04-14 17:35 GMT+02:00 Craig James <[email protected]>:\n\n> Shaun Thomas <[email protected]> wrote:\n>\n>>\n>>> these issues tend to get solved through optimization fences.\n>>>> Reorganize a query into a CTE, or use the (gross) OFFSET 0 trick.\n>>>> How are these nothing other than unofficial hints?\n>>>>\n>>> Yeah, the cognitive dissonance levels get pretty high around this\n>>> issue. Some of the same people who argue strenuously against\n>>> adding hints about what plan should be chosen also argue against\n>>> having clearly equivalent queries optimize to the same plan because\n>>> they find the fact that they don't useful for coercing a decent\n>>> plan sometimes. That amounts to a hint, but obscure and\n>>> undocumented. (The OP may be wondering what this \"OFFSET 0 trick\"\n>>> is, and how he can use it.)\n>>>\n>>\n> +1. I've said this or something like it at least a half-dozen times.\n> Postgres DOES have hints, they're just obscure, undocumented and hard to\n> use. If a developer chooses to use them, they become embedded in the app\n> and forgotten. They're hard to find because there's nothing explicit in the\n> SQL to look for. You have to know to look for things like \"OFFSET\" or \"SET\n> ...\". Five years down the road when the developer is long gone, who's going\n> to know why \"... OFFSET 0\" was put in the code unless the developer made\n> careful comments?\n>\n>\n>> With explicit, documented hints, one could search for hints of a\n>>> particular type should the optimizer improve to the point where\n>>> they are no longer needed. It is harder to do that with subtle\n>>> differences in syntax choice. 
Figuring out which CTEs or LIMITs\n>>> were chosen because they caused optimization barriers rather than\n>>> for their semantic merit takes some effort.\n>>\n>>\n> Exactly.\n>\n> I'll make a bet here. I'll bet that the majority of large Postgres\n> installations have at least one, probably several, SQL statements that have\n> been \"hinted\" in some way, either with CTEs or LIMITs, or by using SET to\n> disable a particular query type, and that these \"hints\" are critical to the\n> system's performance.\n>\n> The question is not whether to have hints. The question is how to expose\n> hints to users.\n>\n> Craig\n>\n>",
"msg_date": "Mon, 14 Apr 2014 20:36:42 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting query plan alternatives from query planner?"
},
{
"msg_contents": "On 04/14/2014 09:36 PM, Stefan Keller wrote:\n> Who's the elephant in the room who is reluctant to introduce explicit hints?\n\nPlease read some of the previous discussions on this. Like this, in this \nvery same thread:\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nI'd like to have explicit hints, *of the kind explained in that \nmessage*. Hints that tell the planner what the data distribution is \nlike. Hints to override statistics and heuristics used by the planner.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Apr 2014 22:18:46 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting query plan alternatives from query planner?"
}
] |
[
{
"msg_contents": "I have several related tables that represent a call state. Let's think of\nthese as phone calls to simplify things. Sometimes I need to determine the\nlast time a user was called, the last time a user answered a call, or the\nlast time a user completed a call.\n\nThe basic schema is something like this:\n\nCREATE TABLE calls (\n id BIGINT NOT NULL, // sequence generator\n user_id BIGINT NOT NULL,\n called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\n PRIMARY KEY (id),\n FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n);\n\nCREATE TABLE calls_answered (\n id BIGINT NOT NULL,\n answered TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n\n PRIMARY KEY (id),\n FOREIGN KEY (id) REFERENCES calls(id) ON DELETE CASCADE\n);\n\n\nAnd so on for calls_connected, calls_completed, call_errors, etc.\n\nOccasionally I will want to know things like \"When was the last time a user\nanswered a call\" or \"How many times has a user been called\".\n\nI can do these queries using a combination of MAX or COUNT. But I'm\nconcerned about the performance.\n\nSELECT MAX(a.id)\nFROM calls_answered a JOIN calls c ON c.id = a.id\nWHERE c.user_id = ?;\n\n\nOr the number of answered calls:\n\nSELECT MAX(a.id)\nFROM calls_answered a JOIN calls c ON c.id = a.id\nWHERE c.user_id = ?;\n\n\nSometimes I might want to get this data for a whole bunch of users. For\nexample, \"give me all users whose have not answered a call in the last 5\ndays.\" Or even \"what percentage of users called actually answered a call.\"\nThis approach could become a performance issue. So the other option is to\ncreate a call_summary table that is updated with triggers.\n\nThe summary table would need fields like \"user_id\", \"last_call_id\",\n\"call_count\", \"last_answered_id\", \"answered_count\", \"last_completed_id\",\n\"last_completed_count\", etc.\n\nMy only issue with a summary table is that I don't want a bunch of null\nfields. 
For example, if the user was *called* but they have never *answered* at\ncall then the last_call_id and call_count fields on the summary table would\nbe non-NULL but the last_answer_id and answer_count fields WOULD be NULL.\nBut over time all fields would eventually become non-NULL.\n\nSo that leads me to a summary table for EACH call state. Each summary table\nwould have a user id, a ref_id, and a count -- one summary table for each\nstate e.g. call_summary, call_answered_summary, etc.\n\nThis approach has the down side that it creates a lot of tables and\ntriggers. It has the upside of being pretty efficient without having to\ndeal with NULL values. It's also pretty easy to reason about.\n\nSo for my question -- is the choice between these a personal preference\nsort of thing or is there a right or wrong approach? Am I missing another\napproach that would be better? I'm okay with SQL but I'm not expert so I'm\nnot sure if there is an accepted DESIGN PATTERN for this that I am missing.\n\nThanks!",
"msg_date": "Mon, 14 Apr 2014 09:27:29 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Approach to Data Summary and Analysis"
},
{
"msg_contents": "On Mon, 14 Apr 2014 09:27:29 -0700\nRobert DiFalco <[email protected]> wrote:\n\n> I have several related tables that represent a call state. \n> \n> And so on for calls_connected, calls_completed, call_errors, etc.\n> \n> So for my question -- is the choice between these a personal preference\n> sort of thing or is there a right or wrong approach? Am I missing another\n> approach that would be better? \n\nHi Robert,\n\nI guess a call state is subject to change, in which case you would have to shuffle records between tables when that happens?\n\nISTM you should consider using only a 'calls' table, and add an 'id_call_state' field to it that references the list of possible states. This would make your queries simpler.\n\ncreate table call_state(\nid_call_state text PRIMARY KEY,\nlibelle text);\n\nINSERT INTO call_state (id_call_state, libelle) VALUES ('calls_connected', 'Connected'), ('calls_completed', 'Completed'), ('call_errors', 'Error');\n\n> CREATE TABLE calls (\n> id BIGINT NOT NULL, // sequence generator\n\nid_call_state INTEGER NOT NULL REFERENCES call_state,\n\n> user_id BIGINT NOT NULL,\n> called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n> \n> PRIMARY KEY (id),\n> FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n> );\n\n\n-- \n\t\t\t\t\tRegards, Vincent Veyron \n\nhttp://libremen.com/ \nLegal case, contract and insurance claim management software\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Mon, 14 Apr 2014 23:35:53 +0200",
"msg_from": "Vincent Veyron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
{
"msg_contents": "On Mon, Apr 14, 2014 at 12:27 PM, Robert DiFalco\n<[email protected]>wrote:\n\n> I have several related tables that represent a call state. Let's think of\n> these as phone calls to simplify things. Sometimes I need to determine the\n> last time a user was called, the last time a user answered a call, or the\n> last time a user completed a call.\n>\n> The basic schema is something like this:\n>\n> CREATE TABLE calls (\n> id BIGINT NOT NULL, // sequence generator\n> user_id BIGINT NOT NULL,\n> called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>\n> PRIMARY KEY (id),\n> FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n> );\n>\n> CREATE TABLE calls_answered (\n> id BIGINT NOT NULL,\n> answered TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>\n> PRIMARY KEY (id),\n> FOREIGN KEY (id) REFERENCES calls(id) ON DELETE CASCADE\n> );\n>\n>\n> And so on for calls_connected, calls_completed, call_errors, etc.\n>\n> Occasionally I will want to know things like \"When was the last time a\n> user answered a call\" or \"How many times has a user been called\".\n>\n> I can do these queries using a combination of MAX or COUNT. But I'm\n> concerned about the performance.\n>\n> SELECT MAX(a.id)\n> FROM calls_answered a JOIN calls c ON c.id = a.id\n> WHERE c.user_id = ?;\n>\n>\n> Or the number of answered calls:\n>\n> SELECT MAX(a.id)\n> FROM calls_answered a JOIN calls c ON c.id = a.id\n> WHERE c.user_id = ?;\n>\n>\n> Sometimes I might want to get this data for a whole bunch of users. For\n> example, \"give me all users whose have not answered a call in the last 5\n> days.\" Or even \"what percentage of users called actually answered a call.\"\n> This approach could become a performance issue. 
So the other option is to\n> create a call_summary table that is updated with triggers.\n>\n> The summary table would need fields like \"user_id\", \"last_call_id\",\n> \"call_count\", \"last_answered_id\", \"answered_count\", \"last_completed_id\",\n> \"last_completed_count\", etc.\n>\n> My only issue with a summary table is that I don't want a bunch of null\n> fields. For example, if the user was *called* but they have never\n> *answered* at call then the last_call_id and call_count fields on the\n> summary table would be non-NULL but the last_answer_id and answer_count\n> fields WOULD be NULL. But over time all fields would eventually become\n> non-NULL.\n>\n> So that leads me to a summary table for EACH call state. Each summary\n> table would have a user id, a ref_id, and a count -- one summary table for\n> each state e.g. call_summary, call_answered_summary, etc.\n>\n> This approach has the down side that it creates a lot of tables and\n> triggers. It has the upside of being pretty efficient without having to\n> deal with NULL values. It's also pretty easy to reason about.\n>\n> So for my question -- is the choice between these a personal preference\n> sort of thing or is there a right or wrong approach? Am I missing another\n> approach that would be better? I'm okay with SQL but I'm not expert so I'm\n> not sure if there is an accepted DESIGN PATTERN for this that I am missing.\n>\n> Thanks!\n>\n>\n>\nMy initial thought is: that design is over-normalized. The thing you are\ntrying to model is the call, and it has severl attributes, some of which\nmay be unknown or not applicable (which is what NULL is for). 
So my\nthought would be to do something like this:\n\nCREATE TABLE calls (\n id BIGINT NOT NULL, // sequence generator\n user_id BIGINT NOT NULL,\n called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\nanswered TIMESTAMPTZ\n\n PRIMARY KEY (id),\n FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n);\n\n\n-- \nI asked the Internet how to train my cat, and the Internet told me to get a\ndog.",
"msg_date": "Tue, 15 Apr 2014 10:56:00 -0400",
"msg_from": "Chris Curvey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Approach to Data Summary and Analysis"
},
{
"msg_contents": "On Tue, Apr 15, 2014 at 10:56 AM, Chris Curvey <[email protected]>wrote:\n\n> On Mon, Apr 14, 2014 at 12:27 PM, Robert DiFalco <[email protected]\n> > wrote:\n>\n>> I have several related tables that represent a call state. Let's think of\n>> these as phone calls to simplify things. Sometimes I need to determine the\n>> last time a user was called, the last time a user answered a call, or the\n>> last time a user completed a call.\n>>\n>> The basic schema is something like this:\n>>\n>> CREATE TABLE calls (\n>> id BIGINT NOT NULL, // sequence generator\n>> user_id BIGINT NOT NULL,\n>> called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>\n>> PRIMARY KEY (id),\n>> FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n>> );\n>>\n>> CREATE TABLE calls_answered (\n>> id BIGINT NOT NULL,\n>> answered TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>\n>> PRIMARY KEY (id),\n>> FOREIGN KEY (id) REFERENCES calls(id) ON DELETE CASCADE\n>> );\n>>\n>>\n>> And so on for calls_connected, calls_completed, call_errors, etc.\n>>\n>> Occasionally I will want to know things like \"When was the last time a\n>> user answered a call\" or \"How many times has a user been called\".\n>>\n>> I can do these queries using a combination of MAX or COUNT. But I'm\n>> concerned about the performance.\n>>\n>> SELECT MAX(a.id)\n>> FROM calls_answered a JOIN calls c ON c.id = a.id\n>> WHERE c.user_id = ?;\n>>\n>>\n>> Or the number of answered calls:\n>>\n>> SELECT MAX(a.id)\n>> FROM calls_answered a JOIN calls c ON c.id = a.id\n>> WHERE c.user_id = ?;\n>>\n>>\n>> Sometimes I might want to get this data for a whole bunch of users. For\n>> example, \"give me all users whose have not answered a call in the last 5\n>> days.\" Or even \"what percentage of users called actually answered a call.\"\n>> This approach could become a performance issue. 
So the other option is to\n>> create a call_summary table that is updated with triggers.\n>>\n>> The summary table would need fields like \"user_id\", \"last_call_id\",\n>> \"call_count\", \"last_answered_id\", \"answered_count\", \"last_completed_id\",\n>> \"last_completed_count\", etc.\n>>\n>> My only issue with a summary table is that I don't want a bunch of null\n>> fields. For example, if the user was *called* but they have never\n>> *answered* at call then the last_call_id and call_count fields on the\n>> summary table would be non-NULL but the last_answer_id and answer_count\n>> fields WOULD be NULL. But over time all fields would eventually become\n>> non-NULL.\n>>\n>> So that leads me to a summary table for EACH call state. Each summary\n>> table would have a user id, a ref_id, and a count -- one summary table for\n>> each state e.g. call_summary, call_answered_summary, etc.\n>>\n>> This approach has the down side that it creates a lot of tables and\n>> triggers. It has the upside of being pretty efficient without having to\n>> deal with NULL values. It's also pretty easy to reason about.\n>>\n>> So for my question -- is the choice between these a personal preference\n>> sort of thing or is there a right or wrong approach? Am I missing another\n>> approach that would be better? 
I'm okay with SQL but I'm not expert so I'm\n>> not sure if there is an accepted DESIGN PATTERN for this that I am missing.\n>>\n>> Thanks!\n>>\n>>\n>>\n> (Sorry, fat-fingered and hit \"send too early\"...)\n\nCREATE TABLE calls (\n id BIGINT NOT NULL, // sequence generator\n user_id BIGINT NOT NULL,\n called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n answered TIMESTAMPTZ NULL,\n connected TIMESTAMPTZ NULL,\n completed TIMESTAMPTZ NULL,\n\n\n PRIMARY KEY (id),\n FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n);\n\nThen your queries end up looking like this:\n\n--last time john answered\nSELECT MAX(a.id)\nFROM calls\nwhere answered is not null\nand user_id = ?\n\n-- users that have not answered a call in the last five days (I can think\nof a few ways to interpret that phrase)\nselect myusers.*\nfrom myusers\nwhere not exists\n( select *\n from calls\n where calls.user_id = myusers.user_id\n and answered >= <five days ago>)\n\n-- average ring time\nselect avg(extract ('seconds' from called - answered))\nwhere answered is not null\n\n\n\n-- \nI asked the Internet how to train my cat, and the Internet told me to get a\ndog.",
"msg_date": "Tue, 15 Apr 2014 11:12:55 -0400",
"msg_from": "Chris Curvey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
{
"msg_contents": "Actually that was exactly the initial table design. There were more fields\nbecause for my use case there were a lot more states and certain states\nhave additional data (for example when a call goes from answered to\nconnected it also gets the user_id of the person being connected to). So\nthat one table started getting a LOT of columns, which started making it\nhard to reason about.\n\nThe more normalized version has a couple of things going for it. COUNT,\nMIN, MAX, etc. are very fast because I don't have to conditionally add null\nchecks. Everything is inserted, so for the millions of calls that get made\nthe normalized schema was much more efficient for writing. It was also\neasier to understand. The answer table only has calls that were answered,\nthe error table only has calls that resulted in an error after being\nconnected, etc.\n\nI know this kind of gets into a religious area when discussing NULLs and\nwhat level of normalization is appropriate, so I don't want to spark any of\nthat on this thread. But only doing inserts and never doing updates or\ndeletes performed very well for large data sets.\n\nThat said, I could explore a compromise between the monolithic table\napproach and the completely normalized set of tables approach. Thanks for\nyour input!\n\n\nOn Tue, Apr 15, 2014 at 8:12 AM, Chris Curvey <[email protected]> wrote:\n\n>\n>\n>\n> On Tue, Apr 15, 2014 at 10:56 AM, Chris Curvey <[email protected]> wrote:\n>\n>> On Mon, Apr 14, 2014 at 12:27 PM, Robert DiFalco <\n>> [email protected]> wrote:\n>>\n>>> I have several related tables that represent a call state. Let's think\n>>> of these as phone calls to simplify things. 
Sometimes I need to determine\n>>> the last time a user was called, the last time a user answered a call, or\n>>> the last time a user completed a call.\n>>>\n>>> The basic schema is something like this:\n>>>\n>>> CREATE TABLE calls (\n>>> id BIGINT NOT NULL, -- sequence generator\n>>> user_id BIGINT NOT NULL,\n>>> called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>>\n>>> PRIMARY KEY (id),\n>>> FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n>>> );\n>>>\n>>> CREATE TABLE calls_answered (\n>>> id BIGINT NOT NULL,\n>>> answered TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>>\n>>> PRIMARY KEY (id),\n>>> FOREIGN KEY (id) REFERENCES calls(id) ON DELETE CASCADE\n>>> );\n>>>\n>>>\n>>> And so on for calls_connected, calls_completed, call_errors, etc.\n>>>\n>>> Occasionally I will want to know things like \"When was the last time a\n>>> user answered a call\" or \"How many times has a user been called\".\n>>>\n>>> I can do these queries using a combination of MAX or COUNT. But I'm\n>>> concerned about the performance.\n>>>\n>>> SELECT MAX(a.id)\n>>> FROM calls_answered a JOIN calls c ON c.id = a.id\n>>> WHERE c.user_id = ?;\n>>>\n>>>\n>>> Or the number of answered calls:\n>>>\n>>> SELECT COUNT(a.id)\n>>> FROM calls_answered a JOIN calls c ON c.id = a.id\n>>> WHERE c.user_id = ?;\n>>>\n>>>\n>>> Sometimes I might want to get this data for a whole bunch of users. For\n>>> example, \"give me all users who have not answered a call in the last 5\n>>> days.\" Or even \"what percentage of users called actually answered a call.\"\n>>> This approach could become a performance issue. So the other option is to\n>>> create a call_summary table that is updated with triggers.\n>>>\n>>> The summary table would need fields like \"user_id\", \"last_call_id\",\n>>> \"call_count\", \"last_answered_id\", \"answered_count\", \"last_completed_id\",\n>>> \"last_completed_count\", etc.\n>>>\n>>> My only issue with a summary table is that I don't want a bunch of null\n>>> fields. For example, if the user was *called* but they have never\n>>> *answered* a call then the last_call_id and call_count fields on the\n>>> summary table would be non-NULL but the last_answer_id and answer_count\n>>> fields WOULD be NULL. But over time all fields would eventually become\n>>> non-NULL.\n>>>\n>>> So that leads me to a summary table for EACH call state. Each summary\n>>> table would have a user id, a ref_id, and a count -- one summary table for\n>>> each state e.g. call_summary, call_answered_summary, etc.\n>>>\n>>> This approach has the downside that it creates a lot of tables and\n>>> triggers. It has the upside of being pretty efficient without having to\n>>> deal with NULL values. It's also pretty easy to reason about.\n>>>\n>>> So for my question -- is the choice between these a personal preference\n>>> sort of thing or is there a right or wrong approach? Am I missing another\n>>> approach that would be better? 
I'm okay with SQL but I'm not expert so I'm\n>>> not sure if there is an accepted DESIGN PATTERN for this that I am missing.\n>>>\n>>> Thanks!\n>>>\n>>>\n>>>\n>> (Sorry, fat-fingered and hit \"send too early\"...)\n>\n> CREATE TABLE calls (\n> id BIGINT NOT NULL, -- sequence generator\n> user_id BIGINT NOT NULL,\n> called TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,\n> answered TIMESTAMPTZ NULL,\n> connected TIMESTAMPTZ NULL,\n> completed TIMESTAMPTZ NULL,\n>\n> PRIMARY KEY (id),\n> FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE\n> );\n>\n> Then your queries end up looking like this:\n>\n> -- last time john answered\n> SELECT MAX(answered)\n> FROM calls\n> where answered is not null\n> and user_id = ?\n>\n> -- users that have not answered a call in the last five days (I can think\n> of a few ways to interpret that phrase)\n> select myusers.*\n> from myusers\n> where not exists\n> ( select *\n> from calls\n> where calls.user_id = myusers.user_id\n> and answered >= <five days ago>)\n>\n> -- average ring time\n> select avg(extract(epoch from answered - called))\n> from calls\n> where answered is not null\n>\n> --\n> I asked the Internet how to train my cat, and the Internet told me to get\n> a dog.\n>",
"msg_date": "Tue, 15 Apr 2014 08:53:04 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Approach to Data Summary and Analysis"
},
{
"msg_contents": "On 04/15/2014 09:53 AM, Robert DiFalco wrote:\n> Actually that was exactly the initial table design. There were more \n> fields because for my use case there were a lot more states and \n> certain states have additional data (for example when a call goes from \n> answered to connected it also gets the user_id of the person being \n> connected to). So that one table started getting a LOT of columns \n> which starting making it hard to reason about.\n>\n> The more normalized version has a couple of things going for it. \n> COUNT, MIN, MAX, etc are very fast because I don't have to \n> conditionally add null checks. Everything is inserted so for the \n> millions of calls that get made the normalized schema was much more \n> efficient for writing. It was also easier to understand. The answer \n> table only has calls that were answered, the error table only has \n> calls the resulted in an error after being connected, etc.\n>\n> I know this kind of gets into a religious area when discussing NULLs \n> and what level of normalization is appropriate so I don't want to \n> spark any of that on this thread. But only doing inserts and never \n> doing updates or deletes performed very well for large data sets.\n>\n> That said, I could explore a compromise between the monolithic table \n> approach and the completely normalized set of tables approach. Thanks \n> for your input!\n>\nI wonder if the \"LOT of columns\" are the bits that need to be parcelled \noff as specific to one condition of a call?\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Tue, 15 Apr 2014 10:02:08 -0600",
"msg_from": "Rob Sargent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
{
"msg_contents": "On 4/14/2014 12:27 PM, Robert DiFalco wrote:\n> And so on for calls_connected, calls_completed, call_errors, etc.\n>\n> Occasionally I will want to know things like \"When was the last time a \n> user answered a call\" or \"How many times has a user been called\".\n> ...\n> Sometimes I might want to get this data for a whole bunch of users.\n> ...\n> So the other option is to create a call_summary table that is updated \n> with triggers.\n> ...\n> My only issue with a summary table is that I don't want a bunch of \n> null fields.\n> ...\n> But over time all fields would eventually become non-NULL.\n>\n> So that leads me to a summary table for EACH call state. This approach \n> has the down side that it creates a lot of tables and triggers. It has \n> the upside of being pretty efficient without having to deal with NULL \n> values. It's also pretty easy to reason about.\n> ...\n> So for my question -- is the choice between these a personal \n> preference sort of thing or is there a right or wrong approach? Am I \n> missing another approach that would be better? I'm okay with SQL but \n> I'm not expert so I'm not sure if there is an accepted DESIGN PATTERN \n> for this that I am missing.\n>\nThere is no right or wrong - there is better, worse, best, and worst for \nany specific scenario. In my experience, most people have time/money to \nget to an 80% \"better\" design than all the other answers during design \nand then it gets refined over time. And yes, personal experience does \nplay a part in how people interpret better/worse [aka religion] ;)\n\nI didn't see anybody ask these questions - and to identify \"better\" - \nthey have to be asked.\n1. How much data are you feeding into your system how fast?\n this directly affects your choices on distribution, parallel \nprocessing... writes vs updates vs triggers for copying vs all reads\n [and if on bare metal - potentially where you place your logs, \nindexes, core lookup tables, etc]\n2. 
How much data are you reading out of your system - how fast?\n you have given \"simple\" use cases (how many calls completed \nwithin a time frame or to a number)\n you have given very slightly more complex use cases (when was \nthe last time John answered a call)\n you have given a slightly more bulky processing question of (how \nmany times have these users been called)\nSo...\n a) How many users executing read queries do you have?\n b) What is the expected load for simple queries (per \nweek/day/hour/minute - depending upon your resolution on speed)\n c) What is the expected load for your mid-line complex queries\n d) What is the \"maximum\" volume you expect a bulk query to go \nafter (like all users in the last 18 years, or this city's users in the \nlast day?) and how frequently will that kind of query be executed? How \nmuch tolerance for delay do your users have?\n e) do you have any known really complex queries that might bog \nthe system down?\n f) How much lag time can you afford between capture and reporting?\n\nAnswers to the above define your performance requirements - which \ndefines the style of schema you need. Queries can be written to pull \ndata from any schema design - but how fast they can perform or how \neasily they can be created...\n\nChris and Vincent both targeted a balance between writes and reads - \nwhich adequately answers 80-85% of the usages out there. But you didn't \ngive us any of the above - so their recommendation (while very likely \nvalid) may not actually fit your case at all.\n\nAs to design patterns -\n\"Generally\" a database schema is more normalized for an operational \nsystem because normalization results in fewer writes/updates and lowers \nthe risk of corruption if a failure takes place. 
It also isolates \nupdates for any specific value to one location minimizing internally \ncaused data corruption.\nReporting systems are generally less normalized because writes are more \none-time and reads are where the load occurs.\nSometimes you have to use data replication to have a system that \nappropriately supports both.\n\nyou have shown you are already aware of normalization.\nIf you weren't aware of approaches to Data Warehousing... you can review \ninformation about how it is accomplished\n- see the blogs on kimballgroup DOT com they cover a lot of high(er) \nlevel concepts with enough specificity to be of some direct use.\n[that website also covers some ideas for \"Big Data\" which aren't \nnecessarily limited to RDBMS']\n\nSpecify your performance requirements, then figure out your schema design.\n\nFWIW I don't understand your (or any other person's) hesitancy for \"lots \nof\" \"NULL\" values. They provide meaning in a number of different \nways... not the least of which is that you don't know (yet) - which is \nknowledge in and of itself.\n\nRoxanne",
"msg_date": "Tue, 15 Apr 2014 18:26:28 -0400",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
{
"msg_contents": "1. >500K rows per day into the calls table.\n2. Very rarely. The only common query is gathering users that have not been\ncalled \"today\" (along with some other qualifying criteria). More analytical\nqueries/reports are done for internal use and it is not essential that they\nbe lickety-split.\na. Usually just one connection at a time executes read queries.\nb. The users-not-called-today query will be done once a day.\nc. Daily\nd. All users for the last year (if you are asking about retention). We will\nalso rarely have to run for all time.\ne. Not that I'm aware of (or seen) today.\nf. For the simple queries we cannot afford latency between calls and\nquerying who was already called.\n\nWhile I don't seem to be getting much support for it here :D my write\nperformance (which is most essential) has been much better since I further\nnormalized the tables and made it so that NULL is never used and data is\nnever updated (i.e. it is immutable once it is written).\n\nAs for wanting to avoid NULLs, I don't really know what to say. Obviously\nsometimes NULLs are required. For this design I don't really need them,\nand they make the data harder to reason about (because they are kind of\nopen to interpretation). They can also give you different results than you\nsometimes expect (for example, when looking for a non-matching key you\nstart having to inject some OR IS NULLs and such). Also, the absence of\nNULL can make a lot of queries more optimal. That said, I understand where\nyou all are coming from with de-normalization. It's definitely the path of\nleast resistance. Our instinct is to want to see all related data in a\nsingle table when possible.\n\nThe summary table was really a separate point from whether or not people\nliked my schema -- I mean whether I de-normalize as people are\nasking or not, there would still be the question of a summary table for MAX\nand COUNT queries or to not have a summary table for those. 
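A rough sketch of that trigger-maintained per-state summary (SQLite through Python here, purely so the example is self-contained and runnable; a Postgres version would put the same logic in a plpgsql trigger function, and the table/trigger names are only illustrative):

```python
import sqlite3

# Sketch of a trigger-maintained summary table for one call state.
# SQLite stands in for Postgres so the example runs anywhere.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL,
    called  TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE calls_answered (
    id       INTEGER PRIMARY KEY REFERENCES calls(id),
    answered TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- One row per user: last answered call id plus a running count, so the
-- MAX and COUNT questions become single-row lookups.
CREATE TABLE call_answered_summary (
    user_id INTEGER PRIMARY KEY,
    last_id INTEGER NOT NULL,
    cnt     INTEGER NOT NULL
);

-- Update-then-insert rather than UPSERT, which older SQLite versions
-- do not allow inside trigger bodies.
CREATE TRIGGER trg_answered_summary AFTER INSERT ON calls_answered
BEGIN
    UPDATE call_answered_summary
       SET last_id = NEW.id, cnt = cnt + 1
     WHERE user_id = (SELECT user_id FROM calls WHERE id = NEW.id);

    INSERT OR IGNORE INTO call_answered_summary (user_id, last_id, cnt)
    VALUES ((SELECT user_id FROM calls WHERE id = NEW.id), NEW.id, 1);
END;
""")

# Two calls to user 7, both answered; the trigger maintains the summary.
conn.execute("INSERT INTO calls (id, user_id) VALUES (1, 7), (2, 7)")
conn.execute("INSERT INTO calls_answered (id) VALUES (1)")
conn.execute("INSERT INTO calls_answered (id) VALUES (2)")

row = conn.execute(
    "SELECT last_id, cnt FROM call_answered_summary WHERE user_id = 7"
).fetchone()
print(row)  # (2, 2) -> last answered call id, total answered
```

In Postgres the same shape would be an AFTER INSERT trigger executing a plpgsql function that updates the user's summary row and inserts it when the update matched nothing, one summary table and trigger per call state.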
I probably made\nthe original question too open ended.\n\n\n\nOn Tue, Apr 15, 2014 at 3:26 PM, Roxanne Reid-Bennett <[email protected]>wrote:\n\n> On 4/14/2014 12:27 PM, Robert DiFalco wrote:\n>\n>> And so on for calls_connected, calls_completed, call_errors, etc.\n>>\n>> Occasionally I will want to know things like \"When was the last time a\n>> user answered a call\" or \"How many times has a user been called\".\n>> ...\n>>\n>> Sometimes I might want to get this data for a whole bunch of users.\n>> ...\n>>\n>> So the other option is to create a call_summary table that is updated\n>> with triggers.\n>> ...\n>>\n>> My only issue with a summary table is that I don't want a bunch of null\n>> fields.\n>> ...\n>>\n>> But over time all fields would eventually become non-NULL.\n>>\n>> So that leads me to a summary table for EACH call state. This approach\n>> has the down side that it creates a lot of tables and triggers. It has the\n>> upside of being pretty efficient without having to deal with NULL values.\n>> It's also pretty easy to reason about.\n>> ...\n>>\n>> So for my question -- is the choice between these a personal preference\n>> sort of thing or is there a right or wrong approach? Am I missing another\n>> approach that would be better? I'm okay with SQL but I'm not expert so I'm\n>> not sure if there is an accepted DESIGN PATTERN for this that I am missing.\n>>\n>> There is no right or wrong - there is better, worse, best, and worst for\n> any specific scenario. In my experience, most people have time/money to\n> get to an 80% \"better\" design than all the other answers during design and\n> then it gets refined over time. And yes, personal experience does play a\n> part in how people interpret better/worse [aka religion] ;)\n>\n> I didn't see anybody ask these questions - and to identify \"better\" -\n> they have to be asked.\n> 1. 
How much data are you feeding into your system how fast?\n> this directly affects your choices on distribution, parallel\n> processing... writes vs updates vs triggers for copying vs all reads\n> [and if on bare metal - potentially where you place your logs,\n> indexes, core lookup tables, etc]\n> 2. How much data are you reading out of your system - how fast?\n> you have given \"simple\" use cases (how many calls completed within\n> a time frame or to a number)\n> you have given very slightly more complex use cases (when was the\n> last time John answered a call)\n> you have given a slightly more bulky processing question of (how\n> many times have these users been called)\n> So...\n> a) How many users executing read queries do you have?\n> b) What is the expected load for simple queries (per\n> week/day/hour/minute - depending upon your resolution on speed)\n> c) What is the expected load for your mid-line complex queries\n> d) What is the \"maximum\" volume you expect a bulk query to go after\n> (like all users in the last 18 years, or this city's users in the last\n> day?) and how frequently will that kind of query be executed? How much\n> tolerance for delay do your users have?\n> e) do you have any known really complex queries that might bog the\n> system down?\n> f) How much lag time can you afford between capture and reporting?\n>\n> Answers to the above define your performance requirements - which defines\n> the style of schema you need. Queries can be written to pull data from any\n> schema design - but how fast they can perform or how easily they can be\n> created...\n>\n> Chris and Vincent both targeted a balance between writes and reads - which\n> adequately answers 80-85% of the usages out there. 
But you didn't give us\n> any of the above - so their recommendation (while very likely valid) may\n> not actually fit your case at all.\n>\n> As to design patterns -\n> \"Generally\" a database schema is more normalized for an operational system\n> because normalization results in fewer writes/updates and lowers the risk\n> of corruption if a failure takes place. It also isolates updates for any\n> specific value to one location minimizing internally caused data corruption.\n> Reporting systems are generally less normalized because writes are more\n> one-time and reads are where the load occurs.\n> Sometimes you have to use data replication to have a system that\n> appropriately supports both.\n>\n> you have shown you are already aware of normalization.\n> If you weren't aware of approaches to Data Warehousing... you can review\n> information about how it is accomplished\n> - see the blogs on kimballgroup DOT com they cover a lot of high(er)\n> level concepts with enough specificity to be of some direct use.\n> [that website also covers some ideas for \"Big Data\" which aren't\n> necessarily limited to RDBMS']\n>\n> Specify your performance requirements, then figure out your schema design.\n>\n> FWIW I don't understand your (or any other person's) hesitancy for \"lots\n> of\" \"NULL\" values. They provide meaning in a number of different ways...\n> not the least of which is that you don't know (yet) - which is knowledge in\n> and of itself.\n>\n> Roxanne\n>",
"msg_date": "Tue, 15 Apr 2014 18:10:10 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
{
"msg_contents": "On 16/04/14 13:10, Robert DiFalco wrote:\n> 1. >500K rows per day into the calls table.\n> 2. Very rarely. The only common query is gathering users that have not \n> been called \"today\" (along with some other qualifying criteria). More \n> analytical queries/reports are done for internal use and it is not \n> essential that they be lickity-split.\n> a. Usually just one connection at a time executes read queries.\n> b. the users not called today query will be done once a day.\n> c. Daily\n> d. All users for the last year (if you are asking about retention). We \n> will also rarely have to run for all time.\n> e. Not that I'm aware of (or seen) today.\n> f. For the simple queries we cannot afford latency between calls and \n> querying who was already called.\n>\n> While I don't seem to be getting much support for it here :D my write \n> performance (which is most essential) has been much better since I \n> further normalized the tables and made it so that NULL is never used \n> and data is never updated (i.e. it is immutable once it is written).\n>\n> As for wanting to avoid NULLs I don't really know what to say. \n> Obviously some times NULL's are required. For this design I don't \n> really need them and they make the data harder to reason about \n> (because they are kind of open to interpretation). They can also give \n> you different results than you sometimes expect (for example when \n> looking for a non matching key, you start having to inject some OR IS \n> NULLs and such). Also, the absence of null can make a lot of queries \n> more optimal). That said, I understand where you all are coming from \n> with de-normalization. It's definitely the path of the least \n> resistance. 
Our instinct is to want to see all related data in a \n> single table when possible.\n>\n> The summary table was really a separate point from whether or not \n> people liked my schema or not -- I mean whether I de-normalize as \n> people are asking or not, there would still be the question of a \n> summary table for MAX and COUNT queries or to not have a summary table \n> for those. I probably made the original question too open ended.\n>\n>\n> On Tue, Apr 15, 2014 at 3:26 PM, Roxanne Reid-Bennett <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On 4/14/2014 12:27 PM, Robert DiFalco wrote:\n>\n> And so on for calls_connected, calls_completed, call_errors, etc.\n>\n> Occasionally I will want to know things like \"When was the\n> last time a user answered a call\" or \"How many times has a\n> user been called\".\n> ...\n>\n> Sometimes I might want to get this data for a whole bunch of\n> users.\n> ...\n>\n> So the other option is to create a call_summary table that is\n> updated with triggers.\n> ...\n>\n> My only issue with a summary table is that I don't want a\n> bunch of null fields.\n> ...\n>\n> But over time all fields would eventually become non-NULL.\n>\n> So that leads me to a summary table for EACH call state. This\n> approach has the down side that it creates a lot of tables and\n> triggers. It has the upside of being pretty efficient without\n> having to deal with NULL values. It's also pretty easy to\n> reason about.\n> ...\n>\n> So for my question -- is the choice between these a personal\n> preference sort of thing or is there a right or wrong\n> approach? Am I missing another approach that would be better?\n> I'm okay with SQL but I'm not expert so I'm not sure if there\n> is an accepted DESIGN PATTERN for this that I am missing.\n>\n> There is no right or wrong - there is better, worse, best, and\n> worst for any specific scenario. 
In my experience, most people\n> have time/money to get to an 80% \"better\" design than all the\n> other answers during design and then it gets refined over time.\n> And yes, personal experience does play a part in how people\n> interpret better/worse [aka religion] ;)\n>\n> I didn't see anybody ask these questions - and to identify\n> \"better\" - they have to be asked.\n> 1. How much data are you feeding into your system how fast?\n> this directly affects your choices on distribution,\n> parallel processing... writes vs updates vs triggers for copying\n> vs all reads\n> [and if on bare metal - potentially where you place your\n> logs, indexes, core lookup tables, etc]\n> 2. How much data are you reading out of your system - how fast?\n> you have given \"simple\" use cases (how many calls completed\n> within a time frame or to a number)\n> you have given very slightly more complex use cases (when\n> was the last time John answered a call)\n> you have given a slightly more bulky processing question of\n> (how many times have these users been called)\n> So...\n> a) How many users executing read queries do you have?\n> b) What is the expected load for simple queries (per\n> week/day/hour/minute - depending upon your resolution on speed)\n> c) What is the expected load for your mid-line complex queries\n> d) What is the \"maximum\" volume you expect a bulk query to\n> go after (like all users in the last 18 years, or this city's\n> users in the last day?) and how frequently will that kind of\n> query be executed? How much tolerance for delay do your users have?\n> e) do you have any known really complex queries that might\n> bog the system down?\n> f) How much lag time can you afford between capture and\n> reporting?\n>\n> Answers to the above define your performance requirements - which\n> defines the style of schema you need. 
Queries can be written to\n> pull data from any schema design - but how fast they can perform\n> or how easily they can be created...\n>\n> Chris and Vincent both targeted a balance between writes and reads\n> - which adequately answers 80-85% of the usages out there. But\n> you didn't give us any of the above - so their recommendation\n> (while very likely valid) may not actually fit your case at all.\n>\n> As to design patterns -\n> \"Generally\" a database schema is more normalized for an\n> operational system because normalization results in fewer\n> writes/updates and lowers the risk of corruption if a failure\n> takes place. It also isolates updates for any specific value to\n> one location minimizing internally caused data corruption.\n> Reporting systems are generally less normalized because writes are\n> more one-time and reads are where the load occurs.\n> Sometimes you have to use data replication to have a system that\n> appropriately supports both.\n>\n> you have shown you are already aware of normalization.\n> If you weren't aware of approaches to Data Warehousing... you can\n> review information about how it is accomplished\n> - see the blogs on kimballgroup DOT com they cover a lot of\n> high(er) level concepts with enough specificity to be of some\n> direct use.\n> [that website also covers some ideas for \"Big Data\" which aren't\n> necessarily limited to RDBMS']\n>\n> Specify your performance requirements, then figure out your schema\n> design.\n>\n> FWIW I don't understand your (or any other person's) hesitancy for\n> \"lots of\" \"NULL\" values. They provide meaning in a number of\n> different ways... 
not the least of which is\n> (yet) - which is knowledge in and of itself.\n>\n> Roxanne\n>\n>\n>\n>\n> -- \n> Sent via pgsql-general mailing list ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n>\n>\nHave you considered partial indexes? Using the /WHERE //predicate/ \noption of /CREATE INDEX/.\n\nThis can be useful if you often look for things that are only a \nsmall subset of keys. For example a partial index on sex would be useful \nfor nurses, only indexing those that are male as they are in a very \nsmall minority.\n\n\nCheers,\nGavin",
"msg_date": "Wed, 16 Apr 2014 13:27:25 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
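A minimal sketch of the partial index Gavin describes, using his nurses example (table and column names are illustrative, not from the thread):

```sql
-- Hypothetical table where only a small minority of rows match the predicate.
CREATE TABLE nurses (
    id   serial PRIMARY KEY,
    name text NOT NULL,
    sex  char(1) NOT NULL CHECK (sex IN ('m', 'f'))
);

-- The partial index stores entries only for the rare rows, so it stays
-- small and is cheap to maintain on writes.
CREATE INDEX nurses_male_idx ON nurses (id) WHERE sex = 'm';

-- The planner can use it whenever the query predicate implies the
-- index predicate:
SELECT id, name FROM nurses WHERE sex = 'm';
```

For the thread's actual workload, the same idea could index only the rows relevant to the daily "not called today" check, with the caveat that an index predicate must use immutable expressions (so `CURRENT_DATE` cannot appear in the `WHERE` clause of `CREATE INDEX`).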
{
"msg_contents": "On 4/15/2014 9:10 PM, Robert DiFalco wrote:\n> 1. >500K rows per day into the calls table.\n> 2. Very rarely. The only common query is gathering users that have not \n> been called \"today\" (along with some other qualifying criteria). More \n> analytical queries/reports are done for internal use and it is not \n> essential that they be lickity-split.\n> a. Usually just one connection at a time executes read queries.\n> b. the users not called today query will be done once a day.\n> c. Daily\n> d. All users for the last year (if you are asking about retention). We \n> will also rarely have to run for all time.\n> e. Not that I'm aware of (or seen) today.\n> f. For the simple queries we cannot afford latency between calls and \n> querying who was already called.\n>\n> While I don't seem to be getting much support for it here :D my write \n> performance (which is most essential) has been much better since I \n> further normalized the tables and made it so that NULL is never used \n> and data is never updated (i.e. it is immutable once it is written).\n\nBased on the above you are primarily capturing data and feeding back \nessentially one easy to find result set [who has NOT been successfully \ncalled] on an ongoing single threaded basis [once per day?]. So you are \nabsolutely correct - tune for writing speed.\n\n> The summary table was really a separate point from whether or not \n> people liked my schema or not -- I mean whether I de-normalize as \n> people are asking or not, there would still be the question of a \n> summary table for MAX and COUNT queries or to not have a summary table \n> for those. 
I probably made the original question too open ended.\n>\nDo you know your answer?\nyou said : \"Occasionally I will want to know things like \"\nyou answered to frequency on queries as \"the users not called today \nquery will be done once a day.\" as was c) [I'm assuming once?]\nand d) appears to be \"ad-hoc\" and you said your users can deal with \nlatency in response for those.\n\nSo finding Min/Max/Count quickly really *don't* matter for tuning.\n\nSo the only reason I can see to add a summary table is to ... simplify \nmaintenance [note I did NOT say \"development\"] and then only IF it \ndoesn't impact the write speeds beyond an acceptable level. Proper \ninternal / external documentation can mitigate maintenance nightmares. \nIf your developer(s) can't figure out how to get the data they need from \nthe schema - then give them the queries to run. [you are likely better \nat tuning those anyway]\n\nLast consideration - business consumption of data does change over \ntime. Disk space is cheap [but getting and keeping speed sometimes \nisn't]. You might consider including ongoing partial archival of the \noperational data during slow usage (write) periods.\n\nRoxanne\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Wed, 16 Apr 2014 11:42:02 -0400",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
{
"msg_contents": "Thanks Roxanne, I suppose when it comes down to it -- for the current use\ncases and data size -- my only concern is the \"calling\" query that will\nneed to use max to determine if a user has already had a call today. For a\nlarge user set, for each user I would either have to MAX on the answered\ntimestamp to compare it against today or do an exist query to see if any\ntimestamp for that user is greater or equal than \"today\".\n\nBut I suppose I just need to construct a huge dataset and see. I was\nthinking by keeping a summary so I always knew the last answer or call time\nfor each user that I could mitigate this becoming an issue. Over time a\nsingle user could have answered a call thousands of times. So that would\nmake a \"<=\" timestamp query be just # of users instead of # of users X 1000\n(or however many calls they have answered over the non-archived time\nperiod).\n\n\n\n\n\n\n\nOn Wed, Apr 16, 2014 at 8:42 AM, Roxanne Reid-Bennett <[email protected]>wrote:\n\n> On 4/15/2014 9:10 PM, Robert DiFalco wrote:\n>\n>> 1. >500K rows per day into the calls table.\n>> 2. Very rarely. The only common query is gathering users that have not\n>> been called \"today\" (along with some other qualifying criteria). More\n>> analytical queries/reports are done for internal use and it is not\n>> essential that they be lickity-split.\n>> a. Usually just one connection at a time executes read queries.\n>> b. the users not called today query will be done once a day.\n>> c. Daily\n>> d. All users for the last year (if you are asking about retention). We\n>> will also rarely have to run for all time.\n>> e. Not that I'm aware of (or seen) today.\n>> f. 
For the simple queries we cannot afford latency between calls and\n>> querying who was already called.\n>>\n>> While I don't seem to be getting much support for it here :D my write\n>> performance (which is most essential) has been much better since I further\n>> normalized the tables and made it so that NULL is never used and data is\n>> never updated (i.e. it is immutable once it is written).\n>>\n>\n> Based on the above you are primarily capturing data and feeding back\n> essentially one easy to find result set [who has NOT been successfully\n> called] on an ongoing single threaded basis [once per day?]. So you are\n> absolutely correct - tune for writing speed.\n>\n>\n> The summary table was really a separate point from whether or not people\n>> liked my schema or not -- I mean whether I de-normalize as people are\n>> asking or not, there would still be the question of a summary table for MAX\n>> and COUNT queries or to not have a summary table for those. I probably made\n>> the original question too open ended.\n>>\n>> Do you know your answer?\n> you said : \"Occasionally I will want to know things like \"\n> you answered to frequency on queries as \"the users not called today query\n> will be done once a day.\" as was c) [I'm assuming once?]\n> and d) appears to be \"ad-hoc\" and you said your users can deal with\n> latency in response for those.\n>\n> So finding Min/Max/Count quickly really *don't* matter for tuning.\n>\n> So the only reason I can see to add a summary table is to ... simplify\n> maintenance [note I did NOT say \"development\"] and then only IF it doesn't\n> impact the write speeds beyond an acceptable level. Proper internal /\n> external documentation can mitigate maintenance nightmares. If your\n> developer(s) can't figure out how to get the data they need from the schema\n> - then give them the queries to run. 
[you are likely better at tuning those\n> anyway]\n>\n> Last consideration - business consumption of data does change over time.\n> Disk space is cheap [but getting and keeping speed sometimes isn't]. You\n> might consider including ongoing partial archival of the operational data\n> during slow usage (write) periods.\n>\n>\n> Roxanne\n>\n>\n> --\n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n>",
"msg_date": "Wed, 16 Apr 2014 11:40:06 -0700",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Approach to Data Summary and Analysis"
},
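The two query shapes Robert weighs could look like this (the `users` and `calls(user_id, answered_at)` schema is assumed for illustration, not taken verbatim from the thread):

```sql
-- Aggregate form: computes MAX over each user's entire call history.
SELECT u.id
FROM   users u
LEFT   JOIN calls c ON c.user_id = u.id
GROUP  BY u.id
HAVING max(c.answered_at) < date_trunc('day', now())
    OR max(c.answered_at) IS NULL;   -- users never called at all

-- EXISTS form: with an index on calls (user_id, answered_at), the planner
-- can stop at the first qualifying row per user instead of aggregating
-- thousands of historical calls.
SELECT u.id
FROM   users u
WHERE  NOT EXISTS (
         SELECT 1
         FROM   calls c
         WHERE  c.user_id = u.id
         AND    c.answered_at >= date_trunc('day', now())
       );
```

As Robert says, the honest way to choose is to build a realistically sized dataset and compare the two with `EXPLAIN ANALYZE`.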
{
"msg_contents": "On 4/16/2014 2:40 PM, Robert DiFalco wrote:\n> Thanks Roxanne, I suppose when it comes down to it -- for the current \n> use cases and data size -- my only concern is the \"calling\" query that \n> will need to use max to determine if a user has already had a call \n> today. For a large user set, for each user I would either have to MAX \n> on the answered timestamp to compare it against today or do an exist \n> query to see if any timestamp for that user is greater or equal than \n> \"today\".\n\nI didn't go back to look at your original schema- but.. if your 500K \nrecords are coming in time ordered... You may be able to track \"max\" as \nan attribute on an \"SCD\" based on the caller/callee table [or the \ncaller/ee table itself if that table is only used by your app] with an \nupdate from a post-insert trigger on the appropriate table. Even if they \naren't time ordered, you add the overhead of a single comparative in the \ntrigger. Downside is that you fire a trigger and an update for every \ninsert. [or just an update depending on what is driving your load of the \n500K records]\n\nAgain - the proof on \"value\" of this overhead is a comparison of the \ncost for the updates vs the cost on the query to find max() I suspect \nyour once a day query can afford all sorts of other optimizations that \nare \"better\" than a trigger fired on every insert. [such as the \nfunction index - that was already mentioned] I really suspect you just \ndon't have enough load on the query side (complex queries * # of users) \nto justify the extra load on the write side (+1 trigger, +1 update / \ninsert) to avoid a (potentially) heavy query load 1x/day.\n\nAnother option... if only worried about \"today\".. then keep only \n\"today's\" data in your query table, and migrate historical data nightly \nto a pseudo archive table for those \"every once in a while\" questions. 
\nI haven't played with table inheritance in Postgres - but that's a \ncapability I might look at if I were doing a pseudo archive table.\n\nRoxanne\n\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 18 Apr 2014 01:46:36 -0400",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Approach to Data Summary and Analysis"
}
]
[
{
"msg_contents": "Any rules of thumb for work_mem, maintenance_work_mem, shared_buffer, etc.\nfor a database that DOESN'T anticipate concurrent connections and that is\ndoing lots of aggregate functions on large tables? All the advice I can\nfind online on tuning\n(this<http://wiki.postgresql.org/wiki/Performance_Optimization>\n, this<http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>\n, this <http://www.revsys.com/writings/postgresql-performance.html> etc.)\nis written for people anticipating lots of concurrent connections.\n\nI'm a social scientist looking to use Postgres not as a database to be\nshared by multiple users, but rather as my own tool for manipulating a\nmassive data set (I have 5 billion transaction records (600gb in csv) and\nwant to pull out unique user pairs, estimate aggregates for individual\nusers, etc.). This also means almost no writing, except to creation of new\ntables based on selections from the main table.\n\nI'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if that's\nimportant.\n\nThanks!",
"msg_date": "Mon, 14 Apr 2014 14:46:04 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning Postgres for Single connection use"
},
{
"msg_contents": "On 15/04/14 09:46, Nick Eubank wrote:\n>\n> Any rules of thumb for |work_mem|, |maintenance_work_mem|, \n> |shared_buffer|, etc. for a database that DOESN'T anticipate \n> concurrent connections and that is doing lots of aggregate functions \n> on large tables? All the advice I can find online on tuning (this \n> <http://wiki.postgresql.org/wiki/Performance_Optimization>, this \n> <http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>, \n> this \n> <http://www.revsys.com/writings/postgresql-performance.html> etc.) is \n> written for people anticipating lots of concurrent connections.\n>\n> I'm a social scientist looking to use Postgres not as a database to be \n> shared by multiple users, but rather as my own tool for manipulating a \n> massive data set (I have 5 billion transaction records (600gb in csv) \n> and want to pull out unique user pairs, estimate aggregates for \n> individual users, etc.). This also means almost no writing, except to \n> creation of new tables based on selections from the main table.\n>\n> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if \n> that's important.\n>\n> Thanks!\n>\nWell for serious database work, I suggest upgrading to Linux - you will \nget better performance out of the same hardware and probably (a year or \nso ago, I noticed some tuning options did not apply to Microsoft O/S's, \nbut I don't recall the details - these options may, or may not, apply to \nyour situation) more scope for tuning. Apart from anything else, your \nprocessing will not be slowed down by having to run anti-virus software!\n\nNote that in Linux you have a wide choice of distributions and desktop \nenvironments: I chose Mate (http://mate-desktop.org), some people prefer \nxfce (http://www.xfce.org), I used to use GNOME 2.\n\n\nCheers,\nGavin",
"msg_date": "Tue, 15 Apr 2014 10:41:57 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "Thanks Gavin -- would LOVE to. Sadly I'm in a weird situation\nwhere my hardware is not under my control, so I'm stuck making the best of\nwhat I have. Next time though! :)\n\nOn Monday, April 14, 2014, Gavin Flower <[email protected]>\nwrote:\n\n> On 15/04/14 09:46, Nick Eubank wrote:\n>\n> Any rules of thumb for work_mem, maintenance_work_mem, shared_buffer,\n> etc. for a database that DOESN'T anticipate concurrent connections and that\n> is doing lots of aggregate functions on large tables? All the advice I\n> can find online on tuning (this<http://wiki.postgresql.org/wiki/Performance_Optimization>\n> , this<http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>\n> , this <http://www.revsys.com/writings/postgresql-performance.html> etc.)\n> is written for people anticipating lots of concurrent connections.\n>\n> I'm a social scientist looking to use Postgres not as a database to be\n> shared by multiple users, but rather as my own tool for manipulating a\n> massive data set (I have 5 billion transaction records (600gb in csv) and\n> want to pull out unique user pairs, estimate aggregates for individual\n> users, etc.). This also means almost no writing, except to creation of new\n> tables based on selections from the main table.\n>\n> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if that's\n> important.\n>\n> Thanks!\n>\n> Well for serious database work, I suggest upgrading to Linux - you will\n> get better performance out of the same hardware and probably (a year or so\n> ago, I noticed some tuning options did not apply to Microsoft O/S's, but I\n> don't recall the details - these options may, or may not, apply to your\n> situation) more scope for tuning. Apart from anything else, your\n> processing will not be slowed down by having to run anti-virus software!\n>\n> Note that in Linux you have a wide choice of distributions and desktop\n> environments: I chose Mate (http://mate-desktop.org), some people prefer\n> xfce (http://www.xfce.org), I used to use GNOME 2.\n>\n>\n> Cheers,\n> Gavin\n>",
"msg_date": "Mon, 14 Apr 2014 16:39:26 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "\nOn 04/14/2014 05:46 PM, Nick Eubank wrote:\n>\n> Any rules of thumb for |work_mem|, |maintenance_work_mem|, \n> |shared_buffer|, etc. for a database that DOESN'T anticipate \n> concurrent connections and that is doing lots of aggregate functions \n> on large tables? All the advice I can find online on tuning (this \n> <http://wiki.postgresql.org/wiki/Performance_Optimization>, this \n> <http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>, \n> this \n> <http://www.revsys.com/writings/postgresql-performance.html> etc.) is \n> written for people anticipating lots of concurrent connections.\n>\n> I'm a social scientist looking to use Postgres not as a database to be \n> shared by multiple users, but rather as my own tool for manipulating a \n> massive data set (I have 5 billion transaction records (600gb in csv) \n> and want to pull out unique user pairs, estimate aggregates for \n> individual users, etc.). This also means almost no writing, except to \n> creation of new tables based on selections from the main table.\n>\n> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if \n> that's important.\n>\n>\n\n\nFirst up would probably be \"don't run on Windows\". shared_buffers above \n512Mb causes performance to degrade on Windows, while that threshold is \nmuch higher on *nix.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Apr 2014 20:12:31 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "On Mon, Apr 14, 2014 at 2:46 PM, Nick Eubank <[email protected]> wrote:\n\n> Any rules of thumb for work_mem, maintenance_work_mem, shared_buffer,\n> etc. for a database that DOESN'T anticipate concurrent connections and that\n> is doing lots of aggregate functions on large tables? All the advice I\n> can find online on tuning (this<http://wiki.postgresql.org/wiki/Performance_Optimization>\n> , this<http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>\n> , this <http://www.revsys.com/writings/postgresql-performance.html> etc.)\n> is written for people anticipating lots of concurrent connections.\n>\n> I'm a social scientist looking to use Postgres not as a database to be\n> shared by multiple users, but rather as my own tool for manipulating a\n> massive data set (I have 5 billion transaction records (600gb in csv) and\n> want to pull out unique user pairs, estimate aggregates for individual\n> users, etc.). This also means almost no writing, except to creation of new\n> tables based on selections from the main table.\n>\n> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if that's\n> important.\n>\n\nI'd go with a small shared_buffers, like 128MB, and let the OS cache as\nmuch as possible. This minimizes the amount of double buffering.\n\nAnd set work_mem to about 6GB, then bump it up if that doesn't seem to\ncause problems.\n\nIn the scenario you describe, it is probably no big deal if you guess too\nhigh. Monitor the process, if it it starts to go nuts just kill it and\nstart again with a lower work_mem. If it is a single user system, you can\nafford to be adventurous.\n\nIf you need to build indexes, you should bump up maintenance_work_mem, but\nI just would do that in the local session not system wide.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 14 Apr 2014 17:19:22 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "In this list, please bottom post!\n\nI've added potentially useful advice below.\n\nOn 15/04/14 11:39, Nick Eubank wrote:\n> Thanks Gavin -- would LOVE to. Sadly I'm in a weird situation \n> where my hardware is not under my control, so I'm stuck making the \n> best of what I have. Next time though! :)\n>\n> On Monday, April 14, 2014, Gavin Flower <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On 15/04/14 09:46, Nick Eubank wrote:\n>>\n>> Any rules of thumb for |work_mem|, |maintenance_work_mem|,\n>> |shared_buffer|, etc. for a database that DOESN'T anticipate\n>> concurrent connections and that is doing lots of aggregate\n>> functions on large tables? All the advice I can find online on\n>> tuning (this\n>> <http://wiki.postgresql.org/wiki/Performance_Optimization>, this\n>> <http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>,\n>> this\n>> <http://www.revsys.com/writings/postgresql-performance.html> etc.) is\n>> written for people anticipating lots of concurrent connections.\n>>\n>> I'm a social scientist looking to use Postgres not as a database\n>> to be shared by multiple users, but rather as my own tool for\n>> manipulating a massive data set (I have 5 billion transaction\n>> records (600gb in csv) and want to pull out unique user pairs,\n>> estimate aggregates for individual users, etc.). This also means\n>> almost no writing, except to creation of new tables based on\n>> selections from the main table.\n>>\n>> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores\n>> if that's important.\n>>\n>> Thanks!\n>>\n> Well for serious database work, I suggest upgrading to Linux - you\n> will get better performance out of the same hardware and probably\n> (a year or so ago, I noticed some tuning options did not apply to\n> Microsoft O/S's, but I don't recall the details - these options\n> may, or may not, apply to your situation) more scope for tuning. \n> Apart from anything else, your processing will not be slowed down\n> by having to run anti-virus software!\n>\n> Note that in Linux you have a wide choice of distributions and\n> desktop environments: I chose Mate (http://mate-desktop.org), some\n> people prefer xfce (http://www.xfce.org), I used to use GNOME 2.\n>\n>\n> Cheers,\n> Gavin\n>\nYeah, I know the feeling!\n\nI have a client that uses MySQL (ugh!), but I won't even bother \nmentioning PostgreSQL!\n\nHopefully, someone more knowledgeable will give you some good advice \nspecific to your O/S.\n\nFor tables that don't change, consider a packing density of 100%.\n\nTake care in how you design your tables, and the column types.\n\nConsider carefully the queries you are likely to use, so you can design \nappropriate indexes.\n\nSome advice will depend on the schema you plan to use, and the type of \nqueries.\n\n\nCheers,\nGavin",
"msg_date": "Tue, 15 Apr 2014 12:29:01 +1200",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "Terrific -- thanks Gavin and Jeff! That's incredibly helpful for a n00b\nlike me!\n\n\nOn Mon, Apr 14, 2014 at 5:29 PM, Gavin Flower <[email protected]\n> wrote:\n\n> In this list, please bottom post!\n>\n> I've added potentially useful advice below.\n>\n>\n> On 15/04/14 11:39, Nick Eubank wrote:\n>\n> Thanks Gavin -- would LOVE to. Sadly I'm in a weird situation\n> where my hardware is not under my control, so I'm stuck making the best of\n> what I have. Next time though! :)\n>\n> On Monday, April 14, 2014, Gavin Flower <[email protected]>\n> wrote:\n>\n>> On 15/04/14 09:46, Nick Eubank wrote:\n>>\n>> Any rules of thumb for work_mem, maintenance_work_mem, shared_buffer,\n>> etc. for a database that DOESN'T anticipate concurrent connections and that\n>> is doing lots of aggregate functions on large tables? All the advice I\n>> can find online on tuning (this<http://wiki.postgresql.org/wiki/Performance_Optimization>\n>> , this<http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>\n>> , this <http://www.revsys.com/writings/postgresql-performance.html> etc.)\n>> is written for people anticipating lots of concurrent connections.\n>>\n>> I'm a social scientist looking to use Postgres not as a database to be\n>> shared by multiple users, but rather as my own tool for manipulating a\n>> massive data set (I have 5 billion transaction records (600gb in csv) and\n>> want to pull out unique user pairs, estimate aggregates for individual\n>> users, etc.). This also means almost no writing, except to creation of new\n>> tables based on selections from the main table.\n>>\n>> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if\n>> that's important.\n>>\n>> Thanks!\n>>\n>> Well for serious database work, I suggest upgrading to Linux - you will\n>> get better performance out of the same hardware and probably (a year or so\n>> ago, I noticed some tuning options did not apply to Microsoft O/S's, but I\n>> don't recall the details - these options may, or may not, apply to your\n>> situation) more scope for tuning. Apart from anything else, your\n>> processing will not be slowed down by having to run anti-virus software!\n>>\n>> Note that in Linux you have a wide choice of distributions and desktop\n>> environments: I chose Mate (http://mate-desktop.org), some people prefer\n>> xfce (http://www.xfce.org), I used to use GNOME 2.\n>>\n>>\n>> Cheers,\n>> Gavin\n>>\n> Yeah, I know the feeling!\n>\n> I have a client that uses MySQL (ugh!), but I won't even bother mentioning\n> PostgreSQL!\n>\n> Hopefully, someone more knowledgeable will give you some good advice\n> specific to your O/S.\n>\n> For tables that don't change, consider a packing density of 100%.\n>\n> Take care in how you design your tables, and the column types.\n>\n> Consider carefully the queries you are likely to use, so you can design\n> appropriate indexes.\n>\n> Some advice will depend on the schema you plan to use, and the type of\n> queries.\n>\n>\n> Cheers,\n> Gavin\n>\n>\n>",
"msg_date": "Mon, 14 Apr 2014 17:50:09 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "On Mon, Apr 14, 2014 at 5:19 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Apr 14, 2014 at 2:46 PM, Nick Eubank <[email protected]> wrote:\n>\n>> Any rules of thumb for work_mem, maintenance_work_mem, shared_buffer,\n>> etc. for a database that DOESN'T anticipate concurrent connections and that\n>> is doing lots of aggregate functions on large tables? All the advice I\n>> can find online on tuning (this<http://wiki.postgresql.org/wiki/Performance_Optimization>\n>> , this<http://media.revsys.com/talks/djangocon/2011/secrets-of-postgresql-performance.pdf>\n>> , this <http://www.revsys.com/writings/postgresql-performance.html> etc.)\n>> is written for people anticipating lots of concurrent connections.\n>>\n>> I'm a social scientist looking to use Postgres not as a database to be\n>> shared by multiple users, but rather as my own tool for manipulating a\n>> massive data set (I have 5 billion transaction records (600gb in csv) and\n>> want to pull out unique user pairs, estimate aggregates for individual\n>> users, etc.). This also means almost no writing, except to creation of new\n>> tables based on selections from the main table.\n>>\n>> I'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if\n>> that's important.\n>>\n>\n> I'd go with a small shared_buffers, like 128MB, and let the OS cache as\n> much as possible. This minimizes the amount of double buffering.\n>\n> And set work_mem to about 6GB, then bump it up if that doesn't seem to\n> cause problems.\n>\n> In the scenario you describe, it is probably no big deal if you guess too\n> high. Monitor the process, if it it starts to go nuts just kill it and\n> start again with a lower work_mem. If it is a single user system, you can\n> afford to be adventurous.\n>\n> If you need to build indexes, you should bump up maintenance_work_mem, but\n> I just would do that in the local session not system wide.\n>\n> Cheers,\n>\n> Jeff\n>\n>\n>\nQuick followup Jeff: it seems that I can't set work_mem above about 1gb\n(can't get to 2gb. When I update config, the values just don't change in\n\"SHOW ALL\" -- integer constraint?). Is there a work around, or should I\ntweak something else accordingly?\n\nThanks!\n\nNick\n\n(Properly bottom posted this time?)",
"msg_date": "Tue, 15 Apr 2014 09:12:20 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "On Tue, Apr 15, 2014 at 9:12 AM, Nick Eubank <[email protected]> wrote:\n\n> On Mon, Apr 14, 2014 at 5:19 PM, Jeff Janes <[email protected]> wrote:\n>\n>>\n>> I'd go with a small shared_buffers, like 128MB, and let the OS cache as\n>> much as possible. This minimizes the amount of double buffering.\n>>\n>> And set work_mem to about 6GB, then bump it up if that doesn't seem to\n>> cause problems.\n>>\n>> In the scenario you describe, it is probably no big deal if you guess too\n>> high. Monitor the process, if it it starts to go nuts just kill it and\n>> start again with a lower work_mem. If it is a single user system, you can\n>> afford to be adventurous.\n>>\n>> If you need to build indexes, you should bump up maintenance_work_mem,\n>> but I just would do that in the local session not system wide.\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>>\n>>\n> Quick followup Jeff: it seems that I can't set work_mem above about 1gb\n> (can't get to 2gb. When I update config, the values just don't change in\n> \"SHOW ALL\" -- integer constraint?). Is there a work around, or should I\n> tweak something else accordingly?\n>\n\n\nWhat version are you using? What is the exact line you put in your config\nfile? Did you get any errors when using that config file? Are you sure\nyou actually reloaded the server, so that it reread the config file, rather\nthan just changing the file and then not applying the change?\n\nI usually set work_mem within a psql connection, in which case you need to\nquote the setting if you use units:\n\nset work_mem=\"3GB\";\n\nBut if you set it system wide in the config file the quotes should not be\nneeded.\n\n\n\n> Thanks!\n>\n> Nick\n>\n> (Properly bottom posted this time?)\n>\n\nLooked good to me.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 15 Apr 2014 09:43:43 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Nick Eubank\r\nSent: Tuesday, April 15, 2014 11:12 AM\r\nTo: Jeff Janes\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Tuning Postgres for Single connection use\r\n\r\n\r\n\r\nOn Mon, Apr 14, 2014 at 5:19 PM, Jeff Janes <[email protected]> wrote:\r\nOn Mon, Apr 14, 2014 at 2:46 PM, Nick Eubank <[email protected]> wrote:\r\nAny rules of thumb for work_mem, maintenance_work_mem, shared_buffer, etc. for a database that DOESN'T anticipate concurrent connections and that is doing lots of aggregate functions on large tables? All the advice I can find online on tuning (this, this, this etc.) is written for people anticipating lots of concurrent connections.\r\nI'm a social scientist looking to use Postgres not as a database to be shared by multiple users, but rather as my own tool for manipulating a massive data set (I have 5 billion transaction records (600gb in csv) and want to pull out unique user pairs, estimate aggregates for individual users, etc.). This also means almost no writing, except to creation of new tables based on selections from the main table. \r\nI'm on a Windows 8 VM with 16gb ram, SCSI VMware HD, and 3 cores if that's important.\r\n\r\nI'd go with a small shared_buffers, like 128MB, and let the OS cache as much as possible. This minimizes the amount of double buffering.\r\n\r\nAnd set work_mem to about 6GB, then bump it up if that doesn't seem to cause problems.\r\n\r\nIn the scenario you describe, it is probably no big deal if you guess too high. Monitor the process, if it it starts to go nuts just kill it and start again with a lower work_mem. 
If it is a single user system, you can afford to be adventurous.\r\n\r\nIf you need to build indexes, you should bump up maintenance_work_mem, but I just would do that in the local session not system wide.\r\n\r\nCheers,\r\n\r\nJeff\r\n \r\n\r\n\r\nQuick followup Jeff: it seems that I can't set work_mem above about 1gb (can't get to 2gb. When I update config, the values just don't change in \"SHOW ALL\" -- integer constraint?). Is there a work around, or should I tweak something else accordingly? \r\n\r\nThanks!\r\n\r\nNick\r\n\r\n(Properly bottom posted this time?) \r\n\r\n[Schnabel, Robert D.] \r\n\r\nNick,\r\n\r\nI asked the same question a while ago about work_mem on Windows. See this thread:\r\nhttp://www.postgresql.org/message-id/[email protected]\r\n\r\nBob\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Apr 2014 16:43:43 +0000",
"msg_from": "\"Schnabel, Robert D.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Tue, Apr 15, 2014 at 9:12 AM, Nick Eubank <[email protected]> wrote:\n>> Quick followup Jeff: it seems that I can't set work_mem above about 1gb\n>> (can't get to 2gb. When I update config, the values just don't change in\n>> \"SHOW ALL\" -- integer constraint?). Is there a work around, or should I\n>> tweak something else accordingly?\n\n> What version are you using? What is the exact line you put in your config\n> file? Did you get any errors when using that config file? Are you sure\n> you actually reloaded the server, so that it reread the config file, rather\n> than just changing the file and then not applying the change?\n\n> I usually set work_mem within a psql connection, in which case you need to\n> quote the setting if you use units:\n> set work_mem=\"3GB\";\n\nFWIW, it's generally considered a seriously *bad* idea to set work_mem as\nhigh as 1GB in postgresql.conf: you're promising that each query running\non the server can use 1GB per sort or hash step. You probably don't have\nthe machine resources to honor that promise. (If you do, I'd like to have\nyour IT budget ;-)) Recommended practice is to keep the global setting\nconservatively small, and bump it up locally in your session (with SET)\nfor individual queries that need the very large value.\n\nBut having said that, Postgres doesn't try to enforce any such practice.\nMy money is on what Jeff is evidently thinking: you forgot to do \"pg_ctl\nreload\", or else the setting is too large for your platform, in which case\nthere should have been a complaint in the postmaster log. As noted\nelsewhere, the limit for Windows is a hair under 2GB even if it's 64-bit\nWindows.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 15 Apr 2014 13:05:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning Postgres for Single connection use"
},
{
"msg_contents": ">\n>\n> On Tuesday, April 15, 2014, Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > On Tue, Apr 15, 2014 at 9:12 AM, Nick Eubank <[email protected]>\n> wrote:\n> >> Quick followup Jeff: it seems that I can't set work_mem above about 1gb\n> >> (can't get to 2gb. When I update config, the values just don't change in\n> >> \"SHOW ALL\" -- integer constraint?). Is there a work around, or should I\n> >> tweak something else accordingly?\n>\n> > What version are you using? What is the exact line you put in your\n> config\n> > file? Did you get any errors when using that config file? Are you sure\n> > you actually reloaded the server, so that it reread the config file,\n> rather\n> > than just changing the file and then not applying the change?\n>\n> > I usually set work_mem within a psql connection, in which case you need\n> to\n> > quote the setting if you use units:\n> > set work_mem=\"3GB\";\n>\n> FWIW, it's generally considered a seriously *bad* idea to set work_mem as\n> high as 1GB in postgresql.conf: you're promising that each query running\n> on the server can use 1GB per sort or hash step. You probably don't have\n> the machine resources to honor that promise. (If you do, I'd like to have\n> your IT budget ;-)) Recommended practice is to keep the global setting\n> conservatively small, and bump it up locally in your session (with SET)\n> for individual queries that need the very large value.\n>\n> But having said that, Postgres doesn't try to enforce any such practice.\n> My money is on what Jeff is evidently thinking: you forgot to do \"pg_ctl\n> reload\", or else the setting is too large for your platform, in which case\n> there should have been a complaint in the postmaster log. 
As noted\n> elsewhere, the limit for Windows is a hair under 2GB even if it's 64-bit\n> Windows.\n>\n> regards, tom lane\n\n\nThanks Tom -- quick follow up: I know that 1gb work_mem is a terrible idea\nfor normal postgres users with lots of concurrent users, but for my\nsituations where there will only ever be one connection running one query,\nwhy is that a problem on a machine with 16gb of ram.\n\nRe:Robert -- thanks for that clarification!",
"msg_date": "Tue, 15 Apr 2014 11:51:51 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning Postgres for Single connection use"
}
] |
[
{
"msg_contents": "I was given anecdotal information regarding HFS+ performance under OSX as\r\nbeing unsuitable for production PG deployments and that pg_test_fsync\r\ncould be used to measure the relative speed versus other operating systems\r\n(such as Linux). In my performance lab, I have a number of similarly\r\nequipped Linux hosts (Ubuntu 12.04 64-bit LTS Server w/128Gb RAM / 2 OWC\r\n6g Mercury Extreme SSDs / 7200rpm SATA3 HDD / 16 E5-series cores) that I\r\nused to capture baseline Linux numbers. As we generally recommend our\r\ncustomers use SSD (the s3700 recommended by PG), I wanted to perform a\r\ncomparison. On these beefy machines I ran the following tests:\r\n\r\nSSD:\r\n\r\n# pg_test_fsync -f ./fsync.out -s 30\r\n30 seconds per test\r\nO_DIRECT supported on this platform for open_datasync and open_sync.\r\n\r\nCompare file sync methods using one 8kB write:\r\n(in wal_sync_method preference order, except fdatasync\r\nis Linux's default)\r\n open_datasync n/a\r\n fdatasync 2259.652 ops/sec 443 usecs/op\r\n fsync 1949.664 ops/sec 513 usecs/op\r\n fsync_writethrough n/a\r\n open_sync 2245.162 ops/sec 445 usecs/op\r\n\r\nCompare file sync methods using two 8kB writes:\r\n(in wal_sync_method preference order, except fdatasync\r\nis Linux's default)\r\n open_datasync n/a\r\n fdatasync 2161.941 ops/sec 463 usecs/op\r\n fsync 1891.894 ops/sec 529 usecs/op\r\n fsync_writethrough n/a\r\n open_sync 1118.826 ops/sec 894 usecs/op\r\n\r\nCompare open_sync with different write sizes:\r\n(This is designed to compare the cost of writing 16kB\r\nin different write open_sync sizes.)\r\n 1 * 16kB open_sync write 2171.558 ops/sec 460 usecs/op\r\n 2 * 8kB open_sync writes 1126.490 ops/sec 888 usecs/op\r\n 4 * 4kB open_sync writes 569.594 ops/sec 1756 usecs/op\r\n 8 * 2kB open_sync writes 285.149 ops/sec 3507 usecs/op\r\n 16 * 1kB open_sync writes 142.528 ops/sec 7016 usecs/op\r\n\r\nTest if fsync on non-write file descriptor is honored:\r\n(If the times are similar, 
fsync() can sync data written\r\non a different descriptor.)\r\n write, fsync, close 1947.557 ops/sec 513 usecs/op\r\n write, close, fsync 1951.082 ops/sec 513 usecs/op\r\n\r\nNon-Sync'ed 8kB writes:\r\n write 481296.909 ops/sec 2 usecs/op\r\n\r\n\r\nHDD:\r\n\r\npg_test_fsync -f /tmp/fsync.out -s 30\r\n30 seconds per test\r\nO_DIRECT supported on this platform for open_datasync and open_sync.\r\n\r\nCompare file sync methods using one 8kB write:\r\n(in wal_sync_method preference order, except fdatasync\r\nis Linux's default)\r\n open_datasync n/a\r\n fdatasync 105.783 ops/sec 9453 usecs/op\r\n fsync 27.692 ops/sec 36111 usecs/op\r\n fsync_writethrough n/a\r\n open_sync 103.399 ops/sec 9671 usecs/op\r\n\r\nCompare file sync methods using two 8kB writes:\r\n(in wal_sync_method preference order, except fdatasync\r\nis Linux's default)\r\n open_datasync n/a\r\n fdatasync 104.647 ops/sec 9556 usecs/op\r\n fsync 27.223 ops/sec 36734 usecs/op\r\n fsync_writethrough n/a\r\n open_sync 55.839 ops/sec 17909 usecs/op\r\n\r\nCompare open_sync with different write sizes:\r\n(This is designed to compare the cost of writing 16kB\r\nin different write open_sync sizes.)\r\n 1 * 16kB open_sync write 103.581 ops/sec 9654 usecs/op\r\n 2 * 8kB open_sync writes 55.207 ops/sec 18113 usecs/op\r\n 4 * 4kB open_sync writes 28.320 ops/sec 35311 usecs/op\r\n 8 * 2kB open_sync writes 14.581 ops/sec 68582 usecs/op\r\n 16 * 1kB open_sync writes 7.407 ops/sec 135003 usecs/op\r\n\r\nTest if fsync on non-write file descriptor is honored:\r\n(If the times are similar, fsync() can sync data written\r\non a different descriptor.)\r\n write, fsync, close 27.228 ops/sec 36727 usecs/op\r\n write, close, fsync 27.108 ops/sec 36890 usecs/op\r\n\r\nNon-Sync'ed 8kB writes:\r\n write 466108.001 ops/sec 2 usecs/op\r\n\r\n\r\n-------\r\n\r\nSo far, so good. Local HDD vs. SSD shows a significant difference in fsync\r\nperformance. 
Here are the corresponding fstab entries :\r\n\r\n/dev/mapper/cim-base\r\n/opt/cim\t\text4\tdefaults,noatime,nodiratime,discard\t0\t2 (SSD)\r\n/dev/mapper/p--app--lin-root / ext4 errors=remount-ro 0\r\n 1 (HDD)\r\n\r\nI then tried the pg_test_fsync on my OSX Mavericks machine (quad-core i7 /\r\nIntel 520SSD / 16GB RAM) and got the following results :\r\n\r\n# pg_test_fsync -s 30 -f ./fsync.out\r\n30 seconds per test\r\nDirect I/O is not supported on this platform.\r\n\r\nCompare file sync methods using one 8kB write:\r\n(in wal_sync_method preference order, except fdatasync\r\nis Linux's default)\r\n open_datasync 8752.240 ops/sec 114 usecs/op\r\n fdatasync 8556.469 ops/sec 117 usecs/op\r\n fsync 8831.080 ops/sec 113 usecs/op\r\n fsync_writethrough 735.362 ops/sec 1360 usecs/op\r\n open_sync 8967.000 ops/sec 112 usecs/op\r\n\r\nCompare file sync methods using two 8kB writes:\r\n(in wal_sync_method preference order, except fdatasync\r\nis Linux's default)\r\n open_datasync 4256.906 ops/sec 235 usecs/op\r\n fdatasync 7485.242 ops/sec 134 usecs/op\r\n fsync 7335.658 ops/sec 136 usecs/op\r\n fsync_writethrough 716.530 ops/sec 1396 usecs/op\r\n open_sync 4303.408 ops/sec 232 usecs/op\r\n\r\nCompare open_sync with different write sizes:\r\n(This is designed to compare the cost of writing 16kB\r\nin different write open_sync sizes.)\r\n 1 * 16kB open_sync write 7559.381 ops/sec 132 usecs/op\r\n 2 * 8kB open_sync writes 4537.573 ops/sec 220 usecs/op\r\n 4 * 4kB open_sync writes 2539.780 ops/sec 394 usecs/op\r\n 8 * 2kB open_sync writes 1307.499 ops/sec 765 usecs/op\r\n 16 * 1kB open_sync writes 659.985 ops/sec 1515 usecs/op\r\n\r\nTest if fsync on non-write file descriptor is honored:\r\n(If the times are similar, fsync() can sync data written\r\non a different descriptor.)\r\n write, fsync, close 9003.622 ops/sec 111 usecs/op\r\n write, close, fsync 8035.427 ops/sec 124 usecs/op\r\n\r\nNon-Sync'ed 8kB writes:\r\n write 271112.074 ops/sec 4 
usecs/op\r\n\r\n-------\r\n\r\n\r\nThese results were unexpected and surprising. In almost every metric (with\r\nthe exception of the Non-Sync'd 8kB writes), OSX Mavericks 10.9.2 using\r\nHFS+ out-performed my Ubuntu servers. While the SSDs come from different\r\nmanufacturers, both use the SandForce SF-2281 controllers.\r\n\r\nPlausible explanations of the apparent disparity in fsync performance\r\nwould be welcome.\r\n\r\nThanks, Mel\r\n\r\nP.S. One more thing; I found this article which maps fsync mechanisms\r\nversus\r\noperating systems :\r\nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\r\n\r\nThis article suggests that both open_datasync and fdatasync are _not_\r\nsupported for OSX, but the pg_test_fsync results suggest otherwise.\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Apr 2014 22:32:38 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": true,
"msg_subject": "HFS+ pg_test_fsync performance"
},
{
"msg_contents": "2014-04-15 0:32 GMT+02:00 Mel Llaguno <[email protected]>:\n\n> I was given anecdotal information regarding HFS+ performance under OSX as\n> being unsuitable for production PG deployments and that pg_test_fsync\n> could be used to measure the relative speed versus other operating systems\n> (such as Linux). In my performance lab, I have a number of similarly\n> equipped Linux hosts (Ubuntu 12.04 64-bit LTS Server w/128Gb RAM / 2 OWC\n> 6g Mercury Extreme SSDs / 7200rpm SATA3 HDD / 16 E5-series cores) that I\n> used to capture baseline Linux numbers. As we generally recommend our\n> customers use SSD (the s3700 recommended by PG), I wanted to perform a\n> comparison. On these beefy machines I ran the following tests:\n>\n> SSD:\n>\n> # pg_test_fsync -f ./fsync.out -s 30\n> 30 seconds per test\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n>\n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync n/a\n> fdatasync 2259.652 ops/sec 443 usecs/op\n> fsync 1949.664 ops/sec 513 usecs/op\n> fsync_writethrough n/a\n> open_sync 2245.162 ops/sec 445 usecs/op\n>\n> Compare file sync methods using two 8kB writes:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync n/a\n> fdatasync 2161.941 ops/sec 463 usecs/op\n> fsync 1891.894 ops/sec 529 usecs/op\n> fsync_writethrough n/a\n> open_sync 1118.826 ops/sec 894 usecs/op\n>\n> Compare open_sync with different write sizes:\n> (This is designed to compare the cost of writing 16kB\n> in different write open_sync sizes.)\n> 1 * 16kB open_sync write 2171.558 ops/sec 460 usecs/op\n> 2 * 8kB open_sync writes 1126.490 ops/sec 888 usecs/op\n> 4 * 4kB open_sync writes 569.594 ops/sec 1756 usecs/op\n> 8 * 2kB open_sync writes 285.149 ops/sec 3507 usecs/op\n> 16 * 1kB open_sync writes 142.528 ops/sec 7016 usecs/op\n>\n> Test if fsync on non-write file descriptor is 
honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> write, fsync, close 1947.557 ops/sec 513 usecs/op\n> write, close, fsync 1951.082 ops/sec 513 usecs/op\n>\n> Non-Sync'ed 8kB writes:\n> write 481296.909 ops/sec 2 usecs/op\n>\n>\n> HDD:\n>\n> pg_test_fsync -f /tmp/fsync.out -s 30\n> 30 seconds per test\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n>\n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync n/a\n> fdatasync 105.783 ops/sec 9453 usecs/op\n> fsync 27.692 ops/sec 36111 usecs/op\n> fsync_writethrough n/a\n> open_sync 103.399 ops/sec 9671 usecs/op\n>\n> Compare file sync methods using two 8kB writes:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync n/a\n> fdatasync 104.647 ops/sec 9556 usecs/op\n> fsync 27.223 ops/sec 36734 usecs/op\n> fsync_writethrough n/a\n> open_sync 55.839 ops/sec 17909 usecs/op\n>\n> Compare open_sync with different write sizes:\n> (This is designed to compare the cost of writing 16kB\n> in different write open_sync sizes.)\n> 1 * 16kB open_sync write 103.581 ops/sec 9654 usecs/op\n> 2 * 8kB open_sync writes 55.207 ops/sec 18113 usecs/op\n> 4 * 4kB open_sync writes 28.320 ops/sec 35311 usecs/op\n> 8 * 2kB open_sync writes 14.581 ops/sec 68582 usecs/op\n> 16 * 1kB open_sync writes 7.407 ops/sec 135003 usecs/op\n>\n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> write, fsync, close 27.228 ops/sec 36727 usecs/op\n> write, close, fsync 27.108 ops/sec 36890 usecs/op\n>\n> Non-Sync'ed 8kB writes:\n> write 466108.001 ops/sec 2 usecs/op\n>\n>\n> -------\n>\n> So far, so good. Local HDD vs. SSD shows a significant difference in fsync\n> performance. 
Here are the corresponding fstab entries :\n>\n> /dev/mapper/cim-base\n> /opt/cim ext4 defaults,noatime,nodiratime,discard 0\n> 2 (SSD)\n> /dev/mapper/p--app--lin-root / ext4 errors=remount-ro 0\n> 1 (HDD)\n>\n> I then tried the pg_test_fsync on my OSX Mavericks machine (quad-core i7 /\n> Intel 520SSD / 16GB RAM) and got the following results :\n>\n> # pg_test_fsync -s 30 -f ./fsync.out\n> 30 seconds per test\n> Direct I/O is not supported on this platform.\n>\n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync 8752.240 ops/sec 114 usecs/op\n> fdatasync 8556.469 ops/sec 117 usecs/op\n> fsync 8831.080 ops/sec 113 usecs/op\n> fsync_writethrough 735.362 ops/sec 1360 usecs/op\n> open_sync 8967.000 ops/sec 112 usecs/op\n>\n> Compare file sync methods using two 8kB writes:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync 4256.906 ops/sec 235 usecs/op\n> fdatasync 7485.242 ops/sec 134 usecs/op\n> fsync 7335.658 ops/sec 136 usecs/op\n> fsync_writethrough 716.530 ops/sec 1396 usecs/op\n> open_sync 4303.408 ops/sec 232 usecs/op\n>\n> Compare open_sync with different write sizes:\n> (This is designed to compare the cost of writing 16kB\n> in different write open_sync sizes.)\n> 1 * 16kB open_sync write 7559.381 ops/sec 132 usecs/op\n> 2 * 8kB open_sync writes 4537.573 ops/sec 220 usecs/op\n> 4 * 4kB open_sync writes 2539.780 ops/sec 394 usecs/op\n> 8 * 2kB open_sync writes 1307.499 ops/sec 765 usecs/op\n> 16 * 1kB open_sync writes 659.985 ops/sec 1515 usecs/op\n>\n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> write, fsync, close 9003.622 ops/sec 111 usecs/op\n> write, close, fsync 8035.427 ops/sec 124 usecs/op\n>\n> Non-Sync'ed 8kB writes:\n> write 271112.074 ops/sec 4 usecs/op\n>\n> -------\n>\n>\n> These results were unexpected 
and surprising. In almost every metric (with\n> the exception of the Non-Sync¹d 8k8 writes), OSX Mavericks 10.9.2 using\n> HFS+ out-performed my Ubuntu servers. While the SSDs come from different\n> manufacturers, both use the SandForce SF-2281 controllers.\n>\n> Plausible explanations of the apparent disparity in fsync performance\n> would be welcome.\n>\n> Thanks, Mel\n>\n> P.S. One more thing; I found this article which maps fsync mechanisms\n> versus\n> operating systems :\n> http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\n>\n> This article suggests that both open_datasync and fdatasync are _not_\n> supported for OSX, but the pg_test_fsync results suggest otherwise.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nMy 2 cents :\n\nThe results are not surprising, in the linux enviroment the i/o call of\npg_test_fsync are using O_DIRECT (PG_O_DIRECT) with also the O_SYNC or\nO_DSYNC calls, so ,in practice, it is waiting the \"answer\" from the storage\nbypassing the cache in sync mode, while in the Mac OS X it is not doing\nso, it's only using the O_SYNC or O_DSYNC calls without O_DIRECT, in\npractice, it's using the cache of filesystem , even if it is asking the\nsync of io calls.\n\n\nBye\n\nMat Dba\n",
"msg_date": "Tue, 15 Apr 2014 14:18:07 +0200",
"msg_from": "desmodemone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HFS+ pg_test_fsync performance"
},
{
"msg_contents": "My 2 cents :\n\nThe results are not surprising, in the linux enviroment the i/o call of pg_test_fsync are using O_DIRECT (PG_O_DIRECT) with also the O_SYNC or O_DSYNC calls, so ,in practice, it is waiting the \"answer\" from the storage bypassing the cache in sync mode, while in the Mac OS X it is not doing so, it's only using the O_SYNC or O_DSYNC calls without O_DIRECT, in practice, it's using the cache of filesystem , even if it is asking the sync of io calls.\n\n\nBye\n\nMat Dba\n\n--------\n\nThanks for the explanation. Given that OSX always seems to use filesystem cache, is there a way to measure fsync performance that is equivalent to Linux? Or will the use of pg_test_fsync always be inflated under OSX? The reason I ask is that we would like to make a case with a customer that PG performance on OSX/HFS+ would be sub-optimal compared to using Linux/EXT4 (or FreeBSD/UFS2 for that matter).\n\nThanks, Mel",
"msg_date": "Tue, 15 Apr 2014 14:31:05 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HFS+ pg_test_fsync performance"
},
{
"msg_contents": "Mel,\n\n> I was given anecdotal information regarding HFS+ performance under OSX as\n> being unsuitable for production PG deployments and that pg_test_fsync\n> could be used to measure the relative speed versus other operating systems\n\nYou're welcome to identify your source of anecdotal evidence: me. And\nit's based on experience and testing, although I'm not allowed to share\nthe raw results. Let's just say that I was part of two different\nprojects where we moved from using OSX on Apple hardware to using Linux\non the *same* hardware ... because of intolerable performance on OSX.\nSwitching to Linux more than doubled our real write throughput.\n\n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync            8752.240 ops/sec     114 usecs/op\n> fdatasync                8556.469 ops/sec     117 usecs/op\n> fsync                    8831.080 ops/sec     113 usecs/op\n============================================================================\n> fsync_writethrough        735.362 ops/sec    1360 usecs/op\n============================================================================\n> open_sync                8967.000 ops/sec     112 usecs/op\n\nfsync_writethrough is the *only* relevant stat above. For all of the\nother fsync operations, OSX is \"faking it\"; returning to the calling\ncode without ever flushing to disk. This would result in data\ncorruption in the event of an unexpected system shutdown.\n\nBoth OSX and Windows do this, which is why we *have* fsync_writethrough.\n Mind you, I'm a little shocked that performance is still so bad on\nSSDs; I'd attributed HFS's slow fsync mostly to waiting for a full disk\nrotation, but apparently the primary cause is something else.\n\nYou'll notice that the speed of fsync_writethrough is 1/4 that of\ncomparable speed on Linux. You can get similar performance on Linux by\nputting your WAL on a ramdisk, but I don't recommend that for production.\n\nBut: things get worse. 
In addition to the very slow speed on real\nfsyncs, HFS+ has very primitive IO scheduling for multiprocessor\nworkloads; the filesystem was designed for single-core machines (in\n1995!) and has no ability to interleave writes from multiple concurrent\nprocesses. This results in \"stuttering\" as the IO system tries to\nservice first one write request, then another, and ends up stalling\nboth. If you do, for example, a serious ETL workload with parallelism\non OSX, you'll see that IO throughput describes a sawtooth from full\nspeed to zero, being near-zero about half the time.\n\nSo not only are fsyncs slower, real throughput for sustained writes on\nHFS+ is 50% or less of the hardware maximum in any real multi-user\nworkload.\n\nIn order to test this, you'd need a workload which required loading and\nsorting several tables larger than RAM, at least two in parallel.\n\nIn the words of the lead HFS+ developer, Don Brady: \"Since we believed\nit was only a stop gap solution, we just went from 16 to 32 bits. Had we\nknown that it would still be in use 15 years later with multi-terabyte\ndrives, we probably would have done more design changes!\"\n\nHFS+ was written in about 6 months, and is largely unimproved since its\nrelease in 1995. Ext2 doesn't perform too well, either; the difference\nis that Linux users have alternative filesystems available.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Apr 2014 11:04:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HFS+ pg_test_fsync performance"
},
{
"msg_contents": "Josh,\n\nThanks for the feedback. Given the prevalence of SSDs/VMs, it would be\nuseful to start collecting stats/tuning for different operating systems\nfor things like fsync (and possibly bonnie++/dd). If the community is\ninterested, I've got a performance lab that I'd be willing to help run\ntests on. Having this information would only improve our ability to\nsupport our customers.\n\nM.\n\nOn 4/23/14, 12:04 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n>Mel,\n>\n>> I was given anecdotal information regarding HFS+ performance under OSX\n>>as\n>> being unsuitable for production PG deployments and that pg_test_fsync\n>> could be used to measure the relative speed versus other operating\n>>systems\n>\n>You're welcome to identify your source of anecdotal evidence: me. And\n>it's based on experience and testing, although I'm not allowed to share\n>the raw results. Let's just say that I was part of two different\n>projects where we moved from using OSX on Apple hardware do using Linux\n>on the *same* hardware ... because of intolerable performance on OSX.\n>Switching to Linux more than doubled our real write throughput.\n>\n>> Compare file sync methods using one 8kB write:\n>> (in wal_sync_method preference order, except fdatasync\n>> is Linux's default)\n>> open_datasync            8752.240 ops/sec     114\n>>usecs/op\n>> fdatasync                8556.469 ops/sec     117\n>>usecs/op\n>> fsync                    8831.080 ops/sec     113\n>>usecs/op\n>==========================================================================\n>==\n>> fsync_writethrough        735.362 ops/sec    1360\n>>usecs/op\n>==========================================================================\n>==\n>> open_sync                8967.000 ops/sec     112\n>>usecs/op\n>\n>fsync_writethrough is the *only* relevant stat above. For all of the\n>other fsync operations, OSX is \"faking it\"; returning to the calling\n>code without ever flushing to disk. 
This would result in data\n>corruption in the event of an unexpected system shutdown.\n>\n>Both OSX and Windows do this, which is why we *have* fsync_writethrough.\n> Mind you, I'm a little shocked that performance is still so bad on\n>SSDs; I'd attributed HFS's slow fsync mostly to waiting for a full disk\n>rotation, but apparently the primary cause is something else.\n>\n>You'll notice that the speed of fsync_writethrough is 1/4 that of\n>comparable speed on Linux. You can get similar performance on Linux by\n>putting your WAL on a ramdisk, but I don't recommend that for production.\n>\n>But: things get worse. In addition to the very slow speed on real\n>fsyncs, HFS+ has very primitive IO scheduling for multiprocessor\n>workloads; the filesystem was designed for single-core machines (in\n>1995!) and has no ability to interleave writes from multiple concurrent\n>processes. This results in \"stuttering\" as the IO system tries to\n>service first one write request, then another, and ends up stalling\n>both. If you do, for example, a serious ETL workload with parallelism\n>on OSX, you'll see that IO throughput describes a sawtooth from full\n>speed to zero, being near-zero about half the time.\n>\n>So not only are fsyncs slower, real throughput for sustained writes on\n>HFS+ are 50% or less of the hardware maximum in any real multi-user\n>workload.\n>\n>In order to test this, you'd need a workload which required loading and\n>sorting several tables larger than RAM, at least two in parallel.\n>\n>In the words of the lead HFS+ developer, Don Brady: \"Since we believed\n>it was only a stop gap solution, we just went from 16 to 32 bits. Had we\n>known that it would still be in use 15 years later with multi-terabyte\n>drives, we probably would have done more design changes!\"\n>\n>HFS+ was written in about 6 months, and is largely unimproved since its\n>release in 1995. 
Ext2 doesn't perform too well, either; the difference\n>is that Linux users have alternative filesystems available.\n>\n>-- \n>Josh Berkus\n>PostgreSQL Experts Inc.\n>http://pgexperts.com\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Apr 2014 18:19:19 +0000",
"msg_from": "Mel Llaguno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HFS+ pg_test_fsync performance"
},
{
"msg_contents": "On 04/23/2014 11:19 AM, Mel Llaguno wrote:\n> Josh,\n> \n> Thanks for the feedback. Given the prevalence of SSDs/VMs, it would be\n> useful to start collecting stats/tuning for different operating systems\n> for things like fsync (and possibly bonnie++/dd). If the community is\n> interested, I've got a performance lab that I'd be willing to help run\n> tests on. Having this information would only improve our ability to\n> support our customers.\n\nThat would be terrific. I'd also suggest running the performance test\nyou have for your production workload, and we could run some different\nsizes of pgbench.\n\nI'd be particularly interested in the performance of ZFS tuning options\non Linux ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 23 Apr 2014 11:29:00 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HFS+ pg_test_fsync performance"
}
] |
[
{
"msg_contents": "I have a client wanting to test PostgreSQL on ZFS running Linux.\n\nOther than pg_bench are there any other benchmarks that are easy to test?\n\nOne of the possible concerns is fragmentation over time. Any ideas on how\nto fragment the database before running pg_bench ?\n\nAlso there is some concern about fragmentation of the WAL logs. I am\nlooking at testing with and without the WAL logs on ZFS. Any other specific\nconcerns ?\n\n\nDave Cramer\ncredativ ltd (Canada)\n\n78 Zina St\nOrangeville, ON\nCanada. L9W 1E8\n\nOffice: +1 (905) 766-4091\nMobile: +1 (519) 939-0336\n\n===================================\nCanada: http://www.credativ.ca\nUSA: http://www.credativ.us\nGermany: http://www.credativ.de\nNetherlands: http://www.credativ.nl\nUK: http://www.credativ.co.uk\nIndia: http://www.credativ.in\n===================================",
"msg_date": "Tue, 15 Apr 2014 11:57:29 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Testing strategies"
},
{
"msg_contents": "On Tue, Apr 15, 2014 at 12:57 PM, Dave Cramer <[email protected]> wrote:\n\n> I have a client wanting to test PostgreSQL on ZFS running Linux. Other\n> than pg_bench are there any other benchmarks that are easy to test?\n\n\nCheck Gregory Smith's article about testing disks [1].\n\n[1] http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Tue, 15 Apr 2014 17:23:41 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Testing strategies"
}
] |
[
{
"msg_contents": "Hi all,\n\nA few years ago someone said postgres windows can't set working_mem above\nabout 2 GB (www.postgresql.org/message-id/[email protected] --\nseems to be same for maintenance_working_mem ). I'm finding the limit still\npresent.\n\nI'm doing single user, single connection data intensive queries and would\nlike to set a higher value on windows to better use 16gb built in\nram (don't control platform, so can't jump to Linux).\n\nAnyone found a work around?\n\nThanks!\n\nNick",
"msg_date": "Tue, 15 Apr 2014 18:36:37 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Workaround for working_mem max value in windows?"
},
{
"msg_contents": "> Hi all,\n> \n> A few years ago someone said postgres windows can't set working_mem \n> above about 2 GB (\nwww.postgresql.org/message-id/[email protected]\n> -- seems to be same for maintenance_working_mem ). Im finding \n> limit still present.\n\nSetting work_mem higher than 2GB on a 16GB machine could easily run the \nserver out of memory.\n\nwork_mem is set on a \"per client\" and \"per sort\" basis, so setting it to \n2GB would exhaust the amount of available ram very quickly on complex \nqueries with multiple sorts, (or with a number of clients greater than 8 - \nalthough you mention that you're using a single user; that doesn't mean \nthat there is only 1 connection to the database).\n\nThe same rule applies with maintenance_work_mem, more than 1 autovacuum \nwould use n multiples of maintenance_work_mem, again exhausting the server \nvery quickly.\n\n> \n> I'm doing single user, single connection data intensive queries and\n> would like to set a higher value on windows to better use 16gb built\n> in ram (don't control platform, so can't jump to Linux). 
\n> \n> Anyone found a work around?\n> \n\nPostgreSQL on windows is maintained by EnterpriseDB IIRC, so maybe someone \non their forums has any ideas on this, as I doubt very much that the extra \nwork in the PostgreSQL core would be undertaken give the comment by Tom in \nthe thread you posted.\n\nhttp://forums.enterprisedb.com/forums/list.page\n\nCheers\n=============================================\n\nRomax Technology Limited \nA limited company registered in England and Wales.\nRegistered office:\nRutherford House \nNottingham Science and Technology Park \nNottingham \nNG7 2PZ \nEngland\nRegistration Number: 2345696\nVAT Number: 526 246 746\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that \nis confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf \nof the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your \nsystem and contact the sender. Thank you for your cooperation.\n=================================================\n> Hi all,\n> \n> A few years ago someone said postgres windows can't set working_mem\n\n> above about 2 GB (www.postgresql.org/message-id/[email protected]\n> -- seems to be same for maintenance_working_mem ). 
Im finding\n\n> limit still present.\n\nSetting work_mem higher than 2GB on\na 16GB machine could easily run the server out of memory.\n\nwork_mem is set on a \"per client\"\nand \"per sort\" basis, so setting it to 2GB would exhaust the\namount of available ram very quickly on complex queries with multiple sorts,\n(or with a number of clients greater than 8 - although you mention that\nyou're using a single user; that doesn't mean that there is only 1 connection\nto the database).\n\nThe same rule applies with maintenance_work_mem,\nmore than 1 autovacuum would use n multiples of maintenance_work_mem, again\nexhausting the server very quickly.\n\n> \n> I'm doing single user, single connection data intensive\nqueries and\n> would like to set a higher value on windows to better use 16gb built\n> in ram (don't control platform, so can't jump to Linux). \n> \n> Anyone found a work around?\n> \n\nPostgreSQL on windows is maintained\nby EnterpriseDB IIRC, so maybe someone on their forums has any ideas on\nthis, as I doubt very much that the extra work in the PostgreSQL core would\nbe undertaken give the comment by Tom in the thread you posted.\n\nhttp://forums.enterprisedb.com/forums/list.page\n\nCheers\n=============================================\n\nRomax Technology Limited \nA limited company registered in England and Wales.\nRegistered office:\nRutherford House \nNottingham Science and Technology Park \nNottingham \nNG7 2PZ \nEngland\nRegistration Number: 2345696\nVAT Number: 526 246 746\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that\nis confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on 
behalf\nof the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your\nsystem and contact the sender. Thank you for your cooperation.\n=================================================",
"msg_date": "Wed, 16 Apr 2014 07:42:32 +0100",
"msg_from": "Martin French <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
},
{
"msg_contents": ">Anyone found a work around?\n\nWouldn't it be helpful to set it in your session?\n\nset work_mem='2000MB';\nset maintenance_work_mem='2000MB';\n\ndo rest of sql after ..... \n\nRegards,\nAmul Sul\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Workaround-for-working-mem-max-value-in-windows-tp5800170p5800216.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 16 Apr 2014 01:29:39 -0700 (PDT)",
"msg_from": "amulsul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
},
{
"msg_contents": "On Wed, Apr 16, 2014 at 1:29 AM, amulsul <[email protected]> wrote:\n\n> >Anyone found a work around?\n>\n> Wouldn't it helpful, setting it in your session?\n>\n> set work_mem='2000MB';\n> set maintenance_work_mem='2000MB';\n>\n> do rest of sql after .....\n>\n> Regards,\n> Amul Sul\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Workaround-for-working-mem-max-value-in-windows-tp5800170p5800216.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n\nThanks all!\n\nSorry Martin, should have been clearer on my usage plans: I'm only\ninterested in optimizing for single-connection, sequential high-demand\nqueries, so I think I'm safe bumping up memory usage, even if it's usually\na disastrous idea for most users. I'll definitely check with the\nEnterprise folks!\n\nAmul: thanks for the followup! Unfortunately, setting locally faces the\nsame limitation as setting things in the config file -- I get an \"ERROR:\n3072000 is outside the valid range for parameter \"work_mem\" (64 .. 2097151)\nSQL state: 22023\" problem if I set above ~1.9gb. :(",
"msg_date": "Wed, 16 Apr 2014 09:35:12 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
},
{
"msg_contents": "> On 16 Apr 2014, at 17:35, \"Nick Eubank\" <[email protected]> wrote:\n> \n>> On Wed, Apr 16, 2014 at 1:29 AM, amulsul <[email protected]> wrote:\n>> >Anyone found a work around?\n>> \n>> Wouldn't it helpful, setting it in your session?\n>> \n>> set work_mem='2000MB';\n>> set maintenance_work_mem='2000MB';\n>> \n>> do rest of sql after .....\n>> \n>> Regards,\n>> Amul Sul\n> \n> Thanks all!\n> \n> Sorry Martin, should have been clearer on my usage plans: I'm only interested in optimizing for single-connection, sequential high-demand queries, so I think I'm safe bumping up memory usage, even if it's usually a disastrous idea for most users. I'll definitely check with the Enterprise folks!\n> \n> Amul: thanks for the followup! Unfortunately, setting locally faces the same limitation as setting things in the config file -- I get an \"ERROR: 3072000 is outside the valid range for parameter \"work_mem\" (64 .. 2097151)\n> SQL state: 22023\" problem if I set above ~1.9gb. :(\n> \n\nNick, the issue would still remain if you set work_mem to 2Gb and joined 8 tables together; it would consume too much memory. \n\nSo if you DO decide to proceed with this, I would recommend erring on the side of caution and being a little more concerned with tuning your SQL statements.\n\nGood luck. \n\n=============================================\n\nRomax Technology Limited \nA limited company registered in England and Wales.\nRegistered office:\nRutherford House \nNottingham Science and Technology Park \nNottingham \nNG7 2PZ \nEngland\nRegistration Number: 2345696\nVAT Number: 526 246 746\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that is confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf of the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your system and contact the sender. Thank you for your cooperation.\n=================================================",
"msg_date": "Wed, 16 Apr 2014 18:08:12 +0100",
"msg_from": "\"Martin French\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
},
{
"msg_contents": "On Tue, Apr 15, 2014 at 6:36 PM, Nick Eubank <[email protected]> wrote:\n\n> Hi all,\n>\n> A few years ago someone said postgres windows can't set working_mem above\n> about 2 GB (www.postgresql.org/message-id/[email protected] seems to be same for maintenance_working_mem ). Im finding limit still\n> present.\n>\n> I'm doing single user, single connection data intensive queries and would\n> like to set a higher value on windows to better use 16gb built in\n> ram (don't control platform, so can't jump to Linux).\n>\n> Anyone found a work around?\n>\n\nBefore worrying much about that, I'd just give it a try at the highest\nvalue it will let you set and see what happens.\n\nIf you want to do something like hashed aggregate that would have been\npredicted to fit in 6GB but not in 1.999GB, then you will lose out on the\nhash agg by not being able to set the memory higher. On the other hand, if\nyour queries want to use sorts that will spill to disk anyway, the exact\nvalue of work_mem usually doesn't matter much as long as it's not absurdly\nsmall (1MB is absurdly small for analytics, 64MB is probably not). In fact\nvery large work_mem can be worse in those cases, because large priority\nqueue heaps are unfriendly to the CPU cache. (Based on Linux experience,\nbut I don't see why that would not carry over to Windows)\n\nFrankly I think you've bitten off more than you can chew. 600GB of csv is\ngoing to expand to probably 3TB of postgresql data once loaded. If you\ncan't control the platform, I'm guessing your disk array options are no\nbetter than your OS options are.\n\nACID compliance is expensive, both in storage overhead and in processing\ntime, and I don't think you can afford that and probably don't need it.\n Any chance you could give up on databases and get what you need just using\npipelines of sort, cut, uniq, awk, perl, etc. (or whatever their Windows\nequivalent is)?\n\nCheers,\n\nJeff",
"msg_date": "Wed, 16 Apr 2014 12:48:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
},
{
"msg_contents": "On Wednesday, April 16, 2014, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Apr 15, 2014 at 6:36 PM, Nick Eubank <[email protected]<javascript:_e(%7B%7D,'cvml','[email protected]');>\n> > wrote:\n>\n>> Hi all,\n>>\n>> A few years ago someone said postgres windows can't set working_mem above\n>> about 2 GB (www.postgresql.org/message-id/[email protected] seems to be same for maintenance_working_mem ). Im finding limit still\n>> present.\n>>\n>> I'm doing single user, single connection data intensive queries and\n>> would like to set a higher value on windows to better use 16gb built in\n>> ram (don't control platform, so can't jump to Linux).\n>>\n>> Anyone found a work around?\n>>\n>\n> Before worrying much about that, I'd just give it a try at the highest\n> value it will let you set and see what happens.\n>\n> If you want to do something like hashed aggregate that would have been\n> predicted to fit in 6GB but not in 1.999GB, then you will lose out on the\n> hash agg by not being able to set the memory higher. On the other hand, if\n> your queries want to use sorts that will spill to disk anyway, the exact\n> value of work_mem usually doesn't matter much as long as it not absurdly\n> small (1MB absurdly small for analytics, 64MB is probably not). In fact\n> very large work_mem can be worse in those cases, because large priority\n> queue heaps are unfriendly to the CPU cache. (Based on Linux experience,\n> but I don't see why that would not carry over to Windows)\n>\n> Frankly I think you've bitten off more than you can chew. 600GB of csv is\n> going to expand to probably 3TB of postgresql data once loaded. 
If you\n> can't control the platform, I'm guessing your disk array options are no\n> better than your OS options are.\n>\n> ACID compliance is expensive, both in storage overhead and in processing\n> time, and I don't think you can afford that and probably don't need it.\n> Any chance you could give up on databases and get what you need just using\n> pipelines of sort, cut, uniq, awk, perl, etc. (or whatever their Window\n> equivalent is)?\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks Jeff -- you're clearly correct that SQL is not the optimal tool for\nthis, as I'm clearly leaning. I just can't find anything MADE for one-user\nbig data transformations. :/ I may resort to that kind of pipeline\napproach, I just have so many transformations to do I was hoping I could\nuse a declarative language in something.\n\nBut your point about hash map size is excellent. No idea how big an index\nfor this would be...",
"msg_date": "Wed, 16 Apr 2014 18:35:37 -0700",
"msg_from": "Nick Eubank <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
},
{
"msg_contents": "On Wednesday, 16 April 2014 10:05 PM, Nick Eubank <[email protected]> wrote:\n\n\n\n>Amul: thanks for the followup! Unfortunately, setting locally faces the same limitation \n>as setting things in the config file -- \n>I get an \"ERROR: 3072000 is outside the valid range for parameter \"work_mem\" (64 .. 2097151)\n\n>SQL state: 22023\" problem if I set above ~1.9gb. :(\n\n\nyes, you can set work_mem upto 2047MB.\n\n\nRegards,\nAmul Sul \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Apr 2014 15:05:43 +0800 (SGT)",
"msg_from": "amul sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround for working_mem max value in windows?"
}
] |
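The pipeline approach Jeff suggests in the thread above can be sketched for one common transformation, a GROUP BY ... SUM() over a CSV; the file layout (key,value) and names here are hypothetical, chosen only for illustration:

```shell
# Hypothetical two-column input (key,value). Sorting by key first lets awk
# aggregate in a single streaming pass with O(1) memory -- the property that
# makes this viable for inputs far larger than RAM.
printf 'a,1\nb,2\na,3\n' > sample.csv

sort -t, -k1,1 sample.csv |
  awk -F, 'NR > 1 && $1 != k { print k "," s; s = 0 }
           { k = $1; s += $2 }
           END { print k "," s }'
```

On Windows the same pipeline would run under a port such as Cygwin, which is roughly the "Windows equivalent" the thread alludes to.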
[
{
"msg_contents": "Following are the tables\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nCREATE TABLE equipment (\n contract_nr varchar(32) COLLATE \"C\" NULL DEFAULT NULL,\n name varchar(64) COLLATE \"C\" DEFAULT '',\n latitude numeric(10,7) NOT NULL,\n longitude numeric(10,7) NOT NULL,\n mac_addr_w varchar(17) COLLATE \"C\" NOT NULL,\n mac_addr_wl varchar(17) COLLATE \"C\" NOT NULL,\n identifier varchar(32) COLLATE \"C\" NOT NULL,\n he_identifier varchar(17) COLLATE \"C\" DEFAULT '',\n number_of_wlans integer NOT NULL DEFAULT '1' ,\n regions varchar(64) COLLATE \"C\" DEFAULT '',\n external_id varchar(64) COLLATE \"C\",\n PRIMARY KEY (external_id)\n) ;\n\nCREATE INDEX equipment_mac_addr_w_idx ON equipment (mac_addr_w);\nCREATE INDEX equipment_latitude_idx ON equipment (latitude);\nCREATE INDEX equipment_longitude_idx ON equipment (longitude);\n\nno of rows - 15000\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n\ncreate table accounting (\n equip_wan varchar(17) COLLATE \"C\" NOT NULL,\n equip_wlan varchar(17) COLLATE \"C\" NOT NULL,\n identifier varchar(32) COLLATE \"C\" NOT NULL,\n he_identifier varchar(17) COLLATE \"C\" NULL DEFAULT NULL,\n time_stamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,\n in_oc bigint NOT NULL DEFAULT 0,\n out_oc bigint NOT NULL DEFAULT 0\n );\n\nCREATE INDEX accounting_time_stamp_idx ON accounting (time_stamp);\nCREATE INDEX accounting_equip_wan_idx ON accounting (equip_wan);\n\nno of rows - 36699300\n*This table is growing rapidly*\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n\ncreate table accounting_fifteenminute_aggregate (\n equip_wan varchar(17) COLLATE \"C\" NOT 
NULL,\n identifier varchar(32) COLLATE \"C\" NOT NULL,\n time_stamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,\n in_oc bigint NOT NULL DEFAULT 0,\n out_oc bigint NOT NULL DEFAULT 0\n );\n\nCREATE INDEX accounting_15min_agg_timestamp_idx ON\naccounting_fifteenminute_aggregate (time_stamp);\nCREATE INDEX accounting_15min_agg_equip_wan_idx ON\naccounting_fifteenminute_aggregate (equip_wan);\n\nno of rows - 4800000\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n\ncreate table accounting_hourly_aggregate (\n equip_wan varchar(17) COLLATE \"C\" NOT NULL,\n identifier varchar(32) COLLATE \"C\" NOT NULL,\n time_stamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,\n in_oc bigint NOT NULL DEFAULT 0,\n out_oc bigint NOT NULL DEFAULT 0\n );\n\nCREATE INDEX accounting_hourly_agg_timestamp_idx ON\naccounting_hourly_aggregate (time_stamp);\nCREATE INDEX accounting_hourly_agg_equip_wan_idx ON\naccounting_hourly_aggregate (equip_wan);\n\nno of rows - 1400000\n\n<TABLE DEFINITION\nENDS>---------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nThe below 2 queries run every 15 min and 1 hour respectively from tomcat\nnode using jdbc. 
Hourly query uses 15 min query.\nTomcat and DB are in different node.\n\n*(1)* INSERT INTO accounting_fifteenminute_aggregate\nSelect equip_wan,identifier,'2014-04-16 14:00:00.0',sum(in_oc),sum(out_oc)\nfrom accounting where time_stamp >= '2014-04-16 13:45:00.0' and time_stamp\n< '2014-04-16 14:00:00.0' group by equip_wan,identifier\n\ntime taken for execution of the above query - 300 sec\n\nEXPLAIN (ANALYZE, BUFFERS)\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Insert on accounting_fifteenminute_aggregate (cost=253.37..253.47 rows=4\nwidth=89) (actual time=196833.655..196833.655 rows=0 loops=1)\n Buffers: shared hit=23941 read=4092 dirtied=2675\n -> Subquery Scan on \"*SELECT*\" (cost=253.37..253.47 rows=4 width=89)\n(actual time=3621.621..3763.701 rows=3072 loops=1)\n Buffers: shared hit=3494 read=93\n -> HashAggregate (cost=253.37..253.41 rows=4 width=41) (actual\ntime=3621.617..3737.370 rows=3072 loops=1)\n Buffers: shared hit=3494 read=93\n -> Index Scan using accounting_time_stamp_idx on accounting\n (cost=0.00..220.56 rows=3281 width=41) (actual time=3539.890..3601.808\nrows=3680 loops=1)\n Index Cond: ((time_stamp >= '2014-04-16\n13:45:00+05:30'::timestamp with time zone) AND (time_stamp < '2014-04-16\n14:00:00+05:30'::timestamp with time zone))\n Buffers: shared hit=3494 read=93\n Total runtime: 196833.781 ms\n(10 rows)\n\n\n*(2) *INSERT INTO accounting_hourly_aggregate\nSelect equip_wan,identifier,'2014-04-16 14:00:00.0',sum(in_oc),sum(out_oc)\nfrom accounting_fifteenminute_aggregate where time_stamp > '2014-04-16\n13:00:00.0' group by equip_wan,identifier\n\ntime taken for execution of the above query - 280 sec\n\n*************************************************************************************************************************************\nThe below query is report query which uses the above 
aggregated tables\n\nSelect\nqueryA.wAddr,\nqueryA.name,\nqueryA.dBy,\nqueryA.upBy,\n(queryA.upBy + queryA.dBy) as totalBy\nFrom\n(Select\nqueryC.wAddr,\nqueryC.name,\nCOALESCE(queryI.dBy, 0) as dBy,\nCOALESCE(queryI.upBy, 0) as upBy\nFrom\n(Select\nDISTINCT ON(mac_addr_w)\nmac_addr_w as wAddr,\nname\n From equipment\nwhere\n(latitude BETWEEN -90.0 AND 90.0) AND\n(longitude BETWEEN -180.0 AND 180.0)\n) as queryC\nLeft Join\n(Select\nequip_wan as wAddr,\nSUM(in_oc) as dBy,\nSUM(out_oc) as upBy\n From accounting_hourly_aggregate\nwhere time_stamp > '2014-04-13 16:00:00.0' and time_stamp <= '2014-04-14\n16:00:00.0'\nGroup by equip_wan) as queryI\nOn queryC.wAddr = queryI.wAddr) as queryA\norder by totalBy DESC Limit 10;\n\nAbove query execution takes - 3 min 28 sec.\nSo I did a manual analyze to see if any performance benefit is obtained.\nAnalyze accounting_hourly_aggregate takes 40 sec.\nAfter analysis same query takes 16 sec.\nBut 40 mins after analyzing accounting_hourly_aggregate table, the above\nquery execution time again increases to few minutes.\nThe above query is run from command line of postgres.\nAuto vacuum is by default running.\n\nExplain of the above query\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=11664.77..11664.80 rows=10 width=92) (actual\ntime=159613.007..159613.010 rows=10 loops=1)\n Buffers: shared hit=2282 read=3528\n -> Sort (cost=11664.77..11689.77 rows=10000 width=92) (actual\ntime=159613.005..159613.007 rows=10 loops=1)\n Sort Key: ((COALESCE(queryI.upBy, 0::numeric) +\nCOALESCE(queryI.dBy, 0::numeric)))\n Sort Method: top-N heapsort Memory: 26kB\n Buffers: shared hit=2282 read=3528\n -> Merge Left Join (cost=9748.22..11448.68 rows=10000 width=92)\n(actual time=157526.220..159607.130 rows=10000 loops=1)\n Merge Cond: 
((equipment.mac_addr_w)::text =\n(queryI.wAddr)::text)\n Buffers: shared hit=2282 read=3528\n -> Unique (cost=0.00..1538.56 rows=10000 width=28) (actual\ntime=84.291..2151.497 rows=10000 loops=1)\n Buffers: shared hit=591 read=840\n -> Index Scan using equipment_mac_addr_w_idx on\nequipment (cost=0.00..1499.35 rows=15684 width=28) (actual\ntime=84.288..2145.990 rows=15684 loops=1)\n Filter: ((latitude >= (-90.0)) AND (latitude <=\n90.0) AND (longitude >= (-180.0)) AND (longitude <= 180.0))\n Buffers: shared hit=591 read=840\n -> Sort (cost=9748.22..9750.20 rows=793 width=82) (actual\ntime=157441.910..157443.710 rows=6337 loops=1)\n Sort Key: queryI.wAddr\n Sort Method: quicksort Memory: 688kB\n Buffers: shared hit=1691 read=2688\n -> Subquery Scan on queryI (cost=9694.17..9710.03\nrows=793 width=82) (actual time=157377.819..157381.314 rows=6337 loops=1)\n Buffers: shared hit=1691 read=2688\n -> HashAggregate (cost=9694.17..9702.10\nrows=793 width=34) (actual time=157377.819..157380.154 rows=6337 loops=1)\n Buffers: shared hit=1691 read=2688\n -> Index Scan using\naccounting_hourly_agg_idx on accounting_hourly_aggregate\n (cost=0.00..8292.98 rows=186826 width=34) (actual\ntime=1328.363..154164.439 rows=193717 loops=1)\n Index Cond: ((time_stamp >\n'2014-04-14 12:00:00+05:30'::timestamp with time zone) AND (time_stamp <=\n'2014-04-15 18:00:00+05:30'::timestamp with time zone))\n Buffers: shared hit=1691 read=2688\n Total runtime: 159613.100 ms\n(26 rows)\n\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\nFollowing values have been changed in postgresql.conf\nshared_buffers = 2GB\nwork_mem = 32MB\nmaintenance_work_mem = 512MB\nwal_buffers = 16MB\ncheckpoint_segments = 32\ncheckpoint_timeout = 15min\ncheckpoint_completion_target = 0.9\n\nSystem config -\n8 gb RAM\nhard disk - 300 gb\nLinux 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64\nx86_64 x86_64 GNU/Linux\n\nPostgres version\nPostgreSQL 9.2.4 on 
x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-52), 64-bit\n\nBasically all the queries are taking time, as the raw tables size\nincreases. Will partitioning help ?",
"msg_date": "Wed, 16 Apr 2014 18:35:16 +0530",
"msg_from": "sheishere b <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries very slow after data size increases"
}
] |
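On the partitioning question that closes the thread above: in PostgreSQL 9.2 partitioning is done manually with table inheritance plus CHECK constraints, so old accounting data can be dropped per partition instead of deleted row by row. A minimal sketch with hypothetical daily bounds (only the parent table `accounting` and its `time_stamp` column come from the thread):

```sql
-- One child table per day; the CHECK range lets constraint exclusion
-- skip partitions whose time_stamp range cannot match the query predicate.
CREATE TABLE accounting_2014_04_16 (
    CHECK (time_stamp >= '2014-04-16 00:00:00+05:30'
       AND time_stamp <  '2014-04-17 00:00:00+05:30')
) INHERITS (accounting);

CREATE INDEX accounting_2014_04_16_ts_idx
    ON accounting_2014_04_16 (time_stamp);

-- With constraint_exclusion = partition (the 9.2 default), the 15-minute
-- aggregation job scans only the current day's child, and retention
-- becomes a cheap DROP TABLE of the oldest child.
```

A trigger or application-side routing would normally dispatch inserts to the right child table; that part is omitted here.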
[
{
"msg_contents": "Hello all,\n\nI am trying to simplify some of the queries I use with my database creating a big view of all the possible attributes my items can have, the view is rather large:\n\nhttp://pastebin.com/ScnJ8Hd3\n\n\nI thought that Postgresql would optimize out joins on columns I don't ask for when I use the view but it doesn't, this query:\n\nSELECT referencia\nFROM articulo_view\nWHERE referencia = '09411000';\n\n\nHave this query plan:\n\nhttp://explain.depesz.com/s/4lW0\n\n\nMaybe I am surpassing some limit? I have tried changing from_collapse_limit and join_collapse_limit but still the planner join the unneeded tables.\n\nIs possible to tell Postgresql do the right thing? If so, how? Thanks!\n\nRegards,\nMiguel Angel.",
"msg_date": "Wed, 16 Apr 2014 17:13:24 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "unneeded joins on view"
},
{
"msg_contents": "On 04/16/2014 06:13 PM, Linos wrote:\n> I thought that Postgresql would optimize out joins on columns I\n> don't ask for when I use the view but it doesn't, this query:\n\nIt doesn't, because it would be wrong. It still has to check that the \ntables have a matching row (or multiple matching rows).\n\nIf you use LEFT JOINs instead, and have a unique index on all the ID \ncolumns, then the planner can do what you expected and leave out the joins.\n\n- Heikki",
"msg_date": "Wed, 16 Apr 2014 18:57:21 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unneeded joins on view"
},
{
"msg_contents": "On 16/04/14 17:57, Heikki Linnakangas wrote:\n> On 04/16/2014 06:13 PM, Linos wrote:\n>> I thought that Postgresql would optimize out joins on columns I\n>> don't ask for when I use the view but it doesn't, this query:\n> \n> It doesn't, because it would be wrong. It still has to check that the tables have a matching row (or multiple matching rows).\n> \n> If you use LEFT JOINs instead, and have a unique index on all the ID columns, then the planner can do what you expected and leave out the joins.\n> \n> - Heikki\n> \n> \n\nYou are right, I knew I was forgetting something important but I didn't know what was, thank you Heikki.\n\nRegards,\nMiguel Angel.",
"msg_date": "Wed, 16 Apr 2014 18:08:33 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unneeded joins on view"
}
] |
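Heikki's join-removal rule above can be shown with a minimal pair of tables (the attribute table and its columns are hypothetical; only `articulo_view` and `referencia` come from the thread):

```sql
CREATE TABLE articulo (id int PRIMARY KEY, referencia varchar(32));
CREATE TABLE atributo (id int PRIMARY KEY, descripcion varchar(64));

-- LEFT JOIN plus a provably unique join key on the right side: the planner
-- knows each articulo row matches at most one atributo row, so when a query
-- on the view references no atributo column, the join is removed entirely.
CREATE VIEW articulo_view AS
SELECT a.id, a.referencia, b.descripcion
FROM articulo a
LEFT JOIN atributo b ON b.id = a.id;

-- EXPLAIN SELECT referencia FROM articulo_view WHERE referencia = '09411000';
-- scans only "articulo". With an INNER JOIN the scan of "atributo" must
-- remain, because an inner join also filters out unmatched rows.
```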
[
{
"msg_contents": "Hi,\n\nwe are using a mono-column index on a huge table to try to make a quick\n'select distinct ' on the column.\n\nThis used to work fine, but... it does not anymore. We don't know what\nhappened.\n\nHere are the facts:\n\n- request:\nSELECT dwhinv___rfovsnide::varchar FROM dwhinv WHERE dwhinv___rfovsnide\n> '201212_cloture' ORDER BY dwhinv___rfovsnide LIMIT 1\n\n- Plan :\nLimit (cost=0.00..1.13 rows=1 width=12) (actual time=5798.915..5798.916\nrows=1 loops=1)\n -> Index Scan using vsn_idx on dwhinv (cost=0.00..302591122.05\nrows=267473826 width=12) (actual time=5798.912..5798.912 rows=1 loops=1)\n Index Cond: ((dwhinv___rfovsnide)::text > '201212_cloture'::text)\nTotal runtime: 5799.141 ms\n\n- default_statistics_target = 200;\n\n- postgresql Version 8.4\n\n- Index used :\nCREATE INDEX vsn_idx\n ON dwhinv\n USING btree\n (dwhinv___rfovsnide);\n\nThere are 26 distinct values of the column.\nThis query used to take some milliseconds at most. The index has been\nfreshly recreated.\n\nWhat could be the problem ?\n\nFranck",
"msg_date": "Thu, 17 Apr 2014 17:11:25 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fast distinct not working as expected"
},
{
"msg_contents": "On Thu, Apr 17, 2014 at 8:11 AM, Franck Routier <[email protected]>wrote:\n\n> Hi,\n>\n> we are using a mono-column index on a huge table to try to make a quick\n> 'select distinct ' on the column.\n>\n> This used to work fine, but... it does not anymore. We don't know what\n> happened.\n>\n> Here are the facts:\n>\n> - request:\n> SELECT dwhinv___rfovsnide::varchar FROM dwhinv WHERE dwhinv___rfovsnide\n> > '201212_cloture' ORDER BY dwhinv___rfovsnide LIMIT 1\n>\n\nThat is not equivalent to a distinct. There must be more to it than that.\n\n\n>\n> - Plan :\n> Limit (cost=0.00..1.13 rows=1 width=12) (actual time=5798.915..5798.916\n> rows=1 loops=1)\n> -> Index Scan using vsn_idx on dwhinv (cost=0.00..302591122.05\n> rows=267473826 width=12) (actual time=5798.912..5798.912 rows=1 loops=1)\n> Index Cond: ((dwhinv___rfovsnide)::text > '201212_cloture'::text)\n> Total runtime: 5799.141 ms\n>\n\n\nMy best guess would be that the index got stuffed full of entries for rows\nthat are not visible, either because they are not yet committed, or have\nbeen deleted but are not yet vacuumable. Do you have any long-lived\ntransactions?\n\n\n>\n> - postgresql Version 8.4\n>\n\nNewer versions have better diagnostic tools. An explain (analyze, buffers)\n would be nice, especially with track_io_timing on.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 17 Apr 2014 08:57:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fast distinct not working as expected"
},
{
"msg_contents": "Hi,\n>\n> That is not equivalent to a distinct. There must be more to it than that.\nIndeed, this query is used in a loop:\n\nCREATE OR REPLACE FUNCTION small_distinct(IN tablename character\nvarying, IN fieldname character varying, IN sample anyelement DEFAULT\n''::character varying)\n RETURNS SETOF anyelement AS\n$BODY$\nBEGIN\n EXECUTE 'SELECT '||fieldName||' FROM '||tableName||' ORDER BY\n'||fieldName\n ||' LIMIT 1' INTO result;\n WHILE result IS NOT NULL LOOP\n RETURN NEXT;\n EXECUTE 'SELECT '||fieldName||' FROM '||tableName\n ||' WHERE '||fieldName||' > $1 ORDER BY ' || fieldName || '\nLIMIT 1'\n INTO result USING result;\n END LOOP;\nEND;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100\n ROWS 1000;\n\n\nSince we have the problem, some iteration of the query are still quick\n(< 1ms), but others are long (> 5s).\n> \n>\n>\n>\n>\n> My best guess would be that the index got stuffed full of entries for\n> rows that are not visible, either because they are not yet committed,\n> or have been deleted but are not yet vacuumable. Do you have any\n> long-lived transactions?\nThere has been a delete on the table (about 20% of the records). Then a\nmanual VACUUM.\nWe have recreated the index, but it did not help.\n\nIn the explain analyze output, the index scan begins at 5798.912. What\ncan be happening before that ?\n\nIndex Scan using vsn_idx on dwhinv (cost=0.00..302591122.05\nrows=267473826 width=12) (actual time=5798.912..5798.912 rows=1 loops=1)\n\n(Notice the delay is not planning itself, as explain is instantaneous)\n\n> \n>\n>\n> - postgresql Version 8.4\n>\n>\n> Newer versions have better diagnostic tools. An explain (analyze,\n> buffers) would be nice, especially with track_io_timing on.\nYep, we certainly would like to, but this is a distant prod box, with no\naccess to an online upgrade source, and no planned upgrade for now :-((\n\nRegards,\nFranck",
"msg_date": "Thu, 17 Apr 2014 19:17:48 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fast distinct not working as expected"
},
{
"msg_contents": "On Thu, Apr 17, 2014 at 10:17 AM, Franck Routier\n<[email protected]>wrote:\n\n>\n>\n> My best guess would be that the index got stuffed full of entries for\n> rows that are not visible, either because they are not yet committed, or\n> have been deleted but are not yet vacuumable. Do you have any long-lived\n> transactions?\n>\n> There has been a delete on the table (about 20% of the records). Then a\n> manual VACUUM.\n> We have recreated the index, but it did not help.\n>\n\nIf there are any open transactions (even ones that have never touched this\nparticular table) which started before the delete was committed, then the\nvacuum was obliged to keep those deleted records around, in case that open\ntransaction happens to start caring about them. I assume that the deleted\nrows were not randomly distributed, but rather were concentrated in the\nexact range you are now inspecting.\n\nThe reindex was also obliged to carry those deleted but not yet\nuninteresting rows along to the new index.\n\n\n\n>\n> In the explain analyze output, the index scan begins at 5798.912. What can\n> be happening before that ?\n>\n\nThe index scan reports it first *visible* row at 5798.912. Before that, it\nwas probably digging through thousands or millions of deleted rows,\nlabouriously verifying that they are not visible to it (but still\ntheoretically visible to someone else).\n\nIt could be blocked on a lock or something, or you could have really\nhorrible IO contention that takes 5 seconds to read two blocks. 
But I\nthink the other scenario is more likely.\n\nBy the way, many people don't like silent cross-posting, as then we end up\nunknowningly answering a question here that was already answered elsewhere.\n\nhttp://stackoverflow.com/questions/23137713/postgresql-query-plan-delay\n\n\n\nCheers,\n\nJeff\n\nOn Thu, Apr 17, 2014 at 10:17 AM, Franck Routier <[email protected]> wrote:\n\n\n\n\n\nMy best guess would be that the index got stuffed full\n of entries for rows that are not visible, either because\n they are not yet committed, or have been deleted but are\n not yet vacuumable. Do you have any long-lived\n transactions?\n\n\n\n\n There has been a delete on the table (about 20% of the records).\n Then a manual VACUUM.\n We have recreated the index, but it did not help.If there are any open transactions (even ones that have never touched this particular table) which started before the delete was committed, then the vacuum was obliged to keep those deleted records around, in case that open transaction happens to start caring about them. I assume that the deleted rows were not randomly distributed, but rather were concentrated in the exact range you are now inspecting.\nThe reindex was also obliged to carry those deleted but not yet uninteresting rows along to the new index. \n\n\n In the explain analyze output, the index scan begins at 5798.912.\n What can be happening before that ?The index scan reports it first *visible* row at 5798.912. Before that, it was probably digging through thousands or millions of deleted rows, labouriously verifying that they are not visible to it (but still theoretically visible to someone else). \nIt could be blocked on a lock or something, or you could have really horrible IO contention that takes 5 seconds to read two blocks. 
But I think the other scenario is more likely.\nBy the way, many people don't like silent cross-posting, as then we end up unknowningly answering a question here that was already answered elsewhere.http://stackoverflow.com/questions/23137713/postgresql-query-plan-delay\nCheers,Jeff",
"msg_date": "Thu, 17 Apr 2014 11:17:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fast distinct not working as expected"
},
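A minimal sketch of the dead-row horizon Jeff describes above (illustrative Python only, not PostgreSQL internals; the function names and the simplified transaction-id scheme are invented for the example): a deleted row version is only reclaimable by VACUUM once no open transaction predates the delete.

```python
# Illustrative sketch (not PostgreSQL's actual implementation): a deleted
# row version can only be reclaimed once no open transaction might still
# need it, i.e. once every open transaction began after the delete committed.

def is_vacuumable(delete_commit_xid, open_transaction_xids):
    """Return True if the dead row version is reclaimable."""
    return all(xid > delete_commit_xid for xid in open_transaction_xids)

def reclaimable(dead_rows, open_transaction_xids):
    """Filter dead row versions (row, delete_xid) that VACUUM could remove."""
    return [row for row, xid in dead_rows
            if is_vacuumable(xid, open_transaction_xids)]

# A transaction opened at xid 90 pins every row deleted at xid >= 90:
dead = [("row-a", 80), ("row-b", 95), ("row-c", 120)]
print(reclaimable(dead, open_transaction_xids=[90, 130]))  # ['row-a']
```

This is why the idle-in-transaction sessions found later in the thread kept the index full of dead entries even after a manual VACUUM and a reindex.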
{
"msg_contents": "Hi,\n\nLe 17/04/2014 20:17, Jeff Janes a écrit :\n>\n>\n> If there are any open transactions (even ones that have never touched\n> this particular table) which started before the delete was committed,\n> then the vacuum was obliged to keep those deleted records around, in\n> case that open transaction happens to start caring about them. I\n> assume that the deleted rows were not randomly distributed, but rather\n> were concentrated in the exact range you are now inspecting.\n>\n> The reindex was also obliged to carry those deleted but not yet\n> uninteresting rows along to the new index.\nOk, I understand now\n> \n>\n>\n> In the explain analyze output, the index scan begins at 5798.912.\n> What can be happening before that ?\n>\n>\n> The index scan reports it first *visible* row at 5798.912. Before\n> that, it was probably digging through thousands or millions of deleted\n> rows, labouriously verifying that they are not visible to it (but\n> still theoretically visible to someone else). \nVery clear explaination, thank you.\n>\n> It could be blocked on a lock or something, or you could have really\n> horrible IO contention that takes 5 seconds to read two blocks. But I\n> think the other scenario is more likely.\nYes, I also think so.\n>\n> By the way, many people don't like silent cross-posting, as then we\n> end up unknowningly answering a question here that was already\n> answered elsewhere.\nYes, sorry for that. I don't like it either, this was posted by a\ncolleague of mine. One of those young foolish guys that prefer web\ninterfaces to plain mailing-lists... 
Gonna scold him :-)\n\nBest regards,\nFranck\n\n\n\n\n\n\n Hi,\n\nLe 17/04/2014 20:17, Jeff Janes a\n écrit :\n\n\n\n\n\n\n\nIf there are any open transactions (even ones that have\n never touched this particular table) which started before\n the delete was committed, then the vacuum was obliged to\n keep those deleted records around, in case that open\n transaction happens to start caring about them. I assume\n that the deleted rows were not randomly distributed, but\n rather were concentrated in the exact range you are now\n inspecting.\n\n\nThe reindex was also obliged to carry those deleted but\n not yet uninteresting rows along to the new index.\n\n\n\n\n Ok, I understand now\n \n\n\n\n \n\n \n In the explain analyze output, the index scan begins at\n 5798.912. What can be happening before that ?\n\n\n\nThe index scan reports it first *visible* row at\n 5798.912. Before that, it was probably digging through\n thousands or millions of deleted rows, labouriously\n verifying that they are not visible to it (but still\n theoretically visible to someone else). \n\n\n\n\n\n Very clear explaination, thank you.\n\n\n\n\n\n\nIt could be blocked on a lock or something, or you\n could have really horrible IO contention that takes 5\n seconds to read two blocks. But I think the other\n scenario is more likely.\n\n\n\n\n Yes, I also think so.\n\n\n\n\n\n\nBy the way, many people don't like silent\n cross-posting, as then we end up unknowningly answering a\n question here that was already answered elsewhere.\n\n\n\n\n Yes, sorry for that. I don't like it either, this was posted by a\n colleague of mine. One of those young foolish guys that prefer web\n interfaces to plain mailing-lists... Gonna scold him :-)\n\n Best regards,\n Franck",
"msg_date": "Fri, 18 Apr 2014 09:22:56 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fast distinct not working as expected"
},
{
"msg_contents": "I have found the problem, using this query |(found here http://stackoverflow.com/questions/3312929/postgresql-idle-in-transaction-diagnosis-and-reading-pg-locks)|\n\nselect pg_class.relname, pg_locks.transactionid, pg_locks.mode,\n pg_locks.granted as \"g\", pg_stat_activity.current_query,\n pg_stat_activity.query_start,\n age(now(),pg_stat_activity.query_start) as \"age\",\n pg_stat_activity.procpid \nfrom pg_stat_activity,pg_locks\nleft outer join pg_class on (pg_locks.relation = pg_class.oid) \nwhere pg_locks.pid=pg_stat_activity.procpid\nand pg_stat_activity.procpid = <AN IDLE TRANSACTION PROCESS>\norder by query_start;\n\n|\nAnd indeed, we constantly have idle transcations. They all use the same\ndummy table, a dual table substitute containing only one column, and one\nrow.\nWe use this table with tomcat-jdbc-pool to check connections health with\n'select 1 from dual' (we don't use 'select 1' for portability reasons,\nto work with oracle also).\nAnd these transactions are never commited. So we have a bunch of running\ntransactions, constantly running and recreated by tomcat-jdbc-pool. 
Some\nof them run for hours.\nThis seems to impact significally the ability of postgresql to vacuum...\nand thus to keep efficient indexes!\n\nChanging the configration of tomcat-jdbc-pool to 'select 1 from dual;\ncommit;' seems to resolve the problem.\n\nI'm going to ask on tomcat-jdbc-pool mailing-list if this is ok.\n\nThanks a lot for your help.\n\nFranck\n|\n\n||\n\n\n\n\n\n\n\nI have found the problem, using this query (found here http://stackoverflow.com/questions/3312929/postgresql-idle-in-transaction-diagnosis-and-reading-pg-locks)\nselect pg_class.relname, pg_locks.transactionid, pg_locks.mode,\n pg_locks.granted as \"g\", pg_stat_activity.current_query,\n pg_stat_activity.query_start,\n age(now(),pg_stat_activity.query_start) as \"age\",\n pg_stat_activity.procpid \nfrom pg_stat_activity,pg_locks\nleft outer join pg_class on (pg_locks.relation = pg_class.oid) \nwhere pg_locks.pid=pg_stat_activity.procpid\nand pg_stat_activity.procpid = <AN IDLE TRANSACTION PROCESS>\norder by query_start;\n\n And indeed, we constantly have idle transcations. They all use the\n same dummy table, a dual table substitute containing only one\n column, and one row.\n We use this table with tomcat-jdbc-pool to check connections\n health with 'select 1 from dual' (we don't use 'select 1' for\n portability reasons, to work with oracle also).\n And these transactions are never commited. So we have a bunch of\n running transactions, constantly running and recreated by\n tomcat-jdbc-pool. Some of them run for hours.\n This seems to impact significally the ability of postgresql to\n vacuum... and thus to keep efficient indexes!\n\n Changing the configration of tomcat-jdbc-pool to 'select 1 from\n dual; commit;' seems to resolve the problem.\n\n I'm going to ask on tomcat-jdbc-pool mailing-list if this is ok.\n\n Thanks a lot for your help.\n\n Franck",
"msg_date": "Fri, 18 Apr 2014 10:07:58 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fast distinct not working as expected"
},
{
"msg_contents": "On Fri, Apr 18, 2014 at 3:07 AM, Franck Routier\n<[email protected]> wrote:\n> I have found the problem, using this query (found here\n> http://stackoverflow.com/questions/3312929/postgresql-idle-in-transaction-diagnosis-and-reading-pg-locks)\n>\n> select pg_class.relname, pg_locks.transactionid, pg_locks.mode,\n> pg_locks.granted as \"g\", pg_stat_activity.current_query,\n> pg_stat_activity.query_start,\n> age(now(),pg_stat_activity.query_start) as \"age\",\n> pg_stat_activity.procpid\n> from pg_stat_activity,pg_locks\n> left outer join pg_class on (pg_locks.relation = pg_class.oid)\n> where pg_locks.pid=pg_stat_activity.procpid\n> and pg_stat_activity.procpid = <AN IDLE TRANSACTION PROCESS>\n> order by query_start;\n>\n>\n> And indeed, we constantly have idle transcations. They all use the same\n> dummy table, a dual table substitute containing only one column, and one\n> row.\n> We use this table with tomcat-jdbc-pool to check connections health with\n> 'select 1 from dual' (we don't use 'select 1' for portability reasons, to\n> work with oracle also).\n> And these transactions are never commited. So we have a bunch of running\n> transactions, constantly running and recreated by tomcat-jdbc-pool. Some of\n> them run for hours.\n> This seems to impact significally the ability of postgresql to vacuum... and\n> thus to keep efficient indexes!\n\nIt affects a lot of other things too. All locks held by those\ntransactions are still held. Failure to release transactions is a\nmajor application error that can and will explode the database. It's\nsimilar in severity to a memory leak. The basic rule of thumb is that\ntransactions should be held for the shortest possible duration --\nespecially those that write to the database.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Apr 2014 09:16:29 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fast distinct not working as expected"
}
] |
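The small_distinct() plpgsql function in the thread above implements what is often called a "loose index scan": fetch the smallest value, then repeatedly seek the first value strictly greater than the last one returned. A rough Python sketch of the same walk (purely illustrative, not part of the original thread), using `bisect` over a sorted list as a stand-in for the btree index:

```python
import bisect

def small_distinct(sorted_values):
    """Yield distinct values by repeatedly seeking the first value
    strictly greater than the last one returned -- the same walk the
    plpgsql function performs with
    'WHERE field > $1 ORDER BY field LIMIT 1'."""
    if not sorted_values:
        return
    current = sorted_values[0]  # SELECT ... ORDER BY field LIMIT 1
    yield current
    while True:
        # index seek: first entry > current (one probe per distinct value)
        i = bisect.bisect_right(sorted_values, current)
        if i == len(sorted_values):
            return
        current = sorted_values[i]
        yield current

print(list(small_distinct([1, 1, 1, 2, 2, 5, 5, 5, 9])))  # [1, 2, 5, 9]
```

Each probe skips a whole run of duplicates, which is why the technique is fast when there are few distinct values -- and why a single probe landing in a range dense with dead index entries (as in the thread) can suddenly take seconds.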
[
{
"msg_contents": "Hi Team,\n\nFor last 2 days we are facing issue with replication.\n\nWARNING: page 21 of relation base/1193555/19384612 does not exist\nCONTEXT: xlog redo insert: rel 1663/1193555/19384612; tid 21/1\nPANIC: WAL contains references to invalid pages\nCONTEXT: xlog redo insert: rel 1663/1193555/19384612; tid 21/1\nLOG: startup process (PID 20622) was terminated by signal 6: Aborted\nLOG: terminating any other active server processes\n\nStand by server went down with this error.\n\nwe just using warm stand by, but we enabled wal_level as 'hot_stanndby' in\nMaster server.\n\nI just read this mailing list, and in postgres 9.2.7 we have fix,\n\nBut as of now,\nif i change the wal level as archive, then this problem will go..? We are\njust using warm stand by. so shall we change the wal_level as archive..?\nCan you please reply this mail as soon as possible?\n-- \nBest Regards,\nVishalakshi.N\n\nHi Team,For last 2 days we are facing issue with replication.WARNING: page 21 of relation base/1193555/19384612 does not existCONTEXT: xlog redo insert: rel 1663/1193555/19384612; tid 21/1\nPANIC: WAL contains references to invalid pagesCONTEXT: xlog redo insert: rel 1663/1193555/19384612; tid 21/1LOG: startup process (PID 20622) was terminated by signal 6: AbortedLOG: terminating any other active server processes\nStand by server went down with this error.we just using warm stand by, but we enabled wal_level as 'hot_stanndby' in Master server. I just read this mailing list, and in postgres 9.2.7 we have fix,\nBut as of now,if i change the wal level as archive, then this problem will go..? We are just using warm stand by. so shall we change the wal_level as archive..? Can you please reply this mail as soon as possible? \n\n-- Best Regards,Vishalakshi.N",
"msg_date": "Fri, 18 Apr 2014 13:53:46 +0530",
"msg_from": "Vishalakshi Navaneethakrishnan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hot standby 9.2.1 PANIC: WAL contains references to invalid pages"
},
{
"msg_contents": "On Fri, Apr 18, 2014 at 1:23 AM, Vishalakshi Navaneethakrishnan\n<[email protected]> wrote:\n> if i change the wal level as archive, then this problem will go..? We are\n> just using warm stand by. so shall we change the wal_level as archive..? Can\n> you please reply this mail as soon as possible?\n\nAFAIK, the problem appears when hot_standby is set on, so you need to\nturn it off. Also, take a look at the link below:\n\nhttp://www.databasesoup.com/2013/12/why-you-need-to-apply-todays-update.html\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 19 Apr 2014 16:41:56 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hot standby 9.2.1 PANIC: WAL contains references to invalid pages"
}
] |
[
{
"msg_contents": "To whom it may concern,\n\nMy question and problem is posted on below site:\nhttp://stackoverflow.com/questions/23147724/postgresql-9-3-on-ubuntu-server-12-04-v-s-ms-sql-server-2008-r2-on-windows-7-ul\n\nWould you please help me to solve my problem.\n\nThank you for your help.\n\nAlex,\nregard\n\nTo whom it may concern,My question and problem is posted on below site:http://stackoverflow.com/questions/23147724/postgresql-9-3-on-ubuntu-server-12-04-v-s-ms-sql-server-2008-r2-on-windows-7-ul\nWould you please help me to solve my problem.Thank you for your help.Alex,regard",
"msg_date": "Fri, 18 Apr 2014 16:33:31 +0800",
"msg_from": "Wureka JI <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help on migrating data from MSSQL2008R2 to PostgreSQL 9.3"
}
] |
[
{
"msg_contents": "Hello,\n\nIf a table contains simple fields as well as large (hundreds of KiB)\ntext fields, will accessing only the simple fields cause the entire\nrecord data, including the large fields, to be read and unpacked?\n(e.g. SELECT int_field FROM table_with_large_text)\n\nMore details: after thinking about it some more, it might have\nsomething to do with tsearch2 and indexes: the large data in this case\nis a tsvector, indexed with GIN, and the query plan involves a\nre-check condition.\n\nThe query is of the form:\nSELECT simple_fields FROM table WHERE fts @@ to_tsquery('...').\n\nDoes the \"re-check condition\" mean that the original tsvector data is\nalways read from the table in addition to the index? That would be\nvery wasteful since data is practically duplicated in the table and in\nthe index. Any way around it?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 20 Apr 2014 01:15:25 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "tsearch2, large data and indexes"
},
{
"msg_contents": "On 04/20/2014 02:15 AM, Ivan Voras wrote:\n> Hello,\n>\n> If a table contains simple fields as well as large (hundreds of KiB)\n> text fields, will accessing only the simple fields cause the entire\n> record data, including the large fields, to be read and unpacked?\n> (e.g. SELECT int_field FROM table_with_large_text)\n\nNo.\n\n> More details: after thinking about it some more, it might have\n> something to do with tsearch2 and indexes: the large data in this case\n> is a tsvector, indexed with GIN, and the query plan involves a\n> re-check condition.\n>\n> The query is of the form:\n> SELECT simple_fields FROM table WHERE fts @@ to_tsquery('...').\n>\n> Does the \"re-check condition\" mean that the original tsvector data is\n> always read from the table in addition to the index?\n\nYes, if the re-check condition involves the fts column. I don't see why \nyou would have a re-check condition with a query like that, though. Are \nthere some other WHERE-conditions that you didn't show us?\n\nThe large fields are stored in the toast table. You can check if the \ntoast table is accessed with a query like this:\n\nselect * from pg_stat_all_tables where relid = (select reltoastrelid \nfrom pg_class where relname='table');\n\nRun that before and after your query, and see if the numbers change.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Apr 2014 09:40:37 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On 22 April 2014 08:40, Heikki Linnakangas <[email protected]> wrote:\n> On 04/20/2014 02:15 AM, Ivan Voras wrote:\n>> More details: after thinking about it some more, it might have\n>> something to do with tsearch2 and indexes: the large data in this case\n>> is a tsvector, indexed with GIN, and the query plan involves a\n>> re-check condition.\n>>\n>> The query is of the form:\n>> SELECT simple_fields FROM table WHERE fts @@ to_tsquery('...').\n>>\n>> Does the \"re-check condition\" mean that the original tsvector data is\n>> always read from the table in addition to the index?\n>\n>\n> Yes, if the re-check condition involves the fts column. I don't see why you\n> would have a re-check condition with a query like that, though. Are there\n> some other WHERE-conditions that you didn't show us?\n\nYes, I've read about tsearch2 and GIN indexes and there shouldn't be a\nrecheck condition - but there is.\nThis is the query:\n\nSELECT documents.id, title, raw_data, q, ts_rank(fts_data, q, 4) AS\nrank, html_filename\n FROM documents, to_tsquery('document') AS q\n WHERE fts_data @@ q\n ORDER BY rank DESC LIMIT 25;\n\nAnd here is the explain analyze: http://explain.depesz.com/s/4xm\nIt clearly shows a bitmap index scan operation is immediately followed\nby a recheck operation AND that the recheck operation actually does\nsomething, because it reduces the number of records from 61 to 58\n(!!!).\n\nThis is the table structure:\n\nnn=# \\d documents\n Table \"public.documents\"\n Column | Type | Modifiers\n---------------+----------+--------------------------------------------------------\n id | integer | not null default\nnextval('documents_id_seq'::regclass)\n ctime | integer | not null default unix_ts(now())\n dtime | integer | not null\n title | text | not null\n html_filename | text | not null\n raw_data | text |\n fts_data | tsvector | not null\n tags | text[] |\n dtype | integer | not null default 0\n flags | integer | not null default 0\nIndexes:\n 
\"documents_pkey\" PRIMARY KEY, btree (id)\n \"documents_html_filename\" UNIQUE, btree (html_filename)\n \"documents_dtime\" btree (dtime)\n \"documents_fts_data\" gin (fts_data)\n \"documents_tags\" gin (tags)\n\n\n> The large fields are stored in the toast table. You can check if the toast\n> table is accessed with a query like this:\n>\n> select * from pg_stat_all_tables where relid = (select reltoastrelid from\n> pg_class where relname='table');\n>\n> Run that before and after your query, and see if the numbers change.\n\nBefore:\n\nrelid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del|n_tup_hot_upd|n_live_tup|n_dead_tup|last_vacuum|last_autovacuum|last_analyze|last_autoanalyze|vacuum_count|autovacuum_count|analyze_count|autoanalyze_count\n27290|pg_toast|pg_toast_27283|3|0|2481289|10631453|993194|0|266306|0|147931|2514||2014-04-18\n00:49:11.066443+02|||0|11|0|0\n\nAfter:\n\nrelid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del|n_tup_hot_upd|n_live_tup|n_dead_tup|last_vacuum|last_autovacuum|last_analyze|last_autoanalyze|vacuum_count|autovacuum_count|analyze_count|autoanalyze_count\n27290|pg_toast|pg_toast_27283|3|0|2481347|10632814|993194|0|266306|0|147931|2514||2014-04-18\n00:49:11.066443+02|||0|11|0|0\n\nidx_scan has changed from 2481289 to 2481347 (58)\nidx_tup_fetch has changed from 10631453 to 10632814 (1361)\n\nNumber 58 corresponds to the number of rows found by the index, seen\nin the EXPLAIN output, I don't know where 1361 comes from.\n\nI'm also surprised by the amount of memory used for sorting (23 kB),\nsince the actually returned data from my query (all the tuples from\nall the 58 rows) amount to around 2 kB - but this is not an actual\nproblem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Apr 2014 09:57:14 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On Tue, Apr 22, 2014 at 12:57 AM, Ivan Voras <[email protected]> wrote:\n\n> On 22 April 2014 08:40, Heikki Linnakangas <[email protected]>\n> wrote:\n> > On 04/20/2014 02:15 AM, Ivan Voras wrote:\n> >> More details: after thinking about it some more, it might have\n> >> something to do with tsearch2 and indexes: the large data in this case\n> >> is a tsvector, indexed with GIN, and the query plan involves a\n> >> re-check condition.\n>\n\nI think bitmap scans always insert a recheck, do to the possibility of\nbitmap overflow.\n\nBut that doesn't mean that it ever got triggered. In 9.4., explain\n(analyze) will report on the overflows.\n\n\n> Yes, I've read about tsearch2 and GIN indexes and there shouldn't be a\n> recheck condition - but there is.\n> This is the query:\n>\n> SELECT documents.id, title, raw_data, q, ts_rank(fts_data, q, 4) AS\n> rank, html_filename\n> FROM documents, to_tsquery('document') AS q\n> WHERE fts_data @@ q\n> ORDER BY rank DESC LIMIT 25;\n>\n> And here is the explain analyze: http://explain.depesz.com/s/4xm\n> It clearly shows a bitmap index scan operation is immediately followed\n> by a recheck operation AND that the recheck operation actually does\n> something, because it reduces the number of records from 61 to 58\n> (!!!).\n>\n\nThat could be ordinary visibility checking, not qual rechecking.\n\nCheers,\n\nJeff\n\nOn Tue, Apr 22, 2014 at 12:57 AM, Ivan Voras <[email protected]> wrote:\nOn 22 April 2014 08:40, Heikki Linnakangas <[email protected]> wrote:\n\n> On 04/20/2014 02:15 AM, Ivan Voras wrote:\n>> More details: after thinking about it some more, it might have\n>> something to do with tsearch2 and indexes: the large data in this case\n>> is a tsvector, indexed with GIN, and the query plan involves a\n>> re-check condition.I think bitmap scans always insert a recheck, do to the possibility of bitmap overflow.But that doesn't mean that it ever got triggered. 
In 9.4., explain (analyze) will report on the overflows.\n\nYes, I've read about tsearch2 and GIN indexes and there shouldn't be a\nrecheck condition - but there is.\nThis is the query:\n\nSELECT documents.id, title, raw_data, q, ts_rank(fts_data, q, 4) AS\nrank, html_filename\n FROM documents, to_tsquery('document') AS q\n WHERE fts_data @@ q\n ORDER BY rank DESC LIMIT 25;\n\nAnd here is the explain analyze: http://explain.depesz.com/s/4xm\nIt clearly shows a bitmap index scan operation is immediately followed\nby a recheck operation AND that the recheck operation actually does\nsomething, because it reduces the number of records from 61 to 58\n(!!!).That could be ordinary visibility checking, not qual rechecking.Cheers,Jeff",
"msg_date": "Tue, 22 Apr 2014 08:58:56 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On 22 April 2014 17:58, Jeff Janes <[email protected]> wrote:\n> On Tue, Apr 22, 2014 at 12:57 AM, Ivan Voras <[email protected]> wrote:\n>>\n>> On 22 April 2014 08:40, Heikki Linnakangas <[email protected]>\n>> wrote:\n>> > On 04/20/2014 02:15 AM, Ivan Voras wrote:\n>> >> More details: after thinking about it some more, it might have\n>> >> something to do with tsearch2 and indexes: the large data in this case\n>> >> is a tsvector, indexed with GIN, and the query plan involves a\n>> >> re-check condition.\n>\n>\n> I think bitmap scans always insert a recheck, do to the possibility of\n> bitmap overflow.\n>\n> But that doesn't mean that it ever got triggered. In 9.4., explain\n> (analyze) will report on the overflows.\n\nOk, I found out what is happening, quoting from the documentation:\n\n\"GIN indexes are not lossy for standard queries, but their performance\ndepends logarithmically on the number of unique words. (However, GIN\nindexes store only the words (lexemes) oftsvector values, and not\ntheir weight labels. Thus a table row recheck is needed when using a\nquery that involves weights.)\"\n\nMy query doesn't have weights but the tsvector in the table has them -\nI take it this is what is meant by \"involves weights.\"\n\nSo... there's really no way for tsearch2 to produce results based on\nthe index alone, without recheck? This is... 
limiting.\n\n>> Yes, I've read about tsearch2 and GIN indexes and there shouldn't be a\n>> recheck condition - but there is.\n>> This is the query:\n>>\n>> SELECT documents.id, title, raw_data, q, ts_rank(fts_data, q, 4) AS\n>> rank, html_filename\n>> FROM documents, to_tsquery('document') AS q\n>> WHERE fts_data @@ q\n>> ORDER BY rank DESC LIMIT 25;\n>>\n>> And here is the explain analyze: http://explain.depesz.com/s/4xm\n>> It clearly shows a bitmap index scan operation is immediately followed\n>> by a recheck operation AND that the recheck operation actually does\n>> something, because it reduces the number of records from 61 to 58\n>> (!!!).\n>\n>\n> That could be ordinary visibility checking, not qual rechecking.\n\nVisibility as in transaction-wise? It's not, this was the only client\nconnected to the dev server, and the only transaction(s) happening.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Apr 2014 13:08:25 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tsearch2, large data and indexes"
},
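The documentation passage quoted above explains where the recheck comes from: a GIN index over tsvector stores lexemes without their weight labels. A toy Python model of that limitation (hypothetical names and structures, not the real index format) shows why a weight-qualified query must revisit the stored tsvector, while an unweighted query can be answered from the index alone:

```python
# Toy model: the "index" maps lexeme -> doc ids and, like a GIN index over
# tsvector, drops the weight labels. A query that involves weights cannot
# be answered from the index alone and must recheck each candidate row.

docs = {  # doc id -> {lexeme: weight label}, the "heap" tsvectors
    1: {"postgres": "A", "index": "D"},
    2: {"postgres": "D"},
    3: {"index": "B"},
}
index = {}  # lexeme -> set of doc ids (weights NOT stored)
for doc_id, tsvector in docs.items():
    for lexeme in tsvector:
        index.setdefault(lexeme, set()).add(doc_id)

def search(lexeme, weight=None):
    candidates = index.get(lexeme, set())
    if weight is None:
        return candidates  # answerable from the index alone
    # weight involved: recheck each candidate's stored tsvector
    return {d for d in candidates if docs[d].get(lexeme) == weight}

print(sorted(search("postgres")))       # [1, 2] -- no recheck needed
print(sorted(search("postgres", "A")))  # [1]    -- recheck filtered doc 2
```

As Heikki notes in the final message, ts_rank() has the same effect for a different reason: ranking needs the weights, so it must fetch (and detoast) the fts_data column even when the match itself is decided by the index.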
{
"msg_contents": "On Wed, Apr 23, 2014 at 8:08 AM, Ivan Voras <[email protected]> wrote:\n\n> >> And here is the explain analyze: http://explain.depesz.com/s/4xm\n> >> It clearly shows a bitmap index scan operation is immediately followed\n> >> by a recheck operation AND that the recheck operation actually does\n> >> something, because it reduces the number of records from 61 to 58\n> >> (!!!).\n> >\n> >\n> > That could be ordinary visibility checking, not qual rechecking.\n>\n> Visibility as in transaction-wise? It's not, this was the only client\n> connected to the dev server, and the only transaction(s) happening.\n\n\nI guess Jeff meant the visibility of tuples; in this case there may be 3\nrows that are referenced by the index but are not visible to your current\ntransaction (they may be visible to another transaction or simply haven't\nbeen marked by VACUUM).\n\nIf you have no concurrent transactions, you can run VACUUM on your table,\nrun the query again and see if the row counts match.\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Wed, 23 Apr 2014 09:26:30 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On 04/22/2014 10:57 AM, Ivan Voras wrote:\n> On 22 April 2014 08:40, Heikki Linnakangas <[email protected]> wrote:\n>> On 04/20/2014 02:15 AM, Ivan Voras wrote:\n>>> More details: after thinking about it some more, it might have\n>>> something to do with tsearch2 and indexes: the large data in this case\n>>> is a tsvector, indexed with GIN, and the query plan involves a\n>>> re-check condition.\n>>>\n>>> The query is of the form:\n>>> SELECT simple_fields FROM table WHERE fts @@ to_tsquery('...').\n>>>\n>>> Does the \"re-check condition\" mean that the original tsvector data is\n>>> always read from the table in addition to the index?\n>>\n>> Yes, if the re-check condition involves the fts column. I don't see why you\n>> would have a re-check condition with a query like that, though. Are there\n>> some other WHERE-conditions that you didn't show us?\n>\n> Yes, I've read about tsearch2 and GIN indexes and there shouldn't be a\n> recheck condition - but there is.\n> This is the query:\n>\n> SELECT documents.id, title, raw_data, q, ts_rank(fts_data, q, 4) AS\n> rank, html_filename\n> FROM documents, to_tsquery('document') AS q\n> WHERE fts_data @@ q\n> ORDER BY rank DESC LIMIT 25;\n\nIt's the ranking that's causing the detoasting. \"ts_rank(fts_data, q,\n4)\" has to fetch the contents of the fts_data column.\n\nSorry, I was confused earlier: the \"Recheck Cond:\" line is always there\nin the EXPLAIN output of bitmap index scans, even if the recheck\ncondition is never executed at runtime. It's because the executor has to\nbe prepared to run the recheck condition if the bitmap grows large\nenough to become \"lossy\", so that it only stores the page numbers of\nmatching tuples, not the individual tuples.\n\n- Heikki",
"msg_date": "Wed, 23 Apr 2014 16:00:11 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On Wed, Apr 23, 2014 at 4:08 AM, Ivan Voras <[email protected]> wrote:\n> Ok, I found out what is happening, quoting from the documentation:\n>\n> \"GIN indexes are not lossy for standard queries, but their performance\n> depends logarithmically on the number of unique words. (However, GIN\n> indexes store only the words (lexemes) of tsvector values, and not\n> their weight labels. Thus a table row recheck is needed when using a\n> query that involves weights.)\"\n>\n> My query doesn't have weights but the tsvector in the table has them -\n> I take it this is what is meant by \"involves weights.\"\n>\n> So... there's really no way for tsearch2 to produce results based on\n> the index alone, without recheck? This is... limiting.\n\nMy guess is that you could use the strip() function [1] to get rid of\nweights in your table or, probably better, in your index only, by using\nexpressions in the index and in the query, e.g.\n\n...USING gin (strip(fts_data))\n\nand\n\n... WHERE strip(fts_data) @@ q\n\n[1] http://www.postgresql.org/docs/9.3/static/textsearch-features.html\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]",
"msg_date": "Wed, 23 Apr 2014 15:56:32 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On 04/24/2014 01:56 AM, Sergey Konoplev wrote:\n> On Wed, Apr 23, 2014 at 4:08 AM, Ivan Voras <[email protected]> wrote:\n>> Ok, I found out what is happening, quoting from the documentation:\n>>\n>> \"GIN indexes are not lossy for standard queries, but their performance\n>> depends logarithmically on the number of unique words. (However, GIN\n>> indexes store only the words (lexemes) of tsvector values, and not\n>> their weight labels. Thus a table row recheck is needed when using a\n>> query that involves weights.)\"\n>>\n>> My query doesn't have weights but the tsvector in the table has them -\n>> I take it this is what is meant by \"involves weights.\"\n>>\n>> So... there's really no way for tsearch2 to produce results based on\n>> the index alone, without recheck? This is... limiting.\n>\n> My guess is that you could use strip() function [1] to get rid of\n> weights in your table or, that would probably be better, in your index\n> only by using expressions in it and in the query, eg.\n\nAs the docs say, the GIN index does not store the weights. As such,\nthere is no need to strip them. A recheck would be necessary if your\nquery needs the weights, precisely because the weights are not included\nin the index.\n\n(In the OP's query, it's the ranking that was causing the detoasting.)\n\n- Heikki",
"msg_date": "Thu, 24 Apr 2014 14:34:22 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On 24 April 2014 13:34, Heikki Linnakangas <[email protected]> wrote:\n\n> As the docs say, the GIN index does not store the weights. As such, there is\n> no need to strip them. A recheck would be necessary if your query needs the\n> weights, precisely because the weights are not included in the index.\n>\n> (In the OP's query, it's the ranking that was causing the detoasting.)\n\nThanks!\n\nMy problem is that I actually need the ranking. My queries can return\na large number of documents (tens of thousands) but I usually need\nonly the first couple of pages of most relevant results (e.g. 50-100\nrecords). With PostgreSQL and tsearch2, this means that the tens of\nthousands of documents found via the index are then detoasted and\nranked.\n\nDoes anyone have experience with external search engines which also\nhave ranking but are more efficient? How about Solr?",
"msg_date": "Thu, 24 Apr 2014 14:34:04 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On Thu, Apr 24, 2014 at 4:34 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 04/24/2014 01:56 AM, Sergey Konoplev wrote:\n>> My guess is that you could use strip() function [1] to get rid of\n>> weights in your table or, that would probably be better, in your index\n>> only by using expressions in it and in the query, eg.\n>\n> As the docs say, the GIN index does not store the weights. As such, there is\n> no need to strip them. A recheck would be necessary if your query needs the\n> weights, precisely because the weights are not included in the index.\n>\n> (In the OP's query, it's the ranking that was causing the detoasting.)\n\nstrip() is needed in the index because without it the index expression\nwon't match the one in the WHERE clause, and the index won't be used.\nThis way we could probably get rid of the \"involves weights\" thing\nthat causes the recheck condition, if I interpret the docs correctly.\n\nts_rank(), in its turn, is supposed to be used in a higher node of\nthe plan, so there is no way for it to affect the query.\n\nBut, again, it is just my guess, and it requires testing.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]",
"msg_date": "Thu, 24 Apr 2014 12:27:06 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On Thu, Apr 24, 2014 at 5:34 AM, Ivan Voras <[email protected]> wrote:\n> On 24 April 2014 13:34, Heikki Linnakangas <[email protected]> wrote:\n>\n>> As the docs say, the GIN index does not store the weights. As such, there is\n>> no need to strip them. A recheck would be necessary if your query needs the\n>> weights, precisely because the weights are not included in the index.\n>>\n>> (In the OP's query, it's the ranking that was causing the detoasting.)\n>\n> Thanks!\n>\n> My problem is that I actually need the ranking. My queries can return\n> a large number of documents (tens of thousands) but I usually need\n> only the first couple of pages of most relevant results (e.g. 50-100\n> records). With PostgreSQL and tsearch2, this means that the tens of\n> thousands of documents found via the index are then detoasted and\n> ranked.\n\nHeikki, what about the \"GIN improvements part 3: ordering in index\"\npatch, was it committed?\n\nhttp://www.postgresql.org/message-id/flat/CAPpHfduWvqv5b0XZ1DZuqAW29erDCULZp2wotfJzDBs7BHpKXw@mail.gmail.com\n\nIvan, there is hope that with this patch we could get a more effective\nFTS solution than any others I have heard about.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]",
"msg_date": "Thu, 24 Apr 2014 12:57:50 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
},
{
"msg_contents": "On 04/24/2014 10:57 PM, Sergey Konoplev wrote:\n> On Thu, Apr 24, 2014 at 5:34 AM, Ivan Voras <[email protected]> wrote:\n>> On 24 April 2014 13:34, Heikki Linnakangas <[email protected]> wrote:\n>>\n>>> As the docs say, the GIN index does not store the weights. As such, there is\n>>> no need to strip them. A recheck would be necessary if your query needs the\n>>> weights, precisely because the weights are not included in the index.\n>>>\n>>> (In the OP's query, it's the ranking that was causing the detoasting.)\n>>\n>> Thanks!\n>>\n>> My problem is that I actually need the ranking. My queries can return\n>> a large number of documents (tens of thousands) but I usually need\n>> only the first couple of pages of most relevant results (e.g. 50-100\n>> records). With PostgreSQL and tsearch2, this means that the tens of\n>> thousands of documents found via the index are then detoasted and\n>> ranked.\n>\n> Heikki, what about the \"GIN improvements part 3: ordering in index\"\n> patch, was it committed?\n>\n> http://www.postgresql.org/message-id/flat/CAPpHfduWvqv5b0XZ1DZuqAW29erDCULZp2wotfJzDBs7BHpKXw@mail.gmail.com\n\nNope, wasn't committed.\n\n- Heikki",
"msg_date": "Fri, 25 Apr 2014 09:37:52 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tsearch2, large data and indexes"
}
] |
[
{
"msg_contents": "Does anyone have some good tricks for mapping IP addresses to ASN numbers in very large volumes?\n\nThis is probably more a \"how would you approach this problem?\" than \"can you help me tweak this query\" \n\nI have a very large number of IP addresses (many millions) that are found in some security related logs. I need a fast way to categorize these back to their ASN numbers and Country codes. I have a table that was generated from BGP routing files to create a list of netblocks and their corresponding ASNs. For small groups of IP addresses (hundreds or thousands) it has always been fast enough to easily just do something like:\n\nselect count(*), asn_number from logaddr a, ipv4 b where a.ip <<= b.net_block \n\nwhere logaddr has information about IP addresses encountered and\nwhere \"ipv4\" has a list of mappings of which netblocks belong to which ASNs (yeah, it is 525,000+ Netblocks ...)\n\nI'm trying to figure out if there is a better way to use the \"cidr\" netblock to speed up the look up and matching.\n\nI've tried playing with going from ASN to the matching IP addresses, or the other way around ... 
for example, here is the Explain for looking at all the IP addresses in the logaddr (currently about 9 million records) that use ASN = 2119:\n\n\nexplain select count(distinct ip), country_code, asn_number from ipv4 a, logaddr b \nwhere b.ip <<= a.net_block and a.asn_number = 2119 \ngroup by country_code, asn_number;\n\n\n\"GroupAggregate (cost=36851922.32..38215407.51 rows=4 width=18)\"\n\" -> Sort (cost=36851922.32..37192793.61 rows=136348515 width=18)\"\n\" Sort Key: a.country_code, a.asn_number\"\n\" -> Nested Loop (cost=0.00..4448316.05 rows=136348515 width=18)\"\n\" Join Filter: ((b.ip)::inet <<= (a.net_block)::inet)\"\n\" -> Seq Scan on logaddr b (cost=0.00..347394.01 rows=9089901 width=7)\"\n\" -> Materialize (cost=0.00..10466.66 rows=30 width=18)\"\n\" -> Seq Scan on ipv4 a (cost=0.00..10466.51 rows=30 width=18)\"\n\" Filter: (asn_number = 2119)\"\n\n\nJust the first netblock from that ASN (2.148.0.0/14) occurs around 7,300 times, and does ok if I was just looking for that ... but I need a way to do \"big aggregate queries\" like \"what were the most common ASNs in this whole log file?\" or \"how often does country XX occur in this log file?\"\n\n\n\n----------------------------------------------------------\n\nGary Warner\nDirector of Research in Computer Forensics\nThe University of Alabama at Birmingham\nCenter for Information Assurance and Joint Forensics Research\n205.422.2113\[email protected]\n\n-----------------------------------------------------------\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 19 Apr 2014 21:12:08 -0500 (CDT)",
"msg_from": "Gary Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "IP addresses, NetBlocks, and ASNs"
},
{
"msg_contents": "\nOn Apr 19, 2014, at 7:12 PM, Gary Warner <[email protected]> wrote:\n\n> Does anyone have some good tricks for mapping IP addresses to ASN numbers in very large volumes?\n> \n> This is probably more a \"how would you approach this problem?\" than \"can you help me tweak this query\" \n> \n> I have a very large number of IP addresses (many millions) that are found in some security related logs. I need a fast way to categorize these back to their ASN numbers and Country codes. I have a table that was generated from BGP routing files to create a list of netblocks and their corresponding ASNs. For small groups of IP addresses (hundreds or thousands) it has always been fast enough to easily just do something like:\n> \n> select count(*), asn_number from logaddr a, ipv4 b where a.ip <<= b.net_block \n\nFor decent performance, you might want to look at http://ip4r.projects.pgfoundry.org. That'll let you use indexes for \"is this ip address in any of the ranges in this table.\".\n\n(I see hints that there'll be similar support for inet in 9.5, but I've not looked at the code).\n\n> where logaddr has information about IP addresses encountered and\n> where \"ipv4\" has a list of mappings of which netblocks belong to which ASNs (yeah, it is 525,000+ Netblocks ...)\n> \n> I'm trying to figure out if there is a better way to use the \"cidr\" netblock to speed up the look up and matching.\n> \n> I've tried playing with going from ASN to the matching IP addresses, or the other way around ... 
for example, here is the Explain for looking at all the IP addresses in the logaddr (currently about 9 million records) that use ASN = 2119:\n> \n> \n> explain select count(distinct ip), country_code, asn_number from ipv4 a, logaddr b \n> where b.ip <<= a.net_block and a.asn_number = 2119 \n> group by country_code, asn_number;\n> \n> \n> \"GroupAggregate (cost=36851922.32..38215407.51 rows=4 width=18)\"\n> \" -> Sort (cost=36851922.32..37192793.61 rows=136348515 width=18)\"\n> \" Sort Key: a.country_code, a.asn_number\"\n> \" -> Nested Loop (cost=0.00..4448316.05 rows=136348515 width=18)\"\n> \" Join Filter: ((b.ip)::inet <<= (a.net_block)::inet)\"\n> \" -> Seq Scan on logaddr b (cost=0.00..347394.01 rows=9089901 width=7)\"\n> \" -> Materialize (cost=0.00..10466.66 rows=30 width=18)\"\n> \" -> Seq Scan on ipv4 a (cost=0.00..10466.51 rows=30 width=18)\"\n> \" Filter: (asn_number = 2119)\"\n> \n> \n> Just the first netblock from that ASN (2.148.0.0/14) occurs around 7,300 times, and does ok if I was just looking for that ... but I need a way to do \"big aggregate queries\" like \"what were the most common ASNs in this whole log file?\" or \"how often does country XX occur in this log file?\"\n\nip4r is pretty good (and does support v6 too, despite the name).\n\nIf you need yet more performance (and you probably don't) the approach I've moved to using is to tag the records with ASN on import, using an in-core patricia trie kept in sync with the address ranges listed in the database to do the work. It's faster still, at the cost of a fair bit more work during import. (It's also a little more accurate in some cases, as the mapping of IP address to list of ASNs is dynamic, and you usually want the ASN at the time of an incident, not the one now.)\n\nCheers,\n Steve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 20 Apr 2014 07:57:48 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IP addresses, NetBlocks, and ASNs"
},
{
"msg_contents": "Steve, \n\nThe \"indexable IPv4 range\" described on that pgfoundry site sounds like exactly what I've been trying to figure out.\n\nThat said, I also agree with your approach of \"lookup and store with each record at fetch time\" for exactly the reason you give. If the location of that IP address changes, i don't want the CURRENT location, I want the \"at incident time\" location.\n\nThanks for the tips (though other suggestions are also very welcome...)\n\nI'll contact you off-list for more regarding the project, Steve.\n\nThanks again!\n\n----------------------------------------------------------\n\nGary Warner\nDirector of Research in Computer Forensics\nThe University of Alabama at Birmingham\nCenter for Information Assurance and Joint Forensics Research\n205.422.2113\[email protected]\n\n-----------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Steve Atkins\" <[email protected]>\nTo: \"Postgres Performance\" <[email protected]>\nCc: \"Gary Warner\" <[email protected]>\nSent: Sunday, April 20, 2014 9:57:48 AM\nSubject: Re: [PERFORM] IP addresses, NetBlocks, and ASNs\n\n\nOn Apr 19, 2014, at 7:12 PM, Gary Warner <[email protected]> wrote:\n\n> Does anyone have some good tricks for mapping IP addresses to ASN numbers in very large volumes?\n> \n> This is probably more a \"how would you approach this problem?\" than \"can you help me tweak this query\" \n> \n> I have a very large number of IP addresses (many millions) that are found in some security related logs. I need a fast way to categorize these back to their ASN numbers and Country codes. I have a table that was generated from BGP routing files to create a list of netblocks and their corresponding ASNs. 
For small groups of IP addresses (hundreds or thousands) it has always been fast enough to easily just do something like:\n> \n> select count(*), asn_number from logaddr a, ipv4 b where a.ip <<= b.net_block \n\nFor decent performance, you might want to look at http://ip4r.projects.pgfoundry.org. That'll let you use indexes for \"is this ip address in any of the ranges in this table.\".\n\n(I see hints that there'll be similar support for inet in 9.5, but I've not looked at the code).\n\n> where logaddr has information about IP addresses encountered and\n> where \"ipv4\" has a list of mappings of which netblocks belong to which ASNs (yeah, it is 525,000+ Netblocks ...)\n> \n> I'm trying to figure out if there is a better way to use the \"cidr\" netblock to speed up the look up and matching.\n> \n> I've tried playing with going from ASN to the matching IP addresses, or the other way around ... for example, here is the Explain for looking at all the IP addresses in the logaddr (currently about 9 million records) that use ASN = 2119:\n> \n> \n> explain select count(distinct ip), country_code, asn_number from ipv4 a, logaddr b \n> where b.ip <<= a.net_block and a.asn_number = 2119 \n> group by country_code, asn_number;\n> \n> \n> \"GroupAggregate (cost=36851922.32..38215407.51 rows=4 width=18)\"\n> \" -> Sort (cost=36851922.32..37192793.61 rows=136348515 width=18)\"\n> \" Sort Key: a.country_code, a.asn_number\"\n> \" -> Nested Loop (cost=0.00..4448316.05 rows=136348515 width=18)\"\n> \" Join Filter: ((b.ip)::inet <<= (a.net_block)::inet)\"\n> \" -> Seq Scan on logaddr b (cost=0.00..347394.01 rows=9089901 width=7)\"\n> \" -> Materialize (cost=0.00..10466.66 rows=30 width=18)\"\n> \" -> Seq Scan on ipv4 a (cost=0.00..10466.51 rows=30 width=18)\"\n> \" Filter: (asn_number = 2119)\"\n> \n> \n> Just the first netblock from that ASN (2.148.0.0/14) occurs around 7,300 times, and does ok if I was just looking for that ... 
but I need a way to do \"big aggregate queries\" like \"what were the most common ASNs in this whole log file?\" or \"how often does country XX occur in this log file?\"\n\nip4r is pretty good (and does support v6 too, despite the name).\n\nIf you need yet more performance (and you probably don't) the approach I've moved to using is to tag the records with ASN on import, using an in-core patricia trie kept in sync with the address ranges listed in the database to do the work. It's faster still, at the cost of a fair bit more work during import. (It's also a little more accurate in some cases, as the mapping of IP address to list of ASNs is dynamic, and you usually want the ASN at the time of an incident, not the one now.)\n\nCheers,\n Steve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 20 Apr 2014 16:01:51 -0500 (CDT)",
"msg_from": "Gary Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IP addresses, NetBlocks, and ASNs"
}
] |
[
{
"msg_contents": "As mentioned here and elsewhere (most recently in \"How can I get the query \nplanner to use a bitmap index scap instead of an index scan ?\" - 8 Mar \n2014), estimation of the relative cost of text search operations using \nGIN-indexed columns sometimes goes awry, particularly when there will be a \nlarge number of matches.\n\nThe planner may choose to use a sequential or unrelated index scan with @@ \nas a filter, especially when incorporated as a subquery, incurring \nsignificant cost (even without considering de-TOASTing). Pre-tsvectorizing \nthe column offers only a slight mitigation and can cause regressions (if \nnothing else, it adds another large column).\n\nWhat worked for me (and I'm hoping for others, though YMMV) was adding \n'OFFSET 0' to the subquery involving the indexed column, e.g.\n\n...\n(SELECT sk1.submission_id\nFROM submission_keywords sk1, keywords k1\nWHERE sk1.keyword_id = k1.keyword_id\n AND\nto_tsvector('english_nostop', k1.keyword) @@ to_tsquery('english_nostop', \n'tails')\nOFFSET 0)\n...\n\nThe result is a bitmap scan:\n------------------------------------------------------------------------------------------\nNested Loop\n(cost=8.73..4740.29 rows=21348 width=4)\n(actual time=0.621..13.661 rows=20097 loops=1)\n -> Bitmap Heap Scan on keywords k1\n (cost=8.30..1028.72 rows=755 width=4)\n (actual time=0.603..2.276 rows=752 loops=1)\n Recheck Cond:\n (to_tsvector('english_nostop'::regconfig, keyword) @@ \n'''tail'''::tsquery)\n -> Bitmap Index Scan on keyword_to_tsvector_keywords\n (cost=0.00..8.11 rows=755 width=0)\n (actual time=0.496..0.496 rows=756 loops=1)\n Index Cond:\n (to_tsvector('english_nostop'::regconfig, keyword) @@ \n'''tail'''::tsquery)\n -> Index Only Scan using keyword_id_submission_id_submission_keywords \non submission_keywords sk1\n (cost=0.43..3.47 rows=145 width=8)\n (actual time=0.005..0.010 rows=27 loops=752)\n Index Cond: (keyword_id = k1.keyword_id)\n Heap Fetches: 99\nTotal runtime: 14.809 
ms\n\nWithout this the test was moved to a filter inside a nested loop, with \ndisastrous results:\n-> Hash Semi Join\n (cost=23.37..23.51 rows=1 width=8)\n (actual time=0.090..0.090 rows=0 loops=594670)\n Hash Cond: (s1.submission_id = sk1.submission_id)\n -> Index Only Scan using submissions_pkey on submissions s1\n (cost=0.42..0.56 rows=1 width=4)\n (actual time=0.007..0.007 rows=1 loops=17352)\n Index Cond: (submission_id = s.submission_id)\n Heap Fetches: 8372\n -> Hash\n (cost=22.94..22.94 rows=1 width=4)\n (actual time=0.086..0.086 rows=0 loops=594670)\n Buckets: 1024 Batches: 1 Memory Usage: 0kB\n -> Nested Loop\n (cost=0.85..22.94 rows=1 width=4)\n (actual time=0.083..0.085 rows=0 loops=594670)\n -> Index Only Scan using file_keyword on \nsubmission_keywords sk1\n (cost=0.43..0.80 rows=13 width=8)\n (actual time=0.006..0.008 rows=9 loops=594670)\n Index Cond: (submission_id = s.submission_id)\n Heap Fetches: 21324\n -> Index Scan using keywords_pkey on keywords k1\n (cost=0.42..1.69 rows=1 width=4)\n (actual time=0.008..0.008 rows=0 loops=5329219)\n Index Cond: (keyword_id = sk1.keyword_id)\n Filter: (to_tsvector('english_nostop'::regconfig, \nkeyword) @@ '''tail'''::tsquery)\nTotal runtime: 55194.034 ms [there are other lines, but 50 sec is above]\n\nYes, that's a ~3000x speedup! Not all search terms benefit so much, but we \nget a lot of searches for the most common terms, and scans just get worse \nthe more you add.\n\nI got the idea from Seamus Abshere:\nhttp://seamusabshere.github.io/2013/03/29/hinting-postgres-and-mysql-with-offset-and-limit/\n\nI've heard it said that \"any Postgres DBA worth his salt\" knows this trick, \nas well as the use of \"WITH\" to create a common table expression. Alas, many \nof us are still learning . . . I beat my head over this for a week, and it's \naffected our site for far longer. 
This kind of issue makes people think they \nneed to replace PostgreSQL with a dedicated search solution to be able to \nscale, which is a shame.\n\nI know hinting has a bad rep, but this is a localized fix, and what has been \nsaid before leads me to believe that estimating the cost of such situations \nis a hard nut to crack - one which is not on anyone's plate right now.\n\nIncidentally, documentation section 7.6. \"LIMIT and OFFSET\" states that \n\"OFFSET 0 is the same as omitting the OFFSET clause\" which is clearly not \nthe case here. I appreciate that this is an implementation detail which \nmight change, but it's an important one that I think deserves mentioning.\n\nHope this helps,\n-- \nLaurence \"GreenReaper\" Parry\ngreenreaper.co.uk - wikifur.com - flayrah.com - inkbunny.net\n\"Eternity lies ahead of us, and behind. Have you drunk your fill?\" \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 19 Apr 2014 23:30:36 -0500",
"msg_from": "\"Laurence Parry\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Workaround: Planner preference for tsquery filter vs. GIN index in\n fast text search"
},
{
"msg_contents": "btw, 9.4 should be wiser in case of rare+common terms, thanks to GIN\nfast scan feature.\n\nOn Sun, Apr 20, 2014 at 8:30 AM, Laurence Parry <[email protected]> wrote:\n> As mentioned here and elsewhere (most recently in \"How can I get the query\n> planner to use a bitmap index scap instead of an index scan ?\" - 8 Mar\n> 2014), estimation of the relative cost of text search operations using\n> GIN-indexed columns sometimes goes awry, particularly when there will be a\n> large number of matches.\n>\n> The planner may choose to use a sequential or unrelated index scan with @@\n> as a filter, especially when incorporated as a subquery, incurring\n> significant cost (even without considering de-TOASTing). Pre-tsvectorizing\n> the column offers only a slight mitigation and can cause regressions (if\n> nothing else, it adds another large column).\n>\n> What worked for me (and I'm hoping for others, though YMMV) was adding\n> 'OFFSET 0' to the subquery involving the indexed column, e.g.\n>\n> ...\n> (SELECT sk1.submission_id\n> FROM submission_keywords sk1, keywords k1\n> WHERE sk1.keyword_id = k1.keyword_id\n> AND\n> to_tsvector('english_nostop', k1.keyword) @@ to_tsquery('english_nostop',\n> 'tails')\n> OFFSET 0)\n> ...\n>\n> The result is a bitmap scan:\n> ------------------------------------------------------------------------------------------\n> Nested Loop\n> (cost=8.73..4740.29 rows=21348 width=4)\n> (actual time=0.621..13.661 rows=20097 loops=1)\n> -> Bitmap Heap Scan on keywords k1\n> (cost=8.30..1028.72 rows=755 width=4)\n> (actual time=0.603..2.276 rows=752 loops=1)\n> Recheck Cond:\n> (to_tsvector('english_nostop'::regconfig, keyword) @@\n> '''tail'''::tsquery)\n> -> Bitmap Index Scan on keyword_to_tsvector_keywords\n> (cost=0.00..8.11 rows=755 width=0)\n> (actual time=0.496..0.496 rows=756 loops=1)\n> Index Cond:\n> (to_tsvector('english_nostop'::regconfig, keyword) @@\n> '''tail'''::tsquery)\n> -> Index Only Scan using 
keyword_id_submission_id_submission_keywords on\n> submission_keywords sk1\n> (cost=0.43..3.47 rows=145 width=8)\n> (actual time=0.005..0.010 rows=27 loops=752)\n> Index Cond: (keyword_id = k1.keyword_id)\n> Heap Fetches: 99\n> Total runtime: 14.809 ms\n>\n> Without this the test was moved to a filter inside a nested loop, with\n> disastrous results:\n> -> Hash Semi Join\n> (cost=23.37..23.51 rows=1 width=8)\n> (actual time=0.090..0.090 rows=0 loops=594670)\n> Hash Cond: (s1.submission_id = sk1.submission_id)\n> -> Index Only Scan using submissions_pkey on submissions s1\n> (cost=0.42..0.56 rows=1 width=4)\n> (actual time=0.007..0.007 rows=1 loops=17352)\n> Index Cond: (submission_id = s.submission_id)\n> Heap Fetches: 8372\n> -> Hash\n> (cost=22.94..22.94 rows=1 width=4)\n> (actual time=0.086..0.086 rows=0 loops=594670)\n> Buckets: 1024 Batches: 1 Memory Usage: 0kB\n> -> Nested Loop\n> (cost=0.85..22.94 rows=1 width=4)\n> (actual time=0.083..0.085 rows=0 loops=594670)\n> -> Index Only Scan using file_keyword on submission_keywords\n> sk1\n> (cost=0.43..0.80 rows=13 width=8)\n> (actual time=0.006..0.008 rows=9 loops=594670)\n> Index Cond: (submission_id = s.submission_id)\n> Heap Fetches: 21324\n> -> Index Scan using keywords_pkey on keywords k1\n> (cost=0.42..1.69 rows=1 width=4)\n> (actual time=0.008..0.008 rows=0 loops=5329219)\n> Index Cond: (keyword_id = sk1.keyword_id)\n> Filter: (to_tsvector('english_nostop'::regconfig,\n> keyword) @@ '''tail'''::tsquery)\n> Total runtime: 55194.034 ms [there are other lines, but 50 sec is above]\n>\n> Yes, that's a ~3000x speedup! 
Not all search terms benefit so much, but we\n> get a lot of searches for the most common terms, and scans just get worse\n> the more you add.\n>\n> I got the idea from Seamus Abshere:\n> http://seamusabshere.github.io/2013/03/29/hinting-postgres-and-mysql-with-offset-and-limit/\n>\n> I've heard it said that \"any Postgres DBA worth his salt\" knows this trick,\n> as well as the use of \"WITH\" to create a common table expression. Alas, many\n> of us are still learning . . . I beat my head over this for a week, and it's\n> affected our site for far longer. This kind of issue makes people think they\n> need to replace PostgreSQL with a dedicated search solution to be able to\n> scale, which is a shame.\n>\n> I know hinting has a bad rep, but this is a localized fix, and what has been\n> said before leads me to believe that estimating the cost of such situations\n> is a hard nut to crack - one which is not on anyone's plate right now.\n>\n> Incidentally, documentation section 7.6. \"LIMIT and OFFSET\" states that\n> \"OFFSET 0 is the same as omitting the OFFSET clause\" which is clearly not\n> the case here. I appreciate that this is an implementation detail which\n> might change, but it's an important one that I think deserves mentioning.\n>\n> Hope this helps,\n> --\n> Laurence \"GreenReaper\" Parry\n> greenreaper.co.uk - wikifur.com - flayrah.com - inkbunny.net\n> \"Eternity lies ahead of us, and behind. Have you drunk your fill?\"\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 20 Apr 2014 08:46:01 +0400",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround: Planner preference for tsquery filter vs.\n GIN index in fast text search"
},
{
"msg_contents": "> btw, 9.4 should be wiser in case of rare+common terms,\n> thanks to GIN fast scan feature.\n\nI'll look forward to it! We have a few other GIN indexes . . .\n\nI don't want to misrepresent my impression of Postgres performance; the only \nother case where I've made a significant improvement by tweaking was \npre-checking a couple of tables with count(*) > 0 before using them against \nseveral thousand submissions (checking lists of blocked artists and keywords \nagainst submission details). I've been pleasantly surprised at what it can \nhandle, especially after index-only scans came out.\n\n-- \nLaurence \"GreenReaper\" Parry \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 20 Apr 2014 00:44:34 -0500",
"msg_from": "\"Laurence Parry\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Workaround: Planner preference for tsquery filter vs. GIN index\n in fast text search"
},
{
"msg_contents": "On 04/20/2014 07:46 AM, Oleg Bartunov wrote:\n> btw, 9.4 should be wiser in case of rare+common terms, thanks to GIN\n> fast scan feature.\n\nIndeed, although we didn't actually do anything to the planner to make \nit understand when fast scan helps. Doing something about cost \nestimation is still on the 9.4 Open Items list, but I don't have any \nideas on what to do about it, and I haven't heard anything from \nAlexander about that either. That means that the cost estimation issue \nLaurence saw is going to be even worse in 9.4, because GIN is going to \nbe faster than a seq scan in more cases than before and the planner \ndoesn't know about it.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Apr 2014 09:28:23 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround: Planner preference for tsquery filter vs.\n GIN index in fast text search"
},
{
"msg_contents": "On Tue, Apr 22, 2014 at 10:28 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 04/20/2014 07:46 AM, Oleg Bartunov wrote:\n>>\n>> btw, 9.4 should be wiser in case of rare+common terms, thanks to GIN\n>> fast scan feature.\n>\n>\n> Indeed, although we didn't actually do anything to the planner to make it\n> understand when fast scan helps. Doing something about cost estimation is\n> still on the 9.4 Open Items list, but I don't have any ideas on what to do\n> about it, and I haven't heard anything from Alexander about that either.\n> That means that the cost estimation issue Laurence saw is going to be even\n> worse in 9.4, because GIN is going to be faster than a seq scan in more\n> cases than before and the planner doesn't know about it.\n>\n> - Heikki\n\nYou are right, we should return to that topic.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Apr 2014 11:15:02 +0400",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround: Planner preference for tsquery filter vs.\n GIN index in fast text search"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 04/20/2014 07:46 AM, Oleg Bartunov wrote:\n>> btw, 9.4 should be wiser in case of rare+common terms, thanks to GIN\n>> fast scan feature.\n\n> Indeed, although we didn't actually do anything to the planner to make \n> it understand when fast scan helps.\n\nThe given query has nothing to do with rare+common terms, since there is\nonly one term in the search --- and what's more, the planner's estimate\nfor that term is spot on already (755 estimated matches vs 752 actual).\n\nIt looks to me like the complaint is more probably about inappropriate\nchoice of join order; but since we've been allowed to see only some small\nportion of either the query or the plan, speculating about the root cause\nis a fool's errand.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Apr 2014 09:58:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Workaround: Planner preference for tsquery filter vs. GIN index\n in fast text search"
}
] |
[
{
"msg_contents": "Hi\nI am going to add a new column to a table for modify_date that needs to be updated every time the table is updated. Is it better to just update application code to set the modify_date to current_time, or create a Before-Update trigger on the table that will update the modify_date column to current_timestamp when the table is updated? I also have slony in place, so the trigger will need to be on master and slave. Slony will take care of suppressing it on the slave and enabling in the event of a switchover, but it is additional overhead and validation to make sure nothing failed on switchover.\nSo considering that we have slony, is it better to use application code to update the modify_date or use a trigger? Is a trigger essentially 2 updates to the table? Are there any other risks in using the trigger?\n\nThanks\nRiya Verghese\n\nThis message and any attachments are intended only for the use of the addressee and may contain information that is privileged and confidential. If the reader of the message is not the intended recipient or an authorized representative of the intended recipient, you are hereby notified that any dissemination of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by e-mail and delete the message and any attachments from your system.",
"msg_date": "Tue, 22 Apr 2014 01:16:15 +0000",
"msg_from": "\"Verghese, Riya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best practices for update timestamp with/without triggers"
},
{
"msg_contents": "On Tue, Apr 22, 2014 at 01:16:15AM +0000, Verghese, Riya wrote:\n> I am going to add a new column to a table for modify_date that needs\n> to be updated every time the table is updated. Is it better to just\n> update application code to set the modify_date to current_time, or\n> create a Before-Update trigger on the table that will update the\n> modify_date column to current_timestamp when the table is updated?\n> I also have slony in place, so the trigger will need to be on master\n> and slave. Slony will take care of suppressing it on the slave and\n> enabling in the event of a switchover, but it is additional overhead\n> and validation to make sure nothing failed on switchover.\n> So considering that we have slony, is it better to use application\n> code to update the modify_date or use a trigger?Is a trigger\n> essentially 2 updates to the table? Are there any other risks in using\n> the trigger?\n\nIt's better (in my opinion) to use trigger. And it's not two updates.\n\nJust make your trigger function like:\n\ncreate function sample_trigger() returns trigger as $$\nBEGIN\n NEW.modify_date := clock_timestamp();\n RETURN NEW;\nEND;\n$$ language plpgsql;\n\nand that's all.\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/",
"msg_date": "Wed, 23 Apr 2014 12:28:59 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practices for update timestamp with/without\n triggers"
}
] |
[
{
"msg_contents": "Hi\nI am going to add a new column to a table for modify_date that needs to be\nupdated every time the table is updated. Is it better to just update\napplication code to set the modify_date to current_time, or create a\nBefore-Update trigger on the table that will update the modify_date column\nto current_timestamp when the table is updated? I also have slony in place,\nso the trigger will need to be on master and slave. Slony will take care of\nsuppressing it on the slave and enabling in the event of a switchover, but\nit is additional overhead and validation to make sure nothing failed on\nswitchover.\n\nSo considering that we have slony, is it better to use application code to\nupdate the modify_date or use a trigger? Is a trigger essentially 2 updates\nto the table? Are there any other risks in using the trigger?\n\nThanks\n\nTory Blue",
"msg_date": "Mon, 21 Apr 2014 18:19:49 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best practice question"
},
{
"msg_contents": "Tory M Blue wrote\n> Hi\n> I am going to add a new column to a table for modify_date that needs to be\n> updated every time the table is updated. Is it better to just update\n> application code to set the modify_date to current_time, or create a\n> Before-Update trigger on the table that will update the modify_date column\n> to current_timestamp when the table is updated? I also have slony in\n> place,\n> so the trigger will need to be on master and slave. Slony will take care\n> of\n> suppressing it on the slave and enabling in the event of a switchover, but\n> it is additional overhead and validation to make sure nothing failed on\n> switchover.\n> \n> So considering that we have slony, is it better to use application code to\n> update the modify_date or use a trigger? Is a trigger essentially 2\n> updates\n> to the table? Are there any other risks in using the trigger?\n> \n> Thanks\n> \n> Tory Blue\n\nNot sure about the Slony trade-off but a before trigger will intercept\nbefore any physical writes and so appears to be a single action. I would\ngenerally use a trigger so that you know the updated value is recorded (and\ncan readily add logic for the no-changes situation - if desired). Not all\ntable activity has to be initiated by \"the application\" and since forgetting\nto do so is not going to result in any kind of error the probability of the\nfield becoming useless is non-zero.\n\nIt will be slower than doing it native but whether or not that is\nsignificant enough to discard the advantages of triggers is something only\nyou can decide - ideally after testing.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Best-practice-question-tp5801010p5801011.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Apr 2014 18:31:28 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice question"
},
{
"msg_contents": "On Mon, Apr 21, 2014 at 6:19 PM, Tory M Blue <[email protected]> wrote:\n> I am going to add a new column to a table for modify_date that needs to be\n> updated every time the table is updated. Is it better to just update\n> application code to set the modify_date to current_time, or create a\n> Before-Update trigger on the table that will update the modify_date column\n> to current_timestamp when the table is updated? I also have slony in place,\n> so the trigger will need to be on master and slave. Slony will take care of\n> suppressing it on the slave and enabling in the event of a switchover, but\n> it is additional overhead and validation to make sure nothing failed on\n> switchover.\n>\n> So considering that we have slony, is it better to use application code to\n> update the modify_date or use a trigger? Is a trigger essentially 2 updates\n> to the table? Are there any other risks in using the trigger?\n\nIn addition to the David's answer I would like to add the below.\n\nAFAIK Slony does not make any difference here. No, trigger doesn't\nmean 2 updates. It supplies its function with a NEW row variable where\nyou can change necessary columns and return the modified one as a\nresulting one. Another risk is the case when you need to update 2\ntables on different servers and have their modified_timestamp fields\nin sync. Here you need to determine the new value of the column in the\napplication.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Apr 2014 19:01:54 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best practice question"
}
] |
[
{
"msg_contents": "Hi,\n\ni'm working on a strange behaviour of planner,\n\n_PostgreSQL version :_ 8.4\n\n_Stats & vacuum state : _just done, the table is never changed after \ncreation ( Create table as...)\n\n_Here's my query :_\n\nSELECT *cabmnt___rfovsnide*::varchar FROM zcub_258 WHERE \n*cabmnt___rfovsnide* > '201301_reel' ORDER BY *cabmnt___rfovsnide* LIMIT 1\n\n_Here's the table :_\n\nThe table is partitionned by column *cabmnt___rfovsnide*\n\nThere is 24 partitions.\n\nCREATE TABLE zcub_258\n(\n dwhinvyea character varying(32),\n dwhinvmon text,\n dwhinvmonl character varying(32),\n dwhinvday text,\n mnt_2_rfodst0 character varying,\n mnt_2_rfodst1 character varying,\n mnt_2_rfodst2 character varying,\n mnt_2_rfodst3 character varying,\n mnt_2_rfodst4 character varying,\n nivmnt_2_rfodst integer,\n mnt___rfontr0 character varying,\n mnt___rfontr1 character varying,\n mnt___rfontr2 character varying,\n mnt___rfontr3 character varying,\n mnt___rfontr4 character varying,\n mnt___rfontr5 character varying,\n mnt___rfontr6 character varying,\n mnt___rfontr7 character varying,\n mnt___rfontr8 character varying,\n mnt___rfontr9 character varying,\n nivmnt___rfontr integer,\n* cabmnt___rfovsnide character varying(32),*\n cabmnt___rteprcide character varying(32),\n cabmnt___rtestdide character varying(32),\n key1 integer,\n key2 integer,the table\n key3 integer,\n q0 numeric,\n nothing integer,\n libmnt_2_rfodst0 character varying(32),\n liblmnt_2_rfodst0 character varying(100),\n libmnt_2_rfodst1 character varying(32),\n liblmnt_2_rfodst1 character varying(100),\n libmnt_2_rfodst2 character varying(32),\n liblmnt_2_rfodst2 character varying(100),\n libmnt_2_rfodst3 character varying(32),\n liblmnt_2_rfodst3 character varying(100),\n libmnt_2_rfodst4 character varying(32),\n liblmnt_2_rfodst4 character varying(100),\n libmnt___rfontr0 character varying(32),\n liblmnt___rfontr0 character varying(100),\n libmnt___rfontr1 character varying(32),\n liblmnt___rfontr1 
character varying(100),\n libmnt___rfontr2 character varying(32),\n liblmnt___rfontr2 character varying(100),\n libmnt___rfontr3 character varying(32),\n liblmnt___rfontr3 character varying(100),\n libmnt___rfontr4 character varying(32),\n liblmnt___rfontr4 character varying(100),\n libmnt___rfontr5 character varying(32),\n liblmnt___rfontr5 character varying(100),\n libmnt___rfontr6 character varying(32),\n liblmnt___rfontr6 character varying(100),\n libmnt___rfontr7 character varying(32),\n liblmnt___rfontr7 character varying(100),\n libmnt___rfontr8 character varying(32),\n liblmnt___rfontr8 character varying(100),\n libmnt___rfontr9 character varying(32),\n liblmnt___rfontr9 character varying(100)\n)\n\n_\n__the plan is : __\n\n_\nLimit (cost=1572842.00..1572842.00 rows=1 width=13)\n -> Sort (cost=1572842.00..1619836.83 rows=18797933 width=13)\n Sort Key: public.zcub_143.cabmnt___rfovsnide\n -> Result (cost=0.00..1478852.33 rows=18797933 width=13)\n -> Append (cost=0.00..1478852.33 rows=18797933 width=13)\n -> Seq Scan on zcub_143 (cost=0.00..67.91 \nrows=3591 width=82)\n -> Seq Scan on zcub_143_0 zcub_143 \n(cost=0.00..21941.36 rows=265936 width=11)\n -> Seq Scan on zcub_143_1 zcub_143 \n(cost=0.00..695.37 rows=8637 width=15)\n -> Seq Scan on zcub_143_2 zcub_143 \n(cost=0.00..36902.82 rows=454482 width=12)\n -> Seq Scan on zcub_143_3 zcub_143 \n(cost=0.00..116775.60 rows=1475460 width=15)\n -> Seq Scan on zcub_143_4 zcub_143 \n(cost=0.00..170064.21 rows=2111521 width=15)\n -> Seq Scan on zcub_143_5 zcub_143 \n(cost=0.00..44583.32 rows=559332 width=12)\n -> Seq Scan on zcub_143_6 zcub_143 \n(cost=0.00..48501.54 rows=608454 width=12)\n -> Seq Scan on zcub_143_7 zcub_143 \n(cost=0.00..53600.30 rows=687630 width=12)\n -> Seq Scan on zcub_143_8 zcub_143 \n(cost=0.00..57048.78 rows=731078 width=12)\n -> Seq Scan on zcub_143_9 zcub_143 \n(cost=0.00..60401.80 rows=773880 width=12)\n -> Seq Scan on zcub_143_10 zcub_143 \n(cost=0.00..64455.42 rows=828942 width=12)\n -> 
Seq Scan on zcub_143_11 zcub_143 \n(cost=0.00..67903.80 rows=872480 width=12)\n -> Seq Scan on zcub_143_12 zcub_143 \n(cost=0.00..71341.55 rows=915955 width=12)\n -> Seq Scan on zcub_143_13 zcub_143 \n(cost=0.00..74761.82 rows=959182 width=12)\n -> Seq Scan on zcub_143_14 zcub_143 \n(cost=0.00..78838.92 rows=1014292 width=12)\n -> Seq Scan on zcub_143_15 zcub_143 \n(cost=0.00..82330.08 rows=1058208 width=12)\n -> Seq Scan on zcub_143_16 zcub_143 \n(cost=0.00..168486.12 rows=2149712 width=15)\n -> Seq Scan on zcub_143_17 zcub_143 \n(cost=0.00..86700.75 rows=1112575 width=12)\n -> Seq Scan on zcub_143_18 zcub_143 \n(cost=0.00..25063.32 rows=302332 width=14)\n -> Seq Scan on zcub_143_19 zcub_143 \n(cost=0.00..47830.92 rows=614292 width=12)\n -> Seq Scan on zcub_143_20 zcub_143 \n(cost=0.00..47832.18 rows=614318 width=12)\n -> Seq Scan on zcub_143_21 zcub_143 \n(cost=0.00..51906.06 rows=665406 width=12)\n -> Seq Scan on zcub_143_22 zcub_143 \n(cost=0.00..818.38 rows=10238 width=5)\n\nThe query takes few minutes...\n\n_Our observation till now :_\n\n-> since the cabmnt___rfovsnide is the partition key, there is only one \nvalue by partition\n-> we have an index on all partition on cabmnt___rfovsnide : why dont \npostgres use it ?\n\nWe have a test environment with similar data and configuration *in \nversion 9.1*, and the same query is under 1ms, the plan is not same, it \nuse index on all partition and keep only one row from each.\n\nIs this behaviour quite logic in 8.4 ?\n\nThank you for your time.\n\nSouquieres Adam",
"msg_date": "Tue, 22 Apr 2014 12:14:00 +0200",
"msg_from": "Souquieres Adam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query on partitioned table not using index"
}
] |
[
{
"msg_contents": "Greetings,\nWe're currently having very poor performance for the following delete query.\nDELETE FROM TopTable WHERE id IN (xx, yy, zz);\n\nWe've observed that it takes around 7 seconds under normal load for each\nrow that's being deleted from TopTable and several minutes per deleted row under\nheavy load.\n\n\"id\" is the primary key in TopTable, and will trigger deletes of a few\nassociated rows in child tables using foreign keys with ON DELETE CASCADE\nas outlined below:\nTopTable\n--- Table 1 (references TopTable): 11.811.200 rows (rows deleted)\n--- Table 2 (references TopTable): 5.555.190 rows (rows deleted)\n--- Table 3 (references TopTable): 8.227.700 rows (no rows deleted)\n    --- Table 4 (references table 3): 4.294.140 rows (no rows deleted)\n    --- Table 5 (references table 3): 4.154.850 rows (no rows deleted)\n        --- Table 6 (references table 5): 5.185.450 rows (no rows deleted)\n    --- Table 7 (references table 3): 68.206 rows (no rows deleted)\n    --- Table 8 (references table 3): 108 rows (no rows deleted)\n--- Table 9 (references TopTable): 2448 rows (no rows deleted)\n\nIndexes have been defined for all columns referenced by the foreign key in\neach of the tables.\n\nHardware / Software info\nDatabase: PostgreSQL 9.2.2 64-bit\nOS: Red Hat Enterprise Linux Server release 5.5\nCPU: 8 core Intel Xeon 2.3GHz\nRAM: 16GB\nDisk: IBM SAN\n\nHow do we track down the cause of the poorly performing delete query?\n\nMany thanks for your advice\n\n/Jona",
"msg_date": "Thu, 24 Apr 2014 21:42:41 +0200",
"msg_from": "Jonatan Evald Buus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance for delete query"
},
{
"msg_contents": "Jonatan Evald Buus <[email protected]> writes:\n> We're currently having very poor performance for the following delete query.\n> DELETE FROM TopTable WHERE id IN (xx, yy, zz);\n\n> We've observed that it takes around 7 seconds under normal load to for each\n> row that's being from TopTable and several minutes pr deleted row under\n> heavy load.\n\nI'd really have to bet that you forgot to index one of the referencing\ntables. Are any of the foreign keys multi-column? If so you probably\nneed a matching multi-column index, not just indexes on the individual\nreferencing columns.\n\n> How do we track down the cause of the poorly performing delete query?\n\nEXPLAIN ANALYZE on a DELETE, for starters. That would isolate whether\nit's the DELETE itself or one of the foreign-key updates, and if the\nlatter which one. It's a little bit difficult to see the exact plan being\nused for a foreign-key update query, but I think one way you could do it\nis to enable auto_explain with auto_explain.log_nested_statements turned\non.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Apr 2014 16:29:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance for delete query"
},
{
"msg_contents": "Many thanks for the swift reply Tom, please see additional input below\n\n/Jona\n\n\nOn 24 April 2014 22:29, Tom Lane <[email protected]> wrote:\n\n> Jonatan Evald Buus <[email protected]> writes:\n> > We're currently having very poor performance for the following delete\n> query.\n> > DELETE FROM TopTable WHERE id IN (xx, yy, zz);\n>\n> > We've observed that it takes around 7 seconds under normal load to for\n> each\n> > row that's being from TopTable and several minutes pr deleted row under\n> > heavy load.\n>\n> I'd really have to bet that you forgot to index one of the referencing\n> tables.\n\n*That was our first thought, so we went through the child tables to check\nbut apparently we missed some. (please see below for the difference in the\nexplain analyze output)*\n\n> Are any of the foreign keys multi-column?\n\n\n*No, foreign keys are single column though some of the indexes (that we\npresume are being used?) are multi-column, with the foreign key column\nbeing the first field in the index.*\n\n\n*I.e.CREATE INDEX message_transaction_state_idx ON log.message_tbl USING\nbtree (txnid, stateid);*\n*where \"txnid\" is the foreign key column that references the TopTable.*\n\n If so you probably\n> need a matching multi-column index, not just indexes on the individual\n> referencing columns.\n>\n> > How do we track down the cause of the poorly performing delete query?\n>\n> EXPLAIN ANALYZE on a DELETE, for starters. That would isolate whether\n> it's the DELETE itself or one of the foreign-key updates, and if the\n> latter which one. 
It's a little bit difficult to see the exact plan being\n> used for a foreign-key update query, but I think one way you could do it\n> is to enable auto_explain with auto_explain.log_nested_statements turned\n> on.\n>\n\n*Output from EXPLAIN ANALYZE before additional indexes were added*\n\n\n\n\n\n\n\n\n\n\n*\"Delete on transaction_tbl (cost=0.00..10.89 rows=1 width=6) (actual\ntime=0.086..0.086 rows=0 loops=1)\"\" -> Index Scan using transaction_pk on\ntransaction_tbl (cost=0.00..10.89 rows=1 width=6) (actual\ntime=0.012..0.013 rows=1 loops=1)\"\" Index Cond: (id =\n4614717)\"\"Trigger for constraint address2transaction_fk on transaction_tbl:\ntime=0.460 calls=1\"\"Trigger for constraint certificate2transaction_fk on\ntransaction_tbl: time=0.470 calls=1\"\"Trigger for constraint msg2txn_fk on\ntransaction_tbl: time=0.433 calls=1\"\"Trigger for constraint\nnote2transaction_fk on transaction_tbl: time=0.808 calls=1\"\"Trigger for\nconstraint order2transaction_fk on transaction_tbl: time=0.535\ncalls=1\"\"Trigger for constraint order2transaction_fk on transaction_tbl:\ntime=0.222 calls=1\"\"Trigger for constraint certificate2order_fk on\norder_tbl: time=1827.944 calls=1\"\"Total runtime: 1830.995 ms\"*\n\n*Output from EXPLAIN ANALYZE after additional indexes were added*\n\n\n\n\n\n\n\n\n\n\n*\"Delete on transaction_tbl (cost=0.00..10.89 rows=1 width=6) (actual\ntime=0.070..0.070 rows=0 loops=1)\"\" -> Index Scan using transaction_pk on\ntransaction_tbl (cost=0.00..10.89 rows=1 width=6) (actual\ntime=0.022..0.023 rows=1 loops=1)\"\" Index Cond: (id =\n4614669)\"\"Trigger for constraint address2transaction_fk on transaction_tbl:\ntime=0.113 calls=1\"\"Trigger for constraint certificate2transaction_fk on\ntransaction_tbl: time=0.424 calls=1\"\"Trigger for constraint msg2txn_fk on\ntransaction_tbl: time=2.614 calls=1\"\"Trigger for constraint\nnote2transaction_fk on transaction_tbl: time=0.350 calls=1\"\"Trigger for\nconstraint order2transaction_fk on transaction_tbl: 
time=0.231\ncalls=1\"\"Trigger for constraint order2transaction_fk on transaction_tbl:\ntime=0.088 calls=1\"\"Trigger for constraint certificate2order_fk on\norder_tbl: time=0.165 calls=1\"\"Total runtime: 4.097 ms\"*\n\nWhy is \"order2transaction_fk\" being triggered twice? Is that because\nthere're two affected rows?\n\n\n\n>\n> regards, tom lane\n>\n\n\n\n-- \nJONATAN EVALD BUUS\n\nCTO\n\nMobile US +1 (305) 331-5242\nMobile DK +45 2888 2861\nTelephone +1 (305) 777-0392\nFax. +1 (305) 777-0449\[email protected]\nwww.cellpointmobile.com\n\nCellPoint Mobile Inc.\n4000 Ponce de Leon Boulevard\nSuite 470\nCoral Gables, FL 33146\nUSA\n\n'Mobilizing the Enterprise'",
"msg_date": "Thu, 24 Apr 2014 22:57:05 +0200",
"msg_from": "Jonatan Evald Buus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance for delete query"
},
{
"msg_contents": "Jonatan Evald Buus <[email protected]> writes:\n> On 24 April 2014 22:29, Tom Lane <[email protected]> wrote:\n>> I'd really have to bet that you forgot to index one of the referencing\n>> tables.\n\n> *That was our first thought, so we went through the child tables to check\n> but apparently we missed some. (please see below for the difference in the\n> explain analyze output)*\n\nI'm confused. Your second EXPLAIN ANALYZE looks like you fixed the\nproblem. Are you still thinking there's an issue?\n\n> Why is \"order2transaction_fk\" being triggered twice? Is that because\n> there're two affected rows?\n\nNo, I'd have expected a delete of multiple rows to show as calls=N,\nnot N separate entries.\n\nMaybe there are recursive queries buried under here somewhere?\nThat is, are you expecting any of the cascaded deletes to cascade further?\nI don't recall exactly what EXPLAIN is likely to do with such cases.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Apr 2014 17:25:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance for delete query"
},
{
"msg_contents": "On 24 April 2014 23:25, Tom Lane <[email protected]> wrote:\n\n> Jonatan Evald Buus <[email protected]> writes:\n> > On 24 April 2014 22:29, Tom Lane <[email protected]> wrote:\n> >> I'd really have to bet that you forgot to index one of the referencing\n> >> tables.\n>\n> > *That was our first thought, so we went through the child tables to check\n> > but apparently we missed some. (please see below for the difference in\n> the\n> > explain analyze output)*\n>\n> I'm confused. Your second EXPLAIN ANALYZE looks like you fixed the\n> problem. Are you still thinking there's an issue?\n>\n\nI believe we improved it at least, whether it's permanently fixed remains\nto be seen once transaction volume increases again.\n\n>\n> > Why is \"order2transaction_fk\" being triggered twice? Is that because\n> > there're two affected rows?\n>\n> No, I'd have expected a delete of multiple rows to show as calls=N,\n> not N separate entries.\n>\n> Maybe there are recursive queries buried under here somewhere?\n> That is, are you expecting any of the cascaded deletes to cascade further?\n> I don't recall exactly what EXPLAIN is likely to do with such cases.\n>\n\nDeleting from the TopTable (Transaction), I'd expect the following effects:\n- 0 affected rows in Address using *address2transaction_fk*\n- 0 affected rows in Certificate using *certificate2transaction_fk*\n- 0 affected rows in Note using *note2transaction_fk*\n- 1 - N affected rows in Order using *order2transaction_fk*\n\nA deletion in \"Order\" would also trigger an ON DELETE CASCADE to\nCertificate using *certificate2order_fk*, which affects 0 rows.\n\nThis doesn't explain the extra trigger of \"order2transaction_fk\".\nAny guidelines as to how we may investigate this further would be greatly\nappreciated.\n\n\n> regards, tom lane\n>\n\n\n\n-- \nJONATAN EVALD BUUS\n\nCTO\n\nMobile US +1 (305) 331-5242\nMobile DK +45 2888 2861\nTelephone +1 (305) 777-0392\nFax. 
+1 (305) 777-0449\[email protected]\nwww.cellpointmobile.com\n\nCellPoint Mobile Inc.\n4000 Ponce de Leon Boulevard\nSuite 470\nCoral Gables, FL 33146\nUSA\n\n'Mobilizing the Enterprise'",
"msg_date": "Fri, 25 Apr 2014 07:04:59 +0200",
"msg_from": "Jonatan Evald Buus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance for delete query"
},
{
"msg_contents": "Jonatan Evald Buus <[email protected]> writes:\n> On 24 April 2014 23:25, Tom Lane <[email protected]> wrote:\n>> Jonatan Evald Buus <[email protected]> writes:\n>>> Why is \"order2transaction_fk\" being triggered twice? Is that because\n>>> there're two affected rows?\n\n>> No, I'd have expected a delete of multiple rows to show as calls=N,\n>> not N separate entries.\n\n> This doesn't explain the extra trigger of \"order2transaction_fk\".\n> Any guidelines as to how we may investigate this further would be greatly\n> appreciated.\n\nIf you could show us your exact database schema, it might become\nclearer what's happening. I'm wondering about duplicate constraint names\nfor instance ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Apr 2014 10:36:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance for delete query"
}
] |
[
{
"msg_contents": "Hi!\n\nHalf a day ago one of our production PG servers (arguably busiest one)\nbecome very slow; I went to investigate the issue and found that it runs\nsimultaneously '(auto)VACUUM ANALYZE recommendations' - largest table on\nthat server - and checkpoint, giving a 100% disk load, that resulted with\nqueue of queries which only made things worse of course.\nFor a while I tried to set different ionice settings to wal writer and\ncheckpointer processes (-c 2 -n [5-7]) for no visible effect. Then I\ncancelled autovacuum and it seems to help.\n\nWhen things settled up and day was reaching end I started VACUUM ANALYZE of\nthis table by hand and continued observations.\nVacuum ended in about 2 hours and half. But soon I noticed that server\nstarted another autovacuum of the same table...\nProblems returned and resolved after it finished (not 100% sure it was the\nreason though).\n\nIn the morning autovacuum was back. And then it finished and I gone to\nwork. And now I'm here and there is autovacuum again %)\nAnd load too. But I had to say, sometimes there is autovacuum and no load.\nI'm not really sure autovacuum is the culprit, but there is correlation and\nit behaves strange anyway.\nIn the app code nothing changed I believe.\n\nAny recommendations where to dig further?\n\nPG version: 9.2.8\n\nServer hardware: E5-2690 x 2, 96GB RAM, 146GB 15k SAS x 8, HP P420i 2G RAID\ncontroller, raid 1 for system and raid 50 for DB.\n\nPerfomance settings changed:\nshared_buffers = 24GB\ntemp_buffers = 128MB\nwork_mem = 16MB\nmaintenance_work_mem = 1GB\neffective_cache_size = 48GB\neffective_io_concurrency = 6 (I just realised I have to set it to 4, right?)\nsynchronous_commit = off\ncheckpoint_segments = 64\ncheckpoint_timeout = 10min\ncheckpoint_completion_target = 0.8\ncheckpoint_warning = 3600s\n\nPlus I set vm.dirty_background_bytes to 134217728 and vm.dirty_bytes to\n1073741824.\n\nAlso I believe now that raid 1 for system might be a mistake. 
Maybe give it\nfor WAL?\n\nBest regards,\nDmitriy Shalashov",
"msg_date": "Fri, 25 Apr 2014 11:47:51 +0400",
"msg_from": "Дмитрий Шалашов <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server vacuuming the same table again and again"
},
{
"msg_contents": "Hi Dmitry,\n\nOn Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]> wrote:\n> cancelled autovacuum and it seems to help.\n\n> In the morning autovacuum was back. And then it finished and I gone to work.\n\nActually, these two things are tightly bound and there is no chance to\navoid vacuum, you can only postpone it, this kind of work eventually\nhas to be done.\n\nWhat you really need to do as a first thing - configure your\nautovacuum aggressively enough and then maybe ionice autovacuum\ninstead of the mission critical checkpointer or bgwriter.\n\nWhich exact values have you in the following settings:\n\n autovacuum_analyze_scale_factor\n autovacuum_analyze_threshold\n autovacuum_freeze_max_age\n autovacuum_max_workers\n autovacuum_naptime\n autovacuum_vacuum_cost_delay\n autovacuum_vacuum_cost_limit\n autovacuum_vacuum_scale_factor\n autovacuum_vacuum_threshold\n log_autovacuum_min_duration\n\n?\n\nBest regards, Ilya\n>\n> Best regards,\n> Dmitriy Shalashov\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Apr 2014 10:12:29 +0200",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
{
"msg_contents": "Hi Ilya!\n\n> Actually, thise two things are tightly bound and there is no chance to avoid\nvacuum, you can only postpone it, this kind of work eventually supposed to\nbe done.\n\nI understand that autovacuum has to be done, but not right after previous\nautovacuum? And then again and again.\nAnd after cancelling that first autovacuum I started another one by hand;\nfrom there no autovacuum was cancelled.\n\n> ionice autovacuum instead of mission critical ckeckpointer or bgwriter\nYeah, that was desperate. I restarted the server when I had a chance - to drop\nmy ionice settings back to defaults.\n\n> Which exact values have you in the following settings:\n\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_analyze_threshold = 50\nautovacuum_freeze_max_age = 200000000\nautovacuum_max_workers = 3\nautovacuum_naptime = 60\nautovacuum_vacuum_cost_delay = 20\nautovacuum_vacuum_cost_limit = -1\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_vacuum_threshold = 50\nlog_autovacuum_min_duration = 0\n\nAll defaults except the last one I believe.\n\n\nMeanwhile I noticed in the night logs:\ncheckpoints are occurring too frequently (138 seconds apart)\nConsider increasing the configuration parameter \"checkpoint_segments\".\n\nIncreased checkpoint_segments to 256 and reloaded config.\n\n\nBest regards,\nDmitriy Shalashov\n\n\n2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky <\[email protected]>:\n\n> Hi Dmitry,\n>\n> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]>\n> wrote:\n> > cancelled autovacuum and it seems to help.\n>\n> > In the morning autovacuum was back. 
And then it finished and I gone to\n> work.\n>\n> Actually, thise two things are tightly bound and there is no chance to\n> avoid vacuum, you can only postpone it, this kind of work eventually\n> supposed to be done.\n>\n> What you really need to do as a first thing - configure your\n> autovacuum aggressively enough and then mayde ionice autovacuum\n> instead of mission critical ckeckpointer or bgwriter.\n>\n> Which exact values have you in the following settings:\n>\n> autovacuum_analyze_scale_factor\n> autovacuum_analyze_threshold\n> autovacuum_freeze_max_age\n> autovacuum_max_workers\n> autovacuum_naptime\n> autovacuum_vacuum_cost_delay\n> autovacuum_vacuum_cost_limit\n> autovacuum_vacuum_scale_factor\n> autovacuum_vacuum_threshold\n> log_autovacuum_min_duration\n>\n> ?\n>\n> Best regards, Ilya\n> >\n> > Best regards,\n> > Dmitriy Shalashov\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>",
"msg_date": "Fri, 25 Apr 2014 12:22:45 +0400",
"msg_from": "Дмитрий Шалашов <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
{
"msg_contents": "And right now we have a new kind of problem.\nPreviously during load disk was 100% busy; now we have around 100 active\nstate queries, 100% loaded proc, but disk is virtually idle... Normally we\nhave under 10 active queries.\nAny hints on that?\n\n\nBest regards,\nDmitriy Shalashov\n\n\n2014-04-25 12:22 GMT+04:00 Дмитрий Шалашов <[email protected]>:\n\n> Hi Ilya!\n>\n> > Actually, thise two things are tightly bound and there is no chance to avoid\n> vacuum, you can only postpone it, this kind of work eventually supposed\n> to be done.\n>\n> I understand that autovacuum has to be done, but not right after previous\n> autovacuum? And then again and again.\n> And after cancelling that first autovacuum I started another one by hand;\n> from there no autovacuum was cancelled.\n>\n> > ionice autovacuum instead of mission critical ckeckpointer or bgwriter\n> Yeah, that was desperate. I restarted server when I had a chance - to drop\n> my ionice settings back to defaults.\n>\n> > Which exact values have you in the following settings:\n>\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_analyze_threshold = 50\n> autovacuum_freeze_max_age = 200000000\n> autovacuum_max_workers = 3\n> autovacuum_naptime = 60\n> autovacuum_vacuum_cost_delay = 20\n> autovacuum_vacuum_cost_limit = -1\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_vacuum_threshold = 50\n> log_autovacuum_min_duration = 0\n>\n> All defaults except last one I believe.\n>\n>\n> Minwhile I noticed in the night logs:\n> checkpoints are occurring too frequently (138 seconds apart)\n> Consider increasing the configuration parameter \"checkpoint_segments\".\n>\n> Increased checkpoint_segments to 256 and reloaded config.\n>\n>\n> Best regards,\n> Dmitriy Shalashov\n>\n>\n> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky <\n> [email protected]>:\n>\n> Hi Dmitry,\n>>\n>> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]>\n>> wrote:\n>> > cancelled autovacuum and it seems to 
help.\n>>\n>> > In the morning autovacuum was back. And then it finished and I gone to\n>> work.\n>>\n>> Actually, thise two things are tightly bound and there is no chance to\n>> avoid vacuum, you can only postpone it, this kind of work eventually\n>> supposed to be done.\n>>\n>> What you really need to do as a first thing - configure your\n>> autovacuum aggressively enough and then mayde ionice autovacuum\n>> instead of mission critical ckeckpointer or bgwriter.\n>>\n>> Which exact values have you in the following settings:\n>>\n>> autovacuum_analyze_scale_factor\n>> autovacuum_analyze_threshold\n>> autovacuum_freeze_max_age\n>> autovacuum_max_workers\n>> autovacuum_naptime\n>> autovacuum_vacuum_cost_delay\n>> autovacuum_vacuum_cost_limit\n>> autovacuum_vacuum_scale_factor\n>> autovacuum_vacuum_threshold\n>> log_autovacuum_min_duration\n>>\n>> ?\n>>\n>> Best regards, Ilya\n>> >\n>> > Best regards,\n>> > Dmitriy Shalashov\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>>\n>\n>",
"msg_date": "Fri, 25 Apr 2014 12:29:02 +0400",
"msg_from": "Дмитрий Шалашов <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
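[Editor's note: the settings Dmitriy lists above can be verified straight from the server rather than from memory; a minimal sketch, querying `pg_settings` for the autovacuum parameters and where each value comes from:]

```sql
-- List autovacuum-related settings and their origin;
-- source = 'default' confirms a value was never overridden.
SELECT name, setting, source
FROM   pg_catalog.pg_settings
WHERE  name LIKE 'autovacuum%'
ORDER  BY name;
```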
{
"msg_contents": "Dmitry,\n\nHow is you filesystem under database exactly mount? (mount -l) And\njust in case, while increasing checkpoint_segments, better to increase\ncheckpoint_timeout, otherwise all checkpoints will be still frequent\nbecause segment threshold will be never reached. You could monitor\nyour pg_stat_bgwriter to understand which type of checkpoint happens\nmore frequent.\n\nOn Fri, Apr 25, 2014 at 10:22 AM, Дмитрий Шалашов <[email protected]> wrote:\n> I understand that autovacuum has to be done, but not right after previous\n> autovacuum? And then again and again.\n\nThat is exactly what happen: your autovacuum is not aggresive enough\nand that is why it runs constantly instead of doing it s job by small\nportions.\n\nyou should try something like this:\n\n autovacuum | on\n autovacuum_analyze_scale_factor | 0.05\n autovacuum_analyze_threshold | 5\n autovacuum_freeze_max_age | 200000000\n autovacuum_max_workers | 10 # set 10 for example and\nthen you could see - if they all working constantly, maybe you need\nmore. or less if not.\n autovacuum_multixact_freeze_max_age | 400000000\n autovacuum_naptime | 1\n autovacuum_vacuum_cost_delay | 10\n autovacuum_vacuum_cost_limit | -1\n autovacuum_vacuum_scale_factor | 0.01\n autovacuum_vacuum_threshold | 10\n log_autovacuum_min_duration | -1\n\n\nBest regards, Ilya\n\n> Best regards,\n> Dmitriy Shalashov\n>\n>\n> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky\n> <[email protected]>:\n>\n>> Hi Dmitry,\n>>\n>> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]>\n>> wrote:\n>> > cancelled autovacuum and it seems to help.\n>>\n>> > In the morning autovacuum was back. 
And then it finished and I gone to\n>> > work.\n>>\n>> Actually, thise two things are tightly bound and there is no chance to\n>> avoid vacuum, you can only postpone it, this kind of work eventually\n>> supposed to be done.\n>>\n>> What you really need to do as a first thing - configure your\n>> autovacuum aggressively enough and then mayde ionice autovacuum\n>> instead of mission critical ckeckpointer or bgwriter.\n>>\n>> Which exact values have you in the following settings:\n>>\n>> autovacuum_analyze_scale_factor\n>> autovacuum_analyze_threshold\n>> autovacuum_freeze_max_age\n>> autovacuum_max_workers\n>> autovacuum_naptime\n>> autovacuum_vacuum_cost_delay\n>> autovacuum_vacuum_cost_limit\n>> autovacuum_vacuum_scale_factor\n>> autovacuum_vacuum_threshold\n>> log_autovacuum_min_duration\n>>\n>> ?\n>>\n>> Best regards, Ilya\n>> >\n>> > Best regards,\n>> > Dmitriy Shalashov\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>\n>\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Apr 2014 11:19:00 +0200",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
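[Editor's note: the `pg_stat_bgwriter` check Ilya recommends can be as simple as comparing timed against requested checkpoints; a sketch:]

```sql
-- checkpoints_req far above checkpoints_timed means checkpoint_segments
-- is exhausted before checkpoint_timeout ever fires (segment-driven
-- checkpoints), which matches the "138 seconds apart" warning above.
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_backend
FROM   pg_stat_bgwriter;
```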
{
"msg_contents": "On Fri, Apr 25, 2014 at 10:29 AM, Дмитрий Шалашов <[email protected]> wrote:\n> Previously during load disk was 100% busy; now we have around 100 active\n> state queries, 100% loaded proc, but disk is virtually idle...\n\nThat was happen after changing checkpoit_segments setting?\n\n>\n>\n> Best regards,\n> Dmitriy Shalashov\n>\n>\n> 2014-04-25 12:22 GMT+04:00 Дмитрий Шалашов <[email protected]>:\n>\n>> Hi Ilya!\n>>\n>> > Actually, thise two things are tightly bound and there is no chance to\n>> > avoid vacuum, you can only postpone it, this kind of work eventually\n>> > supposed to be done.\n>>\n>> I understand that autovacuum has to be done, but not right after previous\n>> autovacuum? And then again and again.\n>> And after cancelling that first autovacuum I started another one by hand;\n>> from there no autovacuum was cancelled.\n>>\n>> > ionice autovacuum instead of mission critical ckeckpointer or bgwriter\n>> Yeah, that was desperate. I restarted server when I had a chance - to drop\n>> my ionice settings back to defaults.\n>>\n>> > Which exact values have you in the following settings:\n>>\n>> autovacuum_analyze_scale_factor = 0.1\n>> autovacuum_analyze_threshold = 50\n>> autovacuum_freeze_max_age = 200000000\n>> autovacuum_max_workers = 3\n>> autovacuum_naptime = 60\n>> autovacuum_vacuum_cost_delay = 20\n>> autovacuum_vacuum_cost_limit = -1\n>> autovacuum_vacuum_scale_factor = 0.2\n>> autovacuum_vacuum_threshold = 50\n>> log_autovacuum_min_duration = 0\n>>\n>> All defaults except last one I believe.\n>>\n>>\n>> Minwhile I noticed in the night logs:\n>> checkpoints are occurring too frequently (138 seconds apart)\n>> Consider increasing the configuration parameter \"checkpoint_segments\".\n>>\n>> Increased checkpoint_segments to 256 and reloaded config.\n>>\n>>\n>> Best regards,\n>> Dmitriy Shalashov\n>>\n>>\n>> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky\n>> <[email protected]>:\n>>\n>>> Hi Dmitry,\n>>>\n>>> On Fri, Apr 25, 2014 at 9:47 
AM, Дмитрий Шалашов <[email protected]>\n>>> wrote:\n>>> > cancelled autovacuum and it seems to help.\n>>>\n>>> > In the morning autovacuum was back. And then it finished and I gone to\n>>> > work.\n>>>\n>>> Actually, thise two things are tightly bound and there is no chance to\n>>> avoid vacuum, you can only postpone it, this kind of work eventually\n>>> supposed to be done.\n>>>\n>>> What you really need to do as a first thing - configure your\n>>> autovacuum aggressively enough and then mayde ionice autovacuum\n>>> instead of mission critical ckeckpointer or bgwriter.\n>>>\n>>> Which exact values have you in the following settings:\n>>>\n>>> autovacuum_analyze_scale_factor\n>>> autovacuum_analyze_threshold\n>>> autovacuum_freeze_max_age\n>>> autovacuum_max_workers\n>>> autovacuum_naptime\n>>> autovacuum_vacuum_cost_delay\n>>> autovacuum_vacuum_cost_limit\n>>> autovacuum_vacuum_scale_factor\n>>> autovacuum_vacuum_threshold\n>>> log_autovacuum_min_duration\n>>>\n>>> ?\n>>>\n>>> Best regards, Ilya\n>>> >\n>>> > Best regards,\n>>> > Dmitriy Shalashov\n>>>\n>>>\n>>>\n>>> --\n>>> Ilya Kosmodemiansky,\n>>>\n>>> PostgreSQL-Consulting.com\n>>> tel. +14084142500\n>>> cell. +4915144336040\n>>> [email protected]\n>>\n>>\n>\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Apr 2014 11:22:55 +0200",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
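[Editor's note: with ~100 active queries, a saturated CPU, and an idle disk, a first look at `pg_stat_activity` usually narrows things down; a minimal sketch for PostgreSQL 9.2 through 9.5 (the `state` and `waiting` columns do not exist in that form on other versions):]

```sql
-- How many backends are active, idle, or blocked waiting on a lock.
-- Many rows with waiting = true point at lock contention rather than I/O.
SELECT state, waiting, count(*)
FROM   pg_stat_activity
GROUP  BY state, waiting
ORDER  BY count(*) DESC;
```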
{
"msg_contents": "> How is you filesystem under database exactly mount?\next3 (rw)\n\nThanks, we'll try new autovacuum settings!\n\n> That was happen after changing checkpoit_segments setting?\n\nFirst, I have to say that load comes and go in waves - we don't yet\nunderstood why.\nAll new waves have that behaviour - free disk, idle cpu.\nFirst such wave was before checkpoit_segments change, next waves after. No\nmore warnings about too often checkpoints though.\n\n\nBest regards,\nDmitriy Shalashov\n\n\n2014-04-25 13:22 GMT+04:00 Ilya Kosmodemiansky <\[email protected]>:\n\n> On Fri, Apr 25, 2014 at 10:29 AM, Дмитрий Шалашов <[email protected]>\n> wrote:\n> > Previously during load disk was 100% busy; now we have around 100 active\n> > state queries, 100% loaded proc, but disk is virtually idle...\n>\n> That was happen after changing checkpoit_segments setting?\n>\n> >\n> >\n> > Best regards,\n> > Dmitriy Shalashov\n> >\n> >\n> > 2014-04-25 12:22 GMT+04:00 Дмитрий Шалашов <[email protected]>:\n> >\n> >> Hi Ilya!\n> >>\n> >> > Actually, thise two things are tightly bound and there is no chance to\n> >> > avoid vacuum, you can only postpone it, this kind of work eventually\n> >> > supposed to be done.\n> >>\n> >> I understand that autovacuum has to be done, but not right after\n> previous\n> >> autovacuum? And then again and again.\n> >> And after cancelling that first autovacuum I started another one by\n> hand;\n> >> from there no autovacuum was cancelled.\n> >>\n> >> > ionice autovacuum instead of mission critical ckeckpointer or bgwriter\n> >> Yeah, that was desperate. 
I restarted server when I had a chance - to\n> drop\n> >> my ionice settings back to defaults.\n> >>\n> >> > Which exact values have you in the following settings:\n> >>\n> >> autovacuum_analyze_scale_factor = 0.1\n> >> autovacuum_analyze_threshold = 50\n> >> autovacuum_freeze_max_age = 200000000\n> >> autovacuum_max_workers = 3\n> >> autovacuum_naptime = 60\n> >> autovacuum_vacuum_cost_delay = 20\n> >> autovacuum_vacuum_cost_limit = -1\n> >> autovacuum_vacuum_scale_factor = 0.2\n> >> autovacuum_vacuum_threshold = 50\n> >> log_autovacuum_min_duration = 0\n> >>\n> >> All defaults except last one I believe.\n> >>\n> >>\n> >> Minwhile I noticed in the night logs:\n> >> checkpoints are occurring too frequently (138 seconds apart)\n> >> Consider increasing the configuration parameter \"checkpoint_segments\".\n> >>\n> >> Increased checkpoint_segments to 256 and reloaded config.\n> >>\n> >>\n> >> Best regards,\n> >> Dmitriy Shalashov\n> >>\n> >>\n> >> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky\n> >> <[email protected]>:\n> >>\n> >>> Hi Dmitry,\n> >>>\n> >>> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]>\n> >>> wrote:\n> >>> > cancelled autovacuum and it seems to help.\n> >>>\n> >>> > In the morning autovacuum was back. 
And then it finished and I gone\n> to\n> >>> > work.\n> >>>\n> >>> Actually, thise two things are tightly bound and there is no chance to\n> >>> avoid vacuum, you can only postpone it, this kind of work eventually\n> >>> supposed to be done.\n> >>>\n> >>> What you really need to do as a first thing - configure your\n> >>> autovacuum aggressively enough and then mayde ionice autovacuum\n> >>> instead of mission critical ckeckpointer or bgwriter.\n> >>>\n> >>> Which exact values have you in the following settings:\n> >>>\n> >>> autovacuum_analyze_scale_factor\n> >>> autovacuum_analyze_threshold\n> >>> autovacuum_freeze_max_age\n> >>> autovacuum_max_workers\n> >>> autovacuum_naptime\n> >>> autovacuum_vacuum_cost_delay\n> >>> autovacuum_vacuum_cost_limit\n> >>> autovacuum_vacuum_scale_factor\n> >>> autovacuum_vacuum_threshold\n> >>> log_autovacuum_min_duration\n> >>>\n> >>> ?\n> >>>\n> >>> Best regards, Ilya\n> >>> >\n> >>> > Best regards,\n> >>> > Dmitriy Shalashov\n> >>>\n> >>>\n> >>>\n> >>> --\n> >>> Ilya Kosmodemiansky,\n> >>>\n> >>> PostgreSQL-Consulting.com\n> >>> tel. +14084142500\n> >>> cell. +4915144336040\n> >>> [email protected]\n> >>\n> >>\n> >\n>\n>\n>\n> --\n> Ilya Kosmodemiansky,\n>\n> PostgreSQL-Consulting.com\n> tel. +14084142500\n> cell. +4915144336040\n> [email protected]\n>\n\n> How is you filesystem under database exactly mount?ext3 (rw)\nThanks, we'll try new autovacuum settings!> That was happen after changing checkpoit_segments setting?\nFirst, I have to say that load comes and go in waves - we don't yet understood why.\nAll new waves have that behaviour - free disk, idle cpu.First such wave was before checkpoit_segments change, next waves after. 
No more warnings about too often checkpoints though.\nBest regards,Dmitriy Shalashov\n2014-04-25 13:22 GMT+04:00 Ilya Kosmodemiansky <[email protected]>:\nOn Fri, Apr 25, 2014 at 10:29 AM, Дмитрий Шалашов <[email protected]> wrote:\n\n\n> Previously during load disk was 100% busy; now we have around 100 active\n> state queries, 100% loaded proc, but disk is virtually idle...\n\nThat was happen after changing checkpoit_segments setting?\n\n>\n>\n> Best regards,\n> Dmitriy Shalashov\n>\n>\n> 2014-04-25 12:22 GMT+04:00 Дмитрий Шалашов <[email protected]>:\n>\n>> Hi Ilya!\n>>\n>> > Actually, thise two things are tightly bound and there is no chance to\n>> > avoid vacuum, you can only postpone it, this kind of work eventually\n>> > supposed to be done.\n>>\n>> I understand that autovacuum has to be done, but not right after previous\n>> autovacuum? And then again and again.\n>> And after cancelling that first autovacuum I started another one by hand;\n>> from there no autovacuum was cancelled.\n>>\n>> > ionice autovacuum instead of mission critical ckeckpointer or bgwriter\n>> Yeah, that was desperate. 
I restarted server when I had a chance - to drop\n>> my ionice settings back to defaults.\n>>\n>> > Which exact values have you in the following settings:\n>>\n>> autovacuum_analyze_scale_factor = 0.1\n>> autovacuum_analyze_threshold = 50\n>> autovacuum_freeze_max_age = 200000000\n>> autovacuum_max_workers = 3\n>> autovacuum_naptime = 60\n>> autovacuum_vacuum_cost_delay = 20\n>> autovacuum_vacuum_cost_limit = -1\n>> autovacuum_vacuum_scale_factor = 0.2\n>> autovacuum_vacuum_threshold = 50\n>> log_autovacuum_min_duration = 0\n>>\n>> All defaults except last one I believe.\n>>\n>>\n>> Minwhile I noticed in the night logs:\n>> checkpoints are occurring too frequently (138 seconds apart)\n>> Consider increasing the configuration parameter \"checkpoint_segments\".\n>>\n>> Increased checkpoint_segments to 256 and reloaded config.\n>>\n>>\n>> Best regards,\n>> Dmitriy Shalashov\n>>\n>>\n>> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky\n>> <[email protected]>:\n>>\n>>> Hi Dmitry,\n>>>\n>>> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]>\n>>> wrote:\n>>> > cancelled autovacuum and it seems to help.\n>>>\n>>> > In the morning autovacuum was back. 
And then it finished and I gone to\n>>> > work.\n>>>\n>>> Actually, thise two things are tightly bound and there is no chance to\n>>> avoid vacuum, you can only postpone it, this kind of work eventually\n>>> supposed to be done.\n>>>\n>>> What you really need to do as a first thing - configure your\n>>> autovacuum aggressively enough and then mayde ionice autovacuum\n>>> instead of mission critical ckeckpointer or bgwriter.\n>>>\n>>> Which exact values have you in the following settings:\n>>>\n>>> autovacuum_analyze_scale_factor\n>>> autovacuum_analyze_threshold\n>>> autovacuum_freeze_max_age\n>>> autovacuum_max_workers\n>>> autovacuum_naptime\n>>> autovacuum_vacuum_cost_delay\n>>> autovacuum_vacuum_cost_limit\n>>> autovacuum_vacuum_scale_factor\n>>> autovacuum_vacuum_threshold\n>>> log_autovacuum_min_duration\n>>>\n>>> ?\n>>>\n>>> Best regards, Ilya\n>>> >\n>>> > Best regards,\n>>> > Dmitriy Shalashov\n>>>\n>>>\n>>>\n>>> --\n>>> Ilya Kosmodemiansky,\n>>>\n>>> PostgreSQL-Consulting.com\n>>> tel. +14084142500\n>>> cell. +4915144336040\n>>> [email protected]\n>>\n>>\n>\n\n\n\n--\nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]",
"msg_date": "Fri, 25 Apr 2014 13:31:35 +0400",
"msg_from": "Дмитрий Шалашов <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
{
"msg_contents": "Dmitry,\n\nOn Fri, Apr 25, 2014 at 11:31 AM, Дмитрий Шалашов <[email protected]> wrote:\n> Thanks, we'll try new autovacuum settings!\n\nI think things with vacuum will be much better.\n\nIf not, try to find out if you have long running transaction (several\nminutes or more) and try to avoid such them.\n\n>\n> First, I have to say that load comes and go in waves - we don't yet\n> understood why.\n> All new waves have that behaviour - free disk, idle cpu.\n> First such wave was before checkpoit_segments change, next waves after.\n\nThat could be a complicate problem caused by many things from suboptimal\nsql-queries to network issues, could be not easy to guess.\n\n- how many locks you have during the wave in comparison with normal workload?\n- do you use some connection pooling (pgbouncer etc)?\n- how about long running transactions I have mentioned above?\n- are you using pg_stat_statements or any other method for detecting\nslow queries?\n\n>\n>\n> Best regards,\n> Dmitriy Shalashov\n>\n>\n> 2014-04-25 13:22 GMT+04:00 Ilya Kosmodemiansky\n> <[email protected]>:\n>\n>> On Fri, Apr 25, 2014 at 10:29 AM, Дмитрий Шалашов <[email protected]>\n>> wrote:\n>> > Previously during load disk was 100% busy; now we have around 100 active\n>> > state queries, 100% loaded proc, but disk is virtually idle...\n>>\n>> That was happen after changing checkpoit_segments setting?\n>>\n>> >\n>> >\n>> > Best regards,\n>> > Dmitriy Shalashov\n>> >\n>> >\n>> > 2014-04-25 12:22 GMT+04:00 Дмитрий Шалашов <[email protected]>:\n>> >\n>> >> Hi Ilya!\n>> >>\n>> >> > Actually, thise two things are tightly bound and there is no chance\n>> >> > to\n>> >> > avoid vacuum, you can only postpone it, this kind of work eventually\n>> >> > supposed to be done.\n>> >>\n>> >> I understand that autovacuum has to be done, but not right after\n>> >> previous\n>> >> autovacuum? 
And then again and again.\n>> >> And after cancelling that first autovacuum I started another one by\n>> >> hand;\n>> >> from there no autovacuum was cancelled.\n>> >>\n>> >> > ionice autovacuum instead of mission critical ckeckpointer or\n>> >> > bgwriter\n>> >> Yeah, that was desperate. I restarted server when I had a chance - to\n>> >> drop\n>> >> my ionice settings back to defaults.\n>> >>\n>> >> > Which exact values have you in the following settings:\n>> >>\n>> >> autovacuum_analyze_scale_factor = 0.1\n>> >> autovacuum_analyze_threshold = 50\n>> >> autovacuum_freeze_max_age = 200000000\n>> >> autovacuum_max_workers = 3\n>> >> autovacuum_naptime = 60\n>> >> autovacuum_vacuum_cost_delay = 20\n>> >> autovacuum_vacuum_cost_limit = -1\n>> >> autovacuum_vacuum_scale_factor = 0.2\n>> >> autovacuum_vacuum_threshold = 50\n>> >> log_autovacuum_min_duration = 0\n>> >>\n>> >> All defaults except last one I believe.\n>> >>\n>> >>\n>> >> Minwhile I noticed in the night logs:\n>> >> checkpoints are occurring too frequently (138 seconds apart)\n>> >> Consider increasing the configuration parameter \"checkpoint_segments\".\n>> >>\n>> >> Increased checkpoint_segments to 256 and reloaded config.\n>> >>\n>> >>\n>> >> Best regards,\n>> >> Dmitriy Shalashov\n>> >>\n>> >>\n>> >> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky\n>> >> <[email protected]>:\n>> >>\n>> >>> Hi Dmitry,\n>> >>>\n>> >>> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <[email protected]>\n>> >>> wrote:\n>> >>> > cancelled autovacuum and it seems to help.\n>> >>>\n>> >>> > In the morning autovacuum was back. 
And then it finished and I gone\n>> >>> > to\n>> >>> > work.\n>> >>>\n>> >>> Actually, thise two things are tightly bound and there is no chance to\n>> >>> avoid vacuum, you can only postpone it, this kind of work eventually\n>> >>> supposed to be done.\n>> >>>\n>> >>> What you really need to do as a first thing - configure your\n>> >>> autovacuum aggressively enough and then mayde ionice autovacuum\n>> >>> instead of mission critical ckeckpointer or bgwriter.\n>> >>>\n>> >>> Which exact values have you in the following settings:\n>> >>>\n>> >>> autovacuum_analyze_scale_factor\n>> >>> autovacuum_analyze_threshold\n>> >>> autovacuum_freeze_max_age\n>> >>> autovacuum_max_workers\n>> >>> autovacuum_naptime\n>> >>> autovacuum_vacuum_cost_delay\n>> >>> autovacuum_vacuum_cost_limit\n>> >>> autovacuum_vacuum_scale_factor\n>> >>> autovacuum_vacuum_threshold\n>> >>> log_autovacuum_min_duration\n>> >>>\n>> >>> ?\n>> >>>\n>> >>> Best regards, Ilya\n>> >>> >\n>> >>> > Best regards,\n>> >>> > Dmitriy Shalashov\n>> >>>\n>> >>>\n>> >>>\n>> >>> --\n>> >>> Ilya Kosmodemiansky,\n>> >>>\n>> >>> PostgreSQL-Consulting.com\n>> >>> tel. +14084142500\n>> >>> cell. +4915144336040\n>> >>> [email protected]\n>> >>\n>> >>\n>> >\n>>\n>>\n>>\n>> --\n>> Ilya Kosmodemiansky,\n>>\n>> PostgreSQL-Consulting.com\n>> tel. +14084142500\n>> cell. +4915144336040\n>> [email protected]\n>\n>\n\n\n\n-- \nIlya Kosmodemiansky,\n\nPostgreSQL-Consulting.com\ntel. +14084142500\ncell. +4915144336040\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Apr 2014 11:46:30 +0200",
"msg_from": "Ilya Kosmodemiansky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
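[Editor's note: the long-running-transaction check Ilya suggests can be done directly from `pg_stat_activity`; a sketch for 9.2+ (on 9.1 the columns are `procpid` and `current_query` instead):]

```sql
-- Transactions held open for minutes keep dead tuples from being
-- reclaimed, which makes autovacuum appear to run "again and again".
SELECT pid,
       now() - xact_start AS xact_age,
       query
FROM   pg_stat_activity
WHERE  xact_start IS NOT NULL
ORDER  BY xact_age DESC
LIMIT  10;
```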
{
"msg_contents": "On 25/04/14 09:47, Дмитрий Шалашов wrote:\n> Half a day ago one of our production PG servers (arguably busiest one)\n> become very slow; I went to investigate the issue and found that it runs\n> simultaneously '(auto)VACUUM ANALYZE recommendations' - largest table on\n> that server - and checkpoint, giving a 100% disk load\n\nMaybe the table has reached the state where it needs a VACUUM FREEZE.\nAutovacuum does that for you but it requires a complete scan of the table.\n\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Apr 2014 11:47:08 +0200",
"msg_from": "Torsten Förtsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server vacuuming the same table again and again"
},
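[Editor's note: whether a table is approaching the anti-wraparound (freeze) vacuum Torsten describes can be checked from `pg_class`; a sketch:]

```sql
-- Tables whose oldest unfrozen XID approaches autovacuum_freeze_max_age
-- (200 million by default) get a full-table anti-wraparound vacuum
-- regardless of the usual scale-factor thresholds.
SELECT relname,
       age(relfrozenxid) AS xid_age
FROM   pg_class
WHERE  relkind = 'r'
ORDER  BY age(relfrozenxid) DESC
LIMIT  10;
```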
{
"msg_contents": "Turns out yesterday we fixed a bug and introduced a new bug, which was\npreviously hidden by yet another bug which in turn we had fixed last\nweek... %)\nIn result last fix led to greatly increased number of requests to the\ndatabase.\n\nBut still, thanks for that, we found out about too frequent checkpoints and\nthat our recommendations table has three times more dead tuples than live\nones.\n\nWe will fix our autovacuum configuration.\n\nAs for other problem - 100% cpu load with idle disks - it is no more\nreproducing and we don't want it to :)\n\nThanks Ilya and Torsten <https://plus.google.com/106936514849349631188>!\n\n\nBest regards,\nDmitriy Shalashov\n\n\n2014-04-25 13:47 GMT+04:00 Torsten Förtsch <[email protected]>:\n\n> On 25/04/14 09:47, Дмитрий Шалашов wrote:\n> > Half a day ago one of our production PG servers (arguably busiest one)\n> > become very slow; I went to investigate the issue and found that it runs\n> > simultaneously '(auto)VACUUM ANALYZE recommendations' - largest table on\n> > that server - and checkpoint, giving a 100% disk load\n>\n> Maybe the table has reached the state where it needs a VACUUM FREEZE.\n> Autovacuum does that for you but it requires a complete scan of the table.\n>\n> Torsten\n>\n\nTurns out yesterday we fixed a bug and introduced a new bug, which was previously hidden by yet another bug which in turn we had fixed last week... 
%)In result last fix led to greatly increased number of requests to the database.\nBut still, thanks for that, we found out about too frequent checkpoints and that our recommendations table has three times more dead tuples than live ones.We will fix our autovacuum configuration.\nAs for other problem - 100% cpu load with idle disks - it is no more reproducing and we don't want it to :)Thanks Ilya and Torsten!\nBest regards,Dmitriy Shalashov\n2014-04-25 13:47 GMT+04:00 Torsten Förtsch <[email protected]>:\nOn 25/04/14 09:47, Дмитрий Шалашов wrote:\n> Half a day ago one of our production PG servers (arguably busiest one)\n> become very slow; I went to investigate the issue and found that it runs\n> simultaneously '(auto)VACUUM ANALYZE recommendations' - largest table on\n> that server - and checkpoint, giving a 100% disk load\n\nMaybe the table has reached the state where it needs a VACUUM FREEZE.\nAutovacuum does that for you but it requires a complete scan of the table.\n\nTorsten",
"msg_date": "Fri, 25 Apr 2014 14:36:18 +0400",
"msg_from": "Дмитрий Шалашов <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server vacuuming the same table again and again"
}
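[Editor's note: the "three times more dead tuples than live ones" observation above comes from `pg_stat_user_tables`; a sketch of the check:]

```sql
-- Dead-to-live tuple ratio per table; a ratio well above
-- autovacuum_vacuum_scale_factor (0.2 by default) means
-- autovacuum is falling behind the update rate.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(n_dead_tup::numeric / nullif(n_live_tup, 0), 2) AS dead_ratio
FROM   pg_stat_user_tables
ORDER  BY n_dead_tup DESC
LIMIT  10;
```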
] |
[
{
"msg_contents": "Hi All\n\nI have one big Pl/Pgsql function (about 1500 line code) , i can divided it\nto about 5 part and call each script from main script . In this case i\nneed to know which way is faster .\n\nand some question about pgsql :\n\n1- is pgsql engine open one process for each script ?\n2- how i can chose max connection number for pgsql server based on cpu core\nand RAM capacity that have maximum Efficiency?\n\nThanks and Best ergards.\n\nHi All\nI have one big Pl/Pgsql function (about 1500 line code) , i can divided it to about 5 part and call each script from main script . In this case i need to know which way is faster .\nand some question about pgsql :1- is pgsql engine open one process for each script ?2- how i can chose max connection number for pgsql server based on cpu core and RAM capacity that have maximum Efficiency?\nThanks and Best ergards.",
"msg_date": "Fri, 25 Apr 2014 13:18:05 +0430",
"msg_from": "Mehdi Ravanbakhsh <[email protected]>",
"msg_from_op": true,
"msg_subject": "pl/pgsql performance"
},
{
"msg_contents": "Hello\n\n\n2014-04-25 10:48 GMT+02:00 Mehdi Ravanbakhsh <[email protected]>:\n\n> Hi All\n>\n> I have one big Pl/Pgsql function (about 1500 line code) , i can divided\n> it to about 5 part and call each script from main script . In this case\n> i need to know which way is faster .\n>\n> and some question about pgsql :\n>\n> 1- is pgsql engine open one process for each script ?\n>\n\nPostgreSQL uses one CPU per session.\n\n\n> 2- how i can chose max connection number for pgsql server based on cpu\n> core and RAM capacity that have maximum Efficiency?\n>\n\nusually max performance is about 10 x CPU connections. But it highly\ndepends on load.\n\nRegards\n\nPavel Stehule\n\n\n>\n> Thanks and Best ergards.\n>\n\nHello2014-04-25 10:48 GMT+02:00 Mehdi Ravanbakhsh <[email protected]>:\nHi All\n\nI have one big Pl/Pgsql function (about 1500 line code) , i can divided it to about 5 part and call each script from main script . In this case i need to know which way is faster .\nand some question about pgsql :1- is pgsql engine open one process for each script ?PostgreSQL uses one CPU per session.\n 2- how i can chose max connection number for pgsql server based on cpu core and RAM capacity that have maximum Efficiency?\nusually max performance is about 10 x CPU connections. But it highly depends on load.RegardsPavel Stehule \n\nThanks and Best ergards.",
"msg_date": "Fri, 25 Apr 2014 10:53:03 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql performance"
},
{
"msg_contents": "On Fri, Apr 25, 2014 at 5:53 PM, Pavel Stehule <[email protected]> wrote:\n> 2014-04-25 10:48 GMT+02:00 Mehdi Ravanbakhsh <[email protected]>:\n>> 2- how i can chose max connection number for pgsql server based on cpu\n>> core and RAM capacity that have maximum Efficiency?\n> usually max performance is about 10 x CPU connections. But it highly depends\n> on load.\nAs well as transaction model counts, are you for example using long or\nshort transactions?\nAs a starting point, you could as well use something like ((core_count\n* 2) + effective_spindle_count). have a look here for more details:\nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections#How_to_Find_the_Optimal_Database_Connection_Pool_Size\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 Apr 2014 21:41:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql performance"
}
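[Editor's note: the starting-point formula Michael cites, ((core_count * 2) + effective_spindle_count), evaluated for a hypothetical box with 8 cores and 2 spindles:]

```sql
-- Hypothetical sizing: (8 cores * 2) + 2 spindles = 18 connections
-- as a pool-size starting point, to be adjusted under real load.
SELECT (8 * 2) + 2 AS suggested_pool_size;
```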
] |
[
{
"msg_contents": "*Problem description:*\nAfter a few days of running in my test environment, a query timed out\n(query timeout=4mins). Also in general the queries were taking a lot longer\nthan expected. The workload in my database is a write intensive workload.\nAnd the writes happen in a burst every 5 minutes. There are a whole bunch\nof insert and update queries that run every 5 minutes. When I analyzed the\nsituation (by enabling more postgres logs), I noticed that postgres\ncheckpoints were triggering approximately every 5 minutes and based on my\nonline research I suspected the i/o overhead of checkpoints was affecting\nthe query performance. The checkpoint related settings were:\ncheckpoint_segments = 30\ncheckpoint_timeout = 15min\n\nI modified these settings to the following:\ncheckpoint_segments = 250\ncheckpoint_timeout = 1h\ncheckpoint_completion_target = 0.9\n\nAfter I tweaked these settings, checkpoints were happening only once in an\nhour and that improved the query performance. However, when the checkpoint\nhappens every hour, the query performance is still very poor. This is still\nundesirable to my system. My question is how to track down the reason for\nthe poor performance during checkpoints and improve the query performance\nwhen the checkpoints happen?\n\n\n - *EXPLAIN ANALYZE:*\n - http://explain.depesz.com/s/BNva - An insert query inserting just\n 129 rows takes 20 seconds.\n - http://explain.depesz.com/s/5hA - An update query updating 43926\n rows takes 55 seconds.\n - *History:* It gets slower after a few days of the system running.\n\n*Table Metadata*:\n\n - The tables get updated every 5 minutes. Utmost 50000 rows in a table\n get updated every 5 minutes. About 50000 rows get inserted every 1 hour.\n - There are 90 tables in the DB. 43 of these are updated every 5\n minutes. 8/90 tables receive a high update traffic of 50000 updates/5mins.\n Remaining tables receive an update traffic of 2000 updates/5min. 
43/90\n tables are updated every 1 hour.\n\n\n*PostgreSQL version: *PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu,\ncompiled by gcc (GCC) 4.6.x-google 20120601 (prerelease), 64-bit\n\n*How you installed PostgreSQL: *Compiled from source and installed.\n\n*Changes made to the settings in the postgresql.conf file:*\n\n name | current_setting | source\n\n\n------------------------------+------------------------+----------------------\n\n application_name | psql | client\n\n checkpoint_completion_target | 0.9 | configuration file\n\n checkpoint_segments | 250 | configuration file\n\n checkpoint_timeout | 1h | configuration file\n\n client_encoding | SQL_ASCII | client\n\n client_min_messages | error | configuration file\n\n constraint_exclusion | on | configuration file\n\n DateStyle | ISO, MDY | configuration file\n\n default_statistics_target | 800 | configuration file\n\n default_text_search_config | pg_catalog.english | configuration file\n\n effective_cache_size | 4GB | configuration file\n\n lc_messages | C | configuration file\n\n lc_monetary | C | configuration file\n\n lc_numeric | C | configuration file\n\n lc_time | C | configuration file\n\n listen_addresses | localhost | configuration file\n\n log_autovacuum_min_duration | 20s | configuration file\n\n log_checkpoints | on | configuration file\n\n log_connections | on | configuration file\n\n log_destination | syslog | configuration file\n\n log_disconnections | on | configuration file\n\n log_line_prefix | user=%u,db=%d | configuration file\n\n log_lock_waits | on | configuration file\n\n log_min_duration_statement | 1s | configuration file\n\n log_min_messages | error | configuration file\n\n log_temp_files | 0 | configuration file\n\n log_timezone | PST8PDT,M3.2.0,M11.1.0 | environment\nvariable\n\n maintenance_work_mem | 64MB | configuration file\n\n max_connections | 12 | configuration file\n\n max_locks_per_transaction | 700 | configuration file\n\n max_stack_depth | 2MB | environment\nvariable\n\n 
port | 5432 | configuration file\n\n shared_buffers | 500MB | configuration file\n\n ssl | off | configuration file\n\n statement_timeout | 4min | configuration file\n\n syslog_facility | local1 | configuration file\n\n syslog_ident | postgres | configuration file\n\ntemp_buffers | 256MB | configuration file\n\n TimeZone | PST8PDT,M3.2.0,M11.1.0 | environment\nvariable\n\n wal_buffers | 1MB | configuration file\n\n work_mem | 128MB | configuration file\n\n*Operating system and version: *Scientific Linux release 6.1 (Carbon)\n\n*What program you're using to connect to PostgreSQL: *C++ libpqxx library\n\n\n - *Relevant Schema*: All tables referenced in this question have this\n same schema\n\nmanaged_target_stats=> \\d stat_300_3_1\n\nTable \"public.stat_300_40110_1\"\n\n Column | Type | Modifiers\n\n--------+---------+-----------\n\n ts | integer |\n\n target | bigint |\n\n port | integer |\n\n data | real[] |\n\nIndexes:\n\n \"unique_stat_300_40110_1\" UNIQUE CONSTRAINT, btree (ts, target, port)\n\n \"idx_port_stat_300_40110_1\" btree (port)\n\n \"idx_target_stat_300_40110_1\" btree (target)\n\n \"idx_ts_stat_300_40110_1\" btree (ts)\n\n - *Hardware*:\n - CPU: Intel(R) Xeon(R) CPU E5205 @ 1.86GHz\n - Memory: 6GB\n - Storage Details:\n\n\nThere are 2 500GB disks (/dev/sda, /dev/sdb) with the following 6\npartitions on each disk.\n\n*Number Start End Size Type File system Flags*\n\n 1 512B 24.7MB 24.7MB primary boot\n\n 2 24.7MB 6473MB 6449MB primary linux-swap(v1)\n\n 3 6473MB 40.8GB 34.4GB primary ext3\n\n 4 40.8GB 500GB 459GB extended lba\n\n 5 40.8GB 408GB 367GB logical ext3\n\n 6 408GB 472GB 64.4GB logical ext3\n\n*Disk model and details:*\n\nModel Family: Western Digital RE3 Serial ATA family\n\nDevice Model: WDC WD5002ABYS-02B1B0\n\nSerial Number: WD-WCASYD132237\n\nFirmware Version: 02.03B03\n\nUser Capacity: 500,107,862,016 bytes\n\nDevice is: In smartctl database [for details use: -P show]\n\nATA Version is: 8\n\nATA Standard is: Exact ATA specification 
draft version not indicated\n\nLocal Time is: Sun Apr 27 05:05:13 2014 PDT\n\nSMART support is: Available - device has SMART capability.\n\nSMART support is: Enabled\n\n\nThe postgres data is stored on a software RAID10 on partition 5 of both\nthese disks.\n\n[admin@chief-cmc2 tmp]# mdadm --detail /dev/md3\n\n/dev/md3:\n\n Version : 0.90\n\n Creation Time : Wed Mar 19 06:40:57 2014\n\n Raid Level : raid10\n\n Array Size : 358402048 (341.80 GiB 367.00 GB)\n\n Used Dev Size : 358402048 (341.80 GiB 367.00 GB)\n\n Raid Devices : 2\n\n Total Devices : 2\n\nPreferred Minor : 3\n\n Persistence : Superblock is persistent\n\n Update Time : Sun Apr 27 04:22:07 2014\n\n State : active\n\n Active Devices : 2\n\nWorking Devices : 2\n\n Failed Devices : 0\n\n Spare Devices : 0\n\n Layout : far=2\n\n Chunk Size : 64K\n\n UUID : 79d04a1b:99461915:3d186b3c:53958f34\n\n Events : 0.24\n\n Number Major Minor RaidDevice State\n\n 0 8 5 0 active sync /dev/sda5\n\n 1 8 21 1 active sync /dev/sdb5\n\n - *Maintenance Setup*: autovacuum is running with default settings. Old\n records are deleted every night. I also do 'vacuum full' on a 12 tables\n that receive large number of updates every night at 1AM. I have noticed\n that these 'vacuum full' also time out. (I am planning to post a separate\n question regarding my vacuuming strategy).\n - *WAL Configuration*: The WAL is in the same disk.",
"msg_date": "Sun, 27 Apr 2014 05:08:07 -0700",
"msg_from": "Elanchezhiyan Elango <[email protected]>",
"msg_from_op": true,
"msg_subject": "Checkpoints and slow queries"
}
] |
[
{
"msg_contents": "(I am resending this question after waiting for several hours because my\nprevious mail got stalled probably because I didn't confirm my email\naddress after subscribing. So resending the mail. Sorry if this is causing\na double post.)\n\n*Problem description:*\nAfter a few days of running in my test environment, a query timed out\n(query timeout=4mins). Also in general the queries were taking a lot longer\nthan expected. The workload in my database is a write intensive workload.\nAnd the writes happen in a burst every 5 minutes. There are a whole bunch\nof insert and update queries that run every 5 minutes. When I analyzed the\nsituation (by enabling more postgres logs), I noticed that postgres\ncheckpoints were triggering approximately every 5 minutes and based on my\nonline research I suspected the i/o overhead of checkpoints was affecting\nthe query performance. The checkpoint related settings were:\ncheckpoint_segments = 30\ncheckpoint_timeout = 15min\n\nI modified these settings to the following:\ncheckpoint_segments = 250\ncheckpoint_timeout = 1h\ncheckpoint_completion_target = 0.9\n\nAfter I tweaked these settings, checkpoints were happening only once in an\nhour and that improved the query performance. However, when the checkpoint\nhappens every hour, the query performance is still very poor. This is still\nundesirable to my system.\n\nI also tried editing dirty_background_ratio and dirty_expire_centisecs in\n/etc/sysctl.conf. 
All dirty-related kernel settings:\n\n># sysctl -a | grep dirty\n\nvm.dirty_background_ratio = 1\n\nvm.dirty_background_bytes = 0\n\nvm.dirty_ratio = 20\n\nvm.dirty_bytes = 0\n\nvm.dirty_writeback_centisecs = 500\n\nvm.dirty_expire_centisecs = 500\n\nThis also didn't improve the situation.\nMy question is how to track down the reason for the poor performance during\ncheckpoints and improve the query performance when the checkpoints happen?\n\n\n - *EXPLAIN ANALYZE:*\n - http://explain.depesz.com/s/BNva - An insert query inserting just\n 129 rows takes 20 seconds.\n - http://explain.depesz.com/s/5hA - An update query updating 43926\n rows takes 55 seconds.\n - *History:* It gets slower after a few days of the system running.\n\n*Table Metadata*:\n\n - The tables get updated every 5 minutes. At most 50000 rows in a table\n get updated every 5 minutes. About 50000 rows get inserted every 1 hour.\n - There are 90 tables in the DB. 43 of these are updated every 5\n minutes. 8/90 tables receive a high update traffic of 50000 updates/5mins.\n Remaining tables receive an update traffic of 2000\n updates/5min. 
43/90\n tables are updated every 1 hour.\n\n\n*PostgreSQL version: *PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu,\ncompiled by gcc (GCC) 4.6.x-google 20120601 (prerelease), 64-bit\n\n*How you installed PostgreSQL: *Compiled from source and installed.\n\n*Changes made to the settings in the postgresql.conf file:*\n\n name | current_setting | source\n\n\n------------------------------+------------------------+----------------------\n\n application_name | psql | client\n\n checkpoint_completion_target | 0.9 | configuration file\n\n checkpoint_segments | 250 | configuration file\n\n checkpoint_timeout | 1h | configuration file\n\n client_encoding | SQL_ASCII | client\n\n client_min_messages | error | configuration file\n\n constraint_exclusion | on | configuration file\n\n DateStyle | ISO, MDY | configuration file\n\n default_statistics_target | 800 | configuration file\n\n default_text_search_config | pg_catalog.english | configuration file\n\n effective_cache_size | 4GB | configuration file\n\n lc_messages | C | configuration file\n\n lc_monetary | C | configuration file\n\n lc_numeric | C | configuration file\n\n lc_time | C | configuration file\n\n listen_addresses | localhost | configuration file\n\n log_autovacuum_min_duration | 20s | configuration file\n\n log_checkpoints | on | configuration file\n\n log_connections | on | configuration file\n\n log_destination | syslog | configuration file\n\n log_disconnections | on | configuration file\n\n log_line_prefix | user=%u,db=%d | configuration file\n\n log_lock_waits | on | configuration file\n\n log_min_duration_statement | 1s | configuration file\n\n log_min_messages | error | configuration file\n\n log_temp_files | 0 | configuration file\n\n log_timezone | PST8PDT,M3.2.0,M11.1.0 | environment\nvariable\n\n maintenance_work_mem | 64MB | configuration file\n\n max_connections | 12 | configuration file\n\n max_locks_per_transaction | 700 | configuration file\n\n max_stack_depth | 2MB | environment\nvariable\n\n 
port | 5432 | configuration file\n\n shared_buffers | 500MB | configuration file\n\n ssl | off | configuration file\n\n statement_timeout | 4min | configuration file\n\n syslog_facility | local1 | configuration file\n\n syslog_ident | postgres | configuration file\n\ntemp_buffers | 256MB | configuration file\n\n TimeZone | PST8PDT,M3.2.0,M11.1.0 | environment\nvariable\n\n wal_buffers | 1MB | configuration file\n\n work_mem | 128MB | configuration file\n\n*Operating system and version: *Scientific Linux release 6.1 (Carbon)\n\n*What program you're using to connect to PostgreSQL: *C++ libpqxx library\n\n\n - *Relevant Schema*: All tables referenced in this question have this\n same schema\n\nmanaged_target_stats=> \\d stat_300_3_1\n\nTable \"public.stat_300_40110_1\"\n\n Column | Type | Modifiers\n\n--------+---------+-----------\n\n ts | integer |\n\n target | bigint |\n\n port | integer |\n\n data | real[] |\n\nIndexes:\n\n \"unique_stat_300_40110_1\" UNIQUE CONSTRAINT, btree (ts, target, port)\n\n \"idx_port_stat_300_40110_1\" btree (port)\n\n \"idx_target_stat_300_40110_1\" btree (target)\n\n \"idx_ts_stat_300_40110_1\" btree (ts)\n\n - *Hardware*:\n - CPU: Intel(R) Xeon(R) CPU E5205 @ 1.86GHz\n - Memory: 6GB\n - Storage Details:\n\n\nThere are 2 500GB disks (/dev/sda, /dev/sdb) with the following 6\npartitions on each disk.\n\n*Number Start End Size Type File system Flags*\n\n 1 512B 24.7MB 24.7MB primary boot\n\n 2 24.7MB 6473MB 6449MB primary linux-swap(v1)\n\n 3 6473MB 40.8GB 34.4GB primary ext3\n\n 4 40.8GB 500GB 459GB extended lba\n\n 5 40.8GB 408GB 367GB logical ext3\n\n 6 408GB 472GB 64.4GB logical ext3\n\n*Disk model and details:*\n\nModel Family: Western Digital RE3 Serial ATA family\n\nDevice Model: WDC WD5002ABYS-02B1B0\n\nSerial Number: WD-WCASYD132237\n\nFirmware Version: 02.03B03\n\nUser Capacity: 500,107,862,016 bytes\n\nDevice is: In smartctl database [for details use: -P show]\n\nATA Version is: 8\n\nATA Standard is: Exact ATA specification 
draft version not indicated\n\nLocal Time is: Sun Apr 27 05:05:13 2014 PDT\n\nSMART support is: Available - device has SMART capability.\n\nSMART support is: Enabled\n\n\nThe postgres data is stored on a software RAID10 on partition 5 of both\nthese disks.\n\n[admin@chief-cmc2 tmp]# mdadm --detail /dev/md3\n\n/dev/md3:\n\n Version : 0.90\n\n Creation Time : Wed Mar 19 06:40:57 2014\n\n Raid Level : raid10\n\n Array Size : 358402048 (341.80 GiB 367.00 GB)\n\n Used Dev Size : 358402048 (341.80 GiB 367.00 GB)\n\n Raid Devices : 2\n\n Total Devices : 2\n\nPreferred Minor : 3\n\n Persistence : Superblock is persistent\n\n Update Time : Sun Apr 27 04:22:07 2014\n\n State : active\n\n Active Devices : 2\n\nWorking Devices : 2\n\n Failed Devices : 0\n\n Spare Devices : 0\n\n Layout : far=2\n\n Chunk Size : 64K\n\n UUID : 79d04a1b:99461915:3d186b3c:53958f34\n\n Events : 0.24\n\n Number Major Minor RaidDevice State\n\n 0 8 5 0 active sync /dev/sda5\n\n 1 8 21 1 active sync /dev/sdb5\n\n - *Maintenance Setup*: autovacuum is running with default settings. Old\n records are deleted every night. I also do 'vacuum full' on a 12 tables\n that receive large number of updates every night at 1AM. I have noticed\n that these 'vacuum full' also time out. (I am planning to post a separate\n question regarding my vacuuming strategy).\n - *WAL Configuration*: The WAL is in the same disk.",
"msg_date": "Sun, 27 Apr 2014 14:01:34 -0700",
"msg_from": "Elanchezhiyan Elango <[email protected]>",
"msg_from_op": true,
"msg_subject": "Checkpoints and slow queries"
},
{
"msg_contents": "On 27.4.2014 23:01, Elanchezhiyan Elango wrote:\n> (I am resending this question after waiting for several hours because\n> my previous mail got stalled probably because I didn't confirm my\n> email address after subscribing. So resending the mail. Sorry if this\n> is causing a double post.)\n> \n> *Problem description:*\n> After a few days of running in my test environment, a query timed out\n> (query timeout=4mins). Also in general the queries were taking a lot\n> longer than expected. The workload in my database is a write intensive\n> workload. And the writes happen in a burst every 5 minutes. There are a\n> whole bunch of insert and update queries that run every 5 minutes. When\n> I analyzed the situation (by enabling more postgres logs), I noticed\n> that postgres checkpoints were triggering approximately every 5 minutes\n> and based on my online research I suspected the i/o overhead of\n> checkpoints was affecting the query performance. The checkpoint related\n> settings were:\n> checkpoint_segments = 30\n> checkpoint_timeout = 15min\n> \n> I modified these settings to the following:\n> checkpoint_segments = 250\n> checkpoint_timeout = 1h\n> checkpoint_completion_target = 0.9\n\nThe problem is that while this makes the checkpoints less frequent, it\naccumulates more changes that need to be written to disk during the\ncheckpoint, which makes the impact more severe.\n\nThe only case when this is not true is when repeatedly modifying a\nsubset of the data (say, a few data blocks), because the changes merge\ninto a single write during checkpoint.\n\n> After I tweaked these settings, checkpoints were happening only once in\n> an hour and that improved the query performance. However, when the\n> checkpoint happens every hour, the query performance is still very poor.\n> This is still undesirable to my system. \n\nSo, can you share a few of the checkpoint log messages? 
So that we get\nan idea of how much data needs to be synced to disk.\n\n> I also tried editing dirty_background_ratio and dirty_expire_centisecs\n> in /etc/sysctl.conf. All dirty related kernel settings:\n> \n>># sysctl -a | grep dirty\n> \n> vm.dirty_background_ratio = 1\n> vm.dirty_background_bytes = 0\n> vm.dirty_ratio = 20\n> vm.dirty_bytes = 0\n> vm.dirty_writeback_centisecs = 500\n> vm.dirty_expire_centisecs = 500\n> \n> This also didn't improve the situation.\n\nCan you monitor the amount of dirty data in page cache, i.e. data that\nneeds to be written to disk? Wait for the checkpoint and sample the\n/proc/meminfo a few times:\n\n$ cat /proc/meminfo | grep Dirty\n\nAlso, watch \"iostat -x -k 1\" or something similar to see disk activity.\n\n> My question is how to track down the reason for the poor performance\n> during checkpoints and improve the query performance when the\n> checkpoints happen?\n> \n> * *EXPLAIN ANALYZE:*\n> o http://explain.depesz.com/s/BNva - An insert query inserting\n> just 129 rows takes 20 seconds.\n> o http://explain.depesz.com/s/5hA - An update query updating 43926\n> rows takes 55 seconds.\n> * *History:* It gets slower after a few days of the system running.\n> \n> *Table Metadata*:\n> \n> * The tables get updated every 5 minutes. Utmost 50000 rows in a table\n> get updated every 5 minutes. About 50000 rows get inserted every 1 hour.\n> * There are 90 tables in the DB. 43 of these are updated every 5\n> minutes. 8/90 tables receive a high update traffic of 50000\n> updates/5mins. Remaining tables receive an update traffic of 2000\n> updates/5min. 43/90 tables are updated every 1 hour. 
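Going back to the dirty-data sampling suggested above: a throwaway loop is enough to capture it alongside the checkpoint (this assumes a Linux /proc filesystem; the sample count and one-second interval are arbitrary):

```shell
# Take 10 one-second samples of the "Dirty:" line from /proc/meminfo.
# Raise the count (or loop forever) to cover a whole checkpoint.
for i in 1 2 3 4 5 6 7 8 9 10; do
    echo "$(date '+%H:%M:%S') $(grep '^Dirty:' /proc/meminfo)"
    sleep 1
done
```

Redirect that to a file and the timestamps line up nicely with the "checkpoint starting"/"checkpoint complete" messages in the server log.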
\n\nSo how much data in total are we talking about?\n\n> *PostgreSQL version: *PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu,\n> compiled by gcc (GCC) 4.6.x-google 20120601 (prerelease), 64-bit\n> \n> *How you installed PostgreSQL: *Compiled from source and installed.\n> \n> *Changes made to the settings in the postgresql.conf file:*\n\nSeems fine to me, except for the following changes:\n\n> name | current_setting | source \n> ------------------------------+------------------------+----------------------\n> maintenance_work_mem | 64MB | configuration file\n> temp_buffers | 256MB | configuration file\n> wal_buffers | 1MB | configuration file\n> work_mem | 128MB | configuration file\n\nAny particular reasons for setting work_mem > maintenance_work_mem? Why\nhave you modified wal_buffer and temp_buffers?\n\nI doubt these are related to the issues you're seeing, though.\n\n> * *Relevant Schema*: All tables referenced in this question have this\n> same schema\n> \n> managed_target_stats=> \\d stat_300_3_1\n> \n> Table \"public.stat_300_40110_1\"\n> \n> Column | Type | Modifiers \n> --------+---------+-----------\n> ts | integer | \n> target | bigint | \n> port | integer | \n> data | real[] | \n> \n> Indexes:\n> \"unique_stat_300_40110_1\" UNIQUE CONSTRAINT, btree (ts, target, port)\n> \"idx_port_stat_300_40110_1\" btree (port)\n> \"idx_target_stat_300_40110_1\" btree (target)\n> \"idx_ts_stat_300_40110_1\" btree (ts)\n\nOK, so there are multiple tables, and you're updating 50k rows in all\ntables in total? 
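To quantify that, per-table sizes can also be pulled straight from the catalogs (assuming the stat tables live in schema 'public'; this form works on 9.1):

```sql
-- Per-table footprint, largest first: heap alone, and total
-- including indexes and TOAST.
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
       pg_size_pretty(pg_relation_size(c.oid))       AS heap_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;
```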
Can you post \\dt+ and \\di+ so that we get an idea of\ntable/index sizes?\n\n> * *Hardware*:\n> o CPU: Intel(R) Xeon(R) CPU E5205 @ 1.86GHz\n> o Memory: 6GB\n> o Storage Details: \n> \n> There are 2 500GB disks (/dev/sda, /dev/sdb) with the following 6\n> partitions on each disk.\n> \n> *Number Start End Size Type File system Flags*\n> \n> 1 512B 24.7MB 24.7MB primary boot\n> 2 24.7MB 6473MB 6449MB primary linux-swap(v1)\n> 3 6473MB 40.8GB 34.4GB primary ext3\n> 4 40.8GB 500GB 459GB extended lba\n> 5 40.8GB 408GB 367GB logical ext3\n> 6 408GB 472GB 64.4GB logical ext3\n\nThe first problem here is ext3. Its behavior when performing fsync is\nreally terrible. See\n\n http://blog.2ndquadrant.com/linux_filesystems_and_postgres/\n\nfor more details. So, the first thing you should do is switch to ext4\nor xfs.\n\n\n> *Disk model and details:*\n> \n> Model Family: Western Digital RE3 Serial ATA family\n\nRegular 7.2k SATA disk, not the most powerful piece of hardware.\n\n\n> The postgres data is stored on a software RAID10 on partition 5 of\n> both these disks.\n\nSo essentially a two-disk mirror: the far=2 layout stripes reads\nRAID0-style, but every write still has to hit both drives.\n\n> * *Maintenance Setup*: autovacuum is running with default settings. \n> Old records are deleted every night. I also do 'vacuum full' on a 12 \n> tables that receive large number of updates every night at 1AM. I \n> have noticed that these 'vacuum full' also time out. (I am planning \n> to post a separate question regarding my vacuuming strategy).\n\nMy bet is it's related. If the system is I/O bound, it's natural that the\n'vacuum full' runs are performing badly too.\n\n> * *WAL Configuration*: The WAL is in the same disk.\n\nWhich is not helping, because it interferes with the other I/O.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 00:46:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": ">\n> The problem is that while this makes the checkpoints less frequent, it\n> accumulates more changes that need to be written to disk during the\n> checkpoint. Which means the impact more severe.\n\nTrue. But the checkpoints finish in approximately 5-10 minutes every time\n(even with checkpoint_completion_target of 0.9).\nHere are the checkpoint logs for April 26: http://pastebin.com/Sh7bZ8u8\n\n\n> The only case when this is not true is when repeatedly modifying a\n> subset of the data (say, a few data blocks), because the changes merge\n> into a single write during checkpoint.\n\nIn my case most of the time I would be updating the same rows every 5\nminutes for the entire clock hour. Every 5 minutes I would be updating the\nsame set of rows (adding new data to the data[] field) in the tables. But\nof course checkpoints don't happen exactly at the 1 hour mark on the clock.\nBut I think my system could take advantage of the behavior you explain\nabove.\n\n$ cat /proc/meminfo | grep Dirty\n\nHere is a log of Dirty info every 1 second during checkpoint:\nhttp://pastebin.com/gmJFFAKW\nThe top stats are while the checkpoint alone was running. The checkpoint was\nrunning between 22:12:09-22:21:35. Around 22:17:30-22:24:21 is when the\nwrite queries were running.\n\nAlso, watch \"iostat -x -k 1\" or something similar to see disk activity.\n\nHere is a log of iostat every 1 second during checkpoint:\nhttp://pastebin.com/RHyMkiQt\nThese are stats for the timeframe corresponding to the dirty logs above. But\nbecause of the 500k size constraint in pastebin I had to delete a lot of stats\nat the beginning. 
The top stats are while the checkpoint alone was running.\nThe write queries started running in the middle and at the very end it's\nthe stats of just the write queries (checkpoint completed before the write\nqueries completed).\n\nSo how much data in total are we talking about?\n> OK, so there are multiple tables, and you're updating 50k rows in all\n> tables in total?\n\nEvery 5 minutes: 50K rows are updated in 4 tables. 2K rows are updated in\n39 tables.\nEvery 1 hour (on top of the hour): 50K rows are updated in 8 tables. 2K\nrows are updated in 78 tables.\nIf every update will take up space equivalent to 1 row, then there are 278K\nrows updated across all tables every 5 minutes. And 556K (278 * 2) rows\nupdated across all tables every 1 hour. All tables follow the same schema\nexcept some tables don't have the 'port' field. And the data[] column on\neach row could have maximum 48 values.\n\nCan you post \\dt+ and \\di+ so that we get an idea of table/index sizes?\n\n\\dt+: http://pastebin.com/Dvg2vSeb\n\\di+: http://pastebin.com/586MGn0U\n\nThanks for your input on ext3 filesystem and having WAL on a different\ndisk. I'll see if these can be changed. I cannot change these in the short\nterm.\n\nOn Sun, Apr 27, 2014 at 3:46 PM, Tomas Vondra <[email protected]> wrote:\n\n> On 27.4.2014 23:01, Elanchezhiyan Elango wrote:\n> > (I am resending this question after waiting for several hours because\n> > my previous mail got stalled probably because I didn't confirm my\n> > email address after subscribing. So resending the mail. Sorry if this\n> > is causing a double post.)\n> >\n> > *Problem description:*\n> > After a few days of running in my test environment, a query timed out\n> > (query timeout=4mins). Also in general the queries were taking a lot\n> > longer than expected. The workload in my database is a write intensive\n> > workload. And the writes happen in a burst every 5 minutes. 
There are a\n> > whole bunch of insert and update queries that run every 5 minutes. When\n> > I analyzed the situation (by enabling more postgres logs), I noticed\n> > that postgres checkpoints were triggering approximately every 5 minutes\n> > and based on my online research I suspected the i/o overhead of\n> > checkpoints was affecting the query performance. The checkpoint related\n> > settings were:\n> > checkpoint_segments = 30\n> > checkpoint_timeout = 15min\n> >\n> > I modified these settings to the following:\n> > checkpoint_segments = 250\n> > checkpoint_timeout = 1h\n> > checkpoint_completion_target = 0.9\n>\n> The problem is that while this makes the checkpoints less frequent, it\n> accumulates more changes that need to be written to disk during the\n> checkpoint. Which means the impact more severe.\n>\n> The only case when this is not true is when repeatedly modifying a\n> subset of the data (say, a few data blocks), because the changes merge\n> into a single write during checkpoint.\n>\n> > After I tweaked these settings, checkpoints were happening only once in\n> > an hour and that improved the query performance. However, when the\n> > checkpoint happens every hour, the query performance is still very poor.\n> > This is still undesirable to my system.\n>\n> So, can you share a few of the checkpoint log messages? So that we get\n> an idea of how much data needs to be synced to disk.\n>\n> > I also tried editing dirty_background_ratio and dirty_expire_centisecs\n> > in /etc/sysctl.conf. All dirty related kernel settings:\n> >\n> >># sysctl -a | grep dirty\n> >\n> > vm.dirty_background_ratio = 1\n> > vm.dirty_background_bytes = 0\n> > vm.dirty_ratio = 20\n> > vm.dirty_bytes = 0\n> > vm.dirty_writeback_centisecs = 500\n> > vm.dirty_expire_centisecs = 500\n> >\n> > This also didn't improve the situation.\n>\n> Can you monitor the amount of dirty data in page cache, i.e. data that\n> needs to be written to disk? 
Wait for the checkpoint and sample the\n> /proc/meminfo a few times:\n>\n> $ cat /proc/meminfo | grep Dirty\n>\n> Also, watch \"iostat -x -k 1\" or something similar to see disk activity.\n>\n> > My question is how to track down the reason for the poor performance\n> > during checkpoints and improve the query performance when the\n> > checkpoints happen?\n> >\n> > * *EXPLAIN ANALYZE:*\n> > o http://explain.depesz.com/s/BNva - An insert query inserting\n> > just 129 rows takes 20 seconds.\n> > o http://explain.depesz.com/s/5hA - An update query updating 43926\n> > rows takes 55 seconds.\n> > * *History:* It gets slower after a few days of the system running.\n> >\n> > *Table Metadata*:\n> >\n> > * The tables get updated every 5 minutes. Utmost 50000 rows in a table\n> > get updated every 5 minutes. About 50000 rows get inserted every 1\n> hour.\n> > * There are 90 tables in the DB. 43 of these are updated every 5\n> > minutes. 8/90 tables receive a high update traffic of 50000\n> > updates/5mins. Remaining tables receive an update traffic of 2000\n> > updates/5min. 43/90 tables are updated every 1 hour.\n>\n> So how much data in total are we talking about?\n>\n> > *PostgreSQL version: *PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu,\n> > compiled by gcc (GCC) 4.6.x-google 20120601 (prerelease), 64-bit\n> >\n> > *How you installed PostgreSQL: *Compiled from source and installed.\n> >\n> > *Changes made to the settings in the postgresql.conf file:*\n>\n> Seems fine to me, except for the following changes:\n>\n> > name | current_setting | source\n> >\n> ------------------------------+------------------------+----------------------\n> > maintenance_work_mem | 64MB | configuration\n> file\n> > temp_buffers | 256MB | configuration\n> file\n> > wal_buffers | 1MB | configuration\n> file\n> > work_mem | 128MB | configuration\n> file\n>\n> Any particular reasons for setting work_mem > maintenance_work_mem? 
Why\n> have you modified wal_buffer and temp_buffers?\n>\n> I doubt these are related to the issues you're seeing, though.\n>\n> > * *Relevant Schema*: All tables referenced in this question have this\n> > same schema\n> >\n> > managed_target_stats=> \\d stat_300_3_1\n> >\n> > Table \"public.stat_300_40110_1\"\n> >\n> > Column | Type | Modifiers\n> > --------+---------+-----------\n> > ts | integer |\n> > target | bigint |\n> > port | integer |\n> > data | real[] |\n> >\n> > Indexes:\n> > \"unique_stat_300_40110_1\" UNIQUE CONSTRAINT, btree (ts, target, port)\n> > \"idx_port_stat_300_40110_1\" btree (port)\n> > \"idx_target_stat_300_40110_1\" btree (target)\n> > \"idx_ts_stat_300_40110_1\" btree (ts)\n>\n> OK, so there are multiple tables, and you're updating 50k rows in all\n> tables in total? Can you post \\dt+ and \\di+ so that we get an idea of\n> table/index sizes?\n>\n> > * *Hardware*:\n> > o CPU: Intel(R) Xeon(R) CPU E5205 @ 1.86GHz\n> > o Memory: 6GB\n> > o Storage Details:\n> >\n> > There are 2 500GB disks (/dev/sda, /dev/sdb) with the following 6\n> > partitions on each disk.\n> >\n> > *Number Start End Size Type File system Flags*\n> >\n> > 1 512B 24.7MB 24.7MB primary boot\n> > 2 24.7MB 6473MB 6449MB primary linux-swap(v1)\n> > 3 6473MB 40.8GB 34.4GB primary ext3\n> > 4 40.8GB 500GB 459GB extended lba\n> > 5 40.8GB 408GB 367GB logical ext3\n> > 6 408GB 472GB 64.4GB logical ext3\n>\n> The first problem here is ext3. It's behavior when performing fsync is\n> really terrible. See\n>\n> http://blog.2ndquadrant.com/linux_filesystems_and_postgres/\n>\n> for more details. 
So, the first thing you should do is switching to ext4\n> or xfs.\n>\n>\n> > *Disk model and details:*\n> >\n> > Model Family: Western Digital RE3 Serial ATA family\n>\n> Regular 7.2k SATA disk, not the most powerful piece of hardware.\n>\n>\n> > The postgres data is stored on a software RAID10 on partition 5 of\n> > both these disks.\n>\n> So essentially RAID0, as you only have 2 drives.\n>\n> > * *Maintenance Setup*: autovacuum is running with default settings.\n> > Old records are deleted every night. I also do 'vacuum full' on a 12\n> > tables that receive large number of updates every night at 1AM. I\n> > have noticed that these 'vacuum full' also time out. (I am planning\n> > to post a separate question regarding my vacuuming strategy).\n>\n> My bet is it's related. If the system is I/O bound, it's natural the\n> vacuum full are performing badly too.\n>\n> > * *WAL Configuration*: The WAL is in the same disk.\n>\n> Which is not helping, because it interferes with the other I/O.\n>\n> regards\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Sun, 27 Apr 2014 22:50:08 -0700",
"msg_from": "Elanchezhiyan Elango <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": "Elanchezhiyan Elango <[email protected]> writes:\n>> The problem is that while this makes the checkpoints less frequent, it\n>> accumulates more changes that need to be written to disk during the\n>> checkpoint. Which means the impact more severe.\n\n> True. But the checkpoints finish in approximately 5-10 minutes every time\n> (even with checkpoint_completion_target of 0.9).\n\nThere's something wrong with that. I wonder whether you need to kick\ncheckpoint_segments up some more to keep the checkpoint from being run\ntoo fast.\n\nEven so, though, a checkpoint spread over 5-10 minutes ought to provide\nthe kernel with enough breathing room to flush things. It sounds like\nthe kernel is just sitting on the dirty buffers until it gets hit with\nfsyncs, and then it's dumping them as fast as it can. So you need some\nmore work on tuning the kernel parameters.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 10:07:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": "On 28.4.2014 16:07, Tom Lane wrote:\n> Elanchezhiyan Elango <[email protected]> writes:\n>>> The problem is that while this makes the checkpoints less\n>>> frequent, it accumulates more changes that need to be written to\n>>> disk during the checkpoint. Which means the impact more severe.\n> \n>> True. But the checkpoints finish in approximately 5-10 minutes\n>> every time (even with checkpoint_completion_target of 0.9).\n> \n> There's something wrong with that. I wonder whether you need to\n> kick checkpoint_segments up some more to keep the checkpoint from\n> being run too fast.\n> \n> Even so, though, a checkpoint spread over 5-10 minutes ought to\n> provide the kernel with enough breathing room to flush things. It\n> sounds like the kernel is just sitting on the dirty buffers until it\n> gets hit with fsyncs, and then it's dumping them as fast as it can.\n> So you need some more work on tuning the kernel parameters.\n\nThere's certainly something fishy, because although this is the supposed\nconfiguration:\n\ncheckpoint_segments = 250\ncheckpoint_timeout = 1h\ncheckpoint_completion_target = 0.9\n\nthe checkpoint logs typically finish in much shorter periods of time.\nLike this, for example:\n\n\n\n\n> \n> regards, tom lane\n> \n> \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 22:41:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": "On Mon, Apr 28, 2014 at 1:41 PM, Tomas Vondra <[email protected]> wrote:\n\n> On 28.4.2014 16:07, Tom Lane wrote:\n> > Elanchezhiyan Elango <[email protected]> writes:\n> >>> The problem is that while this makes the checkpoints less\n> >>> frequent, it accumulates more changes that need to be written to\n> >>> disk during the checkpoint. Which means the impact more severe.\n> >\n> >> True. But the checkpoints finish in approximately 5-10 minutes\n> >> every time (even with checkpoint_completion_target of 0.9).\n> >\n> > There's something wrong with that. I wonder whether you need to\n> > kick checkpoint_segments up some more to keep the checkpoint from\n> > being run too fast.\n> >\n> > Even so, though, a checkpoint spread over 5-10 minutes ought to\n> > provide the kernel with enough breathing room to flush things. It\n> > sounds like the kernel is just sitting on the dirty buffers until it\n> > gets hit with fsyncs, and then it's dumping them as fast as it can.\n> > So you need some more work on tuning the kernel parameters.\n>\n> There's certainly something fishy, because although this is the supposed\n> configuration:\n>\n> checkpoint_segments = 250\n> checkpoint_timeout = 1h\n> checkpoint_completion_target = 0.9\n>\n> the checkpoint logs typically finish in much shorter periods of time.\n>\n\n\nThat doesn't look fishy to me. The checkpointer will not take more than\none nap between buffers, so it will always write at least 10 buffers per\nsecond (of napping time) even if that means it finishes early. Which seems\nto be the case here--the length of the write cycle seems to be about one\ntenth the number of buffers written.\n\nEven if that were not the case, it also doesn't count buffers written by\nthe backends or the background writer as having been written, so that is\nanother reason for it to finish early. Perhaps the fsync queue should pass\non information of whether the written buffers were marked for the\ncheckpointer. 
There is no reason to think this would improve performance,\nbut it might reduce confusion.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 28 Apr 2014 13:54:09 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": "Sorry, hit \"send\" too early by accident.\n\nOn 28.4.2014 16:07, Tom Lane wrote:\n> Elanchezhiyan Elango <[email protected]> writes:\n>>> The problem is that while this makes the checkpoints less\n>>> frequent, it accumulates more changes that need to be written to\n>>> disk during the checkpoint. Which means the impact more severe.\n> \n>> True. But the checkpoints finish in approximately 5-10 minutes\n>> every time (even with checkpoint_completion_target of 0.9).\n> \n> There's something wrong with that. I wonder whether you need to\n> kick checkpoint_segments up some more to keep the checkpoint from\n> being run too fast.\n\nToo fast? All the checkpoints listed in the log were \"timed\", pretty\nmuch exactly in 1h intervals:\n\nApr 26 00:12:57 LOG: checkpoint starting: time\nApr 26 01:12:57 LOG: checkpoint starting: time\nApr 26 02:12:57 LOG: checkpoint starting: time\nApr 26 03:12:57 LOG: checkpoint starting: time\nApr 26 04:12:58 LOG: checkpoint starting: time\nApr 26 05:12:57 LOG: checkpoint starting: time\nApr 26 06:12:57 LOG: checkpoint starting: time\n\nThere's certainly something fishy, because although this is the supposed\nconfiguration:\n\ncheckpoint_segments = 250\ncheckpoint_timeout = 1h\ncheckpoint_completion_target = 0.9\n\nthe checkpoint logs typically finish in much shorter periods of time.\nLike this, for example:\n\nApr 26 10:12:57 LOG: checkpoint starting: time\nApr 26 10:26:27 LOG: checkpoint complete: wrote 9777 buffers (15.3%); 0\ntransaction log file(s) added, 0 removed, 153 recycled; write=800.377 s,\nsync=8.605 s, total=809.834 s; sync files=719, longest=1.034 s,\naverage=0.011 s\n\nAnd that's one of the longer runs - most of the others run in ~5-6\nminutes. Now, maybe I'm mistaken but I'd expect the checkpoints to\nfinish in ~54 minutes, which is (checkpoint_completion_target *\ncheckpoint_timeout).\n\n> Even so, though, a checkpoint spread over 5-10 minutes ought to\n> provide the kernel with enough breathing room to flush things. It\n> sounds like the kernel is just sitting on the dirty buffers until it\n> gets hit with fsyncs, and then it's dumping them as fast as it can.\n> So you need some more work on tuning the kernel parameters.\n\nI'm not sure about this - the /proc/meminfo snapshots sent in the\nprevious post show that the amount of \"Dirty\" memory is usually well\nbelow ~20MB, with max at ~36MB at 22:24:26, and within a matter of seconds\nit drops down to ~10MB of dirty data.\n\nAlso, the kernel settings seem quite aggressive to me:\n\nvm.dirty_background_ratio = 1\nvm.dirty_background_bytes = 0\nvm.dirty_ratio = 20\nvm.dirty_bytes = 0\nvm.dirty_writeback_centisecs = 500\nvm.dirty_expire_centisecs = 500\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 22:59:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": "On 28.4.2014 07:50, Elanchezhiyan Elango wrote:\n> \n> So how much data in total are we talking about?\n> OK, so there are multiple tables, and you're updating 50k rows in all\n> tables in total? \n> \n> Every 5 minutes: 50K rows are updated in 4 tables. 2K rows are updated\n> in 39 tables. \n> Every 1 hour (on top of the hour): 50K rows are updated in 8 tables. 2K\n> rows are updated in 78 tables.\n> If every update will take up space equivalent to 1 row, then there are\n> 278K rows updated across all tables every 5 minutes. And 556K (278 * 2)\n> rows updated across all tables every 1 hour. All tables follow the same\n> schema except some tables don't have the 'port' field. And the data[]\n> column on each row could have maximum 48 values.\n\nI wasn't really asking about the amount of updates (that's reasonably\nwell seen in the checkpoint logs), but about the size of the database.\n\n> Can you post \\dt+ and \\di+ so that we get an idea of table/index sizes?\n> \n> \\dt+: http://pastebin.com/Dvg2vSeb\n> \\di+: http://pastebin.com/586MGn0U \n\nAccording to the output, it seems you're dealing with ~20GB of data and\n~30GB of indexes. Is that about right?\n\n\n> Thanks for your input on ext3 filesystem and having WAL on a\n> different disk. I'll see if these can be changed. I cannot change\n> these in the short term.\n\nWhat kernel version is this, actually?\n\nAlso, have you done some basic performance tests, to see how the disk\narray behaves? I mean something like\n\n dd if=/dev/zero of=/mnt/raid/test.file bs=1M count=16000\n dd if=/mnt/raid/test.file of=/dev/null bs=1M count=16000\n\nto test sequential performance, pgbench to test something more random\netc. 
Because trying to solve this from the \"it's a checkpoint issue\" angle may be\nwrong when in reality it might be something completely different.\n\nAlso, are you sure there's no other source of significant I/O activity?\nTry to run iotop to watch what's happening there.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 23:10:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
},
{
"msg_contents": "On 28.4.2014 22:54, Jeff Janes wrote:\n> On Mon, Apr 28, 2014 at 1:41 PM, Tomas Vondra <[email protected]\n> There's certainly something fishy, because although this is the supposed\n> configuration:\n> \n> checkpoint_segments = 250\n> checkpoint_timeout = 1h\n> checkpoint_completion_target = 0.9\n> \n> the checkpoint logs typically finish in much shorter periods of time.\n> \n> That doesn't look fishy to me. The checkpointer will not take more than\n> one nap between buffers, so it will always write at least 10 buffers per\n> second (of napping time) even if that means it finishes early. Which\n> seems to be the case here--the length of the write cycle seems to be\n> about one tenth the number of buffers written.\n\nOh, makes sense I guess. Apparently I'm tuning this only on systems\ndoing a lot of I/O, so this behaviour somehow escaped me.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 23:19:08 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpoints and slow queries"
}
] |
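Jeff's explanation above — the checkpointer writes at least 10 buffers per second of napping time, so small checkpoints finish early — reduces to quick arithmetic; the 100 ms maximum nap below is an assumption based on his description:

```python
def approx_write_phase_s(buffers_to_write: int,
                         checkpoint_timeout_s: float,
                         completion_target: float,
                         max_nap_s: float = 0.1) -> float:
    """Approximate duration of the checkpoint write phase.

    The checkpointer paces itself to finish by checkpoint_timeout *
    checkpoint_completion_target, but (per Jeff's explanation) it sleeps
    at most max_nap_s between buffers, so it writes at least 10 buffers
    per second and finishes early when few buffers are dirty.
    """
    paced = checkpoint_timeout_s * completion_target
    floor = buffers_to_write * max_nap_s  # one maximal nap per buffer
    return min(paced, floor)

# 1,000 dirty buffers with checkpoint_timeout = 1h, target = 0.9:
# the write phase lasts roughly 100 s, not the full 3,240 s —
# matching Jeff's "about one tenth the number of buffers written".
```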
[
{
"msg_contents": "Hello everybody,\n\nI'm experiencing a performance-related issue during some validation\nmeasurements.\n\nLet me first of all clarify what kind of problem I am facing.\nI've set up a plain TPC-H database without any additional indexes or\nanything like that.\nTo get some performance impressions I wrote a little script that 1) uses\npg_ctl to start postgres, 2) executes the same query several times and\nfinally 3) stops the postgres instance using pg_ctl again.\n\nThese 3 steps are executed several times, resulting in varying performance\nresults whose differences I think should be smaller (even taking runtime\ndifferences I don't have any influence on into account).\n\nRun1:\nreal 0m15.005s\nuser 0m0.000s\nsys 0m0.000s\n\n\nRun2:\nreal 0m14.012s\nuser 0m0.000s\nsys 0m0.000s\n\n\nPlease be aware that I provided numbers taken from the middle of each run\n(to exclude any I/O-related effects), that the buffers are warm, that I\ndisabled Intel's Turbo Boost feature, and that I have exclusive access to the\nmachine.\nI used explain (analyze, buffers) to make sure no data has to be loaded from\ndisk during runtime (top and vmstat did not show any movement either).\nI flushed the OS buffers (no effect on the performance difference), I\ndisabled autovacuum, and I adapted my buffer sizes to hold all the data\nrequired, but I have not been able to sort the performance difference out yet.\n\nThe query I'm currently using is quite simple:\nselect\nmin(l_extendedprice * l_discount * l_tax)\nfrom lineitem;\n\nNote that I'm aware that my shared_buffers setting has to be bigger than 1/4\nof the table to get past the ring buffer.\n\nIt would be great if anyone could give me some advice on where to look or\nprovide me with some information about postgres performance behaviour.\n\nThank you very much\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Varying-performacne-results-after-instance-restart-tp5801717.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 06:41:19 -0700 (PDT)",
"msg_from": "MadDamon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Varying performance results after instance restart"
}
] |
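To judge whether the ~1 s gap between Run1 and Run2 above is real or noise, it helps to collect several samples and compute the spread rather than eyeball two runs; a minimal timing harness (the callable below is a placeholder for the actual query execution, e.g. via psql or a driver):

```python
import statistics
import time

def time_runs(fn, repeats: int = 5):
    """Time repeated executions of a zero-argument callable and return
    (mean_seconds, coefficient_of_variation). In practice fn would wrap
    the benchmark query; any callable works for illustration."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean if repeats > 1 else 0.0
    return mean, cv
```

A coefficient of variation of a few percent across restarts is common; a consistently bimodal split between restarts usually points at memory layout, NUMA placement, or cache state rather than the query itself.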
[
{
"msg_contents": "I've been doing a bit of benchmarking and real-world performance \ntesting, and have found some curious results.\n\nThe load in question is a fairly-busy machine hosting a web service that \nuses Postgresql as its back end.\n\n\"Conventional Wisdom\" is that you want to run an 8k record size to match \nPostgresql's inherent write size for the database.\n\nHowever, operational experience says this may no longer be the case now \nthat modern ZFS systems support LZ4 compression, because modern CPUs can \ncompress fast enough that they overrun raw I/O capacity. This in turn \nmeans that the recordsize is no longer the record size, basically, and \nPostgresql's on-disk file format is rather compressible -- indeed, in \nactual performance on my dataset it appears to be roughly 1.24x, which is \nnothing to sneeze at.\n\nThe odd thing is that I am getting better performance with a 128k record \nsize on this application than I get with an 8k one! Not only is the \nsystem subjectively faster to respond and objectively able to sustain a \nhigher TPS load, but the I/O busy percentage as measured during \noperation is MARKEDLY lower (by nearly an order of magnitude!)\n\nThis is not expected behavior!\n\nWhat I am curious about, however, is the xlog -- that appears to suffer \npretty badly from a 128k record size, although it compresses even \nmore materially: 1.94x (!)\n\nThe files in the xlog directory are large (16MB each), and thus the \"first \nblush\" expectation would be that having a larger record size for that \nstorage area would help. It appears that instead it hurts.\n\nIdeas?\n\n-- \n-- Karl\[email protected]",
"msg_date": "Mon, 28 Apr 2014 10:47:38 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Revisiting disk layout on ZFS systems"
},
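A back-of-the-envelope model of why sub-record writes hurt at a large recordsize: assuming ZFS copy-on-writes whole records, any write smaller than the recordsize pays for a full record. The numbers below are illustrative, not measurements from this thread:

```python
import math

def write_amplification(write_kb: float, recordsize_kb: float,
                        compress_ratio: float = 1.0) -> float:
    """Worst-case physical kB written per logical kB when a write smaller
    than the ZFS recordsize forces copy-on-write of whole records.
    compress_ratio is logical/physical (e.g. the 1.94x reported for xlog).
    write_kb must be > 0."""
    records = math.ceil(write_kb / recordsize_kb)
    physical_kb = records * recordsize_kb / compress_ratio
    return physical_kb / write_kb

# An 8 kB commit-sized write against 128 kB records with 1.94x compression
# still costs ~8.2x the logical bytes; against 8 kB records it costs ~1x.
```

This is consistent with the observation that large records pay off for compressible bulk table I/O but punish the fsync-heavy, small-write xlog.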
{
"msg_contents": "On 04/28/2014 06:47 PM, Karl Denninger wrote:\n> What I am curious about, however, is the xlog -- that appears to suffer\n> pretty badly from 128k record size, although it compresses even\n> more-materially; 1.94x (!)\n>\n> The files in the xlog directory are large (16MB each) and thus \"first\n> blush\" would be that having a larger record size for that storage area\n> would help. It appears that instead it hurts.\n\nThe WAL is fsync'd frequently. My guess is that that causes a lot of \nextra work to repeatedly recompress the same data, or something like that.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 21:04:02 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "On 4/28/2014 1:04 PM, Heikki Linnakangas wrote:\n> On 04/28/2014 06:47 PM, Karl Denninger wrote:\n>> What I am curious about, however, is the xlog -- that appears to suffer\n>> pretty badly from 128k record size, although it compresses even\n>> more-materially; 1.94x (!)\n>>\n>> The files in the xlog directory are large (16MB each) and thus \"first\n>> blush\" would be that having a larger record size for that storage area\n>> would help. It appears that instead it hurts.\n>\n> The WAL is fsync'd frequently. My guess is that that causes a lot of \n> extra work to repeatedly recompress the same data, or something like \n> that.\n>\n> - Heikki\n>\nIt shouldn't as ZFS re-writes on change, and what's showing up is not \nhigh I/O *count* but rather percentage-busy, which implies lots of head \nmovement (that is, lots of sub-allocation unit writes.)\n\nIsn't WAL essentially sequential writes during normal operation?\n\n-- \n-- Karl\[email protected]",
"msg_date": "Mon, 28 Apr 2014 13:07:22 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "On 04/28/2014 09:07 PM, Karl Denninger wrote:\n>> The WAL is fsync'd frequently. My guess is that that causes a lot of\n>> extra work to repeatedly recompress the same data, or something like\n>> that.\n>\n> It shouldn't as ZFS re-writes on change, and what's showing up is not\n> high I/O*count* but rather percentage-busy, which implies lots of head\n> movement (that is, lots of sub-allocation unit writes.)\n\nThat sounds consistent with frequent fsyncs.\n\n> Isn't WAL essentially sequential writes during normal operation?\n\nYes, it's totally sequential. But it's fsync'd at every commit, which \nmeans a lot of small writes.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 21:22:36 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "On Mon, Apr 28, 2014 at 11:07 AM, Karl Denninger <[email protected]> wrote:\n\n>\n> On 4/28/2014 1:04 PM, Heikki Linnakangas wrote:\n>\n>> On 04/28/2014 06:47 PM, Karl Denninger wrote:\n>>\n>>> What I am curious about, however, is the xlog -- that appears to suffer\n>>> pretty badly from 128k record size, although it compresses even\n>>> more-materially; 1.94x (!)\n>>>\n>>> The files in the xlog directory are large (16MB each) and thus \"first\n>>> blush\" would be that having a larger record size for that storage area\n>>> would help. It appears that instead it hurts.\n>>>\n>>\n>> The WAL is fsync'd frequently. My guess is that that causes a lot of\n>> extra work to repeatedly recompress the same data, or something like that.\n>>\n>> - Heikki\n>>\n>> It shouldn't as ZFS re-writes on change, and what's showing up is not\n> high I/O *count* but rather percentage-busy, which implies lots of head\n> movement (that is, lots of sub-allocation unit writes.)\n>\n> Isn't WAL essentially sequential writes during normal operation?\n\n\nOnly if you have some sort of non-volatile intermediary, or are willing to\nrisk your data integrity. Otherwise, the fsync nature trumps the\nsequential nature.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 28 Apr 2014 11:26:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
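Heikki's point — sequential, but fsync'd at every commit — can be felt directly with a small harness that appends a commit-sized record and fsyncs each time. This is a sketch of the access pattern only, not of actual WAL internals (real behavior also depends on synchronous_commit and group commit):

```python
import os
import tempfile
import time

def avg_fsync_ms(commits: int = 200, record_bytes: int = 512) -> float:
    """Append one small 'commit record' per iteration and fsync after each,
    mimicking the WAL's fsync-per-commit pattern. Returns the mean latency
    per commit in milliseconds."""
    fd, path = tempfile.mkstemp()
    try:
        payload = b"x" * record_bytes
        start = time.perf_counter()
        for _ in range(commits):
            os.write(fd, payload)
            os.fsync(fd)  # the flush, not the sequential write, dominates
        elapsed = time.perf_counter() - start
        return elapsed * 1000.0 / commits
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    print(f"~{avg_fsync_ms():.2f} ms per synchronous commit record")
```

Run on the pool holding the xlog, this shows how much each commit pays for durability regardless of how sequential the byte stream is.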
{
"msg_contents": "On 4/28/2014 1:22 PM, Heikki Linnakangas wrote:\n> On 04/28/2014 09:07 PM, Karl Denninger wrote:\n>>> The WAL is fsync'd frequently. My guess is that that causes a lot of\n>>> extra work to repeatedly recompress the same data, or something like\n>>> that.\n>>\n>> It shouldn't as ZFS re-writes on change, and what's showing up is not\n>> high I/O*count* but rather percentage-busy, which implies lots of head\n>> movement (that is, lots of sub-allocation unit writes.)\n>\n> That sounds consistent frequent fsyncs.\n>\n>> Isn't WAL essentially sequential writes during normal operation?\n>\n> Yes, it's totally sequential. But it's fsync'd at every commit, which \n> means a lot of small writes.\n>\n> - Heikki\n\nMakes sense; I'll muse on whether there's a way to optimize this \nfurther... I'm not running into performance problems at present but I'd \nrather be ahead of it....\n\n-- \n-- Karl\[email protected]",
"msg_date": "Mon, 28 Apr 2014 13:27:07 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "On 4/28/2014 1:26 PM, Jeff Janes wrote:\n> On Mon, Apr 28, 2014 at 11:07 AM, Karl Denninger <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n>\n> Isn't WAL essentially sequential writes during normal operation?\n>\n>\n> Only if you have some sort of non-volatile intermediary, or are \n> willing to risk your data integrity. Otherwise, the fsync nature \n> trumps the sequential nature.\n>\nThat would be a \"no\" on the data integrity :-)\n\n-- \n-- Karl\[email protected]",
"msg_date": "Mon, 28 Apr 2014 13:29:29 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "Karl Denninger wrote:\r\n> I've been doing a bit of benchmarking and real-world performance\r\n> testing, and have found some curious results.\r\n\r\n[...]\r\n\r\n> The odd thing is that I am getting better performance with a 128k record\r\n> size on this application than I get with an 8k one!\r\n\r\n[...]\r\n\r\n> What I am curious about, however, is the xlog -- that appears to suffer\r\n> pretty badly from 128k record size, although it compresses even\r\n> more-materially; 1.94x (!)\r\n> \r\n> The files in the xlog directory are large (16MB each) and thus \"first\r\n> blush\" would be that having a larger record size for that storage area\r\n> would help. It appears that instead it hurts.\r\n\r\nAs has been explained, the access patterns for WAL are quite different.\r\n\r\nFor your experiment, I'd keep them on different file systems so that\r\nyou can tune them independently.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 29 Apr 2014 08:13:54 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "On 4/29/2014 3:13 AM, Albe Laurenz wrote:\n> Karl Denninger wrote:\n>> I've been doing a bit of benchmarking and real-world performance\n>> testing, and have found some curious results.\n> [...]\n>\n>> The odd thing is that I am getting better performance with a 128k record\n>> size on this application than I get with an 8k one!\n> [...]\n>\n>> What I am curious about, however, is the xlog -- that appears to suffer\n>> pretty badly from 128k record size, although it compresses even\n>> more-materially; 1.94x (!)\n>>\n>> The files in the xlog directory are large (16MB each) and thus \"first\n>> blush\" would be that having a larger record size for that storage area\n>> would help. It appears that instead it hurts.\n> As has been explained, the access patterns for WAL are quite different.\n>\n> For your experiment, I'd keep them on different file systems so that\n> you can tune them independently.\n>\nThey're on physically-different packs (pools and groups of spindles) as \nthat has been best practice for performance reasons pretty-much always \n-- I just thought it was interesting, and worth noting, that the usual \nrecommendation to run an 8k record size for the data store itself may no \nlonger be valid.\n\nIt certainly isn't with my workload.\n\n-- \n-- Karl\[email protected]",
"msg_date": "Tue, 29 Apr 2014 07:12:37 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
},
{
"msg_contents": "On 04/28/2014 08:47 AM, Karl Denninger wrote:\n> The odd thing is that I am getting better performance with a 128k record\n> size on this application than I get with an 8k one! Not only is the\n> system faster to respond subjectively and can it sustain a higher TPS\n> load objectively but the I/O busy percentage as measured during\n> operation is MARKEDLY lower (by nearly an order of magnitude!)\n\nThanks for posting your experience! I'd love it even more if you could\npost some numbers to go with.\n\nQuestions:\n\n1) is your database (or the active portion thereof) smaller than RAM?\n\n2) is this a DW workload, where most writes are large writes?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 May 2014 14:34:46 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Revisiting disk layout on ZFS systems"
}
] |
[
{
"msg_contents": "I'm trying to get to the bottom of a performance issue on a server \nrunning PostgreSQL 9.3.1 on Centos 5. The machine is a dual quad-core \nXeon E5620 with 24GB ECC RAM and four enterprise SATA Seagate \nConstellation ES drives configured as 2 software RAID1 volumes. The \nmain DB is on one volume and some, more used indexes and WAL logs are on \nthe other.\n\nOverall, the load doesn't appear to be heavy, but we're still getting \nslow queries, for example, here's an extract from the log:\n\n2014-04-28 16:51:02.904 GMT 25998: LOG: checkpoint starting: time\n2014-04-28 16:53:37.204 GMT 30053: LOG: duration: 1067.464 ms execute \n<unnamed>: select \"id\" from \"ProductSupplier\" where \n\"product\"='25553848082928'\n2014-04-28 16:54:12.701 GMT 30053: LOG: duration: 1105.844 ms execute \n<unnamed>: select \"id\" from \"ProductSupplier\" where \n\"product\"='1626407082928'\n2014-04-28 16:54:46.789 GMT 30053: LOG: duration: 1060.585 ms execute \n<unnamed>: select \n\"id\",\"updated\",\"ean\",\"frontImagePresent\",\"backImagePresent\",\"jacketScanned\",\"x80ImagePresent\",\"x120ImagePresent\",\"x200ImagePresent\",\"x400ImagePresent\",\"type\",\"brand\",\"manProductCode\",\"name\",\"series\",\"subtitle\",\"keywords\",\"mass\",\"length\",\"width\",\"thickness\",\"releaseDate\",\"originalReleaseDate\",\"firstAvailableDate\",\"stockLevel\",\"stockCost\",\"url\",\"ratingTotal\",\"ratingCount\",\"beta\",\"available\",\"reorderSource\",\"reorderPoint\",\"reorderQuantity\",\"status\",\"replacement\",\"countryOfOrigin\",\"formType\",\"formDetail\",\"formDetail1\",\"formDetail2\",\"formDetail3\",\"formDetail4\",\"contentType\",\"packagingType\",\"ebookType\",\"mixedMedia\",\"mediaCount\",\"shortAnnotationType\",\"longAnnotationType\",\"wikipediaTopic\",\"allMusicId\",\"allMovieId\",\"allGameId\",\"imdbId\",\"boost\",\"salesMovingAverage\",\"movingAverageDate\",\"lifetimeSales\",\"shippingRequirement\",\"shippingSubsidyCalculator\",\"edition\",\"dewey\",\"lcc\",\"l
ccn\",\"pages\",\"minutes\",\"bicSubjects\",\"bisacSubjects\",\"regionEncoding\",\"audioFormat\",\"videoFormat\",\"platform\",\"audit\" \nfrom \"Product\" where id='6686838082928'\n2014-04-28 16:55:58.058 GMT 30053: LOG: duration: 1309.192 ms execute \n<unnamed>: select \"id\" from \"ProductCategory\" where \n\"product\"='1932872082928'\n2014-04-28 16:56:06.019 GMT 25998: LOG: checkpoint complete: wrote 6647 \nbuffers (2.5%); 0 transaction log file(s) added, 0 removed, 2 recycled; \nwrite=298.862 s, sync=4.072 s, total=303.115 s; sync files=155, \nlongest=0.234 s, average=0.026 s\n\nAlthough these tables (ProductSupplier, Product, ProductCategory) are \nlarge (millions of rows), all these queries are done via indexes, for \nexample:\n\n=# explain (analyze, buffers) select \"id\" from \"ProductSupplier\" where \n\"product\"='25553848082928';\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using \"i433275-productIndex\" on \"ProductSupplier\" \n(cost=0.57..15.81 rows=3 width=8) (actual time=0.069..0.105 rows=6 loops=1)\n Index Cond: (product = 25553848082928::bigint)\n Buffers: shared hit=10\n Total runtime: 0.138 ms\n(4 rows)\n\nSo, I would expect it to be really quick, and > 1 second seems really slow.\n\nOverall the load on the server seems quite low, for example, typical \nvmstat -1 is:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy \nid wa st\n 0 1 304 77740 11960 17709156 0 0 99 82 2 2 2 \n1 89 8 0\n 1 0 304 75228 11960 17711164 0 0 1256 635 1418 6498 0 \n0 94 6 0\n 0 1 304 73440 11968 17712036 0 0 1232 149 1253 6232 1 \n0 94 6 0\n 0 2 304 78472 11968 17706016 0 0 1760 89 1325 6361 1 \n0 94 5 0\n 0 1 304 75616 11968 17708480 0 0 2164 72 1371 6855 1 \n0 94 5 0\n 0 1 304 73292 11968 17710320 0 0 1760 112 1335 6673 1 \n0 94 5 0\n 0 2 304 77956 11964 17703528 
0 0 1204 5614 1867 6712 0 \n0 94 6 0\n\nAnd iostat also seems okay:\n\n sda sdb sdc sdd \nmd0 md1 cpu\n kps tps svc_t kps tps svc_t kps tps svc_t kps tps svc_t kps tps \nsvc_t kps tps svc_t us sy wt id\n 619 64 5.4 1478 96 8.9 1494 98 8.7 542 60 5.5 1998 \n139 0.0 804 96 0.0 2 1 8 89\n 1147 151 3.7 5328 88 8.0 5388 90 8.3 1147 145 3.0 5638 \n171 0.0 2245 284 0.0 0 0 4 95\n 865 110 4.6 214 24 12.6 214 24 13.1 885 113 4.5 252 \n28 0.0 1749 222 0.0 4 0 1 95\n 937 107 5.0 40 5 10.0 40 5 13.0 821 93 6.4 80 \n10 0.0 1596 193 0.0 9 0 1 90\n 1206 154 3.2 0 0 0.0 0 0 0.0 1195 153 3.2 0 0 \n0.0 2401 307 0.0 6 0 0 94\n 1111 139 3.8 24 3 10.0 16 2 12.0 1115 144 3.8 40 \n5 0.0 2141 274 0.0 10 0 0 89\n 1222 156 3.6 101 12 4.8 101 12 4.2 1171 150 4.2 93 \n10 0.0 1986 252 0.0 10 0 0 90\n 739 98 5.5 0 0 0.0 0 0 0.0 687 93 5.2 0 0 \n0.0 1425 191 0.0 7 0 0 93\n 775 102 5.2 8 1 13.0 8 1 21.0 755 99 5.1 16 \n2 0.0 1529 201 0.0 10 0 0 89\n 780 105 5.3 16 2 13.0 16 2 11.0 784 103 5.4 32 \n4 0.0 1555 205 0.0 6 0 1 92\n 573 75 6.8 110 32 1.3 110 31 1.1 561 73 7.6 102 \n30 0.0 1109 143 0.0 10 1 0 89\n 639 84 7.3 2833 349 2.1 3085 367 2.0 683 90 6.0 4854 \n592 0.0 1026 135 0.0 2 0 4 93\n 510 70 7.8 2020 240 4.2 1808 227 4.2 586 81 7.2 40 \n5 0.0 1077 146 0.0 0 0 7 93\n 538 75 7.8 24 3 290.0 20 3 239.7 582 81 7.5 40 \n5 0.0 1066 147 0.0 0 0 6 94\n 504 69 9.3 132 17 25.8 128 16 14.4 600 81 8.2 256 \n32 0.0 1083 144 0.0 1 0 6 94\n\nI've tried to optimise postgresql.conf for performance:\n\nmax_connections = 1000 # (change requires restart)\nshared_buffers = 2GB # min 128kB or max_connections*16kB\nwork_mem = 100MB # min 64kB\nmaintenance_work_mem = 100MB # min 1MB\nsynchronous_commit = off # immediate fsync at commit\nwal_buffers = 16MB # min 32kB\ncheckpoint_segments = 64 # in logfile segments, min 1, \n16MB each\ncheckpoint_timeout = 10min # range 30s-1h\neffective_cache_size = 16GB\nlogging_collector = on # Enable capturing of stderr and \ncsvlog\nlog_directory = 'pg_log' # directory where 
log files are \nwritten,\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,\nlog_rotation_age = 1d # Automatic rotation of logfiles \nwill\nlog_min_duration_statement = 1000 # -1 is disabled, 0 logs all \nstatements\nlog_checkpoints = on\nlog_line_prefix = '%m %p: ' # special values:\nautovacuum = on # Enable autovacuum subprocess? \n'on'\nautovacuum_vacuum_threshold = 10000 # min number of row updates before\nautovacuum_analyze_threshold = 1000 # min number of row updates before\nautovacuum_vacuum_scale_factor = 0.1 # fraction of table size before \nvacuum\nautovacuum_analyze_scale_factor = 0.05 # fraction of table size before \nanalyze\nautovacuum_vacuum_cost_delay = 5 # default vacuum cost delay for\ndatestyle = 'iso, dmy'\ndefault_text_search_config = 'pg_catalog.english'\nbackslash_quote = on # on, off, or safe_encoding\nstandard_conforming_strings = off\n\nIf anyone can see anything amiss in the above info, I'd be grateful for \nany suggestions...\n\nThanks,\nMichael.\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 19:12:31 +0200",
"msg_from": "Michael van Rooyen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow queries on 9.3.1 despite use of index"
},
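Given the log excerpt above, one way to check how often the slow statements land inside a checkpoint window is to scan the log mechanically; a sketch assuming the log_line_prefix '%m %p: ' format shown in the posted configuration:

```python
import re

CKPT_START = re.compile(r"^\S+ \S+ GMT \d+: LOG:\s+checkpoint starting")
CKPT_DONE = re.compile(r"^\S+ \S+ GMT \d+: LOG:\s+checkpoint complete")
SLOW_STMT = re.compile(r"^(\S+ \S+) GMT \d+: LOG:\s+duration: ([\d.]+) ms")

def slow_during_checkpoint(lines):
    """Collect (timestamp, duration_ms) for slow statements that were
    logged between a 'checkpoint starting' line and the next
    'checkpoint complete' line."""
    in_ckpt = False
    hits = []
    for line in lines:
        if CKPT_START.match(line):
            in_ckpt = True
        elif CKPT_DONE.match(line):
            in_ckpt = False
        else:
            m = SLOW_STMT.match(line)
            if m and in_ckpt:
                hits.append((m.group(1), float(m.group(2))))
    return hits
```

If most slow statements cluster inside checkpoint windows, that points at write-flush stalls rather than at the index scans themselves.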
{
"msg_contents": "Michael van Rooyen <[email protected]> writes:\n> I'm trying to get to the bottom of a performance issue on a server \n> running PostgreSQL 9.3.1 on Centos 5.\n\nHm ... it seems pretty suspicious that all of these examples take just\nabout exactly 1 second longer than you might expect. I'm wondering\nif there is something sitting on an exclusive table lock somewhere,\nand releasing it after 1 second.\n\nIn particular, this looks quite a bit like the old behavior of autovacuum\nwhen it was trying to truncate empty pages off the end of a relation ---\nit would hold off other accesses to the table until deadlock_timeout\nelapsed, whereupon it'd get kicked off the exclusive lock (and have to\nretry the truncation next time). Are you *sure* this server is running\n9.3.1, and not something pre-9.3?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 13:50:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on 9.3.1 despite use of index"
},
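Tom's observation that every example takes "just about exactly 1 second" is precisely the artifact a log_min_duration_statement = 1000 filter produces; a toy illustration with made-up durations:

```python
def logged_durations(durations_ms, log_min_duration_ms=1000):
    """Only statements at or above log_min_duration_statement reach the
    log, so the logged sample clusters just above the threshold even when
    the vast majority of executions are sub-millisecond."""
    return [d for d in durations_ms if d >= log_min_duration_ms]

# made-up distribution: mostly fast index scans, a handful of stalls
sample = [0.14, 0.2, 0.9, 5.0, 1067.5, 1105.8, 1309.2]
# only the three stalls ever appear in the log
```

Lowering the threshold temporarily (as Jeff also suggests downthread is possible) reveals whether the distribution is bimodal or merely has a long tail.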
{
"msg_contents": "On Mon, Apr 28, 2014 at 10:12 AM, Michael van Rooyen <[email protected]>wrote:\n\n> I'm trying to get to the bottom of a performance issue on a server running\n> PostgreSQL 9.3.1 on Centos 5. The machine is a dual quad-core Xeon E5620\n> with 24GB ECC RAM and four enterprise SATA Seagate Constellation ES drives\n> configured as 2 software RAID1 volumes. The main DB is on one volume and\n> some, more used indexes and WAL logs are on the other.\n>\n> Overall, the load doesn't appear to be heavy, but we're still getting slow\n> queries, for example, here's an extract from the log:\n>\n> 2014-04-28 16:51:02.904 GMT 25998: LOG: checkpoint starting: time\n> 2014-04-28 16:53:37.204 GMT 30053: LOG: duration: 1067.464 ms execute\n> <unnamed>: select \"id\" from \"ProductSupplier\" where\n> \"product\"='25553848082928'\n> 2014-04-28 16:54:12.701 GMT 30053: LOG: duration: 1105.844 ms execute\n> <unnamed>: select \"id\" from \"ProductSupplier\" where\n> \"product\"='1626407082928'\n> 2014-04-28 16:54:46.789 GMT 30053: LOG: duration: 1060.585 ms execute\n> <unnamed>: select\n\n ...\n\n\n> 2014-04-28 16:55:58.058 GMT 30053: LOG: duration: 1309.192 ms execute\n> <unnamed>: select \"id\" from \"ProductCategory\" where\n> \"product\"='1932872082928'\n> 2014-04-28 16:56:06.019 GMT 25998: LOG: checkpoint complete: wrote 6647\n> buffers (2.5%); 0 transaction log file(s) added, 0 removed, 2 recycled;\n> write=298.862 s, sync=4.072 s, total=303.115 s; sync files=155,\n> longest=0.234 s, average=0.026 s\n>\n\nIt looks like something is causing your IO to seize up briefly. 
It is\ncommon for the sync phase of the checkpoint to do that, but that would only\nexplain 3 of the 4 reports above.\n\nIs this causing an actual problem for your users, or are you just trying to\nbe proactive?\n\nYou could change the kernel setting dirty_background_bytes to try to reduce\nthis problem.\n\n\n> Overall the load on the server seems quite low, for example, typical\n> vmstat -1 is:\n>\n\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 0 1 304 77740 11960 17709156 0 0 99 82 2 2 2 1\n> 89 8 0\n> 1 0 304 75228 11960 17711164 0 0 1256 635 1418 6498 0 0\n> 94 6 0\n> 0 1 304 73440 11968 17712036 0 0 1232 149 1253 6232 1 0\n> 94 6 0\n> 0 2 304 78472 11968 17706016 0 0 1760 89 1325 6361 1 0\n> 94 5 0\n> 0 1 304 75616 11968 17708480 0 0 2164 72 1371 6855 1 0\n> 94 5 0\n> 0 1 304 73292 11968 17710320 0 0 1760 112 1335 6673 1 0\n> 94 5 0\n> 0 2 304 77956 11964 17703528 0 0 1204 5614 1867 6712 0 0\n> 94 6 0\n>\n\nIs that typical for when the problem is not occurring, or for when\nit is occurring? Without timestamps to correlate the vmstat output back to\nthe log file, it is very hard to make use of this info. Some versions of\nvmstat have a -t flag.\n\n\n\n\n>\n>\n> I've tried to optimise postgresql.conf for performance:\n>\n> max_connections = 1000 # (change requires restart)\n>\n\n1000 is extremely high. How many connections do you actually use at any\none time?\n\n\n> shared_buffers = 2GB # min 128kB or max_connections*16kB\n> work_mem = 100MB # min 64kB\n>\n\n100MB is also very high, at least in conjunction with the high\nmax_connections.\n\n\n> maintenance_work_mem = 100MB # min 1MB\n> synchronous_commit = off # immediate fsync at commit\n> wal_buffers = 16MB # min 32kB\n> checkpoint_segments = 64 # in logfile segments, min 1, 16MB\n> each\n> checkpoint_timeout = 10min # range 30s-1h\n> effective_cache_size = 16GB\n> logging_collector = on # Enable capturing of stderr and\n> csvlog\n> log_directory = 'pg_log' # directory where log files are\n> written,\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,\n> log_rotation_age = 1d # Automatic rotation of logfiles\n> will\n> log_min_duration_statement = 1000 # -1 is disabled, 0 logs all\n> statements\n>\n\nI would lower this. You can see that a few statements were just over 1000\nms, but can't tell if there are a lot that are at 800 ms, or if you have a\nbimodal distribution with most being 1ms and a few being 1200ms.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 28 Apr 2014 10:52:41 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on 9.3.1 despite use of index"
},
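Jeff's dirty_background_bytes suggestion above refers to the Linux VM writeback knobs. A minimal configuration sketch of lowering the background-writeback threshold; the 16 MB value mirrors what Michael reports trying later in the thread and is illustrative, not a recommendation:

```shell
# Inspect the current (ratio-based, by default) writeback thresholds:
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio

# Switch to a byte-based threshold so background flushing starts earlier;
# setting a *_bytes knob overrides the corresponding *_ratio knob.
sysctl -w vm.dirty_background_bytes=16777216   # 16 MB

# Persist across reboots (hypothetical file name):
echo 'vm.dirty_background_bytes = 16777216' >> /etc/sysctl.d/90-pg-writeback.conf
```

This only shrinks the pool of dirty pages the kernel accumulates before flushing; it does not change what the checkpoint itself writes.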
{
"msg_contents": "\nOn 2014/04/28 07:50 PM, Tom Lane wrote:\n> Michael van Rooyen <[email protected]> writes:\n>> I'm trying to get to the bottom of a performance issue on a server\n>> running PostgreSQL 9.3.1 on Centos 5.\n> Hm ... it seems pretty suspicious that all of these examples take just\n> about exactly 1 second longer than you might expect. I'm wondering\n> if there is something sitting on an exclusive table lock somewhere,\n> and releasing it after 1 second.\nI do have log_min_duration_statement = 1000, which may cause this.\n> In particular, this looks quite a bit like the old behavior of autovacuum\n> when it was trying to truncate empty pages off the end of a relation ---\n> it would hold off other accesses to the table until deadlock_timeout\n> elapsed, whereupon it'd get kicked off the exclusive lock (and have to\n> retry the truncation next time). Are you *sure* this server is running\n> 9.3.1, and not something pre-9.3?\nDefinitely 9.3.1. The strange thing is I have other servers with \nsimilar configurations and load and with the same database, where \nperformance is great, so it's hard for me to know what's different \nhere. Maybe I'm expecting too much from these SATA drives, or it's time \nto add lots of RAM...\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 23:22:08 +0200",
"msg_from": "Michael van Rooyen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries on 9.3.1 despite use of index"
},
{
"msg_contents": "Michael van Rooyen <[email protected]> writes:\n> On 2014/04/28 07:50 PM, Tom Lane wrote:\n>> Hm ... it seems pretty suspicious that all of these examples take just\n>> about exactly 1 second longer than you might expect. I'm wondering\n>> if there is something sitting on an exclusive table lock somewhere,\n>> and releasing it after 1 second.\n\n> I do have log_min_duration_statement = 1000, which may cause this.\n\nAh, I overlooked that detail. Never mind that theory then. Although\nit might still be worth turning on log_lock_waits for awhile, just to\neliminate the possibility of lock-induced delays.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Apr 2014 19:06:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on 9.3.1 despite use of index"
},
{
"msg_contents": "\nOn 2014/04/28 07:52 PM, Jeff Janes wrote:\n> On Mon, Apr 28, 2014 at 10:12 AM, Michael van Rooyen \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> It looks like something is causing your IO to seize up briefly. It is \n> common for the sync phase of the checkpoint to do that, but that would \n> only explain 3 of the 4 reports above.\n>\n> Is this causing an actual problem for your users, or are you just \n> trying to be proactive?\n>\n> You could change the kernel setting dirty_background_bytes to try to \n> reduce this problem.\nThe problem is that this server running background tasks very slowly \n(about 10x slower than a similar server with the same DB but 3x more RAM).\n\nI changed dirty_background_bytes to 16M, previously the \ndirty_background_ratio was 10%. No real effect on the DB performance, \nbut it seems a good change anyway. Thanks for the tip.\n>\n>\n> Overall the load on the server seems quite low, for example,\n> typical vmstat -1 is:\n>\n>\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us\n> sy id wa st\n> 0 1 304 77740 11960 17709156 0 0 99 82 2 \n> 2 2 1 89 8 0\n> 1 0 304 75228 11960 17711164 0 0 1256 635 1418\n> 6498 0 0 94 6 0\n> 0 1 304 73440 11968 17712036 0 0 1232 149 1253\n> 6232 1 0 94 6 0\n> 0 2 304 78472 11968 17706016 0 0 1760 89 1325\n> 6361 1 0 94 5 0\n> 0 1 304 75616 11968 17708480 0 0 2164 72 1371\n> 6855 1 0 94 5 0\n> 0 1 304 73292 11968 17710320 0 0 1760 112 1335\n> 6673 1 0 94 5 0\n> 0 2 304 77956 11964 17703528 0 0 1204 5614 1867\n> 6712 0 0 94 6 0\n>\n>\n> It that typical for when the problem is not occurring, or typical for \n> when it is occurring. Without having timestamps to correlate the \n> vmstat back to log file, it is very hard to make use of this info. 
\n> Some versions of vmstat have a -t flag.\n>\nIt's fairly typical - and although the same underlying query will \nsometimes complete faster or slower, the overall performance / \nthroughput is consistently (as opposed to sporadically) poor.\n>\n>\n>\n> I've tried to optimise postgresql.conf for performance:\n>\n> max_connections = 1000 # (change requires restart)\n>\n>\n> 1000 is extremely high. How many connections do you actually use at \n> any one time?\n>\n> shared_buffers = 2GB # min 128kB or\n> max_connections*16kB\n> work_mem = 100MB # min 64kB\n>\n>\n> 100MB is also very high, at least on conjunction with the high \n> max_connections.\nBlush. Thanks - I've reduced these to more reasonable values (200 / \n10MB), but it didn't have any effect on performance.\n\n> maintenance_work_mem = 100MB # min 1MB\n> synchronous_commit = off # immediate fsync at commit\n> wal_buffers = 16MB # min 32kB\n> checkpoint_segments = 64 # in logfile segments, min\n> 1, 16MB each\n> checkpoint_timeout = 10min # range 30s-1h\n> effective_cache_size = 16GB\n> logging_collector = on # Enable capturing of\n> stderr and csvlog\n> log_directory = 'pg_log' # directory where log\n> files are written,\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name\n> pattern,\n> log_rotation_age = 1d # Automatic rotation of\n> logfiles will\n> log_min_duration_statement = 1000 # -1 is disabled, 0 logs\n> all statements\n>\n>\n> I would lower this. You can see that few statements were just over \n> 1000 ms, but can't tell if there are lot that are at 800 ms, or if you \n> have bimodal distribution with most being 1ms and a few being 1200ms.\nI lowered it to 100ms, and taking the same query in my original post \nover the last few hours, the times vary in the spectrum from 100ms to \njust over a 1s. It seems like an exponential distribution with the norm \nclose to 100ms. 
I am becoming increasingly sure that I'm just up against \nthe limitations of the SATA disks due to the load profile on this \nparticular server. Maybe it's time to reassess the load, or install an \nSSD or lots of RAM...\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 29 Apr 2014 16:17:52 +0200",
"msg_from": "Michael van Rooyen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries on 9.3.1 despite use of index"
}
] |
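A side note on Jeff's log_min_duration_statement advice in the thread above: once the threshold is lowered, the resulting log lines can be summarized to see whether the latency distribution is bimodal. A minimal sketch; the sample lines are hypothetical and real logs will differ depending on log_line_prefix:

```python
import re

# Matches the standard "duration: 123.456 ms" fragment that
# log_min_duration_statement emits for each logged statement.
DURATION_RE = re.compile(r"duration: (\d+\.\d+) ms")

def durations_ms(lines):
    """Extract statement durations (in ms) from postgres log lines."""
    out = []
    for line in lines:
        m = DURATION_RE.search(line)
        if m:
            out.append(float(m.group(1)))
    return out

sample = [
    "LOG:  duration: 1.042 ms  statement: SELECT 1",
    "LOG:  duration: 812.300 ms  statement: SELECT ...",
    "LOG:  duration: 1204.009 ms  statement: UPDATE ...",
    "LOG:  checkpoint starting: time",          # not a statement line
]
ds = durations_ms(sample)
print(len(ds), max(ds))
```

Bucketing the extracted values into a histogram then shows directly whether most statements sit near 1 ms with a slow tail, or cluster in two modes.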
[
{
"msg_contents": "Hi all, I'm using PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu.\n I have a schema where I have lots of messages and some users who might have \nread some of them. When a message is read by a user I create an entry in a table \nmessage_property holding the property (is_read) for that user. The schema is \nas follows:\n\n drop table if exists message_property;\n drop table if exists message;\n drop table if exists person;\n\n create table person(\n id serial primary key,\n username varchar not null unique\n );\n\n create table message(\n id serial primary key,\n subject varchar\n );\n\n create table message_property(\n message_id integer not null references message(id),\n person_id integer not null references person(id),\n is_read boolean not null default false,\n unique(message_id, person_id)\n );\n\n insert into person(username) values('user_' || generate_series(0, 999));\n insert into message(subject) values('Subject ' || random() || \ngenerate_series(0, 999999));\n insert into message_property(message_id, person_id, is_read) select id, 1, \ntrue from message order by id limit 999990;\n insert into message_property(message_id, person_id, is_read) select id, 1, \nfalse from message order by id limit 5 offset 999990;\n analyze;\n\n So, for person \n1 there are 10 unread messages, out of a total of 1 million. 5 of those unread do \nnot have an entry in message_property and 5 have an entry and is_read set to \nFALSE. 
I have the following query to list all un-read messages for person \nwith id=1: SELECT\n m.id AS message_id,\n prop.person_id,\n coalesce(prop.is_read, FALSE) AS is_read,\n m.subject\n FROM message m\n LEFT OUTER JOIN message_property prop ON prop.message_id = m.id AND \nprop.person_id = 1\n WHERE 1 = 1\n AND NOT EXISTS(SELECT\n *\n FROM message_property pr\n WHERE pr.message_id = m.id AND pr.person_id = \nprop.person_id AND prop.is_read = TRUE)\n ; \n The problem is that it's not quite efficient and performs badly, explain \nanalyze shows: \n \nQUERY PLAN\n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Anti Join (cost=1.27..148784.09 rows=5 width=40) (actual \ntime=918.906..918.913 rows=10 loops=1)\n Merge Cond: (m.id = pr.message_id)\n Join Filter: (prop.is_read AND (pr.person_id = prop.person_id))\n Rows Removed by Join Filter: 5\n -> Merge Left Join (cost=0.85..90300.76 rows=1000000 width=40) (actual \ntime=0.040..530.748 rows=1000000 loops=1)\n Merge Cond: (m.id = prop.message_id)\n -> Index Scan using message_pkey on message m (cost=0.42..34317.43 \nrows=1000000 width=35) (actual time=0.014..115.829 rows=1000000 loops=1)\n -> Index Scan using message_property_message_id_person_id_key on \nmessage_property prop (cost=0.42..40983.40 rows=999995 width=9) (actual \ntime=0.020..130.728 rows=999995 loops=1)\n Index Cond: (person_id = 1)\n -> Index Only Scan using message_property_message_id_person_id_key on \nmessage_property pr (cost=0.42..40983.40 rows=999995 width=8) (actual \ntime=0.024..140.349 rows=999995 loops=1)\n Index Cond: (person_id = 1)\n Heap Fetches: 999995\n Total runtime: 918.975 ms\n (13 rows) \n Does anyone have suggestions on how to optimize the query or schema? It's \nimportant that any message not having an entry in message_property for a user \nis considered un-read. \n Thanks! 
-- Andreas Jospeh Krogh CTO / Partner - Visena AS Mobile: +47 909 \n56 963 [email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 13:26:02 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize query for listing un-read messages"
},
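Andreas's schema and its "unread for person 1" semantics can be reproduced in miniature to sanity-check query rewrites before benchmarking them. A sketch using Python's sqlite3 purely as a self-contained stand-in engine (it says nothing about PostgreSQL's plans or timings), with 100 messages instead of 1 million:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
create table person(id integer primary key, username text not null unique);
create table message(id integer primary key, subject text);
create table message_property(
  message_id integer not null references message(id),
  person_id  integer not null references person(id),
  is_read    integer not null default 0,
  unique(message_id, person_id)
);
""")
cur.execute("insert into person(username) values ('user_1')")
cur.executemany("insert into message(id, subject) values (?, ?)",
                [(i, f"Subject {i}") for i in range(1, 101)])
# Messages 1..90 read, 91..95 explicitly unread, 96..100 have no row at all.
cur.executemany("insert into message_property values (?, 1, 1)",
                [(i,) for i in range(1, 91)])
cur.executemany("insert into message_property values (?, 1, 0)",
                [(i,) for i in range(91, 96)])

# A message is unread unless a message_property row marks it read --
# this covers both the is_read = false rows and the missing rows.
unread = cur.execute("""
    select m.id
    from message m
    where not exists (select 1 from message_property pr
                      where pr.message_id = m.id
                        and pr.person_id = 1
                        and pr.is_read)
    order by m.id
""").fetchall()
print([r[0] for r in unread])
```

Any rewrite proposed later in the thread should return these same 10 ids on this toy dataset.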
{
"msg_contents": "\nHi Andreas,\n\n[New to this list, forgive my ignorance.]\n\nOn 05/01/2014 01:26 PM, Andreas Joseph Krogh wrote:\n> I'm using PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu \nMy machine has PostgreSQL 9.1.13 on x86_64-unknown-linux-gnu.\n> I have a schema where I have lots of messages and some users who might \n> have read some of them. When a message is read by a user I create an \n> entry i a table message_property holding the property (is_read) for \n> that user.\n> The schema is as follows:\n> drop table if exists message_property;\n> drop table if exists message;\n> drop table if exists person;\n> create table person(\n> id serial primary key,\n> username varchar not null unique\n> );\n> create table message(\n> id serial primary key,\n> subject varchar\n> );\n> create table message_property(\n> message_id integer not null references message(id),\n> person_id integer not null references person(id),\n> is_read boolean not null default false,\n> unique(message_id, person_id)\n> );\n[snip]\n> So, for person 1 there are 10 unread messages, out of a total 1mill. 
5 \n> of those unread does not have an entry in message_property and 5 have \n> an entry and is_read set to FALSE.\n> I have the following query to list all un-read messages for person \n> with id=1:\n> SELECT\n> m.id AS message_id,\n> prop.person_id,\n> coalesce(prop.is_read, FALSE) AS is_read,\n> m.subject\n> FROM message m\n> LEFT OUTER JOIN message_property prop ON prop.message_id = m.id \n> AND prop.person_id = 1\n> WHERE 1 = 1\n> AND NOT EXISTS(SELECT\n> *\n> FROM message_property pr\n> WHERE pr.message_id = m.id AND pr.person_id = \n> prop.person_id AND prop.is_read = TRUE)\n> ;\n>\n> The problem is that it's not quite efficient and performs badly, \n> explain analyze shows:\n[snip]\n\n> Does anyone have suggestions on how to optimize the query or schema?\n\nI'm getting better performance with:\n\nSELECT\nm.id AS message_id,\n1 AS person_id,\nFALSE AS is_read,\nm.subject\nFROM message m\nWHERE 1 = 1\nAND NOT EXISTS(SELECT\n *\n FROM message_property pr\n WHERE pr.message_id = m.id AND pr.person_id = 1 AND pr.is_read);\n\nYou then lose the distinction between message_property with is_read = \nFALSE, and nonexistent message_property for the message row.\n\nIf that is essential, I'm getting a roughly 2x speedup on my non-tuned \nPostgreSQL with:\n SELECT\n m.id AS message_id,\n prop.person_id,\n coalesce(prop.is_read, FALSE) AS is_read,\n m.subject\nFROM message m\n LEFT OUTER JOIN message_property prop ON prop.message_id = m.id AND \nprop.person_id = 1\nWHERE not coalesce(prop.is_read, false);\n\nHTH,\nJochem\n\n-- \nJochem Berndsen | [email protected]\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 May 2014 20:35:07 +0200",
"msg_from": "Jochem Berndsen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "On Thursday 1 May 2014 at 20:35:07, Jochem Berndsen <[email protected] \n<mailto:[email protected]>> wrote: \n Hi Andreas,\n\n [New to this list, forgive my ignorance.]\n [snip]\n I'm getting better performance with:\n\n SELECT\n m.id AS message_id,\n 1 AS person_id,\n FALSE AS is_read,\n m.subject\n FROM message m\n WHERE 1 = 1\n AND NOT EXISTS(SELECT\n *\n FROM message_property pr\n WHERE pr.message_id = m.id AND pr.person_id = 1 AND pr.is_read);\n\n You then lose the distinction between message_property with is_read =\n FALSE, and nonexistent message_property for the message row.\n\n If that is essential, I'm getting a roughly 2x speedup on my non-tuned\n PostgreSQL with:\n SELECT\n m.id AS message_id,\n prop.person_id,\n coalesce(prop.is_read, FALSE) AS is_read,\n m.subject\n FROM message m\n LEFT OUTER JOIN message_property prop ON prop.message_id = m.id AND\n prop.person_id = 1\n WHERE not coalesce(prop.is_read, false);\n\nHi Jochem,\n\nThanks for looking \nat it. I'm still seeing ~500ms being spent and I was hoping for a way to do this \nusing an index so one could achieve 1-10ms, but maybe that's impossible given the \nschema? Is there a way to design an equivalent schema to achieve <10ms \nexecution-time?\n\n-- Andreas Jospeh Krogh CTO / Partner - Visena AS Mobile: +47 909 \n56 963 [email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 21:17:36 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "Hello\n\n\n2014-05-01 21:17 GMT+02:00 Andreas Joseph Krogh <[email protected]>:\n\n> På torsdag 01. mai 2014 kl. 20:35:07, skrev Jochem Berndsen <\n> [email protected]>:\n>\n>\n> Hi Andreas,\n>\n> [New to this list, forgive my ignorance.]\n> [snip]\n> I'm getting better performance with:\n>\n> SELECT\n> m.id AS message_id,\n> 1 AS person_id,\n> FALSE AS is_read,\n> m.subject\n> FROM message m\n> WHERE 1 = 1\n> AND NOT EXISTS(SELECT\n> *\n> FROM message_property pr\n> WHERE pr.message_id = m.id AND pr.person_id = 1 AND pr.is_read);\n>\n> You then lose the distinction between message_property with is_read =\n> FALSE, and nonexistent message_property for the message row.\n>\n> If that is essential, I'm getting a roughly 2x speedup on my non-tuned\n> PostgreSQL with:\n> SELECT\n> m.id AS message_id,\n> prop.person_id,\n> coalesce(prop.is_read, FALSE) AS is_read,\n> m.subject\n> FROM message m\n> LEFT OUTER JOIN message_property prop ON prop.message_id = m.id AND\n> prop.person_id = 1\n> WHERE not coalesce(prop.is_read, false);\n>\n>\n>\n> Hi Jochem,\n>\n> Thansk for looking at it. I'm still seing ~500ms being spent and I was\n> hoping for a way to do this using index so one could achieve 1-10ms, but\n> maybe that's impossible given the schema?\n>\n> Is there a way to design an equivalent schema to achieve <10ms\n> execution-time?\n>\n\nI had a perfect success on similar use case with descent ordered partial\nindex\n\nhttp://www.postgresql.org/docs/9.3/interactive/sql-createindex.html\n\nRegards\n\nPavel\n\n\n>\n> --\n> *Andreas Jospeh Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>",
"msg_date": "Thu, 1 May 2014 21:30:39 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel Stehule <\[email protected] <mailto:[email protected]>>: Hello [snip] I had \na perfect success on similar use case with descent ordered partial index\n\nhttp://www.postgresql.org/docs/9.3/interactive/sql-createindex.html \n<http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html> I'm not \ngetting good performance. Are you able to craft an example using my schema and \npartial index? Thanks. -- Andreas Jospeh Krogh CTO / Partner - Visena AS \nMobile: +47 909 56 963 [email protected] <mailto:[email protected]> \nwww.visena.com <https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 21:39:36 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "2014-05-01 21:39 GMT+02:00 Andreas Joseph Krogh <[email protected]>:\n\n> På torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel Stehule <\n> [email protected]>:\n>\n> Hello\n> [snip]\n>\n> I had a perfect success on similar use case with descent ordered partial\n> index\n>\n> http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html\n>\n>\n> I'm not getting good performance. Are you able to craft an example using\n> my schema and partial index?\n>\n\nmaybe something like\n\nCREATE INDEX ON message_property (person_id, message_id) WHERE is_read\n\nWhen I am thinking about your schema, it is designed well, but it is not\nindex friendly, so for some fast access you should hold a cache (table)\nof unread messages.\n\nRegards\n\nPavel\n\n\n>\n> Thanks.\n>\n> --\n> *Andreas Jospeh Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>",
"msg_date": "Thu, 1 May 2014 21:53:32 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På torsdag 01. mai 2014 kl. 21:53:32, skrev Pavel Stehule <\[email protected] <mailto:[email protected]>>: 2014-05-01 21:39 \nGMT+02:00 Andreas Joseph Krogh<[email protected] <mailto:[email protected]>>: \nPå torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel Stehule <\[email protected] <mailto:[email protected]>>: Hello [snip] I had \na perfect success on similar use case with descent ordered partial index\n\nhttp://www.postgresql.org/docs/9.3/interactive/sql-createindex.html \n<http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html> I'm not \ngetting good performance. Are you able to craft an example using my schema and \npartial index? maybe some like\n CREATE INDEX ON message_property (person_id, message_id) WHERE pr.is_read\n When I am thinking about your schema, it is designed well, but it is not \nindex friendly, so for some fast access you should to hold a cache (table) of \nunread messages Ah, that's what I was hoping to not having to do. In my \nsystem, messages arrive all the time and having to update a cache for all new \nmessages for all users seems messy... Seems I could just as well create a \nmessage_property for all users when a new message arrives, so I can INNER JOIN \nit and get good performance. But that table will quickly grow *very* large... \n--Andreas Jospeh Krogh CTO / Partner - Visena AS Mobile: +47 909 56 963 \[email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 22:30:09 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "2014-05-01 22:30 GMT+02:00 Andreas Joseph Krogh <[email protected]>:\n\n> På torsdag 01. mai 2014 kl. 21:53:32, skrev Pavel Stehule <\n> [email protected]>:\n>\n>\n>\n> 2014-05-01 21:39 GMT+02:00 Andreas Joseph Krogh <[email protected]>:\n>>\n>> På torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel Stehule <\n>> [email protected]>:\n>>\n>> Hello\n>> [snip]\n>>\n>> I had a perfect success on similar use case with descent ordered partial\n>> index\n>>\n>> http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html\n>>\n>>\n>> I'm not getting good performance. Are you able to craft an example using\n>> my schema and partial index?\n>>\n>\n> maybe some like\n>\n> CREATE INDEX ON message_property (person_id, message_id) WHERE pr.is_read\n>\n> When I am thinking about your schema, it is designed well, but it is not\n> index friendly, so for some fast access you should to hold a cache (table)\n> of unread messages\n>\n>\n> Ah, that's what I was hoping to not having to do. In my system, messages\n> arrive all the time and having to update a cache for all new messages for\n> all users seems messy... Seems I could just as well create a\n> message_property for all users when a new message arrives, so I can INNER\n> JOIN it and get good performance. But that table will quickly grow *very*\n> large...\n>\n\nWhat you need is a JOIN index, which is not possible in Postgres.\n\nI'm afraid some \"ugly\" solution is necessary (when you require extra fast\naccess). You need a (small) index, and an index requires some existing set -\nyou cannot build an index on the difference between two sets.\n\nI expect some flag on the relation \"message\" - like \"not yet read by\neveryone\" - could help a little bit, and could be used as a partial-index\ncondition. Another possibility is some variant of partitioning - you can\ndivide the messages and users into distinct sets and thereby decrease the\nnumber of possible combinations.\n\nRegards\n\nPavel\n\n\n>\n> --\n> *Andreas Jospeh Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>",
"msg_date": "Thu, 1 May 2014 23:02:13 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På torsdag 01. mai 2014 kl. 23:02:13, skrev Pavel Stehule <\[email protected] <mailto:[email protected]>>: 2014-05-01 22:30 \nGMT+02:00 Andreas Joseph Krogh<[email protected] <mailto:[email protected]>>: \nPå torsdag 01. mai 2014 kl. 21:53:32, skrev Pavel Stehule <\[email protected] <mailto:[email protected]>>: 2014-05-01 21:39 \nGMT+02:00 Andreas Joseph Krogh<[email protected] <mailto:[email protected]>>: \nPå torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel Stehule <\[email protected] <mailto:[email protected]>>: Hello [snip] I had \na perfect success on similar use case with descent ordered partial index\n\nhttp://www.postgresql.org/docs/9.3/interactive/sql-createindex.html \n<http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html> I'm not \ngetting good performance. Are you able to craft an example using my schema and \npartial index? maybe some like\n CREATE INDEX ON message_property (person_id, message_id) WHERE pr.is_read\n When I am thinking about your schema, it is designed well, but it is not \nindex friendly, so for some fast access you should to hold a cache (table) of \nunread messages Ah, that's what I was hoping to not having to do. In my \nsystem, messages arrive all the time and having to update a cache for all new \nmessages for all users seems messy... Seems I could just as well create a \nmessage_property for all users when a new message arrives, so I can INNER JOIN \nit and get good performance. But that table will quickly grow *very* large... \nWhat you need is a JOIN index, that is not possible in Postgres. I afraid so \nsome \"ugly\" solutions is necessary (when you require extra fast access). You \nneed a index (small index) and it require some existing set - you cannot do \nindex on the difference two sets.\n I expect so some flag on the relation \"message\" - like \"it should not be \nnot read\" can helps little bit - and can be used in partial index as \nconditions. 
Other possibility is some variant of partitioning - you can divide \na messages and users to distinct sets and then you decrease a number of \npossible combinations. Just curious: Is such a JOIN index possible in other \nDBs, if so - which? Can other DBs do index on difference between two sets? Will \nPG ever have this, is it even possible? In my real system the \nmessage_property holds other properties for a message also, so I have to LEFT \nOUTER JOIN with it to get the properties where they exist when listing \nmessages. The system works by assuming that when an entry in message_property \ndoes not exist for a given message for a given user then the property is equal \nto \"false\". I don't quite see how maintaining a message_properrty entry for \nall users, for all messages, is a good idea. That's quite some work to be done \nwhen adding/removing users etc. Thanks for having this discussion. -- \nAndreas Jospeh Krogh CTO / Partner - Visena AS Mobile: +47 909 56 963 \[email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 23:19:33 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "How does something like:\n\nWITH unreads AS (\nSELECT messageid FROM message\nEXCEPT\nSELECT messageid FROM message_property WHERE personid=1 AND has_read\n)\nSELECT ...\nFROM unreads\nJOIN messages USING (messageid)\n;\n\nperform?\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Optimize-query-for-listing-un-read-messages-tp5802097p5802157.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 May 2014 14:19:55 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
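David's EXCEPT formulation can be checked for equivalence on a toy dataset before timing it. A sketch with sqlite3 as a stand-in engine; column names are adapted to Andreas's actual schema (message_id/person_id/is_read), since the sketch in the message used messageid/personid/has_read:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
create table message(id integer primary key, subject text);
create table message_property(
  message_id integer not null, person_id integer not null,
  is_read integer not null default 0, unique(message_id, person_id));
""")
cur.executemany("insert into message(id, subject) values (?, ?)",
                [(i, f"Subject {i}") for i in range(1, 21)])
# Messages 1..15 read by person 1; 16..18 explicitly unread; 19..20 no row.
cur.executemany("insert into message_property values (?, 1, ?)",
                [(i, 1) for i in range(1, 16)] + [(i, 0) for i in range(16, 19)])

# "All messages EXCEPT the ones marked read" -- both the is_read = false
# rows and the missing rows survive the EXCEPT.
unreads = cur.execute("""
    with unreads as (
        select id as message_id from message
        except
        select message_id from message_property
        where person_id = 1 and is_read
    )
    select m.id, m.subject
    from unreads join message m on m.id = unreads.message_id
    order by m.id
""").fetchall()
print([r[0] for r in unreads])
```

The result set matches the NOT EXISTS formulations; as Andreas reports next, matching results does not imply matching plans or timings.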
{
"msg_contents": "På torsdag 01. mai 2014 kl. 23:19:55, skrev David G Johnston <\[email protected] <mailto:[email protected]>>: How does \nsomething like:\n\n WITH unreads AS (\n SELECT messageid FROM message\n EXCEPT\n SELECT messageid FROM message_property WHERE personid=1 AND has_read\n )\n SELECT ...\n FROM unreads\n JOIN messages USING (messageid)\n ;\n\n perform? It actually performs worse. The best query so far is: SELECT\n m.id AS message_id,\n prop.person_id,\n coalesce(prop.is_read, FALSE) AS is_read,\n m.subject\n FROM message m\n LEFT OUTER JOIN message_property prop ON prop.message_id = m.id AND\n prop.person_id = 1\n WHERE coalesce(prop.is_read, false) = false; Giving the plan: \n \nQUERY PLAN\n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=4.20..90300.76 rows=500000 width=40) (actual \ntime=445.021..445.025 rows=10 loops=1)\n Merge Cond: (m.id = prop.message_id)\n Filter: (NOT COALESCE(prop.is_read, false))\n Rows Removed by Filter: 999990\n -> Index Scan using message_pkey on message m (cost=0.42..34317.43 \nrows=1000000 width=35) (actual time=0.014..113.314 rows=1000000 loops=1)\n -> Index Scan using message_property_message_id_person_id_key on \nmessage_property prop (cost=0.42..40983.40 rows=999995 width=9) (actual \ntime=0.018..115.019 rows=999995 loops=1)\n Index Cond: (person_id = 1)\n Total runtime: 445.076 ms\n (8 rows) -- Andreas Jospeh Krogh CTO / Partner - Visena AS Mobile: +47 909 \n56 963 [email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 23:31:33 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "On 1.5.2014 23:19, Andreas Joseph Krogh wrote:\n> På torsdag 01. mai 2014 kl. 23:02:13, skrev Pavel Stehule\n> <[email protected] <mailto:[email protected]>>:\n> \n> \n> \n> 2014-05-01 22:30 GMT+02:00 Andreas Joseph Krogh <[email protected]\n> <mailto:[email protected]>>:\n> \n> På torsdag 01. mai 2014 kl. 21:53:32, skrev Pavel Stehule\n> <[email protected] <mailto:[email protected]>>:\n> \n> \n> \n> 2014-05-01 21:39 GMT+02:00 Andreas Joseph Krogh\n> <[email protected] <mailto:[email protected]>>:\n> \n> På torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel\n> Stehule <[email protected]\n> <mailto:[email protected]>>:\n> \n> Hello\n> [snip]\n> \n> I had a perfect success on similar use case with\n> descent ordered partial index\n> \n> http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html\n> \n> \n> I'm not getting good performance. Are you able to craft\n> an example using my schema and partial index?\n> \n> \n> maybe some like\n> \n> CREATE INDEX ON message_property (person_id, message_id)\n> WHERE pr.is_read\n> \n> When I am thinking about your schema, it is designed well,\n> but it is not index friendly, so for some fast access you\n> should to hold a cache (table) of unread messages\n> \n> \n> Ah, that's what I was hoping to not having to do. In my system,\n> messages arrive all the time and having to update a cache for\n> all new messages for all users seems messy... Seems I could just\n> as well create a message_property for all users when a new\n> message arrives, so I can INNER JOIN it and get good\n> performance. But that table will quickly grow *very* large...\n> \n> \n> What you need is a JOIN index, that is not possible in Postgres.\n> \n> I afraid so some \"ugly\" solutions is necessary (when you require\n> extra fast access). 
You need a index (small index) and it require\n> some existing set - you cannot do index on the difference two sets.\n> \n> I expect so some flag on the relation \"message\" - like \"it should\n> not be not read\" can helps little bit - and can be used in partial\n> index as conditions. Other possibility is some variant of\n> partitioning - you can divide a messages and users to distinct sets\n> and then you decrease a number of possible combinations.\n> \n> \n> Just curious:\n> Is such a JOIN index possible in other DBs, if so - which?\n> Can other DBs do index on difference between two sets?\n> Will PG ever have this, is it even possible?\n\nI'm not aware of such a database, but maybe it's possible at least for\nsome cases. But I'd expect that to significantly depend on the schema.\nAnd I'm not aware of any such effort in the case of PostgreSQL, so don't\nhold your breath.\n\nIMHO the problem with your schema is that while each 'read' message has\na matching row in message_property, 'unread' messages may or may not\nhave a matching row. Is there a particular reason for that?\n\nIf you could get rid of this, i.e. require that each pair (message,\nrecipient) has a row in message_property (irrespective of whether the\nmessage was read or not), you can do this:\n\nCREATE INDEX message_unread_idx\n ON message_property(message_id, person_id) WHERE (NOT is_read)\n\nand then simply run the query like this:\n\nSELECT\n m.id,\n prop.person_id,\n prop.is_read,\n m.subject\nFROM message m JOIN message_property prop ON (m.id = prop.message_id)\nWHERE (NOT is_read) AND person_id = :pid\n\nand I'd expect this to use the partial index, and to be much faster.\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 May 2014 23:45:49 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På torsdag 01. mai 2014 kl. 23:45:49, skrev Tomas Vondra <[email protected] \n<mailto:[email protected]>>: On 1.5.2014 23:19, Andreas Joseph Krogh wrote:\n > På torsdag 01. mai 2014 kl. 23:02:13, skrev Pavel Stehule\n > <[email protected] <mailto:[email protected]>>:\n >\n > \n > \n > 2014-05-01 22:30 GMT+02:00 Andreas Joseph Krogh <[email protected]\n > <mailto:[email protected]>>:\n >\n > På torsdag 01. mai 2014 kl. 21:53:32, skrev Pavel Stehule\n > <[email protected] <mailto:[email protected]>>:\n >\n > \n > \n > 2014-05-01 21:39 GMT+02:00 Andreas Joseph Krogh\n > <[email protected] <mailto:[email protected]>>:\n >\n > På torsdag 01. mai 2014 kl. 21:30:39, skrev Pavel\n > Stehule <[email protected]\n > <mailto:[email protected]>>:\n >\n > Hello\n > [snip]\n > \n > I had a perfect success on similar use case with\n > descent ordered partial index\n >\n > \n http://www.postgresql.org/docs/9.3/interactive/sql-createindex.html\n >\n > \n > I'm not getting good performance. Are you able to craft\n > an example using my schema and partial index?\n >\n > \n > maybe some like\n > \n > CREATE INDEX ON message_property (person_id, message_id)\n > WHERE pr.is_read\n > \n > When I am thinking about your schema, it is designed well,\n > but it is not index friendly, so for some fast access you\n > should to hold a cache (table) of unread messages\n >\n > \n > Ah, that's what I was hoping to not having to do. In my system,\n > messages arrive all the time and having to update a cache for\n > all new messages for all users seems messy... Seems I could just\n > as well create a message_property for all users when a new\n > message arrives, so I can INNER JOIN it and get good\n > performance. But that table will quickly grow *very* large...\n >\n > \n > What you need is a JOIN index, that is not possible in Postgres.\n > \n > I afraid so some \"ugly\" solutions is necessary (when you require\n > extra fast access). 
You need a index (small index) and it require\n > some existing set - you cannot do index on the difference two sets.\n > \n > I expect so some flag on the relation \"message\" - like \"it should\n > not be not read\" can helps little bit - and can be used in partial\n > index as conditions. Other possibility is some variant of\n > partitioning - you can divide a messages and users to distinct sets\n > and then you decrease a number of possible combinations.\n >\n > \n > Just curious:\n > Is such a JOIN index possible in other DBs, if so - which?\n > Can other DBs do index on difference between two sets?\n > Will PG ever have this, is it even possible?\n\n I'm not aware of such database, but maybe it's possible at least for\n some cases. But I'd expect that to significantly depend on the schema.\n And I'm not aware of any such effort in case of PostgreSQL, do don't\n hold your breath.\n\n IMHO the problem with your schema is that while each 'read' message has\n a matching row in message_property, 'undread' messages may or may not\n have a matching row. Is there a particular reason for that? Yes. The \npoint is that maintaining a message_property pair for all messages for all \nusers in the system imposes quite a maintainance-headache. As the schema is now \nany new message is per definition un-read, and when a user reads it then it \ngets an entry with is_read=true in message_property. This table holds other \nproperties too. This way I'm avoiding having to book-keep so much when a new \nmessage arrives and when a new user is created. A message in my system does not \nnecessarily have only one recipient, it might have one, many or none, and might \nbe visible to many. If you could get rid of this, i.e. 
require that each pair \n(message,\n recipient) has a row in message_propery (irrespectedly whether the\n message was read or not), you can do this:\n\n CREATE INDEX message_unread_idx\n ON message_property(message_id, person_id) WHERE (NOT is_read)\n\n and then simply do the query like this:\n\n SELECT\n m.id,\n prop.person_id,\n prop.is_read,\n m.subject\n FROM messages m JOIN message_property p ON (m.id = p.message_id)\n WHERE (NOT is_read) AND person_id = :pid\n\n and I'd expect this to use the partial index, and being much faster. I'm \naware of the performance-gain using such a plain JOIN-query. Thanks for your \nfeedback. -- Andreas Jospeh Krogh CTO / Partner - Visena AS Mobile: +47 909 \n56 963 [email protected] <mailto:[email protected]> www.visena.com \n<https://www.visena.com> <https://www.visena.com>",
"msg_date": "Thu, 1 May 2014 23:58:40 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "On 1.5.2014 23:58, Andreas Joseph Krogh wrote:\n> På torsdag 01. mai 2014 kl. 23:45:49, skrev Tomas Vondra <[email protected]\n> <mailto:[email protected]>>:\n> \n> On 1.5.2014 23:19, Andreas Joseph Krogh wrote:\n> > Just curious:\n> > Is such a JOIN index possible in other DBs, if so - which?\n> > Can other DBs do index on difference between two sets?\n> > Will PG ever have this, is it even possible?\n> \n> I'm not aware of such database, but maybe it's possible at least for\n> some cases. But I'd expect that to significantly depend on the schema.\n> And I'm not aware of any such effort in case of PostgreSQL, do don't\n> hold your breath.\n> \n> IMHO the problem with your schema is that while each 'read' message has\n> a matching row in message_property, 'undread' messages may or may not\n> have a matching row. Is there a particular reason for that?\n> \n> \n> \n> Yes. The point is that maintaining a message_property pair for all\n> messages for all users in the system imposes quite a\n> maintainance-headache. As the schema is now any new message is per\n> definition un-read, and when a user reads it then it gets an entry with\n> is_read=true in message_property. This table holds other properties too.\n> This way I'm avoiding having to book-keep so much when a new message\n> arrives and when a new user is created. A message in my system does not\n> necessarily have only one recipient, it might have one, many or none,\n> and might be visible to many.\n\nSo how do you determine who's the recipient of a message? 
Or is it the\ncase that everyone can read everything (which is why you're showing\nthem the unread messages, right)?\n\nI understand you're trying to solve this without storing a row for each\npossible message-person combination, but won't this eventually happen\nanyway (with is_read=true for all rows)?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 02 May 2014 00:34:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På fredag 02. mai 2014 kl. 00:34:34, skrev Tomas Vondra <[email protected] \n<mailto:[email protected]>>: On 1.5.2014 23:58, Andreas Joseph Krogh wrote:\n > På torsdag 01. mai 2014 kl. 23:45:49, skrev Tomas Vondra <[email protected]\n > <mailto:[email protected]>>:\n >\n > On 1.5.2014 23:19, Andreas Joseph Krogh wrote:\n > > Just curious:\n > > Is such a JOIN index possible in other DBs, if so - which?\n > > Can other DBs do index on difference between two sets?\n > > Will PG ever have this, is it even possible?\n >\n > I'm not aware of such database, but maybe it's possible at least for\n > some cases. But I'd expect that to significantly depend on the schema.\n > And I'm not aware of any such effort in case of PostgreSQL, do don't\n > hold your breath.\n >\n > IMHO the problem with your schema is that while each 'read' message has\n > a matching row in message_property, 'undread' messages may or may not\n > have a matching row. Is there a particular reason for that?\n >\n > \n > \n > Yes. The point is that maintaining a message_property pair for all\n > messages for all users in the system imposes quite a\n > maintainance-headache. As the schema is now any new message is per\n > definition un-read, and when a user reads it then it gets an entry with\n > is_read=true in message_property. This table holds other properties too.\n > This way I'm avoiding having to book-keep so much when a new message\n > arrives and when a new user is created. A message in my system does not\n > necessarily have only one recipient, it might have one, many or none,\n > and might be visible to many.\n\n So how do you determine who's the recipient of a message? Or is that the\n case that everyone can read everything (which is why you're displaying\n them the unread messages, right)? A message might have a recipient and \nmight be read by others. 
I understand you're trying to solve this without \nstoring a row for each\n possible message-person combination, but won't this eventually happen\n anyway (with is_read=true for all rows)? I will end up with that only if \nall users read all messages, which is not nearly the case. -- Andreas Jospeh \nKrogh CTO / Partner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Fri, 2 May 2014 00:38:49 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "Andreas Joseph Krogh-2 wrote\n> I will end up with that only if \n> all users read all messages, which is not nearly the case.\n\nThese observations probably won't help but...\n\nYou have what amounts to a mathematical \"spare matrix\" problem on your\nhands...\n\nIs there any way to expire messages so that dimension does not grow\nunbounded?\n\nPer-User caching does seem to be something that is going to be needed...\n\nDepending on how many users are being tracked would storing the \"reader_id\"\nin an indexed array improve matters? \" SELECT ... FROM message WHERE NOT (1\n= ANY(reader_ids)) ; UPDATE message SET reader_ids = reader_ids || 1 WHERE\nmessageid = ...\" I'm not that familiar with how well indexes over arrays\nwork or which kind is needed (i.e. gin/gist).\n\nHTH\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Optimize-query-for-listing-un-read-messages-tp5802097p5802170.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 May 2014 15:55:25 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På fredag 02. mai 2014 kl. 00:55:25, skrev David G Johnston <\[email protected] <mailto:[email protected]>>: Andreas Joseph \nKrogh-2 wrote\n > I will end up with that only if\n > all users read all messages, which is not nearly the case.\n\n These observations probably won't help but...\n\n You have what amounts to a mathematical \"spare matrix\" problem on your\n hands...\n\n Is there any way to expire messages so that dimension does not grow\n unbounded? No, unfortunately... Per-User caching does seem to be \nsomething that is going to be needed...\n\n Depending on how many users are being tracked would storing the \"reader_id\"\n in an indexed array improve matters? \" SELECT ... FROM message WHERE NOT (1\n = ANY(reader_ids)) ; UPDATE message SET reader_ids = reader_ids || 1 WHERE\n messageid = ...\" I'm not that familiar with how well indexes over arrays\n work or which kind is needed (i.e. gin/gist). \"is_read\" is one of many \nproperties being tracked for a message... -- Andreas Jospeh Krogh CTO / \nPartner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Fri, 2 May 2014 01:51:14 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": ">\n>\n> Per-User caching does seem to be something that is going to be needed...\n>\n> Depending on how many users are being tracked would storing the \"reader_id\"\n> in an indexed array improve matters? \" SELECT ... FROM message WHERE NOT\n> (1\n> = ANY(reader_ids)) ; UPDATE message SET reader_ids = reader_ids || 1 WHERE\n> messageid = ...\" I'm not that familiar with how well indexes over arrays\n> work or which kind is needed (i.e. gin/gist).\n>\n>\n>\n> \"is_read\" is one of many properties being tracked for a message...\n>\n>\nBut you don't have to have all of them on the same table. Once you've\nidentified the messages in question performing a standard join onto a\nsupplemental detail table should be fairly straight-forward.\n\nDo these other properties have values when \"is_read\" is false or only when\n\"is_read\" is true? Since you already allow for the possibility of a\nmissing record (giving it the meaning of \"not read\") these other\nproperties cannot currently exist in that situation.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Optimize-query-for-listing-un-read-messages-tp5802097p5802174.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\nPer-User caching does seem to be something that is going to be needed...\n\nDepending on how many users are being tracked would storing the \"reader_id\"\nin an indexed array improve matters? \" SELECT ... FROM message WHERE NOT (1\n= ANY(reader_ids)) ; UPDATE message SET reader_ids = reader_ids || 1 WHERE\nmessageid = ...\" I'm not that familiar with how well indexes over arrays\nwork or which kind is needed (i.e. gin/gist).\n\n \n \n\"is_read\" is one of many properties being tracked for a message...\nBut you don't have to have all of them on the same table. 
Once you've identified the messages in question performing a standard join onto a supplemental detail table should be fairly straight-forward.\nDo these other properties have values when \"is_read\" is false or only when \"is_read\" is true? Since you already allow for the possibility of a missing record (giving it the meaning of \"not read\") these other properties cannot currently exist in that situation.\nDavid J.\n\nView this message in context: Re: Optimize query for listing un-read messages\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Thu, 1 May 2014 16:58:04 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På fredag 02. mai 2014 kl. 01:58:04, skrev David G Johnston <\[email protected] <mailto:[email protected]>>: \n Per-User caching does seem to be something that is going to be needed...\n\n Depending on how many users are being tracked would storing the \"reader_id\"\n in an indexed array improve matters? \" SELECT ... FROM message WHERE NOT (1\n = ANY(reader_ids)) ; UPDATE message SET reader_ids = reader_ids || 1 WHERE\n messageid = ...\" I'm not that familiar with how well indexes over arrays\n work or which kind is needed (i.e. gin/gist). \"is_read\" is one of many \nproperties being tracked for a message... But you don't have to have all \nof them on the same table. Once you've identified the messages in question \nperforming a standard join onto a supplemental detail table should be fairly \nstraight-forward. Do these other properties have values when \"is_read\" is \nfalse or only when \"is_read\" is true? Since you already allow for the \npossibility of a missing record (giving it the meaning of \"not read\") these \nother properties cannot currently exist in that situation. A message might \nhold a property (ie. is_important) when is_read is FALSE (it might be set back \nto is_read=FALSE after being read the first time). -- Andreas Jospeh Krogh \nCTO / Partner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Fri, 2 May 2014 02:10:05 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "On Thu, May 1, 2014 at 4:26 AM, Andreas Joseph Krogh <[email protected]>wrote:\n\n> I have a schema where I have lots of messages and some users who might\n> have read some of them. When a message is read by a user I create an entry\n> i a table message_property holding the property (is_read) for that user.\n>\n> The schema is as follows:\n>\n[...]\n\n>\n> create table person(\n> id serial primary key,\n> username varchar not null unique\n> );\n>\n> create table message(\n> id serial primary key,\n> subject varchar\n> );\n>\n> create table message_property(\n> message_id integer not null references message(id),\n> person_id integer not null references person(id),\n> is_read boolean not null default false,\n> unique(message_id, person_id)\n> );\n>\n[...]\n\n> So, for person 1 there are 10 unread messages, out of a total 1mill. 5 of\n> those unread does not have an entry in message_property and 5 have an entry\n> and is_read set to FALSE.\n>\n\nHere's a possible enhancement: add two columns, an indexed timestamp to the\nmessage table, and a \"timestamp of the oldest message this user has NOT\nread\" on the person table. If most users read messages in a timely fashion,\nthis would (in most cases) narrow down the portion of the messages table to\na tiny fraction of the total -- just those messages newer than the oldest\nmessage this user has not read.\n\nWhen you sign up a new user, you can set his timestamp to the time the\naccount was created, since presumably messages before that time don't apply.\n\nWhether this will help depends a lot on actual use patterns, i.e. do users\ntypically read all messages or do they leave a bunch of unread messages\nsitting around forever?\n\nCraig\n\nOn Thu, May 1, 2014 at 4:26 AM, Andreas Joseph Krogh <[email protected]> wrote:\nI have a schema where I have lots of messages and some users who might have read some of them. 
When a message is read by a user I create an entry i a table message_property holding the property (is_read) for that user.\n\n \nThe schema is as follows:[...] \n create table person(\n id serial primary key,\n username varchar not null unique\n);\n \ncreate table message(\n id serial primary key,\n subject varchar\n);\n \ncreate table message_property(\n message_id integer not null references message(id),\n person_id integer not null references person(id),\n is_read boolean not null default false,\n unique(message_id, person_id)\n);\n[...] So, for person 1 there are 10 unread messages, out of a total 1mill. 5 of those unread does not have an entry in message_property and 5 have an entry and is_read set to FALSE.\nHere's a possible enhancement: add two columns, an indexed timestamp to the message table, and a \"timestamp of the oldest message this user has NOT read\" on the person table. If most users read messages in a timely fashion, this would (in most cases) narrow down the portion of the messages table to a tiny fraction of the total -- just those messages newer than the oldest message this user has not read.\nWhen you sign up a new user, you can set his timestamp to the time the account was created, since presumably messages before that time don't apply.Whether this will help depends a lot on actual use patterns, i.e. do users typically read all messages or do they leave a bunch of unread messages sitting around forever?\nCraig",
"msg_date": "Thu, 1 May 2014 17:17:58 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "På fredag 02. mai 2014 kl. 02:17:58, skrev Craig James <[email protected] \n<mailto:[email protected]>>: On Thu, May 1, 2014 at 4:26 AM, Andreas Joseph \nKrogh<[email protected] <mailto:[email protected]>> wrote: I have a schema \nwhere I have lots of messages and some users who might have read some of them. \nWhen a message is read by a user I create an entry i a table message_property \nholding the property (is_read) for that user. The schema is as follows: [...] \n create table person( id serial primary key,\n username varchar not null unique\n ); create table message(\n id serial primary key,\n subject varchar\n ); create table message_property(\n message_id integer not null references message(id),\n person_id integer not null references person(id),\n is_read boolean not null default false,\n unique(message_id, person_id)\n ); [...] So, for person 1 there are 10 unread messages, out of a total \n1mill. 5 of those unread does not have an entry in message_property and 5 have \nan entry and is_read set to FALSE. Here's a possible enhancement: add two \ncolumns, an indexed timestamp to the message table, and a \"timestamp of the \noldest message this user has NOT read\" on the person table. If most users read \nmessages in a timely fashion, this would (in most cases) narrow down the \nportion of the messages table to a tiny fraction of the total -- just those \nmessages newer than the oldest message this user has not read.\n When you sign up a new user, you can set his timestamp to the time the \naccount was created, since presumably messages before that time don't apply.\n Whether this will help depends a lot on actual use patterns, i.e. do users \ntypically read all messages or do they leave a bunch of unread messages sitting \naround forever? Thanks fort the suggestion. A user must be able to read \narbitrary old messages, and messages don't expire. 
-- Andreas Jospeh Krogh \nCTO / Partner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Fri, 2 May 2014 09:20:22 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize query for listing un-read messages"
},
{
"msg_contents": "What statistics do you have on the data? I suppose most messages are read\nby low number of users, mostly 0 or one.\nI can see two options to consider:\n1) Use arrays to store information on which users have already read the\nmessage. You may need GIN/GIST index to search fast.\n2) Introduce some kind of special column(s) for the cases when the message\nis unread by everybody or was read by at most one user. E.g. read_by\ncolumns with null value for unread, special value for read by many and real\nuser if read by only one.\nin this case your condition would be (read_by is null or read_by not in\n(current_user or special_value) or (read_by = special_value and not\nexists()). Note that optimizer may have problems with such a complex\nexpression nd you may need to use \"union all\" instead on \"or\". Partial\nindex(es) for null/special value may help.\n\nBest regards, Vitalii Tymchyshyn\n\n\n2014-05-02 10:20 GMT+03:00 Andreas Joseph Krogh <[email protected]>:\n\n> På fredag 02. mai 2014 kl. 02:17:58, skrev Craig James <\n> [email protected]>:\n>\n> On Thu, May 1, 2014 at 4:26 AM, Andreas Joseph Krogh <[email protected]>wrote:\n>\n>> I have a schema where I have lots of messages and some users who might\n>> have read some of them. When a message is read by a user I create an entry\n>> i a table message_property holding the property (is_read) for that user.\n>>\n>> The schema is as follows:\n>>\n> [...]\n>\n>>\n>> create table person(\n>> id serial primary key,\n>> username varchar not null unique\n>> );\n>>\n>> create table message(\n>> id serial primary key,\n>> subject varchar\n>> );\n>>\n>> create table message_property(\n>> message_id integer not null references message(id),\n>> person_id integer not null references person(id),\n>> is_read boolean not null default false,\n>> unique(message_id, person_id)\n>> );\n>>\n>>\n> [...]\n>\n>> So, for person 1 there are 10 unread messages, out of a total 1mill. 
5\n>> of those unread does not have an entry in message_property and 5 have an\n>> entry and is_read set to FALSE.\n>>\n>\n> Here's a possible enhancement: add two columns, an indexed timestamp to\n> the message table, and a \"timestamp of the oldest message this user has NOT\n> read\" on the person table. If most users read messages in a timely fashion,\n> this would (in most cases) narrow down the portion of the messages table to\n> a tiny fraction of the total -- just those messages newer than the oldest\n> message this user has not read.\n>\n> When you sign up a new user, you can set his timestamp to the time the\n> account was created, since presumably messages before that time don't apply.\n>\n> Whether this will help depends a lot on actual use patterns, i.e. do users\n> typically read all messages or do they leave a bunch of unread messages\n> sitting around forever?\n>\n>\n> Thanks fort the suggestion. A user must be able to read arbitrary old\n> messages, and messages don't expire.\n>\n> --\n> *Andreas Jospeh Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> <https://www.visena.com>\n>\n>",
"msg_date": "Fri, 2 May 2014 20:28:44 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize query for listing un-read messages"
}
] |
[
{
"msg_contents": "olavgg wrote\n> I have a table with 4 indexes =>\n> \"stock_trade_pkey\" PRIMARY KEY, btree (id)\n> \"stock_trade_source_idx\" btree (source_id)\n> \"stock_trade_stock_id_time_idx\" btree (stock_id, \"time\")\n> \"stock_trade_time_idx\" btree (\"time\")\n> \n> This table store time series data, basically every trade happening on a\n> stock every day.\n> \n> However I have two similar queries that use completely different index,\n> which has HUGE impact on performance.\n> \n> *********** QUERY START ************\n> myfinance=> EXPLAIN (buffers,analyze) \n> SELECT COUNT(1) \n> FROM stock_trade st \n> WHERE st.stock_id = any(array(\n> SELECT s.id FROM stock s WHERE s.exchange_id IN(1,2,3))\n> ) \n> AND st.time BETWEEN '2014-04-22 00:00' AND '2014-04-22 23:59'; \n> \n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=19148.27..19148.37 rows=1 width=0) (actual\n> time=3644.474..3644.475 rows=1 loops=1)\n> Buffers: shared hit=5994 read=1524\n> InitPlan 1 (returns $0)\n> -> Index Scan using stock_exchange_idx on stock s \n> (cost=28.38..794.17 rows=1482 width=8) (actual time=0.066..4.412 rows=1486\n> loops=1)\n> Index Cond: (exchange_id = ANY ('{1,2,3}'::bigint[]))\n> Buffers: shared hit=34\n> -> Index Only Scan using stock_trade_stock_id_time_idx on stock_trade\n> st (cost=58.50..14380.10 rows=15896 width=0) (actual time=8.033..3071.828\n> rows=395019 loops=1)\n> Index Cond: ((stock_id = ANY ($0)) AND (\"time\" >= '2014-04-22\n> 00:00:00'::timestamp without time zone) AND (\"time\" <= '2014-04-22\n> 23:59:00'::timestamp without time zone))\n> Heap Fetches: 0\n> Buffers: shared hit=5994 read=1524\n> Total runtime: 3644.604 ms\n> *********** QUERY END ************\n> \n> This query is using the 'stock_trade_stock_id_time_idx' multi-column\n> index, with good performance.\n> However once I 
change the date to a more recent one, it is suddenly using\n> another and MUCH slower index...\n> \n> *********** QUERY START ************\n> myfinance=> EXPLAIN (buffers,analyze) \n> SELECT COUNT(1) \n> FROM stock_trade st \n> WHERE st.stock_id = any(array(\n> SELECT s.id FROM stock s WHERE s.exchange_id IN(1,2,3))\n> ) \n> AND st.time BETWEEN '2014-05-02 00:00' AND '2014-05-02 23:59'; \n> \n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=859.78..859.88 rows=1 width=0) (actual\n> time=115505.403..115505.405 rows=1 loops=1)\n> Buffers: shared hit=4433244\n> InitPlan 1 (returns $0)\n> -> Index Scan using stock_exchange_idx on stock s \n> (cost=28.38..794.17 rows=1482 width=8) (actual time=0.047..4.361 rows=1486\n> loops=1)\n> Index Cond: (exchange_id = ANY ('{1,2,3}'::bigint[]))\n> Buffers: shared hit=34\n> -> Index Scan using stock_trade_time_idx on stock_trade st \n> (cost=57.50..65.35 rows=1 width=0) (actual time=7.415..114921.242\n> rows=395834 loops=1)\n> Index Cond: ((\"time\" >= '2014-05-02 00:00:00'::timestamp without\n> time zone) AND (\"time\" <= '2014-05-02 23:59:00'::timestamp without time\n> zone))\n> Filter: (stock_id = ANY ($0))\n> Rows Removed by Filter: 6903136\n> Buffers: shared hit=4433244\n> Total runtime: 115505.545 ms\n> *********** QUERY END ************\n> \n> As you see, now it is using the 'stock_trade_time_idx' index.\n> I have a similar problem when using IN or EXISTS for stock_id's, it will\n> automatically chose the wrong index. But when I tried with\n> any(array($subquery)), the right index would be chosen for data that is a\n> few days old(Not sure why the query planner is behaving like this).\n> \n> I've tried running VACUUM and ANALYZE without any effect. 
Are there other\n> things I can do?\n\nI suspect the index-only aspect of the first plan is what is giving the\nlargest performance boost. As time passes the likelihood of having data be\nall-visible increases. I do not know how or if it is possible to force\nvisibility for this purpose.\n\nTime is likely more selective than stock_id so for the multiple-column index\ntime should probably be the first listed field. \n\nThe planner figures being more selective and filtering is going to be faster\nthan scanning the much larger section of index covered by the stock_id(s)\nand then going and fetching those pages and then checking them for\nvisibility. But if it can get most or all of the data directly from the\nindex then the savings are substantial enough to use the compound index. \nOthers will comment more broadly on the trade-off between the two\nindexes/plans but the answer in this case is that the best index would be\n(time, stock_id); I don't think you can improve the query over the more recent\ndata beyond that since index-only scans are not likely something you can\nforce in this situation.\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/PostgreSQL-s-query-planner-is-using-the-wrong-index-what-can-I-do-to-improve-this-situation-tp5802349p5802377.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 4 May 2014 13:51:24 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL's query planner is using the wrong index, what can I\n do to improve this situation?"
}
] |
[
{
"msg_contents": "hello list,\n\nsince i got no reply i am afraid i'll go the dump/restore cycle path\nhopping this will solve my problem.\n\nbest regards,\n\n/mstelios\n\n\nStelios Mavromichalis\nCytech Ltd. - http://www.cytech.gr/\nScience & Technology Park of Crete\nfax: +30 2810 31 1045\ntel.: +30 2810 31 4127\nmob.: +30 697 7078013\nskype: mstelios\n\n\nOn Mon, May 5, 2014 at 5:11 PM, Stelios Mavromichalis <[email protected]>wrote:\n\n> hello,\n>\n> after reading this guide:\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> i decided to seek for your help.\n>\n> my problem is that the same query/function some times run fast/normal (as\n> expected) and, recently like 5 days now, some/most of the times, it run\n> really slow to _very_ slow.\n>\n> the query is in essence a very simple update on a balance of _a certain\n> user_ (no other user has this issue). yes, this user has the most frequent\n> updates to his balance.\n>\n> i've tried restarting,manual vacuuming (with analyze full etc or not),\n> reindex the database with no improvement. 
also it's not a hardware problem.\n> diagnostics run fine and no kernel messages or anything weird/unexpected.\n> the load of the machine is also low (like 0.2).\n>\n> i would dump+restore cycle the database without bothering you, hoping\n> that that would solve my problem, but then i though i wouldn't learn\n> anything out of it, nor you would have the chance to potentially trace a\n> problem/bug thus help the community.\n>\n> so, without further due:\n>\n>\n> Full Table and Index Schema\n>\n> the function that has the problems(easysms_jb_pay(int,int) return int):\n> Source code\n> -----------------------------------------------------------\n>\n> DECLARE\n> user_id ALIAS FOR $1;\n> amount ALIAS FOR $2;\n> myuser record;\n> mg record;\n> newbalance float;\n> BEGIN\n> SELECT INTO myuser es.login, es.balance as esbalance\n> from\n> easysms_users es\n> where\n> es.usid = user_id;\n>\n> IF NOT FOUND THEN\n> RAISE EXCEPTION 'Cannot find user';\n> return -2;\n> END IF;\n>\n> IF myuser.login = 'jbuser' THEN\n> return -3;\n> END IF;\n>\n> IF myuser.esbalance < amount THEN\n> return -1;\n> END IF;\n>\n> UPDATE easysms_users SET balance = balance - amount\n> WHERE usid = user_id;\n>\n> return 1;\n> END;\n>\n>\n> the related table:\n> Table \"public.easysms_users\"\n> Column | Type\n> | Modifiers\n>\n> ------------------------+-----------------------------+-------------------------------------------------------------\n> login | character varying(20) |\n> passwd | character varying(32) | not null\n> mobile | character varying(16) | not null\n> name | character varying(20) |\n> lastname | character varying(20) |\n> balance | bigint | not null default 0\n> email | character varying(40) |\n> status | character varying(1) | default\n> 'p'::character varying\n> lang | character varying(2) |\n> trusted | boolean | default false\n> opt_originator | character varying(16) |\n> opt_fullname | character varying(50) |\n> opt_afm | character varying(30) |\n> opt_invoice_details | text |\n> 
opt_postal_address | text |\n> opt_want_invoice | smallint | default 0\n> bulklimit | integer | default 100\n> lastlogin | timestamp without time zone |\n> daily_report | boolean | default false\n> pro | boolean | default true\n> country_code | integer |\n> mobnumber | character varying(10) |\n> cctld | character varying(2) |\n> mpid | integer |\n> ifee | boolean |\n> gsm_code | character varying(8) |\n> account_reminder_email | boolean | default false\n> usid | integer | default (-2)\n> namedays | boolean | default true\n> opt_concat | boolean | default false\n> opt_smtype | character(1) | default 't'::bpchar\n> opt_url | text |\n> opt_permit_concat | boolean | default true\n> opt_email | boolean | default false\n> suser | boolean | default false\n> susid | integer |\n> perm | character varying(20) |\n> opt_statsperiod | character varying(3) |\n> opt_balance | boolean |\n> opt_lblimit | integer |\n> opt_override | boolean | default false\n> opt_letstamp | timestamp with time zone | default (now() -\n> '1 day'::interval)\n> opt_lbststamp | timestamp with time zone | default now()\n> opt_pushdlr_enabled | boolean | default false\n> opt_pushdlr_ltstamp | timestamp with time zone | default now()\n> opt_pushdlr_rperiod | integer | default 300\n> opt_pushdlr_dperiod | integer | default 2\n> opt_pushdlrs | boolean | default false\n> regdate | timestamp with time zone | not null default\n> ('now'::text)::timestamp(6) with time zone\n> opt_occupation | character varying(50) |\n> opt_invoice_address | text |\n> opt_city | character varying(50) |\n> opt_invoice_city | character varying(50) |\n> opt_pcode | character varying(30) |\n> opt_invoice_pcode | character varying(30) |\n> opt_doy | character varying(50) |\n> opt_phone | character varying(50) |\n> opt_invoice_country | character varying(50) |\n> opt_country | character varying(50) |\n> billid | integer |\n> opt_smpp_enabled | boolean | default false\n> Indexes:\n> \"idx_easysms_users_usid\" UNIQUE, btree (usid)\n> 
\"easysms_users_cctld_idx\" btree (cctld)\n> \"easysms_users_email_idx\" btree (email)\n> \"easysms_users_mobile_idx\" btree (mobile)\n> \"easysms_users_mpid_idx\" btree (mpid)\n> \"easysms_users_status_idx\" btree (status)\n>\n>\n> Table Metadata\n> done not contain large objects\n> has a fair amount of nulls\n> does receive a large number of updates, no deletes\n> is not growing rapidly, but very slow\n> indexes you can see the schema\n> does not use triggers\n>\n>\n> History\n> what i've mentioned at the start of this email. i can't think of any event\n> that could link to this behavior.\n>\n>\n> Hardware Components (Dedicated to dbs, also runs a low traffic mysql, runs\n> open suse 12.3 x86-64bit)\n> Harddisk 2x 2000 GB SATA 3,5\" 7.200 rpm (in raid 1)\n> RAM 32x Gigabyte RAM\n> RAID-Controller HP SmartArrayP410 (battery backed, write back is enabled)\n> Barebone Hewlett Packard DL320e G8\n> CPU Intel Xeon E3-1230v2\n>\n>\n> Maintenance Setup\n> autovacuuming on default settings. manual vacuum only on cases like this\n> and not regularly. see db config\n>\n>\n> WAL Configuration\n> nothing special here, all run on same disk/part. 
see db config\n>\n>\n> GUC Settings\n> name | current_setting | source\n> ------------------------------+-------------------+----------------------\n> application_name | psql | client\n> checkpoint_completion_target | 0.9 | configuration file\n> checkpoint_segments | 64 | configuration file\n> client_encoding | UTF8 | client\n> client_min_messages | log | configuration file\n> DateStyle | ISO, DMY | configuration file\n> deadlock_timeout | 10s | configuration file\n> debug_print_rewritten | off | configuration file\n> default_statistics_target | 100 | configuration file\n> default_text_search_config | pg_catalog.simple | configuration file\n> effective_cache_size | 8GB | configuration file\n> fsync | off | configuration file\n> lc_messages | el_GR.UTF-8 | configuration file\n> lc_monetary | el_GR.UTF-8 | configuration file\n> lc_numeric | el_GR.UTF-8 | configuration file\n> lc_time | el_GR.UTF-8 | configuration file\n> listen_addresses | * | configuration file\n> log_connections | off | configuration file\n> log_destination | syslog | configuration file\n> log_disconnections | off | configuration file\n> log_error_verbosity | verbose | configuration file\n> log_hostname | on | configuration file\n> log_line_prefix | %d %u | configuration file\n> log_lock_waits | on | configuration file\n> log_min_duration_statement | 1s | configuration file\n> log_min_error_statement | debug5 | configuration file\n> log_min_messages | info | configuration file\n> log_statement | none | configuration file\n> logging_collector | on | configuration file\n> maintenance_work_mem | 704MB | configuration file\n> max_connections | 400 | configuration file\n> max_prepared_transactions | 1000 | configuration file\n> max_stack_depth | 2MB | environment variable\n> random_page_cost | 1.5 | configuration file\n> shared_buffers | 2816MB | configuration file\n> TimeZone | Europe/Athens | configuration file\n> wal_buffers | 16MB | configuration file\n> work_mem | 28MB | configuration file\n> (38 
rows)\n>\n>\n> Postgres version\n> # select version();\n> version\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.2.7 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux)\n> 4.8.1 20130909 [gcc-4_8-branch revision 202388], 64-bit\n> (1 row)\n>\n> normal speed query that really stacks: <http://explain.depesz.com/s/XeQm>\n>\n> slow version of it: <http://explain.depesz.com/s/AjwK>\n>\n> thank you so very much in advance for your time and efforts to help.\n>\n> best regards,\n>\n> /mstelios\n>\n>\n> Stelios Mavromichalis\n> Cytech Ltd. - http://www.cytech.gr/\n> Science & Technology Park of Crete\n> fax: +30 2810 31 1045\n> tel.: +30 2810 31 4127\n> mob.: +30 697 7078013\n> skype: mstelios\n>\n",
"msg_date": "Tue, 6 May 2014 00:21:06 +0300",
"msg_from": "Stelios Mavromichalis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: recently and selectively slow, but very simple, update query...."
},
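The check-then-update logic inside `easysms_jb_pay` above can, in principle, be collapsed into a single statement. This is a sketch, not the original function; the literal values stand in for the function's `user_id` and `amount` parameters:

```sql
-- Single-statement variant: the balance check, the 'jbuser' guard, and the
-- debit happen atomically, so no concurrent process can change the balance
-- between the SELECT and the UPDATE.
UPDATE easysms_users
   SET balance = balance - 2        -- amount parameter
 WHERE usid = 10808                 -- user_id parameter
   AND login <> 'jbuser'
   AND balance >= 2
RETURNING balance;
-- Zero rows returned then means "missing user, blocked user, or
-- insufficient balance"; one cheap follow-up SELECT can distinguish the
-- three cases for the caller's return codes (-2, -3, -1).
```

Since `balance` is not indexed, this update remains HOT-eligible exactly like the original.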
{
"msg_contents": "Stelios Mavromichalis wrote\n>> the load of the machine is also low (like 0.2).\n\nWhich means little if the update is waiting for a lock to be released by one\nother process; which is more likely the situation (or some other concurrency\ncontention) especially as you said that this particular user generates\nsignificant transaction/query volume (implied by the fact the user has the\nmost balance updates).\n\nDuring slow-update executions you want to look at:\npg_stat_activity\npg_locks \n\nto see what other concurrent activity is taking place.\n\nIt is doubtful that dump/restore would have any effect given that the\nsymptoms are sporadic and we are only talking about a select statement that\nreturns a single row; and an update that does not hit any indexed column and\ntherefore benefits from \"HOT\" optimization.\n\nHTH\n\nDavid J.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Re-recently-and-selectively-slow-but-very-simple-update-query-tp5802553p5802555.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 5 May 2014 14:51:21 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recently and selectively slow, but very simple, update query...."
},
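The pg_stat_activity / pg_locks inspection suggested above can be done with one snapshot query. Column names below are as of PostgreSQL 9.2, where pg_stat_activity still exposes the boolean `waiting` column (replaced by `wait_event` in 9.6+):

```sql
-- Who is waiting, and on what, right now. Run this while a slow execution
-- of the update is in flight.
SELECT a.pid,
       a.state,
       a.waiting,        -- true when the backend is blocked on a lock
       l.locktype,
       l.mode,
       l.granted,
       a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE NOT l.granted
    OR a.waiting;
```

An empty result during a slow execution is itself informative: it pushes the diagnosis away from lock contention and toward I/O or scheduling.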
{
"msg_contents": "hello David,\n\nthank you for the reply!\n\nright, to this regard i did some research following the recommendation here\n<http://wiki.postgresql.org/wiki/Lock_Monitoring>\n\nhowever, it showed that nothing was wrong. no deadlocks, no pending locks,\nno nothing.\n\nhave to mention that this update is run in serial (no other thread/process\nis trying to update that balance -or any other-, only one. well, not 100%\ntrue, except the topup mechanism, which happens relatively rarely).\n\nalso have to mention that this exact same mechanism has been in place for\nsome time now, like a few years, and it never had a similar problem.\n\nalso, the fact that another usid with a great many updates rolls normally\nleads me to think that it might be corruption or something, thus the\ndump/restore hope :)\n\nas a prior step to dump/restore i am thinking of deleting and re-inserting\nthat particular row. do you think that might shed some light?\n\nbest regards,\n\n/mstelios\n\nStelios Mavromichalis\nCytech Ltd. 
- http://www.cytech.gr/\nScience & Technology Park of Crete\nfax: +30 2810 31 1045\ntel.: +30 2810 31 4127\nmob.: +30 697 7078013\nskype: mstelios\n\n\nOn Tue, May 6, 2014 at 12:51 AM, David G Johnston <\[email protected]> wrote:\n\n> Stelios Mavromichalis wrote\n> >> the load of the machine is also low (like 0.2).\n>\n> Which means little if the update is waiting for a lock to be released by\n> one\n> other process; which is more likely the situation (or some other\n> concurrency\n> contention) especially as you said that this particular user generates\n> significant transaction/query volume (implied by the fact the user has the\n> most balance updates).\n>\n> During slow-update executions you want to look at:\n> pg_stat_activity\n> pg_locks\n>\n> to see what other concurrent activity is taking place.\n>\n> It is doubtful that dump/restore would have any effect given that the\n> symptoms are sporadic and we are only talking about a select statement that\n> returns a single row; and an update that does not hit any indexed column\n> and\n> therefore benefits from \"HOT\" optimization.\n>\n> HTH\n>\n> David J.\n>\n>\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Re-recently-and-selectively-slow-but-very-simple-update-query-tp5802553p5802555.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Tue, 6 May 2014 01:06:33 +0300",
"msg_from": "Stelios Mavromichalis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: recently and selectively slow, but very simple,\n update query...."
},
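David's earlier remark that the update "does not hit any indexed column and therefore benefits from HOT" is directly checkable from the statistics collector; a small sketch:

```sql
-- If n_tup_hot_upd tracks n_tup_upd closely, the balance updates are HOT
-- and are not bloating the table's six indexes.
SELECT n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct,
       n_dead_tup,
       last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'easysms_users';
```

A low HOT percentage on a heavily updated single row would point at page-level churn (and would also make the delete/re-insert idea moot, since every MVCC update already re-inserts the row).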
{
"msg_contents": "Stelios Mavromichalis wrote\n> as a prior step to dump/restore i am thinking of deleting and re-inserting\n> that particular row. that might share some light you think?\n\nI still dislike the randomness of the unresponsiveness...\n\nEvery time you perform an update you \"delete and insert\" that row - that is\nhow an update works in MVCC - so doing so explicitly is unlikely to provide\nany benefit. Since that row is continually being inserted, and no other\nrows are having this issue, I'm seriously doubting that a dump/restore is\ngoing to have any effect either. Note that the index scan took twice as\nlong in the bad case - but still reasonable and you didn't notice any\nbenefit from a REINDEX. This is what I would expect.\n\nThe only other difference, if concurrency has been ruled out, is the 4 vs 18\nbuffers that had to be read. I cannot imagine that, since all 22 were in\ncache, that simply reading that much more data would account for the\ndifference (we're talking a 10,000-fold increase, not 2to4-fold). The\nreason for this particular difference, IIUC, is how may candidate tuples are\npresent whose visibility has to be accounted for (assuming 1 buffer per\ntuple, you needed to scan 4 vs 18 for visibility in the two queries).\n\nIs there any log file information you can share? Especially if you can set\nlog_min_statement_duration (or whatever that GUC is named) so that whenever\none of these gets adversely delayed it appears in the log along with\nwhatever other system messages are being sent. Checkpoints are a typical\nculprit though that should be affecting a great deal more than what you\nindicate you are seeing.\n\nI'm pretty certain you are seeing this here largely because of the frequency\nof activity on this particular user; not because the data itself is\ncorrupted. 
It could be some kind of symptom of internal concurrency that\nyou just haven't observed yet but it could also be I/O or other system\ncontention that you also haven't properly instrumented. Unfortunately that\nis beyond my current help-providing skill-set.\n\nA dump-restore likely would not make anything worse though I'd be surprised\nif it were to improve matters. It also doesn't seem like hardware - unless\nthe RAM is bad. Software bugs are unlikely if this had been working well\nbefore 5 days ago. So, you need to observe the system during both periods\n(good and bad) and observe something that is different - probably not within\nPostgreSQL if indeed you've minimized concurrency. And also see if you can\nsee if any other queries, executed during both these times, exhibit a\nperformance decrease. Logging all statements would help matters greatly if\nyou can afford it in your production environment - it would make looking for\ninternal concurrency much easier.\n\nDavid J.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Re-recently-and-selectively-slow-but-very-simple-update-query-tp5802553p5802579.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 5 May 2014 16:54:39 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: recently and selectively slow, but very simple, update query...."
},
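The instrumentation David asks for maps onto a handful of GUCs. An illustrative postgresql.conf fragment follows; the values are suggestions, not the poster's settings (his log_min_duration_statement is already 1s and his deadlock_timeout is 10s):

```ini
log_min_duration_statement = 250ms  # surface slow executions well before 1s
log_lock_waits = on                 # logs any wait longer than deadlock_timeout
deadlock_timeout = 1s               # lower from 10s so lock waits actually show up
log_statement = 'all'               # temporary, to expose concurrent activity
```

Since log_lock_waits only reports waits exceeding deadlock_timeout, the 10s setting on this server would have hidden every sub-10-second lock wait, which matters given the multi-second durations being reported.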
{
"msg_contents": "hello again,\n\nsome more data on the subject:\n\nyou are right, delete/re-insert didn't solve the problem (haven't yet tried\ndump/restore, i might tonight, when low traffic).\n\na short snip from logs were i log all queries that take longer then 1sec:\n\n2014-05-06T15:57:46.303880+03:00 s10 postgres[46220]: [1891-1] prosms\nprosms LOG: 00000: duration: 1947.172 ms execute <unnamed>: select\neasysms_jb_pay($1,\n$2)\n2014-05-06T15:57:46.304230+03:00 s10 postgres[46220]: [1891-2] prosms\nprosms DETAIL: parameters: $1 = '10808', $2 = '2'\n2014-05-06T15:57:46.304439+03:00 s10 postgres[46220]: [1891-3] prosms\nprosms LOCATION: exec_execute_message, postgres.c:1989\n2014-05-06T15:57:56.199005+03:00 s10 postgres[58002]: [2570-1] prosms\nprosms LOG: 00000: duration: 6869.886 ms execute <unnamed>: select\neasysms_jb_pay($1,$2)\n2014-05-06T15:57:56.199349+03:00 s10 postgres[58002]: [2570-2] prosms\nprosms DETAIL: parameters: $1 = '10808', $2 = '2'\n2014-05-06T15:57:56.199567+03:00 s10 postgres[58002]: [2570-3] prosms\nprosms LOCATION: exec_execute_message, postgres.c:1989\n2014-05-06T15:57:59.134982+03:00 s10 postgres[58002]: [2571-1] prosms\nprosms LOG: 00000: duration: 1797.747 ms execute <unnamed>: select\neasysms_jb_pay($1,$2)\n2014-05-06T15:57:59.135334+03:00 s10 postgres[58002]: [2571-2] prosms\nprosms DETAIL: parameters: $1 = '10808', $2 = '2'\n2014-05-06T15:57:59.135562+03:00 s10 postgres[58002]: [2571-3] prosms\nprosms LOCATION: exec_execute_message, postgres.c:1989\n2014-05-06T15:58:07.149477+03:00 s10 postgres[46220]: [1892-1] prosms\nprosms LOG: 00000: duration: 3938.979 ms execute <unnamed>: select\neasysms_jb_pay($1,$2)\n2014-05-06T15:58:07.149830+03:00 s10 postgres[46220]: [1892-2] prosms\nprosms DETAIL: parameters: $1 = '10808', $2 = '2'\n2014-05-06T15:58:07.150067+03:00 s10 postgres[46220]: [1892-3] prosms\nprosms LOCATION: exec_execute_message, postgres.c:1989\n2014-05-06T16:01:33.784422+03:00 s10 postgres[58002]: [2572-1] prosms\nprosms 
LOG: 00000: duration: 2921.212 ms execute <unnamed>: select\neasysms_jb_pay($1,$2)\n2014-05-06T16:01:33.784842+03:00 s10 postgres[58002]: [2572-2] prosms\nprosms DETAIL: parameters: $1 = '10808', $2 = '4'\n2014-05-06T16:01:33.785037+03:00 s10 postgres[58002]: [2572-3] prosms\nprosms LOCATION: exec_execute_message, postgres.c:1989\n\n\nshould you deem helpful yes i can enable full query logging and summon the\nlisting for you.\n\na typical vmstat:\n# vmstat 1\nprocs -----------memory---------- ---swap-- -----io---- -system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 1 0 363476 325852 294064 30520736 0 0 26 347 0 0 1 0\n99 0 0\n 1 0 363476 328400 294068 30520732 0 0 0 28 4930 8014 6 1\n94 0 0\n 2 0 363476 331756 294068 30520732 0 0 0 4384 4950 7980 6 1\n93 0 0\n 0 0 363476 334016 294068 30520756 0 0 0 4384 4961 7981 7 1\n92 0 0\n 0 0 363476 334700 294068 30520756 0 0 0 4424 4012 6467 4 1\n95 0 0\n 1 0 363476 330852 294068 30520788 0 0 0 0 2559 3861 5 1\n95 0 0\n 1 0 363476 331316 294072 30520788 0 0 0 4408 5013 8127 6 1\n94 0 0\n 1 0 363476 330788 294072 30520788 0 0 0 4384 5535 9055 6 1\n93 0 0\n 0 0 363476 331496 294072 30520804 0 0 0 4384 5031 8092 7 1\n92 0 0\n 2 0 363476 331268 294072 30520804 0 0 0 4428 5052 8246 6 1\n94 0 0\n 1 0 363476 330848 294080 30520812 0 0 0 32 4892 7950 5 1\n94 0 0\n 1 0 363476 330480 294080 30520812 0 0 0 4388 4935 8036 6 1\n94 0 0\n 2 0 363476 332616 294084 30521092 0 0 0 4408 5064 8194 6 1\n93 0 0\n 0 0 363476 333596 294084 30521008 0 0 0 4384 5205 8463 8 1\n91 0 0\n 1 0 363476 333324 294084 30521008 0 0 0 40 5014 8151 6 1\n94 0 0\n 0 0 363476 332740 294084 30521016 0 0 0 4384 5047 8163 6 1\n93 0 0\n 1 0 363476 336052 294084 30520888 0 0 0 4384 4849 7780 6 1\n94 0 0\n 1 0 363476 334732 294088 30520892 0 0 8 4400 5520 9012 6 1\n93 0 0\n 0 0 363476 334064 294096 30520884 0 0 0 220 3903 6193 6 1\n94 0 0\n 0 0 363476 333124 294096 30520916 0 0 0 2232 4088 6462 6 1\n93 0 0\n\nthe process that is 
constantly writing the majority of data is:\n\n\"postgres: stats collector process\"\n\nand varies from 2.5mb/sec up to 5.7mb/sec\n\nall the other postgres (and non-postgres) processes write very little data\nand rarely.\n\nthe checkpointer process is like 78.6kb/sec (a few seconds now as i write\nthis email but no other is having a constant rate or I/O)\n\nalso, _while having the problem_ the results of the following queries are\n(taken from http://wiki.postgresql.org/wiki/Lock_Monitoring):\n\n SELECT relation::regclass, * FROM pg_locks WHERE NOT granted;\n(0 rows)\n\n\n SELECT bl.pid AS blocked_pid,\n a.usename AS blocked_user,\n ka.query AS blocking_statement,\n now() - ka.query_start AS blocking_duration,\n kl.pid AS blocking_pid,\n ka.usename AS blocking_user,\n a.query AS blocked_statement,\n now() - a.query_start AS blocked_duration\n FROM pg_catalog.pg_locks bl\n JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid\n JOIN pg_catalog.pg_locks kl ON kl.transactionid =\nbl.transactionid AND kl.pid != bl.pid\n JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid\n WHERE NOT bl.granted;\n(0 rows)\n\nSELECT\n waiting.locktype AS waiting_locktype,\n waiting.relation::regclass AS waiting_table,\n waiting_stm.query AS waiting_query,\n waiting.mode AS waiting_mode,\n waiting.pid AS waiting_pid,\n other.locktype AS other_locktype,\n other.relation::regclass AS other_table,\n other_stm.query AS other_query,\n other.mode AS other_mode,\n other.pid AS other_pid,\n other.granted AS other_grantedFROM\n pg_catalog.pg_locks AS waitingJOIN\n pg_catalog.pg_stat_activity AS waiting_stm\n ON (\n waiting_stm.pid = waiting.pid\n )JOIN\n pg_catalog.pg_locks AS other\n ON (\n (\n waiting.\"database\" = other.\"database\"\n AND waiting.relation = other.relation\n )\n OR waiting.transactionid = other.transactionid\n )JOIN\n pg_catalog.pg_stat_activity AS other_stm\n ON (\n other_stm.pid = other.pid\n )WHERE\n NOT waiting.grantedAND\n waiting.pid <> other.pid;\n(0 rows)\n\n\n\nWITH 
RECURSIVE\n c(requested, current) AS\n ( VALUES\n ('AccessShareLock'::text, 'AccessExclusiveLock'::text),\n ('RowShareLock'::text, 'ExclusiveLock'::text),\n ('RowShareLock'::text, 'AccessExclusiveLock'::text),\n ('RowExclusiveLock'::text, 'ShareLock'::text),\n ('RowExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n ('RowExclusiveLock'::text, 'ExclusiveLock'::text),\n ('RowExclusiveLock'::text, 'AccessExclusiveLock'::text),\n ('ShareUpdateExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n ('ShareUpdateExclusiveLock'::text, 'ShareLock'::text),\n ('ShareUpdateExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n ('ShareUpdateExclusiveLock'::text, 'ExclusiveLock'::text),\n ('ShareUpdateExclusiveLock'::text, 'AccessExclusiveLock'::text),\n ('ShareLock'::text, 'RowExclusiveLock'::text),\n ('ShareLock'::text, 'ShareUpdateExclusiveLock'::text),\n ('ShareLock'::text, 'ShareRowExclusiveLock'::text),\n ('ShareLock'::text, 'ExclusiveLock'::text),\n ('ShareLock'::text, 'AccessExclusiveLock'::text),\n ('ShareRowExclusiveLock'::text, 'RowExclusiveLock'::text),\n ('ShareRowExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n ('ShareRowExclusiveLock'::text, 'ShareLock'::text),\n ('ShareRowExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n ('ShareRowExclusiveLock'::text, 'ExclusiveLock'::text),\n ('ShareRowExclusiveLock'::text, 'AccessExclusiveLock'::text),\n ('ExclusiveLock'::text, 'RowShareLock'::text),\n ('ExclusiveLock'::text, 'RowExclusiveLock'::text),\n ('ExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n ('ExclusiveLock'::text, 'ShareLock'::text),\n ('ExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n ('ExclusiveLock'::text, 'ExclusiveLock'::text),\n ('ExclusiveLock'::text, 'AccessExclusiveLock'::text),\n ('AccessExclusiveLock'::text, 'AccessShareLock'::text),\n ('AccessExclusiveLock'::text, 'RowShareLock'::text),\n ('AccessExclusiveLock'::text, 'RowExclusiveLock'::text),\n ('AccessExclusiveLock'::text, 
'ShareUpdateExclusiveLock'::text),\n ('AccessExclusiveLock'::text, 'ShareLock'::text),\n ('AccessExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n ('AccessExclusiveLock'::text, 'ExclusiveLock'::text),\n ('AccessExclusiveLock'::text, 'AccessExclusiveLock'::text)\n ),\n l AS\n (\n SELECT\n (locktype,DATABASE,relation::regclass::text,page,tuple,virtualxid,transactionid,classid,objid,objsubid)\nAS target,\n virtualtransaction,\n pid,\n mode,\n granted\n FROM pg_catalog.pg_locks\n ),\n t AS\n (\n SELECT\n blocker.target AS blocker_target,\n blocker.pid AS blocker_pid,\n blocker.mode AS blocker_mode,\n blocked.target AS target,\n blocked.pid AS pid,\n blocked.mode AS mode\n FROM l blocker\n JOIN l blocked\n ON ( NOT blocked.granted\n AND blocker.granted\n AND blocked.pid != blocker.pid\n AND blocked.target IS NOT DISTINCT FROM blocker.target)\n JOIN c ON (c.requested = blocked.mode AND c.current = blocker.mode)\n ),\n r AS\n (\n SELECT\n blocker_target,\n blocker_pid,\n blocker_mode,\n '1'::int AS depth,\n target,\n pid,\n mode,\n blocker_pid::text || ',' || pid::text AS seq\n FROM t\n UNION ALL\n SELECT\n blocker.blocker_target,\n blocker.blocker_pid,\n blocker.blocker_mode,\n blocker.depth + 1,\n blocked.target,\n blocked.pid,\n blocked.mode,\n blocker.seq || ',' || blocked.pid::text\n FROM r blocker\n JOIN t blocked\n ON (blocked.blocker_pid = blocker.pid)\n WHERE blocker.depth < 1000\n )SELECT * FROM r\n ORDER BY seq;\n(0 rows)\n\n\nno finding here either :(\n\nbest,\n\n/mstelios\n\n\nStelios Mavromichalis\nCytech Ltd. - http://www.cytech.gr/\nScience & Technology Park of Crete\nfax: +30 2810 31 1045\ntel.: +30 2810 31 4127\nmob.: +30 697 7078013\nskype: mstelios\n\n\nOn Tue, May 6, 2014 at 2:54 AM, David G Johnston <[email protected]\n> wrote:\n\n> Stelios Mavromichalis wrote\n> > as a prior step to dump/restore i am thinking of deleting and\n> re-inserting\n> > that particular row. 
that might shed some light, you think?\n>\n> I still dislike the randomness of the unresponsiveness...\n>\n> Every time you perform an update you \"delete and insert\" that row - that is\n> how an update works in MVCC - so doing so explicitly is unlikely to provide\n> any benefit. Since that row is continually being inserted, and no other\n> rows are having this issue, I'm seriously doubting that a dump/restore is\n> going to have any effect either. Note that the index scan took twice as\n> long in the bad case - but still reasonable and you didn't notice any\n> benefit from a REINDEX. This is what I would expect.\n>\n> The only other difference, if concurrency has been ruled out, is the 4 vs\n> 18\n> buffers that had to be read. I cannot imagine that, since all 22 were in\n> cache, that simply reading that much more data would account for the\n> difference (we're talking a 10,000-fold increase, not 2-to-4-fold). The\n> reason for this particular difference, IIUC, is how many candidate tuples\n> are\n> present whose visibility has to be accounted for (assuming 1 buffer per\n> tuple, you needed to scan 4 vs 18 for visibility in the two queries).\n>\n> Is there any log file information you can share? Especially if you can set\n> log_min_duration_statement so that whenever\n> one of these gets adversely delayed it appears in the log along with\n> whatever other system messages are being sent. Checkpoints are a typical\n> culprit though that should be affecting a great deal more than what you\n> indicate you are seeing.\n>\n> I'm pretty certain you are seeing this here largely because of the\n> frequency\n> of activity on this particular user; not because the data itself is\n> corrupted. It could be some kind of symptom of internal concurrency that\n> you just haven't observed yet but it could also be I/O or other system\n> contention that you also haven't properly instrumented. 
Unfortunately that\n> is beyond my current help-providing skill-set.\n>\n> A dump-restore likely would not make anything worse though I'd be surprised\n> if it were to improve matters. It also doesn't seem like hardware - unless\n> the RAM is bad. Software bugs are unlikely if this had been working well\n> before 5 days ago. So, you need to observe the system during both periods\n> (good and bad) and observe something that is different - probably not\n> within\n> PostgreSQL if indeed you've minimized concurrency. And also see if you can\n> see if any other queries, executed during both these times, exhibit a\n> performance decrease. Logging all statements would help matters greatly if\n> you can afford it in your production environment - it would make looking\n> for\n> internal concurrency much easier.\n>\n> David J.\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Re-recently-and-selectively-slow-but-very-simple-update-query-tp5802553p5802579.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 6 May 2014 16:31:25 +0300",
"msg_from": "Stelios Mavromichalis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: recently and selectively slow, but very simple,\n update query...."
},
{
"msg_contents": "hello everyone,\n\ni feel like i owe you all an apology.\n\nit _was_ a locking problem after all.\n\ni don't know why i was unlucky enough to never detect it for so long, no\nmatter how hard i tried and how many times i did.\n\neventually, after many hours on the problem and hitting a locks dependency\nquery (the last one from <http://wiki.postgresql.org/wiki/Lock_Monitoring>)\nmany times without any apparent (or so it seemed at the time) reason, it revealed the\nheart of my problem.\n\nthere was another query updating another column (opt_pushdlr_ltstamp) on\nthat particular row (usid = 10808). however, due to badly written code\n(it didn't addBatch and execute all at once; instead it updated instantly and\ndelayed the commit while waiting for external _slow_ servers to reply) and\nexternal events, that update didn't commit for long periods of time, forcing\nthe original balance update to wait for it.\n\nso, thanks to your advice (that a dump/restore wouldn't help me) i didn't\nbring the service down for a dump/restore cycle that wasn't required after\nall, saving me service downtime.\n\ni can't thank you enough for leading me to solve this problem and helping\nme obtain a deeper understanding of postgres.\n\ni wish you all a great day.\n\nmy apologies and best regards,\n\n/mstelios\n\n\nStelios Mavromichalis\nCytech Ltd. 
- http://www.cytech.gr/\nScience & Technology Park of Crete\nfax: +30 2810 31 1045\ntel.: +30 2810 31 4127\nmob.: +30 697 7078013\nskype: mstelios\n\n\nOn Tue, May 6, 2014 at 4:31 PM, Stelios Mavromichalis <[email protected]>wrote:\n\n> hello again,\n>\n> some more data on the subject:\n>\n> you are right, delete/re-insert didn't solve the problem (haven't yet\n> tried dump/restore, i might tonight, when low traffic).\n>\n> a short snip from logs were i log all queries that take longer then 1sec:\n>\n> 2014-05-06T15:57:46.303880+03:00 s10 postgres[46220]: [1891-1] prosms\n> prosms LOG: 00000: duration: 1947.172 ms execute <unnamed>: select\n> easysms_jb_pay($1,\n> $2)\n> 2014-05-06T15:57:46.304230+03:00 s10 postgres[46220]: [1891-2] prosms\n> prosms DETAIL: parameters: $1 = '10808', $2 = '2'\n> 2014-05-06T15:57:46.304439+03:00 s10 postgres[46220]: [1891-3] prosms\n> prosms LOCATION: exec_execute_message, postgres.c:1989\n> 2014-05-06T15:57:56.199005+03:00 s10 postgres[58002]: [2570-1] prosms\n> prosms LOG: 00000: duration: 6869.886 ms execute <unnamed>: select\n> easysms_jb_pay($1,$2)\n> 2014-05-06T15:57:56.199349+03:00 s10 postgres[58002]: [2570-2] prosms\n> prosms DETAIL: parameters: $1 = '10808', $2 = '2'\n> 2014-05-06T15:57:56.199567+03:00 s10 postgres[58002]: [2570-3] prosms\n> prosms LOCATION: exec_execute_message, postgres.c:1989\n> 2014-05-06T15:57:59.134982+03:00 s10 postgres[58002]: [2571-1] prosms\n> prosms LOG: 00000: duration: 1797.747 ms execute <unnamed>: select\n> easysms_jb_pay($1,$2)\n> 2014-05-06T15:57:59.135334+03:00 s10 postgres[58002]: [2571-2] prosms\n> prosms DETAIL: parameters: $1 = '10808', $2 = '2'\n> 2014-05-06T15:57:59.135562+03:00 s10 postgres[58002]: [2571-3] prosms\n> prosms LOCATION: exec_execute_message, postgres.c:1989\n> 2014-05-06T15:58:07.149477+03:00 s10 postgres[46220]: [1892-1] prosms\n> prosms LOG: 00000: duration: 3938.979 ms execute <unnamed>: select\n> easysms_jb_pay($1,$2)\n> 2014-05-06T15:58:07.149830+03:00 s10 
postgres[46220]: [1892-2] prosms\n> prosms DETAIL: parameters: $1 = '10808', $2 = '2'\n> 2014-05-06T15:58:07.150067+03:00 s10 postgres[46220]: [1892-3] prosms\n> prosms LOCATION: exec_execute_message, postgres.c:1989\n> 2014-05-06T16:01:33.784422+03:00 s10 postgres[58002]: [2572-1] prosms\n> prosms LOG: 00000: duration: 2921.212 ms execute <unnamed>: select\n> easysms_jb_pay($1,$2)\n> 2014-05-06T16:01:33.784842+03:00 s10 postgres[58002]: [2572-2] prosms\n> prosms DETAIL: parameters: $1 = '10808', $2 = '4'\n> 2014-05-06T16:01:33.785037+03:00 s10 postgres[58002]: [2572-3] prosms\n> prosms LOCATION: exec_execute_message, postgres.c:1989\n>\n>\n> should you deem helpful yes i can enable full query logging and summon the\n> listing for you.\n>\n> a typical vmstat:\n> # vmstat 1\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 1 0 363476 325852 294064 30520736 0 0 26 347 0 0 1 0\n> 99 0 0\n> 1 0 363476 328400 294068 30520732 0 0 0 28 4930 8014 6 1\n> 94 0 0\n> 2 0 363476 331756 294068 30520732 0 0 0 4384 4950 7980 6 1\n> 93 0 0\n> 0 0 363476 334016 294068 30520756 0 0 0 4384 4961 7981 7 1\n> 92 0 0\n> 0 0 363476 334700 294068 30520756 0 0 0 4424 4012 6467 4 1\n> 95 0 0\n> 1 0 363476 330852 294068 30520788 0 0 0 0 2559 3861 5 1\n> 95 0 0\n> 1 0 363476 331316 294072 30520788 0 0 0 4408 5013 8127 6 1\n> 94 0 0\n> 1 0 363476 330788 294072 30520788 0 0 0 4384 5535 9055 6 1\n> 93 0 0\n> 0 0 363476 331496 294072 30520804 0 0 0 4384 5031 8092 7 1\n> 92 0 0\n> 2 0 363476 331268 294072 30520804 0 0 0 4428 5052 8246 6 1\n> 94 0 0\n> 1 0 363476 330848 294080 30520812 0 0 0 32 4892 7950 5 1\n> 94 0 0\n> 1 0 363476 330480 294080 30520812 0 0 0 4388 4935 8036 6 1\n> 94 0 0\n> 2 0 363476 332616 294084 30521092 0 0 0 4408 5064 8194 6 1\n> 93 0 0\n> 0 0 363476 333596 294084 30521008 0 0 0 4384 5205 8463 8 1\n> 91 0 0\n> 1 0 363476 333324 294084 30521008 0 0 0 40 5014 8151 6 1\n> 94 0 
0\n> 0 0 363476 332740 294084 30521016 0 0 0 4384 5047 8163 6 1\n> 93 0 0\n> 1 0 363476 336052 294084 30520888 0 0 0 4384 4849 7780 6 1\n> 94 0 0\n> 1 0 363476 334732 294088 30520892 0 0 8 4400 5520 9012 6 1\n> 93 0 0\n> 0 0 363476 334064 294096 30520884 0 0 0 220 3903 6193 6 1\n> 94 0 0\n> 0 0 363476 333124 294096 30520916 0 0 0 2232 4088 6462 6 1\n> 93 0 0\n>\n> the process that is constantly writing the majority of data is:\n>\n> \"postgres: stats collector process\"\n>\n> and varies from 2.5mb/sec up to 5.7mb/sec\n>\n> all the other postgres (and non-postgres) processes write very little data\n> and rarely.\n>\n> the checkpointer process is like 78.6kb/sec (a few seconds now as i write\n> this email but no other is having a constant rate or I/O)\n>\n> also, _while having the problem_ the results of the following queries are\n> (taken from http://wiki.postgresql.org/wiki/Lock_Monitoring):\n>\n> SELECT relation::regclass, * FROM pg_locks WHERE NOT granted;\n>\n> (0 rows)\n>\n>\n> SELECT bl.pid AS blocked_pid,\n> a.usename AS blocked_user,\n> ka.query AS blocking_statement,\n> now() - ka.query_start AS blocking_duration,\n> kl.pid AS blocking_pid,\n> ka.usename AS blocking_user,\n> a.query AS blocked_statement,\n> now() - a.query_start AS blocked_duration\n> FROM pg_catalog.pg_locks bl\n> JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid\n> JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid\n> JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid\n> WHERE NOT bl.granted;\n> (0 rows)\n>\n> SELECT\n> waiting.locktype AS waiting_locktype,\n> waiting.relation::regclass AS waiting_table,\n> waiting_stm.query AS waiting_query,\n> waiting.mode AS waiting_mode,\n> waiting.pid AS waiting_pid,\n> other.locktype AS other_locktype,\n> other.relation::regclass AS other_table,\n> other_stm.query AS other_query,\n> other.mode AS other_mode,\n> other.pid AS other_pid,\n> other.granted AS other_grantedFROM\n> pg_catalog.pg_locks AS 
waitingJOIN\n> pg_catalog.pg_stat_activity AS waiting_stm\n> ON (\n> waiting_stm.pid = waiting.pid\n> )JOIN\n> pg_catalog.pg_locks AS other\n> ON (\n> (\n> waiting.\"database\" = other.\"database\"\n> AND waiting.relation = other.relation\n> )\n> OR waiting.transactionid = other.transactionid\n> )JOIN\n> pg_catalog.pg_stat_activity AS other_stm\n> ON (\n> other_stm.pid = other.pid\n> )WHERE\n> NOT waiting.grantedAND\n> waiting.pid <> other.pid;\n> (0 rows)\n>\n>\n>\n> WITH RECURSIVE\n> c(requested, current) AS\n> ( VALUES\n> ('AccessShareLock'::text, 'AccessExclusiveLock'::text),\n> ('RowShareLock'::text, 'ExclusiveLock'::text),\n> ('RowShareLock'::text, 'AccessExclusiveLock'::text),\n> ('RowExclusiveLock'::text, 'ShareLock'::text),\n> ('RowExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n> ('RowExclusiveLock'::text, 'ExclusiveLock'::text),\n> ('RowExclusiveLock'::text, 'AccessExclusiveLock'::text),\n> ('ShareUpdateExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n> ('ShareUpdateExclusiveLock'::text, 'ShareLock'::text),\n> ('ShareUpdateExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n> ('ShareUpdateExclusiveLock'::text, 'ExclusiveLock'::text),\n> ('ShareUpdateExclusiveLock'::text, 'AccessExclusiveLock'::text),\n> ('ShareLock'::text, 'RowExclusiveLock'::text),\n> ('ShareLock'::text, 'ShareUpdateExclusiveLock'::text),\n> ('ShareLock'::text, 'ShareRowExclusiveLock'::text),\n> ('ShareLock'::text, 'ExclusiveLock'::text),\n> ('ShareLock'::text, 'AccessExclusiveLock'::text),\n> ('ShareRowExclusiveLock'::text, 'RowExclusiveLock'::text),\n> ('ShareRowExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n> ('ShareRowExclusiveLock'::text, 'ShareLock'::text),\n> ('ShareRowExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n> ('ShareRowExclusiveLock'::text, 'ExclusiveLock'::text),\n> ('ShareRowExclusiveLock'::text, 'AccessExclusiveLock'::text),\n> ('ExclusiveLock'::text, 'RowShareLock'::text),\n> ('ExclusiveLock'::text, 'RowExclusiveLock'::text),\n> 
('ExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n> ('ExclusiveLock'::text, 'ShareLock'::text),\n> ('ExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n> ('ExclusiveLock'::text, 'ExclusiveLock'::text),\n> ('ExclusiveLock'::text, 'AccessExclusiveLock'::text),\n> ('AccessExclusiveLock'::text, 'AccessShareLock'::text),\n> ('AccessExclusiveLock'::text, 'RowShareLock'::text),\n> ('AccessExclusiveLock'::text, 'RowExclusiveLock'::text),\n> ('AccessExclusiveLock'::text, 'ShareUpdateExclusiveLock'::text),\n> ('AccessExclusiveLock'::text, 'ShareLock'::text),\n> ('AccessExclusiveLock'::text, 'ShareRowExclusiveLock'::text),\n> ('AccessExclusiveLock'::text, 'ExclusiveLock'::text),\n> ('AccessExclusiveLock'::text, 'AccessExclusiveLock'::text)\n> ),\n> l AS\n> (\n> SELECT\n> (locktype,DATABASE,relation::regclass::text,page,tuple,virtualxid,transactionid,classid,objid,objsubid) AS target,\n> virtualtransaction,\n> pid,\n> mode,\n> granted\n> FROM pg_catalog.pg_locks\n> ),\n> t AS\n> (\n> SELECT\n> blocker.target AS blocker_target,\n> blocker.pid AS blocker_pid,\n> blocker.mode AS blocker_mode,\n> blocked.target AS target,\n> blocked.pid AS pid,\n> blocked.mode AS mode\n> FROM l blocker\n> JOIN l blocked\n> ON ( NOT blocked.granted\n> AND blocker.granted\n> AND blocked.pid != blocker.pid\n> AND blocked.target IS NOT DISTINCT FROM blocker.target)\n> JOIN c ON (c.requested = blocked.mode AND c.current = blocker.mode)\n> ),\n> r AS\n> (\n> SELECT\n> blocker_target,\n> blocker_pid,\n> blocker_mode,\n> '1'::int AS depth,\n> target,\n> pid,\n> mode,\n> blocker_pid::text || ',' || pid::text AS seq\n> FROM t\n> UNION ALL\n> SELECT\n> blocker.blocker_target,\n> blocker.blocker_pid,\n> blocker.blocker_mode,\n> blocker.depth + 1,\n> blocked.target,\n> blocked.pid,\n> blocked.mode,\n> blocker.seq || ',' || blocked.pid::text\n> FROM r blocker\n> JOIN t blocked\n> ON (blocked.blocker_pid = blocker.pid)\n> WHERE blocker.depth < 1000\n> )SELECT * FROM r\n> ORDER BY seq;\n> (0 
rows)\n>\n>\n> no finding here either :(\n>\n> best,\n>\n> /mstelios\n>\n>\n> Stelios Mavromichalis\n> Cytech Ltd. - http://www.cytech.gr/\n> Science & Technology Park of Crete\n> fax: +30 2810 31 1045\n> tel.: +30 2810 31 4127\n> mob.: +30 697 7078013\n> skype: mstelios\n>\n>\n> On Tue, May 6, 2014 at 2:54 AM, David G Johnston <\n> [email protected]> wrote:\n>\n>> Stelios Mavromichalis wrote\n>> > as a prior step to dump/restore i am thinking of deleting and\n>> re-inserting\n>> > that particular row. that might shed some light, you think?\n>>\n>> I still dislike the randomness of the unresponsiveness...\n>>\n>> Every time you perform an update you \"delete and insert\" that row - that\n>> is\n>> how an update works in MVCC - so doing so explicitly is unlikely to\n>> provide\n>> any benefit. Since that row is continually being inserted, and no other\n>> rows are having this issue, I'm seriously doubting that a dump/restore is\n>> going to have any effect either. Note that the index scan took twice as\n>> long in the bad case - but still reasonable and you didn't notice any\n>> benefit from a REINDEX. This is what I would expect.\n>>\n>> The only other difference, if concurrency has been ruled out, is the 4 vs\n>> 18\n>> buffers that had to be read. I cannot imagine that, since all 22 were in\n>> cache, that simply reading that much more data would account for the\n>> difference (we're talking a 10,000-fold increase, not 2-to-4-fold). The\n>> reason for this particular difference, IIUC, is how many candidate tuples\n>> are\n>> present whose visibility has to be accounted for (assuming 1 buffer per\n>> tuple, you needed to scan 4 vs 18 for visibility in the two queries).\n>>\n>> Is there any log file information you can share? Especially if you can\n>> set\n>> log_min_duration_statement so that\n>> whenever\n>> one of these gets adversely delayed it appears in the log along with\n>> whatever other system messages are being sent. 
Checkpoints are a typical\n>> culprit though that should be affecting a great deal more than what you\n>> indicate you are seeing.\n>>\n>> I'm pretty certain you are seeing this here largely because of the\n>> frequency\n>> of activity on this particular user; not because the data itself is\n>> corrupted. It could be some kind of symptom of internal concurrency that\n>> you just haven't observed yet but it could also be I/O or other system\n>> contention that you also haven't properly instrumented. Unfortunately\n>> that\n>> is beyond my current help-providing skill-set.\n>>\n>> A dump-restore likely would not make anything worse though I'd be\n>> surprised\n>> if it were to improve matters. It also doesn't seem like hardware -\n>> unless\n>> the RAM is bad. Software bugs are unlikely if this had been working well\n>> before 5 days ago. So, you need to observe the system during both\n>> periods\n>> (good and bad) and observe something that is different - probably not\n>> within\n>> PostgreSQL if indeed you've minimized concurrency. And also see if you\n>> can\n>> see if any other queries, executed during both these times, exhibit a\n>> performance decrease. 
Logging all statements would help matters greatly\n>> if\n>> you can afford it in your production environment - it would make looking\n>> for\n>> internal concurrency much easier.\n>>\n>> David J.\n>>\n>>\n>>\n>> --\n>> View this message in context:\n>> http://postgresql.1045698.n5.nabble.com/Re-recently-and-selectively-slow-but-very-simple-update-query-tp5802553p5802579.html\n>> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nhello everyone,\n\ni feel like i owe you all an apology. it _was_ a locking problem after all. i don't know why i was unlucky enough to never detect it for so long, no matter how hard i tried and how many times i did.\n\neventually, after many hours on the problem and hitting a lock dependency query (the last one from <http://wiki.postgresql.org/wiki/Lock_Monitoring>) many times without apparent (or so it seemed at the time) reason, it revealed the heart of my problem.\n\nthere was another query updating another column (opt_pushdlr_ltstamp) on that particular row (usid = 10808). however, due to badly written code (it didn't addBatch and execute all at once; instead it updated instantly and delayed the commit while waiting for external _slow_ servers to reply) and external events, that update didn't commit for long periods of time, forcing the original balance update to wait for it.\n\nso, thanks to your advice (that a dump/restore wouldn't help me) i didn't bring the service down for a dump/restore cycle that wasn't required after all, saving me service downtime. i can't thank you enough for leading me to solve this problem and helping me obtain a deeper understanding of postgres.\n\nwish you all a great day.\n\nmy apology and best regards,\n\n/mstelios\n\nStelios Mavromichalis\nCytech Ltd. 
- http://www.cytech.gr/\nScience & Technology Park of Crete\nfax: +30 2810 31 1045\ntel.: +30 2810 31 4127\nmob.: +30 697 7078013\nskype: mstelios",
"msg_date": "Wed, 7 May 2014 18:57:50 +0300",
"msg_from": "Stelios Mavromichalis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: recently and selectively slow, but very simple,\n update query...."
}
] |
[
{
"msg_contents": "I am busy reading Gregory Smith's PostgreSQL 9.0 High Performance, and\nwhen the book was written he seemed to me a bit sceptical about SSDs. I\nsuspect the reliability of SSDs has improved significantly since then.\n\nOur present server (128GB RAM and 2.5TB disk space and 12 CPU cores -\nRAID 10) will become a development server and we are going to buy a new\nserver.\n\nAt the moment the 'base' directory uses 1.5TB of disk space and there is\nstill more data to come.\n\nThe database contains bibliometric data that receives updates on a weekly\nbasis, but not much changes other than that, except for cleaning of data by a\nfew persons.\n\nSome of the queries can take many hours to finish.\n\nOn our present system there are sometimes more than 300GB in temporary\nfiles, which I suspect will not be the case on the new system with its much\nlarger RAM.\n\nAnalysis of the SAR logs showed that there was too much iowait on the\nCPUs of the old system, which has lower-spec CPUs than the ones considered\nfor the new system.\n\nWe are looking at possibly the following hardware:\n\nCPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores\nRAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768GB\n\nwith enough disk space - about 4.8TB on RAID 10.\nMy question is about the possible advantage and usage of SSD disks in the\nnew server. At the moment I am considering using 2 x 200GB SSDs for a\nseparate partition for temporary files and 2 x 100GB for the operating system.\n\nSo my questions:\n\n1. Will the SSDs in this case be worth the cost?\n2. What will be the best way to utilize them in the system?\n\nRegards\nJohann\n--\nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)",
"msg_date": "Tue, 6 May 2014 11:13:42 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Specifications for a new server"
},
{
"msg_contents": "I can suggest a disk layout using at least two RAID arrays:\n\n1) RAID10 of SAS SSDs (or 15k RPM HDDs) for the O.S. and the \"pg_xlog\" folder, where PG\nwrites WAL files before checkpoint calls.\n2) RAID10 using as many spindles as possible for the default DB folder.\n\n\nRegards,\n\n\n2014-05-06 11:13 GMT+02:00 Johann Spies <[email protected]>:\n\n> I am busy reading Gregory Smith' s PostgreSQL 9.0 High Performance and\n> when the book was written he seemed to me a bit sceptical about SSD's. I\n> suspect the reliability of the SSD's has improved significantly since then.\n>\n> Our present server (128Gb RAM and 2.5 Tb disk space and 12 CPU cores -\n> RAID 10) will become a development server and we are going to buy a new\n> server.\n>\n> At the moment the 'base' directory uses 1.5Tb of disk space and there is\n> still more data to come.\n>\n> The database contains blbliometric data that receive updates on a weekly\n> basis but not much changes other than that except for cleaning of data by a\n> few persons.\n>\n> Some of the queries can take many hours to finish.\n>\n> On our present system there are sometimes more than 300GB in temporary\n> files which I suspect will not be the case on the new system with a much\n> larger RAM.\n>\n> Analysis or the SAR-logs showed that there were too much iowait in the\n> CPU's on the old system which has a lower spec CPU than the ones considered\n> for the new system.\n>\n> We are looking possibly the following hardware:\n>\n> CPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores\n> RAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768Gb\n>\n> with enough disk space - about 4.8 Tb on RAID 10.\n> My question is about the possible advantage and usage of SSD disks in the\n> new server. At the moment I am considering using 2 x 200GB SSD' s for a\n> separate partion for temporary files and 2 x 100GB for the operating system.\n>\n> So my questions:\n>\n> 1. Will the SSD's in this case be worth the cost?\n> 2. What will the best way to utilize them in the system?\n>\n> Regards\n> Johann\n> --\n> Because experiencing your loyal love is better than life itself,\n> my lips will praise you. (Psalm 63:3)\n>",
"msg_date": "Tue, 6 May 2014 11:25:42 +0200",
"msg_from": "DFE <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "On Tue, May 06, 2014 at 11:13:42AM +0200, Johann Spies wrote:\n>Analysis or the SAR-logs showed that there were too much iowait in the CPU's on\n>the old system which has a lower spec CPU than the ones considered for the new\n>system.\n\niowait means the cpu is doing nothing but waiting for data from the \ndisk. buying faster cpus means that they will be able to spend more time \nwaiting for data from the disk. you'd probably get much better bang for \nthe buck upgrading the storage subsystem than throwing more money at \ncpus.\n\n>We are looking possibly the following hardware:\n>\n>CPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores\n>RAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768Gb\n>\n>with enough disk space - about 4.8 Tb on RAID 10.\n>My question is about the possible advantage and usage of SSD disks in the new\n>server.\n\n>At the moment I am considering using 2 x 200GB SSD' s for a separate\n>partion for temporary files and 2 x 100GB for the operating system.\n\nIf you're talking about SSDs for the OS, that's a complete waste; there \nis essentially no I/O relating to the OS once you've booted.\n\n>So my questions:\n>\n>1. Will the SSD's in this case be worth the cost?\n>2. What will the best way to utilize them in the system?\n\nThe best way to utilize them would probably be to spend less on the CPU \nand RAM and more on the storage, and use SSD either for all of the \nstorage or for specific items that have a high level of I/O (such as the \nindexes). Can't be more specific than that without a lot more \ninformation about the database, how it is utilized, and what's actually \nslow.\n\nMike Stone\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 May 2014 07:07:14 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "Since the commitlog/WAL is sequential-write, does it matter that much to\nput it on SSD? (I understand that it matters to put it on a separate\ndisk subsystem so the write/read patterns don't interfere.)\n\n\nOn Tue, May 6, 2014 at 1:07 PM, Michael Stone <[email protected]> wrote:\n\n> On Tue, May 06, 2014 at 11:13:42AM +0200, Johann Spies wrote:\n>\n>> Analysis or the SAR-logs showed that there were too much iowait in the\n>> CPU's on\n>> the old system which has a lower spec CPU than the ones considered for\n>> the new\n>> system.\n>>\n>\n> iowait means the cpu is doing nothing but waiting for data from the disk.\n> buying faster cpus means that they will be able to spend more time waiting\n> for data from the disk. you'd probably get much better bang for the buck\n> upgrading the storage subsystem than throwing more money at cpus.\n>\n>\n> We are looking possibly the following hardware:\n>>\n>> CPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores\n>> RAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768Gb\n>>\n>> with enough disk space - about 4.8 Tb on RAID 10.\n>> My question is about the possible advantage and usage of SSD disks in the\n>> new\n>> server.\n>>\n>\n> At the moment I am considering using 2 x 200GB SSD' s for a separate\n>> partion for temporary files and 2 x 100GB for the operating system.\n>>\n>\n> If you're talking about SSDs for the OS, that's a complete waste; there is\n> essentially no I/O relating to the OS once you've booted.\n>\n>\n> So my questions:\n>>\n>> 1. Will the SSD's in this case be worth the cost?\n>> 2. What will the best way to utilize them in the system?\n>>\n>\n> The best way to utilize them would probably be to spend less on the CPU\n> and RAM and more on the storage, and use SSD either for all of the storage\n> or for specific items that have a high level of I/O (such as the indexes).\n> Can't be more specific than that without a lot more information about the\n> database, how it is utilized, and what's actually slow.\n>\n> Mike Stone\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Tue, 6 May 2014 13:15:10 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "On Tue, May 06, 2014 at 01:15:10PM +0200, Dorian Hoxha wrote:\n>Since the commitlog/WAL is sequential-write, does it mattert that much to put\n>it in ssd \n\nNo, assuming a good storage system with nvram write buffer. \n\nMike Stone\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 6 May 2014 07:24:10 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "And NVRAM is RAM on a hardware RAID controller that is not erased on\nreboot (battery-backed)?\nCan this NVRAM also be used when the configuration is\nJBOD (just a bunch of disks), or is some kind of RAID (0/1/5/6/10 etc.) required to\nuse the NVRAM?\n\n\nOn Tue, May 6, 2014 at 1:24 PM, Michael Stone <[email protected]> wrote:\n\n> On Tue, May 06, 2014 at 01:15:10PM +0200, Dorian Hoxha wrote:\n>\n>> Since the commitlog/WAL is sequential-write, does it mattert that much to\n>> put\n>> it in ssd\n>>\n>\n> No, assuming a good storage system with nvram write buffer.\n> Mike Stone\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Tue, 6 May 2014 16:43:48 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "On 6.5.2014 16:43, Dorian Hoxha wrote:\n> And nvram is ram on hardware-raid controller that is not erased on \n> reboot(battery) ?\n\nYes.\n\n> Can this nvram be used also when the configuration is \n> jbod(justabunchofdisks) or some kind of raid(0/1/5/6/10 etc) is\n> required to use the nvram.\n\nThat probably depends on the controller used. Some controllers (e.g.\nDell Perc H700 and such) don't even support JBOD - an easy way around\nthis is create single-drive RAID0 arrays (never tried it, though).\n\nWhy do you even want to use JBOD?\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 07 May 2014 23:35:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "Hello,\n\nhere I am - the lonely user of JBOD :-) I use exactly what you describe below, we have a couple of Dell Servers with battery backed H700 controllers. On one we let the controller do a RAID10 on others I use the single disks (as you describe in useless RAID0s). \n\nWhat we do then is use those disks in a Solaris ZFS Pool as filesystem for our databases. The reason is, that with ZFS (maybe there are other fs around there which can do this, but far back in good old Sun-times it was the only one) I can build up a pool and mix nearline SAS with SSDs in a neat way. ZFS has the concept of an ZFS Intent Log (ZIL), where writes go first before migrated to the pool - and you can put them on seperate disks. And you have the adaptive replacement cache (ARC) you can also put on seperate disks, where hot data is cached. Now guess where we use the expensive SSDs.\n\nIt boils down to this:\n\n- slow mechanical drives backed with nvram controller\n- SSD expecially for the ARC (and ZIL)\n- RAM for ZFS further caching\n\nOn my very first try when ZIL and ARC approached in Solaris my first idea was just put one SSD splitted to two partitions as ARC and ZIL to our exisiting database. I didn't change anything else (to be fair, that was a development system with out nvram controller) - gave me a boost in performace of about 10.\n\nTo be a bit more fair (I can, as I do not work for Sun - ehm Oracle): If this would work for someone else, the usage pattern should be kept in mind. Also setting and tuning a Solaris server is of some more work and ZFS itself uses a good bunch of CPU and RAM (we leave 8G for ZFS) and one would need much testing and tuning.\n\nA positive effect we use is for backing up data - since ZFS offers atomic snapshots, we just snapshot the filesystem. The datafiles are guaranteed to be vaild as the WAL. A restore is to put the files back into place and restart Postgres - it will run a recovery and replay the WAL and everything is done. 
We used that to provide back in time recovery and warm standby with old postgres installations before replication was included into core.\n\nCheers,\nRoland\n\n\n\n> Can this nvram be used also when the configuration is \n> jbod(justabunchofdisks) or some kind of raid(0/1/5/6/10 etc) is\n> required to use the nvram.\n\nThat probably depends on the controller used. Some controllers (e.g.\nDell Perc H700 and such) don't even support JBOD - an easy way around\nthis is create single-drive RAID0 arrays (never tried it, though).\n\nWhy do you even want to use JBOD?\n\n\n\n----------------------------------------------------------------\nE-Mail-Postfach voll? Jetzt kostenlos E-Mail-Adresse @t-online.de sichern und endlich Platz für tausende E-Mails haben.\nhttp://www.t-online.de/email-kostenlos\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 08 May 2014 07:20:43 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Specifications for a new server"
},
{
"msg_contents": "On 6 May 2014 13:07, Michael Stone <[email protected]> wrote:\n\n> On Tue, May 06, 2014 at 11:13:42AM +0200, Johann Spies wrote:\n>\n>> Analysis or the SAR-logs showed that there were too much iowait in the\n>> CPU's on\n>> the old system which has a lower spec CPU than the ones considered for\n>> the new\n>> system.\n>>\n>\n> iowait means the cpu is doing nothing but waiting for data from the disk.\n> buying faster cpus means that they will be able to spend more time waiting\n> for data from the disk. you'd probably get much better bang for the buck\n> upgrading the storage subsystem than throwing more money at cpus.\n>\n>\n> In that case I apologise for making the wrong assumption. People who are\nmore experienced than me analyzed the logs told me that to their surprise\nthe CPU' s were under pressure. I just assumed that the iowait was the\nproblem having looked at the logs myself.\n\n\n> If you're talking about SSDs for the OS, that's a complete waste; there is\n> essentially no I/O relating to the OS once you've booted.\n>\n>\nI also thought this might be an overkill but I was not sure.\n\n\n>\n> So my questions:\n>>\n>> 1. Will the SSD's in this case be worth the cost?\n>> 2. What will the best way to utilize them in the system?\n>>\n>\n> The best way to utilize them would probably be to spend less on the CPU\n> and RAM and more on the storage, and use SSD either for all of the storage\n> or for specific items that have a high level of I/O (such as the indexes).\n> Can't be more specific than that without a lot more information about the\n> database, how it is utilized, and what's actually slow.\n>\n>\n\nI understand your remark about the CPU in the light of my wrong assumption\nearlier, but I do not understand your remark about the RAM. The fact that\ntemporary files of up to 250Gb are created at times during complex queries,\nis to me an indication of too low RAM.\n\nQuestion: How do I dedicate a partition to indexes? Were do I configure\nPostgreSQL to write them in a particular area?\n\nRegards\nJohann\n",
"msg_date": "Thu, 8 May 2014 10:11:38 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "On 8 May 2014 10:11, Johann Spies <[email protected]> wrote:\n\n>\n> Question: How do I dedicate a partition to indexes? Were do I configure\n> PostgreSQL to write them in a particular area?\n>\n>\n>\nI just discovered TABLESPACE which answered my question.\n\nRegards\nJohann\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n",
"msg_date": "Thu, 8 May 2014 10:28:01 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": ">\n> Why do you even want to use JBOD?\n>\n\nNot for postgresql , but for distributed filesystems like hdfs/qfs (which\nare supposed to work on JBOD) with hypertable on top (so the nvram would\nhelp with the commits, since it is the biggest bottleneck when\nwriting(commits need to be saved to multiple servers before 'ok' is\nreturned in the client)).\n\n\nOn Thu, May 8, 2014 at 10:28 AM, Johann Spies <[email protected]>wrote:\n\n>\n>\n>\n> On 8 May 2014 10:11, Johann Spies <[email protected]> wrote:\n>\n>>\n>> Question: How do I dedicate a partition to indexes? Were do I configure\n>> PostgreSQL to write them in a particular area?\n>>\n>>\n>>\n> I just discovered TABLESPACE which answered my question.\n>\n> Regards\n> Johann\n>\n> --\n> Because experiencing your loyal love is better than life itself,\n> my lips will praise you. (Psalm 63:3)\n>\n",
"msg_date": "Thu, 8 May 2014 13:44:12 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "On Thu, May 08, 2014 at 10:11:38AM +0200, Johann Spies wrote:\n>I understand your remark about the CPU in the light of my wrong assumption\n>earlier, but I do not understand your remark about the RAM. The fact that\n>temporary files of up to 250Gb are created at times during complex queries, is\n>to me an indication of too low RAM.\n\nIf you can afford infinite RAM, then infinite RAM is great. If your \nworking set size exceeds the memory size, then you will eventually need \nto deal with disk IO. At that point, maybe a bit more memory will help \nand maybe it will not--you'll be able to fit a little bit more working \ndata into memory, but that won't likely radically change the \nperformance. (If you can afford to fit *everything* you need into RAM \nthan that's ideal, but that's not the case for most people with \nnon-trival data sets.) What is certain is that improving the disk IO \nperformance will improve your overall performance if you're IO bound.\n\n(And the mere existence of temporary files isn't an indication of \ninsufficient RAM if the system can utilize the memory more efficiently \nwith the files than it can without them--they could contain data that \nisn't needed in a particular phase of a query, freeing up resources that \nare needed for other data in that phase.)\n\nMike Stone\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 May 2014 10:19:13 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
},
{
"msg_contents": "On Thu, May 8, 2014 at 1:11 AM, Johann Spies <[email protected]> wrote:\n\n>\n>\n>>\n>> So my questions:\n>>>\n>>> 1. Will the SSD's in this case be worth the cost?\n>>> 2. What will the best way to utilize them in the system?\n>>>\n>>\n>> The best way to utilize them would probably be to spend less on the CPU\n>> and RAM and more on the storage, and use SSD either for all of the storage\n>> or for specific items that have a high level of I/O (such as the indexes).\n>> Can't be more specific than that without a lot more information about the\n>> database, how it is utilized, and what's actually slow.\n>>\n>>\n>\n> I understand your remark about the CPU in the light of my wrong assumption\n> earlier, but I do not understand your remark about the RAM. The fact that\n> temporary files of up to 250Gb are created at times during complex queries,\n> is to me an indication of too low RAM.\n>\n\nAre these PostgreSQL temp files or other temp files? PostgreSQL doesn't\nsuppress the use of temp files just because you have a lot of RAM. You\nwould also have to set work_mem to a very large setting, probably\ninappropriately large, and even that might not work because there other\nlimits on how much memory PostgreSQL can use for any given operation (for\nexample, you can't sort more than 2**32 (or 2**31?) tuples in memory, no\nmatter how much memory you have, and in older versions even less than\nthat). But that doesn't mean the RAM is not useful. The OS can use the\nRAM to buffer the temp files so that they might not ever see the disk, or\nmight not be read from disk because they are still in memory.\n\nSSD is probably wasted on temp files, as they are designed to be accessed\nmostly sequentially.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 13 May 2014 10:12:46 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Specifications for a new server"
}
] |
[
{
"msg_contents": "Hello PostgreSQL community ,\n\nI'm doing benchmark between column store and traditional row-oriented store. I would like to know if there is any way to measure memory consummed by a query execution?\n\nThanks\nMinh,\n",
"msg_date": "Thu, 8 May 2014 07:04:43 +0000",
"msg_from": "Phan Công Minh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Check memory consumption of postgresql query"
},
{
"msg_contents": "On Thu, May 8, 2014 at 3:04 AM, Phan Công Minh <[email protected]> wrote:\n> Hello PostgreSQL community ,\n>\n> I'm doing benchmark between column store and traditional row-oriented store.\n> I would like to know if there is any way to measure memory consummed by a\n> query execution?\n\n\nIn linux you can look at the memory usage for a particular backend in\n/proc/[pid]/smaps. Get the pid with pg_backend_pid() or from\npg_stat_activity.\n\nFor more info, check out\nhttp://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\n- Clinton\n\n\n>\n> Thanks\n> Minh,\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 8 May 2014 10:04:01 -0400",
"msg_from": "Clinton Adams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check memory consumption of postgresql query"
},
{
"msg_contents": "Hi Clinton,\n\nThank you for your response. I check the article (http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/) and it seems to work with general process as well. \nHowever does it have anyway to calculate the memory used by single query, not the whole postgresql process?\n\nThanks,\nMinh\n________________________________________\nFrom: Clinton Adams <[email protected]>\nSent: Thursday, May 8, 2014 4:04 PM\nTo: Phan Công Minh\nCc: [email protected]\nSubject: Re: [PERFORM] Check memory consumption of postgresql query\n\nOn Thu, May 8, 2014 at 3:04 AM, Phan Công Minh <[email protected]> wrote:\n> Hello PostgreSQL community ,\n>\n> I'm doing benchmark between column store and traditional row-oriented store.\n> I would like to know if there is any way to measure memory consummed by a\n> query execution?\n\n\nIn linux you can look at the memory usage for a particular backend in\n/proc/[pid]/smaps. Get the pid with pg_backend_pid() or from\npg_stat_activity.\n\nFor more info, check out\nhttp://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\n- Clinton\n\n\n>\n> Thanks\n> Minh,\n>\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 12 May 2014 07:02:54 +0000",
"msg_from": "Phan Công Minh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Check memory consumption of postgresql query"
},
{
"msg_contents": "On Mon, May 12, 2014 at 4:02 AM, Phan Công Minh <[email protected]> wrote:\n\n> Thank you for your response. I check the article (\n> http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/) and\n> it seems to work with general process as well.\n> However does it have anyway to calculate the memory used by single query,\n> not the whole postgresql process?\n>\n\nYou can check only the /proc/<pid>/ for the backend you are interested in.\nAlso, an EXPLAIN ANALYZE of your qurey will show memory used by some\noperations (like sort, hash, etc.), those are limited by the work_mem\nparameter, so if you are working on benchmarks, you may want to tune that\nproperly.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n",
"msg_date": "Mon, 12 May 2014 13:38:00 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check memory consumption of postgresql query"
}
] |
[
{
"msg_contents": "Dear Community friends,\n\n \n\nWe are planning to use postgresql 9.3 for building a mobile backend. Can we\nget a benchmark on the level of concurrency that can be supported by\nPostgres 9.3 and it will be able to handle the spike in traffic if the app\ngets popular.\n\n \n\nAny help regarding the same will be highly appreciated.\n\n \n\nThanks,\nRajiv\n\n\nThis message may contain privileged and confidential information and is solely for the use of intended recipient. The views expressed in this email are those of the sender and not of Pine Labs. The recipient should check this email and attachments for the presence of viruses / malwares etc. Pine Labs accepts no liability for any damage caused by any virus transmitted by this email. Pine Labs may monitor and record all emails.\n",
"msg_date": "Thu, 8 May 2014 12:40:20 +0530",
"msg_from": "\"Rajiv Kasera\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql 9.3 for a Mobile Backend"
},
{
"msg_contents": "2014-05-08 9:10 GMT+02:00 Rajiv Kasera <[email protected]>:\n\n> Dear Community friends,\n>\n>\n>\n> We are planning to use postgresql 9.3 for building a mobile backend. Can\n> we get a benchmark on the level of concurrency that can be supported by\n> Postgres 9.3 and it will be able to handle the spike in traffic if the app\n> gets popular.\n>\n>\n>\n> Any help regarding the same will be highly appreciated.\n>\n>\n>\n> Thanks,\n> Rajiv\n>\n> ------------------------------\n> This message may contain privileged and confidential information and is\n> solely for the use of intended recipient. The views expressed in this email\n> are those of the sender and not of Pine Labs. The recipient should check\n> this email and attachments for the presence of viruses / malwares etc. Pine\n> Labs accepts no liability for any damage caused by any virus transmitted by\n> this email. Pine Labs may monitor and record all emails.\n> ------------------------------\n>\n>\nHello,\n you could use Hammerdb : http://hammerora.sourceforge.net/\n, pgbench , yahoo ycsb https://github.com/brianfrankcooper/YCSB\n\n\nBye\n\nMat Dba\n",
"msg_date": "Thu, 8 May 2014 09:31:39 +0200",
"msg_from": "desmodemone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.3 for a Mobile Backend"
},
{
"msg_contents": "On 8.5.2014 09:10, Rajiv Kasera wrote:\n> Dear Community friends,\n> \n> \n> \n> We are planning to use postgresql 9.3 for building a mobile backend. Can\n> we get a benchmark on the level of concurrency that can be supported by\n> Postgres 9.3 and it will be able to handle the spike in traffic if the\n> app gets popular.\n\nI'm afraid the only answer we can give you is 'that depends ...' :-(\n\nThree most important things you need to know before answering the\nquestions is;\n\n1) dataset - How much data are we talking about? Is only small part\n active? Is it static / does it change a lot? Etc.\n\n2) workload - Are you doing simple or complex queries? What portion of\n the workload is read-only?\n\n3) hardware - CPUs, drives, ...\n\nThere's no way to answer the question without answering these questions\nfirst. There are databases that promise you everything without asking\nyou these questions - treat them just like Ulysses approached sirens.\n\nI understand these questions are difficult to answer before the project\neven started, but you certainly have an idea how it's going to work so\ndo estimates and review them as the project progresses.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 09 May 2014 00:59:37 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.3 for a Mobile Backend"
}
] |
[
{
"msg_contents": "Hi all,\n\nIs there a propensity for 9.2.4 to prefer a sort-merge-join, in place of a\nhash join?\n\nI’m fairly sure the answer is yes, but I also want to be sure I’m\ninterpreting the explain output correctly.\n\nI’m comparing behaviour between two systems, which for all intents and\npurposes are identical save for the version of postgres.\nThere appears to be nothing wrong with the row estimates given in the\nexplain plan on either machine, however actual performance is significantly\nimpaired on the 9.2.4 setup due to the preference for the use of a\nsort-merge join, compared to a hash-join on 9.3.0\n\n\n\nSnippet from 9.2.4\n -> Merge Left Join (cost=19598754.29..19602284.00\nrows=469996 width=434) (actual time=6369152.750..6386029.191 rows=6866896\nloops=1)\n Buffers: shared hit=489837 read=1724585\n -> Sort (cost=19598650.62..19599825.61\nrows=469996 width=120) (actual time=6369151.307..6373591.881 rows=6866896\nloops=1) (A)\n Sort Method: quicksort Memory: 1162266kB\n Buffers: shared hit=489765 read=1724585\n -> Hash Left Join\n(cost=429808.90..19554371.62 rows=469996 width=120) (actual\ntime=37306.534..6353455.046 rows=6866896 loops=1) (B)\n Rows Removed by Filter: 20862464\n Buffers: shared hit=489765\nread=1724585\n\nSnippet from 9.3.0\n\n -> Hash Left Join (cost=617050.43..20948535.43\nrows=566893 width=434) (actual time=51816.864..934723.548 rows=6866896\nloops=1)\n Buffers: shared hit=1732 read=2010920 written=1\n -> Hash Left Join (cost=616993.23..20870529.73\nrows=566893 width=120) (actual time=51796.882..923196.579 rows=6866896\nloops=1)\n Rows Removed by Filter: 20862464\n Buffers: shared hit=1732 read=2010877\nwritten=1\n\n\nAs you can see, the estimates are similar enough between them, but 9.2.4\nwant’s to run sort-merge plan (A) – and the resulting execution time blows\nout hugely.\nIntersetingly, it actually looks like it is the hash join immediately\npreceding the quick sort that isn’t performing well (B). Though I suspect\nthis is just how an explain plan reads - is this ultimately because the sort\nnode is unable to retrieve tuples from the child node quickly enough?\n\n\nSetting enable_mergejoin = 0 appears to solve this, but I think an upgrade\nto 9.3.4 is going to win over.\n\nCheers,\n\nTim\n",
"msg_date": "Mon, 12 May 2014 11:45:20 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.2.4 vs 9.3.0 query planning (sort merge join vs hash join)"
},
{
"msg_contents": "Tim Kane <[email protected]> writes:\n> Is there a propensity for 9.2.4 to prefer a sort-merge-join, in place of a\n> hash join?\n\nNot particularly; I don't think there's any actual difference in the cost\nestimation equations between 9.2 and 9.3. The two plans you show are\nclose enough in estimated cost that the ordering of their costs might be\ncoming out differently just as a matter of random variation in statistics.\n\nIt'd be worth double-checking the work_mem setting on both systems,\nthough, as (IIRC) an undersized work_mem hurts the estimate for hashes\nmore than for sorts.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 12 May 2014 07:23:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.4 vs 9.3.0 query planning (sort merge join vs hash join)"
},
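Tom's double-check can be made reproducible; a minimal sketch (the query bodies are elided in the thread, so the SELECT below is only a placeholder):

```sql
-- Show the effective setting on each system first:
SHOW work_mem;

-- Then compare plans under an identical, explicitly pinned work_mem.
-- SET LOCAL confines the change to the current transaction.
BEGIN;
SET LOCAL work_mem = '64MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT 1;  -- placeholder for the real query
ROLLBACK;
```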
{
"msg_contents": "Hmm. Interesting.\n\nThanks Tom, it does indeed look like the planner is evaluating and excluding\nthe hashed-join plan as having a higher cost. I can see this by setting\nenable_mergejoin=0.\nI think this may play against other aspects of the query (though only\nmarginally), so I can’t really compare the resulting cost metrics – but\nthey’re certainly close.\n\n\nI’ve just now played a little more with work_mem. I had already tried\nincreasing work_mem to all kinds of obscene levels.\n\nOddly, it seems the solution here is in fact to *reduce* work_mem in order\nto elicit the preferred hash-join based plan.\nIn fact, I needed to reduce it to as low as 64MB for this behaviour – which\nseems counter-intuitive. These are not small queries, so I previously had it\npushed up to 6GB for these tasks.\n\nFor smaller datasets, I need to reduce work_mem further still in order to\nobtain a hash-join plan – though the difference in execution time becomes\nless of a problem at this size.\n\nTim\n\n\nFrom: Tom Lane <[email protected]>\nDate: Monday, 12 May 2014 12:23\nTo: Tim Kane <[email protected]>\nCc: \"[email protected]\" <[email protected]>\nSubject: Re: [PERFORM] 9.2.4 vs 9.3.0 query planning (sort merge join vs\nhash join)\n\nTim Kane <[email protected]> writes:\n> Is there a propensity for 9.2.4 to prefer a sort-merge-join, in place of a\n> hash join?\n\nNot particularly; I don't think there's any actual difference in the cost\nestimation equations between 9.2 and 9.3.  The two plans you show are\nclose enough in estimated cost that the ordering of their costs might be\ncoming out differently just as a matter of random variation in statistics.\n\nIt'd be worth double-checking the work_mem setting on both systems,\nthough, as (IIRC) an undersized work_mem hurts the estimate for hashes\nmore than for sorts.\n\nregards, tom lane\n",
"msg_date": "Mon, 12 May 2014 13:22:51 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.2.4 vs 9.3.0 query planning (sort merge join vs hash\n join)"
},
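If disabling merge joins does win out until the upgrade lands, the change need not be global: SET LOCAL scopes it to one transaction (a sketch, with the query itself elided as in the thread):

```sql
BEGIN;
SET LOCAL enable_mergejoin = off;  -- reverts automatically at COMMIT/ROLLBACK
-- run the affected query here
COMMIT;
```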
{
"msg_contents": "On Mon, May 12, 2014 at 3:45 AM, Tim Kane <[email protected]> wrote:\n\n> Hi all,\n>\n> Is there a propensity for 9.2.4 to prefer a sort-merge-join, in place of a\n> hash join?\n>\n> I’m fairly sure the answer is yes, but I also want to be sure I’m\n> interpreting the explain output correctly.\n>\n> I’m comparing behaviour between two systems, which for all intents and\n> purposes are identical save for the version of postgres.\n> There appears to be nothing wrong with the row estimates given in the\n> explain plan on either machine, however actual performance is significantly\n> impaired on the 9.2.4 setup due to the preference for the use of a\n> sort-merge join, compared to a hash-join on 9.3.0\n>\n>\n>\n> Snippet from 9.2.4\n> -> Merge Left Join (cost=19598754.29..19602284.00\n> rows=469996 width=434) (actual time=6369152.750..6386029.191 rows=6866896\n> loops=1)\n> Buffers: shared hit=489837 read=1724585\n> * -> Sort (cost=19598650.62..19599825.61\n> rows=469996 width=120) (actual time=6369151.307..6373591.881 rows=6866896\n> loops=1) (A)*\n> * Sort Method: quicksort Memory:\n> 1162266kB*\n> Buffers: shared hit=489765 read=1724585\n> -> Hash Left Join\n> (cost=429808.90..19554371.62 rows=469996 width=120) (actual\n> time=37306.534..6353455.046 rows=6866896 loops=1) * (B)*\n> Rows Removed by Filter: 20862464\n> Buffers: shared hit=489765\n> read=1724585\n>\n> Snippet from 9.3.0\n>\n> -> Hash Left Join (cost=617050.43..20948535.43\n> rows=566893 width=434) (actual time=51816.864..934723.548 rows=6866896\n> loops=1)\n> Buffers: shared hit=1732 read=2010920 written=1\n> -> Hash Left Join\n> (cost=616993.23..20870529.73 rows=566893 width=120) (actual\n> time=51796.882..923196.579 rows=6866896 loops=1)\n> Rows Removed by Filter: 20862464\n> Buffers: shared hit=1732 read=2010877\n> written=1\n>\n>\n> As you can see, the estimates are similar enough between them, but 9.2.4\n> want’s to run sort-merge plan (A) – and the resulting execution time 
blows\n> out hugely.\n> Interestingly, it actually looks like it is the hash join immediately\n> preceding the quick sort that isn’t performing well (B). Though I suspect\n> this is just how an explain plan reads - is this ultimately because the\n> sort node is unable to retrieve tuples from the child node quickly enough?\n>\n\nIt looks to me like a caching issue.  The two Hash Left Joins seem to be\nidentical (although occurring on different levels of the plan) but one\ntakes much more time.  But it is hard to know without seeing what is\nfeeding into those hash joins.\n\nCheers,\n\nJeff\n\n\n>\n>\n> Setting *enable_mergejoin = 0 *appears to solve this, but I think an\n> upgrade to 9.3.4 is going to win over.\n>\n> Cheers,\n>\n> Tim\n>\n",
"msg_date": "Mon, 12 May 2014 09:02:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.4 vs 9.3.0 query planning (sort merge join vs hash join)"
}
] |
[
{
"msg_contents": "Hi all,\n\nFirst some background.\nI have inherited a system that appears to have a lot of logic built into\nviews upon views upon views (and then some more views for good measure).\nIt struck me that the CASE conditions built into those views are causing\npoorer performance than expected – so I thought I would run a few tests\nagainst the base tables to see where the difference lies.\n\nAnyway, that’s all by the by.. Because what I found on my travels is that\nthe parent table of the relevant partitions is being included and appended\nin the query plan.\nThis is all documented and I understand why, fine. But the impact of this\nis greater than I was expecting.\n\nFor instance, if I query the partition directly (for all tuples it contains)\nversus a query that targets the same partition via exclusion rules - I find\nthe direct query runs in less than half the time.\n\n\nDirect query:\n Seq Scan on partitioned.ts_201405 track_streams  (cost=0.00..4167467.56\nrows=65067252 width=253) (actual time=0.010..96796.053 rows=65328073\nloops=1)\n   Output: \n   Filter: \n   Buffers: shared hit=354 read=2215096\n Total runtime: 137437.675 ms\n(5 rows)\n\nIndirect query:\n Result  (cost=0.00..4167467.56 rows=65067253 width=253) (actual\ntime=0.011..250057.941 rows=65328073 loops=1)\n   Output:\n   Buffers: shared hit=322 read=2215128\n   ->  Append  (cost=0.00..4167467.56 rows=65067253 width=253) (actual\ntime=0.010..163452.326 rows=65328073 loops=1)\n         Buffers: shared hit=322 read=2215128\n         ->  Seq Scan on archive.ts  (cost=0.00..0.00 rows=1 width=199)\n(actual time=0.001..0.001 rows=0 loops=1)\n               Output:\n               Filter:\n         ->  Seq Scan on partitioned.ts_201405  (cost=0.00..4167467.56\nrows=65067252 width=253) (actual time=0.006..85883.925 rows=65328073\nloops=1)\n               Output:\n               Filter:\n               Buffers: shared hit=322 read=2215128\n Total runtime: 289238.187 ms\n(13 rows)\n\n\nSo what is the append node actually doing, and why is it necessary?\nI expect that it simply does what it says, and appends the results of those\ntwo seq-scans. But in reality, there isn’t a lot to do there. While I\nexpect a little bit of overhead, surely it just passes the tuples straight\nthrough to the result node and that will be that.. No?\n\n(yeah, I’ve made a few assumptions/guesses here, but I’m not sure I’m ready\nto look at the code just yet)\n\n\nCheers,\n\nTim\n",
"msg_date": "Tue, 13 May 2014 21:02:16 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Constraint exclusion won't exclude parent table"
},
{
"msg_contents": "Tim Kane <[email protected]> writes:\n> So what is the append node actually doing, and why is it necessary?\n> I expect that it simply does what it says, and appends the results of those\n> two seq-scans. But in reality, there isn’t a lot to do there. While I\n> expect a little bit of overhead, surely it just passes the tuples straight\n> through to the result node and that will be that.. No?\n\nYeah, it's not expected that that's going to cost much. I am suspicious\nthat what you are looking at is mostly measurement overhead: during\nEXPLAIN ANALYZE, each plan node has to do two gettimeofday() calls per\ncall, and there are lots of platforms where that is significant relative\nto the actual work done per node.\n\nYou might try comparing the overall times for select count(*) from ...\nrather than EXPLAIN ANALYZE for these two cases. If those times are\nmuch closer together than what you're getting from EXPLAIN ANALYZE,\nthen you've got a machine with expensive gettimeofday() and you have\nto take your measurements with an appropriate quantum of salt.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 May 2014 16:24:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constraint exclusion won't exclude parent table"
},
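Tom's cross-check, sketched with the relation names from the thread (the filters are elided in the plans above, so they are left as comments here):

```sql
\timing on
-- Direct: scan the partition alone, no Append node involved
SELECT count(*) FROM ONLY partitioned.ts_201405;  -- WHERE ... as in the thread
-- Indirect: via the parent, relying on constraint exclusion
SELECT count(*) FROM archive.ts;                  -- WHERE ... same filter
```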
{
"msg_contents": "> \n> \n> Yeah, it's not expected that that's going to cost much.  I am suspicious\n> that what you are looking at is mostly measurement overhead: during\n> EXPLAIN ANALYZE, each plan node has to do two gettimeofday() calls per\n> call, and there are lots of platforms where that is significant relative\n> to the actual work done per node.\n> \n> You might try comparing the overall times for select count(*) from ...\n> rather than EXPLAIN ANALYZE for these two cases.  If those times are\n> much closer together than what you're getting from EXPLAIN ANALYZE,\n> then you've got a machine with expensive gettimeofday() and you have\n> to take your measurements with an appropriate quantum of salt.\n> \n> regards, tom lane\n\nInteresting.. \n\nDirect query:\nTime: 374336.514 ms\n\nIndirect query:\nTime: 387114.059 ms\n\nMystery solved. Thanks again Tom.\n\nFor what it’s worth:  Linux 3.2.0-4-amd64 Debian 3.2.46-1+deb7u1 x86_64\n",
"msg_date": "Tue, 13 May 2014 21:57:10 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Constraint exclusion won't exclude parent table"
}
] |
[
{
"msg_contents": "Hi all,\n\nSo I was thinking about the following, after experimenting with constraint\nexclusion.\n\nI thought I would see what happens when I do this:\n\n  SELECT * FROM ONLY table_a UNION SELECT * FROM table_b;\n\n\nI noticed that despite table_a still having no data in it, the planner has\nalready decided that it needs to insert a chain of ‘append->sort->unique’\nnodes into the plan.\n\nThat’s fairly reasonable.\nWhile I understand that we can’t readily know about whether a given node will\nreturn anything or not - would it be possible to have the execution engine\nbranch off in the event that a given node returns nothing at all?\n\nI guess there are probably a lot of considerations, and I suspect it would\nconsiderably increase planning time, though maybe it also presents an\nopportunity for some interesting approaches to adaptive query execution.\n\nI don’t know so much about this, though I’m sure there are all kinds of\nresearch papers discussing it.\nIs this something that has been considered before?\n\nTim\n",
"msg_date": "Tue, 13 May 2014 21:08:09 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adaptive query execution"
},
{
"msg_contents": "On Tue, May 13, 2014 at 5:08 PM, Tim Kane <[email protected]> wrote:\n> Hi all,\n>\n> So I was thinking about the following, after experimenting with constraint\n> exclusion.\n>\n> I thought I would see what happens when I do this:\n>\n> SELECT * FROM ONLY table_a UNION SELECT * FROM table_b;\n>\n>\n> I noticed that despite table_a still having no data in it, the planner has\n> already decided that it needs to insert a chain of ‘append->sort->unique’\n> nodes into the plan.\n>\n> That’s fairly reasonable.\n> While I understand that we can’t readily know about wether a given node will\n> return anything or not - would it be possible to have the execution engine\n> branch off in the event that a given node returns nothing at all?\n>\n> I guess there are probably a lot of considerations, and I suspect it would\n> considerably increase planning time, though maybe it also presents an\n> opportunity for some interesting approaches to adaptive query execution.\n\n\nWhat's the point, in the context of this example?\n\nThe sort-unique still has to be performed even if you didn't have data\nin one side, since the other could still have duplicates.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 May 2014 17:13:42 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query execution"
},
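A side note implicit in Claudio's reply: the sort/unique chain exists only to deduplicate, so when duplicates are acceptable, UNION ALL avoids those nodes entirely:

```sql
SELECT * FROM ONLY table_a
UNION ALL   -- no Sort/Unique nodes in the resulting plan
SELECT * FROM table_b;
```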
{
"msg_contents": "> \n> \n> From: Claudio Freire <[email protected]>\n>> I thought I would see what happens when I do this:\n>> \n>>   SELECT * FROM ONLY table_a UNION SELECT * FROM table_b;\n>> \n>> \n>> \n> What's the point, in the context of this example?\n> \n> The sort-unique still has to be performed even if you didn't have data\n> in one side, since the other could still have duplicates.\n\n\nDamn it. Okay, bad example. I should sleep.\n\nA better example would be (from my other post just now -\nhttp://www.postgresql.org/message-id/CF9838D8.7EF3D%[email protected] )\nwhere an empty parent-table is append’ed into a result set involving one\n(and only one) of its child relations.  Whereas a more optimum solution\nwould involve only the child relation without the need to append the empty\nparent relation.\n\nI’m sure there are other scenarios where adaptive query execution would be\nof greater benefit.\n\nTim\n",
"msg_date": "Tue, 13 May 2014 21:33:58 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adaptive query execution"
}
] |
[
{
"msg_contents": "Day and night, the postgres stats collector process runs at about 20 MB/sec\noutput. vmstat shows this:\n\n$ vmstat 2\nprocs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id\nwa\n 0  0  55864 135740 123804 10712928    4    6   445  2642    0    0  5  1\n92  2\n 1  0  55864 134820 123804 10713012    0    0     0 34880  540  338  1  1\n98  0\n 0  0  55864 135820 123812 10712896    0    0     0 20980  545  422  1  1\n98  0\n\niotop(1) shows that it's the stats collector, running at 20 MB/sec.\n\nIs this normal?\n\nCraig\n",
"msg_date": "Wed, 14 May 2014 21:18:15 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stats collector constant I/O"
},
{
"msg_contents": "On May 14, 2014 9:19 PM, \"Craig James\" <[email protected]> wrote:\n>\n> Day and night, the postgres stats collector process runs at about 20\nMB/sec output. vmstat shows this:\n>\n> $ vmstat 2\n> procs -----------memory---------- ---swap-- -----io---- -system--\n----cpu----\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy\nid wa\n>  0  0  55864 135740 123804 10712928    4    6   445  2642    0    0  5  1\n92  2\n>  1  0  55864 134820 123804 10713012    0    0     0 34880  540  338  1  1\n98  0\n>  0  0  55864 135820 123812 10712896    0    0     0 20980  545  422  1  1\n98  0\n>\n> iotop(1) shows that it's the stats collector, running at 20 MB/sec.\n\nThis is normal for 9.2 and below if you have hundreds of databases in the\ncluster.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 15 May 2014 01:18:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stats collector constant I/O"
},
{
"msg_contents": "Hello\n\nwe had a similar issue - you can try to move the stats file to a ramdisk\n\nhttp://serverfault.com/questions/495057/too-much-i-o-generated-by-postgres-stats-collector-process\n\nRegards\n\nPavel Stehule\n\n\n2014-05-15 6:18 GMT+02:00 Craig James <[email protected]>:\n\n> Day and night, the postgres stats collector process runs at about 20\n> MB/sec output. vmstat shows this:\n>\n> $ vmstat 2\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id\n> wa\n>  0  0  55864 135740 123804 10712928    4    6   445  2642    0    0  5  1\n> 92  2\n>  1  0  55864 134820 123804 10713012    0    0     0 34880  540  338  1  1\n> 98  0\n>  0  0  55864 135820 123812 10712896    0    0     0 20980  545  422  1  1\n> 98  0\n>\n> iotop(1) shows that it's the stats collector, running at 20 MB/sec.\n>\n> Is this normal?\n>\n> Craig\n>\n",
"msg_date": "Thu, 15 May 2014 10:25:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stats collector constant I/O"
},
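The move described in the linked answer comes down to two pieces of configuration; a sketch (the mount point and size are illustrative, not from the thread):

```
# /etc/fstab: a small memory-backed filesystem for the stats temp files
tmpfs  /var/lib/pgsql_stats_tmp  tmpfs  size=64M,uid=postgres,gid=postgres  0 0

# postgresql.conf (stats_temp_directory exists since 8.4; a reload suffices)
stats_temp_directory = '/var/lib/pgsql_stats_tmp'
```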
{
"msg_contents": "On 15.5.2014 06:18, Craig James wrote:\n> Day and night, the postgres stats collector process runs at about 20\n> MB/sec output. vmstat shows this:\n> \n> $ vmstat 2\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ----cpu----\n>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy\n> id wa\n>  0  0  55864 135740 123804 10712928    4    6   445  2642    0    0  5 \n> 1 92  2\n>  1  0  55864 134820 123804 10713012    0    0     0 34880  540  338  1 \n> 1 98  0\n>  0  0  55864 135820 123812 10712896    0    0     0 20980  545  422  1 \n> 1 98  0\n> \n> iotop(1) shows that it's the stats collector, running at 20 MB/sec.\n\nWhich PostgreSQL version are you running? And how many databases /\nobjects (tables, indexes) are there?\n\nWith versions up to 9.1 and a large number of objects, this is quite\nnormal. The file size is proportional to the number of objects, and may\nget written quite frequently.\n\nThe newer versions (since 9.2) have a per-database file, which usually\nsignificantly decreases the load - both CPU and I/O. But if all the\nobjects are in a single database, this is not going to help.\n\nI'd recommend moving the stat directory to tmpfs (i.e. a memory-based\nfilesystem) - this improves the I/O load, but it may still consume a\nnontrivial amount of CPU time to read/write it.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 May 2014 01:27:21 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stats collector constant I/O"
}
] |
[
{
"msg_contents": "Hi, all.\nI have a table to save received measure data.\n\n\nCREATE TABLE measure_data\n(\n  id serial NOT NULL,\n  telegram_id integer NOT NULL,\n  measure_time timestamp without time zone NOT NULL,\n  item_id integer NOT NULL,\n  val double precision,\n  CONSTRAINT measure_data_pkey PRIMARY KEY (id)\n);\n\nCREATE INDEX index_measure_data_telegram_id ON measure_data USING btree (telegram_id);\n\n\nIn my scenario, a telegram contains measure data for multiple data items and timestamps.\nBTW, another table is for telegrams.\n\nThe SQL I used in my application is\n  select * from measure_data where telegram_id in(1,2,...,n)\nand this query used the index_measure_data_telegram_id index, as expected.\n\nIn order to see the performance of my query,\nI used the following query to search the measure data for 30 random telegrams.\n\n\nexplain analyze\nSELECT md.*\n  FROM measure_data md\n  where telegram_id in\n  (\n   SELECT distinct\n     trunc((132363-66484) * random() + 66484)\n   FROM generate_series(1,30) as s(telegram_id)\n  )\n  ;\n\nThe 132363 and 66484 are the max and min of the telegram id, respectively.\n\nWhat surprised me is that the index is not used; instead, a seq scan is performed on measure_data,\nalthough, intuitively, in this case, it is much wiser to use the index.\nWould you please give some clue as to why this happened?\n\n\"Hash Semi Join  (cost=65.00..539169.32 rows=10277280 width=28) (actual time=76.454..17177.054 rows=9360 loops=1)\"\n\"  Hash Cond: ((md.telegram_id)::double precision = (trunc(((65879::double precision * random()) + 66484::double precision))))\"\n\"  ->  Seq Scan on measure_data md  (cost=0.00..356682.60 rows=20554560 width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\"\n\"  ->  Hash  (cost=52.50..52.50 rows=1000 width=8) (actual time=0.062..0.062 rows=30 loops=1)\"\n\"        Buckets: 1024  Batches: 1  Memory Usage: 2kB\"\n\"        ->  HashAggregate  (cost=22.50..42.50 rows=1000 width=0) (actual time=0.048..0.053 rows=30 loops=1)\"\n\"              ->  Function Scan on 
generate_series s  (cost=0.00..20.00 rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\"\n\"Total runtime: 17177.527 ms\"\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 14:16:47 +0900",
"msg_from": "=?utf-8?B?5bi46LaF?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "how do functions affect query plan?"
},
{
"msg_contents": "常超 wrote\n> Hi,all\n> I have a table to save received measure data.\n> \n> \n> CREATE TABLE measure_data\n> (\n> id serial NOT NULL,\n> telegram_id integer NOT NULL,\n> measure_time timestamp without time zone NOT NULL,\n> item_id integer NOT NULL,\n> val double precision,\n> CONSTRAINT measure_data_pkey PRIMARY KEY (id)\n> );\n> \n> CREATE INDEX index_measure_data_telegram_id ON measure_data USING btree\n> (telegram_id);\n> \n> \n> in my scenario,a telegram contains measure data for multiple data items\n> and timestamps,\n> BTW,another table is for telegram.\n> \n> The SQL I used in my application is \n> select * from measure_data where telegram_id in(1,2,...,n)\n> and this query used the index_measure_data_telegram_id index,as expected.\n> \n> In order to see the performance of my query ,\n> I used the following query to search the measure data for randomly 30\n> telegrams.\n> \n> \n> explain analyze\n> SELECT md.*\n> FROM measure_data md\n> where telegram_id in \n> (\n> SELECT distinct\n> trunc((132363-66484) * random() + 66484)\n> FROM generate_series(1,30) as s(telegram_id)\n> )\n> ;\n> \n> the 132363 and 66484 are the max and min of the telegram id,separately.\n> \n> What surprised me is that index is not used,instead,a seq scan is\n> performed on measure_data.\n> Although,intuitively,in this case,it is much wiser to use the index.\n> Would you please give some clue to why this happened?\n> \n> \"Hash Semi Join (cost=65.00..539169.32 rows=10277280 width=28) (actual\n> time=76.454..17177.054 rows=9360 loops=1)\"\n> \" Hash Cond: ((md.telegram_id)::double precision = (trunc(((65879::double\n> precision * random()) + 66484::double precision))))\"\n> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560\n> width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\"\n> \" -> Hash (cost=52.50..52.50 rows=1000 width=8) (actual\n> time=0.062..0.062 rows=30 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n> \" -> 
HashAggregate (cost=22.50..42.50 rows=1000 width=0) (actual\n> time=0.048..0.053 rows=30 loops=1)\"\n> \" -> Function Scan on generate_series s (cost=0.00..20.00\n> rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\"\n> \"Total runtime: 17177.527 ms\"\n\nThe planner expects to have to return half the table when you provide 1,000\ndistinct telegram_ids, which is best handled by scanning the whole table\nsequentially and tossing out non-matching rows.\n\nI am curious whether the plan would be different if you added a LIMIT 30 to the\nsub-query.\n\nThe root of the problem is that the planner has no way of knowing whether\ngenerate_series is going to return 1 or 1,000,000 rows, so by default it (like\nall functions) is assumed by the planner to return 1,000 rows. By adding\nan explicit limit you can better inform the planner of how many rows you\nare going to pass up to the parent query, and it will hopefully, with\nknowledge of only 30 distinct values, use the index.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/how-do-functions-affect-query-plan-tp5803993p5803996.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 May 2014 22:43:24 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how do functions affect query plan?"
},
{
"msg_contents": "Hi,David\n\nSeems that the root of evil is in the function(random,trunc),\nalthough I don't know why.\n\nHere is the comparison.\n\n1.w/o function : index is wisely used.(Even without the limit 30 clause)\n\nexplain analyze\nSELECT md.*\n FROM measure_data md\n where telegram_id in \n (\n SELECT 66484 + (132363-66484)/30 * i \n FROM generate_series(1,30) as s(i)\n limit 30\n )\n ;\n\n\"Nested Loop (cost=10.01..39290.79 rows=10392 width=28) (actual time=0.079..3.490 rows=9360 loops=1)\"\n\" -> HashAggregate (cost=0.83..1.13 rows=30 width=4) (actual time=0.027..0.032 rows=30 loops=1)\"\n\" -> Limit (cost=0.00..0.45 rows=30 width=4) (actual time=0.013..0.020 rows=30 loops=1)\"\n\" -> Function Scan on generate_series s (cost=0.00..15.00 rows=1000 width=4) (actual time=0.011..0.016 rows=30 loops=1)\"\n\" -> Bitmap Heap Scan on measure_data md (cost=9.19..1306.20 rows=346 width=28) (actual time=0.030..0.075 rows=312 loops=30)\"\n\" Recheck Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n\" -> Bitmap Index Scan on index_measure_data_telegram_id (cost=0.00..9.10 rows=346 width=0) (actual time=0.025..0.025 rows=312 loops=30)\"\n\" Index Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n\"Total runtime: 3.714 ms\"\n\n\n2.when function is there: seq scan\n\nexplain analyze\nSELECT md.*\n FROM measure_data md\n where telegram_id in \n (\n SELECT trunc((132363-66484) * random()) +66484\n FROM generate_series(1,30) as s(i)\n limit 30\n )\n ;\n\n\n\"Hash Join (cost=1.65..490288.89 rows=10277280 width=28) (actual time=0.169..4894.847 rows=9360 loops=1)\"\n\" Hash Cond: ((md.telegram_id)::double precision = ((trunc((65879::double precision * random())) + 66484::double precision)))\"\n\" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560 width=28) (actual time=0.010..2076.932 rows=20554560 loops=1)\"\n\" -> Hash (cost=1.28..1.28 rows=30 width=8) (actual time=0.041..0.041 rows=30 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n\" -> 
HashAggregate (cost=0.98..1.28 rows=30 width=8) (actual time=0.034..0.036 rows=30 loops=1)\"\n\" -> Limit (cost=0.00..0.60 rows=30 width=0) (actual time=0.016..0.026 rows=30 loops=1)\"\n\" -> Function Scan on generate_series s (cost=0.00..20.00 rows=1000 width=0) (actual time=0.015..0.023 rows=30 loops=1)\"\n\"Total runtime: 4895.239 ms\"\n\n\n----------------------------------------\n> Date: Wed, 14 May 2014 22:43:24 -0700\n> From: [email protected]\n> To: [email protected]\n> Subject: Re: [PERFORM] how do functions affect query plan?\n>\n> 常超 wrote\n>> Hi,all\n>> I have a table to save received measure data.\n>>\n>>\n>> CREATE TABLE measure_data\n>> (\n>> id serial NOT NULL,\n>> telegram_id integer NOT NULL,\n>> measure_time timestamp without time zone NOT NULL,\n>> item_id integer NOT NULL,\n>> val double precision,\n>> CONSTRAINT measure_data_pkey PRIMARY KEY (id)\n>> );\n>>\n>> CREATE INDEX index_measure_data_telegram_id ON measure_data USING btree\n>> (telegram_id);\n>>\n>>\n>> in my scenario,a telegram contains measure data for multiple data items\n>> and timestamps,\n>> BTW,another table is for telegram.\n>>\n>> The SQL I used in my application is\n>> select * from measure_data where telegram_id in(1,2,...,n)\n>> and this query used the index_measure_data_telegram_id index,as expected.\n>>\n>> In order to see the performance of my query ,\n>> I used the following query to search the measure data for randomly 30\n>> telegrams.\n>>\n>>\n>> explain analyze\n>> SELECT md.*\n>> FROM measure_data md\n>> where telegram_id in\n>> (\n>> SELECT distinct\n>> trunc((132363-66484) * random() + 66484)\n>> FROM generate_series(1,30) as s(telegram_id)\n>> )\n>> ;\n>>\n>> the 132363 and 66484 are the max and min of the telegram id,separately.\n>>\n>> What surprised me is that index is not used,instead,a seq scan is\n>> performed on measure_data.\n>> Although,intuitively,in this case,it is much wiser to use the index.\n>> Would you please give some clue to why this 
happened?\n>>\n>> \"Hash Semi Join (cost=65.00..539169.32 rows=10277280 width=28) (actual\n>> time=76.454..17177.054 rows=9360 loops=1)\"\n>> \" Hash Cond: ((md.telegram_id)::double precision = (trunc(((65879::double\n>> precision * random()) + 66484::double precision))))\"\n>> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560\n>> width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\"\n>> \" -> Hash (cost=52.50..52.50 rows=1000 width=8) (actual\n>> time=0.062..0.062 rows=30 loops=1)\"\n>> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n>> \" -> HashAggregate (cost=22.50..42.50 rows=1000 width=0) (actual\n>> time=0.048..0.053 rows=30 loops=1)\"\n>> \" -> Function Scan on generate_series s (cost=0.00..20.00\n>> rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\"\n>> \"Total runtime: 17177.527 ms\"\n>\n> The planner expects to need to return half the table when you provide 1,000\n> distinct telegram_ids, which is best handled by scanning the whole table\n> sequentially and tossing out invalid data.\n>\n> I am curious if the plan will be different if you added a LIMIT 30 to the\n> sub-query.\n>\n> The root of the problem is the planner has no way of knowing whether\n> generate_series is going to return 1 or 1,000,000 rows so by default it (and\n> all functions) are assumed (by the planner) to return 1,000 rows. 
By adding\n> an explicit limit you can better inform the planner as to how many rows you\n> are going to be passing up to the parent query and it will hopefully, with\n> knowledge of only 30 distinct values, use the index.\n>\n>\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/how-do-functions-affect-query-plan-tp5803993p5803996.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 15:19:13 +0900",
"msg_from": "changchao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how do functions affect query plan?"
},
{
"msg_contents": "\n\nInterestingly,adding type cast made postgresql wiser.\nAnyone knows the reason?\n\n1.no type cast\nSELECT md.*\n FROM measure_data md\n where telegram_id in (trunc(66484.2),trunc(132362.1 ))\n\n\n\"Seq Scan on measure_data md (cost=0.00..459455.40 rows=205546 width=28) (actual time=77.144..6458.870 rows=624 loops=1)\"\n\" Filter: ((telegram_id)::numeric = ANY ('{66484,132362}'::numeric[]))\"\n\" Rows Removed by Filter: 20553936\"\n\"Total runtime: 6458.921 ms\"\n\n\n2.type cast\n\nSELECT md.*\n FROM measure_data md\n where telegram_id in (trunc(66484.2)::int,trunc(132362.1 )::int)\n\n\"Bitmap Heap Scan on measure_data md (cost=16.06..2618.86 rows=684 width=28) (actual time=0.076..0.154 rows=624 loops=1)\"\n\" Recheck Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\"\n\" -> Bitmap Index Scan on index_measure_data_telegram_id (cost=0.00..15.88 rows=684 width=0) (actual time=0.065..0.065 rows=624 loops=1)\"\n\" Index Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\"\n\"Total runtime: 0.187 ms\"\n\n\n----------------------------------------\n> From: [email protected]\n> To: [email protected]; [email protected]\n> Subject: Re: [PERFORM] how do functions affect query plan?\n> Date: Thu, 15 May 2014 15:19:13 +0900\n>\n> Hi,David\n>\n> Seems that the root of evil is in the function(random,trunc),\n> although I don't know why.\n>\n> Here is the comparison.\n>\n> 1.w/o function : index is wisely used.(Even without the limit 30 clause)\n>\n> explain analyze\n> SELECT md.*\n> FROM measure_data md\n> where telegram_id in\n> (\n> SELECT 66484 + (132363-66484)/30 * i\n> FROM generate_series(1,30) as s(i)\n> limit 30\n> )\n> ;\n>\n> \"Nested Loop (cost=10.01..39290.79 rows=10392 width=28) (actual time=0.079..3.490 rows=9360 loops=1)\"\n> \" -> HashAggregate (cost=0.83..1.13 rows=30 width=4) (actual time=0.027..0.032 rows=30 loops=1)\"\n> \" -> Limit (cost=0.00..0.45 rows=30 width=4) (actual time=0.013..0.020 rows=30 loops=1)\"\n> \" -> Function 
Scan on generate_series s (cost=0.00..15.00 rows=1000 width=4) (actual time=0.011..0.016 rows=30 loops=1)\"\n> \" -> Bitmap Heap Scan on measure_data md (cost=9.19..1306.20 rows=346 width=28) (actual time=0.030..0.075 rows=312 loops=30)\"\n> \" Recheck Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n> \" -> Bitmap Index Scan on index_measure_data_telegram_id (cost=0.00..9.10 rows=346 width=0) (actual time=0.025..0.025 rows=312 loops=30)\"\n> \" Index Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n> \"Total runtime: 3.714 ms\"\n>\n>\n> 2.when function is there: seq scan\n>\n> explain analyze\n> SELECT md.*\n> FROM measure_data md\n> where telegram_id in\n> (\n> SELECT trunc((132363-66484) * random()) +66484\n> FROM generate_series(1,30) as s(i)\n> limit 30\n> )\n> ;\n>\n>\n> \"Hash Join (cost=1.65..490288.89 rows=10277280 width=28) (actual time=0.169..4894.847 rows=9360 loops=1)\"\n> \" Hash Cond: ((md.telegram_id)::double precision = ((trunc((65879::double precision * random())) + 66484::double precision)))\"\n> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560 width=28) (actual time=0.010..2076.932 rows=20554560 loops=1)\"\n> \" -> Hash (cost=1.28..1.28 rows=30 width=8) (actual time=0.041..0.041 rows=30 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n> \" -> HashAggregate (cost=0.98..1.28 rows=30 width=8) (actual time=0.034..0.036 rows=30 loops=1)\"\n> \" -> Limit (cost=0.00..0.60 rows=30 width=0) (actual time=0.016..0.026 rows=30 loops=1)\"\n> \" -> Function Scan on generate_series s (cost=0.00..20.00 rows=1000 width=0) (actual time=0.015..0.023 rows=30 loops=1)\"\n> \"Total runtime: 4895.239 ms\"\n>\n>\n> ----------------------------------------\n>> Date: Wed, 14 May 2014 22:43:24 -0700\n>> From: [email protected]\n>> To: [email protected]\n>> Subject: Re: [PERFORM] how do functions affect query plan?\n>>\n>> 常超 wrote\n>>> Hi,all\n>>> I have a table to save received measure data.\n>>>\n>>>\n>>> CREATE TABLE 
measure_data\n>>> (\n>>> id serial NOT NULL,\n>>> telegram_id integer NOT NULL,\n>>> measure_time timestamp without time zone NOT NULL,\n>>> item_id integer NOT NULL,\n>>> val double precision,\n>>> CONSTRAINT measure_data_pkey PRIMARY KEY (id)\n>>> );\n>>>\n>>> CREATE INDEX index_measure_data_telegram_id ON measure_data USING btree\n>>> (telegram_id);\n>>>\n>>>\n>>> in my scenario,a telegram contains measure data for multiple data items\n>>> and timestamps,\n>>> BTW,another table is for telegram.\n>>>\n>>> The SQL I used in my application is\n>>> select * from measure_data where telegram_id in(1,2,...,n)\n>>> and this query used the index_measure_data_telegram_id index,as expected.\n>>>\n>>> In order to see the performance of my query ,\n>>> I used the following query to search the measure data for randomly 30\n>>> telegrams.\n>>>\n>>>\n>>> explain analyze\n>>> SELECT md.*\n>>> FROM measure_data md\n>>> where telegram_id in\n>>> (\n>>> SELECT distinct\n>>> trunc((132363-66484) * random() + 66484)\n>>> FROM generate_series(1,30) as s(telegram_id)\n>>> )\n>>> ;\n>>>\n>>> the 132363 and 66484 are the max and min of the telegram id,separately.\n>>>\n>>> What surprised me is that index is not used,instead,a seq scan is\n>>> performed on measure_data.\n>>> Although,intuitively,in this case,it is much wiser to use the index.\n>>> Would you please give some clue to why this happened?\n>>>\n>>> \"Hash Semi Join (cost=65.00..539169.32 rows=10277280 width=28) (actual\n>>> time=76.454..17177.054 rows=9360 loops=1)\"\n>>> \" Hash Cond: ((md.telegram_id)::double precision = (trunc(((65879::double\n>>> precision * random()) + 66484::double precision))))\"\n>>> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560\n>>> width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\"\n>>> \" -> Hash (cost=52.50..52.50 rows=1000 width=8) (actual\n>>> time=0.062..0.062 rows=30 loops=1)\"\n>>> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n>>> \" -> HashAggregate 
(cost=22.50..42.50 rows=1000 width=0) (actual\n>>> time=0.048..0.053 rows=30 loops=1)\"\n>>> \" -> Function Scan on generate_series s (cost=0.00..20.00\n>>> rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\"\n>>> \"Total runtime: 17177.527 ms\"\n>>\n>> The planner expects to need to return half the table when you provide 1,000\n>> distinct telegram_ids, which is best handled by scanning the whole table\n>> sequentially and tossing out invalid data.\n>>\n>> I am curious if the plan will be different if you added a LIMIT 30 to the\n>> sub-query.\n>>\n>> The root of the problem is the planner has no way of knowing whether\n>> generate_series is going to return 1 or 1,000,000 rows so by default it (and\n>> all functions) are assumed (by the planner) to return 1,000 rows. By adding\n>> an explicit limit you can better inform the planner as to how many rows you\n>> are going to be passing up to the parent query and it will hopefully, with\n>> knowledge of only 30 distinct values, use the index.\n>>\n>>\n>>\n>>\n>> --\n>> View this message in context: http://postgresql.1045698.n5.nabble.com/how-do-functions-affect-query-plan-tp5803993p5803996.html\n>> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n \t\t \t \t\t \n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 16:59:30 +0900",
"msg_from": "changchao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how do functions affect query plan?"
},
{
"msg_contents": "hi\n\ni think the telegram_id's type should be integer.\n\nplease change telegram_id to numeric and try to run the the following sql.\nthe index should be used.\n\nexplain SELECT md.*\n FROM measure_data md\n where telegram_id in (trunc(66484.2),trunc(132362.1 ))\n\n\n2014-05-15 17:28 GMT+09:00 changchao <[email protected]>:\n\n>\n>\n> ----------------------------------------\n> > From: [email protected]\n> > To: [email protected]\n> > Subject: Re: [PERFORM] how do functions affect query plan?\n> > Date: Thu, 15 May 2014 16:59:30 +0900\n> >\n> >\n> >\n> > Interestingly,adding type cast made postgresql wiser.\n> > Anyone knows the reason?\n> >\n> > 1.no type cast\n> > SELECT md.*\n> > FROM measure_data md\n> > where telegram_id in (trunc(66484.2),trunc(132362.1 ))\n> >\n> >\n> > \"Seq Scan on measure_data md (cost=0.00..459455.40 rows=205546\n> width=28) (actual time=77.144..6458.870 rows=624 loops=1)\"\n> > \" Filter: ((telegram_id)::numeric = ANY ('{66484,132362}'::numeric[]))\"\n> > \" Rows Removed by Filter: 20553936\"\n> > \"Total runtime: 6458.921 ms\"\n> >\n> >\n> > 2.type cast\n> >\n> > SELECT md.*\n> > FROM measure_data md\n> > where telegram_id in (trunc(66484.2)::int,trunc(132362.1 )::int)\n> >\n> > \"Bitmap Heap Scan on measure_data md (cost=16.06..2618.86 rows=684\n> width=28) (actual time=0.076..0.154 rows=624 loops=1)\"\n> > \" Recheck Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\"\n> > \" -> Bitmap Index Scan on index_measure_data_telegram_id\n> (cost=0.00..15.88 rows=684 width=0) (actual time=0.065..0.065 rows=624\n> loops=1)\"\n> > \" Index Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\"\n> > \"Total runtime: 0.187 ms\"\n> >\n> >\n> > ----------------------------------------\n> >> From: [email protected]\n> >> To: [email protected]; [email protected]\n> >> Subject: Re: [PERFORM] how do functions affect query plan?\n> >> Date: Thu, 15 May 2014 15:19:13 +0900\n> >>\n> >> Hi,David\n> >>\n> >> Seems that the root 
of evil is in the function(random,trunc),\n> >> although I don't know why.\n> >>\n> >> Here is the comparison.\n> >>\n> >> 1.w/o function : index is wisely used.(Even without the limit 30 clause)\n> >>\n> >> explain analyze\n> >> SELECT md.*\n> >> FROM measure_data md\n> >> where telegram_id in\n> >> (\n> >> SELECT 66484 + (132363-66484)/30 * i\n> >> FROM generate_series(1,30) as s(i)\n> >> limit 30\n> >> )\n> >> ;\n> >>\n> >> \"Nested Loop (cost=10.01..39290.79 rows=10392 width=28) (actual\n> time=0.079..3.490 rows=9360 loops=1)\"\n> >> \" -> HashAggregate (cost=0.83..1.13 rows=30 width=4) (actual\n> time=0.027..0.032 rows=30 loops=1)\"\n> >> \" -> Limit (cost=0.00..0.45 rows=30 width=4) (actual time=0.013..0.020\n> rows=30 loops=1)\"\n> >> \" -> Function Scan on generate_series s (cost=0.00..15.00 rows=1000\n> width=4) (actual time=0.011..0.016 rows=30 loops=1)\"\n> >> \" -> Bitmap Heap Scan on measure_data md (cost=9.19..1306.20 rows=346\n> width=28) (actual time=0.030..0.075 rows=312 loops=30)\"\n> >> \" Recheck Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n> >> \" -> Bitmap Index Scan on index_measure_data_telegram_id\n> (cost=0.00..9.10 rows=346 width=0) (actual time=0.025..0.025 rows=312\n> loops=30)\"\n> >> \" Index Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n> >> \"Total runtime: 3.714 ms\"\n> >>\n> >>\n> >> 2.when function is there: seq scan\n> >>\n> >> explain analyze\n> >> SELECT md.*\n> >> FROM measure_data md\n> >> where telegram_id in\n> >> (\n> >> SELECT trunc((132363-66484) * random()) +66484\n> >> FROM generate_series(1,30) as s(i)\n> >> limit 30\n> >> )\n> >> ;\n> >>\n> >>\n> >> \"Hash Join (cost=1.65..490288.89 rows=10277280 width=28) (actual\n> time=0.169..4894.847 rows=9360 loops=1)\"\n> >> \" Hash Cond: ((md.telegram_id)::double precision =\n> ((trunc((65879::double precision * random())) + 66484::double precision)))\"\n> >> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560\n> width=28) (actual 
time=0.010..2076.932 rows=20554560 loops=1)\"\n> >> \" -> Hash (cost=1.28..1.28 rows=30 width=8) (actual time=0.041..0.041\n> rows=30 loops=1)\"\n> >> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n> >> \" -> HashAggregate (cost=0.98..1.28 rows=30 width=8) (actual\n> time=0.034..0.036 rows=30 loops=1)\"\n> >> \" -> Limit (cost=0.00..0.60 rows=30 width=0) (actual time=0.016..0.026\n> rows=30 loops=1)\"\n> >> \" -> Function Scan on generate_series s (cost=0.00..20.00 rows=1000\n> width=0) (actual time=0.015..0.023 rows=30 loops=1)\"\n> >> \"Total runtime: 4895.239 ms\"\n> >>\n> >>\n> >> ----------------------------------------\n> >>> Date: Wed, 14 May 2014 22:43:24 -0700\n> >>> From: [email protected]\n> >>> To: [email protected]\n> >>> Subject: Re: [PERFORM] how do functions affect query plan?\n> >>>\n> >>> 常超 wrote\n> >>>> Hi,all\n> >>>> I have a table to save received measure data.\n> >>>>\n> >>>>\n> >>>> CREATE TABLE measure_data\n> >>>> (\n> >>>> id serial NOT NULL,\n> >>>> telegram_id integer NOT NULL,\n> >>>> measure_time timestamp without time zone NOT NULL,\n> >>>> item_id integer NOT NULL,\n> >>>> val double precision,\n> >>>> CONSTRAINT measure_data_pkey PRIMARY KEY (id)\n> >>>> );\n> >>>>\n> >>>> CREATE INDEX index_measure_data_telegram_id ON measure_data USING\n> btree\n> >>>> (telegram_id);\n> >>>>\n> >>>>\n> >>>> in my scenario,a telegram contains measure data for multiple data\n> items\n> >>>> and timestamps,\n> >>>> BTW,another table is for telegram.\n> >>>>\n> >>>> The SQL I used in my application is\n> >>>> select * from measure_data where telegram_id in(1,2,...,n)\n> >>>> and this query used the index_measure_data_telegram_id index,as\n> expected.\n> >>>>\n> >>>> In order to see the performance of my query ,\n> >>>> I used the following query to search the measure data for randomly 30\n> >>>> telegrams.\n> >>>>\n> >>>>\n> >>>> explain analyze\n> >>>> SELECT md.*\n> >>>> FROM measure_data md\n> >>>> where telegram_id in\n> >>>> (\n> >>>> SELECT 
distinct\n> >>>> trunc((132363-66484) * random() + 66484)\n> >>>> FROM generate_series(1,30) as s(telegram_id)\n> >>>> )\n> >>>> ;\n> >>>>\n> >>>> the 132363 and 66484 are the max and min of the telegram\n> id,separately.\n> >>>>\n> >>>> What surprised me is that index is not used,instead,a seq scan is\n> >>>> performed on measure_data.\n> >>>> Although,intuitively,in this case,it is much wiser to use the index.\n> >>>> Would you please give some clue to why this happened?\n> >>>>\n> >>>> \"Hash Semi Join (cost=65.00..539169.32 rows=10277280 width=28) (actual\n> >>>> time=76.454..17177.054 rows=9360 loops=1)\"\n> >>>> \" Hash Cond: ((md.telegram_id)::double precision =\n> (trunc(((65879::double\n> >>>> precision * random()) + 66484::double precision))))\"\n> >>>> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560\n> >>>> width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\"\n> >>>> \" -> Hash (cost=52.50..52.50 rows=1000 width=8) (actual\n> >>>> time=0.062..0.062 rows=30 loops=1)\"\n> >>>> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n> >>>> \" -> HashAggregate (cost=22.50..42.50 rows=1000 width=0) (actual\n> >>>> time=0.048..0.053 rows=30 loops=1)\"\n> >>>> \" -> Function Scan on generate_series s (cost=0.00..20.00\n> >>>> rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\"\n> >>>> \"Total runtime: 17177.527 ms\"\n> >>>\n> >>> The planner expects to need to return half the table when you provide\n> 1,000\n> >>> distinct telegram_ids, which is best handled by scanning the whole\n> table\n> >>> sequentially and tossing out invalid data.\n> >>>\n> >>> I am curious if the plan will be different if you added a LIMIT 30 to\n> the\n> >>> sub-query.\n> >>>\n> >>> The root of the problem is the planner has no way of knowing whether\n> >>> generate_series is going to return 1 or 1,000,000 rows so by default\n> it (and\n> >>> all functions) are assumed (by the planner) to return 1,000 rows. 
By\n> adding\n> >>> an explicit limit you can better inform the planner as to how many\n> rows you\n> >>> are going to be passing up to the parent query and it will hopefully,\n> with\n> >>> knowledge of only 30 distinct values, use the index.\n> >>>\n> >>>\n> >>>\n> >>>\n> >>> --\n> >>> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/how-do-functions-affect-query-plan-tp5803993p5803996.html\n> >>> Sent from the PostgreSQL - performance mailing list archive at\n> Nabble.com.\n> >>>\n> >>>\n> >>> --\n> >>> Sent via pgsql-performance mailing list (\n> [email protected])\n> >>> To make changes to your subscription:\n> >>> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nhi\ni think the telegram_id's type should be integer. \nplease change telegram_id to numeric and try to run the the following sql. the index should be used. 
\nexplain SELECT md.* \n FROM measure_data md \n where telegram_id in (trunc(66484.2),trunc(132362.1 )) \n2014-05-15 17:28 GMT+09:00 changchao <[email protected]>:\n\n\n----------------------------------------\n> From: [email protected]\n> To: [email protected]\n> Subject: Re: [PERFORM] how do functions affect query plan?\n> Date: Thu, 15 May 2014 16:59:30 +0900\n>\n>\n>\n> Interestingly,adding type cast made postgresql wiser.\n> Anyone knows the reason?\n>\n> 1.no type cast\n> SELECT md.*\n> FROM measure_data md\n> where telegram_id in (trunc(66484.2),trunc(132362.1 ))\n>\n>\n> \"Seq Scan on measure_data md (cost=0.00..459455.40 rows=205546 width=28) (actual time=77.144..6458.870 rows=624 loops=1)\"\n> \" Filter: ((telegram_id)::numeric = ANY ('{66484,132362}'::numeric[]))\"\n> \" Rows Removed by Filter: 20553936\"\n> \"Total runtime: 6458.921 ms\"\n>\n>\n> 2.type cast\n>\n> SELECT md.*\n> FROM measure_data md\n> where telegram_id in (trunc(66484.2)::int,trunc(132362.1 )::int)\n>\n> \"Bitmap Heap Scan on measure_data md (cost=16.06..2618.86 rows=684 width=28) (actual time=0.076..0.154 rows=624 loops=1)\"\n> \" Recheck Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\"\n> \" -> Bitmap Index Scan on index_measure_data_telegram_id (cost=0.00..15.88 rows=684 width=0) (actual time=0.065..0.065 rows=624 loops=1)\"\n> \" Index Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\"\n> \"Total runtime: 0.187 ms\"\n>\n>\n> ----------------------------------------\n>> From: [email protected]\n>> To: [email protected]; [email protected]\n>> Subject: Re: [PERFORM] how do functions affect query plan?\n>> Date: Thu, 15 May 2014 15:19:13 +0900\n>>\n>> Hi,David\n>>\n>> Seems that the root of evil is in the function(random,trunc),\n>> although I don't know why.\n>>\n>> Here is the comparison.\n>>\n>> 1.w/o function : index is wisely used.(Even without the limit 30 clause)\n>>\n>> explain analyze\n>> SELECT md.*\n>> FROM measure_data md\n>> where telegram_id in\n>> (\n>> 
SELECT 66484 + (132363-66484)/30 * i\n>> FROM generate_series(1,30) as s(i)\n>> limit 30\n>> )\n>> ;\n>>\n>> \"Nested Loop (cost=10.01..39290.79 rows=10392 width=28) (actual time=0.079..3.490 rows=9360 loops=1)\"\n>> \" -> HashAggregate (cost=0.83..1.13 rows=30 width=4) (actual time=0.027..0.032 rows=30 loops=1)\"\n>> \" -> Limit (cost=0.00..0.45 rows=30 width=4) (actual time=0.013..0.020 rows=30 loops=1)\"\n>> \" -> Function Scan on generate_series s (cost=0.00..15.00 rows=1000 width=4) (actual time=0.011..0.016 rows=30 loops=1)\"\n>> \" -> Bitmap Heap Scan on measure_data md (cost=9.19..1306.20 rows=346 width=28) (actual time=0.030..0.075 rows=312 loops=30)\"\n>> \" Recheck Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n>> \" -> Bitmap Index Scan on index_measure_data_telegram_id (cost=0.00..9.10 rows=346 width=0) (actual time=0.025..0.025 rows=312 loops=30)\"\n>> \" Index Cond: (telegram_id = ((66484 + (2195 * s.i))))\"\n>> \"Total runtime: 3.714 ms\"\n>>\n>>\n>> 2.when function is there: seq scan\n>>\n>> explain analyze\n>> SELECT md.*\n>> FROM measure_data md\n>> where telegram_id in\n>> (\n>> SELECT trunc((132363-66484) * random()) +66484\n>> FROM generate_series(1,30) as s(i)\n>> limit 30\n>> )\n>> ;\n>>\n>>\n>> \"Hash Join (cost=1.65..490288.89 rows=10277280 width=28) (actual time=0.169..4894.847 rows=9360 loops=1)\"\n>> \" Hash Cond: ((md.telegram_id)::double precision = ((trunc((65879::double precision * random())) + 66484::double precision)))\"\n>> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560 width=28) (actual time=0.010..2076.932 rows=20554560 loops=1)\"\n>> \" -> Hash (cost=1.28..1.28 rows=30 width=8) (actual time=0.041..0.041 rows=30 loops=1)\"\n>> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n>> \" -> HashAggregate (cost=0.98..1.28 rows=30 width=8) (actual time=0.034..0.036 rows=30 loops=1)\"\n>> \" -> Limit (cost=0.00..0.60 rows=30 width=0) (actual time=0.016..0.026 rows=30 loops=1)\"\n>> \" -> Function Scan on 
generate_series s (cost=0.00..20.00 rows=1000 width=0) (actual time=0.015..0.023 rows=30 loops=1)\"\n>> \"Total runtime: 4895.239 ms\"\n>>\n>>\n>> ----------------------------------------\n>>> Date: Wed, 14 May 2014 22:43:24 -0700\n>>> From: [email protected]\n>>> To: [email protected]\n>>> Subject: Re: [PERFORM] how do functions affect query plan?\n>>>\n>>> 常超 wrote\n>>>> Hi,all\n>>>> I have a table to save received measure data.\n>>>>\n>>>>\n>>>> CREATE TABLE measure_data\n>>>> (\n>>>> id serial NOT NULL,\n>>>> telegram_id integer NOT NULL,\n>>>> measure_time timestamp without time zone NOT NULL,\n>>>> item_id integer NOT NULL,\n>>>> val double precision,\n>>>> CONSTRAINT measure_data_pkey PRIMARY KEY (id)\n>>>> );\n>>>>\n>>>> CREATE INDEX index_measure_data_telegram_id ON measure_data USING btree\n>>>> (telegram_id);\n>>>>\n>>>>\n>>>> in my scenario,a telegram contains measure data for multiple data items\n>>>> and timestamps,\n>>>> BTW,another table is for telegram.\n>>>>\n>>>> The SQL I used in my application is\n>>>> select * from measure_data where telegram_id in(1,2,...,n)\n>>>> and this query used the index_measure_data_telegram_id index,as expected.\n>>>>\n>>>> In order to see the performance of my query ,\n>>>> I used the following query to search the measure data for randomly 30\n>>>> telegrams.\n>>>>\n>>>>\n>>>> explain analyze\n>>>> SELECT md.*\n>>>> FROM measure_data md\n>>>> where telegram_id in\n>>>> (\n>>>> SELECT distinct\n>>>> trunc((132363-66484) * random() + 66484)\n>>>> FROM generate_series(1,30) as s(telegram_id)\n>>>> )\n>>>> ;\n>>>>\n>>>> the 132363 and 66484 are the max and min of the telegram id,separately.\n>>>>\n>>>> What surprised me is that index is not used,instead,a seq scan is\n>>>> performed on measure_data.\n>>>> Although,intuitively,in this case,it is much wiser to use the index.\n>>>> Would you please give some clue to why this happened?\n>>>>\n>>>> \"Hash Semi Join (cost=65.00..539169.32 rows=10277280 width=28) (actual\n>>>> 
time=76.454..17177.054 rows=9360 loops=1)\"\n>>>> \" Hash Cond: ((md.telegram_id)::double precision = (trunc(((65879::double\n>>>> precision * random()) + 66484::double precision))))\"\n>>>> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560\n>>>> width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\"\n>>>> \" -> Hash (cost=52.50..52.50 rows=1000 width=8) (actual\n>>>> time=0.062..0.062 rows=30 loops=1)\"\n>>>> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n>>>> \" -> HashAggregate (cost=22.50..42.50 rows=1000 width=0) (actual\n>>>> time=0.048..0.053 rows=30 loops=1)\"\n>>>> \" -> Function Scan on generate_series s (cost=0.00..20.00\n>>>> rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\"\n>>>> \"Total runtime: 17177.527 ms\"\n>>>\n>>> The planner expects to need to return half the table when you provide 1,000\n>>> distinct telegram_ids, which is best handled by scanning the whole table\n>>> sequentially and tossing out invalid data.\n>>>\n>>> I am curious if the plan will be different if you added a LIMIT 30 to the\n>>> sub-query.\n>>>\n>>> The root of the problem is the planner has no way of knowing whether\n>>> generate_series is going to return 1 or 1,000,000 rows so by default it (and\n>>> all functions) are assumed (by the planner) to return 1,000 rows. 
By adding\n>>> an explicit limit you can better inform the planner as to how many rows you\n>>> are going to be passing up to the parent query and it will hopefully, with\n>>> knowledge of only 30 distinct values, use the index.\n>>>\n>>>\n>>>\n>>>\n>>> --\n>>> View this message in context: http://postgresql.1045698.n5.nabble.com/how-do-functions-affect-query-plan-tp5803993p5803996.html\n\n>>> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 15 May 2014 17:31:10 +0900",
"msg_from": "=?UTF-8?B?5qWK5paw5rOi?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: how do functions affect query plan?"
},
{
"msg_contents": "\nYour answer seemed to get the point.\n\nindex on telegram_id(type=integer) column can't be used for the filter condition below\nbecause type mismatches.\n\n ((telegram_id)::numeric = ANY ('{66484,132362}'::numeric[]))\" \n\n________________________________\n> Date: Thu, 15 May 2014 17:31:10 +0900 \n> Subject: Re: FW: [PERFORM] how do functions affect query plan? \n> From: [email protected] \n> To: [email protected] \n> \n> hi \n> \n> i think the telegram_id's type should be integer. \n> \n> please change telegram_id to numeric and try to run the the following \n> sql. the index should be used. \n> \n> explain SELECT md.* \n> FROM measure_data md \n> where telegram_id in (trunc(66484.2),trunc(132362.1 )) \n> \n> \n> 2014-05-15 17:28 GMT+09:00 changchao \n> <[email protected]<mailto:[email protected]>>: \n> \n> \n> ---------------------------------------- \n> > From: [email protected]<mailto:[email protected]> \n> > To: \n> [email protected]<mailto:[email protected]> \n> > Subject: Re: [PERFORM] how do functions affect query plan? \n> > Date: Thu, 15 May 2014 16:59:30 +0900 \n> > \n> > \n> > \n> > Interestingly,adding type cast made postgresql wiser. \n> > Anyone knows the reason? 
\n> > \n> > 1.no<http://1.no> type cast \n> > SELECT md.* \n> > FROM measure_data md \n> > where telegram_id in (trunc(66484.2),trunc(132362.1 )) \n> > \n> > \n> > \"Seq Scan on measure_data md (cost=0.00..459455.40 rows=205546 \n> width=28) (actual time=77.144..6458.870 rows=624 loops=1)\" \n> > \" Filter: ((telegram_id)::numeric = ANY ('{66484,132362}'::numeric[]))\" \n> > \" Rows Removed by Filter: 20553936\" \n> > \"Total runtime: 6458.921 ms\" \n> > \n> > \n> > 2.type cast \n> > \n> > SELECT md.* \n> > FROM measure_data md \n> > where telegram_id in (trunc(66484.2)::int,trunc(132362.1 )::int) \n> > \n> > \"Bitmap Heap Scan on measure_data md (cost=16.06..2618.86 rows=684 \n> width=28) (actual time=0.076..0.154 rows=624 loops=1)\" \n> > \" Recheck Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\" \n> > \" -> Bitmap Index Scan on index_measure_data_telegram_id \n> (cost=0.00..15.88 rows=684 width=0) (actual time=0.065..0.065 rows=624 \n> loops=1)\" \n> > \" Index Cond: (telegram_id = ANY ('{66484,132362}'::integer[]))\" \n> > \"Total runtime: 0.187 ms\" \n> > \n> > \n> > ---------------------------------------- \n> >> From: [email protected]<mailto:[email protected]> \n> >> To: [email protected]<mailto:[email protected]>; \n> [email protected]<mailto:[email protected]> \n> >> Subject: Re: [PERFORM] how do functions affect query plan? \n> >> Date: Thu, 15 May 2014 15:19:13 +0900 \n> >> \n> >> Hi,David \n> >> \n> >> Seems that the root of evil is in the function(random,trunc), \n> >> although I don't know why. \n> >> \n> >> Here is the comparison. 
\n> >> \n> >> 1.w/o function : index is wisely used.(Even without the limit 30 clause) \n> >> \n> >> explain analyze \n> >> SELECT md.* \n> >> FROM measure_data md \n> >> where telegram_id in \n> >> ( \n> >> SELECT 66484 + (132363-66484)/30 * i \n> >> FROM generate_series(1,30) as s(i) \n> >> limit 30 \n> >> ) \n> >> ; \n> >> \n> >> \"Nested Loop (cost=10.01..39290.79 rows=10392 width=28) (actual \n> time=0.079..3.490 rows=9360 loops=1)\" \n> >> \" -> HashAggregate (cost=0.83..1.13 rows=30 width=4) (actual \n> time=0.027..0.032 rows=30 loops=1)\" \n> >> \" -> Limit (cost=0.00..0.45 rows=30 width=4) (actual \n> time=0.013..0.020 rows=30 loops=1)\" \n> >> \" -> Function Scan on generate_series s (cost=0.00..15.00 rows=1000 \n> width=4) (actual time=0.011..0.016 rows=30 loops=1)\" \n> >> \" -> Bitmap Heap Scan on measure_data md (cost=9.19..1306.20 \n> rows=346 width=28) (actual time=0.030..0.075 rows=312 loops=30)\" \n> >> \" Recheck Cond: (telegram_id = ((66484 + (2195 * s.i))))\" \n> >> \" -> Bitmap Index Scan on index_measure_data_telegram_id \n> (cost=0.00..9.10 rows=346 width=0) (actual time=0.025..0.025 rows=312 \n> loops=30)\" \n> >> \" Index Cond: (telegram_id = ((66484 + (2195 * s.i))))\" \n> >> \"Total runtime: 3.714 ms\" \n> >> \n> >> \n> >> 2.when function is there: seq scan \n> >> \n> >> explain analyze \n> >> SELECT md.* \n> >> FROM measure_data md \n> >> where telegram_id in \n> >> ( \n> >> SELECT trunc((132363-66484) * random()) +66484 \n> >> FROM generate_series(1,30) as s(i) \n> >> limit 30 \n> >> ) \n> >> ; \n> >> \n> >> \n> >> \"Hash Join (cost=1.65..490288.89 rows=10277280 width=28) (actual \n> time=0.169..4894.847 rows=9360 loops=1)\" \n> >> \" Hash Cond: ((md.telegram_id)::double precision = \n> ((trunc((65879::double precision * random())) + 66484::double \n> precision)))\" \n> >> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560 \n> width=28) (actual time=0.010..2076.932 rows=20554560 loops=1)\" \n> >> \" -> Hash 
(cost=1.28..1.28 rows=30 width=8) (actual \n> time=0.041..0.041 rows=30 loops=1)\" \n> >> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\" \n> >> \" -> HashAggregate (cost=0.98..1.28 rows=30 width=8) (actual \n> time=0.034..0.036 rows=30 loops=1)\" \n> >> \" -> Limit (cost=0.00..0.60 rows=30 width=0) (actual \n> time=0.016..0.026 rows=30 loops=1)\" \n> >> \" -> Function Scan on generate_series s (cost=0.00..20.00 rows=1000 \n> width=0) (actual time=0.015..0.023 rows=30 loops=1)\" \n> >> \"Total runtime: 4895.239 ms\" \n> >> \n> >> \n> >> ---------------------------------------- \n> >>> Date: Wed, 14 May 2014 22:43:24 -0700 \n> >>> From: [email protected]<mailto:[email protected]> \n> >>> To: \n> [email protected]<mailto:[email protected]> \n> >>> Subject: Re: [PERFORM] how do functions affect query plan? \n> >>> \n> >>> 常超 wrote \n> >>>> Hi,all \n> >>>> I have a table to save received measure data. \n> >>>> \n> >>>> \n> >>>> CREATE TABLE measure_data \n> >>>> ( \n> >>>> id serial NOT NULL, \n> >>>> telegram_id integer NOT NULL, \n> >>>> measure_time timestamp without time zone NOT NULL, \n> >>>> item_id integer NOT NULL, \n> >>>> val double precision, \n> >>>> CONSTRAINT measure_data_pkey PRIMARY KEY (id) \n> >>>> ); \n> >>>> \n> >>>> CREATE INDEX index_measure_data_telegram_id ON measure_data USING btree \n> >>>> (telegram_id); \n> >>>> \n> >>>> \n> >>>> in my scenario,a telegram contains measure data for multiple data items \n> >>>> and timestamps, \n> >>>> BTW,another table is for telegram. \n> >>>> \n> >>>> The SQL I used in my application is \n> >>>> select * from measure_data where telegram_id in(1,2,...,n) \n> >>>> and this query used the index_measure_data_telegram_id index,as \n> expected. \n> >>>> \n> >>>> In order to see the performance of my query , \n> >>>> I used the following query to search the measure data for randomly 30 \n> >>>> telegrams. 
\n> >>>> \n> >>>> \n> >>>> explain analyze \n> >>>> SELECT md.* \n> >>>> FROM measure_data md \n> >>>> where telegram_id in \n> >>>> ( \n> >>>> SELECT distinct \n> >>>> trunc((132363-66484) * random() + 66484) \n> >>>> FROM generate_series(1,30) as s(telegram_id) \n> >>>> ) \n> >>>> ; \n> >>>> \n> >>>> the 132363 and 66484 are the max and min of the telegram id,separately. \n> >>>> \n> >>>> What surprised me is that index is not used,instead,a seq scan is \n> >>>> performed on measure_data. \n> >>>> Although,intuitively,in this case,it is much wiser to use the index. \n> >>>> Would you please give some clue to why this happened? \n> >>>> \n> >>>> \"Hash Semi Join (cost=65.00..539169.32 rows=10277280 width=28) (actual \n> >>>> time=76.454..17177.054 rows=9360 loops=1)\" \n> >>>> \" Hash Cond: ((md.telegram_id)::double precision = \n> (trunc(((65879::double \n> >>>> precision * random()) + 66484::double precision))))\" \n> >>>> \" -> Seq Scan on measure_data md (cost=0.00..356682.60 rows=20554560 \n> >>>> width=28) (actual time=0.012..13874.809 rows=20554560 loops=1)\" \n> >>>> \" -> Hash (cost=52.50..52.50 rows=1000 width=8) (actual \n> >>>> time=0.062..0.062 rows=30 loops=1)\" \n> >>>> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\" \n> >>>> \" -> HashAggregate (cost=22.50..42.50 rows=1000 width=0) (actual \n> >>>> time=0.048..0.053 rows=30 loops=1)\" \n> >>>> \" -> Function Scan on generate_series s (cost=0.00..20.00 \n> >>>> rows=1000 width=0) (actual time=0.020..0.034 rows=30 loops=1)\" \n> >>>> \"Total runtime: 17177.527 ms\" \n> >>> \n> >>> The planner expects to need to return half the table when you \n> provide 1,000 \n> >>> distinct telegram_ids, which is best handled by scanning the whole table \n> >>> sequentially and tossing out invalid data. \n> >>> \n> >>> I am curious if the plan will be different if you added a LIMIT 30 to the \n> >>> sub-query. 
\n> >>> \n> >>> The root of the problem is the planner has no way of knowing whether \n> >>> generate_series is going to return 1 or 1,000,000 rows so by \n> default it (and \n> >>> all functions) are assumed (by the planner) to return 1,000 rows. \n> By adding \n> >>> an explicit limit you can better inform the planner as to how many \n> rows you \n> >>> are going to be passing up to the parent query and it will \n> hopefully, with \n> >>> knowledge of only 30 distinct values, use the index. \n> >>> \n> >>> \n> >>> \n> >>> \n> >>> -- \n> >>> View this message in context: \n> http://postgresql.1045698.n5.nabble.com/how-do-functions-affect-query-plan-tp5803993p5803996.html \n> >>> Sent from the PostgreSQL - performance mailing list archive at \n> Nabble.com. \n> >>> \n> >>> \n> >>> -- \n> >>> Sent via pgsql-performance mailing list \n> ([email protected]<mailto:[email protected]>) \n> >>> To make changes to your subscription: \n> >>> http://www.postgresql.org/mailpref/pgsql-performance \n> >> \n> >> -- \n> >> Sent via pgsql-performance mailing list \n> ([email protected]<mailto:[email protected]>) \n> >> To make changes to your subscription: \n> >> http://www.postgresql.org/mailpref/pgsql-performance \n> > \n> > -- \n> > Sent via pgsql-performance mailing list \n> ([email protected]<mailto:[email protected]>) \n> > To make changes to your subscription: \n> > http://www.postgresql.org/mailpref/pgsql-performance \n> \n> \n \t\t \t \t\t \n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 17:44:39 +0900",
"msg_from": "changchao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how do functions affect query plan?"
}
] |
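The fix changchao landed on above — casting the literals back to `int` so the comparison stays in the indexed column's own type — can be sketched by analogy. The following is a hypothetical illustration using SQLite's `EXPLAIN QUERY PLAN` (SQLite's planner is not PostgreSQL's, and the explicit `CAST` on the column here merely stands in for the implicit `::numeric` cast PostgreSQL applied to `telegram_id`; the table and index names are reused from the thread):

```python
import sqlite3

# Build a small stand-in for the thread's table, with the same index name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measure_data (id INTEGER PRIMARY KEY, telegram_id INTEGER NOT NULL)")
conn.execute("CREATE INDEX index_measure_data_telegram_id ON measure_data (telegram_id)")
conn.executemany("INSERT INTO measure_data (telegram_id) VALUES (?)",
                 [(i % 100,) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail" column.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Integer literals match the column type: the index qualifies.
indexed = plan("SELECT * FROM measure_data WHERE telegram_id IN (5, 7)")

# Wrapping the *column* in an expression (the analogue of PostgreSQL's
# implicit (telegram_id)::numeric cast) defeats the index.
scanned = plan("SELECT * FROM measure_data WHERE CAST(telegram_id AS REAL) IN (5.0, 7.0)")

print(indexed)  # mentions index_measure_data_telegram_id
print(scanned)  # falls back to a full table scan
```

In both systems the principle is the same: a b-tree index on an integer column only qualifies while the predicate compares that column, untransformed, against values of a compatible type.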
[
{
"msg_contents": "OK so we have a query that does OK in 8.4, goes to absolute crap in\n9.2 and then works great in 9.3. Thing is we've spent several months\nregression testing 9.2 and no time testing 9.3, so we can't just \"go\nto 9.3\" in an afternoon. But we might have to. 9.2 seems hopelessly\nbroken here.\n\nThe query looks something like this:\n\nSELECT COUNT(*) FROM u, ug\nWHERE u.ugid = ug.id\nAND NOT u.d\nAND ug.somefield IN (SELECT somefunction(12345));\n\nIn 8.4 we get this plan http://explain.depesz.com/s/r3hF which takes ~5ms\nIn 9.2 we get this plan http://explain.depesz.com/s/vM7 which takes ~10s\nIn 9.3 we get this plan http://explain.depesz.com/s/Wub which takes ~0.35ms\n\nThe data sets are identical, the schemas are identical. Making changes\nto random_page_cost, sequential_page_cost and various other tuning\nparameters don't make it any better.\n\nPG versions: 8.4.20, 9.2.8, 9.3.4\n\nAdding a limit to the function DOES make 9.2 better, ala:\n\nSELECT COUNT(*) FROM u, ug\nWHERE u.ugid = ug.id\nAND NOT u.d\nAND ug.somefield IN (SELECT somefunction(12345) limit 199);\n\nIf the limit is 200 the bad plan shows up again.\n\nQuestion, is this a known issue with 9.2? If so is it something that\nwill one day be fixed or are we stuck with it? Is there a workaround\nto make it better? Note: I'd rather not have to compile 9.2 from\nsource with a patch, but at this point that would be acceptable over\n\"you're stuck with it\".\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 10:35:11 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan good in 8.4, bad in 9.2 and better in 9.3"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> OK so we have a query that does OK in 8.4, goes to absolute crap in\n> 9.2 and then works great in 9.3. Thing is we've spent several months\n> regression testing 9.2 and no time testing 9.3, so we can't just \"go\n> to 9.3\" in an afternoon. But we might have to. 9.2 seems hopelessly\n> broken here.\n\n> The query looks something like this:\n\n> SELECT COUNT(*) FROM u, ug\n> WHERE u.ugid = ug.id\n> AND NOT u.d\n> AND ug.somefield IN (SELECT somefunction(12345));\n\nYou really should show us somefunction's definition if you want\nuseful comments. I gather however that it returns a set. 8.4\nseems to be planning on the assumption that the set contains\nonly one row, which is completely unjustified in general though\nit happens to be true in your example. 9.2 is assuming 1000 rows\nin the set, and getting a sucky plan because that's wrong. 9.3\nis still assuming that; and I rather doubt that you are really\ntesting 9.3 on the same data, because 9.2 is finding millions of\nrows in a seqscan of u while 9.3 is finding none in the exact\nsame seqscan.\n\nI'd suggest affixing a ROWS estimate to somefunction, or better\ndeclaring it to return singleton not set if that's actually\nalways the case.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 12:52:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan good in 8.4, bad in 9.2 and better in 9.3"
},
{
"msg_contents": "On Thu, May 15, 2014 at 9:35 AM, Scott Marlowe <[email protected]>wrote:\n\n> OK so we have a query that does OK in 8.4, goes to absolute crap in\n> 9.2 and then works great in 9.3. Thing is we've spent several months\n> regression testing 9.2 and no time testing 9.3, so we can't just \"go\n> to 9.3\" in an afternoon. But we might have to. 9.2 seems hopelessly\n> broken here.\n>\n> The query looks something like this:\n>\n> SELECT COUNT(*) FROM u, ug\n> WHERE u.ugid = ug.id\n> AND NOT u.d\n> AND ug.somefield IN (SELECT somefunction(12345));\n>\n> In 8.4 we get this plan http://explain.depesz.com/s/r3hF which takes ~5ms\n> In 9.2 we get this plan http://explain.depesz.com/s/vM7 which takes ~10s\n> In 9.3 we get this plan http://explain.depesz.com/s/Wub which takes\n> ~0.35ms\n>\n\nBased on the actual row counts given in the seq scan on u, , in 9.2, u\ncontains millions of rows. In 9.3, it contains zero rows.\n\n\n\n>\n> The data sets are identical, the schemas are identical.\n\n\nPlease double check that.\n\n\nCheers,\n\nJeff\n\nOn Thu, May 15, 2014 at 9:35 AM, Scott Marlowe <[email protected]> wrote:\nOK so we have a query that does OK in 8.4, goes to absolute crap in\n9.2 and then works great in 9.3. Thing is we've spent several months\nregression testing 9.2 and no time testing 9.3, so we can't just \"go\nto 9.3\" in an afternoon. But we might have to. 9.2 seems hopelessly\nbroken here.\n\nThe query looks something like this:\n\nSELECT COUNT(*) FROM u, ug\nWHERE u.ugid = ug.id\nAND NOT u.d\nAND ug.somefield IN (SELECT somefunction(12345));\n\nIn 8.4 we get this plan http://explain.depesz.com/s/r3hF which takes ~5ms\nIn 9.2 we get this plan http://explain.depesz.com/s/vM7 which takes ~10s\nIn 9.3 we get this plan http://explain.depesz.com/s/Wub which takes ~0.35msBased on the actual row counts given in the seq scan on u, , in 9.2, u contains millions of rows. 
In 9.3, it contains zero rows.\n \n\nThe data sets are identical, the schemas are identical. Please double check that. Cheers,Jeff",
"msg_date": "Thu, 15 May 2014 09:54:27 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan good in 8.4, bad in 9.2 and better in 9.3"
},
{
"msg_contents": "On Thu, May 15, 2014 at 10:52 AM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> OK so we have a query that does OK in 8.4, goes to absolute crap in\n>> 9.2 and then works great in 9.3. Thing is we've spent several months\n>> regression testing 9.2 and no time testing 9.3, so we can't just \"go\n>> to 9.3\" in an afternoon. But we might have to. 9.2 seems hopelessly\n>> broken here.\n>\n>> The query looks something like this:\n>\n>> SELECT COUNT(*) FROM u, ug\n>> WHERE u.ugid = ug.id\n>> AND NOT u.d\n>> AND ug.somefield IN (SELECT somefunction(12345));\n>\n> You really should show us somefunction's definition if you want\n> useful comments. I gather however that it returns a set. 8.4\n> seems to be planning on the assumption that the set contains\n> only one row, which is completely unjustified in general though\n> it happens to be true in your example. 9.2 is assuming 1000 rows\n> in the set, and getting a sucky plan because that's wrong. 9.3\n> is still assuming that; and I rather doubt that you are really\n> testing 9.3 on the same data, because 9.2 is finding millions of\n> rows in a seqscan of u while 9.3 is finding none in the exact\n> same seqscan.\n>\n> I'd suggest affixing a ROWS estimate to somefunction, or better\n> declaring it to return singleton not set if that's actually\n> always the case.\n\nWell great, now I look like an idiot. Last time I trust someone else\nto set up my test servers.\n\nAnyway, yeah, affixing a rows estimate fixes this for us 100%. So thanks!\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 19 May 2014 08:50:30 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query plan good in 8.4, bad in 9.2 and better in 9.3"
}
] |
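Tom Lane's diagnosis above boils down to arithmetic: with the planner's default assumption of 1000 rows from a set-returning function, repeated index probes look more expensive than one sequential scan, while a `ROWS 1` declaration flips the comparison. The numbers below are invented toy costs, not PostgreSQL's actual cost model:

```python
# Toy illustration of why "affix a ROWS estimate to somefunction" fixes the
# 9.2 plan. The two constants are made up; only their ratio matters.
SEQ_SCAN_COST = 30_000.0   # hypothetical cost of scanning all of "u" once
INDEX_PROBE_COST = 50.0    # hypothetical cost of one index lookup into "u"

def chosen_plan(estimated_set_rows):
    # Pick whichever strategy the (toy) cost model says is cheaper.
    probe_total = estimated_set_rows * INDEX_PROBE_COST
    return "index probes" if probe_total < SEQ_SCAN_COST else "seq scan"

# Default: a set-returning function is assumed to yield 1000 rows.
print(chosen_plan(1000))  # "seq scan" -- the bad plan for a 1-row set
# After CREATE FUNCTION ... ROWS 1, the estimate matches reality.
print(chosen_plan(1))     # "index probes" -- the fast plan
```

This is why the same query was fine once the function carried an accurate rows estimate: the cheaper-looking strategy and the actually-cheaper strategy finally coincide.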
[
{
"msg_contents": "Hi buddies,\n\nI've got a query as below, it runs several times with different execution plan and totally different execution time. The one using hash-join is slow and the one using semi-hash join is very fast. However, I have no control over the optimizer behavior of PostgreSQL database. Or, do I have?\n\nThe database version is 9.3.4\n\nSELECT dem_type,\n dem_value,\n Count(*)\nFROM demo_weekly a\nWHERE date = '2013-11-30'\nAND userid IN ( select userid from test1)\n AND dem_type IN ( 'Gender', 'Age', 'Hobbies' )\nGROUP BY dem_type,\n dem_value ;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=322386.94..322786.94 rows=40000 width=29) (actual time=3142.849..3142.927 rows=19 loops=1)\n -> Hash Semi Join (cost=14460.06..314403.08 rows=1064514 width=29) (actual time=803.671..2786.979 rows=1199961 loops=1)\n Hash Cond: ((a.userid)::text = (test1.userid)::text)\n -> Append (cost=0.00..277721.30 rows=2129027 width=78) (actual time=536.829..1691.270 rows=2102611 loops=1)\n -> Seq Scan on demo_weekly a (cost=0.00..0.00 rows=1 width=808) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((date = '2013-11-30'::date) AND ((dem_type)::text = ANY ('{Gender,Age,\"Hobbies\"}'::text[])))\n -> Bitmap Heap Scan on demo_weekly_20131130 a_1 (cost=50045.63..277721.30 rows=2129026 width=78) (actual time=536.826..1552.203 rows=2102611 loops=1)\n Recheck Cond: ((dem_type)::text = ANY ('{Gender,Age,\"Hobbies\"}'::text[]))\n Filter: (date = '2013-11-30'::date)\n -> Bitmap Index Scan on demo_weekly_20131130_dt_idx (cost=0.00..49513.37 rows=2129026 width=0) (actual time=467.453..467.453 rows=2102611 loops=1)\n Index Cond: ((dem_type)::text = ANY ('{Gender,Age,\"Hobbies\"}'::text[]))\n -> Hash (cost=8938.36..8938.36 rows=441736 width=50) (actual time=266.501..266.501 rows=441736 loops=1)\n Buckets: 65536 
Batches: 1 Memory Usage: 35541kB\n -> Seq Scan on test1 (cost=0.00..8938.36 rows=441736 width=50) (actual time=0.023..87.869 rows=441736 loops=1)\nTotal runtime: 3149.004 ms\n(15 rows)\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=318351.90..318751.90 rows=40000 width=29) (actual time=23668.646..23668.723 rows=19 loops=1)\n -> Hash Join (cost=5316.68..310497.81 rows=1047212 width=29) (actual time=1059.182..23218.864 rows=1199961 loops=1)\n Hash Cond: ((a.userid)::text = (test1.userid)::text)\n -> Append (cost=0.00..276382.82 rows=2094423 width=78) (actual time=528.116..2002.462 rows=2102611 loops=1)\n -> Seq Scan on demo_weekly a (cost=0.00..0.00 rows=1 width=808) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((date = '2013-11-30'::date) AND ((dem_type)::text = ANY ('{Gender,Age,\"Hobbies\"}'::text[])))\n -> Bitmap Heap Scan on demo_weekly_20131130 a_1 (cost=49269.46..276382.82 rows=2094422 width=78) (actual time=528.114..1825.265 rows=2102611 loops=1)\n Recheck Cond: ((dem_type)::text = ANY ('{Gender,Age,\"Hobbies\"}'::text[]))\n Filter: (date = '2013-11-30'::date)\n -> Bitmap Index Scan on demo_weekly_20131130_dt_idx (cost=0.00..48745.85 rows=2094422 width=0) (actual time=458.694..458.694 rows=2102611 loops=1)\n Index Cond: ((dem_type)::text = ANY ('{Gender,Age,\"Hobbies\"}'::text[]))\n -> Hash (cost=5314.18..5314.18 rows=200 width=516) (actual time=530.930..530.930 rows=441736 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 35541kB\n -> HashAggregate (cost=5312.18..5314.18 rows=200 width=516) (actual time=298.301..411.734 rows=441736 loops=1)\n -> Seq Scan on test1 (cost=0.00..5153.94 rows=63294 width=516) (actual time=0.068..91.378 rows=441736 loops=1)\nTotal runtime: 23679.096 ms\n(16 rows)",
"msg_date": "Fri, 16 May 2014 02:38:22 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "same query different execution plan (hash join vs. semi-hash join)"
},
{
"msg_contents": "\"Huang, Suya\" <[email protected]> writes:\n> I've got a query as below, it runs several times with different execution plan and totally different execution time. The one using hash-join is slow and the one using semi-hash join is very fast. However, I have no control over the optimizer behavior of PostgreSQL database. Or, do I have?\n\nA salient feature of the slow plan is that the planner is misinformed\nabout the size of test1:\n\n> -> Seq Scan on test1 (cost=0.00..5153.94 rows=63294 width=516) (actual time=0.068..91.378 rows=441736 loops=1)\n\nwhereas in the fast plan its rows estimate for that scan is dead on.\nIt looks like the two cases also have different ideas of how many\ndistinct values are in the test1.userid column, though this is more a\nguess than an indisputable fact.\n\nIn short, I suspect you're recreating the test1 table and not bothering\nto ANALYZE it after you fill it. This leaves you at the mercy of when\nthe autovacuum daemon gets around to analyzing the table before you'll\nget good plans for it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 May 2014 22:58:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same query different execution plan (hash join vs. semi-hash\n join)"
},
{
"msg_contents": "Thank you Tom. But the time spent on scanning table test1 is less than 1 second (91.738 compares to 87.869), so I guess this shouldn't be the issue?\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, May 16, 2014 12:58 PM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] same query different execution plan (hash join vs. semi-hash join)\n\n\"Huang, Suya\" <[email protected]> writes:\n> I've got a query as below, it runs several times with different execution plan and totally different execution time. The one using hash-join is slow and the one using semi-hash join is very fast. However, I have no control over the optimizer behavior of PostgreSQL database. Or, do I have?\n\nA salient feature of the slow plan is that the planner is misinformed about the size of test1:\n\n> -> Seq Scan on test1 (cost=0.00..5153.94 \n> rows=63294 width=516) (actual time=0.068..91.378 rows=441736 loops=1)\n\nwhereas in the fast plan its rows estimate for that scan is dead on.\nIt looks like the two cases also have different ideas of how many distinct values are in the test1.userid column, though this is more a guess than an indisputable fact.\n\nIn short, I suspect you're recreating the test1 table and not bothering to ANALYZE it after you fill it. This leaves you at the mercy of when the autovacuum daemon gets around to analyzing the table before you'll get good plans for it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 19 May 2014 06:14:32 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: same query different execution plan (hash join vs. semi-hash\n join)"
},
{
"msg_contents": "\"Huang, Suya\" <[email protected]> writes:\n> Thank you Tom. But the time spent on scanning table test1 is less than 1 second (91.738 compares to 87.869), so I guess this shouldn't be the issue?\n\nNo, the point is that the bad rowcount estimate (and, possibly, lack of\nstats about join column contents) causes the planner to pick a join method\nthat's not ideal for this query.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 19 May 2014 10:22:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same query different execution plan (hash join vs. semi-hash\n join)"
},
{
"msg_contents": "Thanks Tom, I think you're right. I just did an analyze on table test1 and the execution plan now generated is more stable and predictable.\n\nThanks,\nSuya\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, May 20, 2014 12:22 AM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] same query different execution plan (hash join vs. semi-hash join)\n\n\"Huang, Suya\" <[email protected]> writes:\n> Thank you Tom. But the time spent on scanning table test1 is less than 1 second (91.738 compares to 87.869), so I guess this shouldn't be the issue?\n\nNo, the point is that the bad rowcount estimate (and, possibly, lack of stats about join column contents) causes the planner to pick a join method that's not ideal for this query.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 May 2014 00:16:02 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: same query different execution plan (hash join vs. semi-hash\n join)"
}
] |
[
{
"msg_contents": "On a 9.3.1 server , I have a key busy_table in that is hit by most\ntransactions running on our system. One DB's copy of this table has 60K rows\nand 1/3 of that tables rows can updated every minute.\n\nAutovacuum autovacuum_analyze_scale_factor is set 0.02, so that analyse runs\nnearly every minute. But when autovacuum vacuum runs I sometimes see the\nfollowing message in logs:\n\nLOG: automatic vacuum of table \"busy_table\":* index scans: 0*\n pages: 0 removed, 22152 remain\n tuples: 0 removed, 196927 remain\n buffer usage: 46241 hits, 478 misses, 715 dirtied\n avg read rate: 0.561 MB/s, avg write rate: 0.839 MB/s\n system usage: CPU 0.07s/0.06u sec elapsed 6.66 sec\n\nand the tuples remaining is then overestimated by a factor >3 , and have\nseen this over estimate as large at >20 times IE 5M\n\nThis causes the query planner to then fail to get the best plan, in fact\nthis can result in queries that take 30 Minutes that normally return in 4-6\nseconds.\n\nIE it starts table scanning tables in joins to busy_table rather than using\nthe index.\n\nAs soon as following appears:\nLOG: automatic vacuum of table \"busy_table\": *index scans: 1*\n\nall is well again for the queries that follow this , but for the 20-30 user\ninteractions during the bad period never return as they take too long.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 18 May 2014 17:35:29 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum vacuum creates bad statistics for planner when it log\n index scans: 0"
},
{
"msg_contents": "Just to add a little more detail about my busy_table\n1) there are no rows deleted\n2) 98% of updates are HOT\n3) there are two DB's on this postgres instance both with the same table,\nboth seeing the same issue\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5804424.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 18 May 2014 20:56:16 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "tim_wilson <[email protected]> writes:\n> On a 9.3.1 server , I have a key busy_table in that is hit by most\n> transactions running on our system. One DB's copy of this table has 60K rows\n> and 1/3 of that tables rows can updated every minute.\n\n> Autovacuum autovacuum_analyze_scale_factor is set 0.02, so that analyse runs\n> nearly every minute. But when autovacuum vacuum runs I sometimes see the\n> following message in logs:\n\n> LOG: automatic vacuum of table \"busy_table\":* index scans: 0*\n> pages: 0 removed, 22152 remain\n> tuples: 0 removed, 196927 remain\n> buffer usage: 46241 hits, 478 misses, 715 dirtied\n> avg read rate: 0.561 MB/s, avg write rate: 0.839 MB/s\n> system usage: CPU 0.07s/0.06u sec elapsed 6.66 sec\n\n> and the tuples remaining is then overestimated by a factor >3 , and have\n> seen this over estimate as large at >20 times IE 5M\n\nFWIW, I tried to reproduce this without success.\n\nThere's some code in there that attempts to extrapolate the total number\nof live tuples when VACUUM has not scanned the entire table. It's surely\nplausible that that logic went off the rails ... but without a test case\nor at least a more specific description of the problem scenario, it's\nhard to know what's wrong exactly.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 May 2014 00:13:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it log\n index scans: 0"
},
{
"msg_contents": "Thanks for you response Tom:\nbut what does index_scans:0 mean? vs index scans: 1?\n\nI have had a look at the c code but cannot see when it that would be the\ncase.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5806283.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 17:55:19 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "tim_wilson <[email protected]> writes:\n> Thanks for you response Tom:\n> but what does index_scans:0 mean? vs index scans: 1?\n\nI believe the former means that VACUUM found no removable tuples, so it\nhad no need to make any passes over the table's indexes.\n\n(Ordinarily you wouldn't see the number of scans as more than 1, unless\nVACUUM removed quite a lot of dead tuples, more than it could remember\nwithin maintenance_work_mem; in which case it would make multiple passes\nover the indexes to remove index entries.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 05 Jun 2014 22:02:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "I have now created a repeatable test for this ...bug, well that may be\ndebatable, but getting the query plan this wrong after vacum and analyze\nhave run certainly looks like a bug to me.\n\nI have created a test case that matches my problem domain but can probably\nbe simplified.\npostgres_bug.sql\n<http://postgresql.1045698.n5.nabble.com/file/n5806302/postgres_bug.sql> \n\nEven after autovac vacuum and autovac analyze have run the query plan goes\nfrom using indexes on the big table to table scanning them as it thinks my\nunit_test table is large due to the error in the estimate for rows in the\ntable that autovac generated. Once you run a cluster on the table all goes\nback to using the correct indexes. And the next autovacuum gets things\nstraight again on the stats table.\n\nLook forward to solution as this is hurting us. A very hot table on our\nsystem goes bad every night and needs constant watching.\n\nregards\nTim\n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5806302.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 23:43:32 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "Yes I can create a simpler version that exhibits the problem:\npostgres_bug_simpler.sql\n<http://postgresql.1045698.n5.nabble.com/file/n5806320/postgres_bug_simpler.sql> \n\nThis only now involves one smaller table 60K rows, and a linked table with\n20M rows. I tried with 6K and 1M but could not get problem to occur. Both\nare now unchanging in size. The smaller table gets updated frequently, and\nthen starts exhibiting the bad query plan, it seems especially after the 2nd\nauto vacuum and auto analyze. When the dead_rows goes to zero in the stats\nthe live_tup can stay at an huge factor larger than the table really is for\nsome time.\n\nIn my system the smaller table that is updated frequently grows only\nslightly if at all. I never want a table scan to happen of the big table,\nbut even with enable_seq_scan=false set in functions that query these tables\nI can get the bad query plan.\n\nWould it be possible to have a setting on a table that gave an expression\nfor determining the table size? IE For key highly updated tables I could set\nand maintain the meta-data for the size table and even shape of the data.\nThen for these tables autovac would not need to make the effort of having to\nestimate size.\n\nregards\nTim\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5806320.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jun 2014 04:22:32 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "Was my example not able to be repeated or do I need to give you a better\nexample of the problem, or is there just a lot of stuff happening?\n\nHappy to do more work on example sql for the problem if it needs it, just\nneed some feed back on how whether this problem is going to be looked at or\nnot.\n\nregards\nTim\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5806743.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Jun 2014 14:37:20 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "tim_wilson <[email protected]> writes:\n> Was my example not able to be repeated or do I need to give you a better\n> example of the problem, or is there just a lot of stuff happening?\n\nThe latter ...\n\nhttp://www.postgresql.org/message-id/flat/[email protected]\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Jun 2014 18:07:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "Great thanks a lot.\n\nWe will be ready to build and test a patch or 9.4 version as soon as you\nhave a test patch you want to try.\n\nregards\nTim\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5806747.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Jun 2014 15:41:25 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
},
{
"msg_contents": "Have been looking at lazyvacuum.c trying to think of a way of changing the\ncalculation of the stats to ensure that my hot table stats do not get so\nbadly distorted.\n\nWhat I have noticed is that when I reindex my hot table the stats on the\ntable do not seem to change in pg_stat_user_tables (ie they stay bad) but\nthe EXPLAIN for the query starts using the correct index again. \n\nThe index itself does not seem very bloated but reindex seems to alter the\nquery optimizer choices. I can't see in index.c where side effect would be\ncaused.\n\nWhy is this? Not sure if this is entirely helpful as reindex takes exclusive\nlock, but if I used a concurrent rebuild and rename of the index I might be\nable to work around this.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autovacuum-vacuum-creates-bad-statistics-for-planner-when-it-log-index-scans-0-tp5804416p5808509.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jun 2014 17:39:47 -0700 (PDT)",
"msg_from": "tim_wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum vacuum creates bad statistics for planner when it\n log index scans: 0"
}
] |
[
{
"msg_contents": "I am sending this on behalf of my colleague who tried to post to this list last year but without success, then also tried [email protected] but without getting a reply. \n\nI have recently re-tested this in P/G version 9.3.4 with the same results: \n\nHi, \n\nI have created a table 'test_table' and index 'idx_test_table' with a view 'v_test_table'. However the query plan used by the view does not use the index but when running the select statement itself it does use the index. Given that query specific hints are not available in Postgres 9.1 how can I persuade the view to use the same query plan as the select statement? \n\nThanks, \n\nTim \n\n\n--DROP table test_table CASCADE; \n\n-- create test table \nCREATE TABLE test_table ( \nhistory_id SERIAL, \nid character varying(50) NOT NULL , \nname character varying(50), \nCONSTRAINT test_table_pkey PRIMARY KEY (history_id) \n); \n\n-- create index on test table \nCREATE INDEX idx_test_table ON test_table (id); \n\n-- populate test table \nINSERT INTO test_table (id, name) SELECT *, 'Danger Mouse' FROM (SELECT md5(random()::text) from generate_series(1,10000)) q; \n\n-- collect stats \nANALYZE test_table; \n\n\nEXPLAIN (ANALYZE, BUFFERS) \nSELECT * \nFROM test_table \nWHERE id = '02b304b1c54542570d9f7bd39361f5b4'; \n\n\"Index Scan using idx_test_table on test_table (cost=0.00..8.27 rows=1 width=50) (actual time=0.021..0.022 rows=1 loops=1)\" \n\" Index Cond: ((id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\" \n\" Buffers: shared hit=3\" \n\"Total runtime: 0.051 ms\" \n\n\n-- select statement with good plan \n\nEXPLAIN (ANALYZE, BUFFERS) \nSELECT id, \nCASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id ORDER BY history_id), name || 'x') <> name \nthen name \nend as name \nFROM test_table \nWHERE id = '02b304b1c54542570d9f7bd39361f5b4'; \n\n\"WindowAgg (cost=8.28..8.31 rows=1 width=50) (actual time=0.050..0.051 rows=1 loops=1)\" \n\" Buffers: shared hit=3\" \n\" -> Sort (cost=8.28..8.29 rows=1 
width=50) (actual time=0.039..0.039 rows=1 loops=1)\" \n\" Sort Key: history_id\" \n\" Sort Method: quicksort Memory: 25kB\" \n\" Buffers: shared hit=3\" \n\" -> Index Scan using idx_test_table on test_table (cost=0.00..8.27 rows=1 width=50) (actual time=0.030..0.031 rows=1 loops=1)\" \n\" Index Cond: ((id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\" \n\" Buffers: shared hit=3\" \n\"Total runtime: 0.102 ms\" \n\n\n--DROP VIEW v_test_table; \n\nCREATE OR REPLACE VIEW v_test_table AS \nSELECT id, \nCASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id ORDER BY history_id), name || 'x') <> name \nthen name \nend as name \nFROM test_table; \n\n\n-- Query via view with bad plan \n\nEXPLAIN (ANALYZE, BUFFERS) \nSELECT * \nFROM v_test_table \nWHERE id = '02b304b1c54542570d9f7bd39361f5b4'; \n\n\"Subquery Scan on v_test_table (cost=868.39..1243.39 rows=50 width=65) (actual time=26.115..33.327 rows=1 loops=1)\" \n\" Filter: ((v_test_table.id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\" \n\" Buffers: shared hit=104, temp read=77 written=77\" \n\" -> WindowAgg (cost=868.39..1118.39 rows=10000 width=50) (actual time=26.022..32.519 rows=10000 loops=1)\" \n\" Buffers: shared hit=104, temp read=77 written=77\" \n\" -> Sort (cost=868.39..893.39 rows=10000 width=50) (actual time=26.013..27.796 rows=10000 loops=1)\" \n\" Sort Key: test_table.id, test_table.history_id\" \n\" Sort Method: external merge Disk: 608kB\" \n\" Buffers: shared hit=104, temp read=77 written=77\" \n\" -> Seq Scan on test_table (cost=0.00..204.00 rows=10000 width=50) (actual time=0.010..1.804 rows=10000 loops=1)\" \n\" Buffers: shared hit=104\" \n\"Total runtime: 33.491 ms\" \n\n\nHow can I get the view to use the same query plan as the select statement? 
",
"msg_date": "Mon, 19 May 2014 16:47:14 +1200 (NZST)",
"msg_from": "Geoff Hull <[email protected]>",
"msg_from_op": true,
"msg_subject": "View has different query plan than select statement"
},
{
"msg_contents": "On Mon, May 19, 2014 at 4:47 PM, Geoff Hull <[email protected]>wrote:\n\n> I am sending this on behalf of my colleague who tried to post to this list\n> last year but without success, then also tried\n> [email protected] but without getting a reply.\n>\n> I have recently re-tested this in P/G version 9.3.4 with the same results:\n>\n> Hi,\n>\n> I have created a table 'test_table' and index 'idx_test_table' with a view\n> 'v_test_table'. However the query plan used by the view does not use the\n> index but when running the select statement itself it does use the index.\n> Given that query specific hints are not available in Postgres 9.1 how can I\n> persuade the view to use the same query plan as the select statement?\n>\n> Thanks,\n>\n> Tim\n>\n>\n> --DROP table test_table CASCADE;\n>\n> -- create test table\n> CREATE TABLE test_table (\n> history_id SERIAL,\n> id character varying(50) NOT NULL ,\n> name character varying(50),\n> CONSTRAINT test_table_pkey PRIMARY KEY (history_id)\n> );\n>\n> -- create index on test table\n> CREATE INDEX idx_test_table ON test_table (id);\n>\n> -- populate test table\n> INSERT INTO test_table (id, name) SELECT *, 'Danger Mouse' FROM (SELECT\n> md5(random()::text) from generate_series(1,10000)) q;\n>\n> -- collect stats\n> ANALYZE test_table;\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS)\n> SELECT *\n> FROM test_table\n> WHERE id = '02b304b1c54542570d9f7bd39361f5b4';\n>\n> \"Index Scan using idx_test_table on test_table (cost=0.00..8.27 rows=1\n> width=50) (actual time=0.021..0.022 rows=1 loops=1)\"\n> \" Index Cond: ((id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\"\n> \" Buffers: shared hit=3\"\n> \"Total runtime: 0.051 ms\"\n>\n>\n> -- select statement with good plan\n>\n> EXPLAIN (ANALYZE, BUFFERS)\n> SELECT id,\n> CASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id ORDER BY history_id),\n> name || 'x') <> name\n> then name\n> end as name\n> FROM test_table\n> WHERE id = '02b304b1c54542570d9f7bd39361f5b4';\n>\n> 
\"WindowAgg (cost=8.28..8.31 rows=1 width=50) (actual time=0.050..0.051\n> rows=1 loops=1)\"\n> \" Buffers: shared hit=3\"\n> \" -> Sort (cost=8.28..8.29 rows=1 width=50) (actual time=0.039..0.039\n> rows=1 loops=1)\"\n> \" Sort Key: history_id\"\n> \" Sort Method: quicksort Memory: 25kB\"\n> \" Buffers: shared hit=3\"\n> \" -> Index Scan using idx_test_table on test_table (cost=0.00..8.27 rows=1\n> width=50) (actual time=0.030..0.031 rows=1 loops=1)\"\n> \" Index Cond: ((id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\"\n> \" Buffers: shared hit=3\"\n> \"Total runtime: 0.102 ms\"\n>\n>\n> --DROP VIEW v_test_table;\n>\n> CREATE OR REPLACE VIEW v_test_table AS\n> SELECT id,\n> CASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id ORDER BY history_id),\n> name || 'x') <> name\n> then name\n> end as name\n> FROM test_table;\n>\n>\n> -- Query via view with bad plan\n>\n> EXPLAIN (ANALYZE, BUFFERS)\n> SELECT *\n> FROM v_test_table\n> WHERE id = '02b304b1c54542570d9f7bd39361f5b4';\n>\n> \"Subquery Scan on v_test_table (cost=868.39..1243.39 rows=50 width=65)\n> (actual time=26.115..33.327 rows=1 loops=1)\"\n> \" Filter: ((v_test_table.id)::text =\n> '02b304b1c54542570d9f7bd39361f5b4'::text)\"\n> \" Buffers: shared hit=104, temp read=77 written=77\"\n> \" -> WindowAgg (cost=868.39..1118.39 rows=10000 width=50) (actual time=\n> 26.022..32.519 rows=10000 loops=1)\"\n> \" Buffers: shared hit=104, temp read=77 written=77\"\n> \" -> Sort (cost=868.39..893.39 rows=10000 width=50) (actual\n> time=26.013..27.796 rows=10000 loops=1)\"\n> \" Sort Key: test_table.id, test_table.history_id\"\n> \" Sort Method: external merge Disk: 608kB\"\n> \" Buffers: shared hit=104, temp read=77 written=77\"\n> \" -> Seq Scan on test_table (cost=0.00..204.00 rows=10000 width=50)\n> (actual time=0.010..1.804 rows=10000 loops=1)\"\n> \" Buffers: shared hit=104\"\n> \"Total runtime: 33.491 ms\"\n>\n>\n> How can I get the view to use the same query plan as the select statement?\n>\n>\nHi 
Geoff,\n\nUnfortunately the view is not making use of the index due to the presence\nof the windowing function in the view. I think you would find that if that\nwas removed then the view would more than likely use the index again.\n\nThe reason for this is that currently the WHERE clause of the outer query\nis not pushed down into the view due to some overly strict code which\ncompletely disallows pushdowns of where clauses into sub queries that\ncontain windowing functions...\n\nIn your case, because you have this id in your partition by clause, then\ntechnically it is possible to push the where clause down into the sub\nquery. I wrote a patch a while back which lifts this restriction. it\nunfortunately missed the boat for 9.4, but with any luck it will make it\ninto 9.5. If you're up for compiling postgres from source, then you can\ntest the patch out:\n\nhttp://www.postgresql.org/message-id/CAHoyFK9ihoSarntWc-NJ5tPHko4Wcausd-1C_0wEcogi9UEKTw@mail.gmail.com\n\nIt should apply to current HEAD without too much trouble.\n\nRegards\n\nDavid Rowley\n",
"msg_date": "Mon, 19 May 2014 19:19:17 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View has different query plan than select statement"
},
{
"msg_contents": "----- Original Message -----\n\nFrom: \"David Rowley\" <[email protected]> \nTo: \"Geoff Hull\" <[email protected]> \nCc: \"pgsql-performance\" <[email protected]> \nSent: Monday, 19 May, 2014 7:19:17 PM \nSubject: Re: [PERFORM] View has different query plan than select statement \n\nOn Mon, May 19, 2014 at 4:47 PM, Geoff Hull < [email protected] > wrote: \n\n\n\nI am sending this on behalf of my colleague who tried to post to this list last year but without success, then also tried [email protected] but without getting a reply. \n\nI have recently re-tested this in P/G version 9.3.4 with the same results: \n\nHi, \n\nI have created a table 'test_table' and index 'idx_test_table' with a view 'v_test_table'. However the query plan used by the view does not use the index but when running the select statement itself it does use the index. Given that query specific hints are not available in Postgres 9.1 how can I persuade the view to use the same query plan as the select statement? 
\n\nThanks, \n\nTim \n\n\n--DROP table test_table CASCADE; \n\n-- create test table \nCREATE TABLE test_table ( \nhistory_id SERIAL, \nid character varying(50) NOT NULL , \nname character varying(50), \nCONSTRAINT test_table_pkey PRIMARY KEY (history_id) \n); \n\n-- create index on test table \nCREATE INDEX idx_test_table ON test_table (id); \n\n-- populate test table \nINSERT INTO test_table (id, name) SELECT *, 'Danger Mouse' FROM (SELECT md5(random()::text) from generate_series(1,10000)) q; \n\n-- collect stats \nANALYZE test_table; \n\n\nEXPLAIN (ANALYZE, BUFFERS) \nSELECT * \nFROM test_table \nWHERE id = '02b304b1c54542570d9f7bd39361f5b4'; \n\n\"Index Scan using idx_test_table on test_table (cost=0.00..8.27 rows=1 width=50) (actual time=0.021..0.022 rows=1 loops=1)\" \n\" Index Cond: ((id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\" \n\" Buffers: shared hit=3\" \n\"Total runtime: 0.051 ms\" \n\n\n-- select statement with good plan \n\nEXPLAIN (ANALYZE, BUFFERS) \nSELECT id, \nCASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id ORDER BY history_id), name || 'x') <> name \nthen name \nend as name \nFROM test_table \nWHERE id = '02b304b1c54542570d9f7bd39361f5b4'; \n\n\"WindowAgg (cost=8.28..8.31 rows=1 width=50) (actual time=0.050..0.051 rows=1 loops=1)\" \n\" Buffers: shared hit=3\" \n\" -> Sort (cost=8.28..8.29 rows=1 width=50) (actual time=0.039..0.039 rows=1 loops=1)\" \n\" Sort Key: history_id\" \n\" Sort Method: quicksort Memory: 25kB\" \n\" Buffers: shared hit=3\" \n\" -> Index Scan using idx_test_table on test_table (cost=0.00..8.27 rows=1 width=50) (actual time=0.030..0.031 rows=1 loops=1)\" \n\" Index Cond: ((id)::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\" \n\" Buffers: shared hit=3\" \n\"Total runtime: 0.102 ms\" \n\n\n--DROP VIEW v_test_table; \n\nCREATE OR REPLACE VIEW v_test_table AS \nSELECT id, \nCASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id ORDER BY history_id), name || 'x') <> name \nthen name \nend as name \nFROM test_table; 
\n\n\n-- Query via view with bad plan \n\nEXPLAIN (ANALYZE, BUFFERS) \nSELECT * \nFROM v_test_table \nWHERE id = '02b304b1c54542570d9f7bd39361f5b4'; \n\n\"Subquery Scan on v_test_table (cost=868.39..1243.39 rows=50 width=65) (actual time=26.115..33.327 rows=1 loops=1)\" \n\" Filter: (( v_test_table.id )::text = '02b304b1c54542570d9f7bd39361f5b4'::text)\" \n\" Buffers: shared hit=104, temp read=77 written=77\" \n\" -> WindowAgg (cost=868.39..1118.39 rows=10000 width=50) (actual time= 26.022..32.519 rows=10000 loops=1)\" \n\" Buffers: shared hit=104, temp read=77 written=77\" \n\" -> Sort (cost= 868.39..893.39 rows=10000 width=50) (actual time=26.013..27.796 rows=10000 loops=1)\" \n\" Sort Key: test_table.id , test_table.history_id\" \n\" Sort Method: external merge Disk: 608kB\" \n\" Buffers: shared hit=104, temp read=77 written=77\" \n\" -> Seq Scan on test_table (cost=0.00..204.00 rows=10000 width=50) (actual time=0.010..1.804 rows=10000 loops=1)\" \n\" Buffers: shared hit=104\" \n\"Total runtime: 33.491 ms\" \n\n\nHow can I get the view to use the same query plan as the select statement? \n\n\n\n\n\nHi Geoff, \n\nUnfortunately the view is not making use of the index due to the presence of the windowing function in the view. I think you would find that if that was removed then the view would more than likely use the index again. \n\nThe reason for this is that currently the WHERE clause of the outer query is not pushed down into the view due to some overly strict code which completely disallows pushdowns of where clauses into sub queries that contain windowing functions... \n\nIn your case, because you have this id in your partition by clause, then technically it is possible to push the where clause down into the sub query. I wrote a patch a while back which lifts this restriction. it unfortunately missed the boat for 9.4, but with any luck it will make it into 9.5. 
If you're up for compiling postgres from source, then you can test the patch out: \n\nhttp://www.postgresql.org/message-id/CAHoyFK9ihoSarntWc-NJ5tPHko4Wcausd-1C_0wEcogi9UEKTw@mail.gmail.com \n\nIt should apply to current HEAD without too much trouble. \n\nRegards \n\nDavid Rowley \n\n\nDavid, \n\nThank you so much for the helpful (and speedy) reply. \n\nI talked to our developer Tim about this, and your reply exactly described his problem. \n\nI downloaded the source for the 9.4beta1 version and used your patch. I compiled it, etc, then we ran Tim's test and it worked perfectly - it now uses the index in the view: \n\nSELECT: \n\n\"WindowAgg (cost=8.31..8.34 rows=1 width=50) (actual time=0.043..0.043 rows=0 loops=1)\" \n\" Buffers: shared hit=5\" \n\" -> Sort (cost=8.31..8.32 rows=1 width=50) (actual time=0.041..0.041 rows=0 loops=1)\" \n\" Sort Key: history_id\" \n\" Sort Method: quicksort Memory: 25kB\" \n\" Buffers: shared hit=5\" \n\" -> Index Scan using idx_test_table on test_table (cost=0.29..8.30 rows=1 width=50) (actual time=0.008..0.008 rows=0 loops=1)\" \n\" Index Cond: ((id)::text = '\"cb05b1cd2659f7cea9436ed20e055df5\"'::text)\" \n\" Buffers: shared hit=2\" \n\"Planning time: 0.188 ms\" \n\"Execution time: 0.133 ms\" \n\nVIEW: \n\n\"Subquery Scan on v_test_table (cost=8.31..8.35 rows=1 width=65) (actual time=0.030..0.030 rows=0 loops=1)\" \n\" Buffers: shared hit=2\" \n\" -> WindowAgg (cost=8.31..8.34 rows=1 width=50) (actual time=0.030..0.030 rows=0 loops=1)\" \n\" Buffers: shared hit=2\" \n\" -> Sort (cost=8.31..8.32 rows=1 width=50) (actual time=0.028..0.028 rows=0 loops=1)\" \n\" Sort Key: test_table.history_id\" \n\" Sort Method: quicksort Memory: 25kB\" \n\" Buffers: shared hit=2\" \n\" -> Index Scan using idx_test_table on test_table (cost=0.29..8.30 rows=1 width=50) (actual time=0.012..0.012 rows=0 loops=1)\" \n\" Index Cond: ((id)::text = '\"cb05b1cd2659f7cea9436ed20e055df5\"'::text)\" \n\" Buffers: shared hit=2\" \n\"Planning time: 0.216 ms\" 
\n\"Execution time: 0.120 ms\" \n\nLovely! \n\nWe're looking forward to PostgreSQL 9.5. \n\nThanks, \nGeoff and Tim",
"msg_date": "Wed, 21 May 2014 11:06:46 +1200 (NZST)",
"msg_from": "Geoff Hull <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: View has different query plan than select statement"
}
] |
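The workaround the thread converges on for pre-9.5 servers is to apply the filter before the window function yourself, so the planner can use the index inside the subquery instead of relying on qual pushdown through the view. A minimal sketch (the `psql` invocation and database name are placeholders; table, index, and id value are taken from the thread):

```shell
# Hand-pushed filter: restrict test_table first, then apply the window
# function to the already-filtered rows.  Equivalent in result to the
# view for a single id, but able to use idx_test_table on 9.1-9.4.
psql -d mydb -c "
SELECT id,
       CASE WHEN COALESCE(LAG(name) OVER (PARTITION BY id
                                          ORDER BY history_id),
                          name || 'x') <> name
            THEN name END AS name
FROM (SELECT history_id, id, name
      FROM test_table
      WHERE id = '02b304b1c54542570d9f7bd39361f5b4') t;
"
```

The cost is that the filter must be written into each query rather than living in the view, which is exactly what David Rowley's patch later made unnecessary.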
[
{
"msg_contents": "Is there any way to get the call stack of a function when profiling\nPostgreSQL with perf ?\nI configured with --enable-debug, I run a benchmark against the system and\nI'm able to identify a bottleneck.\n40% of the time is spent on a spinlock yet I cannot find out the codepath\nthat gets me there.\nUsing --call-graph with perf record didn't seem to help.\n\nAny ideas ?\n\nCheers,\nDimitris",
"msg_date": "Thu, 22 May 2014 15:27:26 +0200",
"msg_from": "Dimitris Karampinas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Profiling PostgreSQL"
},
{
"msg_contents": "On 5/22/2014 7:27 AM, Dimitris Karampinas wrote:\n> Is there any way to get the call stack of a function when profiling \n> PostgreSQL with perf ?\n> I configured with --enable-debug, I run a benchmark against the system \n> and I'm able to identify a bottleneck.\n> 40% of the time is spent on an spinlock yet I cannot find out the \n> codepath that gets me there.\n> Using --call-graph with perf record didn't seem to help.\n>\n> Any ideas ?\n>\nCan you arrange to run 'pstack' a few times on the target process \n(either manually or with a shell script)?\nIf the probability of the process being in the spinning state is high, \nthen this approach should snag you at least one call stack.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 22 May 2014 07:34:36 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
},
{
"msg_contents": "Dimitris Karampinas <[email protected]> writes:\n> Is there any way to get the call stack of a function when profiling\n> PostgreSQL with perf ?\n> I configured with --enable-debug, I run a benchmark against the system and\n> I'm able to identify a bottleneck.\n> 40% of the time is spent on a spinlock yet I cannot find out the codepath\n> that gets me there.\n> Using --call-graph with perf record didn't seem to help.\n\nCall graph data usually isn't trustworthy unless you built the program\nwith -fno-omit-frame-pointer ...\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 22 May 2014 09:48:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
},
{
"msg_contents": "On Thu, May 22, 2014 at 10:48 PM, Tom Lane <[email protected]> wrote:\n> Call graph data usually isn't trustworthy unless you built the program\n> with -fno-omit-frame-pointer ...\nThis page is full of ideas as well:\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n-- \nMichael",
"msg_date": "Fri, 23 May 2014 08:39:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
},
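Putting Tom's and Michael's advice together, the rebuild-and-record cycle looks roughly like this (a sketch, not a recipe: the configure flags are standard, but the install prefix, target PID, and recording window are placeholders you would adapt):

```shell
# Rebuild with frame pointers so perf can walk the stack cheaply.
./configure --enable-debug CFLAGS="-O2 -fno-omit-frame-pointer"
make && make install

# Attach to one backend and record call graphs for 30 seconds.
# On reasonably recent perf, --call-graph dwarf also works without
# frame pointers, at the cost of a much larger perf.data file.
perf record -g --call-graph fp -p "$BACKEND_PID" -- sleep 30
perf report -g
```

With frame pointers in place, the spinlock samples should attribute to their calling LWLock and buffer-management paths instead of showing up as an orphaned `s_lock`.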
{
"msg_contents": "Thanks for your answers. A script around pstack worked for me.\n\n(I'm not sure if I should open a new thread, I hope it's OK to ask another\nquestion here)\n\nFor the workload I run it seems that PostgreSQL scales with the number of\nconcurrent clients up to the point that these reach the number of cores\n(more or less).\nFurther increase to the number of clients leads to dramatic performance\ndegradation. pstack and perf show that backends block on LWLockAcquire\ncalls, so, someone could assume that the reason the system slows down is\nbecause of multiple concurrent transactions that access the same data.\nHowever I did the two following experiments:\n1) I completely removed the UPDATE transactions from my workload. The\nthroughput turned out to be better yet the trend was the same. Increasing\nthe number of clients has a very negative performance impact.\n2) I deployed PostgreSQL on more cores. The throughput improved a lot. If\nthe problem was due to concurrency control, the throughput should remain the\nsame - no matter the number of hardware contexts.\n\nAny insight why the system behaves like this ?\n\nCheers,\nDimitris\n\n\nOn Fri, May 23, 2014 at 1:39 AM, Michael Paquier\n<[email protected]>wrote:\n\n> On Thu, May 22, 2014 at 10:48 PM, Tom Lane <[email protected]> wrote:\n> > Call graph data usually isn't trustworthy unless you built the program\n> > with -fno-omit-frame-pointer ...\n> This page is full of ideas as well:\n> https://wiki.postgresql.org/wiki/Profiling_with_perf\n> --\n> Michael\n>",
"msg_date": "Fri, 23 May 2014 16:40:31 +0200",
"msg_from": "Dimitris Karampinas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Profiling PostgreSQL"
},
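Dimitris doesn't post his script, but a minimal "poor man's profiler" built around pstack usually has this shape (a sketch: the PID and sample count are arguments, the 0.1 s interval is arbitrary, and `pstack` must be installed — on many Linux distributions it is a gdb wrapper):

```shell
#!/bin/sh
# Poor man's profiler: sample one backend's stack N times with pstack
# and print the most frequent stacks first.  Usage: ./sample.sh PID [N]
PID=$1
N=${2:-50}
for i in $(seq "$N"); do
  pstack "$PID"
  echo '@'          # record separator between samples
  sleep 0.1
done |
awk 'BEGIN { RS="@" }
     {
       sub(/^\n+/, ""); sub(/\n+$/, "")   # trim blank padding
       if ($0 == "") next
       gsub(/\n/, " ; ")                  # flatten each stack to one line
       count[$0]++
     }
     END { for (s in count) print count[s], s }' |
sort -rn | head
```

If the process really spends 40% of its time spinning, the offending call path should dominate the counts after a few dozen samples.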
{
"msg_contents": "On 23.5.2014 16:41, \"Dimitris Karampinas\" <[email protected]> wrote:\n>\n> Thanks for your answers. A script around pstack worked for me.\n>\n> (I'm not sure if I should open a new thread, I hope it's OK to ask\nanother question here)\n>\n> For the workload I run it seems that PostgreSQL scales with the number of\nconcurrent clients up to the point that these reach the number of cores\n(more or less).\n> Further increase to the number of clients leads to dramatic performance\ndegradation. pstack and perf show that backends block on LWLockAcquire\ncalls, so, someone could assume that the reason the system slows down is\nbecause of multiple concurrent transactions that access the same data.\n> However I did the two following experiments:\n> 1) I completely removed the UPDATE transactions from my workload. The\nthroughput turned out to be better yet the trend was the same. Increasing\nthe number of clients has a very negative performance impact.\n> 2) I deployed PostgreSQL on more cores. The throughput improved a lot. If\nthe problem was due to concurrency control, the throughput should remain the\nsame - no matter the number of hardware contexts.\n>\n> Any insight why the system behaves like this ?\n\nPhysical limits: there are two possible bottlenecks, CPU or I/O. Postgres uses one\nCPU per session, and if you have a CPU-intensive benchmark, then the maximum should\nbe at around the number of CPU-bound workers. Beyond that, workers share a CPU, but\ntotal throughput should stay about the same up to cca 10xCPU (depends on the test).\n\n>\n> Cheers,\n> Dimitris\n>\n>\n> On Fri, May 23, 2014 at 1:39 AM, Michael Paquier <\[email protected]> wrote:\n>>\n>> On Thu, May 22, 2014 at 10:48 PM, Tom Lane <[email protected]> wrote:\n>> > Call graph data usually isn't trustworthy unless you built the program\n>> > with -fno-omit-frame-pointer ...\n>> This page is full of ideas as well:\n>> https://wiki.postgresql.org/wiki/Profiling_with_perf\n>> --\n>> Michael\n>\n>",
"msg_date": "Fri, 23 May 2014 17:13:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
},
{
"msg_contents": "On Fri, May 23, 2014 at 7:40 AM, Dimitris Karampinas <[email protected]>wrote:\n\n> Thanks for your answers. A script around pstack worked for me.\n>\n> (I'm not sure if I should open a new thread, I hope it's OK to ask another\n> question here)\n>\n> For the workload I run it seems that PostgreSQL scales with the number of\n> concurrent clients up to the point that these reach the number of cores\n> (more or less).\n> Further increase to the number of clients leads to dramatic performance\n> degradation. pstack and perf show that backends block on LWLockAcquire\n> calls, so, someone could assume that the reason the system slows down is\n> because of multiple concurrent transactions that access the same data.\n> However I did the two following experiments:\n> 1) I completely removed the UPDATE transactions from my workload. The\n> throughput turned out to be better yet the trend was the same. Increasing\n> the number of clients has a very negative performance impact.\n>\n\nCurrently acquisition and release of all LWLock, even in shared mode, are\nprotected by spinlocks, which are exclusive. So they cause a lot of\ncontention even on read-only workloads. Also if the working set fits in\nRAM but not in shared_buffers, you will have a lot of exclusive locks on\nthe buffer freelist and the buffer mapping tables.\n\n\n\n> 2) I deployed PostgreSQL on more cores. The throughput improved a lot. If\n> the problem was due to concurrency control, the throughput should remain the\n> same - no matter the number of hardware contexts.\n>\n\nHardware matters! How did you change the number of cores?\n\nCheers,\n\nJeff",
"msg_date": "Fri, 23 May 2014 09:52:51 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
},
{
"msg_contents": "I want to bypass any disk bottleneck so I store all the data in ramfs (the\npurpose of the project is to profile pg so I don't care for data loss if\nanything goes wrong).\nSince my data are memory resident, I thought the size of the shared buffers\nwouldn't play much role, yet I have to admit that I saw difference in\nperformance when modifying shared_buffers parameter.\n\nI use taskset to control the number of cores that PostgreSQL is deployed on.\n\nIs there any parameter/variable in the system that is set dynamically and\ndepends on the number of cores ?\n\nCheers,\nDimitris\n\n\nOn Fri, May 23, 2014 at 6:52 PM, Jeff Janes <[email protected]> wrote:\n\n> On Fri, May 23, 2014 at 7:40 AM, Dimitris Karampinas <[email protected]>wrote:\n>\n>> Thanks for your answers. A script around pstack worked for me.\n>>\n>> (I'm not sure if I should open a new thread, I hope it's OK to ask\n>> another question here)\n>>\n>> For the workload I run it seems that PostgreSQL scales with the number of\n>> concurrent clients up to the point that these reach the number of cores\n>> (more or less).\n>> Further increase to the number of clients leads to dramatic performance\n>> degradation. pstack and perf show that backends block on LWLockAcquire\n>> calls, so, someone could assume that the reason the system slows down is\n>> because of multiple concurrent transactions that access the same data.\n>> However I did the two following experiments:\n>> 1) I completely removed the UPDATE transactions from my workload. The\n>> throughput turned out to be better yet the trend was the same. Increasing\n>> the number of clients has a very negative performance impact.\n>>\n>\n> Currently acquisition and release of all LWLock, even in shared mode, are\n> protected by spinlocks, which are exclusive. So they cause a lot of\n> contention even on read-only workloads. Also if the working set fits in\n> RAM but not in shared_buffers, you will have a lot of exclusive locks on\n> the buffer freelist and the buffer mapping tables.\n>\n>\n>\n>> 2) I deployed PostgreSQL on more cores. The throughput improved a lot. If\n>> the problem was due to concurrency control, the throughput should remain the\n>> same - no matter the number of hardware contexts.\n>>\n>\n> Hardware matters! How did you change the number of cores?\n>\n> Cheers,\n>\n> Jeff\n>",
"msg_date": "Fri, 23 May 2014 19:25:12 +0200",
"msg_from": "Dimitris Karampinas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Profiling PostgreSQL"
},
{
"msg_contents": "On Fri, May 23, 2014 at 10:25 AM, Dimitris Karampinas\n<[email protected]>wrote:\n\n> I want to bypass any disk bottleneck so I store all the data in ramfs (the\n> purpose of the project is to profile pg so I don't care for data loss if\n> anything goes wrong).\n> Since my data are memory resident, I thought the size of the shared\n> buffers wouldn't play much role, yet I have to admit that I saw a difference\n> in performance when modifying the shared_buffers parameter.\n>\n\nIn which direction? If making shared_buffers larger improves things, that\nsuggests that you have contention on the BufFreelistLock. Increasing\nshared_buffers reduces buffer churn (assuming you increase it by enough)\nand so decreases that contention.\n\n\n>\n> I use taskset to control the number of cores that PostgreSQL is deployed\n> on.\n>\n\nIt can be important which bits you set. For example if you have 4 sockets,\neach one with a quadcore, you would probably maximize the consequences of\nspinlock contention by putting one process on each socket, rather than\nputting them all on the same socket.\n\n\n>\n> Is there any parameter/variable in the system that is set dynamically and\n> depends on the number of cores?\n>\n\nThe number of spins a spinlock goes through before sleeping,\nspins_per_delay, is determined dynamically based on how often a tight loop\n\"pays off\". But I don't think this is very sensitive to the exact number\nof processors, just the difference between 1 and more than 1.",
"msg_date": "Fri, 23 May 2014 10:57:17 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
},
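Jeff's point about which bits you set can be made concrete. A minimal sketch, assuming a hypothetical two-socket box with eight cores per socket (cores 0-7 on socket 0, 8-15 on socket 1) and a `$PGDATA` placeholder; check your own layout with `lscpu` before copying the core lists:

```shell
# Hypothetical topology: cores 0-7 on socket 0, cores 8-15 on socket 1.
SAME_SOCKET="0-7"        # cheapest spinlock traffic: keep everything on one socket
ONE_PER_SOCKET="0,8"     # worst case: spinlock cache lines bounce between sockets

# Pinning the whole cluster at startup (commented out; needs a real $PGDATA):
# taskset -c "$SAME_SOCKET" pg_ctl -D "$PGDATA" start

echo "pinning postgres to cores $SAME_SOCKET"
```

Child backends inherit the affinity mask from the postmaster, so pinning the `pg_ctl` start is enough for the whole cluster.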
{
"msg_contents": "Increasing the shared_buffers size improved the performance by 15%. The\ntrend remains the same though: a steep drop in performance after a certain\nnumber of clients.\n\nMy deployment is \"NUMA-aware\". I allocate cores that reside on the same\nsocket. Once I reach the maximum number of cores, I start allocating cores\nfrom a neighbouring socket.\n\nI'll try to print the number of spins_per_delay for each experiment... just\nin case I get something interesting.\n\n\nOn Fri, May 23, 2014 at 7:57 PM, Jeff Janes <[email protected]> wrote:\n\n> On Fri, May 23, 2014 at 10:25 AM, Dimitris Karampinas <[email protected]\n> > wrote:\n>\n>> I want to bypass any disk bottleneck so I store all the data in ramfs\n>> (the purpose of the project is to profile pg so I don't care for data loss if\n>> anything goes wrong).\n>> Since my data are memory resident, I thought the size of the shared\n>> buffers wouldn't play much role, yet I have to admit that I saw a difference\n>> in performance when modifying the shared_buffers parameter.\n>>\n>\n> In which direction? If making shared_buffers larger improves things, that\n> suggests that you have contention on the BufFreelistLock. Increasing\n> shared_buffers reduces buffer churn (assuming you increase it by enough)\n> and so decreases that contention.\n>\n>\n>>\n>> I use taskset to control the number of cores that PostgreSQL is deployed\n>> on.\n>>\n>\n> It can be important which bits you set. For example if you have 4 sockets,\n> each one with a quadcore, you would probably maximize the consequences of\n> spinlock contention by putting one process on each socket, rather than\n> putting them all on the same socket.\n>\n>\n>>\n>> Is there any parameter/variable in the system that is set dynamically and\n>> depends on the number of cores?\n>>\n>\n> The number of spins a spinlock goes through before sleeping,\n> spins_per_delay, is determined dynamically based on how often a tight loop\n> \"pays off\". But I don't think this is very sensitive to the exact number\n> of processors, just the difference between 1 and more than 1.\n>\n>\n",
"msg_date": "Sun, 25 May 2014 18:26:04 +0200",
"msg_from": "Dimitris Karampinas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Profiling PostgreSQL"
},
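Building such a "NUMA-aware" core list by hand is error-prone; the socket column of `lscpu -p` output can generate it. A sketch using canned sample output for a hypothetical two-socket quad-core box (swap the `sample` variable for the real `lscpu -p=CPU,SOCKET` on your machine):

```shell
# Sample `lscpu -p=CPU,SOCKET` output; comment lines start with '#'.
sample='# The following is the parsable format
0,0
1,0
2,1
3,1'

# Group CPU ids by socket so each group can be fed to `taskset -c`.
groups=$(printf '%s\n' "$sample" | awk -F, '/^[0-9]/ {c[$2] = c[$2] "," $1}
  END {for (s = 0; (s "") in c; s++) printf "socket %d: %s\n", s, substr(c[s], 2)}')
echo "$groups"
```

With the sample above this prints `socket 0: 0,1` and `socket 1: 2,3`, and either comma list can be passed directly to `taskset -c`.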
{
"msg_contents": "On Sun, May 25, 2014 at 1:26 PM, Dimitris Karampinas <[email protected]>wrote:\n\n> My deployment is \"NUMA-aware\". I allocate cores that reside on the same\n> socket. Once I reach the maximum number of cores, I start allocating cores\n> from a neighbouring socket.\n\n\nI'm not sure if it solves your issue, but on a NUMA environment and a recent\nversion of the Linux kernel, you should try to disable vm.zone_reclaim_mode, as\nit seems to cause performance degradation for database workloads, see [1]\nand [2].\n\n[1] http://www.postgresql.org/message-id/[email protected]\n[2]\nhttp://frosty-postgres.blogspot.com.br/2012/08/postgresql-numa-and-zone-reclaim-mode.html\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Sun, 25 May 2014 19:45:05 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Profiling PostgreSQL"
}
] |
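The zone_reclaim_mode advice in the thread above boils down to a standard sysctl change. A sketch (the paths are the usual Linux sysctl locations; changing the value needs root, which is why the commands are shown commented):

```shell
# Show the current setting; non-zero means the kernel prefers reclaiming
# local-node pages over allocating from a remote node, which hurts
# page-cache-heavy workloads like PostgreSQL.
# cat /proc/sys/vm/zone_reclaim_mode

# Disable it for the running kernel:
# sysctl -w vm.zone_reclaim_mode=0

# And persist it across reboots:
# echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf
```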
[
{
"msg_contents": "I just learned that NFS does not use a file system cache on the client side.\r\n\r\nOn the other hand, PostgreSQL relies on the file system cache for performance,\r\nbecause beyond a certain amount of shared_buffers performance will suffer.\r\n\r\nTogether these things seem to indicate that you cannot get good performance\r\nwith a large database over NFS since you cannot leverage memory speed.\r\n\r\nNow I wonder if there are any remedies (CacheFS?) and what experiences\r\npeople have made with the performance of large databases over NFS.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 May 2014 11:06:49 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": true,
"msg_subject": "NFS, file system cache and shared_buffers"
},
{
"msg_contents": "On 05/27/2014 02:06 PM, Albe Laurenz wrote:\n> I just learned that NFS does not use a file system cache on the client side.\n>\n> On the other hand, PostgreSQL relies on the file system cache for performance,\n> because beyond a certain amount of shared_buffers performance will suffer.\n>\n> Together these things seem to indicate that you cannot get good performance\n> with a large database over NFS since you cannot leverage memory speed.\n>\n> Now I wonder if there are any remedies (CacheFS?) and what experiences\n> people have made with the performance of large databases over NFS.\n\nI have no personal experience with NFS, but it sounds like a \nhigher-than-usual shared_buffers value would be good.\n\n- Heikki\n",
"msg_date": "Tue, 27 May 2014 15:26:56 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "* Heikki Linnakangas ([email protected]) wrote:\n> On 05/27/2014 02:06 PM, Albe Laurenz wrote:\n> >I just learned that NFS does not use a file system cache on the client side.\n> >\n> >On the other hand, PostgreSQL relies on the file system cache for performance,\n> >because beyond a certain amount of shared_buffers performance will suffer.\n> >\n> >Together these things seem to indicate that you cannot get good performance\n> >with a large database over NFS since you cannot leverage memory speed.\n> >\n> >Now I wonder if there are any remedies (CacheFS?) and what experiences\n> >people have made with the performance of large databases over NFS.\n> \n> I have no personal experience with NFS, but sounds like a\n> higher-than-usual shared_buffers value would be good.\n\nIt would certainly be worthwhile to test it. In the end you would,\nhopefully, end up with a situation where you're maximizing RAM usage -\nthe NFS server is certainly caching in *its* filesystem cache, while on\nthe PG server you're getting the benefit of shared_buffers without the\ndrawback of double-buffering (since you couldn't ever use the NFS\nserver's memory for shared_buffers anyway).\n\nAll that said, there has always been a recommendation of caution around\nusing NFS as a backing store for PG, or any RDBMS.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 27 May 2014 10:40:13 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
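A sketch of how one might test the higher-shared_buffers suggestion above: raise the setting, rerun the workload, and compare the buffer-cache hit ratio. The database name is a placeholder, and the commands are shown commented since they need a live server; the `pg_stat_database` columns are standard in PostgreSQL 9.x:

```shell
# Current setting:
# psql -d mydb -c "SHOW shared_buffers;"

# Cluster-wide buffer hit ratio; closer to 1.0 means fewer reads leave
# shared_buffers for the (possibly uncached, NFS-backed) filesystem:
# psql -d mydb -c "SELECT sum(blks_hit)::float /
#                         nullif(sum(blks_hit) + sum(blks_read), 0) AS hit_ratio
#                  FROM pg_stat_database;"
```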
{
"msg_contents": "Stephen Frost wrote:\r\n> All that said, there has always been a recommendation of caution around\r\n> using NFS as a backing store for PG, or any RDBMS..\r\n\r\nI know that Oracle recommends it - they even built an NFS client\r\ninto their database server to make the most of it.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Tue, 27 May 2014 15:00:01 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "On 05/27/2014 10:00 AM, Albe Laurenz wrote:\n\n> I know that Oracle recommends it - they even built an NFS client\n> into their database server to make the most of it.\n\nThat's odd. Every time the subject of NFS comes up, it's almost \nimmediately shot down with explicit advice to Never Do That(tm). It can \nbe kinda safe-ish if mounted in sync mode with caching disabled, but I'd \nnever use it on any of our systems.\n\nWe also have this in the Wiki:\n\nhttp://wiki.postgresql.org/wiki/Shared_Storage\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 27 May 2014 10:09:41 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
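The "sync mode with caching disabled" configuration Shaun mentions corresponds roughly to mount options like the following. The server name and paths are placeholders, and whether this is actually safe still depends on the NFS server's fsync behavior, so test it yourself before trusting it with data:

```shell
# hard: retry forever rather than returning I/O errors the database
#       does not expect; sync: write-through to the server;
# noac: disable attribute caching on the client.
# mount -t nfs -o rw,hard,sync,noac nfsserver:/export/pgdata /var/lib/pgsql/data

# The matching /etc/fstab line:
# nfsserver:/export/pgdata  /var/lib/pgsql/data  nfs  rw,hard,sync,noac  0 0
```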
{
"msg_contents": "\nOn 5/27/2014 9:09 AM, Shaun Thomas wrote:\n> On 05/27/2014 10:00 AM, Albe Laurenz wrote:\n>\n>> I know that Oracle recommends it - they even built an NFS client\n>> into their database server to make the most of it.\n>\n> That's odd. Every time the subject of NFS comes up, it's almost \n> immediately shot down with explicit advice to Never Do That(tm). It \n> can be kinda safe-ish if mounted in sync mode with caching disabled, \n> but I'd never use it on any of our systems.\n\nIt has been a long time since I was in the weeds of this issue, but the \ncrux is that it was (still is?) hard to be sure that the filesystem's \nbehavior was exactly as expected. My recollection of the Oracle story \nwas that they had to verify the end-to-end behavior, and essentially \ncertify its correctness to guarantee database ACID properties. So you \nneeded to be running a very specific version of the NFS code, configured \nin a very specific way. This isn't entirely inconsistent with the \nreference above that they \"built an NFS client\". That's something you \nmight need to do in order to be sure it behaves in the way you expect. \nPossibly the NFS implementations deployed today are more consistent and \ncorrect than was the case in the past. I wouldn't use a network \nfilesystem for any kind of database storage myself though.\n",
"msg_date": "Tue, 27 May 2014 09:18:47 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "On Tue, May 27, 2014 at 8:00 AM, Albe Laurenz <[email protected]>wrote:\n\n> Stephen Frost wrote:\n> > All that said, there has always been a recommendation of caution around\n> > using NFS as a backing store for PG, or any RDBMS..\n>\n> I know that Oracle recommends it - they even built an NFS client\n> into their database server to make the most of it.\n>\n\nLast I heard (which has been a while), Oracle supported specific brand\nnamed implementations of NFS, and warned against any others on a data\nintegrity basis.\n\nWhy would they implement their own client? Did they have to do something\nspecial in their client to make it safe?\n\nCheers,\n\nJeff",
"msg_date": "Tue, 27 May 2014 08:32:01 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "On Tue, May 27, 2014 at 4:06 AM, Albe Laurenz <[email protected]>wrote:\n\n> I just learned that NFS does not use a file system cache on the client\n> side.\n>\n\nMy experience suggested that it did something a little weirder than that.\n It would cache read data as long as it was clean, but once the data was\ndirtied and written back, it would drop it from the cache. But it probably\ndepends on a lot of variables and details I don't recall anymore.\n\n\n>\n> On the other hand, PostgreSQL relies on the file system cache for\n> performance,\n> because beyond a certain amount of shared_buffers performance will suffer.\n>\n\nSome people have some problems sometimes, and they are not readily\nreproducible (at least not in a publicly disclosable way, that I know of).\n Other people use large shared_buffers and have no problems at all, or none\nthat are fixed by lowering shared_buffers.\n\nWe should not elevate a rumor to a law.\n\n\n>\n> Together these things seem to indicate that you cannot get good performance\n> with a large database over NFS since you cannot leverage memory speed.\n>\n> Now I wonder if there are any remedies (CacheFS?) and what experiences\n> people have made with the performance of large databases over NFS.\n>\n\nI've only used it in cases where I didn't consider durability important,\nand even then didn't find it worth pursuing due to the performance. But I\nwas piggybacking on an existing resource, I didn't have an impressive NFS\nserver tuned specifically for this usage, so my experience probably doesn't\nmean much performance wise.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 27 May 2014 09:54:23 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "On Tue, May 27, 2014 at 4:06 AM, Albe Laurenz <[email protected]>wrote:\n\n> I just learned that NFS does not use a file system cache on the client\n> side.\n>\n\nThat's ... incorrect. NFS is cache-capable. NFSv3 (I think? It may have\nbeen v2) started sending metadata on file operations that was intended to\nallow for client-side caches. NFSv4 added all sorts of stateful behavior\nwhich allows for much more aggressive caching.\n\nWhere did you read that you could not use caching with NFS?\n\n\n\n-- \nJohn Melesky | Sr Database Administrator\n503.284.7581 x204 | [email protected] <[email protected]>\nRENTRAK | www.rentrak.com | NASDAQ: RENT\n\nNotice: This message is confidential and is intended only for the\nrecipient(s) named above. If you have received this message in error,\nor are not the named recipient(s), please immediately notify the\nsender and delete this message.",
"msg_date": "Tue, 27 May 2014 11:20:32 -0700",
"msg_from": "John Melesky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "John Melesky wrote:\r\n>> I just learned that NFS does not use a file system cache on the client side.\r\n> \r\n> That's ... incorrect. NFS is cache-capable. NFSv3 (I think? It may have been v2) started sending\r\n> metadata on file operations that was intended to allow for client-side caches. NFSv4 added all sorts\r\n> of stateful behavior which allows for much more aggressive caching.\r\n\r\nWhat do you mean by \"allows\"? Does it cache files in memory or not?\r\nDo you need additional software? Special configuration?\r\n\r\n> Where did you read that you could not use caching with NFS?\r\n\r\nI have it by hearsay from somebody who seemed knowledgeable, and the\r\nexistence of CacheFS seemed to indicate it was true.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Wed, 28 May 2014 07:50:40 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
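For the CacheFS question above: on Linux, persistent client-side NFS caching is provided by FS-Cache, where the `cachefilesd` daemon manages the on-disk cache and the `fsc` mount option opts a mount into it. A sketch (package and service names vary by distribution; the export and mount point are placeholders):

```shell
# Install and start the cache daemon (Debian/Ubuntu package name assumed):
# apt-get install cachefilesd && systemctl enable --now cachefilesd

# Mount the export with client-side file caching enabled:
# mount -t nfs -o fsc nfsserver:/export/data /mnt/data
```

Note that FS-Cache caches file data on local disk, so it helps repeated reads over a slow network rather than substituting for RAM-speed access.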
{
"msg_contents": "Jeff Janes wrote:\r\n>>> All that said, there has always been a recommendation of caution around\r\n>>> using NFS as a backing store for PG, or any RDBMS..\r\n>> \r\n>> \tI know that Oracle recommends it - they even built an NFS client\r\n>> \tinto their database server to make the most of it.\r\n> \r\n> Last I heard (which has been a while), Oracle supported specific brand named implementations of NFS,\r\n> and warned against any others on a data integrity basis.\r\n\r\nI couldn't find any detailed information, but it seems that only certain\r\nNFS devices are supported.\r\n\r\n> Why would they implement their own client? Did they have to do something special in their client to\r\n> make it safe?\r\n\r\nI think it is mostly a performance issue. Each backend mounts its own copy\r\nof the data files it needs.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Wed, 28 May 2014 08:41:18 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "\n>> Why would they implement their own client? Did they have to do something special in their client to\n>> make it safe?\n> \n> I think it is mostly a performance issue. Each backend mounts its own copy\n> of the data files it needs.\n\nI personally would never put PostgreSQL on an NFS share on Linux.\nUnless things have changed radically in the last couple years, Linux's\nNFS code is flaky and unreliable, including flat-out lying about whether\nstuff has been sent and received or not. This is why NetApp's NFS\nservers came with their own, proprietary, Linux kernel module.\n\nNFS on Solaris/Illumos is a different story. Not sure about FreeBSD.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 28 May 2014 17:36:04 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
},
{
"msg_contents": "I wrote:\r\n>Jeff Janes wrote:\r\n>>>> All that said, there has always been a recommendation of caution around\r\n>>>> using NFS as a backing store for PG, or any RDBMS..\r\n>>>\r\n>>> \tI know that Oracle recommends it - they even built an NFS client\r\n>>> \tinto their database server to make the most of it.\r\n>>\r\n>> Last I heard (which has been a while), Oracle supported specific brand named implementations of NFS,\r\n>> and warned against any others on a data integrity basis.\r\n>\r\n> I couldn't find any detailed information, but it seems that only certain\r\n> NFS devices are supported.\r\n\r\nFor the record: Oracle support told me that all NFS is supported on Linux,\r\nregardless of the device. \"Supported\" does not mean \"recommended\", of course.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 30 May 2014 07:12:05 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NFS, file system cache and shared_buffers"
}
] |
[
{
"msg_contents": "Hi,\n\nI wonder why the planner uses a Seq Scan instead of an Index Scan.\n\nHere is my table (partial):\ncontent.contents\n-------------------------+-----------------------------+-----------------------------------------------------------------\n id | bigint | not null default nextval('content.contents_id_seq'::regclass)\n version | integer | not null\n date_published | timestamp without time zone | \n moderation_status | character varying(50) | \n publication_status | character varying(30) | \n\nAnd indexes (there are some other indexes too):\n \"contents_id_pkey\" PRIMARY KEY, btree (id)\n \"contents_date_published_idx\" btree (date_published)\n \"contents_moderation_status_idx\" btree (moderation_status)\n \"contents_publication_status_idx\" btree (publication_status)\n\nI also tried creating the following indexes:\n \"contents_date_published_publication_status_moderation_statu_idx\" btree (date_published, publication_status, moderation_status)\n \"contents_publication_status_idx1\" btree ((publication_status::text))\n \"contents_moderation_status_idx1\" btree ((moderation_status::text))\n\nThen for this query (generated by Hibernate):\nexplain (analyze, buffers) select count(*) as y0_ from content.contents this_ inner join content.content_categories cat1_ on this_.CONTENT_CATEGORY_ID=cat1_.ID where cat1_.name in ([...])\nand this_.date_published<='2014-05-26 12:23:31.557000 +02:00:00'\nand (this_.PUBLICATION_STATUS is null or this_.PUBLICATION_STATUS<>'DRAFT')\nand (this_.moderation_status is null or this_.moderation_status<>'DANGEROUS')\nand exists(select * from content.content_visibilities cv where cv.content_id = this_.ID and cv.user_group_id in (1,2));\n\nThe planner creates this plan:\n QUERY PLAN 
\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=31706.84..106020.81 rows=21871 width=2076) (actual time=1197.658..6012.406 rows=430218 loops=1)\n Hash Cond: (this_.id = cv.content_id)\n Buffers: shared hit=5 read=59031 written=3, temp read=47611 written=47549\n -> Hash Join (cost=2.22..56618.11 rows=22881 width=2076) (actual time=0.163..1977.304 rows=430221 loops=1)\n Hash Cond: (this_.content_category_id = cat1_.id)\n Buffers: shared hit=1 read=46829 written=1\n -> Seq Scan on contents this_ (cost=0.00..54713.92 rows=446176 width=2030) (actual time=0.048..915.724 rows=450517 loops=1)\n Filter: ((date_published <= '2014-05-26 12:23:31.557'::timestamp without time zone) AND ((publication_status IS NULL) OR ((publication_status)::text <> 'DRAFT'::text)) AND ((moderation_status IS NULL) OR ((moderation_status)::text <> 'DANGEROUS'::text)))\n Rows Removed by Filter: 50\n Buffers: shared read=46829 written=1\n -> Hash (cost=2.17..2.17 rows=4 width=46) (actual time=0.089..0.089 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n Buffers: shared hit=1\n -> Seq Scan on content_categories cat1_ (cost=0.00..2.17 rows=4 width=46) (actual time=0.053..0.076 rows=4 loops=1)\n Filter: ((name)::text = ANY ('{przeglad-prasy/rp,przeglad-prasy/parkiet,komunikat-z-rynku-pap-emitent,komunikat-z-rynku-pap-depesze}'::text[]))\n Rows Removed by Filter: 74\n Buffers: shared hit=1\n -> Hash (cost=24435.09..24435.09 rows=443083 width=8) (actual time=1197.146..1197.146 rows=447624 loops=1)\n Buckets: 4096 Batches: 32 Memory Usage: 560kB\n Buffers: shared hit=4 read=12202 written=2, temp written=1467\n -> Bitmap Heap Scan on content_visibilities cv (cost=7614.55..24435.09 rows=443083 width=8) (actual time=61.034..647.729 rows=447624 
loops=1)\n Recheck Cond: (user_group_id = ANY ('{1,2}'::bigint[]))\n Buffers: shared hit=4 read=12202 written=2\n -> Bitmap Index Scan on content_visibilities_user_group_id_idx (cost=0.00..7503.78 rows=443083 width=0) (actual time=58.680..58.680 rows=447626 loops=1)\n Index Cond: (user_group_id = ANY ('{1,2}'::bigint[]))\n Buffers: shared hit=3 read=1226\n Total runtime: 6364.689 ms\n(27 wierszy)\n\nThe suspicious part is:\n -> Seq Scan on contents this_ (cost=0.00..54713.92 \nrows=446176 width=2030) (actual time=0.048..915.724 rows=450517 loops=1)\n \n Filter: ((date_published <= '2014-05-26 12:23:31.557'::timestamp \nwithout time zone) AND ((publication_status IS NULL) OR \n((publication_status)::text <> 'DRAFT'::text)) AND \n((moderation_status IS NULL) OR ((moderation_status)::text <> \n'DANGEROUS'::text)))\n\nI don't understand why planner doesn't use indexes. The problem is there are about 0.5M rows satisfying condition (almost every row in the table). Could you please explain this behavior?\n\nI'm using PostgreSQL 9.2.8 on Ubuntu 12.04 LTS x86_64\n\nBest regards,\nGrzegorz Olszewski",
"msg_date": "Tue, 27 May 2014 23:09:45 +0200",
"msg_from": "Grzegorz Olszewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner doesn't take indexes into account"
},
{
"msg_contents": "What is random_page_cost and seq_page_cost in your server?\nAnd how many rows does the table have?\n\n\nOn Tue, May 27, 2014 at 2:09 PM, Grzegorz Olszewski <\[email protected]> wrote:\n\n> Hi,\n>\n> I wonder why planner uses Seq Scan instead of Index Scan.\n>\n> Here is my table (partial):\n> content.contents\n>\n> -------------------------+-----------------------------+-----------------------------------------------------------------\n> id | bigint | niepusty\n> domyślnie nextval('content.contents_id_seq'::regclass)\n> version | integer | niepusty\n> date_published | timestamp without time zone |\n> moderation_status | character varying(50) |\n> publication_status | character varying(30) |\n>\n> And indexes (there are some other indexes too):\n> \"contents_id_pkey\" PRIMARY KEY, btree (id)\n> \"contents_date_published_idx\" btree (date_published)\n> \"contents_moderation_status_idx\" btree (moderation_status)\n> \"contents_publication_status_idx\" btree (publication_status)\n>\n> I tried also creating following indexes:\n> \"contents_date_published_publication_status_moderation_statu_idx\"\n> btree (date_published, publication_status, moderation_status)\n> \"contents_publication_status_idx1\" btree ((publication_status::text))\n> \"contents_moderation_status_idx1\" btree ((moderation_status::text))\n>\n> Then for this query (genrated by Hibernate):\n> explain (analyze, buffers) select count(*) as y0_ from content.contents\n> this_ inner join content.content_categories cat1_ on\n> this_.CONTENT_CATEGORY_ID=cat1_.ID where cat1_.name in ([...])\n> and this_.date_published<='2014-05-26 12:23:31.557000 +02:00:00'\n> and (this_.PUBLICATION_STATUS is null or this_.PUBLICATION_STATUS<>'DRAFT')\n> and (this_.moderation_status is null or\n> this_.moderation_status<>'DANGEROUS')\n> and exists(select * from content.content_visibilities cv where\n> cv.content_id = this_.ID and cv.user_group_id in (1,2));\n>\n> Planner creates such plan:\n>\n> QUERY\n> 
PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Semi Join (cost=31706.84..106020.81 rows=21871 width=2076) (actual\n> time=1197.658..6012.406 rows=430218 loops=1)\n> Hash Cond: (this_.id = cv.content_id)\n> Buffers: shared hit=5 read=59031 written=3, temp read=47611\n> written=47549\n> -> Hash Join (cost=2.22..56618.11 rows=22881 width=2076) (actual time=\n> 0.163..1977.304 rows=430221 loops=1)\n> Hash Cond: (this_.content_category_id = cat1_.id)\n> Buffers: shared hit=1 read=46829 written=1\n> -> Seq Scan on contents this_ (cost=0.00..54713.92 rows=446176\n> width=2030) (actual time=0.048..915.724 rows=450517 loops=1)\n> Filter: ((date_published <= '2014-05-26\n> 12:23:31.557'::timestamp without time zone) AND ((publication_status IS\n> NULL) OR ((publication_status)::text <> 'DRAFT'::text)) AND\n> ((moderation_status IS NULL) OR ((moderation_status)::text <>\n> 'DANGEROUS'::text)))\n> Rows Removed by Filter: 50\n> Buffers: shared read=46829 written=1\n> -> Hash (cost=2.17..2.17 rows=4 width=46) (actual\n> time=0.089..0.089 rows=4 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> Buffers: shared hit=1\n> -> Seq Scan on content_categories cat1_ (cost=0.00..2.17\n> rows=4 width=46) (actual time=0.053..0.076 rows=4 loops=1)\n> Filter: ((name)::text = ANY\n> ('{przeglad-prasy/rp,przeglad-prasy/parkiet,komunikat-z-rynku-pap-emitent,komunikat-z-rynku-pap-depesze}'::text[]))\n> Rows Removed by Filter: 74\n> Buffers: shared hit=1\n> -> Hash (cost=24435.09..24435.09 rows=443083 width=8) (actual\n> time=1197.146..1197.146 rows=447624 loops=1)\n> Buckets: 4096 Batches: 32 Memory Usage: 560kB\n> Buffers: shared hit=4 read=12202 written=2, temp written=1467\n> -> Bitmap Heap Scan on content_visibilities cv\n> 
(cost=7614.55..24435.09 rows=443083 width=8) (actual time=61.034..647.729\n> rows=447624 loops=1)\n> Recheck Cond: (user_group_id = ANY ('{1,2}'::bigint[]))\n> Buffers: shared hit=4 read=12202 written=2\n> -> Bitmap Index Scan on\n> content_visibilities_user_group_id_idx (cost=0.00..7503.78 rows=443083\n> width=0) (actual time=58.680..58.680 rows=447626 loops=1)\n> Index Cond: (user_group_id = ANY ('{1,2}'::bigint[]))\n> Buffers: shared hit=3 read=1226\n> Total runtime: 6364.689 ms\n> (27 wierszy)\n>\n> The suspicious part is:\n> -> Seq Scan on contents this_ (cost=0.00..54713.92 rows=446176\n> width=2030) (actual time=0.048..915.724 rows=450517 loops=1)\n> Filter: ((date_published <= '2014-05-26\n> 12:23:31.557'::timestamp without time zone) AND ((publication_status IS\n> NULL) OR ((publication_status)::text <> 'DRAFT'::text)) AND\n> ((moderation_status IS NULL) OR ((moderation_status)::text <>\n> 'DANGEROUS'::text)))\n>\n> I don't understand why planner doesn't use indexes. The problem is there\n> are about 0.5M rows satisfying condition (almost every row in the table).\n> Could you please explain this behavior?\n>\n> I'm using PostgreSQL 9.2.8 on Ubuntu 12.04 LTS x86_64\n>\n> Best regards,\n> Grzegorz Olszewski\n>",
"msg_date": "Tue, 27 May 2014 14:14:21 -0700",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner doesn't take indexes into account"
},
{
"msg_contents": "random_page_cost = 4.0\nseq_page_cost = 1.0\n\nThere is about 500,000 rows and about 500 new rows each business day.\n\nAbout 96% of rows meet given conditions, that is, count shoud be about 480,000.\n\nBR,\nGrzegorz Olszewski\n\nDate: Tue, 27 May 2014 14:14:21 -0700\nSubject: Re: [PERFORM] Planner doesn't take indexes into account\nFrom: [email protected]\nTo: [email protected]\nCC: [email protected]\n\nWhat is random_page_cost and seq_page_cost in your server?And how many rows does the table have?\n\nOn Tue, May 27, 2014 at 2:09 PM, Grzegorz Olszewski <[email protected]> wrote:\n\n\n\n\nHi,\n\nI wonder why planner uses Seq Scan instead of Index Scan.\n\nHere is my table (partial):\ncontent.contents\n-------------------------+-----------------------------+-----------------------------------------------------------------\n\n id | bigint | niepusty domyślnie nextval('content.contents_id_seq'::regclass)\n version | integer | niepusty\n date_published | timestamp without time zone | \n\n moderation_status | character varying(50) | \n publication_status | character varying(30) | \n\nAnd indexes (there are some other indexes too):\n \"contents_id_pkey\" PRIMARY KEY, btree (id)\n\n \"contents_date_published_idx\" btree (date_published)\n \"contents_moderation_status_idx\" btree (moderation_status)\n \"contents_publication_status_idx\" btree (publication_status)\n\n\nI tried also creating following indexes:\n \"contents_date_published_publication_status_moderation_statu_idx\" btree (date_published, publication_status, moderation_status)\n \"contents_publication_status_idx1\" btree ((publication_status::text))\n\n \"contents_moderation_status_idx1\" btree ((moderation_status::text))\n\nThen for this query (genrated by Hibernate):\nexplain (analyze, buffers) select count(*) as y0_ from content.contents this_ inner join content.content_categories cat1_ on this_.CONTENT_CATEGORY_ID=cat1_.ID where cat1_.name in ([...])\n\nand this_.date_published<='2014-05-26 
12:23:31.557000 +02:00:00'\nand (this_.PUBLICATION_STATUS is null or this_.PUBLICATION_STATUS<>'DRAFT')\nand (this_.moderation_status is null or this_.moderation_status<>'DANGEROUS')\n\nand exists(select * from content.content_visibilities cv where cv.content_id = this_.ID and cv.user_group_id in (1,2));\n\nPlanner creates such plan:\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Hash Semi Join (cost=31706.84..106020.81 rows=21871 width=2076) (actual time=1197.658..6012.406 rows=430218 loops=1)\n Hash Cond: (this_.id = cv.content_id)\n Buffers: shared hit=5 read=59031 written=3, temp read=47611 written=47549\n\n -> Hash Join (cost=2.22..56618.11 rows=22881 width=2076) (actual time=0.163..1977.304 rows=430221 loops=1)\n Hash Cond: (this_.content_category_id = cat1_.id)\n\n Buffers: shared hit=1 read=46829 written=1\n -> Seq Scan on contents this_ (cost=0.00..54713.92 rows=446176 width=2030) (actual time=0.048..915.724 rows=450517 loops=1)\n Filter: ((date_published <= '2014-05-26 12:23:31.557'::timestamp without time zone) AND ((publication_status IS NULL) OR ((publication_status)::text <> 'DRAFT'::text)) AND ((moderation_status IS NULL) OR ((moderation_status)::text <> 'DANGEROUS'::text)))\n\n Rows Removed by Filter: 50\n Buffers: shared read=46829 written=1\n -> Hash (cost=2.17..2.17 rows=4 width=46) (actual time=0.089..0.089 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n\n Buffers: shared hit=1\n -> Seq Scan on content_categories cat1_ (cost=0.00..2.17 rows=4 width=46) (actual time=0.053..0.076 rows=4 loops=1)\n Filter: ((name)::text = ANY ('{przeglad-prasy/rp,przeglad-prasy/parkiet,komunikat-z-rynku-pap-emitent,komunikat-z-rynku-pap-depesze}'::text[]))\n\n Rows Removed by Filter: 74\n Buffers: shared 
hit=1\n -> Hash (cost=24435.09..24435.09 rows=443083 width=8) (actual time=1197.146..1197.146 rows=447624 loops=1)\n Buckets: 4096 Batches: 32 Memory Usage: 560kB\n\n Buffers: shared hit=4 read=12202 written=2, temp written=1467\n -> Bitmap Heap Scan on content_visibilities cv (cost=7614.55..24435.09 rows=443083 width=8) (actual time=61.034..647.729 rows=447624 loops=1)\n\n Recheck Cond: (user_group_id = ANY ('{1,2}'::bigint[]))\n Buffers: shared hit=4 read=12202 written=2\n -> Bitmap Index Scan on content_visibilities_user_group_id_idx (cost=0.00..7503.78 rows=443083 width=0) (actual time=58.680..58.680 rows=447626 loops=1)\n\n Index Cond: (user_group_id = ANY ('{1,2}'::bigint[]))\n Buffers: shared hit=3 read=1226\n Total runtime: 6364.689 ms\n(27 wierszy)\n\nThe suspicious part is:\n -> Seq Scan on contents this_ (cost=0.00..54713.92 \nrows=446176 width=2030) (actual time=0.048..915.724 rows=450517 loops=1)\n \n Filter: ((date_published <= '2014-05-26 12:23:31.557'::timestamp \nwithout time zone) AND ((publication_status IS NULL) OR \n((publication_status)::text <> 'DRAFT'::text)) AND \n((moderation_status IS NULL) OR ((moderation_status)::text <> \n'DANGEROUS'::text)))\n\nI don't understand why planner doesn't use indexes. The problem is there are about 0.5M rows satisfying condition (almost every row in the table). 
Could you please explain this behavior?\n\n\nI'm using PostgreSQL 9.2.8 on Ubuntu 12.04 LTS x86_64\n\nBest regards,\nGrzegorz Olszewski",
"msg_date": "Wed, 28 May 2014 11:59:50 +0200",
"msg_from": "Grzegorz Olszewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner doesn't take indexes into account"
},
{
"msg_contents": "On 05/28/2014 12:59 PM, Grzegorz Olszewski wrote:\n> random_page_cost = 4.0\n> seq_page_cost = 1.0\n>\n> There is about 500,000 rows and about 500 new rows each business day.\n>\n> About 96% of rows meet given conditions, that is, count shoud be about 480,000.\n\nWhen such a large percentage of the rows match, a sequential scan is \nindeed a better plan than an index scan. Sequential access is much \nfaster than random access.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 May 2014 13:22:43 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner doesn't take indexes into account"
},
{
"msg_contents": "On 05/28/2014 04:59 AM, Grzegorz Olszewski wrote:\n\n> There is about 500,000 rows and about 500 new rows each business day.\n>\n> About 96% of rows meet given conditions, that is, count shoud be about\n> 480,000.\n\nHeikki is right on this. Indexes are not a magic secret sauce that are \nalways used simply because they exist. Think of it like this...\n\nIf the table really matches about 480,000 rows, by forcing it to use the \nindex, it has to perform *at least* 480,000 random seeks. Even if you \nhave a high-performance SSD array that can do 100,000 random reads per \nsecond, you will need about five seconds just to read the data.\n\nA sequence scan can perform that same operation in a fraction of a \nsecond because it's faster to read the entire table and filter out the \n*non* matching rows.\n\nIndexes are really only used, or useful, when the number of matches is \nmuch lower than the row count of the table. I highly recommend reading \nup on cardinality and selectivity before creating more indexes. This \npage in the documentation does a really good job:\n\nhttp://www.postgresql.org/docs/9.3/static/row-estimation-examples.html\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 May 2014 08:31:38 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner doesn't take indexes into account"
},
{
"msg_contents": "OK, thank you very much. I've tried similar query but with very few rows matching. In this case index was present in the plan.\n\nBR,\nGrzegorz Olszewski\n\n> Date: Wed, 28 May 2014 08:31:38 -0500\n> From: [email protected]\n> To: [email protected]; [email protected]\n> CC: [email protected]\n> Subject: Re: [PERFORM] Planner doesn't take indexes into account\n> \n> On 05/28/2014 04:59 AM, Grzegorz Olszewski wrote:\n> \n> > There is about 500,000 rows and about 500 new rows each business day.\n> >\n> > About 96% of rows meet given conditions, that is, count shoud be about\n> > 480,000.\n> \n> Heikki is right on this. Indexes are not a magic secret sauce that are \n> always used simply because they exist. Think of it like this...\n> \n> If the table really matches about 480,000 rows, by forcing it to use the \n> index, it has to perform *at least* 480,000 random seeks. Even if you \n> have a high-performance SSD array that can do 100,000 random reads per \n> second, you will need about five seconds just to read the data.\n> \n> A sequence scan can perform that same operation in a fraction of a \n> second because it's faster to read the entire table and filter out the \n> *non* matching rows.\n> \n> Indexes are really only used, or useful, when the number of matches is \n> much lower than the row count of the table. I highly recommend reading \n> up on cardinality and selectivity before creating more indexes. This \n> page in the documentation does a really good job:\n> \n> http://www.postgresql.org/docs/9.3/static/row-estimation-examples.html\n> \n> -- \n> Shaun Thomas\n> OptionsHouse, LLC | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 28 May 2014 23:03:22 +0200",
"msg_from": "Grzegorz Olszewski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planner doesn't take indexes into account"
}
] |
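Shaun's seek-time argument in the thread above can be sketched numerically. A minimal sketch; the random-read rate, row width, and sequential throughput below are illustrative assumptions taken from or added to the discussion, not measurements:

```python
# Back-of-the-envelope cost model behind the planner's choice above.
# All rates are illustrative assumptions, not measured values.
matching_rows = 480_000            # ~96% of a 500,000-row table
random_reads_per_sec = 100_000     # optimistic SSD random-read rate (from the thread)
index_scan_secs = matching_rows / random_reads_per_sec   # seconds of random seeks

row_bytes = 200                    # assumed average row width
table_mb = 500_000 * row_bytes / (1024 * 1024)           # ~95 MB table
seq_mb_per_sec = 500               # assumed sequential throughput
seq_scan_secs = table_mb / seq_mb_per_sec

print(round(index_scan_secs, 1), round(seq_scan_secs, 2))  # 4.8 0.19
```

At 96% selectivity the sequential scan wins by more than an order of magnitude under these assumptions, which is why the planner ignores the index.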
[
{
"msg_contents": "Hi all,\n\nI'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago) \nLinux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux Server specs:\n4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)\n441 GB of RAM I have a schema when multi process daemon is setted up on the system and each process holds 1 postgresql session.\n\nEach process of this daemon run readonly queries over the database.\nIn normal situation it at most 35 ms for queries but from time to time (at a random point of time) each database session hanges in some very strange semop call. Here is a part of the strace:\n\n41733 20:15:09.682186 lseek(41, 0, SEEK_END) = 16384 <0.000007>\n41733 20:15:09.682218 lseek(42, 0, SEEK_END) = 16384 <0.000008>\n41733 20:15:09.682258 lseek(43, 0, SEEK_END) = 8192 <0.000007>\n41733 20:15:09.682290 lseek(44, 0, SEEK_END) = 16384 <0.000007>\n41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>\n41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>\n41733 20:15:11.769030 brk(0x1b79000) = 0x1b79000 <0.000028>\n41733 20:15:11.774384 lseek(20, 0, SEEK_END) = 81920 <0.000032>\n41733 20:15:11.774591 lseek(35, 0, SEEK_END) = 98263040 <0.000084>\n41733 20:15:11.775000 brk(0x1b9b000) = 0x1b9b000 <0.000021>\n41733 20:15:11.775741 lseek(35, 0, SEEK_END) = 98263040 <0.000056>\n41733 20:15:11.776763 brk(0x19b9000) = 0x19b9000 <0.000329>\n41733 20:15:11.777195 sendto(9,\n\n41733 20:45:58.300097 lseek(39, 0, SEEK_END) = 32768 <0.000015>\n41733 20:45:58.300167 lseek(40, 0, SEEK_END) = 32768 <0.000015>\n41733 20:45:58.300244 lseek(41, 0, SEEK_END) = 16384 <0.000015>\n41733 20:45:58.300314 lseek(42, 0, SEEK_END) = 16384 <0.000015>\n41733 20:45:58.300384 lseek(43, 0, SEEK_END) = 8192 <0.000014>\n41733 20:45:58.300452 lseek(44, 0, SEEK_END) = 16384 <0.000015>\n41733 20:45:58.300599 brk(0x1a79000) = 0x1a79000 <0.000020>\n41733 20:45:58.306472 
brk(0x1b79000) = 0x1b79000 <0.000024>\n41733 20:45:58.311412 lseek(20, 0, SEEK_END) = 81920 <0.000026>\n41733 20:45:58.311649 lseek(35, 0, SEEK_END) = 98263040 <0.000022>\n41733 20:45:58.312049 brk(0x1b9f000) = 0x1b9f000 <0.000021>\n41733 20:45:58.312502 lseek(35, 0, SEEK_END) = 98263040 <0.000024>41733 20:45:58.313207 brk(0x19b9000) = 0x19b9000 <0.000243>\n41733 20:45:58.313544 sendto(10,\nYou may see that semop took 2 seconds from the whole system call.\nSame semops could be find in other database sessions.\n\nCould you point me how can i find\n\nBest Regards,\nSuren Arustamyan\[email protected]\nHi all,I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago) Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64 x86_64 GNU/LinuxServer specs:4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)441 GB of RAMI have a schema when multi process daemon is setted up on the system and each process holds 1 postgresql session.Each process of this daemon run readonly queries over the database.In normal situation it at most 35 ms for queries but from time to time (at a random point of time) each database session hanges in some very strange semop call. 
Here is a part of the strace:41733 20:15:09.682186 lseek(41, 0, SEEK_END) = 16384 <0.000007>41733 20:15:09.682218 lseek(42, 0, SEEK_END) = 16384 <0.000008>41733 20:15:09.682258 lseek(43, 0, SEEK_END) = 8192 <0.000007>41733 20:15:09.682290 lseek(44, 0, SEEK_END) = 16384 <0.000007>41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>41733 20:15:11.769030 brk(0x1b79000) = 0x1b79000 <0.000028>41733 20:15:11.774384 lseek(20, 0, SEEK_END) = 81920 <0.000032>41733 20:15:11.774591 lseek(35, 0, SEEK_END) = 98263040 <0.000084>41733 20:15:11.775000 brk(0x1b9b000) = 0x1b9b000 <0.000021>41733 20:15:11.775741 lseek(35, 0, SEEK_END) = 98263040 <0.000056>41733 20:15:11.776763 brk(0x19b9000) = 0x19b9000 <0.000329>41733 20:15:11.777195 sendto(9,41733 20:45:58.300097 lseek(39, 0, SEEK_END) = 32768 <0.000015>41733 20:45:58.300167 lseek(40, 0, SEEK_END) = 32768 <0.000015>41733 20:45:58.300244 lseek(41, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300314 lseek(42, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300384 lseek(43, 0, SEEK_END) = 8192 <0.000014>41733 20:45:58.300452 lseek(44, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300599 brk(0x1a79000) = 0x1a79000 <0.000020>41733 20:45:58.306472 brk(0x1b79000) = 0x1b79000 <0.000024>41733 20:45:58.311412 lseek(20, 0, SEEK_END) = 81920 <0.000026>41733 20:45:58.311649 lseek(35, 0, SEEK_END) = 98263040 <0.000022>41733 20:45:58.312049 brk(0x1b9f000) = 0x1b9f000 <0.000021>41733 20:45:58.312502 lseek(35, 0, SEEK_END) = 98263040 <0.000024>41733 20:45:58.313207 brk(0x19b9000) = 0x19b9000 <0.000243>41733 20:45:58.313544 sendto(10,You may see that semop took 2 seconds from the whole system call.Same semops could be find in other database sessions.Could you point me how can i findBest Regards,Suren [email protected]",
"msg_date": "Fri, 30 May 2014 18:55:13 +0400",
"msg_from": "=?UTF-8?B?0KHRg9GA0LXQvSDQkNGA0YPRgdGC0LDQvNGP0L0=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?U0VMRUNUIG91dGFnZSBpbiBzZW1vcA==?="
},
{
"msg_contents": "Excuse me last e-mail was not full.\nHere is the rest:\n\nOnce more trace where problem is seen:\n41733 20:15:09.682258 lseek(43, 0, SEEK_END) = 8192 <0.000007>\n41733 20:15:09.682290 lseek(44, 0, SEEK_END) = 16384 <0.000007>\n41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>\n41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>\n41733 20:15:11.769030 brk(0x1b79000) = 0x1b79000 <0.000028>\nTrace without the problem\n41733 20:45:58.300384 lseek(43, 0, SEEK_END) = 8192 <0.000014>\n41733 20:45:58.300452 lseek(44, 0, SEEK_END) = 16384 <0.000015>\n41733 20:45:58.300599 brk(0x1a79000) = 0x1a79000 <0.000020>\n41733 20:45:58.306472 brk(0x1b79000) = 0x1b79000 <0.000024>\n41733 20:45:58.311412 lseek(20, 0, SEEK_END) = 81920 <0.000026>\n41733 20:45:58.311649 lseek(35, 0, SEEK_END) = 98263040 <0.000022>\n41733 20:45:58.312049 brk(0x1b9f000) = 0x1b9f000 <0.000021>\n41733 20:45:58.312502 lseek(35, 0, SEEK_END) = 98263040 <0.000024>\n41733 20:45:58.313207 brk(0x19b9000) = 0x19b9000 <0.000243>\n41733 20:45:58.313544 sendto(10,\nPlease let me know if you have any ideas what those outages are and how can i remove them. \n\n\n\nFri, 30 May 2014 18:55:13 +0400 от Сурен Арустамян <[email protected]>:\n>Hi all,\n>\n>I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago) \n>Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux Server specs:\n>4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)\n>441 GB of RAM I have a schema when multi process daemon is setted up on the system and each process holds 1 postgresql session.\n>\n>Each process of this daemon run readonly queries over the database.\n>In normal situation it at most 35 ms for queries but from time to time (at a random point of time) each database session hanges in some very strange semop call. 
Here is a part of the strace:\n>\n>41733 20:15:09.682186 lseek(41, 0, SEEK_END) = 16384 <0.000007>\n>41733 20:15:09.682218 lseek(42, 0, SEEK_END) = 16384 <0.000008>\n>41733 20:15:09.682258 lseek(43, 0, SEEK_END) = 8192 <0.000007>\n>41733 20:15:09.682290 lseek(44, 0, SEEK_END) = 16384 <0.000007>\n>41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>\n>41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>\n>41733 20:15:11.769030 brk(0x1b79000) = 0x1b79000 <0.000028>\n>41733 20:15:11.774384 lseek(20, 0, SEEK_END) = 81920 <0.000032>\n>41733 20:15:11.774591 lseek(35, 0, SEEK_END) = 98263040 <0.000084>\n>41733 20:15:11.775000 brk(0x1b9b000) = 0x1b9b000 <0.000021>\n>41733 20:15:11.775741 lseek(35, 0, SEEK_END) = 98263040 <0.000056>\n>41733 20:15:11.776763 brk(0x19b9000) = 0x19b9000 <0.000329>\n>41733 20:15:11.777195 sendto(9,\n>\n>41733 20:45:58.300097 lseek(39, 0, SEEK_END) = 32768 <0.000015>\n>41733 20:45:58.300167 lseek(40, 0, SEEK_END) = 32768 <0.000015>\n>41733 20:45:58.300244 lseek(41, 0, SEEK_END) = 16384 <0.000015>\n>41733 20:45:58.300314 lseek(42, 0, SEEK_END) = 16384 <0.000015>\n>41733 20:45:58.300384 lseek(43, 0, SEEK_END) = 8192 <0.000014>\n>41733 20:45:58.300452 lseek(44, 0, SEEK_END) = 16384 <0.000015>\n>41733 20:45:58.300599 brk(0x1a79000) = 0x1a79000 <0.000020>\n>41733 20:45:58.306472 brk(0x1b79000) = 0x1b79000 <0.000024>\n>41733 20:45:58.311412 lseek(20, 0, SEEK_END) = 81920 <0.000026>\n>41733 20:45:58.311649 lseek(35, 0, SEEK_END) = 98263040 <0.000022>\n>41733 20:45:58.312049 brk(0x1b9f000) = 0x1b9f000 <0.000021>\n>41733 20:45:58.312502 lseek(35, 0, SEEK_END) = 98263040 <0.000024>41733 20:45:58.313207 brk(0x19b9000) = 0x19b9000 <0.000243>\n>41733 20:45:58.313544 sendto(10,\n>You may see that semop took 2 seconds from the whole system call.\n>Same semops could be find in other database sessions.\n>\n>Could you point me how can i find\n>\n>Best Regards,\n>Suren Arustamyan\n>[email protected]\n\nBest Regards,\nСурен Арустамян\[email 
protected]\n\nExcuse me last e-mail was not full.Here is the rest:Once more trace where problem is seen:41733 20:15:09.682258 lseek(43, 0, SEEK_END) = 8192 <0.000007>41733 20:15:09.682290 lseek(44, 0, SEEK_END) = 16384 <0.000007>41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>41733 20:15:11.769030 brk(0x1b79000) = 0x1b79000 <0.000028>Trace without the problem41733 20:45:58.300384 lseek(43, 0, SEEK_END) = 8192 <0.000014>41733 20:45:58.300452 lseek(44, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300599 brk(0x1a79000) = 0x1a79000 <0.000020>41733 20:45:58.306472 brk(0x1b79000) = 0x1b79000 <0.000024>41733 20:45:58.311412 lseek(20, 0, SEEK_END) = 81920 <0.000026>41733 20:45:58.311649 lseek(35, 0, SEEK_END) = 98263040 <0.000022>41733 20:45:58.312049 brk(0x1b9f000) = 0x1b9f000 <0.000021>41733 20:45:58.312502 lseek(35, 0, SEEK_END) = 98263040 <0.000024>41733 20:45:58.313207 brk(0x19b9000) = 0x19b9000 <0.000243>41733 20:45:58.313544 sendto(10,Please let me know if you have any ideas what those outages are and how can i remove them. Fri, 30 May 2014 18:55:13 +0400 от Сурен Арустамян <[email protected]>:\n\n\n\n\n\n\n\nHi all,I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago) Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64 x86_64 GNU/LinuxServer specs:4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)441 GB of RAMI have a schema when multi process daemon is setted up on the system and each process holds 1 postgresql session.Each process of this daemon run readonly queries over the database.In normal situation it at most 35 ms for queries but from time to time (at a random point of time) each database session hanges in some very strange semop call. 
Here is a part of the strace:41733 20:15:09.682186 lseek(41, 0, SEEK_END) = 16384 <0.000007>41733 20:15:09.682218 lseek(42, 0, SEEK_END) = 16384 <0.000008>41733 20:15:09.682258 lseek(43, 0, SEEK_END) = 8192 <0.000007>41733 20:15:09.682290 lseek(44, 0, SEEK_END) = 16384 <0.000007>41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>41733 20:15:11.769030 brk(0x1b79000) = 0x1b79000 <0.000028>41733 20:15:11.774384 lseek(20, 0, SEEK_END) = 81920 <0.000032>41733 20:15:11.774591 lseek(35, 0, SEEK_END) = 98263040 <0.000084>41733 20:15:11.775000 brk(0x1b9b000) = 0x1b9b000 <0.000021>41733 20:15:11.775741 lseek(35, 0, SEEK_END) = 98263040 <0.000056>41733 20:15:11.776763 brk(0x19b9000) = 0x19b9000 <0.000329>41733 20:15:11.777195 sendto(9,41733 20:45:58.300097 lseek(39, 0, SEEK_END) = 32768 <0.000015>41733 20:45:58.300167 lseek(40, 0, SEEK_END) = 32768 <0.000015>41733 20:45:58.300244 lseek(41, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300314 lseek(42, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300384 lseek(43, 0, SEEK_END) = 8192 <0.000014>41733 20:45:58.300452 lseek(44, 0, SEEK_END) = 16384 <0.000015>41733 20:45:58.300599 brk(0x1a79000) = 0x1a79000 <0.000020>41733 20:45:58.306472 brk(0x1b79000) = 0x1b79000 <0.000024>41733 20:45:58.311412 lseek(20, 0, SEEK_END) = 81920 <0.000026>41733 20:45:58.311649 lseek(35, 0, SEEK_END) = 98263040 <0.000022>41733 20:45:58.312049 brk(0x1b9f000) = 0x1b9f000 <0.000021>41733 20:45:58.312502 lseek(35, 0, SEEK_END) = 98263040 <0.000024>41733 20:45:58.313207 brk(0x19b9000) = 0x19b9000 <0.000243>41733 20:45:58.313544 sendto(10,You may see that semop took 2 seconds from the whole system call.Same semops could be find in other database sessions.Could you point me how can i findBest Regards,Suren [email protected]\n\n\n\n\n\n\n\n\nBest Regards,Сурен Арустамян[email protected]",
"msg_date": "Fri, 30 May 2014 19:00:53 +0400",
"msg_from": "=?UTF-8?B?0KHRg9GA0LXQvSDQkNGA0YPRgdGC0LDQvNGP0L0=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?UmU6IFNFTEVDVCBvdXRhZ2UgaW4gc2Vtb3A=?="
},
{
"msg_contents": "Сурен Арустамян wrote:\r\n\r\n> I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago)\r\n> \r\n> Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64\r\n> x86_64 GNU/Linux\r\n> \r\n> Server specs:\r\n> 4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)\r\n> \r\n> \r\n> 441 GB of RAM\r\n> \r\n> I have a schema when multi process daemon is setted up on the system and each process holds 1\r\n> postgresql session.\r\n> \r\n> Each process of this daemon run readonly queries over the database.\r\n> In normal situation it at most 35 ms for queries but from time to time (at a random point of time)\r\n> each database session hanges in some very strange semop call. Here is a part of the strace:\r\n\r\n[...]\r\n\r\n> 41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>\r\n\r\n[...]\r\n\r\n> You may see that semop took 2 seconds from the whole system call.\r\n> Same semops could be find in other database sessions.\r\n> \r\n> Could you point me how can i find\r\n\r\nWhat is your PostgreSQL configuration?\r\n\r\nIs your database workload read-only?\r\nIf not, could these be locks?\r\nYou could set log_lock_waits and see if anything is logged.\r\n\r\nAnything noteworthy in the database server log?\r\nHow busy is the I/O system and the CPU when this happens?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 May 2014 15:19:05 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT outage in semop"
},
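Albe's `log_lock_waits` suggestion can be enabled with a minimal postgresql.conf fragment; this is a sketch, and the `deadlock_timeout` and duration values below are example settings, not recommendations from the thread:

```ini
# Log a message whenever a lock wait exceeds deadlock_timeout
log_lock_waits = on
deadlock_timeout = 1s              # wait threshold before logging (example value)
# Optionally capture slow statements around the stalls (example value)
log_min_duration_statement = 100
```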
{
"msg_contents": "Hello Albe,\n\nHere are changes that were made on the postgresql.conf from the default configuration:\n\nmax_connections = 200\n\nshared_buffers = 129215MB\nwork_mem = 256MB\n\nmaintenance_work_mem = 512MB\n\nvacuum_cost_delay = 70\nvacuum_cost_limit = 30\nwal_level = hot_standby (system sends data to 1 slave server using hot standby streaming replication. Problem still observed when there is no replication running)\nwal_buffers = 2MB\n\ncommit_delay = 500\ncheckpoint_segments = 256\nwal_keep_segments = 512\nenable_seqscan = off\neffective_cache_size = 258430MB\nmax_locks_per_transaction = 128\n\nIn general system has write queries also but this daemon runs read only queries.\n\nIt aquired near the 20-30 ACCESS SHARE locks per query so the only way to lock them would be Exclusive lock. \nThere is no explicit exclusive locks in the application.\n\nDuring the problem LA 0.1 - 2 \nNo iowait. \n\nAlso interesting point i have setted up monitoring daemon that runs select from pg_stat_activity and from pg_locks each half a second and during the time i observer the problem daemon was not able to run those queries also - only after semop timeout.\n\n\n\n\n\n\nFri, 30 May 2014 15:19:05 +0000 от Albe Laurenz <[email protected]>:\n>Сурен Арустамян wrote:\n>\n>> I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago)\n>> \n>> Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64\n>> x86_64 GNU/Linux\n>> \n>> Server specs:\n>> 4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)\n>> \n>> \n>> 441 GB of RAM\n>> \n>> I have a schema when multi process daemon is setted up on the system and each process holds 1\n>> postgresql session.\n>> \n>> Each process of this daemon run readonly queries over the database.\n>> In normal situation it at most 35 ms for queries but from time to time (at a random point of time)\n>> each database session hanges in some very strange semop call. 
Here is a part of the strace:\n>\n>[...]\n>\n>> 41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>\n>\n>[...]\n>\n>> You may see that semop took 2 seconds from the whole system call.\n>> Same semops could be find in other database sessions.\n>> \n>> Could you point me how can i find\n>\n>What is your PostgreSQL configuration?\n>\n>Is your database workload read-only?\n>If not, could these be locks?\n>You could set log_lock_waits and see if anything is logged.\n>\n>Anything noteworthy in the database server log?\n>How busy is the I/O system and the CPU when this happens?\n>\n>Yours,\n>Laurenz Albe\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nBest Regards,\nSuren Arustamyan\[email protected]\n\nHello Albe,Here are changes that were made on the postgresql.conf from the default configuration:max_connections = 200shared_buffers = 129215MBwork_mem = 256MBmaintenance_work_mem = 512MBvacuum_cost_delay = 70vacuum_cost_limit = 30wal_level = hot_standby (system sends data to 1 slave server using hot standby streaming replication. Problem still observed when there is no replication running)wal_buffers = 2MBcommit_delay = 500checkpoint_segments = 256wal_keep_segments = 512enable_seqscan = offeffective_cache_size = 258430MBmax_locks_per_transaction = 128In general system has write queries also but this daemon runs read only queries.It aquired near the 20-30 ACCESS SHARE locks per query so the only way to lock them would be Exclusive lock. There is no explicit exclusive locks in the application.During the problem LA 0.1 - 2 No iowait. 
Also interesting point i have setted up monitoring daemon that runs select from pg_stat_activity and from pg_locks each half a second and during the time i observer the problem daemon was not able to run those queries also - only after semop timeout.Fri, 30 May 2014 15:19:05 +0000 от Albe Laurenz <[email protected]>:\n\n\n\n\n\n\nСурен Арустамян wrote:\n\n> I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5 (Santiago)\n> \n> Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64\n> x86_64 GNU/Linux\n> \n> Server specs:\n> 4x Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz (40 physical cores in total)\n> \n> \n> 441 GB of RAM\n> \n> I have a schema when multi process daemon is setted up on the system and each process holds 1\n> postgresql session.\n> \n> Each process of this daemon run readonly queries over the database.\n> In normal situation it at most 35 ms for queries but from time to time (at a random point of time)\n> each database session hanges in some very strange semop call. Here is a part of the strace:\n\n[...]\n\n> 41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>\n\n[...]\n\n> You may see that semop took 2 seconds from the whole system call.\n> Same semops could be find in other database sessions.\n> \n> Could you point me how can i find\n\nWhat is your PostgreSQL configuration?\n\nIs your database workload read-only?\nIf not, could these be locks?\nYou could set log_lock_waits and see if anything is logged.\n\nAnything noteworthy in the database server log?\nHow busy is the I/O system and the CPU when this happens?\n\nYours,\nLaurenz Albe\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n\nBest Regards,Suren [email protected]",
"msg_date": "Sat, 31 May 2014 13:03:30 +0400",
"msg_from": "=?UTF-8?B?U3VyZW4gQXJ1c3RhbXlhbg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBTRUxFQ1Qgb3V0YWdlIGluIHNlbW9w?="
}
] |
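The manual strace inspection done in the thread above can be automated. A minimal sketch that flags slow system calls in `strace -tt -T` output; the 0.5 s threshold is an arbitrary example value:

```python
import re

SLOW_SECS = 0.5  # arbitrary example threshold

def slow_calls(lines):
    """Return (syscall, seconds) pairs slower than SLOW_SECS from strace -tt -T lines."""
    # Match the syscall name right before '(' and the <duration> suffix strace -T adds.
    pat = re.compile(r'(\w+)\(.*<([0-9.]+)>\s*$')
    hits = []
    for line in lines:
        m = pat.search(line)
        if m and float(m.group(2)) > SLOW_SECS:
            hits.append((m.group(1), float(m.group(2))))
    return hits

trace = [
    "41733 20:15:09.682365 brk(0x1a79000) = 0x1a79000 <0.000010>",
    "41733 20:15:09.682507 semop(393228, {{0, -1, 0}}, 1) = 0 <2.080439>",
]
print(slow_calls(trace))  # [('semop', 2.080439)]
```

Run against a full trace, this surfaces the 2-second semop stalls without scanning the output by eye.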
[
{
"msg_contents": "For the past few days, we've been seeing unexpected high CPU spikes in our\nsystem. We observed the following:\n\n- every single CPU spike was preceded by low 'free' memory even though\n'cached' is quite high\n- as soon as we shut down any of our applications which is occupying some\nDB connections (e.g., pgpool), the 'free' memory usage goes up and CPU load\nimmediately drops (please see below)\n- we saw instances when the ‘free’ memory did reach low values but CPU\nremained OK\n\nI understand how running out of memory could cause various issues with the\nDB, but in this case, we had plenty of memory in the ‘cached’ portion. Why\nwould CPU load go up when there's still plenty of room in the 'cached'\nmemory?\n\nHere's the session:\n\n 04:58:37 up 31 days, 23:41, 0 users, load average: 2.37, 1.91, 1.68\n total used free shared buffers cached\nMem: 31720 31188 532 0 90 22852\n(…)\n 05:00:37 up 31 days, 23:43, 1 user, load average: 5.51, 2.66, 1.95\n total used free shared buffers cached\nMem: 31720 31452 268 0 77 22267\n(…)\n 05:00:58 up 31 days, 23:44, 1 user, load average: 21.44, 6.52, 3.24\n total used free shared buffers cached\nMem: 31720 31482 237 0 77 21704\n(…)\n 05:01:18 up 31 days, 23:44, 1 user, load average: 42.98, 12.36, 5.22\n total used free shared buffers cached\nMem: 31720 31477 243 0 77 21061\n(…)\n 05:01:38 up 31 days, 23:44, 1 user, load average: 63.38, 18.99, 7.56\n total used free shared buffers cached\nMem: 31720 31454 266 0 77 20410\n(…)\n 05:03:20 up 31 days, 23:46, 1 user, load average: 110.10, 47.85, 19.07\n total used free shared buffers cached\nMem: 31720 31326 394 0 76 19290\n\n\nAt this point, pgpool and apache were shut down:\n\n\n 05:03:40 up 31 days, 23:46, 1 user, load average: 113.51, 52.66, 21.26\n total used free shared buffers cached\nMem: 31720 29835 1885 0 76 19291\n(…)\n 05:04:00 up 31 days, 23:47, 1 user, load average: 82.49, 49.53, 20.90\n total used free shared buffers cached\nMem: 31720 26082 5638 0 76 
19300\n(…)\n 05:04:20 up 31 days, 23:47, 1 user, load average: 60.37, 46.62, 20.56\n total used free shared buffers cached\nMem: 31720 24701 7019 0 76 19311\n(…)\n 05:04:40 up 31 days, 23:47, 1 user, load average: 43.63, 43.70, 20.15\n total used free shared buffers cached\nMem: 31720 24797 6923 0 76 19320\n(…)\n 05:05:00 up 31 days, 23:48, 1 user, load average: 31.70, 40.96, 19.75\n total used free shared buffers cached\nMem: 31720 24947 6773 0 76 19326\n(…)\n 05:05:20 up 31 days, 23:48, 1 user, load average: 23.12, 38.41, 19.36\n total used free shared buffers cached\nMem: 31720 25036 6684 0 76 19334\n(…)\n 05:05:40 up 31 days, 23:48, 1 user, load average: 17.12, 36.05, 18.99\n total used free shared buffers cached\nMem: 31720 25197 6523 0 76 19340\n(…)\n 05:06:00 up 31 days, 23:49, 1 user, load average: 12.84, 33.84, 18.63\n total used free shared buffers cached\nMem: 31720 25316 6404 0 76 19367\n(…)\n 05:06:20 up 31 days, 23:49, 1 user, load average: 9.85, 31.80, 18.28\n total used free shared buffers cached\nMem: 31720 24728 6992 0 76 18839\n(…)\n 05:06:40 up 31 days, 23:49, 1 user, load average: 7.61, 29.86, 17.93\n total used free shared buffers cached\nMem: 31720 24835 6885 0 76 18847\n(…)\n 05:07:00 up 31 days, 23:50, 1 user, load average: 5.74, 27.99, 17.57\n total used free shared buffers cached\nMem: 31720 24971 6749 0 76 18852\n(…)\n 05:07:20 up 31 days, 23:50, 1 user, load average: 4.48, 26.26, 17.22\n total used free shared buffers cached\nMem: 31720 25133 6587 0 76 18861\n(…)\n 05:07:40 up 31 days, 23:50, 2 users, load average: 3.83, 24.70, 16.90\n total used free shared buffers cached\nMem: 31720 25351 6369 0 76 18872\n(…)\n 05:08:00 up 31 days, 23:51, 2 users, load average: 3.10, 23.18, 16.56\n total used free shared buffers cached\nMem: 31720 25334 6385 0 76 18879\n(…)\n 05:08:20 up 31 days, 23:51, 2 users, load average: 2.52, 21.75, 16.23\n total used free shared buffers cached\nMem: 31720 25362 6358 0 77 18884\n\n\nHere are the pertinent 
machine and OS and Postgres details:\n PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-3), 64-bit\n Linux ps2db 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 UTC 2014\nx86_64 x86_64 x86_64 GNU/Linux\npostgres=# SELECT name, current_setting(name), source\npostgres-# FROM pg_settings\npostgres-# WHERE source NOT IN ('default', 'override');\n name | current_setting |\n source\n------------------------------+-------------------------------+----------------------\n application_name | psql | client\n archive_command | /bin/true |\nconfiguration file\n archive_mode | on |\nconfiguration file\n autovacuum_analyze_threshold | 50 |\nconfiguration file\n autovacuum_freeze_max_age | 800000000 |\nconfiguration file\n autovacuum_naptime | 5min |\nconfiguration file\n autovacuum_vacuum_threshold | 50 |\nconfiguration file\n bytea_output | escape |\nconfiguration file\n checkpoint_completion_target | 0.7 |\nconfiguration file\n checkpoint_segments | 128 |\nconfiguration file\n checkpoint_timeout | 15min |\nconfiguration file\n checkpoint_warning | 30s |\nconfiguration file\n client_encoding | UTF8 | client\n constraint_exclusion | partition |\nconfiguration file\n cpu_index_tuple_cost | 0.005 |\nconfiguration file\n cpu_operator_cost | 0.0025 |\nconfiguration file\n cpu_tuple_cost | 0.01 |\nconfiguration file\n custom_variable_classes | pg_stat_statements |\nconfiguration file\n DateStyle | ISO, MDY |\nconfiguration file\n default_statistics_target | 100 |\nconfiguration file\n default_text_search_config | pg_catalog.english |\nconfiguration file\n effective_cache_size | 16GB |\nconfiguration file\n effective_io_concurrency | 1 |\nconfiguration file\n enable_material | off |\nconfiguration file\n escape_string_warning | on |\nconfiguration file\n hot_standby | on |\nconfiguration file\n lc_messages | C |\nconfiguration file\n lc_monetary | en_US.UTF-8 |\nconfiguration file\n lc_numeric | en_US.UTF-8 |\nconfiguration 
file\n lc_time | en_US.UTF-8 |\nconfiguration file\n listen_addresses | * |\nconfiguration file\n log_autovacuum_min_duration | 0 |\nconfiguration file\n log_checkpoints | on |\nconfiguration file\n log_connections | on |\nconfiguration file\n log_destination | csvlog |\nconfiguration file\n log_directory | pg_log |\nconfiguration file\n log_disconnections | on |\nconfiguration file\n log_filename | postgresql.log.ps2db.%H |\nconfiguration file\n log_line_prefix | %t [%d] [%u] [%p]: [%l-1] %h |\nconfiguration file\n log_lock_waits | on |\nconfiguration file\n log_min_duration_statement | 0 |\nconfiguration file\n log_rotation_age | 1h |\nconfiguration file\n log_temp_files | 0 |\nconfiguration file\n log_timezone | Canada/Pacific | environment\nvariable\n log_truncate_on_rotation | on |\nconfiguration file\n logging_collector | on |\nconfiguration file\n maintenance_work_mem | 1GB |\nconfiguration file\n max_connections | 500 |\nconfiguration file\n max_locks_per_transaction | 512 |\nconfiguration file\n max_stack_depth | 2MB | environment\nvariable\n max_standby_streaming_delay | 90min |\nconfiguration file\n max_wal_senders | 6 |\nconfiguration file\n pg_stat_statements.max | 10000 |\nconfiguration file\n pg_stat_statements.track | all |\nconfiguration file\n port | 5432 |\nconfiguration file\n random_page_cost | 4 |\nconfiguration file\n shared_buffers | 6GB |\nconfiguration file\n shared_preload_libraries | pg_stat_statements |\nconfiguration file\n standard_conforming_strings | off |\nconfiguration file\n stats_temp_directory | /ram_postgres_stats |\nconfiguration file\n temp_buffers | 16MB |\nconfiguration file\n TimeZone | Canada/Pacific | environment\nvariable\n wal_keep_segments | 64 |\nconfiguration file\n wal_level | hot_standby |\nconfiguration file\n work_mem | 8MB |\nconfiguration file\n(65 rows)\n\nRAM: 32 gigs\nCPU: 24 cores; Intel(R) Xeon(R) CPU X7460 @ 2.66GHz\nRAID 10\n
| configuration file\r\n lc_monetary | en_US.UTF-8 | configuration file\r\n lc_numeric | en_US.UTF-8 | configuration file\r\n lc_time | en_US.UTF-8 | configuration file\r\n listen_addresses | * | configuration file\r\n log_autovacuum_min_duration | 0 | configuration file\r\n log_checkpoints | on | configuration file\r\n log_connections | on | configuration file\r\n log_destination | csvlog | configuration file\r\n log_directory | pg_log | configuration file\r\n log_disconnections | on | configuration file\r\n log_filename | postgresql.log.ps2db.%H | configuration file\r\n log_line_prefix | %t [%d] [%u] [%p]: [%l-1] %h | configuration file\r\n log_lock_waits | on | configuration file\r\n log_min_duration_statement | 0 | configuration file\r\n log_rotation_age | 1h | configuration file\r\n log_temp_files | 0 | configuration file\r\n log_timezone | Canada/Pacific | environment variable\r\n log_truncate_on_rotation | on | configuration file\r\n logging_collector | on | configuration file\r\n maintenance_work_mem | 1GB | configuration file\r\n max_connections | 500 | configuration file\r\n max_locks_per_transaction | 512 | configuration file\r\n max_stack_depth | 2MB | environment variable\r\n max_standby_streaming_delay | 90min | configuration file\r\n max_wal_senders | 6 | configuration file\r\n pg_stat_statements.max | 10000 | configuration file\r\n pg_stat_statements.track | all | configuration file\r\n port | 5432 | configuration file\r\n random_page_cost | 4 | configuration file\r\n shared_buffers | 6GB | configuration file\r\n shared_preload_libraries | pg_stat_statements | configuration file\r\n standard_conforming_strings | off | configuration file\r\n stats_temp_directory | /ram_postgres_stats | configuration file\r\n temp_buffers | 16MB | configuration file\r\n TimeZone | Canada/Pacific | environment variable\r\n wal_keep_segments | 64 | configuration file\r\n wal_level | hot_standby | configuration file\r\n work_mem | 8MB | configuration file\r\n(65 
rows)\n\r\nRAM: 32 gigs\r\nCPU: 24 cores; Intel(R) Xeon(R) CPU X7460 @ 2.66GHz\r\nRAID 10",
"msg_date": "Mon, 2 Jun 2014 16:43:30 -0700",
"msg_from": "Vince Lasmarias <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU load when 'free -m' shows low 'free' memory even though\n large 'cached' memory still available"
},
{
"msg_contents": "Vince Lasmarias <[email protected]> writes:\n> For the past few days, we've been seeing unexpected high CPU spikes in our\n> system.\n\nRecent reports have suggested that disabling transparent huge page\nmanagement in your kernel can help with this. If the excess CPU\nload is mostly \"system\" time not \"user\" time then this is probably\nthe culprit.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 05 Jun 2014 09:47:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU load when 'free -m' shows low 'free' memory even though\n large 'cached' memory still available"
},
{
"msg_contents": "On Thu, Jun 5, 2014 at 8:47 AM, Tom Lane <[email protected]> wrote:\n> Vince Lasmarias <[email protected]> writes:\n>> For the past few days, we've been seeing unexpected high CPU spikes in our\n>> system.\n>\n> Recent reports have suggested that disabling transparent huge page\n> management in your kernel can help with this. If the excess CPU\n> load is mostly \"system\" time not \"user\" time then this is probably\n> the culprit.\n\nOP double posted this (OP: please refrain from doing that). I'm not\nsure if THP is the issue here (although it is definitely a major\nbugaboo if not a disaster IMNSHO) -- see commentary on the 'other\nthread'.\n\nhttp://www.postgresql.org/message-id/[email protected]\n\nmelrin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 15:03:46 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU load when 'free -m' shows low 'free' memory\n even though large 'cached' memory still available"
}
] |
[
{
"msg_contents": "Hello,\n\nSome days ago I upgraded from 8.4 to 9.3, after the upgrade some queries started performing a lot slower, the query I am using in this example is pasted here:\n\nhttp://pastebin.com/71DjEC21\n\n\nConsidering it is a production database users are complaining because queries are much slower than before, so I tried to downgrade to 9.2 with the same result as 9.3, I finally restored the database on 8.4 and the query is as fast as before.\n\nAll this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\n\npostgresql.conf:\nmax_connections = 250\nshared_buffers = 6144MB\ntemp_buffers = 8MB\nmax_prepared_transactions = 0\nwork_mem = 24MB\nmaintenance_work_mem = 384MB\nmax_stack_depth = 7MB\ndefault_statistics_target = 150\neffective_cache_size = 24576MB\n\n\n9.3 explain:\nhttp://explain.depesz.com/s/jP7o\n\n9.3 explain analyze:\nhttp://explain.depesz.com/s/6UQT\n\n9.2 explain:\nhttp://explain.depesz.com/s/EW1g\n\n8.4 explain:\nhttp://explain.depesz.com/s/iAba\n\n8.4 explain analyze:\nhttp://explain.depesz.com/s/MPt\n\nIt seems to me that the total estimated cost went too high in 9.2 and 9.3 but I am not sure why, I tried commenting out part of the query and disabling indexonlyscan but still I have very bad timings and estimates.\n\nThe dump file is the same for all versions and after the restore process ended I did vacuum analyze on the restored database in all versions.\n\nRegards,\nMiguel Angel.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 04 Jun 2014 15:56:08 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possible performance regression in PostgreSQL 9.2/9.3?"
},
{
"msg_contents": "On Wed, Jun 4, 2014 at 8:56 AM, Linos <[email protected]> wrote:\n> Hello,\n>\n> Some days ago I upgraded from 8.4 to 9.3, after the upgrade some queries started performing a lot slower, the query I am using in this example is pasted here:\n>\n> http://pastebin.com/71DjEC21\n>\n>\n> Considering it is a production database users are complaining because queries are much slower than before, so I tried to downgrade to 9.2 with the same result as 9.3, I finally restored the database on 8.4 and the query is as fast as before.\n>\n> All this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\n>\n> postgresql.conf:\n> max_connections = 250\n> shared_buffers = 6144MB\n> temp_buffers = 8MB\n> max_prepared_transactions = 0\n> work_mem = 24MB\n> maintenance_work_mem = 384MB\n> max_stack_depth = 7MB\n> default_statistics_target = 150\n> effective_cache_size = 24576MB\n>\n>\n> 9.3 explain:\n> http://explain.depesz.com/s/jP7o\n>\n> 9.3 explain analyze:\n> http://explain.depesz.com/s/6UQT\n>\n> 9.2 explain:\n> http://explain.depesz.com/s/EW1g\n>\n> 8.4 explain:\n> http://explain.depesz.com/s/iAba\n>\n> 8.4 explain analyze:\n> http://explain.depesz.com/s/MPt\n>\n> It seems to me that the total estimated cost went too high in 9.2 and 9.3 but I am not sure why, I tried commenting out part of the query and disabling indexonlyscan but still I have very bad timings and estimates.\n>\n> The dump file is the same for all versions and after the restore process ended I did vacuum analyze on the restored database in all versions.\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nThe rowcount estimates are garbage on all versions so a good execution\nplan can be chalked up to chance. 
That being said, it seems like\nwe're getting an awful lot of regressions of this type with recent\nversions.\n\nCan you try re-running this query with enable_nestloop and/or\nenable_material disabled? (you can disable them for a particular\nsession via: set enable_material = false;) . This is a \"ghetto fix\"\nbut worth trying. If it was me, I'd be simplifying and optimizing the\nquery.\n\nmerlin\n",
"msg_date": "Wed, 4 Jun 2014 14:36:45 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance regression in PostgreSQL 9.2/9.3?"
},
{
"msg_contents": "On 04/06/14 21:36, Merlin Moncure wrote:\n> On Wed, Jun 4, 2014 at 8:56 AM, Linos <[email protected]> wrote:\n>> Hello,\n>>\n>> Some days ago I upgraded from 8.4 to 9.3, after the upgrade some queries started performing a lot slower, the query I am using in this example is pasted here:\n>>\n>> http://pastebin.com/71DjEC21\n>>\n>>\n>> Considering it is a production database users are complaining because queries are much slower than before, so I tried to downgrade to 9.2 with the same result as 9.3, I finally restored the database on 8.4 and the query is as fast as before.\n>>\n>> All this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\n>>\n>> postgresql.conf:\n>> max_connections = 250\n>> shared_buffers = 6144MB\n>> temp_buffers = 8MB\n>> max_prepared_transactions = 0\n>> work_mem = 24MB\n>> maintenance_work_mem = 384MB\n>> max_stack_depth = 7MB\n>> default_statistics_target = 150\n>> effective_cache_size = 24576MB\n>>\n>>\n>> 9.3 explain:\n>> http://explain.depesz.com/s/jP7o\n>>\n>> 9.3 explain analyze:\n>> http://explain.depesz.com/s/6UQT\n>>\n>> 9.2 explain:\n>> http://explain.depesz.com/s/EW1g\n>>\n>> 8.4 explain:\n>> http://explain.depesz.com/s/iAba\n>>\n>> 8.4 explain analyze:\n>> http://explain.depesz.com/s/MPt\n>>\n>> It seems to me that the total estimated cost went too high in 9.2 and 9.3 but I am not sure why, I tried commenting out part of the query and disabling indexonlyscan but still I have very bad timings and estimates.\n>>\n>> The dump file is the same for all versions and after the restore process ended I did vacuum analyze on the restored database in all versions.\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> The rowcount estimates are garbage on all versions so a good execution\n> plan can be chalked up to chance. 
That being said, it seems like\n> we're getting an awful lot of regressions of this type with recent\n> versions.\n>\n> Can you try re-running this query with enable_nestloop and/or\n> enable_material disabled? (you can disable them for a particular\n> session via: set enable_material = false;) . This is a \"ghetto fix\"\n> but worth trying. If it was me, I'd be simplifying and optimizing the\n> query.\n>\n> merlin\n>\n>\n\nMuch better with this options set to false, thank you Merlin, even better than 8.4\n\n9.3 explain analyze with enable_nestloop and enable_material set to false.\nhttp://explain.depesz.com/s/94D\n\nThe thing is I have plenty of queries that are now a lot slower than before, this is only one example. I would like to find a fix or workaround.\n\nI can downgrade to 9.1, I didn't try on 9.1 but it's the first version that supports exceptions inside plpython and I would like to use them. Do you think this situation would be better on 9.1?\n\nOr maybe can I disable material and nestloop on postgresql.conf? I thought was bad to trick the planner but given this strange behavior I am not sure anymore.\n\nRegards,\nMiguel Angel.\n",
"msg_date": "Wed, 04 Jun 2014 21:58:29 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possible performance regression in PostgreSQL 9.2/9.3?"
},
{
"msg_contents": "On Wed, Jun 4, 2014 at 2:58 PM, Linos <[email protected]> wrote:\n> On 04/06/14 21:36, Merlin Moncure wrote:\n>> On Wed, Jun 4, 2014 at 8:56 AM, Linos <[email protected]> wrote:\n>>> Hello,\n>>>\n>>> Some days ago I upgraded from 8.4 to 9.3, after the upgrade some queries started performing a lot slower, the query I am using in this example is pasted here:\n>>>\n>>> http://pastebin.com/71DjEC21\n>>>\n>>>\n>>> Considering it is a production database users are complaining because queries are much slower than before, so I tried to downgrade to 9.2 with the same result as 9.3, I finally restored the database on 8.4 and the query is as fast as before.\n>>>\n>>> All this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\n>>>\n>>> postgresql.conf:\n>>> max_connections = 250\n>>> shared_buffers = 6144MB\n>>> temp_buffers = 8MB\n>>> max_prepared_transactions = 0\n>>> work_mem = 24MB\n>>> maintenance_work_mem = 384MB\n>>> max_stack_depth = 7MB\n>>> default_statistics_target = 150\n>>> effective_cache_size = 24576MB\n>>>\n>>>\n>>> 9.3 explain:\n>>> http://explain.depesz.com/s/jP7o\n>>>\n>>> 9.3 explain analyze:\n>>> http://explain.depesz.com/s/6UQT\n>>>\n>>> 9.2 explain:\n>>> http://explain.depesz.com/s/EW1g\n>>>\n>>> 8.4 explain:\n>>> http://explain.depesz.com/s/iAba\n>>>\n>>> 8.4 explain analyze:\n>>> http://explain.depesz.com/s/MPt\n>>>\n>>> It seems to me that the total estimated cost went too high in 9.2 and 9.3 but I am not sure why, I tried commenting out part of the query and disabling indexonlyscan but still I have very bad timings and estimates.\n>>>\n>>> The dump file is the same for all versions and after the restore process ended I did vacuum analyze on the restored database in all versions.\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>> The rowcount estimates are garbage on all versions so a 
good execution\n>> plan can be chalked up to chance. That being said, it seems like\n>> we're getting an awful lot of regressions of this type with recent\n>> versions.\n>>\n>> Can you try re-running this query with enable_nestloop and/or\n>> enable_material disabled? (you can disable them for a particular\n>> session via: set enable_material = false;) . This is a \"ghetto fix\"\n>> but worth trying. If it was me, I'd be simplifying and optimizing the\n>> query.\n>>\n>> merlin\n>>\n>>\n>\n> Much better with this options set to false, thank you Merlin, even better than 8.4\n>\n> 9.3 explain analyze with enable_nestloop and enable_material set to false.\n> http://explain.depesz.com/s/94D\n>\n> The thing is I have plenty of queries that are now a lot slower than before, this is only one example. I would like to find a fix or workaround.\n>\n> I can downgrade to 9.1, I didn't try on 9.1 but it's the first version that supports exceptions inside plpython and I would like to use them. Do you think this situation would be better on 9.1?\n>\n> Or maybe can I disable material and nestloop on postgresql.conf? I thought was bad to trick the planner but given this strange behavior I am not sure anymore.\n>\n\nI would against advise adjusting postgresql.conf. nestloops often\ngive worse plans than other choices but can often give the best plan,\nsometimes by an order of magnitude or more. planner directives should\nbe considered a 'last resort' fix and should generally not be changed\nin postgresql.conf. If i were in your shoes, I'd be breaking the\nquery down and figuring out where it goes off the rails. Best case\nscenario, you have a simplified, test case reproducible reduction of\nthe problem that can help direct changes to the planner. In lieu of\nthat, I'd look at this as a special case optimization of problem\nqueries.\n\nThere is something else to try. 
Can you (temporarily) raise\njoin_collapse_limit higher (to, say 20), and see if you get a better\nplan (with and without other planner adjustments)?\n\nmerlin\n",
"msg_date": "Wed, 4 Jun 2014 15:57:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance regression in PostgreSQL 9.2/9.3?"
},
{
"msg_contents": "On 04/06/14 22:57, Merlin Moncure wrote:\n> On Wed, Jun 4, 2014 at 2:58 PM, Linos <[email protected]> wrote:\n>> On 04/06/14 21:36, Merlin Moncure wrote:\n>>> On Wed, Jun 4, 2014 at 8:56 AM, Linos <[email protected]> wrote:\n>>>> Hello,\n>>>>\n>>>> Some days ago I upgraded from 8.4 to 9.3, after the upgrade some queries started performing a lot slower, the query I am using in this example is pasted here:\n>>>>\n>>>> http://pastebin.com/71DjEC21\n>>>>\n>>>>\n>>>> Considering it is a production database users are complaining because queries are much slower than before, so I tried to downgrade to 9.2 with the same result as 9.3, I finally restored the database on 8.4 and the query is as fast as before.\n>>>>\n>>>> All this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\n>>>>\n>>>> postgresql.conf:\n>>>> max_connections = 250\n>>>> shared_buffers = 6144MB\n>>>> temp_buffers = 8MB\n>>>> max_prepared_transactions = 0\n>>>> work_mem = 24MB\n>>>> maintenance_work_mem = 384MB\n>>>> max_stack_depth = 7MB\n>>>> default_statistics_target = 150\n>>>> effective_cache_size = 24576MB\n>>>>\n>>>>\n>>>> 9.3 explain:\n>>>> http://explain.depesz.com/s/jP7o\n>>>>\n>>>> 9.3 explain analyze:\n>>>> http://explain.depesz.com/s/6UQT\n>>>>\n>>>> 9.2 explain:\n>>>> http://explain.depesz.com/s/EW1g\n>>>>\n>>>> 8.4 explain:\n>>>> http://explain.depesz.com/s/iAba\n>>>>\n>>>> 8.4 explain analyze:\n>>>> http://explain.depesz.com/s/MPt\n>>>>\n>>>> It seems to me that the total estimated cost went too high in 9.2 and 9.3 but I am not sure why, I tried commenting out part of the query and disabling indexonlyscan but still I have very bad timings and estimates.\n>>>>\n>>>> The dump file is the same for all versions and after the restore process ended I did vacuum analyze on the restored database in all versions.\n>>>> 
http://www.postgresql.org/mailpref/pgsql-performance\n>>> The rowcount estimates are garbage on all versions so a good execution\n>>> plan can be chalked up to chance. That being said, it seems like\n>>> we're getting an awful lot of regressions of this type with recent\n>>> versions.\n>>>\n>>> Can you try re-running this query with enable_nestloop and/or\n>>> enable_material disabled? (you can disable them for a particular\n>>> session via: set enable_material = false;) . This is a \"ghetto fix\"\n>>> but worth trying. If it was me, I'd be simplifying and optimizing the\n>>> query.\n>>>\n>>> merlin\n>>>\n>>>\n>> Much better with this options set to false, thank you Merlin, even better than 8.4\n>>\n>> 9.3 explain analyze with enable_nestloop and enable_material set to false.\n>> http://explain.depesz.com/s/94D\n>>\n>> The thing is I have plenty of queries that are now a lot slower than before, this is only one example. I would like to find a fix or workaround.\n>>\n>> I can downgrade to 9.1, I didn't try on 9.1 but it's the first version that supports exceptions inside plpython and I would like to use them. Do you think this situation would be better on 9.1?\n>>\n>> Or maybe can I disable material and nestloop on postgresql.conf? I thought was bad to trick the planner but given this strange behavior I am not sure anymore.\n>>\n> I would against advise adjusting postgresql.conf. nestloops often\n> give worse plans than other choices but can often give the best plan,\n> sometimes by an order of magnitude or more. planner directives should\n> be considered a 'last resort' fix and should generally not be changed\n> in postgresql.conf. If i were in your shoes, I'd be breaking the\n> query down and figuring out where it goes off the rails. Best case\n> scenario, you have a simplified, test case reproducible reduction of\n> the problem that can help direct changes to the planner. 
In lieu of\n> that, I'd look at this as a special case optimization of problem\n> queries.\n>\n> There is something else to try. Can you (temporarily) raise\n> join_collapse_limit higher (to, say 20), and see if you get a better\n> plan (with and without other planner adjustments)?\n>\n> merlin\n>\n>\n\nThis is the plan with join_collapse_limit=20, enable_nestloop=false, enable_material=false:\nhttp://explain.depesz.com/s/PpL\n\nThe plan with join_collapse_limit=20 but nestloops and enable_material true is taking too much time, seems to have the same problem as with join_collapse_limit=8.\n\nI will try to create a simpler reproducible example, thank you.\n\nRegards,\nMiguel Angel.\n",
"msg_date": "Thu, 05 Jun 2014 00:09:45 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possible performance regression in PostgreSQL 9.2/9.3?"
},
{
"msg_contents": "> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-\r\n> [email protected]] On Behalf Of Linos\r\n> Sent: Wednesday, June 04, 2014 6:10 PM\r\n> To: Merlin Moncure\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] Possible performance regression in PostgreSQL\r\n> 9.2/9.3?\r\n> \r\n> On 04/06/14 22:57, Merlin Moncure wrote:\r\n> > On Wed, Jun 4, 2014 at 2:58 PM, Linos <[email protected]> wrote:\r\n> >> On 04/06/14 21:36, Merlin Moncure wrote:\r\n> >>> On Wed, Jun 4, 2014 at 8:56 AM, Linos <[email protected]> wrote:\r\n> >>>> Hello,\r\n> >>>>\r\n> >>>> Some days ago I upgraded from 8.4 to 9.3, after the upgrade some\r\n> queries started performing a lot slower, the query I am using in this example\r\n> is pasted here:\r\n> >>>>\r\n> >>>> http://pastebin.com/71DjEC21\r\n> >>>>\r\n> >>>>\r\n> >>>> Considering it is a production database users are complaining because\r\n> queries are much slower than before, so I tried to downgrade to 9.2 with the\r\n> same result as 9.3, I finally restored the database on 8.4 and the query is as\r\n> fast as before.\r\n> >>>>\r\n> >>>> All this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel\r\n> version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is\r\n> software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\r\n> >>>>\r\n> >>>> postgresql.conf:\r\n> >>>> max_connections = 250\r\n> >>>> shared_buffers = 6144MB\r\n> >>>> temp_buffers = 8MB\r\n> >>>> max_prepared_transactions = 0\r\n> >>>> work_mem = 24MB\r\n> >>>> maintenance_work_mem = 384MB\r\n> >>>> max_stack_depth = 7MB\r\n> >>>> default_statistics_target = 150\r\n> >>>> effective_cache_size = 24576MB\r\n> >>>>\r\n> >>>>\r\n> >>>> 9.3 explain:\r\n> >>>> http://explain.depesz.com/s/jP7o\r\n> >>>>\r\n> >>>> 9.3 explain analyze:\r\n> >>>> http://explain.depesz.com/s/6UQT\r\n> >>>>\r\n> >>>> 9.2 explain:\r\n> >>>> http://explain.depesz.com/s/EW1g\r\n> >>>>\r\n> >>>> 8.4 explain:\r\n> >>>> 
http://explain.depesz.com/s/iAba\r\n> >>>>\r\n> >>>> 8.4 explain analyze:\r\n> >>>> http://explain.depesz.com/s/MPt\r\n> >>>>\r\n> >>>> It seems to me that the total estimated cost went too high in 9.2 and\r\n> 9.3 but I am not sure why, I tried commenting out part of the query and\r\n> disabling indexonlyscan but still I have very bad timings and estimates.\r\n> >>>>\r\n> >>>> The dump file is the same for all versions and after the restore process\r\n> ended I did vacuum analyze on the restored database in all versions.\r\n> >>>> http://www.postgresql.org/mailpref/pgsql-performance\r\n> >>> The rowcount estimates are garbage on all versions so a good\r\n> >>> execution plan can be chalked up to chance. That being said, it\r\n> >>> seems like we're getting an awful lot of regressions of this type\r\n> >>> with recent versions.\r\n> >>>\r\n> >>> Can you try re-running this query with enable_nestloop and/or\r\n> >>> enable_material disabled? (you can disable them for a particular\r\n> >>> session via: set enable_material = false;) . This is a \"ghetto fix\"\r\n> >>> but worth trying. If it was me, I'd be simplifying and optimizing\r\n> >>> the query.\r\n> >>>\r\n> >>> merlin\r\n> >>>\r\n> >>>\r\n> >> Much better with this options set to false, thank you Merlin, even\r\n> >> better than 8.4\r\n> >>\r\n> >> 9.3 explain analyze with enable_nestloop and enable_material set to\r\n> false.\r\n> >> http://explain.depesz.com/s/94D\r\n> >>\r\n> >> The thing is I have plenty of queries that are now a lot slower than before,\r\n> this is only one example. I would like to find a fix or workaround.\r\n> >>\r\n> >> I can downgrade to 9.1, I didn't try on 9.1 but it's the first version that\r\n> supports exceptions inside plpython and I would like to use them. Do you\r\n> think this situation would be better on 9.1?\r\n> >>\r\n> >> Or maybe can I disable material and nestloop on postgresql.conf? 
I\r\n> thought was bad to trick the planner but given this strange behavior I am not\r\n> sure anymore.\r\n> >>\r\n> > I would against advise adjusting postgresql.conf. nestloops often\r\n> > give worse plans than other choices but can often give the best plan,\r\n> > sometimes by an order of magnitude or more. planner directives should\r\n> > be considered a 'last resort' fix and should generally not be changed\r\n> > in postgresql.conf. If i were in your shoes, I'd be breaking the\r\n> > query down and figuring out where it goes off the rails. Best case\r\n> > scenario, you have a simplified, test case reproducible reduction of\r\n> > the problem that can help direct changes to the planner. In lieu of\r\n> > that, I'd look at this as a special case optimization of problem\r\n> > queries.\r\n> >\r\n> > There is something else to try. Can you (temporarily) raise\r\n> > join_collapse_limit higher (to, say 20), and see if you get a better\r\n> > plan (with and without other planner adjustments)?\r\n> >\r\n> > merlin\r\n> >\r\n> >\r\n> \r\n> This is the plan with join_collapse_limit=20, enable_nestloop=false,\r\n> enable_material=false:\r\n> http://explain.depesz.com/s/PpL\r\n> \r\n> The plan with join_collapse_limit=20 but nestloops and enable_material true\r\n> is taking too much time, seems to have the same problem as with\r\n> join_collapse_limit=8.\r\n> \r\n> I will try to create a simpler reproducible example, thank you.\r\n> \r\n> Regards,\r\n> Miguel Angel.\r\n> \r\n> \r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n\r\nUsually, when I increase join_collapse_limit, I also increase from_collaps_limit (to the same value).\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 13:29:32 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possible performance regression in PostgreSQL 9.2/9.3?"
},
{
"msg_contents": "On 05/06/14 15:29, Igor Neyman wrote:\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-\n>> [email protected]] On Behalf Of Linos\n>> Sent: Wednesday, June 04, 2014 6:10 PM\n>> To: Merlin Moncure\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] Possible performance regression in PostgreSQL\n>> 9.2/9.3?\n>>\n>> On 04/06/14 22:57, Merlin Moncure wrote:\n>>> On Wed, Jun 4, 2014 at 2:58 PM, Linos <[email protected]> wrote:\n>>>> On 04/06/14 21:36, Merlin Moncure wrote:\n>>>>> On Wed, Jun 4, 2014 at 8:56 AM, Linos <[email protected]> wrote:\n>>>>>> Hello,\n>>>>>>\n>>>>>> Some days ago I upgraded from 8.4 to 9.3, after the upgrade some\n>> queries started performing a lot slower, the query I am using in this example\n>> is pasted here:\n>>>>>> http://pastebin.com/71DjEC21\n>>>>>>\n>>>>>>\n>>>>>> Considering it is a production database users are complaining because\n>> queries are much slower than before, so I tried to downgrade to 9.2 with the\n>> same result as 9.3, I finally restored the database on 8.4 and the query is as\n>> fast as before.\n>>>>>> All this tests are done on Debian Squeeze with 2.6.32-5-amd64 kernel\n>> version, the hardware is Intel Xeon E5520, 32Gb ECC RAM, the storage is\n>> software RAID 10 with 4 SEAGATE ST3146356SS SAS drives.\n>>>>>> postgresql.conf:\n>>>>>> max_connections = 250\n>>>>>> shared_buffers = 6144MB\n>>>>>> temp_buffers = 8MB\n>>>>>> max_prepared_transactions = 0\n>>>>>> work_mem = 24MB\n>>>>>> maintenance_work_mem = 384MB\n>>>>>> max_stack_depth = 7MB\n>>>>>> default_statistics_target = 150\n>>>>>> effective_cache_size = 24576MB\n>>>>>>\n>>>>>>\n>>>>>> 9.3 explain:\n>>>>>> http://explain.depesz.com/s/jP7o\n>>>>>>\n>>>>>> 9.3 explain analyze:\n>>>>>> http://explain.depesz.com/s/6UQT\n>>>>>>\n>>>>>> 9.2 explain:\n>>>>>> http://explain.depesz.com/s/EW1g\n>>>>>>\n>>>>>> 8.4 explain:\n>>>>>> http://explain.depesz.com/s/iAba\n>>>>>>\n>>>>>> 8.4 explain analyze:\n>>>>>> 
http://explain.depesz.com/s/MPt\n>>>>>>\n>>>>>> It seems to me that the total estimated cost went too high in 9.2 and\n>> 9.3 but I am not sure why, I tried commenting out part of the query and\n>> disabling indexonlyscan but still I have very bad timings and estimates.\n>>>>>> The dump file is the same for all versions and after the restore process\n>> ended I did vacuum analyze on the restored database in all versions.\n>>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>> The rowcount estimates are garbage on all versions so a good\n>>>>> execution plan can be chalked up to chance. That being said, it\n>>>>> seems like we're getting an awful lot of regressions of this type\n>>>>> with recent versions.\n>>>>>\n>>>>> Can you try re-running this query with enable_nestloop and/or\n>>>>> enable_material disabled? (you can disable them for a particular\n>>>>> session via: set enable_material = false;) . This is a \"ghetto fix\"\n>>>>> but worth trying. If it was me, I'd be simplifying and optimizing\n>>>>> the query.\n>>>>>\n>>>>> merlin\n>>>>>\n>>>>>\n>>>> Much better with this options set to false, thank you Merlin, even\n>>>> better than 8.4\n>>>>\n>>>> 9.3 explain analyze with enable_nestloop and enable_material set to\n>> false.\n>>>> http://explain.depesz.com/s/94D\n>>>>\n>>>> The thing is I have plenty of queries that are now a lot slower than before,\n>> this is only one example. I would like to find a fix or workaround.\n>>>> I can downgrade to 9.1, I didn't try on 9.1 but it's the first version that\n>> supports exceptions inside plpython and I would like to use them. Do you\n>> think this situation would be better on 9.1?\n>>>> Or maybe can I disable material and nestloop on postgresql.conf? I\n>> thought was bad to trick the planner but given this strange behavior I am not\n>> sure anymore.\n>>> I would against advise adjusting postgresql.conf. 
nestloops often\n>>> give worse plans than other choices but can often give the best plan,\n>>> sometimes by an order of magnitude or more. planner directives should\n>>> be considered a 'last resort' fix and should generally not be changed\n>>> in postgresql.conf. If i were in your shoes, I'd be breaking the\n>>> query down and figuring out where it goes off the rails. Best case\n>>> scenario, you have a simplified, test case reproducible reduction of\n>>> the problem that can help direct changes to the planner. In lieu of\n>>> that, I'd look at this as a special case optimization of problem\n>>> queries.\n>>>\n>>> There is something else to try. Can you (temporarily) raise\n>>> join_collapse_limit higher (to, say 20), and see if you get a better\n>>> plan (with and without other planner adjustments)?\n>>>\n>>> merlin\n>>>\n>>>\n>> This is the plan with join_collapse_limit=20, enable_nestloop=false,\n>> enable_material=false:\n>> http://explain.depesz.com/s/PpL\n>>\n>> The plan with join_collapse_limit=20 but nestloops and enable_material true\n>> is taking too much time, seems to have the same problem as with\n>> join_collapse_limit=8.\n>>\n>> I will try to create a simpler reproducible example, thank you.\n>>\n>> Regards,\n>> Miguel Angel.\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> Usually, when I increase join_collapse_limit, I also increase from_collaps_limit (to the same value).\n>\n> Regards,\n> Igor Neyman\n>\n>\n\nI tried that already and it didn't work, thank you Igor.\n\nI have created a more complete example of this problem in pgsql-hackers list at:\nhttp://www.postgresql.org/message-id/[email protected]\n\nTo continue the conversation there.\n\nRegards,\nMiguel Angel.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 05 Jun 2014 15:36:37 +0200",
"msg_from": "Linos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possible performance regression in PostgreSQL 9.2/9.3?"
}
] |
[
{
"msg_contents": "Hi, \n\ni just wanted to know if group commit (as described in https://wiki.postgresql.org/wiki/Group_commit ) was committed.\n\nAnd if parallel replication is going to be introduced. \nMysql 5.7 going to have intra-database parallel slave based on group commit on master.",
"msg_date": "Wed, 4 Jun 2014 21:44:40 +0300",
"msg_from": "Evgeny Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "group commit"
},
{
"msg_contents": "Evgeniy Shishkin wrote\n> Hi, \n> \n> i just wanted to know if group commit (as described in\n> https://wiki.postgresql.org/wiki/Group_commit ) was committed.\n\nI guess that depends on whether this comment in the 9.2 release notes covers\nthe same material described in the linked wiki page (I would presume it\ndoes).\n\nhttp://www.postgresql.org/docs/9.2/interactive/release-9-2.html\nSection: E.9.3.1.1. Performance\n\n\n> Allow group commit to work effectively under heavy load (Peter Geoghegan,\n> Simon Riggs, Heikki Linnakangas)\n> \n> Previously, batching of commits became ineffective as the write workload\n> increased, because of internal lock contention.\n\nThough based upon your question regarding parallel replication I am thinking\nthat maybe your concept of \"group commit\" and the one that was implemented\nare quite different...\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/group-commit-tp5806056p5806064.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Jun 2014 12:37:18 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: group commit"
}
] |
[
{
"msg_contents": "For the past few days, we've been seeing unexpected extremely high CPU spikes\nin our system. We observed the following: the 'free' memory would go down to\nlower than 300 MB; at that point, 'cached' slowly starts to go down, and\nthen CPU starts to go way up. \n\nIt's almost as if the OS was not releasing 'cached' memory fast enough for\nPostgres. Is that analysis correct? Is there a way to fix this?\n\nHere's the session:\n\n 04:58:37 load average: 2.37, free: 532, cached: 22852\n 04:58:57 load average: 1.91, free: 451, cached: 22859\n 04:59:17 load average: 1.82, free: 469, cached: 22866\n 04:59:57 load average: 1.57, free: 387, cached: 22884\n 05:00:17 load average: 3.03, free: 574, cached: 22632\n 05:00:37 load average: 5.51, free: 268, cached: 22267\n 05:00:58 load average: 21.44, free: 237, cached: 21704\n 05:01:18 load average: 42.98, free: 243, cached: 21061\n 05:01:38 load average: 63.38, free: 266, cached: 20410\n 05:01:58 load average: 78.69, free: 315, cached: 20135\n 05:02:19 load average: 89.82, free: 214, cached: 20034\n 05:02:39 load average: 99.06, free: 253, cached: 19873\n 05:02:59 load average: 105.60, free: 390, cached: 19497\n 05:03:20 load average: 110.10, free: 394; cached: 19290\n\nHere are the pertinent machine and OS and Postgres details:\nRAM: 32 gigs\nCPU: 24 cores; Intel(R) Xeon(R) CPU X7460 @ 2.66GHz\nRAID 10\nvm.swappiness=0\n PostgreSQL 9.1.11 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7\n20120313 (Red Hat 4.4.7-3), 64-bit\n Linux ps2db 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 UTC 2014\nx86_64 x86_64 x86_64 GNU/Linux\npostgres=# SELECT name, current_setting(name), source\npostgres-# FROM pg_settings\npostgres-# WHERE source NOT IN ('default', 'override');\n name | current_setting | \nsource \n------------------------------+-------------------------------+----------------------\n application_name | psql | client\n archive_command | /bin/true |\nconfiguration file\n archive_mode | on 
|\nconfiguration file\n autovacuum_analyze_threshold | 50 |\nconfiguration file\n autovacuum_freeze_max_age | 800000000 |\nconfiguration file\n autovacuum_naptime | 5min |\nconfiguration file\n autovacuum_vacuum_threshold | 50 |\nconfiguration file\n bytea_output | escape |\nconfiguration file\n checkpoint_completion_target | 0.7 |\nconfiguration file\n checkpoint_segments | 128 |\nconfiguration file\n checkpoint_timeout | 15min |\nconfiguration file\n checkpoint_warning | 30s |\nconfiguration file\n client_encoding | UTF8 | client\n constraint_exclusion | partition |\nconfiguration file\n cpu_index_tuple_cost | 0.005 |\nconfiguration file\n cpu_operator_cost | 0.0025 |\nconfiguration file\n cpu_tuple_cost | 0.01 |\nconfiguration file\n custom_variable_classes | pg_stat_statements |\nconfiguration file\n DateStyle | ISO, MDY |\nconfiguration file\n default_statistics_target | 100 |\nconfiguration file\n default_text_search_config | pg_catalog.english |\nconfiguration file\n effective_cache_size | 16GB |\nconfiguration file\n effective_io_concurrency | 1 |\nconfiguration file\n enable_material | off |\nconfiguration file\n escape_string_warning | on |\nconfiguration file\n hot_standby | on |\nconfiguration file\n lc_messages | C |\nconfiguration file\n lc_monetary | en_US.UTF-8 |\nconfiguration file\n lc_numeric | en_US.UTF-8 |\nconfiguration file\n lc_time | en_US.UTF-8 |\nconfiguration file\n listen_addresses | * |\nconfiguration file\n log_autovacuum_min_duration | 0 |\nconfiguration file\n log_checkpoints | on |\nconfiguration file\n log_connections | on |\nconfiguration file\n log_destination | csvlog |\nconfiguration file\n log_directory | pg_log |\nconfiguration file\n log_disconnections | on |\nconfiguration file\n log_filename | postgresql.log.ps2db.%H |\nconfiguration file\n log_line_prefix | %t [%d] [%u] [%p]: [%l-1] %h |\nconfiguration file\n log_lock_waits | on |\nconfiguration file\n log_min_duration_statement | 0 |\nconfiguration file\n 
log_rotation_age | 1h |\nconfiguration file\n log_temp_files | 0 |\nconfiguration file\n log_timezone | Canada/Pacific | environment\nvariable\n log_truncate_on_rotation | on |\nconfiguration file\n logging_collector | on |\nconfiguration file\n maintenance_work_mem | 1GB |\nconfiguration file\n max_connections | 500 |\nconfiguration file\n max_locks_per_transaction | 512 |\nconfiguration file\n max_stack_depth | 2MB | environment\nvariable\n max_standby_streaming_delay | 90min |\nconfiguration file\n max_wal_senders | 6 |\nconfiguration file\n pg_stat_statements.max | 10000 |\nconfiguration file\n pg_stat_statements.track | all |\nconfiguration file\n port | 5432 |\nconfiguration file\n random_page_cost | 4 |\nconfiguration file\n shared_buffers | 6GB |\nconfiguration file\n shared_preload_libraries | pg_stat_statements |\nconfiguration file\n standard_conforming_strings | off |\nconfiguration file\n stats_temp_directory | /ram_postgres_stats |\nconfiguration file\n temp_buffers | 16MB |\nconfiguration file\n TimeZone | Canada/Pacific | environment\nvariable\n wal_keep_segments | 64 |\nconfiguration file\n wal_level | hot_standby |\nconfiguration file\n work_mem | 8MB |\nconfiguration file\n(65 rows)\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/CPU-load-spikes-when-CentOS-tries-to-reclaim-cached-memory-tp5806122.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Jun 2014 17:27:27 -0700 (PDT)",
"msg_from": "vlasmarias <[email protected]>",
"msg_from_op": true,
"msg_subject": "CPU load spikes when CentOS tries to reclaim 'cached' memory"
},
{
"msg_contents": "On Wed, Jun 4, 2014 at 5:27 PM, vlasmarias <[email protected]> wrote:\n\n> For the past few days, we've been seeing unexpected extremely high CPU\n> spikes\n> in our system. We observed the following: the 'free' memory would go down\n> to\n> lower than 300 MB; at that point, 'cached' slowly starts to go down, and\n> then CPU starts to go way up.\n>\n> It's almost as if the OS was not releasing 'cached' memory fast enough for\n> Postgres. Is that analysis correct? Is there a way to fix this?\n>\n\nThis sounds like a kernel problem, probably either the zone reclaim issue,\nor the transparent huge pages issue.\n\nI don't know the exact details off the top of my head, but both have been\ndiscussed a lot on both this list and the pgsql-hackers list.\n\n\n\n\n>\n> Here's the session:\n>\n> 04:58:37 load average: 2.37, free: 532, cached: 22852\n> 04:58:57 load average: 1.91, free: 451, cached: 22859\n> 04:59:17 load average: 1.82, free: 469, cached: 22866\n> 04:59:57 load average: 1.57, free: 387, cached: 22884\n>\n\nWhat tool is that? I'm not familiar with this output format.\n\n\n\n\n> max_connections | 500\n>\n\nWhile this is probably fundamentally a kernel problem, you are not doing\nyourself any favors by allowing 500 connections to a machine with 24 cores.\n High numbers of connections can trigger poor kernel behavior.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 5 Jun 2014 08:58:29 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached' memory"
},
{
"msg_contents": "On Thu, Jun 5, 2014 at 10:58 AM, Jeff Janes <[email protected]> wrote:\n> This sounds like a kernel problem, probably either the zone reclaim issue,\n> or the transparent huge pages issue.\n\nI at first thought maybe same, but I don't think THP was introduced\nuntil 2.6.38...OP is running 2.6.32-431.11.2.el6.x86_6. Maybe it's\nNUMA related, but would not be idiomatic of NUMA issues as I\nunderstand them (poor memory utilization/high IO utilization). Would\nbe a very cheap/easy thing to try though.\n\nIs this server virtualized?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 14:23:46 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached' memory"
},
{
"msg_contents": "We saw very similar issues with a CentOS server with 40 cores (32\nvirtualized) when moving from a physical server to a virtual server (I\nthink it had 128GB RAM). Never had the problem on a physical server. We\nchecked the same things as noted here, but never found a bug. We really\nthought it had something to do with NUMA zone reclaim, but could never\nprove that. In our case it was all kernel time in the guest, all CPUs at\n100%. Sometimes it would last for a few seconds or minutes. Sometimes we\nwould go days without a problem, and then it would completely tank.\n\nIf you figure out what is going on, I would like to know (especially if it\nis virtualized).\n\nDeron\n\n\n\nOn Thu, Jun 5, 2014 at 12:23 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Jun 5, 2014 at 10:58 AM, Jeff Janes <[email protected]> wrote:\n> > This sounds like a kernel problem, probably either the zone reclaim\n> issue,\n> > or the transparent huge pages issue.\n>\n> I at first thought maybe same, but I don't think THP was introduced\n> until 2.6.38...OP is running 2.6.32-431.11.2.el6.x86_6. Maybe it's\n> NUMA related, but would not be idiomatic of NUMA issues as I\n> understand them (poor memory utilization/high IO utilization). Would\n> be a very cheap/easy thing to try though.\n>\n> Is this server virtualized?\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 5 Jun 2014 12:47:34 -0700",
"msg_from": "Deron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached' memory"
},
{
"msg_contents": "On Thu, Jun 5, 2014 at 2:47 PM, Deron <[email protected]> wrote:\n> We saw very similar issues with a CentOS server with 40 cores (32\n> virtualized) when moving from a physical server to a virtual server (I think\n> it had 128GB RAM). Never had the problem on a physical server. We checked\n> the same things as noted here, but never found a bug. We really thought it\n> had something to do with NUMA zone reclaim, but could never prove that.\n> In our case it was all kernel time in the guest, all CPUs at 100%.\n> Sometimes it would last for a few seconds or minutes. Sometimes we would go\n> days without a problem, and then it would completely tank.\n>\n> If you figure out what is going on, I would like to know (especially if it\n> is virtualized).\n\nThere is a class of problems in virutalized enviroment that come from\nover-aggressive reclaiming of memory from the guest to the host. When\nthe guest tries to access the 'unpinned' memory it will manifest as\nhigh latency memory reads and show up as high user time. That may or\nmay not be the case here.\n\nWhat we'd need from the OP to get a better diagnosis is:\n*) top/sar output showing if the load average is due to high user,sys, or iowait\n*) is/isnot virtualized as noted above\n*) captured 'perf' snapshot during slowdown, particularly if we are\nseeing high user space loads. For example, we could be looking at\nhigh spinlock activity (which seems unlikely given how the problem is\ndescribed but is something to rule out for sure).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 14:57:13 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached' memory"
},
{
"msg_contents": "Thanks for the informative responses and suggestions. My responses below:\n\n* Sorry for the double post. I posted the original message using my gmail\naccount and got a \"is not a member of any of the restrict_post groups\"\nresponse and when I didn't see it for a day, I ended up wondering if it was\ndue to my use of a gmail account - so I tried using my company email account\ninstead to post an updated version of the original post.\n\n* Our system is not virtualized.\n\n* Jeff, the output format of the load and free/cached memory did not come\nfrom a tool but came from my script. My script does 'uptime; free -m', and\nthen another script massages the data to only grab date, 1-minute load\naverage, free, and cached.\n\n* For the 'top' outputs, I don't have the actual 'top' output, but I have\nthe following:\n\n2014-05-30 00:01:01 procs -----------memory---------- ---swap-- -----io----\n--system-- -----cpu-----\n2014-05-30 00:01:01 r b swpd free buff cache si so bi bo\nin cs us sy id wa st\n(...)\n2014-05-30 04:59:52 1 0 0 621976 92340 23431340 0 0 0\n16756 3538 3723 7 2 91 0 0\n2014-05-30 04:59:53 1 0 0 671896 92340 23431364 0 0 0\n236 1933 825 2 1 97 0 0\n2014-05-30 04:59:54 2 0 0 542964 92340 23433596 0 0 2148\n300 3751 1394 6 1 92 0 0\n2014-05-30 04:59:55 0 0 0 566140 92340 23433616 0 0 0\n192 3485 1465 6 1 94 0 0\n2014-05-30 04:59:56 2 0 0 614348 92340 23433760 0 0 0\n424 3238 4278 4 1 95 0 0\n2014-05-30 04:59:57 4 0 0 408812 92340 23433792 0 0 8\n944 6249 12512 12 2 86 0 0\n2014-05-30 04:59:58 3 0 0 471716 92356 23434012 0 0 0\n440 9028 4164 13 1 86 0 0\n2014-05-30 04:59:59 4 0 0 380988 92356 23434428 0 0 0\n248 10009 10967 15 3 83 0 0\n2014-05-30 05:00:00 6 0 0 462052 92356 23434904 0 0 0\n960 7260 9242 12 2 85 0 0\n2014-05-30 05:00:01 5 0 0 409796 92360 23435224 0 0 96\n1860 11475 95765 18 7 75 0 0\n2014-05-30 05:00:02 10 0 0 221464 92360 23433868 0 0 428\n10800 13933 128264 23 9 67 0 0\n2014-05-30 05:00:03 12 0 0 231444 91956 23355644 0 0 
0\n1480 26651 10817 10 35 54 0 0\n2014-05-30 05:00:04 11 0 0 385672 91508 23254120 0 0 0\n3096 30849 44776 22 28 50 0 0\n2014-05-30 05:00:05 9 0 0 408932 91508 23270216 0 0 0\n1996 21925 24978 12 26 63 0 0\n2014-05-30 05:00:06 10 0 0 373992 91508 23270580 0 0 0\n2160 25778 5994 11 31 58 0 0\n2014-05-30 05:00:07 5 0 0 457900 91508 23270688 0 0 0\n6080 25185 11705 14 21 65 0 0\n2014-05-30 05:00:08 0 0 0 658300 91508 23270804 0 0 0\n2089 5989 5849 11 2 88 0 0\n2014-05-30 05:00:09 1 0 0 789972 91508 23270928 0 0 0\n2508 2346 2550 2 1 97 0 0\n2014-05-30 05:00:10 0 0 0 845736 91508 23274260 0 0 12\n1728 2109 1494 2 1 97 0 0\n2014-05-30 05:00:11 3 0 0 686352 91516 23274644 0 0 8\n2100 4039 5288 6 2 92 0 0\n2014-05-30 05:00:12 11 1 0 447636 91516 23274808 0 0 520\n1436 10299 50523 24 7 68 1 0\n2014-05-30 05:00:13 13 0 0 352380 91516 23276220 0 0 1060\n816 18283 18682 15 36 48 1 0\n2014-05-30 05:00:14 12 0 0 356720 88880 23179276 0 0 704\n868 16193 140313 36 12 51 1 0\n2014-05-30 05:00:15 5 0 0 513784 88880 23173344 0 0 2248\n748 12350 21178 30 6 62 2 0\n2014-05-30 05:00:16 2 0 0 623020 88884 23175808 0 0 1568\n500 5841 4999 12 2 86 1 0\n2014-05-30 05:00:17 5 0 0 590488 88884 23175844 0 0 24\n584 6573 4905 14 2 84 0 0\n2014-05-30 05:00:18 3 0 0 632408 88884 23176116 0 0 0\n496 6846 4358 14 2 84 0 0\n2014-05-30 05:00:19 5 0 0 596716 88884 23176948 0 0 656\n668 7135 5262 14 3 83 0 0\n2014-05-30 05:00:20 6 0 0 558692 88884 23179964 0 0 2816\n1012 8566 7742 17 4 79 0 0\n2014-05-30 05:00:21 7 1 0 476580 88888 23181200 0 0 1272\n968 11240 14308 23 6 71 1 0\n2014-05-30 05:00:22 8 0 0 695396 88888 23183028 0 0 728\n1128 9751 7121 22 4 74 1 0\n2014-05-30 05:00:23 9 0 0 536084 88888 23199080 0 0 392\n1024 12523 22269 26 6 68 0 0\n2014-05-30 05:00:24 13 0 0 416296 88888 23200416 0 0 40\n1000 16319 61822 29 21 51 0 0\n2014-05-30 05:00:25 14 0 0 386904 88888 23200704 0 0 24\n816 20850 4424 16 38 46 0 0\n2014-05-30 05:00:26 17 0 0 334688 88896 23201028 0 0 24\n1000 26758 16934 24 46 30 0 
0\n2014-05-30 05:00:27 18 0 0 307304 88896 23193928 0 0 0\n1068 27051 67778 21 46 33 0 0\n2014-05-30 05:00:28 20 1 0 295560 88896 23162456 0 0 0\n860 31012 27787 15 67 18 0 0\n2014-05-30 05:00:29 22 1 0 281272 88896 23153312 0 0 16\n928 28899 2857 9 78 13 0 0\n2014-05-30 05:00:30 26 0 0 400804 87976 22979324 0 0 0\n1536 37689 4368 9 88 3 0 0\n2014-05-30 05:00:31 27 0 0 395588 87976 22979412 0 0 0\n1564 29195 4305 8 92 0 0 0\n2014-05-30 05:00:32 25 0 0 353176 87976 22979592 0 0 24\n9404 29845 15845 13 85 2 0 0\n2014-05-30 05:00:33 28 0 0 318680 87976 22979776 0 0 0\n1588 28097 3372 9 91 0 0 0\n2014-05-30 05:00:34 27 0 0 304676 87352 22969136 0 0 0\n1480 29387 4330 10 90 0 0 0\n2014-05-30 05:00:35 27 0 0 337960 79784 22900220 0 0 48\n924 38334 8253 13 86 1 0 0\n2014-05-30 05:00:36 29 0 0 297308 79788 22898608 0 0 0\n952 30865 3067 10 90 0 0 0\n2014-05-30 05:00:37 33 0 0 282612 79624 22801728 0 0 8\n17169 29197 6855 13 87 0 0 0\n2014-05-30 05:00:38 32 0 0 224140 79624 22717680 0 0 0\n908 29640 6579 14 86 0 0 0\n2014-05-30 05:00:39 29 0 0 304456 79624 22661712 0 0 0\n696 31838 3563 10 90 0 0 0\n2014-05-30 05:00:40 32 0 0 322080 79624 22657328 0 0 0\n396 29206 2274 4 96 0 0 0\n2014-05-30 05:00:41 33 0 0 309184 79624 22649172 0 0 0\n660 27084 2385 4 96 0 0 0\n2014-05-30 05:00:42 34 0 0 285348 79624 22646216 0 0 0\n4316 26418 2571 3 97 0 0 0\n2014-05-30 05:00:43 37 0 0 273732 79624 22632920 0 0 0\n740 26728 2663 3 97 0 0 0\n2014-05-30 05:00:44 38 0 0 276304 79624 22632640 0 0 0\n464 26376 2593 3 97 0 0 0\n2014-05-30 05:00:45 41 0 0 257480 79624 22624828 0 0 0\n448 26505 3516 4 96 0 0 0\n2014-05-30 05:00:46 32 0 0 305108 79608 22559216 0 0 0\n304 29610 6162 8 92 0 0 0\n2014-05-30 05:00:47 46 0 0 310504 79592 22487756 0 0 0\n360 30559 10591 12 88 0 0 0\n2014-05-30 05:00:48 47 0 0 286236 79596 22478168 0 0 0\n312 25733 4250 5 95 0 0 0\n2014-05-30 05:00:49 48 0 0 300284 79596 22477684 0 0 0\n356 26464 4690 5 95 0 0 0\n2014-05-30 05:00:50 52 0 0 276896 79588 22449560 0 0 
124\n280 26238 4132 4 96 0 0 0\n2014-05-30 05:00:51 60 0 0 234836 79568 22406932 0 0 4\n344 26446 5055 5 95 0 0 0\n2014-05-30 05:00:52 62 0 0 247304 79564 22351916 0 0 12\n452 26807 4694 3 97 0 0 0\n2014-05-30 05:00:53 63 0 0 231892 79564 22347368 0 0 0\n308 25378 4376 3 97 0 0 0\n2014-05-30 05:00:54 67 0 0 236056 79564 22309368 0 0 0\n156 25737 4022 3 97 0 0 0\n2014-05-30 05:00:55 66 0 0 232984 79564 22286336 0 0 0\n216 25393 3874 2 98 0 0 0\n2014-05-30 05:00:56 67 0 0 240720 79560 22267736 0 0 0\n588 25944 4678 2 98 0 0 0\n2014-05-30 05:00:57 70 0 0 242836 79540 22232068 0 0 0\n16800 26058 4607 3 97 0 0 0\n2014-05-30 05:00:58 72 0 0 234944 79548 22224948 0 0 0\n608 25589 4687 2 98 0 0 0\n2014-05-30 05:00:59 73 0 0 236064 79536 22173496 0 0 0\n188 25747 4530 3 97 0 0 0\n2014-05-30 05:01:00 77 0 0 232708 79524 22135168 0 0 0\n304 25546 5247 3 97 0 0 0\n2014-05-30 05:01:01 72 0 0 269328 79528 22117528 0 0 24\n396 27545 8488 5 95 0 0 0\n2014-05-30 05:01:02 83 0 0 220892 79496 22043024 0 0 0\n3280 28665 9805 7 94 0 0 0\n2014-05-30 05:01:03 86 0 0 224004 79488 21995400 0 0 0\n440 26338 6090 3 97 0 0 0\n2014-05-30 05:01:04 90 0 0 249684 79476 21932468 0 0 0\n408 26341 5834 3 97 0 0 0\n2014-05-30 05:01:05 91 0 0 257336 79464 21883060 0 0 0\n380 26272 5717 2 98 0 0 0\n2014-05-30 05:01:06 98 0 0 242896 79468 21878016 0 0 0\n608 25628 5648 2 98 0 0 0\n2014-05-30 05:01:07 94 0 0 237276 79468 21876148 0 0 0\n908 25041 5883 3 98 0 0 0\n2014-05-30 05:01:08 99 0 0 225832 79488 21858572 0 0 24\n504 25271 5913 3 97 0 0 0\n2014-05-30 05:01:09 94 0 0 245796 79460 21812404 0 0 0\n264 25106 6189 3 97 0 0 0\n2014-05-30 05:01:10 94 0 0 246268 79460 21811144 0 0 0\n188 25087 5989 2 98 0 0 0\n2014-05-30 05:01:11 94 0 0 232900 79456 21775152 0 0 0\n168 24965 5949 3 97 0 0 0\n2014-05-30 05:01:12 100 0 0 227900 79428 21737032 0 0 32\n276 25560 6798 4 96 0 0 0\n2014-05-30 05:01:13 98 0 0 253644 79396 21713908 0 0 0\n552 27440 9052 5 95 0 0 0\n2014-05-30 05:01:14 104 0 0 269648 79376 21634336 
0 0 0\n540 26100 6633 3 97 0 0 0\n2014-05-30 05:01:15 104 0 0 259436 79376 21622164 0 0 0\n368 25094 6417 2 98 0 0 0\n2014-05-30 05:01:16 105 0 0 262596 79372 21616292 0 0 0\n200 24995 6276 2 98 0 0 0\n2014-05-30 05:01:17 109 0 0 232172 79360 21583800 0 0 0\n388 25112 6570 3 97 0 0 0\n2014-05-30 05:01:18 109 0 0 231628 79372 21566604 0 0 0\n364 25221 6644 2 98 0 0 0\n2014-05-30 05:01:19 110 0 0 223920 79372 21532992 0 0 0\n340 25383 6874 3 97 0 0 0\n2014-05-30 05:01:20 111 0 0 223028 79368 21501868 0 0 0\n288 25369 6465 2 98 0 0 0\n2014-05-30 05:01:21 113 0 0 211676 79352 21434584 0 0 0\n240 24939 6607 4 96 0 0 0\n2014-05-30 05:01:22 114 0 0 211020 79300 21327264 0 0 0\n308 25390 7239 5 95 0 0 0\n2014-05-30 05:01:23 110 0 0 213524 79256 21215612 0 0 40\n336 25494 7878 7 93 0 0 0\n2014-05-30 05:01:24 114 0 0 222748 79220 21107976 0 0 0\n276 25257 7032 7 93 0 0 0\n2014-05-30 05:01:25 115 0 0 262004 79160 21012468 0 0 0\n300 25986 6746 8 92 0 0 0\n(...)\n\n* For the 'perf' outputs, I'm still waiting for a time where I can gather\nit. Unfortunately, this is on a Production system, and we had to quickly\nlook for a workaround; the workaround we have found is: when the 'free'\nmemory drops below a certain threshold, we run: /bin/sync && /bin/echo 3 >\n/proc/sys/vm/drop_caches . Since we've been doing that, we haven't had the\nhigh load issue.\n\n* I forgot to mention that the debugging info I posted came from our slave\nserver (the master and slave have the same specs, but different OS version;\nmaster has 2.6.18-371.3.1.el5 #1 SMP Thu Dec 5 12:47:02 EST 2013). We\nactually first saw the issue on our master server and not the slave server.\nTo get the master going, we edited our code to move our heaviest queries so\nthat they hit only the slave. 
After we did that, the slave started having\nthe high load issues, but with one difference - on the slave, 'cached' was\nhigh, but as can be seen in the master debugging session below, on the\nmaster, cached was low (corresponds to what we had for shared_buffers), and\nwhen 'free' dropped, it looks like the OS managed to grab the memory from\n'used' (do note that we will be adding more memory to our servers shortly):\n\n09:24 load average: 8.76 used: 32166 free: 31851 shared: 314 buffers: 0\ncached: 8827\n09:25 load average: 6.30 used: 31840 free: 325 shared: 0 buffers: 13 cached:\n8851\n09:26 load average: 95.83 used: 17883 free: 14282 shared: 0 buffers: 2\ncached: 8563\n(...)\n10:00 load average: 4.45 used: 31945 free: 220 shared: 0 buffers: 15 cached:\n8862\n10:01 load average: 4.56 used: 31983 free: 182 shared: 0 buffers: 15 cached:\n8844\n10:02 load average: 5.34 used: 31983 free: 183 shared: 0 buffers: 14 cached:\n8832\n10:03 load average: 6.57 used: 31987 free: 179 shared: 0 buffers: 9 cached:\n8709\n10:04 load average: 71.21 used: 18095 free: 14071 shared: 0 buffers: 1\ncached: 8556\n(...)\n10:40 load average: 5.38 used: 31970 free: 196 shared: 0 buffers: 12 cached:\n8929\n10:41 load average: 6.10 used: 31889 free: 276 shared: 0 buffers: 13 cached:\n8804\n10:42 load average: 6.94 used: 31984 free: 182 shared: 0 buffers: 2 cached:\n8761\n10:43 load average: 13.90 used: 31777 free: 389 shared: 0 buffers: 2 cached:\n8555\n10:44 load average: 54.30 used: 18894 free: 13272 shared: 0 buffers: 4\ncached: 8592\n(...)\n11:21 load average: 5.54 used: 31985 free: 181 shared: 0 buffers: 11 cached:\n8764\n11:22 load average: 5.15 used: 31768 free: 397 shared: 0 buffers: 10 cached:\n8721\n11:23 load average: 5.62 used: 31901 free: 265 shared: 0 buffers: 11 cached:\n8742\n11:24 load average: 4.80 used: 31969 free: 196 shared: 0 buffers: 9 cached:\n8675\n11:25 load average: 53.74 used: 18644 free: 13522 shared: 0 buffers: 1\ncached: 8578\n\nMore detailed output from our 
master server:\n\n2014-05-29 00:01:01 procs -----------memory---------- ---swap-- -----io----\n--system-- -----cpu------\n2014-05-29 00:01:01 r b swpd free buff cache si so bi bo\nin cs us sy id wa st\n(...)\n2014-05-29 09:24:41 10 0 7044 364616 14048 9055060 0 0 4 1908\n7115 11287 29 1 70 0 0\n2014-05-29 09:24:42 7 0 7044 360160 14072 9057648 0 0 2064 796\n7512 11746 34 2 64 0 0\n2014-05-29 09:24:43 2 0 7044 339576 14076 9057644 0 0 12 824\n7143 11633 25 1 74 0 0\n2014-05-29 09:24:44 3 0 7044 333120 14080 9058096 0 0 4 696\n5762 7558 17 1 82 0 0\n2014-05-29 09:24:45 2 0 7044 332500 14080 9058096 0 0 0 592\n5138 6043 13 1 86 0 0\n2014-05-29 09:24:46 9 0 7044 322008 14092 9058500 0 0 12 1800\n6545 10885 22 2 76 0 0\n2014-05-29 09:24:47 6 0 7044 316428 14124 9058468 0 0 24 900\n7371 12690 34 2 64 0 0\n2014-05-29 09:24:48 8 0 7044 309608 14128 9058984 0 0 28 920\n7088 10422 23 1 76 0 0\n2014-05-29 09:24:49 1 0 7044 309236 14128 9058984 0 0 8 856\n6898 10685 24 1 75 0 0\n2014-05-29 09:24:50 4 0 7044 295140 14128 9059392 0 0 16 904\n6977 11955 25 2 73 0 0\n2014-05-29 09:24:51 3 0 7044 294700 14148 9059372 0 0 20 2176\n6471 9320 19 1 80 0 0\n2014-05-29 09:24:52 1 0 7044 293392 14172 9059816 0 0 72 804\n6836 9151 18 1 81 0 0\n2014-05-29 09:24:53 7 0 7044 297460 14176 9059812 0 0 12 760\n5914 8573 22 2 76 0 0\n2014-05-29 09:24:54 2 0 7044 305768 14180 9060228 0 0 4 680\n6204 9062 21 1 78 0 0\n2014-05-29 09:24:55 4 0 7044 299824 14188 9062396 0 0 2072 800\n6533 9405 21 1 78 0 0\n2014-05-29 09:24:56 7 0 7044 352600 14216 9062884 0 0 212 1612\n7084 10758 28 2 70 0 0\n2014-05-29 09:24:57 10 0 7044 350244 14232 9062868 0 0 16 844\n7873 14913 45 4 52 0 0\n2014-05-29 09:24:58 6 0 7044 348260 14236 9063268 0 0 4 616\n6029 8907 36 2 62 0 0\n2014-05-29 09:24:59 6 0 7044 335488 14248 9063256 0 0 28 840\n7111 10687 27 1 72 0 0\n2014-05-29 09:25:00 9 0 7044 343672 14248 9063612 0 0 16 1016\n7768 13125 29 2 70 0 0\n2014-05-29 09:25:01 8 0 7044 341564 14284 9063576 0 0 4 2156\n6498 8663 
18 1 82 0 0\n2014-05-29 09:25:02 8 0 7044 333756 14336 9064004 0 0 12 952\n5848 7911 15 1 84 0 0\n2014-05-29 09:25:03 8 1 7044 277940 14352 9063988 0 0 40 784\n6502 11541 23 2 75 0 0\n2014-05-29 09:25:04 7 0 7044 192320 14376 9064564 0 0 84 896\n7373 18005 33 3 63 0 0\n2014-05-29 09:25:05 2 0 7044 311492 14376 9064564 0 0 0 984\n6858 10804 24 1 75 0 0\n2014-05-29 09:25:06 12 0 7044 305456 14380 9064980 0 0 12 3200\n6969 10565 25 2 73 0 0\n2014-05-29 09:25:07 5 0 7044 303596 14420 9064940 0 0 40 1348\n7110 10512 28 1 71 0 0\n2014-05-29 09:25:08 4 0 7044 299124 14420 9065316 0 0 0 1040\n7804 12488 26 1 72 0 0\n2014-05-29 09:25:09 5 0 7044 281336 14428 9067484 0 0 2056 720\n6217 9241 23 2 75 0 0\n2014-05-29 09:25:10 8 0 7044 269892 14436 9067916 0 0 32 1000\n7188 10796 24 1 75 0 0\n2014-05-29 09:25:11 7 0 7044 267784 14436 9067916 0 0 8 1960\n7967 12888 31 1 68 0 0\n2014-05-29 09:25:12 9 0 7044 232936 14460 9068380 0 0 24 1112\n6582 10024 23 1 76 0 0\n2014-05-29 09:25:13 4 0 7044 231076 14504 9068640 0 0 100 912\n6939 12254 26 2 71 0 0\n2014-05-29 09:25:14 10 0 7044 189040 14588 9068788 0 0 188 1080\n6915 14296 32 4 64 0 0\n2014-05-29 09:25:15 3 0 7044 185320 14596 9068780 0 0 16 448\n3805 5329 26 2 73 0 0\n2014-05-29 09:25:16 3 0 7044 181352 14604 9069276 0 0 8 1540\n4239 5568 20 1 79 0 0\n2014-05-29 09:25:17 3 0 7044 179616 14632 9069248 0 0 20 436\n3957 4246 12 0 87 0 0\n2014-05-29 09:25:18 14 0 7044 177756 14640 9069488 0 0 8 616\n4961 6912 11 1 88 0 0\n2014-05-29 09:25:19 31 0 7044 177316 14656 9069472 0 0 24 1992\n11500 36077 75 4 21 0 0\n2014-05-29 09:25:20 6 0 7044 165436 14524 9055920 0 0 28 1280\n8124 25478 47 4 50 0 0\n2014-05-29 09:25:21 3 0 7044 169144 14504 9054308 0 0 24 1696\n6682 8512 20 1 79 0 0\n2014-05-29 09:25:22 9 0 7044 173172 14524 9056504 0 0 2060 780\n7241 10328 27 3 70 0 0\n2014-05-29 09:25:23 10 0 7044 162756 14528 9056500 0 0 12 744\n6599 10688 38 3 59 0 0\n2014-05-29 09:25:24 12 0 7044 173760 14260 9044720 0 0 96 1196\n6781 12568 44 9 47 
0 0\n2014-05-29 09:25:25 30 0 7044 168648 14040 9030188 0 0 12 664\n6627 12749 55 7 38 0 0\n2014-05-29 09:25:26 14 1 7044 176392 13932 9024712 0 0 16 1864\n7724 17989 76 6 17 0 0\n2014-05-29 09:25:27 19 0 7288 154564 13468 9003416 0 4 8 1836\n7381 16152 54 17 28 0 0\n2014-05-29 09:25:28 36 1 7288 154860 12084 8973284 0 0 588 736\n6623 13916 42 42 16 0 0\n2014-05-29 09:25:30 40 0 7288 154372 11492 8957252 0 0 580 896\n5525 11853 47 49 4 0 0\n2014-05-29 09:25:32 58 1 7288 154556 10724 8915168 0 0 328 2960\n6951 8010 33 65 2 0 0\n2014-05-29 09:25:34 44 1 7288 172180 10636 8901096 0 0 264 1860\n4392 5677 23 76 1 0 0\n2014-05-29 09:25:35 36 1 7288 154224 10272 8892344 0 0 772 664\n4171 7064 40 58 2 0 0\n2014-05-29 09:25:36 34 0 7288 154108 9984 8885292 0 0 808 224\n2222 3110 17 81 2 0 0\n2014-05-29 09:25:37 20 0 7320 154612 9844 8881288 0 24 0 1544\n1423 1115 9 81 10 0 0\n2014-05-29 09:25:38 23 0 7320 154860 9420 8874228 0 8 120 876\n1206 3054 1 80 19 0 0\n2014-05-29 09:25:39 23 1 7320 166132 9244 8872204 0 0 164 112\n2078 2616 13 82 4 0 0\n2014-05-29 09:25:40 46 1 7320 154612 8908 8861160 0 0 448 80\n1716 10859 10 73 16 1 0\n2014-05-29 09:25:41 42 0 7448 154312 7804 8848784 0 244 36 2124\n2149 3477 11 89 0 0 0\n2014-05-29 09:25:42 36 0 7448 154416 6576 8834556 0 0 92 1104\n3077 6462 26 74 0 0 0\n2014-05-29 09:25:43 64 2 7448 154644 5492 8821736 0 0 140 332\n2175 3466 11 78 10 0 0\n2014-05-29 09:25:44 49 0 7448 154364 4608 8811244 0 0 128 80\n1939 2115 10 90 0 0 0\n2014-05-29 09:25:49 34 0 7448 155792 2932 8783364 0 0 1624 2112\n11642 12972 15 84 1 0 0\n2014-05-29 09:25:50 27 0 7448 164700 2744 8781504 0 128 228 240\n1585 2297 9 89 2 0 0\n2014-05-29 09:25:51 29 0 7692 154540 2516 8778032 0 0 0 244\n1842 5052 10 86 4 0 0\n2014-05-29 09:25:55 35 0 7812 160476 1784 8766552 0 332 384 648\n6206 5487 6 92 2 0 0\n2014-05-29 09:25:56 30 1 7812 160380 1696 8765552 0 0 4 0\n1000 412 0 89 11 0 0\n2014-05-29 09:25:57 27 1 7812 154352 1900 8768940 0 0 5184 496\n1197 946 0 89 9 1 
0\n2014-05-29 09:25:58 19 0 7816 154136 1764 8767872 0 0 0 0\n1021 297 0 83 16 0 0\n2014-05-29 09:25:59 19 0 8024 154564 1724 8765880 0 0 44 1888\n1116 1024 0 80 20 0 0\n2014-05-29 09:26:03 42 1 10676 154140 1332 8760968 0 32 932 344\n4871 10038 1 82 17 0 0\n2014-05-29 09:26:04 39 0 10676 154284 1332 8758704 0 0 12 68\n1194 560 1 99 0 0 0\n2014-05-29 09:26:18 67 1 10312 11751464 1388 8764704 480 2756 9836\n5932 26274 32175 6 94 0 0 0\n2014-05-29 09:26:19 82 1 10312 13208352 1464 8766044 0 120 1632\n772 3121 8808 17 83 0 0 0\n2014-05-29 09:26:20 54 1 10228 14452424 1600 8766600 64 0 1628\n404 3528 7679 15 84 0 0 0\n2014-05-29 09:26:21 12 2 10212 14493080 2060 8768192 0 32 2176\n1664 5697 38657 24 57 17 2 0\n2014-05-29 09:26:22 14 1 10212 14195536 3056 8772540 0 0 2720\n624 7115 93905 38 10 48 3 0\n2014-05-29 09:26:23 24 1 10212 14144780 3576 8775768 0 0 3184\n900 7854 104139 36 6 54 3 0\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 16:57:11 -0700",
"msg_from": "Vincent Lasmarias <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached'\n memory"
},
{
"msg_contents": "I saw similar behavior a while back on a new PG 9.1 instance (also\ncoincidentally with high max_connections) running Centos 2.6.32 (also not\nvirtualized).\nIn particular the bit about free memory seeming to drop faster than the\nsystem could reclaim cache caught my eye.\n\nAfter lots of research focused on NUMA as well as bumping the box up to 80\ncores and 512GB RAM for a DB that was then only something like 650GB(?) I\nfound and experimented with increasing vm.min_free_kbytes.\nThis was intended to give the kernel enough free memory to gobble up in a\nhurry when it needed it, without having to stop what it was doing to\nactively reclaim cached memory for a waiting process.\nWhen the kernel was getting into this active cache reclamation business I\nsaw performance troughs that correlated with CPU spikes.\nLoads in the hundreds, free memory nearly exhausted, the whole bit.\n\nHere's a little more info about vm.min_free_kbytes:\nhttp://lkml.iu.edu//hypermail/linux/kernel/1109.0/00311.html\nhttp://lists.centos.org/pipermail/centos/2006-December/030766.html\n\nIn my case, increasing vm.min_free_kbytes helped a bit but wasn't the game\nchanger I expected it to be.\n\nUltimately we tracked all of these problems back to a poor hibernate\nconnection pooler implementation. 
If memory serves, the pooler code was\ngetting sideways at GC and then looping which in turn was sending\nconnection requests and statement binds in large bursts.\nActually proving the connection pooler to be the problem was a torturous\nprocess however, as all symptoms of this problem seemed to express\nthemselves on the DB.\n\nOnce the pooler was fixed the symptoms on the DB server vanished and\nhaven't been back even as the application load has increased.\n\nI'm happy to share more details if any of that seems to line up with your\nsituation.\n\n\n\n\n\n\n\n\nOn Thu, Jun 5, 2014 at 4:57 PM, Vincent Lasmarias <[email protected]>\nwrote:\n\n> Thanks for the informative responses and suggestions. My responses below:\n>\n> * Sorry for the double post. I posted the original message using my gmail\n> account and got a \"is not a member of any of the restrict_post groups\"\n> response and when I didn't see it for a day, I ended up wondering if it was\n> due to my use of a gmail account - so I tried using my company email\n> account\n> instead to post an updated version of the original post.\n>\n> * Our system is not virtualized.\n>\n> * Jeff, the output format of the load and free/cached memory did not come\n> from a tool but came from my script. 
My script does 'uptime; free -m', and\n> then another script massages the data to only grab date, 1-minute load\n> average, free, and cached.\n>\n> * For the 'top' outputs, I don't have the actual 'top' output, but I have\n> the following:\n>\n> 2014-05-30 00:01:01 procs -----------memory---------- ---swap-- -----io----\n> --system-- -----cpu-----\n> 2014-05-30 00:01:01 r b swpd free buff cache si so bi bo\n> in cs us sy id wa st\n> (...)\n> 2014-05-30 04:59:52 1 0 0 621976 92340 23431340 0 0 0\n> 16756 3538 3723 7 2 91 0 0\n> 2014-05-30 04:59:53 1 0 0 671896 92340 23431364 0 0 0\n> 236 1933 825 2 1 97 0 0\n> 2014-05-30 04:59:54 2 0 0 542964 92340 23433596 0 0 2148\n> 300 3751 1394 6 1 92 0 0\n> 2014-05-30 04:59:55 0 0 0 566140 92340 23433616 0 0 0\n> 192 3485 1465 6 1 94 0 0\n> 2014-05-30 04:59:56 2 0 0 614348 92340 23433760 0 0 0\n> 424 3238 4278 4 1 95 0 0\n> 2014-05-30 04:59:57 4 0 0 408812 92340 23433792 0 0 8\n> 944 6249 12512 12 2 86 0 0\n> 2014-05-30 04:59:58 3 0 0 471716 92356 23434012 0 0 0\n> 440 9028 4164 13 1 86 0 0\n> 2014-05-30 04:59:59 4 0 0 380988 92356 23434428 0 0 0\n> 248 10009 10967 15 3 83 0 0\n> 2014-05-30 05:00:00 6 0 0 462052 92356 23434904 0 0 0\n> 960 7260 9242 12 2 85 0 0\n> 2014-05-30 05:00:01 5 0 0 409796 92360 23435224 0 0 96\n> 1860 11475 95765 18 7 75 0 0\n> 2014-05-30 05:00:02 10 0 0 221464 92360 23433868 0 0 428\n> 10800 13933 128264 23 9 67 0 0\n> 2014-05-30 05:00:03 12 0 0 231444 91956 23355644 0 0 0\n> 1480 26651 10817 10 35 54 0 0\n> 2014-05-30 05:00:04 11 0 0 385672 91508 23254120 0 0 0\n> 3096 30849 44776 22 28 50 0 0\n> 2014-05-30 05:00:05 9 0 0 408932 91508 23270216 0 0 0\n> 1996 21925 24978 12 26 63 0 0\n> 2014-05-30 05:00:06 10 0 0 373992 91508 23270580 0 0 0\n> 2160 25778 5994 11 31 58 0 0\n> 2014-05-30 05:00:07 5 0 0 457900 91508 23270688 0 0 0\n> 6080 25185 11705 14 21 65 0 0\n> 2014-05-30 05:00:08 0 0 0 658300 91508 23270804 0 0 0\n> 2089 5989 5849 11 2 88 0 0\n> 2014-05-30 05:00:09 1 0 0 789972 91508 23270928 0 0 
0\n> 2508 2346 2550 2 1 97 0 0\n> 2014-05-30 05:00:10 0 0 0 845736 91508 23274260 0 0 12\n> 1728 2109 1494 2 1 97 0 0\n> 2014-05-30 05:00:11 3 0 0 686352 91516 23274644 0 0 8\n> 2100 4039 5288 6 2 92 0 0\n> 2014-05-30 05:00:12 11 1 0 447636 91516 23274808 0 0 520\n> 1436 10299 50523 24 7 68 1 0\n> 2014-05-30 05:00:13 13 0 0 352380 91516 23276220 0 0 1060\n> 816 18283 18682 15 36 48 1 0\n> 2014-05-30 05:00:14 12 0 0 356720 88880 23179276 0 0 704\n> 868 16193 140313 36 12 51 1 0\n> 2014-05-30 05:00:15 5 0 0 513784 88880 23173344 0 0 2248\n> 748 12350 21178 30 6 62 2 0\n> 2014-05-30 05:00:16 2 0 0 623020 88884 23175808 0 0 1568\n> 500 5841 4999 12 2 86 1 0\n> 2014-05-30 05:00:17 5 0 0 590488 88884 23175844 0 0 24\n> 584 6573 4905 14 2 84 0 0\n> 2014-05-30 05:00:18 3 0 0 632408 88884 23176116 0 0 0\n> 496 6846 4358 14 2 84 0 0\n> 2014-05-30 05:00:19 5 0 0 596716 88884 23176948 0 0 656\n> 668 7135 5262 14 3 83 0 0\n> 2014-05-30 05:00:20 6 0 0 558692 88884 23179964 0 0 2816\n> 1012 8566 7742 17 4 79 0 0\n> 2014-05-30 05:00:21 7 1 0 476580 88888 23181200 0 0 1272\n> 968 11240 14308 23 6 71 1 0\n> 2014-05-30 05:00:22 8 0 0 695396 88888 23183028 0 0 728\n> 1128 9751 7121 22 4 74 1 0\n> 2014-05-30 05:00:23 9 0 0 536084 88888 23199080 0 0 392\n> 1024 12523 22269 26 6 68 0 0\n> 2014-05-30 05:00:24 13 0 0 416296 88888 23200416 0 0 40\n> 1000 16319 61822 29 21 51 0 0\n> 2014-05-30 05:00:25 14 0 0 386904 88888 23200704 0 0 24\n> 816 20850 4424 16 38 46 0 0\n> 2014-05-30 05:00:26 17 0 0 334688 88896 23201028 0 0 24\n> 1000 26758 16934 24 46 30 0 0\n> 2014-05-30 05:00:27 18 0 0 307304 88896 23193928 0 0 0\n> 1068 27051 67778 21 46 33 0 0\n> 2014-05-30 05:00:28 20 1 0 295560 88896 23162456 0 0 0\n> 860 31012 27787 15 67 18 0 0\n> 2014-05-30 05:00:29 22 1 0 281272 88896 23153312 0 0 16\n> 928 28899 2857 9 78 13 0 0\n> 2014-05-30 05:00:30 26 0 0 400804 87976 22979324 0 0 0\n> 1536 37689 4368 9 88 3 0 0\n> 2014-05-30 05:00:31 27 0 0 395588 87976 22979412 0 0 0\n> 1564 29195 4305 8 92 0 
0 0\n> 2014-05-30 05:00:32 25 0 0 353176 87976 22979592 0 0 24\n> 9404 29845 15845 13 85 2 0 0\n> 2014-05-30 05:00:33 28 0 0 318680 87976 22979776 0 0 0\n> 1588 28097 3372 9 91 0 0 0\n> 2014-05-30 05:00:34 27 0 0 304676 87352 22969136 0 0 0\n> 1480 29387 4330 10 90 0 0 0\n> 2014-05-30 05:00:35 27 0 0 337960 79784 22900220 0 0 48\n> 924 38334 8253 13 86 1 0 0\n> 2014-05-30 05:00:36 29 0 0 297308 79788 22898608 0 0 0\n> 952 30865 3067 10 90 0 0 0\n> 2014-05-30 05:00:37 33 0 0 282612 79624 22801728 0 0 8\n> 17169 29197 6855 13 87 0 0 0\n> 2014-05-30 05:00:38 32 0 0 224140 79624 22717680 0 0 0\n> 908 29640 6579 14 86 0 0 0\n> 2014-05-30 05:00:39 29 0 0 304456 79624 22661712 0 0 0\n> 696 31838 3563 10 90 0 0 0\n> 2014-05-30 05:00:40 32 0 0 322080 79624 22657328 0 0 0\n> 396 29206 2274 4 96 0 0 0\n> 2014-05-30 05:00:41 33 0 0 309184 79624 22649172 0 0 0\n> 660 27084 2385 4 96 0 0 0\n> 2014-05-30 05:00:42 34 0 0 285348 79624 22646216 0 0 0\n> 4316 26418 2571 3 97 0 0 0\n> 2014-05-30 05:00:43 37 0 0 273732 79624 22632920 0 0 0\n> 740 26728 2663 3 97 0 0 0\n> 2014-05-30 05:00:44 38 0 0 276304 79624 22632640 0 0 0\n> 464 26376 2593 3 97 0 0 0\n> 2014-05-30 05:00:45 41 0 0 257480 79624 22624828 0 0 0\n> 448 26505 3516 4 96 0 0 0\n> 2014-05-30 05:00:46 32 0 0 305108 79608 22559216 0 0 0\n> 304 29610 6162 8 92 0 0 0\n> 2014-05-30 05:00:47 46 0 0 310504 79592 22487756 0 0 0\n> 360 30559 10591 12 88 0 0 0\n> 2014-05-30 05:00:48 47 0 0 286236 79596 22478168 0 0 0\n> 312 25733 4250 5 95 0 0 0\n> 2014-05-30 05:00:49 48 0 0 300284 79596 22477684 0 0 0\n> 356 26464 4690 5 95 0 0 0\n> 2014-05-30 05:00:50 52 0 0 276896 79588 22449560 0 0 124\n> 280 26238 4132 4 96 0 0 0\n> 2014-05-30 05:00:51 60 0 0 234836 79568 22406932 0 0 4\n> 344 26446 5055 5 95 0 0 0\n> 2014-05-30 05:00:52 62 0 0 247304 79564 22351916 0 0 12\n> 452 26807 4694 3 97 0 0 0\n> 2014-05-30 05:00:53 63 0 0 231892 79564 22347368 0 0 0\n> 308 25378 4376 3 97 0 0 0\n> 2014-05-30 05:00:54 67 0 0 236056 79564 22309368 0 0 0\n> 
156 25737 4022 3 97 0 0 0\n> 2014-05-30 05:00:55 66 0 0 232984 79564 22286336 0 0 0\n> 216 25393 3874 2 98 0 0 0\n> 2014-05-30 05:00:56 67 0 0 240720 79560 22267736 0 0 0\n> 588 25944 4678 2 98 0 0 0\n> 2014-05-30 05:00:57 70 0 0 242836 79540 22232068 0 0 0\n> 16800 26058 4607 3 97 0 0 0\n> 2014-05-30 05:00:58 72 0 0 234944 79548 22224948 0 0 0\n> 608 25589 4687 2 98 0 0 0\n> 2014-05-30 05:00:59 73 0 0 236064 79536 22173496 0 0 0\n> 188 25747 4530 3 97 0 0 0\n> 2014-05-30 05:01:00 77 0 0 232708 79524 22135168 0 0 0\n> 304 25546 5247 3 97 0 0 0\n> 2014-05-30 05:01:01 72 0 0 269328 79528 22117528 0 0 24\n> 396 27545 8488 5 95 0 0 0\n> 2014-05-30 05:01:02 83 0 0 220892 79496 22043024 0 0 0\n> 3280 28665 9805 7 94 0 0 0\n> 2014-05-30 05:01:03 86 0 0 224004 79488 21995400 0 0 0\n> 440 26338 6090 3 97 0 0 0\n> 2014-05-30 05:01:04 90 0 0 249684 79476 21932468 0 0 0\n> 408 26341 5834 3 97 0 0 0\n> 2014-05-30 05:01:05 91 0 0 257336 79464 21883060 0 0 0\n> 380 26272 5717 2 98 0 0 0\n> 2014-05-30 05:01:06 98 0 0 242896 79468 21878016 0 0 0\n> 608 25628 5648 2 98 0 0 0\n> 2014-05-30 05:01:07 94 0 0 237276 79468 21876148 0 0 0\n> 908 25041 5883 3 98 0 0 0\n> 2014-05-30 05:01:08 99 0 0 225832 79488 21858572 0 0 24\n> 504 25271 5913 3 97 0 0 0\n> 2014-05-30 05:01:09 94 0 0 245796 79460 21812404 0 0 0\n> 264 25106 6189 3 97 0 0 0\n> 2014-05-30 05:01:10 94 0 0 246268 79460 21811144 0 0 0\n> 188 25087 5989 2 98 0 0 0\n> 2014-05-30 05:01:11 94 0 0 232900 79456 21775152 0 0 0\n> 168 24965 5949 3 97 0 0 0\n> 2014-05-30 05:01:12 100 0 0 227900 79428 21737032 0 0 32\n> 276 25560 6798 4 96 0 0 0\n> 2014-05-30 05:01:13 98 0 0 253644 79396 21713908 0 0 0\n> 552 27440 9052 5 95 0 0 0\n> 2014-05-30 05:01:14 104 0 0 269648 79376 21634336 0 0 0\n> 540 26100 6633 3 97 0 0 0\n> 2014-05-30 05:01:15 104 0 0 259436 79376 21622164 0 0 0\n> 368 25094 6417 2 98 0 0 0\n> 2014-05-30 05:01:16 105 0 0 262596 79372 21616292 0 0 0\n> 200 24995 6276 2 98 0 0 0\n> 2014-05-30 05:01:17 109 0 0 232172 79360 
21583800 0 0 0\n> 388 25112 6570 3 97 0 0 0\n> 2014-05-30 05:01:18 109 0 0 231628 79372 21566604 0 0 0\n> 364 25221 6644 2 98 0 0 0\n> 2014-05-30 05:01:19 110 0 0 223920 79372 21532992 0 0 0\n> 340 25383 6874 3 97 0 0 0\n> 2014-05-30 05:01:20 111 0 0 223028 79368 21501868 0 0 0\n> 288 25369 6465 2 98 0 0 0\n> 2014-05-30 05:01:21 113 0 0 211676 79352 21434584 0 0 0\n> 240 24939 6607 4 96 0 0 0\n> 2014-05-30 05:01:22 114 0 0 211020 79300 21327264 0 0 0\n> 308 25390 7239 5 95 0 0 0\n> 2014-05-30 05:01:23 110 0 0 213524 79256 21215612 0 0 40\n> 336 25494 7878 7 93 0 0 0\n> 2014-05-30 05:01:24 114 0 0 222748 79220 21107976 0 0 0\n> 276 25257 7032 7 93 0 0 0\n> 2014-05-30 05:01:25 115 0 0 262004 79160 21012468 0 0 0\n> 300 25986 6746 8 92 0 0 0\n> (...)\n>\n> * For the 'perf' outputs, I'm still waiting for a time where I can gather\n> it. Unfortunately, this is on a Production system, and we had to quickly\n> look for a workaround; the workaround we have found is: when the 'free'\n> memory drops below a certain threshold, we run: /bin/sync && /bin/echo 3 >\n> /proc/sys/vm/drop_caches . Since we've been doing that, we haven't had the\n> high load issue.\n>\n> * I forgot to mention that the debugging info I posted came from our slave\n> server (the master and slave have the same specs, but different OS version -\n> master has 2.6.18-371.3.1.el5 #1 SMP Thu Dec 5 12:47:02 EST 2013). We\n> actually first saw the issue on our master server and not the slave server.\n> To get the master going, we edited our code to move our heaviest queries so\n> that they hit only the slave. 
After we did that, the slave started having\n> the high load issues, but with one difference - on the slave, 'cached' was\n> high, but as can be seen in the master debugging session below, on the\n> master, cached was low (corresponds to what we had for shared_buffers), and\n> when 'free' dropped, it looks like the OS managed to grab the memory from\n> 'used' (do note that we will be adding more memory to our servers shortly):\n>\n> 09:24 load average: 8.76 used: 32166 free: 31851 shared: 314 buffers: 0\n> cached: 8827\n> 09:25 load average: 6.30 used: 31840 free: 325 shared: 0 buffers: 13\n> cached:\n> 8851\n> 09:26 load average: 95.83 used: 17883 free: 14282 shared: 0 buffers: 2\n> cached: 8563\n> (...)\n> 10:00 load average: 4.45 used: 31945 free: 220 shared: 0 buffers: 15\n> cached:\n> 8862\n> 10:01 load average: 4.56 used: 31983 free: 182 shared: 0 buffers: 15\n> cached:\n> 8844\n> 10:02 load average: 5.34 used: 31983 free: 183 shared: 0 buffers: 14\n> cached:\n> 8832\n> 10:03 load average: 6.57 used: 31987 free: 179 shared: 0 buffers: 9 cached:\n> 8709\n> 10:04 load average: 71.21 used: 18095 free: 14071 shared: 0 buffers: 1\n> cached: 8556\n> (...)\n> 10:40 load average: 5.38 used: 31970 free: 196 shared: 0 buffers: 12\n> cached:\n> 8929\n> 10:41 load average: 6.10 used: 31889 free: 276 shared: 0 buffers: 13\n> cached:\n> 8804\n> 10:42 load average: 6.94 used: 31984 free: 182 shared: 0 buffers: 2 cached:\n> 8761\n> 10:43 load average: 13.90 used: 31777 free: 389 shared: 0 buffers: 2\n> cached:\n> 8555\n> 10:44 load average: 54.30 used: 18894 free: 13272 shared: 0 buffers: 4\n> cached: 8592\n> (...)\n> 11:21 load average: 5.54 used: 31985 free: 181 shared: 0 buffers: 11\n> cached:\n> 8764\n> 11:22 load average: 5.15 used: 31768 free: 397 shared: 0 buffers: 10\n> cached:\n> 8721\n> 11:23 load average: 5.62 used: 31901 free: 265 shared: 0 buffers: 11\n> cached:\n> 8742\n> 11:24 load average: 4.80 used: 31969 free: 196 shared: 0 buffers: 9 cached:\n> 8675\n> 
11:25 load average: 53.74 used: 18644 free: 13522 shared: 0 buffers: 1\n> cached: 8578\n>\n> More detailed output from our master server:\n>\n> 2014-05-29 00:01:01 procs -----------memory---------- ---swap-- -----io----\n> --system-- -----cpu------\n> 2014-05-29 00:01:01 r b swpd free buff cache si so bi bo\n> in cs us sy id wa st\n> (...)\n> 2014-05-29 09:24:41 10 0 7044 364616 14048 9055060 0 0 4\n> 1908\n> 7115 11287 29 1 70 0 0\n> 2014-05-29 09:24:42 7 0 7044 360160 14072 9057648 0 0 2064\n> 796\n> 7512 11746 34 2 64 0 0\n> 2014-05-29 09:24:43 2 0 7044 339576 14076 9057644 0 0 12\n> 824\n> 7143 11633 25 1 74 0 0\n> 2014-05-29 09:24:44 3 0 7044 333120 14080 9058096 0 0 4\n> 696\n> 5762 7558 17 1 82 0 0\n> 2014-05-29 09:24:45 2 0 7044 332500 14080 9058096 0 0 0\n> 592\n> 5138 6043 13 1 86 0 0\n> 2014-05-29 09:24:46 9 0 7044 322008 14092 9058500 0 0 12\n> 1800\n> 6545 10885 22 2 76 0 0\n> 2014-05-29 09:24:47 6 0 7044 316428 14124 9058468 0 0 24\n> 900\n> 7371 12690 34 2 64 0 0\n> 2014-05-29 09:24:48 8 0 7044 309608 14128 9058984 0 0 28\n> 920\n> 7088 10422 23 1 76 0 0\n> 2014-05-29 09:24:49 1 0 7044 309236 14128 9058984 0 0 8\n> 856\n> 6898 10685 24 1 75 0 0\n> 2014-05-29 09:24:50 4 0 7044 295140 14128 9059392 0 0 16\n> 904\n> 6977 11955 25 2 73 0 0\n> 2014-05-29 09:24:51 3 0 7044 294700 14148 9059372 0 0 20\n> 2176\n> 6471 9320 19 1 80 0 0\n> 2014-05-29 09:24:52 1 0 7044 293392 14172 9059816 0 0 72\n> 804\n> 6836 9151 18 1 81 0 0\n> 2014-05-29 09:24:53 7 0 7044 297460 14176 9059812 0 0 12\n> 760\n> 5914 8573 22 2 76 0 0\n> 2014-05-29 09:24:54 2 0 7044 305768 14180 9060228 0 0 4\n> 680\n> 6204 9062 21 1 78 0 0\n> 2014-05-29 09:24:55 4 0 7044 299824 14188 9062396 0 0 2072\n> 800\n> 6533 9405 21 1 78 0 0\n> 2014-05-29 09:24:56 7 0 7044 352600 14216 9062884 0 0 212\n> 1612\n> 7084 10758 28 2 70 0 0\n> 2014-05-29 09:24:57 10 0 7044 350244 14232 9062868 0 0 16\n> 844\n> 7873 14913 45 4 52 0 0\n> 2014-05-29 09:24:58 6 0 7044 348260 14236 9063268 0 0 4\n> 616\n> 6029 
8907 36 2 62 0 0\n> 2014-05-29 09:24:59 6 0 7044 335488 14248 9063256 0 0 28\n> 840\n> 7111 10687 27 1 72 0 0\n> 2014-05-29 09:25:00 9 0 7044 343672 14248 9063612 0 0 16\n> 1016\n> 7768 13125 29 2 70 0 0\n> 2014-05-29 09:25:01 8 0 7044 341564 14284 9063576 0 0 4\n> 2156\n> 6498 8663 18 1 82 0 0\n> 2014-05-29 09:25:02 8 0 7044 333756 14336 9064004 0 0 12\n> 952\n> 5848 7911 15 1 84 0 0\n> 2014-05-29 09:25:03 8 1 7044 277940 14352 9063988 0 0 40\n> 784\n> 6502 11541 23 2 75 0 0\n> 2014-05-29 09:25:04 7 0 7044 192320 14376 9064564 0 0 84\n> 896\n> 7373 18005 33 3 63 0 0\n> 2014-05-29 09:25:05 2 0 7044 311492 14376 9064564 0 0 0\n> 984\n> 6858 10804 24 1 75 0 0\n> 2014-05-29 09:25:06 12 0 7044 305456 14380 9064980 0 0 12\n> 3200\n> 6969 10565 25 2 73 0 0\n> 2014-05-29 09:25:07 5 0 7044 303596 14420 9064940 0 0 40\n> 1348\n> 7110 10512 28 1 71 0 0\n> 2014-05-29 09:25:08 4 0 7044 299124 14420 9065316 0 0 0\n> 1040\n> 7804 12488 26 1 72 0 0\n> 2014-05-29 09:25:09 5 0 7044 281336 14428 9067484 0 0 2056\n> 720\n> 6217 9241 23 2 75 0 0\n> 2014-05-29 09:25:10 8 0 7044 269892 14436 9067916 0 0 32\n> 1000\n> 7188 10796 24 1 75 0 0\n> 2014-05-29 09:25:11 7 0 7044 267784 14436 9067916 0 0 8\n> 1960\n> 7967 12888 31 1 68 0 0\n> 2014-05-29 09:25:12 9 0 7044 232936 14460 9068380 0 0 24\n> 1112\n> 6582 10024 23 1 76 0 0\n> 2014-05-29 09:25:13 4 0 7044 231076 14504 9068640 0 0 100\n> 912\n> 6939 12254 26 2 71 0 0\n> 2014-05-29 09:25:14 10 0 7044 189040 14588 9068788 0 0 188\n> 1080\n> 6915 14296 32 4 64 0 0\n> 2014-05-29 09:25:15 3 0 7044 185320 14596 9068780 0 0 16\n> 448\n> 3805 5329 26 2 73 0 0\n> 2014-05-29 09:25:16 3 0 7044 181352 14604 9069276 0 0 8\n> 1540\n> 4239 5568 20 1 79 0 0\n> 2014-05-29 09:25:17 3 0 7044 179616 14632 9069248 0 0 20\n> 436\n> 3957 4246 12 0 87 0 0\n> 2014-05-29 09:25:18 14 0 7044 177756 14640 9069488 0 0 8\n> 616\n> 4961 6912 11 1 88 0 0\n> 2014-05-29 09:25:19 31 0 7044 177316 14656 9069472 0 0 24\n> 1992\n> 11500 36077 75 4 21 0 0\n> 2014-05-29 09:25:20 
6 0 7044 165436 14524 9055920 0 0 28\n> 1280\n> 8124 25478 47 4 50 0 0\n> 2014-05-29 09:25:21 3 0 7044 169144 14504 9054308 0 0 24\n> 1696\n> 6682 8512 20 1 79 0 0\n> 2014-05-29 09:25:22 9 0 7044 173172 14524 9056504 0 0 2060\n> 780\n> 7241 10328 27 3 70 0 0\n> 2014-05-29 09:25:23 10 0 7044 162756 14528 9056500 0 0 12\n> 744\n> 6599 10688 38 3 59 0 0\n> 2014-05-29 09:25:24 12 0 7044 173760 14260 9044720 0 0 96\n> 1196\n> 6781 12568 44 9 47 0 0\n> 2014-05-29 09:25:25 30 0 7044 168648 14040 9030188 0 0 12\n> 664\n> 6627 12749 55 7 38 0 0\n> 2014-05-29 09:25:26 14 1 7044 176392 13932 9024712 0 0 16\n> 1864\n> 7724 17989 76 6 17 0 0\n> 2014-05-29 09:25:27 19 0 7288 154564 13468 9003416 0 4 8\n> 1836\n> 7381 16152 54 17 28 0 0\n> 2014-05-29 09:25:28 36 1 7288 154860 12084 8973284 0 0 588\n> 736\n> 6623 13916 42 42 16 0 0\n> 2014-05-29 09:25:30 40 0 7288 154372 11492 8957252 0 0 580\n> 896\n> 5525 11853 47 49 4 0 0\n> 2014-05-29 09:25:32 58 1 7288 154556 10724 8915168 0 0 328\n> 2960\n> 6951 8010 33 65 2 0 0\n> 2014-05-29 09:25:34 44 1 7288 172180 10636 8901096 0 0 264\n> 1860\n> 4392 5677 23 76 1 0 0\n> 2014-05-29 09:25:35 36 1 7288 154224 10272 8892344 0 0 772\n> 664\n> 4171 7064 40 58 2 0 0\n> 2014-05-29 09:25:36 34 0 7288 154108 9984 8885292 0 0 808\n> 224\n> 2222 3110 17 81 2 0 0\n> 2014-05-29 09:25:37 20 0 7320 154612 9844 8881288 0 24 0\n> 1544\n> 1423 1115 9 81 10 0 0\n> 2014-05-29 09:25:38 23 0 7320 154860 9420 8874228 0 8 120\n> 876\n> 1206 3054 1 80 19 0 0\n> 2014-05-29 09:25:39 23 1 7320 166132 9244 8872204 0 0 164\n> 112\n> 2078 2616 13 82 4 0 0\n> 2014-05-29 09:25:40 46 1 7320 154612 8908 8861160 0 0 448\n> 80\n> 1716 10859 10 73 16 1 0\n> 2014-05-29 09:25:41 42 0 7448 154312 7804 8848784 0 244 36\n> 2124\n> 2149 3477 11 89 0 0 0\n> 2014-05-29 09:25:42 36 0 7448 154416 6576 8834556 0 0 92\n> 1104\n> 3077 6462 26 74 0 0 0\n> 2014-05-29 09:25:43 64 2 7448 154644 5492 8821736 0 0 140\n> 332\n> 2175 3466 11 78 10 0 0\n> 2014-05-29 09:25:44 49 0 7448 154364 4608 
8811244 0 0 128\n> 80\n> 1939 2115 10 90 0 0 0\n> 2014-05-29 09:25:49 34 0 7448 155792 2932 8783364 0 0 1624\n> 2112\n> 11642 12972 15 84 1 0 0\n> 2014-05-29 09:25:50 27 0 7448 164700 2744 8781504 0 128 228\n> 240\n> 1585 2297 9 89 2 0 0\n> 2014-05-29 09:25:51 29 0 7692 154540 2516 8778032 0 0 0\n> 244\n> 1842 5052 10 86 4 0 0\n> 2014-05-29 09:25:55 35 0 7812 160476 1784 8766552 0 332 384\n> 648\n> 6206 5487 6 92 2 0 0\n> 2014-05-29 09:25:56 30 1 7812 160380 1696 8765552 0 0 4\n> 0\n> 1000 412 0 89 11 0 0\n> 2014-05-29 09:25:57 27 1 7812 154352 1900 8768940 0 0 5184\n> 496\n> 1197 946 0 89 9 1 0\n> 2014-05-29 09:25:58 19 0 7816 154136 1764 8767872 0 0 0\n> 0\n> 1021 297 0 83 16 0 0\n> 2014-05-29 09:25:59 19 0 8024 154564 1724 8765880 0 0 44\n> 1888\n> 1116 1024 0 80 20 0 0\n> 2014-05-29 09:26:03 42 1 10676 154140 1332 8760968 0 32 932\n> 344\n> 4871 10038 1 82 17 0 0\n> 2014-05-29 09:26:04 39 0 10676 154284 1332 8758704 0 0 12\n> 68\n> 1194 560 1 99 0 0 0\n> 2014-05-29 09:26:18 67 1 10312 11751464 1388 8764704 480 2756 9836\n> 5932 26274 32175 6 94 0 0 0\n> 2014-05-29 09:26:19 82 1 10312 13208352 1464 8766044 0 120 1632\n> 772 3121 8808 17 83 0 0 0\n> 2014-05-29 09:26:20 54 1 10228 14452424 1600 8766600 64 0 1628\n> 404 3528 7679 15 84 0 0 0\n> 2014-05-29 09:26:21 12 2 10212 14493080 2060 8768192 0 32 2176\n> 1664 5697 38657 24 57 17 2 0\n> 2014-05-29 09:26:22 14 1 10212 14195536 3056 8772540 0 0 2720\n> 624 7115 93905 38 10 48 3 0\n> 2014-05-29 09:26:23 24 1 10212 14144780 3576 8775768 0 0 3184\n> 900 7854 104139 36 6 54 3 0\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI saw similar behavior a while back on a new PG 9.1 instance (also coincidentally with high max_connections) running Centos 2.6.32 (also not virtualized).In particular the bit about free memory seeming to drop faster than the system could reclaim cache caught my 
eye.\nAfter lots of research focused on NUMA as well as bumping the box up to 80 cores and 512GB RAM for a DB that was then only something like 650GB(?) I found and experimented with increasing vm.min_free_kbytes.\nThis was intended to give the kernel enough free memory to gobble up in a hurry when it needed it, without having to stop what it was doing to actively reclaim cached memory for a waiting process.When the kernel was getting into this active cache reclamation business I saw performance troughs that correlated with CPU spikes. \nLoads in the hundreds, free memory nearly exhausted, the whole bit.Here's a little more info about vm.min_free_kbytes:http://lkml.iu.edu//hypermail/linux/kernel/1109.0/00311.html\nhttp://lists.centos.org/pipermail/centos/2006-December/030766.htmlIn my case, increasing vm.min_free_kbytes helped a bit but wasn't the game changer I expected it to be.\nUltimately we tracked all of these problems back to a poor hibernate connection pooler implementation. If memory serves, the pooler code was getting sideways at GC and then looping which in turn was sending connection requests and statement binds in large bursts.\nActually proving the connection pooler to be the problem was a torturous process however, as all symptoms of this problem seemed to express themselves on the DB.Once the pooler was fixed the symptoms on the DB server vanished and haven't been back even as the application load has increased.\nI'm happy to share more details if any of that seems to line up with your situation.\nOn Thu, Jun 5, 2014 at 4:57 PM, Vincent Lasmarias <[email protected]> wrote:\nThanks for the informative responses and suggestions. My responses below:\n\n* Sorry for the double post. 
I posted the original message using my gmail\naccount and got a \"is not a member of any of the restrict_post groups\"\nresponse and when I didn't see it for a day, I ended up wondering if it was\ndue to my use of a gmail account - so I tried using my company email account\ninstead to post an updated version of the original post.\n\n* Our system is not virtualized.\n\n* Jeff, the output format of the load and free/cached memory did not come\nfrom a tool but came from my script. My script does 'uptime; free -m', and\nthen another script massages the data to only grab date, 1-minute load\naverage, free, and cached.\n\n* For the 'top' outputs, I don't have the actual 'top' output, but I have\nthe following:\n\n2014-05-30 00:01:01 procs -----------memory---------- ---swap-- -----io----\n--system-- -----cpu-----\n2014-05-30 00:01:01 r b swpd free buff cache si so bi bo\nin cs us sy id wa st\n(...)\n2014-05-30 04:59:52 1 0 0 621976 92340 23431340 0 0 0\n16756 3538 3723 7 2 91 0 0\n2014-05-30 04:59:53 1 0 0 671896 92340 23431364 0 0 0\n236 1933 825 2 1 97 0 0\n2014-05-30 04:59:54 2 0 0 542964 92340 23433596 0 0 2148\n300 3751 1394 6 1 92 0 0\n2014-05-30 04:59:55 0 0 0 566140 92340 23433616 0 0 0\n192 3485 1465 6 1 94 0 0\n2014-05-30 04:59:56 2 0 0 614348 92340 23433760 0 0 0\n424 3238 4278 4 1 95 0 0\n2014-05-30 04:59:57 4 0 0 408812 92340 23433792 0 0 8\n944 6249 12512 12 2 86 0 0\n2014-05-30 04:59:58 3 0 0 471716 92356 23434012 0 0 0\n440 9028 4164 13 1 86 0 0\n2014-05-30 04:59:59 4 0 0 380988 92356 23434428 0 0 0\n248 10009 10967 15 3 83 0 0\n2014-05-30 05:00:00 6 0 0 462052 92356 23434904 0 0 0\n960 7260 9242 12 2 85 0 0\n2014-05-30 05:00:01 5 0 0 409796 92360 23435224 0 0 96\n1860 11475 95765 18 7 75 0 0\n2014-05-30 05:00:02 10 0 0 221464 92360 23433868 0 0 428\n10800 13933 128264 23 9 67 0 0\n2014-05-30 05:00:03 12 0 0 231444 91956 23355644 0 0 0\n1480 26651 10817 10 35 54 0 0\n2014-05-30 05:00:04 11 0 0 385672 91508 23254120 0 0 0\n3096 30849 44776 22 28 50 0 
0\n2014-05-30 05:00:05 9 0 0 408932 91508 23270216 0 0 0\n1996 21925 24978 12 26 63 0 0\n2014-05-30 05:00:06 10 0 0 373992 91508 23270580 0 0 0\n2160 25778 5994 11 31 58 0 0\n2014-05-30 05:00:07 5 0 0 457900 91508 23270688 0 0 0\n6080 25185 11705 14 21 65 0 0\n2014-05-30 05:00:08 0 0 0 658300 91508 23270804 0 0 0\n2089 5989 5849 11 2 88 0 0\n2014-05-30 05:00:09 1 0 0 789972 91508 23270928 0 0 0\n2508 2346 2550 2 1 97 0 0\n2014-05-30 05:00:10 0 0 0 845736 91508 23274260 0 0 12\n1728 2109 1494 2 1 97 0 0\n2014-05-30 05:00:11 3 0 0 686352 91516 23274644 0 0 8\n2100 4039 5288 6 2 92 0 0\n2014-05-30 05:00:12 11 1 0 447636 91516 23274808 0 0 520\n1436 10299 50523 24 7 68 1 0\n2014-05-30 05:00:13 13 0 0 352380 91516 23276220 0 0 1060\n816 18283 18682 15 36 48 1 0\n2014-05-30 05:00:14 12 0 0 356720 88880 23179276 0 0 704\n868 16193 140313 36 12 51 1 0\n2014-05-30 05:00:15 5 0 0 513784 88880 23173344 0 0 2248\n748 12350 21178 30 6 62 2 0\n2014-05-30 05:00:16 2 0 0 623020 88884 23175808 0 0 1568\n500 5841 4999 12 2 86 1 0\n2014-05-30 05:00:17 5 0 0 590488 88884 23175844 0 0 24\n584 6573 4905 14 2 84 0 0\n2014-05-30 05:00:18 3 0 0 632408 88884 23176116 0 0 0\n496 6846 4358 14 2 84 0 0\n2014-05-30 05:00:19 5 0 0 596716 88884 23176948 0 0 656\n668 7135 5262 14 3 83 0 0\n2014-05-30 05:00:20 6 0 0 558692 88884 23179964 0 0 2816\n1012 8566 7742 17 4 79 0 0\n2014-05-30 05:00:21 7 1 0 476580 88888 23181200 0 0 1272\n968 11240 14308 23 6 71 1 0\n2014-05-30 05:00:22 8 0 0 695396 88888 23183028 0 0 728\n1128 9751 7121 22 4 74 1 0\n2014-05-30 05:00:23 9 0 0 536084 88888 23199080 0 0 392\n1024 12523 22269 26 6 68 0 0\n2014-05-30 05:00:24 13 0 0 416296 88888 23200416 0 0 40\n1000 16319 61822 29 21 51 0 0\n2014-05-30 05:00:25 14 0 0 386904 88888 23200704 0 0 24\n816 20850 4424 16 38 46 0 0\n2014-05-30 05:00:26 17 0 0 334688 88896 23201028 0 0 24\n1000 26758 16934 24 46 30 0 0\n2014-05-30 05:00:27 18 0 0 307304 88896 23193928 0 0 0\n1068 27051 67778 21 46 33 0 0\n2014-05-30 05:00:28 20 1 0 
295560 88896 23162456 0 0 0\n860 31012 27787 15 67 18 0 0\n2014-05-30 05:00:29 22 1 0 281272 88896 23153312 0 0 16\n928 28899 2857 9 78 13 0 0\n2014-05-30 05:00:30 26 0 0 400804 87976 22979324 0 0 0\n1536 37689 4368 9 88 3 0 0\n2014-05-30 05:00:31 27 0 0 395588 87976 22979412 0 0 0\n1564 29195 4305 8 92 0 0 0\n2014-05-30 05:00:32 25 0 0 353176 87976 22979592 0 0 24\n9404 29845 15845 13 85 2 0 0\n2014-05-30 05:00:33 28 0 0 318680 87976 22979776 0 0 0\n1588 28097 3372 9 91 0 0 0\n2014-05-30 05:00:34 27 0 0 304676 87352 22969136 0 0 0\n1480 29387 4330 10 90 0 0 0\n2014-05-30 05:00:35 27 0 0 337960 79784 22900220 0 0 48\n924 38334 8253 13 86 1 0 0\n2014-05-30 05:00:36 29 0 0 297308 79788 22898608 0 0 0\n952 30865 3067 10 90 0 0 0\n2014-05-30 05:00:37 33 0 0 282612 79624 22801728 0 0 8\n17169 29197 6855 13 87 0 0 0\n2014-05-30 05:00:38 32 0 0 224140 79624 22717680 0 0 0\n908 29640 6579 14 86 0 0 0\n2014-05-30 05:00:39 29 0 0 304456 79624 22661712 0 0 0\n696 31838 3563 10 90 0 0 0\n2014-05-30 05:00:40 32 0 0 322080 79624 22657328 0 0 0\n396 29206 2274 4 96 0 0 0\n2014-05-30 05:00:41 33 0 0 309184 79624 22649172 0 0 0\n660 27084 2385 4 96 0 0 0\n2014-05-30 05:00:42 34 0 0 285348 79624 22646216 0 0 0\n4316 26418 2571 3 97 0 0 0\n2014-05-30 05:00:43 37 0 0 273732 79624 22632920 0 0 0\n740 26728 2663 3 97 0 0 0\n2014-05-30 05:00:44 38 0 0 276304 79624 22632640 0 0 0\n464 26376 2593 3 97 0 0 0\n2014-05-30 05:00:45 41 0 0 257480 79624 22624828 0 0 0\n448 26505 3516 4 96 0 0 0\n2014-05-30 05:00:46 32 0 0 305108 79608 22559216 0 0 0\n304 29610 6162 8 92 0 0 0\n2014-05-30 05:00:47 46 0 0 310504 79592 22487756 0 0 0\n360 30559 10591 12 88 0 0 0\n2014-05-30 05:00:48 47 0 0 286236 79596 22478168 0 0 0\n312 25733 4250 5 95 0 0 0\n2014-05-30 05:00:49 48 0 0 300284 79596 22477684 0 0 0\n356 26464 4690 5 95 0 0 0\n2014-05-30 05:00:50 52 0 0 276896 79588 22449560 0 0 124\n280 26238 4132 4 96 0 0 0\n2014-05-30 05:00:51 60 0 0 234836 79568 22406932 0 0 4\n344 26446 5055 5 95 0 0 
0\n2014-05-30 05:00:52 62 0 0 247304 79564 22351916 0 0 12\n452 26807 4694 3 97 0 0 0\n2014-05-30 05:00:53 63 0 0 231892 79564 22347368 0 0 0\n308 25378 4376 3 97 0 0 0\n2014-05-30 05:00:54 67 0 0 236056 79564 22309368 0 0 0\n156 25737 4022 3 97 0 0 0\n2014-05-30 05:00:55 66 0 0 232984 79564 22286336 0 0 0\n216 25393 3874 2 98 0 0 0\n2014-05-30 05:00:56 67 0 0 240720 79560 22267736 0 0 0\n588 25944 4678 2 98 0 0 0\n2014-05-30 05:00:57 70 0 0 242836 79540 22232068 0 0 0\n16800 26058 4607 3 97 0 0 0\n2014-05-30 05:00:58 72 0 0 234944 79548 22224948 0 0 0\n608 25589 4687 2 98 0 0 0\n2014-05-30 05:00:59 73 0 0 236064 79536 22173496 0 0 0\n188 25747 4530 3 97 0 0 0\n2014-05-30 05:01:00 77 0 0 232708 79524 22135168 0 0 0\n304 25546 5247 3 97 0 0 0\n2014-05-30 05:01:01 72 0 0 269328 79528 22117528 0 0 24\n396 27545 8488 5 95 0 0 0\n2014-05-30 05:01:02 83 0 0 220892 79496 22043024 0 0 0\n3280 28665 9805 7 94 0 0 0\n2014-05-30 05:01:03 86 0 0 224004 79488 21995400 0 0 0\n440 26338 6090 3 97 0 0 0\n2014-05-30 05:01:04 90 0 0 249684 79476 21932468 0 0 0\n408 26341 5834 3 97 0 0 0\n2014-05-30 05:01:05 91 0 0 257336 79464 21883060 0 0 0\n380 26272 5717 2 98 0 0 0\n2014-05-30 05:01:06 98 0 0 242896 79468 21878016 0 0 0\n608 25628 5648 2 98 0 0 0\n2014-05-30 05:01:07 94 0 0 237276 79468 21876148 0 0 0\n908 25041 5883 3 98 0 0 0\n2014-05-30 05:01:08 99 0 0 225832 79488 21858572 0 0 24\n504 25271 5913 3 97 0 0 0\n2014-05-30 05:01:09 94 0 0 245796 79460 21812404 0 0 0\n264 25106 6189 3 97 0 0 0\n2014-05-30 05:01:10 94 0 0 246268 79460 21811144 0 0 0\n188 25087 5989 2 98 0 0 0\n2014-05-30 05:01:11 94 0 0 232900 79456 21775152 0 0 0\n168 24965 5949 3 97 0 0 0\n2014-05-30 05:01:12 100 0 0 227900 79428 21737032 0 0 32\n276 25560 6798 4 96 0 0 0\n2014-05-30 05:01:13 98 0 0 253644 79396 21713908 0 0 0\n552 27440 9052 5 95 0 0 0\n2014-05-30 05:01:14 104 0 0 269648 79376 21634336 0 0 0\n540 26100 6633 3 97 0 0 0\n2014-05-30 05:01:15 104 0 0 259436 79376 21622164 0 0 0\n368 25094 6417 2 98 0 
0 0\n2014-05-30 05:01:16 105 0 0 262596 79372 21616292 0 0 0\n200 24995 6276 2 98 0 0 0\n2014-05-30 05:01:17 109 0 0 232172 79360 21583800 0 0 0\n388 25112 6570 3 97 0 0 0\n2014-05-30 05:01:18 109 0 0 231628 79372 21566604 0 0 0\n364 25221 6644 2 98 0 0 0\n2014-05-30 05:01:19 110 0 0 223920 79372 21532992 0 0 0\n340 25383 6874 3 97 0 0 0\n2014-05-30 05:01:20 111 0 0 223028 79368 21501868 0 0 0\n288 25369 6465 2 98 0 0 0\n2014-05-30 05:01:21 113 0 0 211676 79352 21434584 0 0 0\n240 24939 6607 4 96 0 0 0\n2014-05-30 05:01:22 114 0 0 211020 79300 21327264 0 0 0\n308 25390 7239 5 95 0 0 0\n2014-05-30 05:01:23 110 0 0 213524 79256 21215612 0 0 40\n336 25494 7878 7 93 0 0 0\n2014-05-30 05:01:24 114 0 0 222748 79220 21107976 0 0 0\n276 25257 7032 7 93 0 0 0\n2014-05-30 05:01:25 115 0 0 262004 79160 21012468 0 0 0\n300 25986 6746 8 92 0 0 0\n(...)\n\n* For the 'perf' outputs, I'm still waiting for a time when I can gather\nit. Unfortunately, this is on a production system, and we had to quickly\nlook for a workaround. The workaround we have found is: when the 'free'\nmemory drops below a certain threshold, we run: /bin/sync && /bin/echo 3 >\n/proc/sys/vm/drop_caches . Since we've been doing that, we haven't had the\nhigh load issue.\n\n* I forgot to mention that the debugging info I posted came from our slave\nserver (the master and slave have the same specs, but a different OS version;\nthe master has 2.6.18-371.3.1.el5 #1 SMP Thu Dec 5 12:47:02 EST 2013). We\nactually first saw the issue on our master server and not the slave server.\nTo get the master going, we edited our code to move our heaviest queries so\nthat they hit only the slave. 
After we did that, the slave started having\nthe high load issues, but with one difference - on the slave, 'cached' was\nhigh, but as can be seen in the master debugging session below, on the\nmaster, cached was low (corresponds to what we had for shared_buffers), and\nwhen 'free' dropped, it looks like the OS managed to grab the memory from\n'used' (do note that we will be adding more memory to our servers shortly):\n\n09:24 load average: 8.76 used: 32166 free: 31851 shared: 314 buffers: 0\ncached: 8827\n09:25 load average: 6.30 used: 31840 free: 325 shared: 0 buffers: 13 cached:\n8851\n09:26 load average: 95.83 used: 17883 free: 14282 shared: 0 buffers: 2\ncached: 8563\n(...)\n10:00 load average: 4.45 used: 31945 free: 220 shared: 0 buffers: 15 cached:\n8862\n10:01 load average: 4.56 used: 31983 free: 182 shared: 0 buffers: 15 cached:\n8844\n10:02 load average: 5.34 used: 31983 free: 183 shared: 0 buffers: 14 cached:\n8832\n10:03 load average: 6.57 used: 31987 free: 179 shared: 0 buffers: 9 cached:\n8709\n10:04 load average: 71.21 used: 18095 free: 14071 shared: 0 buffers: 1\ncached: 8556\n(...)\n10:40 load average: 5.38 used: 31970 free: 196 shared: 0 buffers: 12 cached:\n8929\n10:41 load average: 6.10 used: 31889 free: 276 shared: 0 buffers: 13 cached:\n8804\n10:42 load average: 6.94 used: 31984 free: 182 shared: 0 buffers: 2 cached:\n8761\n10:43 load average: 13.90 used: 31777 free: 389 shared: 0 buffers: 2 cached:\n8555\n10:44 load average: 54.30 used: 18894 free: 13272 shared: 0 buffers: 4\ncached: 8592\n(...)\n11:21 load average: 5.54 used: 31985 free: 181 shared: 0 buffers: 11 cached:\n8764\n11:22 load average: 5.15 used: 31768 free: 397 shared: 0 buffers: 10 cached:\n8721\n11:23 load average: 5.62 used: 31901 free: 265 shared: 0 buffers: 11 cached:\n8742\n11:24 load average: 4.80 used: 31969 free: 196 shared: 0 buffers: 9 cached:\n8675\n11:25 load average: 53.74 used: 18644 free: 13522 shared: 0 buffers: 1\ncached: 8578\n\nMore detailed output from our 
master server:\n\n2014-05-29 00:01:01 procs -----------memory---------- ---swap-- -----io----\n--system-- -----cpu------\n2014-05-29 00:01:01 r b swpd free buff cache si so bi bo\nin cs us sy id wa st\n(...)\n2014-05-29 09:24:41 10 0 7044 364616 14048 9055060 0 0 4 1908\n7115 11287 29 1 70 0 0\n2014-05-29 09:24:42 7 0 7044 360160 14072 9057648 0 0 2064 796\n7512 11746 34 2 64 0 0\n2014-05-29 09:24:43 2 0 7044 339576 14076 9057644 0 0 12 824\n7143 11633 25 1 74 0 0\n2014-05-29 09:24:44 3 0 7044 333120 14080 9058096 0 0 4 696\n5762 7558 17 1 82 0 0\n2014-05-29 09:24:45 2 0 7044 332500 14080 9058096 0 0 0 592\n5138 6043 13 1 86 0 0\n2014-05-29 09:24:46 9 0 7044 322008 14092 9058500 0 0 12 1800\n6545 10885 22 2 76 0 0\n2014-05-29 09:24:47 6 0 7044 316428 14124 9058468 0 0 24 900\n7371 12690 34 2 64 0 0\n2014-05-29 09:24:48 8 0 7044 309608 14128 9058984 0 0 28 920\n7088 10422 23 1 76 0 0\n2014-05-29 09:24:49 1 0 7044 309236 14128 9058984 0 0 8 856\n6898 10685 24 1 75 0 0\n2014-05-29 09:24:50 4 0 7044 295140 14128 9059392 0 0 16 904\n6977 11955 25 2 73 0 0\n2014-05-29 09:24:51 3 0 7044 294700 14148 9059372 0 0 20 2176\n6471 9320 19 1 80 0 0\n2014-05-29 09:24:52 1 0 7044 293392 14172 9059816 0 0 72 804\n6836 9151 18 1 81 0 0\n2014-05-29 09:24:53 7 0 7044 297460 14176 9059812 0 0 12 760\n5914 8573 22 2 76 0 0\n2014-05-29 09:24:54 2 0 7044 305768 14180 9060228 0 0 4 680\n6204 9062 21 1 78 0 0\n2014-05-29 09:24:55 4 0 7044 299824 14188 9062396 0 0 2072 800\n6533 9405 21 1 78 0 0\n2014-05-29 09:24:56 7 0 7044 352600 14216 9062884 0 0 212 1612\n7084 10758 28 2 70 0 0\n2014-05-29 09:24:57 10 0 7044 350244 14232 9062868 0 0 16 844\n7873 14913 45 4 52 0 0\n2014-05-29 09:24:58 6 0 7044 348260 14236 9063268 0 0 4 616\n6029 8907 36 2 62 0 0\n2014-05-29 09:24:59 6 0 7044 335488 14248 9063256 0 0 28 840\n7111 10687 27 1 72 0 0\n2014-05-29 09:25:00 9 0 7044 343672 14248 9063612 0 0 16 1016\n7768 13125 29 2 70 0 0\n2014-05-29 09:25:01 8 0 7044 341564 14284 9063576 0 0 4 2156\n6498 8663 
18 1 82 0 0\n2014-05-29 09:25:02 8 0 7044 333756 14336 9064004 0 0 12 952\n5848 7911 15 1 84 0 0\n2014-05-29 09:25:03 8 1 7044 277940 14352 9063988 0 0 40 784\n6502 11541 23 2 75 0 0\n2014-05-29 09:25:04 7 0 7044 192320 14376 9064564 0 0 84 896\n7373 18005 33 3 63 0 0\n2014-05-29 09:25:05 2 0 7044 311492 14376 9064564 0 0 0 984\n6858 10804 24 1 75 0 0\n2014-05-29 09:25:06 12 0 7044 305456 14380 9064980 0 0 12 3200\n6969 10565 25 2 73 0 0\n2014-05-29 09:25:07 5 0 7044 303596 14420 9064940 0 0 40 1348\n7110 10512 28 1 71 0 0\n2014-05-29 09:25:08 4 0 7044 299124 14420 9065316 0 0 0 1040\n7804 12488 26 1 72 0 0\n2014-05-29 09:25:09 5 0 7044 281336 14428 9067484 0 0 2056 720\n6217 9241 23 2 75 0 0\n2014-05-29 09:25:10 8 0 7044 269892 14436 9067916 0 0 32 1000\n7188 10796 24 1 75 0 0\n2014-05-29 09:25:11 7 0 7044 267784 14436 9067916 0 0 8 1960\n7967 12888 31 1 68 0 0\n2014-05-29 09:25:12 9 0 7044 232936 14460 9068380 0 0 24 1112\n6582 10024 23 1 76 0 0\n2014-05-29 09:25:13 4 0 7044 231076 14504 9068640 0 0 100 912\n6939 12254 26 2 71 0 0\n2014-05-29 09:25:14 10 0 7044 189040 14588 9068788 0 0 188 1080\n6915 14296 32 4 64 0 0\n2014-05-29 09:25:15 3 0 7044 185320 14596 9068780 0 0 16 448\n3805 5329 26 2 73 0 0\n2014-05-29 09:25:16 3 0 7044 181352 14604 9069276 0 0 8 1540\n4239 5568 20 1 79 0 0\n2014-05-29 09:25:17 3 0 7044 179616 14632 9069248 0 0 20 436\n3957 4246 12 0 87 0 0\n2014-05-29 09:25:18 14 0 7044 177756 14640 9069488 0 0 8 616\n4961 6912 11 1 88 0 0\n2014-05-29 09:25:19 31 0 7044 177316 14656 9069472 0 0 24 1992\n11500 36077 75 4 21 0 0\n2014-05-29 09:25:20 6 0 7044 165436 14524 9055920 0 0 28 1280\n8124 25478 47 4 50 0 0\n2014-05-29 09:25:21 3 0 7044 169144 14504 9054308 0 0 24 1696\n6682 8512 20 1 79 0 0\n2014-05-29 09:25:22 9 0 7044 173172 14524 9056504 0 0 2060 780\n7241 10328 27 3 70 0 0\n2014-05-29 09:25:23 10 0 7044 162756 14528 9056500 0 0 12 744\n6599 10688 38 3 59 0 0\n2014-05-29 09:25:24 12 0 7044 173760 14260 9044720 0 0 96 1196\n6781 12568 44 9 47 
0 0\n2014-05-29 09:25:25 30 0 7044 168648 14040 9030188 0 0 12 664\n6627 12749 55 7 38 0 0\n2014-05-29 09:25:26 14 1 7044 176392 13932 9024712 0 0 16 1864\n7724 17989 76 6 17 0 0\n2014-05-29 09:25:27 19 0 7288 154564 13468 9003416 0 4 8 1836\n7381 16152 54 17 28 0 0\n2014-05-29 09:25:28 36 1 7288 154860 12084 8973284 0 0 588 736\n6623 13916 42 42 16 0 0\n2014-05-29 09:25:30 40 0 7288 154372 11492 8957252 0 0 580 896\n5525 11853 47 49 4 0 0\n2014-05-29 09:25:32 58 1 7288 154556 10724 8915168 0 0 328 2960\n6951 8010 33 65 2 0 0\n2014-05-29 09:25:34 44 1 7288 172180 10636 8901096 0 0 264 1860\n4392 5677 23 76 1 0 0\n2014-05-29 09:25:35 36 1 7288 154224 10272 8892344 0 0 772 664\n4171 7064 40 58 2 0 0\n2014-05-29 09:25:36 34 0 7288 154108 9984 8885292 0 0 808 224\n2222 3110 17 81 2 0 0\n2014-05-29 09:25:37 20 0 7320 154612 9844 8881288 0 24 0 1544\n1423 1115 9 81 10 0 0\n2014-05-29 09:25:38 23 0 7320 154860 9420 8874228 0 8 120 876\n1206 3054 1 80 19 0 0\n2014-05-29 09:25:39 23 1 7320 166132 9244 8872204 0 0 164 112\n2078 2616 13 82 4 0 0\n2014-05-29 09:25:40 46 1 7320 154612 8908 8861160 0 0 448 80\n1716 10859 10 73 16 1 0\n2014-05-29 09:25:41 42 0 7448 154312 7804 8848784 0 244 36 2124\n2149 3477 11 89 0 0 0\n2014-05-29 09:25:42 36 0 7448 154416 6576 8834556 0 0 92 1104\n3077 6462 26 74 0 0 0\n2014-05-29 09:25:43 64 2 7448 154644 5492 8821736 0 0 140 332\n2175 3466 11 78 10 0 0\n2014-05-29 09:25:44 49 0 7448 154364 4608 8811244 0 0 128 80\n1939 2115 10 90 0 0 0\n2014-05-29 09:25:49 34 0 7448 155792 2932 8783364 0 0 1624 2112\n11642 12972 15 84 1 0 0\n2014-05-29 09:25:50 27 0 7448 164700 2744 8781504 0 128 228 240\n1585 2297 9 89 2 0 0\n2014-05-29 09:25:51 29 0 7692 154540 2516 8778032 0 0 0 244\n1842 5052 10 86 4 0 0\n2014-05-29 09:25:55 35 0 7812 160476 1784 8766552 0 332 384 648\n6206 5487 6 92 2 0 0\n2014-05-29 09:25:56 30 1 7812 160380 1696 8765552 0 0 4 0\n1000 412 0 89 11 0 0\n2014-05-29 09:25:57 27 1 7812 154352 1900 8768940 0 0 5184 496\n1197 946 0 89 9 1 
0\n2014-05-29 09:25:58 19 0 7816 154136 1764 8767872 0 0 0 0\n1021 297 0 83 16 0 0\n2014-05-29 09:25:59 19 0 8024 154564 1724 8765880 0 0 44 1888\n1116 1024 0 80 20 0 0\n2014-05-29 09:26:03 42 1 10676 154140 1332 8760968 0 32 932 344\n4871 10038 1 82 17 0 0\n2014-05-29 09:26:04 39 0 10676 154284 1332 8758704 0 0 12 68\n1194 560 1 99 0 0 0\n2014-05-29 09:26:18 67 1 10312 11751464 1388 8764704 480 2756 9836\n5932 26274 32175 6 94 0 0 0\n2014-05-29 09:26:19 82 1 10312 13208352 1464 8766044 0 120 1632\n772 3121 8808 17 83 0 0 0\n2014-05-29 09:26:20 54 1 10228 14452424 1600 8766600 64 0 1628\n404 3528 7679 15 84 0 0 0\n2014-05-29 09:26:21 12 2 10212 14493080 2060 8768192 0 32 2176\n1664 5697 38657 24 57 17 2 0\n2014-05-29 09:26:22 14 1 10212 14195536 3056 8772540 0 0 2720\n624 7115 93905 38 10 48 3 0\n2014-05-29 09:26:23 24 1 10212 14144780 3576 8775768 0 0 3184\n900 7854 104139 36 6 54 3 0\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Jun 2014 02:17:33 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached' memory"
},
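The free-memory watchdog described in the message above (run `/bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches` when 'free' falls low) can be sketched as a one-shot script run from cron. The 512 MB threshold and the `MEMINFO` override are assumptions for illustration, not values from the thread, and the script only prints the action so it can be exercised without root:

```shell
#!/bin/sh
# Sketch of the workaround from the thread: when free memory drops below a
# threshold, flush dirty pages and drop the page cache.
# THRESHOLD_KB is a hypothetical value; tune it for the machine.
THRESHOLD_KB=524288   # 512 MB

# MemFree in kB: second field of the "MemFree:" line in /proc/meminfo.
# MEMINFO can point at a test file; it defaults to the real /proc/meminfo.
mem_free_kb() {
    awk '/^MemFree:/ {print $2}' "${MEMINFO:-/proc/meminfo}"
}

if [ "$(mem_free_kb)" -lt "$THRESHOLD_KB" ]; then
    # Replace this echo with the real commands when deploying (needs root):
    #   /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches
    echo "free memory low: would sync and drop caches"
fi
```

Run from cron every minute or so. Note the trade-off the thread implies: dropping caches avoids the reclaim stalls, but throws away the entire page cache each time it fires.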
{
    "msg_contents": "On Thu, Jun 5, 2014 at 6:57 PM, Vincent Lasmarias\n<[email protected]> wrote:\n> Thanks for the informative responses and suggestions. My responses below:\n>\n> * Sorry for the double post. I posted the original message using my gmail\n> account and got a \"is not a member of any of the restrict_post groups\"\n> response and when I didn't see it for a day, I ended up wondering if it was\n> due to my use of a gmail account - so I tried using my company email account\n> instead to post an updated version of the original post.\n>\n> * Our system is not virtualized.\n>\n> * Jeff, the output format of the load and free/cached memory did not come\n> from a tool but came from my script. My script does 'uptime; free m', and\n> then another script massages the data to only grab date, 1-minute load\n> average, free, and cached.\n>\n> * For the 'top' outputs, I don't have the actual 'top' output, but I have\n> the following:\n\nAll right; top is pretty clearly indicating a problem in the kernel.\nMaybe this is NUMA or something else. If you can reproduce this on\nthe 'newer' kernel, it might be useful to try and get a 'perf' going;\nit was introduced in 2.6.31. This might fail for one reason or\nanother, but if you could get it rigged it might give some clues\nas to where in the kernel you're getting bound up.\n\nLong term, looking into upgrading the O/S might be a smart move.\n\nmerlin\n",
"msg_date": "Fri, 6 Jun 2014 16:32:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU load spikes when CentOS tries to reclaim 'cached' memory"
}
] |
[
{
    "msg_contents": "Hello,\n\nI am using Pgbadger to analyze the PostgreSQL database log recently and noticed a section \"Prepared queries ratio\". For my report, it has:\n\n1.03 as Ratio of bind vs prepare\n0.12% Ratio between prepared and \"usual\" statements\n\nI'm trying to understand what the above metrics mean and if it's a problem. I found people can clearly break down the parse/bind/execute time of a query. To my limited knowledge of Postgres, using explain analyze, I can only get the total execution time.\n\nCan someone shed some light on this subject? How to interpret the ratios?\n\nThanks,\nSuya",
"msg_date": "Thu, 5 Jun 2014 01:32:26 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "parse/bind/execute"
},
{
    "msg_contents": "Huang, Suya wrote\n> Hello,\n> \n> I am using Pgbadger to analyze the PostgreSQL database log recently and\n> noticed a section \"Prepared queries ratio\". For my report, it has:\n> \n> 1.03 as Ratio of bind vs prepare\n> 0.12% Ratio between prepared and \"usual\" statements\n> \n> I'm trying to understand what the above metrics mean and if it's a\n> problem. I found people can clearly break down the parse/bind/execute time of\n> a query. To my limited knowledge of Postgres, using explain analyze, I can\n> only get the total execution time.\n> \n> Can someone shed some light on this subject? How to interpret the\n> ratios?\n> \n> Thanks,\n> Suya\n\nBoth are related to using prepared statements (usually with parameters). \nEach bind is a use of an already prepared query with parameters filled in. \nThe prepare is the initial preparation of the query. A ratio of 1 means\nthat each time you prepare a query you use it once then throw it away. \nLikewise a value of 2 would mean you are executing each prepared statement\ntwice.\n\n\"Usual\" statements are those that are not prepared. The ratio is simply the\ncounts of each as seen by the database - I do not know specifics as to what\nexactly is counted (ddl?).\n\nThat low a ratio means that almost all statements you send to the database\nare non-prepared. In those relatively few cases where you do prepare first\nyou almost always immediately execute a single set of inputs then discard\nthe prepared statement.\n\nI do not know enough about the underlying data to draw a conclusion but\ntypically the higher the bind/prepare ratio the more efficient your use of\ndatabase resources. Same goes for the prepare ratio. 
The clients you use\nand the general usage of the database heavily influence what would be\nconsidered reasonable ratios.\n\nDavid J.\n",
"msg_date": "Wed, 4 Jun 2014 18:57:56 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parse/bind/execute"
},
{
    "msg_contents": "Thank you David, I copied the detailed activity from the report as below. As it shows, it has prepare and bind queries. One of the items has Bind/Prepare pretty high as 439.50, so that looks like a good value? \n\nAnother question is if bind only happens in a prepared statement? \n\nDay \tHour \tPrepare \tBind \tBind/Prepare \tPercentage of prepare\nJun 03 \t00 \t205 \t209 \t1.02 \t1.27%\n \t01 \t19 \t19 \t1.00 \t0.17%\n \t02 \t0 \t0 \t0.00 \t0.00%\n \t03 \t0 \t0 \t0.00 \t0.00%\n \t04 \t6 \t6 \t1.00 \t0.00%\n \t05 \t2 \t879 \t439.50 \t0.02%\n \t06 \t839 \t1,323 \t1.58 \t7.01%\n \t07 \t0 \t0 \t0.00 \t0.00%\n\nThanks,\nSuya\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David G Johnston\nSent: Thursday, June 05, 2014 11:58 AM\nTo: [email protected]\nSubject: Re: [PERFORM] parse/bind/execute\n\nHuang, Suya wrote\n> Hello,\n> \n> I am using Pgbadger to analyze the PostgreSQL database log recently \n> and noticed a section \"Prepared queries ratio\". For my report, it has:\n> \n> 1.03 as Ratio of bind vs prepare\n> 0.12% Ratio between prepared and \"usual\" statements\n> \n> I'm trying to understand what the above metrics mean and if it's a \n> problem. I found people can clearly break down the parse/bind/execute \n> time of a query. To my limited knowledge of Postgres, using explain \n> analyze, I can only get the total execution time.\n> \n> Can someone shed some light on this subject? How to interpret the \n> ratios?\n> \n> Thanks,\n> Suya\n\nBoth are related to using prepared statements (usually with parameters). \nEach bind is a use of an already prepared query with parameters filled in. \nThe prepare is the initial preparation of the query. A ratio of 1 means that each time you prepare a query you use it once then throw it away. \nLikewise a value of 2 would mean you are executing each prepared statement twice.\n\n\"Usual\" statements are those that are not prepared. 
The ratio is simply the counts of each as seen by the database - I do not know specifics as to what exactly is counted (ddl?).\n\nThat low a ratio means that almost all statements you send to the database are non-prepared. In those relatively few cases where you do prepare first you almost always immediately execute a single set of inputs then discard the prepared statement.\n\nI do not know enough about the underlying data to draw a conclusion but typically the higher the bind/prepare ratio the more efficient your use of database resources. Same goes for the prepare ratio. The clients you use and the general usage of the database heavily influence what would be considered reasonable ratios.\n\nDavid J.\n",
"msg_date": "Thu, 5 Jun 2014 02:20:19 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: parse/bind/execute"
},
{
    "msg_contents": "Please do not top-post on these lists.\n\nOn Wednesday, June 4, 2014, Huang, Suya <[email protected]> wrote:\n\n> Thank you David, I copied the detailed activity from the report as below.\n> As it shows, it has prepare and bind queries. One of the items has\n> Bind/Prepare pretty high as 439.50, so that looks like a good value?\n>\n> Another question is if bind only happens in a prepared statement?\n>\n> Day Hour Prepare Bind Bind/Prepare Percentage of\n> prepare\n> Jun 03 00 205 209 1.02 1.27%\n> 01 19 19 1.00 0.17%\n> 02 0 0 0.00 0.00%\n> 03 0 0 0.00 0.00%\n> 04 6 6 1.00 0.00%\n> 05 2 879 439.50 0.02%\n> 06 839 1,323 1.58 7.01%\n> 07 0 0 0.00 0.00%\n>\n>\n>\nYes. Something that high usually involves batch inserting into a table. To\nbe honest, a global picture is of limited value for this very reason.\n Representing all of your usage as a single number is problematic.\n Breaking it down by hour as done here increases the likelihood of seeing\nsomething useful, but typically that would be by chance. In this case,\nbecause batch processing is done in the early morning and few users are\nprobably on the system (a common source of one-off statements), the numbers\nhere are dominated by the special case of bulk inserts and are not typical\nof normal activity and performance.\n\nDavid J.",
"msg_date": "Wed, 4 Jun 2014 22:34:31 -0400",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parse/bind/execute"
}
] |
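The per-hour table discussed in the thread above is just two counters: the ratio pgBadger prints is the bind-message count divided by the prepare-message count for the bucket. A quick sanity check of the figures, with the counts hard-coded from the table (`ratio` is a helper defined here for illustration, not a pgBadger tool):

```shell
#!/bin/sh
# bind/prepare ratio as pgbadger reports it: bind count divided by prepare
# count, two decimals; 0.00 when nothing was prepared in the bucket.
ratio() {
    awk -v p="$1" -v b="$2" 'BEGIN { printf "%.2f\n", (p > 0 ? b / p : 0) }'
}

ratio 2 879     # hour 05 above: two prepares reused 879 times -> 439.50
ratio 839 1323  # hour 06 above: mostly prepare-once, execute-once -> 1.58
```

This makes the interpretation in the replies concrete: a ratio near 1 means each prepared statement is used once and thrown away, while a large ratio (the hour-05 spike) means a few prepared statements were reused heavily, which is what a batch load looks like.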
[
{
    "msg_contents": "Hello,\n\nI'm currently testing some queries on data which I had imported from\nanother database system into Postgres 9.4.\n\nAfter the import I created the indexes and ran ANALYZE and VACUUM. I\nalso played a little bit with seq_page_cost and random_page_cost. But\ncurrently I have no clue which parameter I have to adjust to get a\nquery time like the example with 'enable_seqscan=off'.\n\nStefan\n\n\n\n> pd=> set enable_seqscan=off;\n> pd=> explain analyze select t.name from product p left join measurements m on p.productid=m.productid inner join measurementstype t on m.measurementstypeid=t.measurementstypeid where p.timestamp between '2013-02-01 15:00:00' and '2013-02-05 21:30:00' group by t.name;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=200380892.01..200380936.43 rows=4442 width=16) (actual time=34428.335..34428.693 rows=656 loops=1)\n> Group Key: t.name\n> -> Hash Join (cost=8995.44..200361772.19 rows=7647926 width=16) (actual time=103.670..30153.958 rows=5404751 loops=1)\n> Hash Cond: (m.measurementstypeid = t.measurementstypeid)\n> -> Nested Loop (cost=8279.61..200188978.03 rows=7647926 width=4) (actual time=75.939..22488.725 rows=5404751 loops=1)\n> -> Bitmap Heap Scan on product p (cost=8279.03..662659.76 rows=526094 width=8) (actual time=75.903..326.850 rows=368494 loops=1)\n> Recheck Cond: ((\"timestamp\" >= '2013-02-01 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05 21:30:00'::timestamp without time zo\n> Heap Blocks: exact=3192\n> -> Bitmap Index Scan on product_timestamp (cost=0.00..8147.51 rows=526094 width=0) (actual time=75.050..75.050 rows=368494 loops=1)\n> Index Cond: ((\"timestamp\" >= '2013-02-01 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05 21:30:00'::timestamp without tim\n> -> Index Scan 
using measurements_productid on measurements m (cost=0.58..347.12 rows=3214 width=12) (actual time=0.018..0.045 rows=15 loops=368494)\n> Index Cond: (productid = p.productid)\n> -> Hash (cost=508.91..508.91 rows=16554 width=20) (actual time=27.704..27.704 rows=16554 loops=1)\n> Buckets: 2048 Batches: 1 Memory Usage: 686kB\n> -> Index Scan using measurementstype_pkey on measurementstype t (cost=0.29..508.91 rows=16554 width=20) (actual time=0.017..15.719 rows=16554 loops=1)\n> Planning time: 2.176 ms\n> Execution time: 34429.080 ms\n> (17 Zeilen)\n>\n>\n> Zeit: 34432,187 ms\n> pd=> set enable_seqscan=on;\n> SET\n> Zeit: 0,193 ms\n> pd=> explain analyze select t.name from product p left join measurements m on p.productid=m.productid inner join measurementstype t on m.measurementstypeid=t.measurementstypeid where p.timestamp between '2013-02-01 15:00:00' and '2013-02-05 21:30:00' group by t.name;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=108645282.49..108645326.91 rows=4442 width=16) (actual time=5145182.269..5145182.656 rows=656 loops=1)\n> Group Key: t.name\n> -> Hash Join (cost=671835.40..108626162.68 rows=7647926 width=16) (actual time=2087822.232..5141351.539 rows=5404751 loops=1)\n> Hash Cond: (m.measurementstypeid = t.measurementstypeid)\n> -> Hash Join (cost=671291.94..108453540.88 rows=7647926 width=4) (actual time=2087800.816..5134312.822 rows=5404751 loops=1)\n> Hash Cond: (m.productid = p.productid)\n> -> Seq Scan on measurements m (cost=0.00..49325940.08 rows=2742148608 width=12) (actual time=0.007..2704591.045 rows=2742146806 loops=1)\n> -> Hash (cost=662659.76..662659.76 rows=526094 width=8) (actual time=552.480..552.480 rows=368494 loops=1)\n> Buckets: 16384 Batches: 4 Memory Usage: 2528kB\n> -> Bitmap Heap Scan on product p (cost=8279.03..662659.76 rows=526094 width=8) (actual 
time=73.353..302.482 rows=368494 loops=1)\n> Recheck Cond: ((\"timestamp\" >= '2013-02-01 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05 21:30:00'::timestamp without t\n> Heap Blocks: exact=3192\n> -> Bitmap Index Scan on product_timestamp (cost=0.00..8147.51 rows=526094 width=0) (actual time=72.490..72.490 rows=368494 loops=1)\n> Index Cond: ((\"timestamp\" >= '2013-02-01 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05 21:30:00'::timestamp witho\n> -> Hash (cost=336.54..336.54 rows=16554 width=20) (actual time=21.377..21.377 rows=16554 loops=1)\n> Buckets: 2048 Batches: 1 Memory Usage: 686kB\n> -> Seq Scan on measurementstype t (cost=0.00..336.54 rows=16554 width=20) (actual time=0.008..9.849 rows=16554 loops=1)\n> Planning time: 2.236 ms\n> Execution time: 5145183.471 ms\n> (19 Zeilen)\n>\n> Zeit: 5145186,786 ms\n\n\n> pd=> \\d measurements\n> Tabelle „public.measurements“\n> Spalte | Typ | Attribute\n> --------------------+-----------------------------+---------------------------------------------------------------------------\n> measurementsid | bigint | not null Vorgabewert nextval('measurements_measurementsid_seq'::regclass)\n> value | text | not null\n> lowerlimit | text |\n> upperlimit | text |\n> measurementstypeid | integer | not null\n> productid | bigint | not null\n> timestamp | timestamp without time zone |\n> state | character varying(20) | not null Vorgabewert 'Unknown'::character varying\n> Indexe:\n> \"measurements_pkey\" PRIMARY KEY, btree (measurementsid)\n> \"measurements_measurementstypeid\" btree (measurementstypeid)\n> \"measurements_productid\" btree (productid)\n>\n> pd=> \\d product\n> Tabelle „public.product“\n> Spalte | Typ | Attribute\n> ------------------+-----------------------------+-----------------------------------------------------------------\n> productid | bigint | not null Vorgabewert nextval('product_productid_seq'::regclass)\n> ordermaterialsid | integer | not null\n> 
testerid | integer |\n> equipmentid | integer | not null\n> timestamp | timestamp without time zone | not null\n> state | character varying(20) | not null Vorgabewert 'Unknown'::character varying\n> exported | character varying(1) | not null Vorgabewert 'N'::character varying\n> mc_selectionid | integer |\n> Indexe:\n> \"product_pkey\" PRIMARY KEY, btree (productid)\n> \"product_equipmentid\" btree (equipmentid)\n> \"product_exported\" btree (exported)\n> \"product_mc_selectionid\" btree (mc_selectionid)\n> \"product_ordermaterialsid\" btree (ordermaterialsid)\n> \"product_state\" btree (state)\n> \"product_testerid\" btree (testerid)\n> \"product_timestamp\" btree (\"timestamp\")\n> Fremdschlüssel-Constraints:\n> \"fk_equipmentid\" FOREIGN KEY (equipmentid) REFERENCES equipment(equipmentid) ON UPDATE CASCADE ON DELETE RESTRICT\n> \"fk_mc_selectionid\" FOREIGN KEY (mc_selectionid) REFERENCES mc_selection(mc_selectionid) ON UPDATE CASCADE ON DELETE SET NULL\n> \"fk_ordermaterialsid\" FOREIGN KEY (ordermaterialsid) REFERENCES ordermaterials(ordermaterialsid) ON UPDATE CASCADE ON DELETE RESTRICT\n> \"fk_testerid\" FOREIGN KEY (testerid) REFERENCES tester(testerid) ON UPDATE CASCADE ON DELETE RESTRICT\n>\n> pd=> \\d measurementstype\n> Tabelle \"public.measurementstype\"\n> Spalte | Typ | Attribute\n> --------------------+------------------------+-----------------------------------------------------------------------------------\n> measurementstypeid | integer | not null Vorgabewert nextval('measurementstype_measurementstypeid_seq'::regclass)\n> datatype | character varying(20) | not null Vorgabewert 'char'::character varying\n> name | character varying(255) | not null\n> description | character varying(255) | Vorgabewert NULL::character varying\n> unit | character varying(20) | Vorgabewert NULL::character varying\n> step | integer |\n> stepdescription | character varying(255) | Vorgabewert NULL::character varying\n> permissionlevel | integer | not null Vorgabewert 0\n> 
Indexe:\n> \"measurementstype_pkey\" PRIMARY KEY, btree (measurementstypeid)\n> \"measurementstype_datatype\" btree (datatype)\n> \"measurementstype_name\" btree (name)\n> \"measurementstype_step\" btree (step)\n> \"measurementstype_stepdescription\" btree (stepdescription)\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 05 Jun 2014 21:36:08 +0200",
"msg_from": "Weinzierl Stefan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seqscan on big table, when an Index-Usage should be possible"
},
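A quick sanity check on the two plans quoted above: the planner chose the sequential-scan plan because its estimated cost (about 108.6M units) is lower than the index plan's (about 200.4M), yet the index plan actually ran roughly 150x faster. The sketch below (figures copied from the EXPLAIN ANALYZE output in this thread) makes the miscalibration visible; a consistent cost model would convert cost units into milliseconds at roughly the same rate for both plans. This kind of skew commonly points at random_page_cost being too high for the actual storage, or effective_cache_size too low, which is worth testing rather than forcing plans with enable_seqscan=off — but that is a general observation, not a guaranteed fix for this schema.

```python
# Estimated plan cost vs. measured runtime, taken from the two
# EXPLAIN ANALYZE outputs quoted above in this thread.
plans = {
    "seqscan plan (chosen by planner)":   (108_645_326.91, 5_145_183.471),
    "index plan (enable_seqscan=off)":    (200_380_936.43, 34_429.080),
}

for name, (est_cost, actual_ms) in plans.items():
    # With a well-calibrated cost model these ratios would be similar.
    print(f"{name}: {est_cost / actual_ms:,.0f} cost units per ms")

speedup = 5_145_183.471 / 34_429.080
print(f"index plan ran {speedup:.0f}x faster despite ~2x higher estimated cost")
```

The ratio for the index plan comes out hundreds of times larger than for the seqscan plan, i.e. the planner is charging far too much per random page fetch relative to what the hardware (and cache) actually delivers.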
{
"msg_contents": "Stefan,\n\nSee below\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Weinzierl Stefan\n> Sent: Thursday, June 05, 2014 3:36 PM\n> To: [email protected]\n> Subject: [PERFORM] Seqscan on big table, when an Index-Usage should be\n> possible\n> \n> Hello,\n> \n> I'm currently testing some queries on data which I had imported from an\n> other database-system into Postgres 9.4.\n> \n> After the import I did create the indexes, run an analyze and vacuum. I also\n> played a little bit with seq_page_cost and random_page_cost. But currently I\n> have no clue, which parameter I have to adjust, to get an query-time like the\n> example width 'enable_seqscan=off'.\n> \n> Stefan\n> \n> \n> \n> > pd=> set enable_seqscan=off;\n> > pd=> explain analyze select t.name from product p left join measurements\n> m on p.productid=m.productid inner join measurementstype t on\n> m.measurementstypeid=t.measurementstypeid where p.timestamp\n> between '2013-02-01 15:00:00' and '2013-02-05 21:30:00' group by t.name;\n> >\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----------------------------------------------------------------------\n> > -------------------------------- HashAggregate\n> > (cost=200380892.01..200380936.43 rows=4442 width=16) (actual\n> time=34428.335..34428.693 rows=656 loops=1)\n> > Group Key: t.name\n> > -> Hash Join (cost=8995.44..200361772.19 rows=7647926 width=16)\n> (actual time=103.670..30153.958 rows=5404751 loops=1)\n> > Hash Cond: (m.measurementstypeid = t.measurementstypeid)\n> > -> Nested Loop (cost=8279.61..200188978.03 rows=7647926 width=4)\n> (actual time=75.939..22488.725 rows=5404751 loops=1)\n> > -> Bitmap Heap Scan on product p (cost=8279.03..662659.76\n> rows=526094 width=8) (actual time=75.903..326.850 rows=368494 loops=1)\n> > Recheck Cond: ((\"timestamp\" >= '2013-02-01\n> 15:00:00'::timestamp without time zone) AND 
(\"timestamp\" <= '2013-02-05\n> 21:30:00'::timestamp without time zo\n> > Heap Blocks: exact=3192\n> > -> Bitmap Index Scan on product_timestamp\n> (cost=0.00..8147.51 rows=526094 width=0) (actual time=75.050..75.050\n> rows=368494 loops=1)\n> > Index Cond: ((\"timestamp\" >= '2013-02-01\n> 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05\n> 21:30:00'::timestamp without tim\n> > -> Index Scan using measurements_productid on measurements m\n> (cost=0.58..347.12 rows=3214 width=12) (actual time=0.018..0.045 rows=15\n> loops=368494)\n> > Index Cond: (productid = p.productid)\n> > -> Hash (cost=508.91..508.91 rows=16554 width=20) (actual\n> time=27.704..27.704 rows=16554 loops=1)\n> > Buckets: 2048 Batches: 1 Memory Usage: 686kB\n> > -> Index Scan using measurementstype_pkey on\n> > measurementstype t (cost=0.29..508.91 rows=16554 width=20) (actual\n> > time=0.017..15.719 rows=16554 loops=1) Planning time: 2.176 ms\n> > Execution time: 34429.080 ms\n> > (17 Zeilen)\n> >\n> >\n> > Zeit: 34432,187 ms\n> > pd=> set enable_seqscan=on;\n> > SET\n> > Zeit: 0,193 ms\n> > pd=> explain analyze select t.name from product p left join measurements\n> m on p.productid=m.productid inner join measurementstype t on\n> m.measurementstypeid=t.measurementstypeid where p.timestamp\n> between '2013-02-01 15:00:00' and '2013-02-05 21:30:00' group by t.name;\n> >\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----------------------------------------------------------------------\n> > -------------------------------- HashAggregate\n> > (cost=108645282.49..108645326.91 rows=4442 width=16) (actual\n> time=5145182.269..5145182.656 rows=656 loops=1)\n> > Group Key: t.name\n> > -> Hash Join (cost=671835.40..108626162.68 rows=7647926 width=16)\n> (actual time=2087822.232..5141351.539 rows=5404751 loops=1)\n> > Hash Cond: (m.measurementstypeid = t.measurementstypeid)\n> > -> Hash Join (cost=671291.94..108453540.88 
rows=7647926 width=4)\n> (actual time=2087800.816..5134312.822 rows=5404751 loops=1)\n> > Hash Cond: (m.productid = p.productid)\n> > -> Seq Scan on measurements m (cost=0.00..49325940.08\n> rows=2742148608 width=12) (actual time=0.007..2704591.045\n> rows=2742146806 loops=1)\n> > -> Hash (cost=662659.76..662659.76 rows=526094 width=8) (actual\n> time=552.480..552.480 rows=368494 loops=1)\n> > Buckets: 16384 Batches: 4 Memory Usage: 2528kB\n> > -> Bitmap Heap Scan on product p (cost=8279.03..662659.76\n> rows=526094 width=8) (actual time=73.353..302.482 rows=368494 loops=1)\n> > Recheck Cond: ((\"timestamp\" >= '2013-02-01\n> 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05\n> 21:30:00'::timestamp without t\n> > Heap Blocks: exact=3192\n> > -> Bitmap Index Scan on product_timestamp\n> (cost=0.00..8147.51 rows=526094 width=0) (actual time=72.490..72.490\n> rows=368494 loops=1)\n> > Index Cond: ((\"timestamp\" >= '2013-02-01\n> 15:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2013-02-05\n> 21:30:00'::timestamp witho\n> > -> Hash (cost=336.54..336.54 rows=16554 width=20) (actual\n> time=21.377..21.377 rows=16554 loops=1)\n> > Buckets: 2048 Batches: 1 Memory Usage: 686kB\n> > -> Seq Scan on measurementstype t (cost=0.00..336.54\n> > rows=16554 width=20) (actual time=0.008..9.849 rows=16554 loops=1)\n> > Planning time: 2.236 ms Execution time: 5145183.471 ms\n> > (19 Zeilen)\n> >\n> > Zeit: 5145186,786 ms\n> \n> \n> > pd=> \\d measurements\n> > Tabelle \"public.measurements\"\n> > Spalte | Typ | Attribute\n> > --------------------+-----------------------------+-------------------\n> > --------------------+-----------------------------+-------------------\n> > --------------------+-----------------------------+-------------------\n> > --------------------+-----------------------------+------------------\n> > measurementsid | bigint | not null Vorgabewert\n> nextval('measurements_measurementsid_seq'::regclass)\n> > value | text | not 
null\n> > lowerlimit | text |\n> > upperlimit | text |\n> > measurementstypeid | integer | not null\n> > productid | bigint | not null\n> > timestamp | timestamp without time zone |\n> > state | character varying(20) | not null Vorgabewert\n> 'Unknown'::character varying\n> > Indexe:\n> > \"measurements_pkey\" PRIMARY KEY, btree (measurementsid)\n> > \"measurements_measurementstypeid\" btree (measurementstypeid)\n> > \"measurements_productid\" btree (productid)\n> >\n> > pd=> \\d product\n> > Tabelle \"public.product\"\n> > Spalte | Typ | Attribute\n> > ------------------+-----------------------------+---------------------\n> > ------------------+-----------------------------+---------------------\n> > ------------------+-----------------------------+---------------------\n> > ------------------+-----------------------------+--\n> > productid | bigint | not null Vorgabewert\n> nextval('product_productid_seq'::regclass)\n> > ordermaterialsid | integer | not null\n> > testerid | integer |\n> > equipmentid | integer | not null\n> > timestamp | timestamp without time zone | not null\n> > state | character varying(20) | not null Vorgabewert\n> 'Unknown'::character varying\n> > exported | character varying(1) | not null Vorgabewert\n> 'N'::character varying\n> > mc_selectionid | integer |\n> > Indexe:\n> > \"product_pkey\" PRIMARY KEY, btree (productid)\n> > \"product_equipmentid\" btree (equipmentid)\n> > \"product_exported\" btree (exported)\n> > \"product_mc_selectionid\" btree (mc_selectionid)\n> > \"product_ordermaterialsid\" btree (ordermaterialsid)\n> > \"product_state\" btree (state)\n> > \"product_testerid\" btree (testerid)\n> > \"product_timestamp\" btree (\"timestamp\")\n> > Fremdschlüssel-Constraints:\n> > \"fk_equipmentid\" FOREIGN KEY (equipmentid) REFERENCES\n> equipment(equipmentid) ON UPDATE CASCADE ON DELETE RESTRICT\n> > \"fk_mc_selectionid\" FOREIGN KEY (mc_selectionid) REFERENCES\n> mc_selection(mc_selectionid) ON UPDATE CASCADE ON DELETE SET 
NULL\n> > \"fk_ordermaterialsid\" FOREIGN KEY (ordermaterialsid) REFERENCES\n> ordermaterials(ordermaterialsid) ON UPDATE CASCADE ON DELETE RESTRICT\n> > \"fk_testerid\" FOREIGN KEY (testerid) REFERENCES tester(testerid)\n> > ON UPDATE CASCADE ON DELETE RESTRICT\n> >\n> > pd=> \\d measurementstype\n> > Tabelle \"public.measurementstype\"\n> > Spalte | Typ | Attribute\n> > --------------------+------------------------+------------------------\n> > --------------------+------------------------+------------------------\n> > --------------------+------------------------+------------------------\n> > --------------------+------------------------+-----------\n> > measurementstypeid | integer | not null Vorgabewert\n> nextval('measurementstype_measurementstypeid_seq'::regclass)\n> > datatype | character varying(20) | not null Vorgabewert\n> 'char'::character varying\n> > name | character varying(255) | not null\n> > description | character varying(255) | Vorgabewert NULL::character\n> varying\n> > unit | character varying(20) | Vorgabewert NULL::character varying\n> > step | integer |\n> > stepdescription | character varying(255) | Vorgabewert NULL::character\n> varying\n> > permissionlevel | integer | not null Vorgabewert 0\n> > Indexe:\n> > \"measurementstype_pkey\" PRIMARY KEY, btree (measurementstypeid)\n> > \"measurementstype_datatype\" btree (datatype)\n> > \"measurementstype_name\" btree (name)\n> > \"measurementstype_step\" btree (step)\n> > \"measurementstype_stepdescription\" btree (stepdescription)\n> >\n> \n\nYou don't tell: \n\t- what kind of hardware (specifically, how much RAM) you are using\n\t- what are your config settings: shared_buffers, work_mem, effective_cache_size\n\nAll this affects planner decisions, when choosing one (or another) execution path/plan.\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jun 2014 20:20:57 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seqscan on big table, when an Index-Usage should be\n possible"
}
] |
[
{
"msg_contents": "Hi, I'm trying to investigate a performance problem.\n\nWe have a large database (over 1TB) running on a server with 160GB of RAM\nand 32 cores (Xeon E5-2650). The database files are on a NetApp mount.\n\nThe software is Postgres 9.3.1 on Ubuntu 12.04, Linux 3.2.0-38-generic.\n\nGenerally, when a query is slow, it's because it's waiting for I/O. Since\nonly about 10% of the database can be in RAM at any time, this is expected.\nI'm trying to analyze the working set in the cache to see that relevant\ntables and indexes are cached. I can map our database's objects to files\nusing pg_class.relfilenode to get the name(s), and then I use fincore from\nlinux-ftools to inspect the Linux cache.\n\nI have noticed a few times that an index scan may be taking a long time,\nand the query's backend process is reading from disk at about 2 MB/s,\nspending 99% of its time waiting for I/O (using iotop). This makes sense,\nif scanning an index that is not in cache.\n\nAs this is happening, I expect bits of the index and table to be pulled\ninto cache, so the index scan may speed up as it goes, or will at least\nfinish with some portion of the index in cache, so the scan won't be so\nslow next time. I use fincore to see how much of the index and table are in\ncache (taking into account that large objects >1GB will be split into\nmultiple files). To my surprise, the files are cached 0%!\n\nSome research about the Linux page cache suggests that any file can be\nessentially forced into cache by cat-ting to /dev/null. So I cat a file of\none of these database objects, and I can see the cat process reading at\nabout 100MB/s, so it takes 10 sec for a 1GB file. Then I check fincore\nagain -- the file is still not cached. cat-ting the file again still takes\n10 sec.\n\nI cat several times in a row, and the file refuses to cache. Sometimes I\nsee a bit of the file appear in the cache, but get removed a few seconds\nlater. 
I also tried the fadvise program in the ftools, which didn't help.\nThe I/O on this machine is not all that high (a few MB/s for various\npostgres processes), and there are a few GB free (shown in top). Most of\nthe RAM (150GB+) is used by the page cache. No swap is in use.\n\nEventually the query finishes (or I cancel it). Then I find that running\ncat on the file does leave it in cache! So, is having multiple readers\nconfusing Linux, or is Postgres doing any madvise on the file (I'd expect\nno to both).\n\nSo here's where I'm stuck. How can reading a file not leave it in the Linux\ncache? I'd expect it to enter the inactive list (which is about 80GB), so\nI'd expect another 80GB would need to be read before it would be its turn\nto be evicted.... which should take a long time if my maximum read speed is\n100MB/s.\n\nSince I don't understand the behaviour of simply running cat and the file\nnot being cached, I wonder if this is an issue with Linux, not Postgres.\nBut the issue seems to happen with files in use by Postgres, so maybe\nthere's an interaction, so I thought I'd start here.\n\nAny ideas how I can debug this further? Thanks!\n\nBrian\n",
"msg_date": "Thu, 5 Jun 2014 14:32:04 -0700",
"msg_from": "Brio <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres files in use not staying in linux file cache"
},
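For anyone wanting to reproduce the fincore measurements described above without building linux-ftools, page-cache residency can be probed directly with the Linux mincore(2) syscall. The following is a minimal Linux-only sketch via ctypes, not the tool Brio used; the private mapping with PROT_WRITE is only there so ctypes can take the buffer's address, the file is never modified:

```python
# Minimal fincore-style probe: what fraction of a file's pages are
# resident in the Linux page cache, reported by mincore(2).
# A sketch (Linux-only), not the linux-ftools binary used in this thread.
import ctypes
import ctypes.util
import mmap
import os

_libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def cached_fraction(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        if size == 0:
            return 1.0  # nothing to cache
        # MAP_PRIVATE + PROT_WRITE gives a writable COW view, which lets
        # ctypes.from_buffer() expose the mapping's address; we never write.
        mm = mmap.mmap(fd, size, flags=mmap.MAP_PRIVATE,
                       prot=mmap.PROT_READ | mmap.PROT_WRITE)
        buf = ctypes.c_char.from_buffer(mm)
        try:
            pages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
            vec = (ctypes.c_ubyte * pages)()
            rc = _libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                               ctypes.c_size_t(size), vec)
            if rc != 0:
                raise OSError(ctypes.get_errno(), "mincore failed")
            return sum(b & 1 for b in vec) / pages
        finally:
            del buf       # release the exported buffer before closing the map
            mm.close()
    finally:
        os.close(fd)
```

Pointing `cached_fraction()` at the relfilenode files mapped via pg_class (as described in the message above) gives the same per-file residency number fincore reports.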
{
"msg_contents": "On 06/05/2014 04:32 PM, Brio wrote:\n\n> So here's where I'm stuck. How can reading a file not leave it in the\n> Linux cache? I'd expect it to enter the inactive list (which is about\n> 80GB), so I'd expect another 80GB would need to be read before it would\n> be its turn to be evicted.... which should take a long time if my\n> maximum read speed is 100MB/s.\n\nSo here's the thing. The Linux page reclamation code is *extremely \nbroken* in everything before 3.11. Take a look at this, then realize \nthat this is *only one patch* from several that target the memory \nmanager weightings:\n\nhttp://linux-kernel.2935.n7.nabble.com/patch-v2-0-3-mm-improve-page-aging-fairness-between-zones-nodes-td696105.html\n\nThis is especially true of the 3.2 kernel you're using. It's extremely \naggressive about ageing pages out of memory when there's high memory \npressure from frequent disk reads. Chances of promotion into the active \nset is dismal, so you end up with a constant churn between inactive and \ndisk. Worse, if you kick it hard enough by having too many PostgreSQL \nbackends using memory, it'll actively purge the active set while still \nfailing to promote the inactive set.\n\nThe dev that linked me to this patch said he tested it against 3.10, \nmeaning it probably went into 3.11 or 3.12. So I personally wouldn't \ntrust anything before 3.13. :p\n\nSince you're using Ubuntu 12.04, I strongly suggest upgrading your core \nto 12.04.4 and apply the linux-generic-lts-saucy pseudo-package to at \nleast get onto the 3.11 instead. The 3.2 kernel is pants-on-head \nretarded; we've had a lot more luck with 3.8 and above.\n\nCheers!\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 10 Jun 2014 16:07:02 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres files in use not staying in linux file cache"
},
{
"msg_contents": "> \n> From: Shaun Thomas <[email protected]>\n> Date: Tuesday, 10 June 2014 22:07\n> So here's the thing. The Linux page reclamation code is *extremely\n> broken* in everything before 3.11. Take a look at this, then realize\n> that this is *only one patch* from several that target the memory\n> manager weightings:\n> \n> <snipped>\n> \n> Since you're using Ubuntu 12.04, I strongly suggest upgrading your core\n> to 12.04.4 and apply the linux-generic-lts-saucy pseudo-package to at\n> least get onto the 3.11 instead. The 3.2 kernel is pants-on-head\n> retarded; we've had a lot more luck with 3.8 and above.\n\n> \nWithout trying to hijack this thread, is there any documentation around\nrecommended or known-good kernels for Postgres/DB type workloads?\nI can see there has been a lot of discussion around recent kernel\ndevelopments being detrimental (or potentially detrimental) -\nhttp://www.postgresql.org/message-id/[email protected] - , but\nthere doesn’t seem to be a definitive resource to know which kernels to\navoid.\n\nI ask because I’ve just read your statement above about 3.2 being\npants-on-head, and having had more luck with 3.8 and above – despite most\ninstallations being on much older (2.6.19) kernels (as per the thread).\nI’d be interested to see how much benefit we’d get from moving off 3.2, but\nI’d like to be aware of the risks of more recent kernel version’s also.\n\nCheers,\n\nTIm\n",
"msg_date": "Fri, 13 Jun 2014 08:19:19 +0100",
"msg_from": "Tim Kane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres files in use not staying in linux file cache"
},
{
"msg_contents": "On 06/13/2014 02:19 AM, Tim Kane wrote:\n\n> I ask because I’ve just read your statement above about 3.2 being\n> pants-on-head, and having had more luck with 3.8 and above – despite\n> most installations being on much older (2.6.19) kernels (as per the\n> thread).\n\nWell, the issue is that the 3.2 kernel was a huge departure from the 2.6 \ntree that most people are still using. Thanks to RHEL, CentOS and their \nilk, the 2.6 tree has had a much longer lifetime than it probably should \nhave. As a result, the newer kernels haven't had sufficient real-world \nserver testing.\n\nWith 3.2 being the first of those, it was a bit wonky, to be honest. The \nnew CPU scheduler didn't have enough knobs, and the knobs that *were* \nthere, were set more appropriately for desktop use. The memory manager \nwas a train wreck and has been patched numerous times with rather \nsweeping changes since. For a while, there was even a bug at how system \nload was calculated that caused it to be off by an order of magnitude or \nmore based on process switching activity.\n\nThe overall situation has improved significantly, as has the development \nmomentum. It's extremely difficult to say which kernel versions have \nmore risk than others, since no kernel seems to be around for more than \na month or two before the next one comes out. My opinion has been to get \non the latest stable kernel for the distribution, and ignore everything \nelse.\n\nFor Ubuntu 12.04.4 LTS, that's 3.11.\n\nOur systems are definitely much happier since the upgrade, but the \nplural of anecdote is not data. :)\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n",
"msg_date": "Fri, 13 Jun 2014 11:11:41 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres files in use not staying in linux file cache"
},
{
"msg_contents": "On Thu, Jun 5, 2014 at 2:32 PM, Brio <[email protected]> wrote:\n> Hi, I'm trying to investigate a performance problem.\n>\n> We have a large database (over 1TB) running on a server with 160GB of RAM\n> and 32 cores (Xeon E5-2650). The database files are on a NetApp mount.\n>\n...\n>\n> I have noticed a few times that an index scan may be taking a long time, and\n> the query's backend process is reading from disk at about 2 MB/s, spending\n> 99% of its time waiting for I/O (using iotop). This makes sense, if scanning\n> an index that is not in cache.\n\nDoes the index scan dirty most of the index blocks it touches? (When\nan index scan follows an index entry to a heap page and finds that the\ntuple is no longer needed, when it gets back to the index it might\nkill that entry, so that the next index scan doesn't need to do the\nfutile heap look up. This dirties the index block, even for a \"read\nonly\" scan. However, It would be unusual for a typical index scan to\ndo this for most of the blocks it touches. It could happen if the\nindex scan is to support a giant rarely run reporting query, for\nexample, or if your vacuuming schedule is not tuned correctly.)\n\nThe reason I ask that is that I have previously seen the dirty blocks\nof NetApp-served files get dropped from the linux page cache as soon\nas they are written back to the NetApp.\n\nI had written a little Perl script to cut postgresql out of the loop\nentirely to demonstrate this effect, but I no longer have access to\nit.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Jun 2014 11:14:41 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres files in use not staying in linux file cache"
},
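Jeff's lost Perl script can be approximated in any language that can reach mincore(2). The sketch below is an assumed reconstruction, not his original: it dirties a file on the filesystem under test, forces writeback with fsync(), and reports how many pages stay resident afterwards. On a sane local filesystem the before/after counts should be close; under the NFS-client behaviour Jeff describes, residency would collapse once the dirty pages are written back.

```python
# Assumed reconstruction of the experiment Jeff describes (his Perl
# script is lost): dirty a file's pages, force writeback with fsync(),
# and check page-cache residency before and after via mincore(2).
# Linux-only sketch.
import ctypes
import ctypes.util
import mmap
import os
import tempfile

_libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def resident_pages(fd, size):
    # MAP_PRIVATE + PROT_WRITE lets ctypes take the buffer's address;
    # untouched private pages still reflect the shared page cache.
    mm = mmap.mmap(fd, size, flags=mmap.MAP_PRIVATE,
                   prot=mmap.PROT_READ | mmap.PROT_WRITE)
    buf = ctypes.c_char.from_buffer(mm)
    try:
        pages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
        vec = (ctypes.c_ubyte * pages)()
        if _libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                         ctypes.c_size_t(size), vec) != 0:
            raise OSError(ctypes.get_errno(), "mincore failed")
        return sum(b & 1 for b in vec), pages
    finally:
        del buf
        mm.close()

def writeback_test(directory, size=4 * 1024 * 1024):
    """Dirty `size` bytes in a temp file under `directory`, fsync,
    and report page-cache residency before and after writeback."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, os.urandom(size))   # dirty the pages
        before = resident_pages(fd, size)
        os.fsync(fd)                     # force writeback (like `sync`)
        after = resident_pages(fd, size)
        print("before fsync: %d/%d pages resident" % before)
        print("after fsync:  %d/%d pages resident" % after)
        return before, after
    finally:
        os.close(fd)
        os.unlink(path)
```

Running `writeback_test("/path/on/the/nfs/mount")` against the mount under suspicion, and once against a local disk for comparison, should show whether writeback alone is evicting the pages. The directory argument is whatever mount you want to probe.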
{
"msg_contents": "(Sorry, our last exchange forgot to cc the pgsql-performance list.)\n\nYes, I did see the original problem only when postgres was also accessing\nthe file. But the issue is intermittent, so I can't reproduce on demand, so\nI'm only reporting what I saw a small number of times, and not necessarily\n(or likely) the whole story.\n\nI've upgraded the kernel on my test machine, and I haven't seen the\noriginal problem. But I am seeing what looks like it might be the problem\nyou describe, Jeff. Here's what I saw:\n\nThis machine has 64 GB of RAM. There was about 20 GB free, and the rest was\nmostly file cache, mostly our large 1TB database. I ran a script that did\nvarious reading and writing to the database, but mostly updated many rows\nover and over again to new updated values. As this script ran, the cached\nmemory slowly dropped, and free memory increased. I now have 43 GB free!\nI'd expect practically any activity to leave files in the cache, and no\nsignificant evictions to occur until memory runs low. What actually happens\nis the cache increases gradually, and then drops down in chunks. I would\nthink that the only file activity that would evict from cache would be\ndeleting files, which would only happen when dropping tables (not happening\nin my test script), and also WAL file cycling, which should stay a constant\namount of memory.\n\nBut, if blocks that are written are evicted from the cache, that would\nexplain it, so I'd like to test that. As a very basic test, I tried:\ncd /path-to-nfs-mount\necho \"foo\" > foo.txt\nsync # This command forces a write? I haven't really used it before\nlinux-fincore foo\nshows the file is cached 100%.\n\nAlthough you don't have the Perl script you mentioned, could you give a\nbasic description of what it does, so I could try to recreate it? 
I'm not\nfamiliar with Perl, but I've done plenty of C programming, so demonstrating\nthis with the actual Linux APIs would be ideal.\n\nThanks Jeff!\n\n\n\nOn Mon, Jun 23, 2014 at 3:56 PM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Jun 18, 2014 at 11:18 PM, Brio <[email protected]> wrote:\n> > Hi Jeff,\n> >\n> > That is interesting -- I hadn't thought about how a read-only index scan\n> > might actually write the index.\n> >\n> > But, to avoid effects like that, that's why I dropped down to simply\n> using\n> > \"cat\" on the file, and I saw the same problem there, with no writing\n> back.\n>\n> I thought that you saw the same problem with cat only when it was\n> running concurrently with the index scan, and when the index scan\n> stopped the problem in cat went away.\n>\n> > So the problem really seemed to be in Linux, not Postgres.\n> >\n> > But why would dirty blocks of NetApp-served files get dropped from the\n> Linux\n> > page cache as soon as they are written back to the NetApp? Is it a bug in\n> > the NetApp driver? Isn't the driver just NFS?\n>\n> I don't know why it would do that, it never made much sense to me.\n> But that is what the experimental evidence indicated.\n>\n> What I was using was NetApp on the back-end and just the plain linux\n> NFS driver on the client end, and I assume the problem was on the\n> client end. (Maybe you can get a custom client driver from Net-App\n> designed to work specifically with their server, but if so, I didn't\n> do that. For that matter, maybe just the default linux NFS driver has\n> improved.)\n>\n> > That sounds like a serious\n> > issue. Is there any online documentation of bugs like that with NetApp?\n>\n> Yes, it was a serious issue for one intended use. But it is was\n> partially mitigated by the fact that I would probably never run an\n> important production database over NFS anyway, out of corruption\n> concerns. 
I was hoping to use it just for testing purposes, but this\n> limit made it rather useless for that as well. I don't think it would\n> be a NetApp specific issue and didn't approach it from that angle,\n> just that NetApp didn't save from the issue.\n>\n> Cheers,\n>\n> Jeff\n>",
"msg_date": "Tue, 24 Jun 2014 17:13:25 -0700",
"msg_from": "Brio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres files in use not staying in linux file cache"
}
] |
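The poster above asks how the linux-fincore check could be demonstrated "with the actual Linux APIs". Below is a hedged sketch of that check in Python — my own stand-in for the C program discussed, not code from the thread — using ctypes to call mincore(2) the same way fincore-style tools do: map the file, then ask the kernel which pages of the mapping are resident in the page cache.

```python
import ctypes
import mmap
import os
import tempfile

def cached_pages(path):
    """Return (resident, total) page counts for `path`, the same check
    linux-fincore performs: mmap the file, then call mincore(2), which
    fills one status byte per page (bit 0 set = resident in page cache)."""
    size = os.path.getsize(path)
    npages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
    if npages == 0:
        return 0, 0
    libc = ctypes.CDLL(None, use_errno=True)  # main namespace includes libc
    with open(path, "rb") as f:
        # ACCESS_COPY (private, writable) lets ctypes borrow the mapping's
        # address without needing the file opened for writing.
        mm = mmap.mmap(f.fileno(), size, access=mmap.ACCESS_COPY)
    buf = ctypes.c_char.from_buffer(mm)       # first byte of the mapping
    vec = (ctypes.c_ubyte * npages)()         # one status byte per page
    rc = libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                      ctypes.c_size_t(size), vec)
    err = ctypes.get_errno()
    del buf                                   # release export before closing
    mm.close()
    if rc != 0:
        raise OSError(err, "mincore() failed")
    return sum(v & 1 for v in vec), npages

# Demo: a file written moments ago should still be fully cached.
fd, demo = tempfile.mkstemp()
os.write(fd, b"x" * (3 * mmap.PAGESIZE))
os.close(fd)
resident, total = cached_pages(demo)
os.unlink(demo)
print(f"{resident}/{total} pages resident")
```

Running this in a loop against a table's relation file while a workload runs would show the kind of chunked evictions described above, without relying on the fincore binary.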
[
{
"msg_contents": "Well it's me again, with another performance regression. We have this query:\n\nSELECT *\nFROM users u\nWHERE (u.user_group_id IN\n (SELECT ug.id\n FROM user_groups ug, pro_partners p\n WHERE ug.pro_partner_id = p.id\n AND p.tree_sortkey BETWEEN\nE'0000000000010101000001000101000110000000000000000000000101101010'\nAND\ntree_right(E'0000000000010101000001000101000110000000000000000000000101101010')\nOFFSET 0)\nAND u.deleted_time IS NULL)\nORDER BY u.id LIMIT 1000;\n\nOK so on 8.4.2 it runs fast. If I take out the offset 0 it runs slow.\n\nIf I run this on 8.4.15. 8.4.19 or 8.4.21 it also runs slow.\n\nIf I drop the limit 1000 it runs fast again. Query plans:\n\n8.4.2 with offset 0: http://explain.depesz.com/s/b3G\n8.4.2 without offset 0: http://explain.depesz.com/s/UFAl\n8.4.2 without offset 0 and with no limit: http://explain.depesz.com/s/krdf\n8.4.21 with or without offset 0 and no limit: http://explain.depesz.com/s/9m1\n8.4.21 with limit: http://explain.depesz.com/s/x2G\n\nA couple of points: The with limit on 8.4.21 never returns. It runs\nfor hours and we just have to kill it. 8.4.2 without the offset and\nwith a limit never returns. Tables are analyzed, data sets are the\nsame (slony replication cluster) and I've tried cranking up stats\ntarget to 1000 with no help.\n\ntree_sortkey is defined here: http://rubick.com/openacs/tree_sortkey\nbut I don't think it's the neus of the problem, it looks like join\nestimations are way off here.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jun 2014 16:15:53 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> Well it's me again, with another performance regression. We have this query:\n> SELECT *\n> FROM users u\n> WHERE (u.user_group_id IN\n> (SELECT ug.id\n> FROM user_groups ug, pro_partners p\n> WHERE ug.pro_partner_id = p.id\n> AND p.tree_sortkey BETWEEN\n> E'0000000000010101000001000101000110000000000000000000000101101010'\n> AND\n> tree_right(E'0000000000010101000001000101000110000000000000000000000101101010')\n> OFFSET 0)\n> AND u.deleted_time IS NULL)\n> ORDER BY u.id LIMIT 1000;\n\n> OK so on 8.4.2 it runs fast. If I take out the offset 0 it runs slow.\n> If I run this on 8.4.15. 8.4.19 or 8.4.21 it also runs slow.\n\nThis seems to be about misestimation of the number of rows out of a\nsemijoin, so I'm thinking that the reason for the behavior change is\ncommit 899d7b00e9 or 46f775144e. It's unfortunate that your example\nends up on the wrong side of that change, but the original 8.4.x behavior\nwas definitely pretty bogus; I think it's only accidental that 8.4.2\nmanages to choose a better plan. (The fact that you need the crutch\nof the \"OFFSET 0\" to get it to do so is evidence that it doesn't\nreally know what its doing ;-).)\n\nOne thing you might try is back-patching commit 4c2777d0b733, as I\nsuspect that you're partially getting burnt by that in this scenario.\nI was afraid to back-patch that because of the API change possibly\nbreaking third-party code, but in a private build that's unlikely\nto be an issue.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 06 Jun 2014 21:38:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Thanks we'll give that a try.\n\nOn Fri, Jun 6, 2014 at 7:38 PM, Tom Lane <[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> Well it's me again, with another performance regression. We have this query:\n>> SELECT *\n>> FROM users u\n>> WHERE (u.user_group_id IN\n>> (SELECT ug.id\n>> FROM user_groups ug, pro_partners p\n>> WHERE ug.pro_partner_id = p.id\n>> AND p.tree_sortkey BETWEEN\n>> E'0000000000010101000001000101000110000000000000000000000101101010'\n>> AND\n>> tree_right(E'0000000000010101000001000101000110000000000000000000000101101010')\n>> OFFSET 0)\n>> AND u.deleted_time IS NULL)\n>> ORDER BY u.id LIMIT 1000;\n>\n>> OK so on 8.4.2 it runs fast. If I take out the offset 0 it runs slow.\n>> If I run this on 8.4.15. 8.4.19 or 8.4.21 it also runs slow.\n>\n> This seems to be about misestimation of the number of rows out of a\n> semijoin, so I'm thinking that the reason for the behavior change is\n> commit 899d7b00e9 or 46f775144e. It's unfortunate that your example\n> ends up on the wrong side of that change, but the original 8.4.x behavior\n> was definitely pretty bogus; I think it's only accidental that 8.4.2\n> manages to choose a better plan. (The fact that you need the crutch\n> of the \"OFFSET 0\" to get it to do so is evidence that it doesn't\n> really know what its doing ;-).)\n>\n> One thing you might try is back-patching commit 4c2777d0b733, as I\n> suspect that you're partially getting burnt by that in this scenario.\n> I was afraid to back-patch that because of the API change possibly\n> breaking third-party code, but in a private build that's unlikely\n> to be an issue.\n>\n> regards, tom lane\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jun 2014 19:45:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query that's pulling data for another system using COPY (query) to\nSTDOUT CSV on a 9.2.4 db (we're in the process of upgrading to 9.3). The\nfinal csv file is large (~75GB, 86 million rows). The query is also large,\nconsisting of one table (86 million rows) left joined to a total of 30\nother tables (of mixed size), 3 of which are CTE supplied by a WITH clause\nof and consist of 3 joins each for a total of 39 joins in the plan.\nwork_mem on the system is set to 256MB.\n\nWe're running into problems with the machine running out of memory with\nthis single query process consuming over 100GB resident memory before the\nmachine exhausts swap and the Linux OOM handling eventually kills it. The\nquery plan from explain comes to 186 rows, which assuming that each row\nrequires the full work_mem (which should be a significant overestimate of\nthe number operations and size) is < 50GB and we're observing substantially\nmore then that. Is it reasonable to expect that a query will take ~ <\nwork_mem * # of operations, or are there other factors in play?\n\nThe plan looks reasonable (though there are some odd right join uses, see\nbelow) and the row estimates look pretty accurate with the exception that\none of the CTE queries is under-estimated row count wise by a little over 2\norders of magnitude (260k vs. 86 million rows). That query does a group by\n(plans as a sort then group aggregate, there are no hash aggregates in the\nplan which is something that might increase memory) and the group part\nmiss-estimates the final number of rows for that CTE. 
Unlike the other CTEs\nwhen it's merged joined into the main query there's no materialize line in\nthe plan (no idea if that's relevant).\n\nAs to the right join (used for a few of the joins, most are left join or\nmerge):\n            ->  Hash Right Join  (cost=225541299.19..237399743.38\nrows=86681834 width=1108)\n                  Hash Cond: (xxx.xxx = yyy.yyy)\n                  ->  Seq Scan on xxx  (cost=0.00..6188.18\nrows=9941 width=20)\n                        Filter: (mode = 'live'::text)\n                  ->  Hash  (cost=212606744.27..212606744.27\nrows=86681834 width=1096)\n                 ....\nI'm not sure if I'm reading it right, but it looks like it's hashing the 86\nmillion row set and scanning over the 10k row set which seems to me like\nthe opposite of what you'd want to do, but I haven't seen a lot of hash\nright joins in plans and I'm not sure if that's how it works.\n\nTim",
"msg_date": "Wed, 11 Jun 2014 18:02:55 -0400",
"msg_from": "Timothy Garnett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query memory usage greatly in excess of work_mem * query plan steps"
},
{
"msg_contents": "We had a problem in the 8.X series with COPY IN - it did not respect any\nconfigured maximums and just kept allocating memory until it could fit the\nentire COPY contents down to the \\. into RAM. Could there be a similar\nissue with COPY OUT?\n\n-----\nDan\n\n\nOn Wed, Jun 11, 2014 at 6:02 PM, Timothy Garnett <[email protected]>\nwrote:\n\n> Hi,\n>\n> I have a query that's pulling data for another system using COPY (query)\n> to STDOUT CSV on a 9.2.4 db (we're in the process of upgrading to 9.3).\n> The final csv file is large (~75GB, 86 million rows). The query is also\n> large, consisting of one table (86 million rows) left joined to a total of\n> 30 other tables (of mixed size), 3 of which are CTE supplied by a WITH\n> clause of and consist of 3 joins each for a total of 39 joins in the plan.\n> work_mem on the system is set to 256MB.\n>\n> We're running into problems with the machine running out of memory with\n> this single query process consuming over 100GB resident memory before the\n> machine exhausts swap and the Linux OOM handling eventually kills it. The\n> query plan from explain comes to 186 rows, which assuming that each row\n> requires the full work_mem (which should be a significant overestimate of\n> the number operations and size) is < 50GB and we're observing substantially\n> more then that. Is it reasonable to expect that a query will take ~ <\n> work_mem * # of operations, or are there other factors in play?\n>\n> The plan looks reasonable (though there are some odd right join uses, see\n> below) and the row estimates look pretty accurate with the exception that\n> one of the CTE queries is under-estimated row count wise by a little over 2\n> orders of magnitude (260k vs. 86 million rows). That query does a group by\n> (plans as a sort then group aggregate, there are no hash aggregates in the\n> plan which is something that might increase memory) and the group part\n> miss-estimates the final number of rows for that CTE. 
Unlike the other CTEs\n> when it's merged joined into the main query there's no materialize line in\n> the plan (no idea if that's relevant).\n>\n> As to the right join (used for a few of the joins, most are left join or\n> merge):\n>             ->  Hash Right Join  (cost=225541299.19..237399743.38\n> rows=86681834 width=1108)\n>                   Hash Cond: (xxx.xxx = yyy.yyy)\n>                   ->  Seq Scan on xxx  (cost=0.00..6188.18\n> rows=9941 width=20)\n>                         Filter: (mode = 'live'::text)\n>                   ->  Hash  (cost=212606744.27..212606744.27\n> rows=86681834 width=1096)\n>                 ....\n> I'm not sure if I'm reading it right, but it looks like it's hashing the\n> 86 million row set and scanning over the 10k row set which seems to me like\n> the opposite of what you'd want to do, but I haven't seen a lot of hash\n> right joins in plans and I'm not sure if that's how it works.\n>\n> Tim\n>",
"msg_date": "Fri, 13 Jun 2014 22:18:45 -0400",
"msg_from": "\"Franklin, Dan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query memory usage greatly in excess of work_mem *\n query plan steps"
},
{
"msg_contents": "Timothy Garnett <[email protected]> writes:\n> I have a query that's pulling data for another system using COPY (query) to\n> STDOUT CSV on a 9.2.4 db (we're in the process of upgrading to 9.3).\n> ...\n> We're running into problems with the machine running out of memory with\n> this single query process consuming over 100GB resident memory before the\n> machine exhausts swap and the Linux OOM handling eventually kills it.\n\nI wonder if you're hitting some sort of memory leak. What I'd suggest\ndoing to help diagnose that is to show us a memory map. Do this:\n\n(1) Set a ulimit so that the process will get ENOMEM sometime before\nthe OOM killer awakens (this is good practice anyway, if you've not\ndisabled OOM kills). On Linux systems, ulimit -m or -v generally\ndoes the trick. The easiest way to enforce this is to add a ulimit\ncommand to the script that launches the postmaster, then restart.\n\n(2) Make sure your logging setup will collect anything printed to\nstderr by a backend. If you use logging_collector you're good to go;\nif you use syslog you need to check where the postmaster's stderr\nwas redirected, making sure it's not /dev/null.\n\n(3) Run the failing query. Collect the memory map it dumps to stderr\nwhen it fails, and send it in. What you're looking for is a couple\nhundred lines looking like this:\n\nTopMemoryContext: 69984 total in 10 blocks; 6152 free (16 chunks); 63832 used\n MessageContext: 8192 total in 1 blocks; 7112 free (1 chunks); 1080 used\n Operator class cache: 8192 total in 1 blocks; 1640 free (0 chunks); 6552 used\n smgr relation table: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used\n ... 
lots more in the same vein ...\n\n\n> As to the right join (used for a few of the joins, most are left join or\n> merge):\n> -> Hash Right Join (cost=225541299.19..237399743.38\n> rows=86681834 width=1108)\n> Hash Cond: (xxx.xxx = yyy.yyy)\n> -> Seq Scan on xxx (cost=0.00..6188.18\n> rows=9941 width=20)\n> Filter: (mode = 'live'::text)\n> -> Hash (cost=212606744.27..212606744.27\n> rows=86681834 width=1096)\n> ....\n> I'm not sure if I'm reading it right, but it looks like it's hashing the 86\n> million row set and scanning over the 10k row set which seems to me like\n> the opposite of what you'd want to do, but I haven't seen a lot of hash\n> right joins in plans and I'm not sure if that's how it works.\n\nThat looks pretty odd to me too, though I guess the planner might think it\nwas sensible if xxx's join column had very low cardinality. Still, it's\nweird. What have you got work_mem set to exactly?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Jun 2014 10:26:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query memory usage greatly in excess of work_mem * query plan\n steps"
}
] |
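Tom's step (1) above — capping the process so an oversized allocation fails with ENOMEM (and lets the backend dump its memory-context map) instead of being SIGKILLed by the OOM killer — can be illustrated outside PostgreSQL. This is a hedged sketch, not the actual postmaster launch script; the 1 GiB cap and 4 GiB allocation are arbitrary demo numbers, not PostgreSQL settings.

```python
import resource
import subprocess
import sys

# Cap the child's address space, the same effect as adding `ulimit -v`
# to the script that launches the postmaster: a too-large allocation now
# fails with ENOMEM (MemoryError in Python) rather than being OOM-killed.
ONE_GIB = 1024 ** 3

def apply_limit():
    # Runs in the child between fork and exec, like a ulimit in a wrapper
    # script; equivalent to `ulimit -v 1048576`.
    resource.setrlimit(resource.RLIMIT_AS, (ONE_GIB, ONE_GIB))

child = subprocess.run(
    [sys.executable, "-c", "bytearray(4 * 1024 ** 3)"],  # try to grab 4 GiB
    preexec_fn=apply_limit,
    capture_output=True,
    text=True,
)
# The child exits with a clean, catchable failure; a PostgreSQL backend in
# the same situation prints its memory-context map to stderr on the way out.
print(child.returncode, "MemoryError" in child.stderr)
```

The point of the exercise is the failure mode: a SIGKILL from the OOM killer leaves no diagnostics, while ENOMEM lets the process report what it was holding when it died.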
[
{
"msg_contents": "Hi all. This might be tricky in so much as there’s a few moving parts (when isn’t there?), but I’ve tried to test the postgres side as much as possible.\n\nTrying to work out a potential database bottleneck with a HTTP application (written in Go):\nPages that render HTML templates but don’t perform DB queries can hit ~36k+ req/s\nPages that perform a SELECT on a single row net about ~6.6k req/s: db.Get(l, \"SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date\", l.Id)\nPages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\nThere’s very little “extra” logic around these queries: you can find the code here (about 39 lines for both functions) https://gist.github.com/elithrar/b2497b0b473da64932b5\nOther pertinent details:\nIt’s always been about this slow to my knowledge\nThe table is a test database with about 40 rows, although performance doesn’t change noticeably even with a few hundred (it’s unlikely to ever be more than a 10,000 rows over its lifetime)\nRunning PostgreSQL 9.3.4 on OS X w/ a 3.5GHz i7, 12GB RAM, 128GB PCI-E SSD.\nThe Go application peaks at about 40MB memory when hitting 37k req/s — so there doesn’t appear to be an issue of it eating into the available RAM on the machine\nI’m also aware that HTTP benchmarks aren’t the most reliable thing, but I’m using wrk -c 400 -t 32 -15s to stress it out\nThe application has a connection pool via the lib/pq driver (https://github.com/lib/pq) with MaxOpen set to 256 connections. 
Stack size is 8GB and max socket connections are set to 1024 (running out of FDs isn’t the problem here from what I can see).\n\nRelevant postgresql.conf settings — everything else should be default, including fsync/synchronous commits (on) for obvious reasons:\nmax_connections = 512\nshared_buffers = 2048MB\ntemp_buffers = 16MB\nwork_mem = 4MB\nwal_buffers = 16\ncheckpoint_segments = 16\nrandom_page_cost = 2.0\neffective_cache_size = 8192MB\nThe query in question is: http://explain.depesz.com/s/7g8 and the table schema is as below:\n\n\n Table \"public.listings\"\n┌───────────────┬──────────────────────────┬───────────┐\n│ Column │ Type │ Modifiers │\n├───────────────┼──────────────────────────┼───────────┤\n│ id │ character varying(17) │ not null │\n│ title │ text │ │\n│ company │ text │ │\n│ location │ text │ │\n│ description │ text │ │\n│ rendered_desc │ text │ │\n│ term │ text │ │\n│ commute │ text │ │\n│ company_url │ text │ │\n│ rep │ text │ │\n│ rep_email │ text │ │\n│ app_method │ text │ │\n│ app_email │ text │ │\n│ app_url │ text │ │\n│ posted_date │ timestamp with time zone │ │\n│ edited_date │ timestamp with time zone │ │\n│ renewed_date │ timestamp with time zone │ │\n│ expiry_date │ timestamp with time zone │ │\n│ slug │ text │ │\n│ charge_id │ text │ │\n│ sponsor_id │ text │ │\n│ tsv │ tsvector │ │\n└───────────────┴──────────────────────────┴───────────┘\nIndexes:\n \"listings_pkey\" PRIMARY KEY, btree (id)\n \"fts\" gin (tsv)\n \"listings_expiry_date_idx\" btree (expiry_date)\n \"listings_fts_idx\" gin (to_tsvector('english'::regconfig, (((((((title || ' '::text) || company) || ' '::text) || location) || ' '::text) || term) || ' '::text) || commute))\nTriggers:\n tsvectorupdate BEFORE INSERT OR UPDATE ON listings FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('tsv', 'pg_catalog.english', 'title', 'company', 'location', 'term', 'commute’)\n\n\nThe single row query has a query plan here: http://explain.depesz.com/s/1Np (this is where I see 
6.6k req/s at the application level), \n\nSome pgbench results from this machine as well:\n$ pgbench -c 128 -C -j 4 -T 15 -M extended -S\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 1\nquery mode: extended\nnumber of clients: 128\nnumber of threads: 4\nduration: 15 s\nnumber of transactions actually processed: 17040\ntps = 1134.481459 (including connections establishing)\ntps = 56884.093652 (excluding connections establishing)\nUltimately I'm not expecting a miracle—database ops are nearly always the slowest part of a web server outside the latency to the client itself—but I'd expect something a little closer (even 10% of 33k would be a lot better). And of course, this is somewhat \"academic\" because I don't expect to see four million hits an hour—but I'd also like to catch problems for future reference.\n\nThanks in advance.",
"msg_date": "Thu, 12 Jun 2014 15:08:27 +0800",
"msg_from": "Matt Silverlock <[email protected]>",
"msg_from_op": true,
"msg_subject": "OFFSET/LIMIT - Disparate Performance w/ Go application"
},
{
"msg_contents": "Matt Silverlock <[email protected]> writes:\n> Hi all. This might be tricky in so much as there’s a few moving parts (when isn’t there?), but I’ve tried to test the postgres side as much as possible.\n> Trying to work out a potential database bottleneck with a HTTP application (written in Go):\n> Pages that render HTML templates but don’t perform DB queries can hit ~36k+ req/s\n> Pages that perform a SELECT on a single row net about ~6.6k req/s: db.Get(l, \"SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date\", l.Id)\n> Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\n\nYou don't show us exactly what you're doing with OFFSET/LIMIT, but I'm\ngoing to guess that you're using it to paginate large query results.\nThat's basically always going to suck: Postgres has no way to implement\nOFFSET except to generate and then throw away that number of initial rows.\nIf you do the same query over again N times with different OFFSETs, it's\ngoing to cost you N times as much as the base query would.\n\nIf the application's interaction with the database is stateless then you\nmay not have much choice, but if you do have a choice I'd suggest doing\npagination by means of fetching from a cursor rather than independent\nqueries.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Jun 2014 10:58:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFSET/LIMIT - Disparate Performance w/ Go application"
},
{
"msg_contents": "On Thu, Jun 12, 2014 at 9:58 AM, Tom Lane <[email protected]> wrote:\n> Matt Silverlock <[email protected]> writes:\n>> Hi all. This might be tricky in so much as there’s a few moving parts (when isn’t there?), but I’ve tried to test the postgres side as much as possible.\n>> Trying to work out a potential database bottleneck with a HTTP application (written in Go):\n>> Pages that render HTML templates but don’t perform DB queries can hit ~36k+ req/s\n>> Pages that perform a SELECT on a single row net about ~6.6k req/s: db.Get(l, \"SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date\", l.Id)\n>> Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\n>\n> You don't show us exactly what you're doing with OFFSET/LIMIT, but I'm\n> going to guess that you're using it to paginate large query results.\n> That's basically always going to suck: Postgres has no way to implement\n> OFFSET except to generate and then throw away that number of initial rows.\n> If you do the same query over again N times with different OFFSETs, it's\n> going to cost you N times as much as the base query would.\n>\n> If the application's interaction with the database is stateless then you\n> may not have much choice, but if you do have a choice I'd suggest doing\n> pagination by means of fetching from a cursor rather than independent\n> queries.\n\nWell, you can also do client side pagination using the row-wise\ncomparison feature, implemented by you :-). Cursors can be the best\napproach, but it's nice to know the client side approach if you're\nreally stateless and/or want to be able to pick up external changes\nduring the browse.\n\nSELECT * FROM listings\nWHERE (id, expiry_date) > (last_id_read, last_expiry_date_read)\nORDER BY id, expiry_date LIMIT x.\n\nthen you just save off the highest id, date pair and feed it back into\nthe query. 
This technique is useful for emulating ISAM browse\noperations.\n\nmerlin\n",
"msg_date": "Thu, 12 Jun 2014 11:15:30 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFSET/LIMIT - Disparate Performance w/ Go application"
},
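Merlin's row-wise comparison approach can be sketched in Go. This is a minimal sketch, not the poster's actual code: the `Cursor` type and `listingsPageQuery` helper are hypothetical names, and the query builder only shows how the keyset predicate replaces OFFSET (one extra row is requested so the caller can detect a next page, as discussed later in the thread).

```go
package main

import "fmt"

// Cursor captures the last row of the previous page, per Merlin's
// row-wise comparison suggestion. The zero value means "first page".
type Cursor struct {
	LastID         string
	LastExpiryDate string // ISO date string of the last row read
}

// listingsPageQuery builds a keyset-paginated query: instead of OFFSET,
// it resumes strictly after the last (id, expiry_date) pair already seen.
// perPage+1 rows are requested so the caller can detect a further page.
func listingsPageQuery(c Cursor, perPage int) (string, []interface{}) {
	if c.LastID == "" {
		return fmt.Sprintf(
			"SELECT * FROM listings ORDER BY id, expiry_date LIMIT %d",
			perPage+1), nil
	}
	return fmt.Sprintf(
		"SELECT * FROM listings WHERE (id, expiry_date) > ($1, $2) "+
			"ORDER BY id, expiry_date LIMIT %d",
		perPage+1), []interface{}{c.LastID, c.LastExpiryDate}
}

func main() {
	q, args := listingsPageQuery(Cursor{LastID: "abc123", LastExpiryDate: "2014-06-12"}, 15)
	fmt.Println(q)
	fmt.Println(len(args)) // the two keyset values fed back into the query
}
```

The returned query and args would be passed to lib/pq as usual; because (id, expiry_date) is a leading-column match for an index on those columns, Postgres can seek to the start of the page rather than generating and discarding OFFSET rows.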
{
"msg_contents": "On Thursday 12 June 2014 at 16:58:06, Tom Lane <[email protected]> wrote:\n\n Matt Silverlock <[email protected]> writes:\n > Hi all. This might be tricky in so much as there’s a few moving parts (when isn’t there?), but I’ve tried to test the postgres side as much as possible.\n > Trying to work out a potential database bottleneck with a HTTP application (written in Go):\n > Pages that render HTML templates but don’t perform DB queries can hit ~36k+ req/s\n > Pages that perform a SELECT on a single row net about ~6.6k req/s: db.Get(l, \"SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date\", l.Id)\n > Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\n\n You don't show us exactly what you're doing with OFFSET/LIMIT, but I'm\n going to guess that you're using it to paginate large query results.\n That's basically always going to suck: Postgres has no way to implement\n OFFSET except to generate and then throw away that number of initial rows.\n If you do the same query over again N times with different OFFSETs, it's\n going to cost you N times as much as the base query would.\n\nAre there any plans to make PG implement OFFSET more efficiently, so it doesn't have to \"read and throw away\"? I used SQL Server back in 2011 in a project and seem to remember they implemented offset pretty fast. Paging in a resultset of millions was much faster than in PG.\n\n-- \nAndreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected]\nwww.visena.com",
"msg_date": "Thu, 12 Jun 2014 21:48:48 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFSET/LIMIT - Disparate Performance w/ Go\n application"
},
{
"msg_contents": "On Thu, Jun 12, 2014 at 2:48 PM, Andreas Joseph Krogh\n<[email protected]> wrote:\n>\n> On Thursday 12 June 2014 at 16:58:06, Tom Lane <[email protected]> wrote:\n>\n> Matt Silverlock <[email protected]> writes:\n> > Hi all. This might be tricky in so much as there’s a few moving parts (when isn’t there?), but I’ve tried to test the postgres side as much as possible.\n> > Trying to work out a potential database bottleneck with a HTTP application (written in Go):\n> > Pages that render HTML templates but don’t perform DB queries can hit ~36k+ req/s\n> > Pages that perform a SELECT on a single row net about ~6.6k req/s: db.Get(l, \"SELECT * FROM listings WHERE id = $1 AND expiry_date > current_date\", l.Id)\n> > Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\n>\n> You don't show us exactly what you're doing with OFFSET/LIMIT, but I'm\n> going to guess that you're using it to paginate large query results.\n> That's basically always going to suck: Postgres has no way to implement\n> OFFSET except to generate and then throw away that number of initial rows.\n> If you do the same query over again N times with different OFFSETs, it's\n> going to cost you N times as much as the base query would.\n>\n> Are there any plans to make PG implement OFFSET more efficiently, so it doesn't have to \"read and throw away\"?\n>\n> I used SQL Server back in 2011 in a project and seem to remember they implemented offset pretty fast. Paging in a resultset of millions was much faster than in PG.\n\nI doubt it. Offset is widely regarded as being pretty dubious. SQL\nhas formally defined the way to do this (cursors) and optimizing\noffset would be complex for such a little benefit. Speaking\ngenerally SQL Server also has some trick optimizations of other\nconstructs like fast count(*) but in my experience underperforms pg in\nmany areas.\n\nmerlin\n",
"msg_date": "Thu, 12 Jun 2014 14:58:20 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFSET/LIMIT - Disparate Performance w/ Go application"
},
{
"msg_contents": "On Thu, Jun 12, 2014 at 12:08 AM, Matt Silverlock\n<[email protected]> wrote:\n>\n> Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\n\nIs that tested at the OFFSET and LIMIT of 0 and 15, as shown in the\nexplain plan?\n\n\n> The query in question is: http://explain.depesz.com/s/7g8 and the table schema is as below:\n\nThe reported runtime of 0.078 ms should be able to sustain nearly 10\ntimes the reported rate of 1.3k/s, so the bottleneck would seem to be\nelsewhere.\n\nPerhaps the bottleneck is formatting the result set in postgres to be\nsent over the wire, then sending it over the wire, then parsing it in\nthe Go connection library to hand back to the Go user code, and then\nthe Go user code doing something meaningful with it.\n\nWhat happens if you get rid of the offset and the order by, and just\nuse limit? I bet it doesn't change the speed much (because that is\nnot where the bottleneck is).\n\nYou seem to be selecting an awful lot of wide columns. Do you really\nneed to see all of them?\n\n>\n> Some pgbench results from this machine as well:\n>\n> $ pgbench -c 128 -C -j 4 -T 15 -M extended -S\n\nThis is just benchmarking how fast you can make and break connections\nto the database.\n\nBecause your app is using an embedded connection pooler, this\nbenchmark isn't very relevant to your situation.\n\n\n>\n> Ultimately I'm not expecting a miracle—database ops are nearly always the slowest part\n> of a web server outside the latency to the client itself—but I'd expect something a little\n> closer (even 10% of 33k would be a lot better). And of course, this is somewhat \"academic\"\n> because I don't expect to see four million hits an hour—but I'd also like to catch problems\n> for future reference.\n\nI think you have succeeded in doing that. 
If you want to get\nsubstantially faster than the current speed in the future, you will\nneed a web-app-side results cache for this type of query.\n\nI can't imagine the results of such a query change more than 1300\ntimes/s, nor that anyone would notice or care if the observed results\nwere stale by two or three seconds.\n\nThat type of cache is a PITA, and I've never needed one because I also\ndon't expect to get 4 million hits an hour. But if this is what your\nfuture looks like, you'd be best off to embrace it sooner rather than\nlater.\n\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 12 Jun 2014 13:46:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OFFSET/LIMIT - Disparate Performance w/ Go application"
},
{
"msg_contents": "Thanks for the replies Jeff, Tom and Merlin.\n\n>> Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to top 1.3k req/s\n> \n> Is that tested at the OFFSET and LIMIT of 0 and 15, as shown in the\n> explain plan?\n\n\nYes — 0 (OFFSET) and 16 (LIMIT), or 15 and 31 (i.e. “second page” of results). There’s no difference on that front. For context, OFFSET is a multiple of 15 (i.e. 15 results per page) and LIMIT is always 15 + 1 in an attempt to fetch one more result, get the len of the returned slice and then return paginate true + slice the last result off if there’s more than 15.\n\n> \n> \n>> The query in question is: http://explain.depesz.com/s/7g8 and the table schema is as below:\n> \n> The reported runtime of 0.078 ms should be able to sustain nearly 10\n> times the reported rate of 1.3k/s, so the bottleneck would seem to be\n> elsewhere.\n> \n> Perhaps the bottleneck is formatting the result set in postgres to be\n> sent over the wire, then sending it over the wire, then parsing it in\n> the Go connection library to hand back to the Go user code, and then\n> the Go user code doing something meaningful with it.\n> \n> What happens if you get rid of the offset and the order by, and just\n> use limit? I bet it doesn't change the speed much (because that is\n> not where the bottleneck is).\n> \n> You seem to be selecting an awful lot of wide columns. Do you really\n> need to see all of them?\n\n- Testing SELECT * FROM … with just LIMIT 15 and no offset yields 1299 request/s at the front end of the application.\n- Testing SELECT id, title, company, location, commute, term, expiry_date (a subset of fields) with LIMIT 15 and no OFFSET yields 1800 request/s at the front end.\n\nThere’s definitely an increase to be realised there (I’d say by just tossing the rendered HTML field). 
\n\nBased on your comments about the Go side of things, I ran a quick test by cutting the table down to 6 records from the 39 in the test DB in all previous tests. This skips the pagination logic (below) and yields 3068 req/s on the front-end. \n\n\t// Determine if we have more than one page of results.\n\t// If so, trim the extra result off and set pagination = true\n\tif len(listings) > opts.PerPage {\n\t\tpaginate = true\n\t\tlistings = listings[:opts.PerPage]\n\t}\n\nSo there certainly appears to be a bottleneck on the Go side as well (outside of even the DB driver), probably from the garbage generated from slicing the slice, although I’d be keen to know if there’s a better way to approach returning a paginated list of results.\n\n>>> Well, you can also do client side pagination using the row-wise\n>>> comparison feature, implemented by you :-). Cursors can be the best\n>>> approach, but it's nice to know the client side approach if you're\n>>> really stateless and/or want to be able to pick up external changes\n>>> during the browse.\n\n\nWhat would be a better approach here? The cursor approach isn’t ideal in my case (although I could make it work), but what other options are there that are stateless?\n\n\n>> \n>> Some pgbench results from this machine as well:\n>> \n>> $ pgbench -c 128 -C -j 4 -T 15 -M extended -S\n> \n> This is just benchmarking how fast you can make and break connections\n> to the database.\n> \n> Because your app is using an embedded connection pooler, this\n> benchmark isn't very relevant to your situation.\n\n\nNoted — thanks.\n\n\nOn 13 Jun 2014, at 4:46 AM, Jeff Janes <[email protected]> wrote:\n\n> <snip>",
"msg_date": "Fri, 13 Jun 2014 06:39:43 +0800",
"msg_from": "Matt Silverlock <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OFFSET/LIMIT - Disparate Performance w/ Go application"
}
] |
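For reference, the LIMIT perPage+1 trick Matt describes (fetch one extra row, then trim) isolates cleanly into a small helper. Worth noting: re-slicing in Go only adjusts the slice header and shares the backing array, so the trim itself performs no allocation and is unlikely to be a garbage source. A minimal sketch, with hypothetical names:

```go
package main

import "fmt"

// paginate implements the "fetch perPage+1 rows" trick from the thread:
// if more than perPage rows came back there is a next page, and the
// extra row is trimmed off. The re-slice allocates nothing.
func paginate(listings []string, perPage int) (page []string, hasMore bool) {
	if len(listings) > perPage {
		return listings[:perPage], true
	}
	return listings, false
}

func main() {
	// Pretend a LIMIT 4 query (perPage=3 plus one sentinel row) returned 4 rows.
	rows := []string{"a", "b", "c", "d"}
	page, more := paginate(rows, 3)
	fmt.Println(len(page), more) // prints: 3 true
}
```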
[
{
"msg_contents": "Does anybody have a similar setup:\n\n[a] 1 physical machine with half a terabyte of RAM, Xeon E7- 8837 @\n2.67GHz, huge ZFS pools + ZIL + L2ARC\n[b] master DB pg9.3 postgres_fdw with read/write capabilities, with\ntablespaces and WAL on separate zpools, archiving enabled (for zfs\nsnapshots purposes), +17K tables, multi-TB in size and growing\n[c] multiple DB instances listening on different ports or sockets on the\nsame machine with [b] (looking at 2 DB instances as of now which may\nincrease later on)\n\nOn the master DB there are several schemas with foreign tables located on\nany of the [c] DB instance. postgres_fdw foreign server definitions and all\ntable sequence are on the master DB. Basically, I'm looking at any benefits\nin terms of decreasing the master DB scaling, size, separate shared_buffers\nand separate writer processes per instance (to utilize more CPU?). I'm also\nplanning on relocating seldom accessed tables on [c] DBs. Am I on the right\npath on utilizing foreign data wrappers this way?\n\n\n--\n\nregards\n\ngezeala bacuño II",
"msg_date": "Mon, 16 Jun 2014 17:24:47 -0700",
"msg_from": "=?UTF-8?Q?Gezeala_M=2E_Bacu=C3=B1o_II?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "1 machine + master DB with postgres_fdw + multiple DB instances on\n different ports"
},
{
"msg_contents": "Gezeala M. Bacuño II wrote:\r\n> Does anybody have a similar setup:\r\n> \r\n> [a] 1 physical machine with half a terabyte of RAM, Xeon E7- 8837 @ 2.67GHz, huge ZFS pools + ZIL +\r\n> L2ARC\r\n> [b] master DB pg9.3 postgres_fdw with read/write capabilities, with tablespaces and WAL on separate\r\n> zpools, archiving enabled (for zfs snapshots purposes), +17K tables, multi-TB in size and growing\r\n> [c] multiple DB instances listening on different ports or sockets on the same machine with [b]\r\n> (looking at 2 DB instances as of now which may increase later on)\r\n> \r\n> \r\n> On the master DB there are several schemas with foreign tables located on any of the [c] DB instance.\r\n> postgres_fdw foreign server definitions and all table sequence are on the master DB. Basically, I'm\r\n> looking at any benefits in terms of decreasing the master DB scaling, size, separate shared_buffers\r\n> and separate writer processes per instance (to utilize more CPU?). I'm also planning on relocating\r\n> seldom accessed tables on [c] DBs. 
Am I on the right path on utilizing foreign data wrappers this way?\r\n\r\nYou are very likely not going to gain anything that way.\r\n\r\nAccess to foreign tables is slower than access to local tables, and (particularly when joins are\r\ninvolved) you will end up unnecessarily sending lots of data around between the databases.\r\nSo I'd expect performance to suffer.\r\n\r\nIn addition, all the database clusters will have to share the memory, so I don't see an\r\nimprovement over having everything in one database.\r\nSince the size will stay the same, you are not going to save anything on backups either.\r\n\r\nDepending on the workload and how you distribute the tables, it might be a win to\r\ndistribute a large database across several physical machines.\r\n\r\nI would test any such setup for performance.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Tue, 17 Jun 2014 07:17:42 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 machine + master DB with postgres_fdw + multiple DB\n instances on different ports"
},
{
"msg_contents": "On Tue, Jun 17, 2014 at 12:17 AM, Albe Laurenz <[email protected]>\nwrote:\n\n> Gezeala M. Bacuño II wrote:\n> > Does anybody have a similar setup:\n> >\n> > [a] 1 physical machine with half a terabyte of RAM, Xeon E7- 8837 @\n> 2.67GHz, huge ZFS pools + ZIL +\n> > L2ARC\n> > [b] master DB pg9.3 postgres_fdw with read/write capabilities, with\n> tablespaces and WAL on separate\n> > zpools, archiving enabled (for zfs snapshots purposes), +17K tables,\n> multi-TB in size and growing\n> > [c] multiple DB instances listening on different ports or sockets on the\n> same machine with [b]\n> > (looking at 2 DB instances as of now which may increase later on)\n> >\n> >\n> > On the master DB there are several schemas with foreign tables located\n> on any of the [c] DB instance.\n> > postgres_fdw foreign server definitions and all table sequence are on\n> the master DB. Basically, I'm\n> > looking at any benefits in terms of decreasing the master DB scaling,\n> size, separate shared_buffers\n> > and separate writer processes per instance (to utilize more CPU?). I'm\n> also planning on relocating\n> > seldom accessed tables on [c] DBs. Am I on the right path on utilizing\n> foreign data wrappers this way?\n>\n>\ncorrection: benefits in terms of *decreasing the master DB size*, scaling,\nseparate..\n\n\n> You are very likely not going to gain anything that way.\n>\n> Access to foreign tables is slower than access to local tables, and\n> (particularly when joins are\n> involved) you will end up unnecessarily sending lots of data around\n> between the databases.\n> So I'd expect performance to suffer.\n>\n\nfactoring in the fdw load during joins, I'm thinking there's probably not\ngonna be that much performance hit since all data are in 1 machine (we have\ntablespace set-up in place too)\n\n\n>\n> In addition, all the database clusters will have to share the memory, so I\n> don't see an\n> improvement over having everything in one database.\n>\n\nthis machine does have half a terabyte of RAM, shared_buffers at 8GB per\ncluster, work_mem at 512MB and ZFS arc, we will still have lots of RAM to\nspare.\n\n\n> Since the size will stay the same, you are not going to save anything on\n> backups either.\n>\n\nnot looking into decreasing the overall size of all db clusters but rather\ndecreasing the size and relation counts per cluster making each db cluster\nmanageable.\n\n\n> Depending on the workload and how you distribute the tables, it might be a\n> win to\n> distribute a large database across several physical machines.\n>\n\navoiding additional network load, only 2 machines available in the same\nlocation and the other one is a failover server.\n\n\n>\n> I would test any such setup for performance.\n>\n> Yours,\n> Laurenz Albe\n>",
"msg_date": "Tue, 17 Jun 2014 09:41:37 -0700",
"msg_from": "=?UTF-8?Q?Gezeala_M=2E_Bacu=C3=B1o_II?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 1 machine + master DB with postgres_fdw + multiple DB\n instances on different ports"
}
] |
[
{
"msg_contents": "Here are my kernel settings\n\nkernel.shmmax = 10737418240\n\n# Maximum total size of all shared memory segments in pages (normally 4096\nbytes)\nkernel.shmall = 2621440\nkernel.sem = 250 32000 32 1200\n\nThey are actually set...\n\nsysctl -a | grep shm\nkernel.shmmax = 10737418240\nkernel.shmall = 2621440\nkernel.shmmni = 4096\n\nTo reduce the request size [FAILently 2232950784 bytes), reduce\nPostgreSQL's shared memory usage,\n\n\nDave Cramer",
"msg_date": "Wed, 18 Jun 2014 15:00:45 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to allocate 2G of shared memory on wheezy"
},
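As a quick sanity check on the settings quoted above: kernel.shmall is counted in 4096-byte pages, so the two limits are mutually consistent, and the ~2.2 GB request in the (garbled) error line is well under both. A small arithmetic check, using the exact values from the post:

```go
package main

import "fmt"

func main() {
	const pageSize = 4096                 // kernel.shmall is measured in pages of this size
	const shmmax = int64(10737418240)     // bytes, from the sysctl output above
	const shmall = int64(2621440)         // pages, from the sysctl output above
	const request = int64(2232950784)     // bytes, from the truncated error message

	fmt.Println(shmall*pageSize == shmmax) // shmall covers exactly shmmax bytes
	fmt.Println(request < shmmax)          // the failing request is well under the limit
}
```

Both checks print true, which points away from the SysV limits themselves as the cause, consistent with how the thread resolves.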
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> To reduce the request size [FAILently 2232950784 bytes), reduce\n> PostgreSQL's shared memory usage,\n\nThis error message is a bit garbled :-(. It would've been useful\nto know the specific errno, but you've trimmed out that info.\n\nPerhaps it's failing because you already have ~10G in shared memory\nsegments? \"sudo ipcs -m\" might be illuminating.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jun 2014 15:15:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to allocate 2G of shared memory on wheezy"
},
{
"msg_contents": "2014-06-18 13:37:15 EDT FATAL: could not map anonymous shared memory:\nCannot allocate memory\n2014-06-18 13:37:15 EDT HINT: This error usually means that PostgreSQL's\nrequest for a shared memory segment exceeded available memory or swap\nspace. To reduce the request size (currently 8826445824 bytes), reduce\nPostgreSQL's shared memory usage, perhaps by reducing shared_buffers or\nmax_connections.\n\nipcs -m\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n\n\n\nDave Cramer\n\n\nOn 18 June 2014 15:15, Tom Lane <[email protected]> wrote:\n\n> Dave Cramer <[email protected]> writes:\n> > To reduce the request size [FAILently 2232950784 bytes), reduce\n> > PostgreSQL's shared memory usage,\n>\n> This error message is a bit garbled :-(. It would've been useful\n> to know the specific errno, but you've trimmed out that info.\n>\n> Perhaps it's failing because you already have ~10G in shared memory\n> segments? \"sudo ipcs -m\" might be illuminating.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 18 Jun 2014 15:24:45 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unable to allocate 2G of shared memory on wheezy"
},
{
"msg_contents": "Problem solved... a runaway process (puppet) had consumed all available\nreal memory\n\nDave Cramer\n\n\nOn 18 June 2014 15:24, Dave Cramer <[email protected]> wrote:\n\n> 2014-06-18 13:37:15 EDT FATAL: could not map anonymous shared memory:\n> Cannot allocate memory\n> 2014-06-18 13:37:15 EDT HINT: This error usually means that PostgreSQL's\n> request for a shared memory segment exceeded available memory or swap\n> space. To reduce the request size (currently 8826445824 bytes), reduce\n> PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or\n> max_connections.\n>\n> ipcs -m\n>\n> ------ Shared Memory Segments --------\n> key shmid owner perms bytes nattch status\n>\n>\n>\n> Dave Cramer\n>\n>\n> On 18 June 2014 15:15, Tom Lane <[email protected]> wrote:\n>\n>> Dave Cramer <[email protected]> writes:\n>> > To reduce the request size [FAILently 2232950784 bytes), reduce\n>> > PostgreSQL's shared memory usage,\n>>\n>> This error message is a bit garbled :-(. It would've been useful\n>> to know the specific errno, but you've trimmed out that info.\n>>\n>> Perhaps it's failing because you already have ~10G in shared memory\n>> segments? \"sudo ipcs -m\" might be illuminating.\n>>\n>> regards, tom lane\n>>\n>\n>",
"msg_date": "Wed, 18 Jun 2014 15:33:58 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unable to allocate 2G of shared memory on wheezy"
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> 2014-06-18 13:37:15 EDT FATAL: could not map anonymous shared memory:\n> Cannot allocate memory\n> 2014-06-18 13:37:15 EDT HINT: This error usually means that PostgreSQL's\n> request for a shared memory segment exceeded available memory or swap\n> space. To reduce the request size (currently 8826445824 bytes), reduce\n> PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or\n> max_connections.\n\nOh, interesting. That was the mmap that failed, so this has nothing to do\nwith SysV shm limits.\n\nNote that your request seems to be pushing 9G, not 2G as you thought.\nMaybe it's just more memory than you have?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jun 2014 15:37:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to allocate 2G of shared memory on wheezy"
}
] |
[
{
"msg_contents": "Hi group,\n\nWe've found huge pgstat.stat file on our production DB boxes, the size is over 100MB. autovacuum is enabled. So my question would be:\n\n1. What's a reasonable size of pgstat.stat file, can it be estimated?\n\n2. What's the safest way to reduce the file size to alleviate the IO impact on disk?\n\n3. If need to drop all statistics, would a \"analyze DB\" command enough to eliminate the performance impact on queries?\n\nThanks,\nSuya",
"msg_date": "Thu, 19 Jun 2014 04:38:54 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "Hello\n\n\nThe size of statfile is related to size of database objects in database.\nDepends on PostgreSQL version this file can be one per database cluster or\none per database (from 9.3),\n\nThese statistics should by reset by call pg_stat_reset()\nhttp://www.postgresql.org/docs/9.2/static/monitoring-stats.html\n\nAutovacuum on large stat files has significant overhead - it can be\nincreasing by using new PostgreSQL (9.3) and by migration stat directory to\nramdisk - by setting stats_temp_directory to some dir on ramdisk (tmpfs on\nLinux)\n\nRegards\n\nPavel\n\n\n\n2014-06-19 6:38 GMT+02:00 Huang, Suya <[email protected]>:\n\n> Hi group,\n>\n>\n>\n> We’ve found huge pgstat.stat file on our production DB boxes, the size is\n> over 100MB. autovacuum is enabled. So my question would be:\n>\n> 1. What’s a reasonable size of pgstat.stat file, can it be\n> estimated?\n>\n> 2. What’s the safest way to reduce the file size to alleviate the\n> IO impact on disk?\n>\n> 3. If need to drop all statistics, would a “analyze DB” command\n> enough to eliminate the performance impact on queries?\n>\n>\n>\n> Thanks,\n>\n> Suya\n>",
"msg_date": "Thu, 19 Jun 2014 07:28:16 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "From: Pavel Stehule [mailto:[email protected]] \r\nSent: Thursday, June 19, 2014 3:28 PM\r\nTo: Huang, Suya\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\r\n\r\nHello\r\n\r\nThe size of statfile is related to size of database objects in database. Depends on PostgreSQL version this file can be one per database cluster or one per database (from 9.3),\r\nThese statistics should by reset by call pg_stat_reset() http://www.postgresql.org/docs/9.2/static/monitoring-stats.html\r\nAutovacuum on large stat files has significant overhead - it can be increasing by using new PostgreSQL (9.3) and by migration stat directory to ramdisk - by setting stats_temp_directory to some dir on ramdisk (tmpfs on Linux)\r\nRegards\r\n\r\nPavel\r\n\r\n2014-06-19 6:38 GMT+02:00 Huang, Suya <[email protected]>:\r\nHi group,\r\n \r\nWe’ve found huge pgstat.stat file on our production DB boxes, the size is over 100MB. autovacuum is enabled. So my question would be:\r\n1. What’s a reasonable size of pgstat.stat file, can it be estimated?\r\n2. What’s the safest way to reduce the file size to alleviate the IO impact on disk?\r\n3. If need to drop all statistics, would a “analyze DB” command enough to eliminate the performance impact on queries?\r\n \r\nThanks,\r\nSuya\r\n\r\n\r\n\r\n\r\nHi Pavel, \r\n\r\nour version is 8.3.24, not 9.3. I also want to know the impact caused by run pg_stat_reset to application, is that able to be mitigated by doing an analyze database command? \r\n\r\nThanks,\r\nSuya\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jun 2014 05:35:14 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "2014-06-19 7:35 GMT+02:00 Huang, Suya <[email protected]>:\n\n> From: Pavel Stehule [mailto:[email protected]]\n> Sent: Thursday, June 19, 2014 3:28 PM\n> To: Huang, Suya\n> Cc: [email protected]\n> Subject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\n>\n> Hello\n>\n> The size of statfile is related to size of database objects in database.\n> Depends on PostgreSQL version this file can be one per database cluster or\n> one per database (from 9.3),\n> These statistics should by reset by call pg_stat_reset()\n> http://www.postgresql.org/docs/9.2/static/monitoring-stats.html\n> Autovacuum on large stat files has significant overhead - it can be\n> increasing by using new PostgreSQL (9.3) and by migration stat directory to\n> ramdisk - by setting stats_temp_directory to some dir on ramdisk (tmpfs on\n> Linux)\n> Regards\n>\n> Pavel\n>\n> 2014-06-19 6:38 GMT+02:00 Huang, Suya <[email protected]>:\n> Hi group,\n>\n> We’ve found huge pgstat.stat file on our production DB boxes, the size is\n> over 100MB. autovacuum is enabled. So my question would be:\n> 1. What’s a reasonable size of pgstat.stat file, can it be estimated?\n> 2. What’s the safest way to reduce the file size to alleviate the IO\n> impact on disk?\n> 3. If need to drop all statistics, would a “analyze DB” command\n> enough to eliminate the performance impact on queries?\n>\n> Thanks,\n> Suya\n>\n>\n>\n>\n> Hi Pavel,\n>\n> our version is 8.3.24, not 9.3. I also want to know the impact caused by\n> run pg_stat_reset to application, is that able to be mitigated by doing an\n> analyze database command?\n>\n\nyour version is too old - you can try reset statistics. ANALYZE statement\nshould not have a significant impact on these runtime statistics.\n\nPavel\n\nAttention: PostgreSQL 8.3 is unsupported now\n\n\n\n\n> Thanks,\n> Suya\n>\n>",
"msg_date": "Thu, 19 Jun 2014 07:41:24 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "On 19 Červen 2014, 7:35, Huang, Suya wrote:\n> From: Pavel Stehule [mailto:[email protected]]\n> Sent: Thursday, June 19, 2014 3:28 PM\n> To: Huang, Suya\n> Cc: [email protected]\n> Subject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\n>\n> Hello\n>\n> The size of statfile is related to size of database objects in database.\n> Depends on PostgreSQL version this file can be one per database cluster or\n> one per database (from 9.3),\n> These statistics should by reset by call pg_stat_reset()\n> http://www.postgresql.org/docs/9.2/static/monitoring-stats.html\n> Autovacuum on large stat files has significant overhead - it can be\n> increasing by using new PostgreSQL (9.3) and by migration stat directory\n> to ramdisk - by setting stats_temp_directory to some dir on ramdisk (tmpfs\n> on Linux)\n> Regards\n>\n> Pavel\n>\n> 2014-06-19 6:38 GMT+02:00 Huang, Suya <[email protected]>:\n> Hi group,\n> \n> We’ve found huge pgstat.stat file on our production DB boxes, the size is\n> over 100MB. autovacuum is enabled. So my question would be:\n> 1. What’s a reasonable size of pgstat.stat file, can it be\n> estimated?\n> 2. What’s the safest way to reduce the file size to alleviate the IO\n> impact on disk?\n> 3. If need to drop all statistics, would a “analyze DB” command\n> enough to eliminate the performance impact on queries?\n> \n> Thanks,\n> Suya\n>\n>\n>\n>\n> Hi Pavel,\n>\n> our version is 8.3.24, not 9.3. I also want to know the impact caused by\n> run pg_stat_reset to application, is that able to be mitigated by doing an\n> analyze database command?\n\nHi,\n\nI really doubt you're on 8.3.24. The last version in 8.3 branch is 8.3.23.\n\nRunning pg_stat_reset has no impact on planning queries. There are two\nkinds of statistics - those used for planning are stored withing the\ndatabase, not in pgstat.stat file and are not influenced by pg_stat_reset.\n\nThe stats in pgstat.stat are 'runtime stats' used for monitoring etc. 
so\nyou may see some distuption in your monitoring system. ANALYZE command has\nnothing to do with the stats in pgstat.stat.\n\nHowever, if you really have a pgstat.stat this large, this is only a\ntemporary solution - it will grow back, possibly pretty quickly, depending\non how often you access the objects.\n\nAnother option is to move the file to a tmpfs (ramdisk) partition. It will\neliminate the IO overhead, but it will consume more CPU (because it still\nneeds to be processed, and IO is not the bottleneck anymore).\n\nThe other thing is that you should really start thinking about upgrading\nto a supported version. 8.3 did not get updates for > 1 year (and won't).\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jun 2014 12:01:44 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]] \r\nSent: Thursday, June 19, 2014 3:41 PM\r\nTo: Huang, Suya\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\r\n\r\n\r\n\r\n2014-06-19 7:35 GMT+02:00 Huang, Suya <[email protected]>:\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Thursday, June 19, 2014 3:28 PM\r\nTo: Huang, Suya\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\r\n\r\nHello\r\n\r\nThe size of statfile is related to size of database objects in database. Depends on PostgreSQL version this file can be one per database cluster or one per database (from 9.3),\r\nThese statistics should by reset by call pg_stat_reset() http://www.postgresql.org/docs/9.2/static/monitoring-stats.html\r\nAutovacuum on large stat files has significant overhead - it can be increasing by using new PostgreSQL (9.3) and by migration stat directory to ramdisk - by setting stats_temp_directory to some dir on ramdisk (tmpfs on Linux)\r\nRegards\r\n\r\nPavel\r\n\r\n2014-06-19 6:38 GMT+02:00 Huang, Suya <[email protected]>:\r\nHi group,\r\n \r\nWe’ve found huge pgstat.stat file on our production DB boxes, the size is over 100MB. autovacuum is enabled. So my question would be:\r\n1. What’s a reasonable size of pgstat.stat file, can it be estimated?\r\n2. What’s the safest way to reduce the file size to alleviate the IO impact on disk?\r\n3. If need to drop all statistics, would a “analyze DB” command enough to eliminate the performance impact on queries?\r\n \r\nThanks,\r\nSuya\r\n\r\n\r\n\r\nHi Pavel,\r\n\r\nour version is 8.3.24, not 9.3. I also want to know the impact caused by run pg_stat_reset to application, is that able to be mitigated by doing an analyze database command?\r\n\r\nyour version is too old - you can try reset statistics. ANALYZE statement should not have a significant impact on these runtime statistics. 
\r\nPavel\r\n \r\nAttention: PostgreSQL 8.3 is unsupported now\r\n\r\n\r\n\r\nThanks,\r\nSuya\r\n\r\n\r\nThanks Pavel, to be more clear, what does \" pg_stat_reset \"really reset? In the document it says \" Reset all statistics counters for the current database to zero(requires superuser privileges) \". I thought it would reset all statistics of all tables/indexes, thus why I am thinking of re-run analyze database to gather statistics. Because if table/indexes don't have statistics, the query plan would be affected which is not a good thing to a production box... I'm not so sure if I understand \"run statistics\" you mentioned here.\r\n\r\nThanks,\r\nSuya\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jun 2014 23:44:31 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "2014-06-20 1:44 GMT+02:00 Huang, Suya <[email protected]>:\n\n>\n>\n> From: Pavel Stehule [mailto:[email protected]]\n> Sent: Thursday, June 19, 2014 3:41 PM\n> To: Huang, Suya\n> Cc: [email protected]\n> Subject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\n>\n>\n>\n> 2014-06-19 7:35 GMT+02:00 Huang, Suya <[email protected]>:\n> From: Pavel Stehule [mailto:[email protected]]\n> Sent: Thursday, June 19, 2014 3:28 PM\n> To: Huang, Suya\n> Cc: [email protected]\n> Subject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\n>\n> Hello\n>\n> The size of statfile is related to size of database objects in database.\n> Depends on PostgreSQL version this file can be one per database cluster or\n> one per database (from 9.3),\n> These statistics should by reset by call pg_stat_reset()\n> http://www.postgresql.org/docs/9.2/static/monitoring-stats.html\n> Autovacuum on large stat files has significant overhead - it can be\n> increasing by using new PostgreSQL (9.3) and by migration stat directory to\n> ramdisk - by setting stats_temp_directory to some dir on ramdisk (tmpfs on\n> Linux)\n> Regards\n>\n> Pavel\n>\n> 2014-06-19 6:38 GMT+02:00 Huang, Suya <[email protected]>:\n> Hi group,\n>\n> We’ve found huge pgstat.stat file on our production DB boxes, the size is\n> over 100MB. autovacuum is enabled. So my question would be:\n> 1. What’s a reasonable size of pgstat.stat file, can it be estimated?\n> 2. What’s the safest way to reduce the file size to alleviate the IO\n> impact on disk?\n> 3. If need to drop all statistics, would a “analyze DB” command\n> enough to eliminate the performance impact on queries?\n>\n> Thanks,\n> Suya\n>\n>\n>\n> Hi Pavel,\n>\n> our version is 8.3.24, not 9.3. I also want to know the impact caused by\n> run pg_stat_reset to application, is that able to be mitigated by doing an\n> analyze database command?\n>\n> your version is too old - you can try reset statistics. ANALYZE statement\n> should not have a significant impact on these runtime statistics.\n> Pavel\n>\n> Attention: PostgreSQL 8.3 is unsupported now\n>\n>\n>\n> Thanks,\n> Suya\n>\n>\n> Thanks Pavel, to be more clear, what does \" pg_stat_reset \"really reset?\n> In the document it says \" Reset all statistics counters for the current\n> database to zero(requires superuser privileges) \". I thought it would\n> reset all statistics of all tables/indexes, thus why I am thinking of\n> re-run analyze database to gather statistics. Because if table/indexes\n> don't have statistics, the query plan would be affected which is not a good\n> thing to a production box... I'm not so sure if I understand \"run\n> statistics\" you mentioned here.\n>\n\nyou have true - anyway you can clean a content of this directory - but if\nyour database has lot of database objects, your stat file will have a\noriginal size very early\n\nPavel\n\n\n\n\n>\n> Thanks,\n> Suya\n>\n>\n>\n>",
"msg_date": "Fri, 20 Jun 2014 05:33:37 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "On 20 Červen 2014, 5:33, Pavel Stehule wrote:\n> 2014-06-20 1:44 GMT+02:00 Huang, Suya <[email protected]>:\n>>\n>> Thanks Pavel, to be more clear, what does \" pg_stat_reset \"really reset?\n>> In the document it says \" Reset all statistics counters for the current\n>> database to zero(requires superuser privileges) \". I thought it would\n>> reset all statistics of all tables/indexes, thus why I am thinking of\n>> re-run analyze database to gather statistics. Because if table/indexes\n>> don't have statistics, the query plan would be affected which is not a\n>> good\n>> thing to a production box... I'm not so sure if I understand \"run\n>> statistics\" you mentioned here.\n>>\n>\n> you have true - anyway you can clean a content of this directory - but if\n> your database has lot of database objects, your stat file will have a\n> original size very early\n>\n> Pavel\n>\n\nNo, he's not right.\n\nSuya, as I wrote in my previous message, there are two kinds of statistics\nin PostgreSQL\n\na) data distribution statistics\n - histograms, MCV lists, number of distinct values, ...\n - stored in regular tables\n - used for planning\n - collected by ANALYZE\n - not influenced by pg_stat_reset() at all\n\nb) runtime statistics\n - number of scans for table/index, rows fetched from table/index, ...\n - tracks activity within the database\n - stored in pgstat.stat file (or per-db files in the recent releases)\n - used for monitoring, not for planning\n - removed by pg_stat_reset()\n\nSo running pg_stat_reset will not hurt planning at all.\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 20 Jun 2014 12:14:16 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: Tomas Vondra [mailto:[email protected]] \r\nSent: Friday, June 20, 2014 8:14 PM\r\nTo: Pavel Stehule\r\nCc: Huang, Suya; [email protected]\r\nSubject: Re: [PERFORM] huge pgstat.stat file on PostgreSQL 8.3.24\r\n\r\nOn 20 Červen 2014, 5:33, Pavel Stehule wrote:\r\n> 2014-06-20 1:44 GMT+02:00 Huang, Suya <[email protected]>:\r\n>>\r\n>> Thanks Pavel, to be more clear, what does \" pg_stat_reset \"really reset?\r\n>> In the document it says \" Reset all statistics counters for the \r\n>> current database to zero(requires superuser privileges) \". I thought \r\n>> it would reset all statistics of all tables/indexes, thus why I am \r\n>> thinking of re-run analyze database to gather statistics. Because if \r\n>> table/indexes don't have statistics, the query plan would be affected \r\n>> which is not a good thing to a production box... I'm not so sure if I \r\n>> understand \"run statistics\" you mentioned here.\r\n>>\r\n>\r\n> you have true - anyway you can clean a content of this directory - but \r\n> if your database has lot of database objects, your stat file will have \r\n> a original size very early\r\n>\r\n> Pavel\r\n>\r\n\r\nNo, he's not right.\r\n\r\nSuya, as I wrote in my previous message, there are two kinds of statistics in PostgreSQL\r\n\r\na) data distribution statistics\r\n - histograms, MCV lists, number of distinct values, ...\r\n - stored in regular tables\r\n - used for planning\r\n - collected by ANALYZE\r\n - not influenced by pg_stat_reset() at all\r\n\r\nb) runtime statistics\r\n - number of scans for table/index, rows fetched from table/index, ...\r\n - tracks activity within the database\r\n - stored in pgstat.stat file (or per-db files in the recent releases)\r\n - used for monitoring, not for planning\r\n - removed by pg_stat_reset()\r\n\r\nSo running pg_stat_reset will not hurt planning at all.\r\n\r\nregards\r\nTomas\r\n\r\n\r\nHi Tomas,\r\n\r\nYou're right, my DB version is 
8.3.11, I remembered the wrong version... we've got a new project using the latest version 9.3.4, and the old DB will be decommissioned in the future, so that's why the management people don't want to spend resources on upgrading and QA, etc.\r\n\r\nStill have a question of why the file would become so big, is that related to the number of objects I have in database?\r\n\r\nThanks again for your clear explanation on the two different statistics in PostgreSQL DB, really helped a lot! I'm wondering if they should also exist in the documentation, as it really confuses people... :)\r\n\r\nThanks,\r\nSuya\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Jun 2014 07:07:37 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: huge pgstat.stat file on PostgreSQL 8.3.24"
}
] |
[
{
"msg_contents": "Hello,\nI already post my question in the General Mailing list, but without \nsucceed so I try this one that seems to me more specialized.\nMy question is about GIST index.\nI made my own index to handle specific data and operators. It works \npretty fine but I wonder if it was possible to optimize it.\nWhen I run my operator on a GIST node (in the method \ngist_range_consistent) it returns \"NotConsistent\" / \"MaybeConsistent\" / \n\"FullyConsistent\".\nNotConsistent -> means that all subnodes could be ignored, \ngist_range_consistent return false\nMaybeConsistent -> means that at least one subnode/leaf will be \nconsistent, gist_range_consistent return true\nFullyConsistent -> means that all subnodes/leaves will be consistent, \ngist_range_consistent return true\n\nSo like with the \"recheck flag\" I would like to know if there is a way \nto notify postgres that it is not necessary to rerun my operator on \nsubnodes, to speedup the search.\n\nFor example, consider the following gist tree\n R\n / \\\n Na Nb\n / \\ / \\\nLa1 La2 Lb1 Lb2\n\nIf all nodes return FullyConsistent, postgres will run tests in that \nOrder : R, Na, Nb, La1, La2, Lb1, Lb2, thanks to recheck flag it will \nnot test rows associated to leaves Lxx.\nMy goal is that postgres run test on R and then skip tests on other \nnodes. So is there a way to do that in the GIST API ? Or could I share \ndata from R to Nx and then From Na to Lax and Nb to Lbx ?\nThanks,\nMathieu\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jun 2014 09:20:39 +0200",
"msg_from": "Pujol Mathieu <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIST optimization to limit calls to operator on sub nodes"
},
{
"msg_contents": "Pujol Mathieu <[email protected]>:\n> Hello,\n> I already post my question in the General Mailing list, but without\n> succeed so I try this one that seems to me more specialized.\n> My question is about GIST index.\n> I made my own index to handle specific data and operators. It works\n> pretty fine but I wonder if it was possible to optimize it.\n> When I run my operator on a GIST node (in the method\n> gist_range_consistent) it returns \"NotConsistent\" /\n> \"MaybeConsistent\" / \"FullyConsistent\".\n> NotConsistent -> means that all subnodes could be ignored,\n> gist_range_consistent return false\n> MaybeConsistent -> means that at least one subnode/leaf will be\n> consistent, gist_range_consistent return true\n> FullyConsistent -> means that all subnodes/leaves will be\n> consistent, gist_range_consistent return true\n> \n> So like with the \"recheck flag\" I would like to know if there is a\n> way to notify postgres that it is not necessary to rerun my operator\n> on subnodes, to speedup the search.\n\nI do not think it is possible at the moment. The GiST framework can\nbe extended to support this use case. I am not sure about the\nspeedup. Most of the consistent functions do not seem very expensive\ncompared to other operations of the GiST framework. I would be\nhappy to test it, if you would implement.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 29 Jun 2014 23:14:19 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIST optimization to limit calls to operator on sub nodes"
},
{
"msg_contents": "Emre Hasegeli <[email protected]> writes:\n> Pujol Mathieu <[email protected]>:\n>> I made my own index to handle specific data and operators. It works\n>> pretty fine but I wonder if it was possible to optimize it.\n>> When I run my operator on a GIST node (in the method\n>> gist_range_consistent) it returns \"NotConsistent\" /\n>> \"MaybeConsistent\" / \"FullyConsistent\".\n>> NotConsistent -> means that all subnodes could be ignored,\n>> gist_range_consistent return false\n>> MaybeConsistent -> means that at least one subnode/leaf will be\n>> consistent, gist_range_consistent return true\n>> FullyConsistent -> means that all subnodes/leaves will be\n>> consistent, gist_range_consistent return true\n>> \n>> So like with the \"recheck flag\" I would like to know if there is a\n>> way to notify postgres that it is not necessary to rerun my operator\n>> on subnodes, to speedup the search.\n\n> I do not think it is possible at the moment. The GiST framework can\n> be extended to support this use case. I am not sure about the\n> speedup. Most of the consistent functions do not seem very expensive\n> compared to other operations of the GiST framework. I would be\n> happy to test it, if you would implement.\n\nI don't actually understand what's being requested here that the\nNotConsistent case doesn't already cover.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 29 Jun 2014 16:30:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIST optimization to limit calls to operator on sub nodes"
},
{
"msg_contents": "\nLe 29/06/2014 22:14, Emre Hasegeli a écrit :\n> Pujol Mathieu <[email protected]>:\n>> Hello,\n>> I already post my question in the General Mailing list, but without\n>> succeed so I try this one that seems to me more specialized.\n>> My question is about GIST index.\n>> I made my own index to handle specific data and operators. It works\n>> pretty fine but I wonder if it was possible to optimize it.\n>> When I run my operator on a GIST node (in the method\n>> gist_range_consistent) it returns \"NotConsistent\" /\n>> \"MaybeConsistent\" / \"FullyConsistent\".\n>> NotConsistent -> means that all subnodes could be ignored,\n>> gist_range_consistent return false\n>> MaybeConsistent -> means that at least one subnode/leaf will be\n>> consistent, gist_range_consistent return true\n>> FullyConsistent -> means that all subnodes/leaves will be\n>> consistent, gist_range_consistent return true\n>>\n>> So like with the \"recheck flag\" I would like to know if there is a\n>> way to notify postgres that it is not necessary to rerun my operator\n>> on subnodes, to speedup the search.\n> I do not think it is possible at the moment. The GiST framework can\n> be extended to support this use case. I am not sure about the\n> speedup. Most of the consistent functions do not seem very expensive\n> compared to other operations of the GiST framework. I would be\n> happy to test it, if you would implement.\n>\n>\nThanks for your reply.\nI am not sure to have time to develop inside the framework, but if I try \nI'll let you know my results. 
In my case the consistent function is \ncostly and the number of rows large, so this optimization could save \nseveral hundred tests on a single request.\n\n-- \nMathieu PUJOL\nIngénieur Réalité Virtuelle\nREAL FUSIO - 3D Computer Graphics\n10, rue des arts - 31000 TOULOUSE - FRANCE\[email protected] - http://www.realfusio.com\n",
"msg_date": "Mon, 30 Jun 2014 09:26:08 +0200",
"msg_from": "Pujol Mathieu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIST optimization to limit calls to operator on sub nodes"
},
{
"msg_contents": "\nLe 29/06/2014 22:30, Tom Lane a écrit :\n> Emre Hasegeli <[email protected]> writes:\n>> Pujol Mathieu <[email protected]>:\n>>> I made my own index to handle specific data and operators. It works\n>>> pretty fine but I wonder if it was possible to optimize it.\n>>> When I run my operator on a GIST node (in the method\n>>> gist_range_consistent) it returns \"NotConsistent\" /\n>>> \"MaybeConsistent\" / \"FullyConsistent\".\n>>> NotConsistent -> means that all subnodes could be ignored,\n>>> gist_range_consistent return false\n>>> MaybeConsistent -> means that at least one subnode/leaf will be\n>>> consistent, gist_range_consistent return true\n>>> FullyConsistent -> means that all subnodes/leaves will be\n>>> consistent, gist_range_consistent return true\n>>>\n>>> So like with the \"recheck flag\" I would like to know if there is a\n>>> way to notify postgres that it is not necessary to rerun my operator\n>>> on subnodes, to speedup the search.\n>> I do not think it is possible at the moment. The GiST framework can\n>> be extended to support this use case. I am not sure about the\n>> speedup. Most of the consistent functions do not seem very expensive\n>> compared to other operations of the GiST framework. I would be\n>> happy to test it, if you would implement.\n> I don't actually understand what's being requested here that the\n> NotConsistent case doesn't already cover.\n>\n> \t\t\tregards, tom lane\n>\n>\nHi,\nThe NotConsistent case is correctly covered, the sub nodes are not \ntested because I know that no child could pass the consistent_test.\nThe MaybeConsistent case is also correctly covered, all sub nodes are \ntested because I don't know which sub nodes will pass the consistent_test.\nMy problem is with the FullyConsistent, because when I test a node I can \nknow that all its child nodes and leaves will pass the test, so I want \nto notify the GIST framework that it can skip the consistent test on those \nnodes. 
Like we can notify it when testing a leaf that it can skip the \nconsistent test on the row. Maybe I am missing something in the API to do \nthat. In my tests, the \"recheck_flag\" works only for leaves.\nThanks\nMathieu\n",
"msg_date": "Mon, 30 Jun 2014 09:40:00 +0200",
"msg_from": "Pujol Mathieu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIST optimization to limit calls to operator on sub nodes"
},
{
"msg_contents": "Pujol Mathieu <[email protected]> writes:\n> Le 29/06/2014 22:30, Tom Lane a écrit :\n>> I don't actually understand what's being requested here that the\n>> NotConsistent case doesn't already cover.\n\n> The NotConsistent case is correctly covered, the sub nodes are not \n> tested because I know that no child could pass the consistent_test.\n> The MaybeConsistent case is also correctly covered, all sub nodes are \n> tested because I don't know which sub nodes will pass the consistent_test.\n> My problem is with the FullyConsistent, because when I test a node I can \n> know that all it's childs nodes and leaves will pass the test, so I want \n> to notify GIST framework that it can't skip consistent test on those \n> nodes. Like we can notify it when testing a leaf that it could skip \n> consistent test on the row. Maybe I miss something on the API to do \n> that. On my tests, the \"recheck_flag\" works only for leaves.\n\nHm ... that doesn't seem like a case that'd come up often enough to be\nworth complicating the APIs for, unless maybe you are expecting a lot\nof exact-duplicate index entries. If you are, you might find that GIN\nis a better fit for your problem than GIST --- it's designed to be\nefficient for lots-of-duplicates.\n\nAnother view of this is that if you can make exact satisfaction checks\nat upper-page entries, you're probably storing too much information in\nthe index entries (and thereby bloating the index). The typical tradeoff\nin GIST indexes is something like storing bounding boxes for geometric\nobjects --- which is necessarily lossy, but it results in small indexes\nthat are fast to search. 
It's particularly important for upper-page\nentries to be small, so that fanout is high and you have a better chance\nof keeping all the upper pages in cache.\n\nIf you've got a compelling example where this actually makes sense,\nI'd be curious to hear the details.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jun 2014 10:04:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIST optimization to limit calls to operator on sub nodes"
},
{
"msg_contents": "\nLe 30/06/2014 16:04, Tom Lane a écrit :\n> Pujol Mathieu <[email protected]> writes:\n>> Le 29/06/2014 22:30, Tom Lane a écrit :\n>>> I don't actually understand what's being requested here that the\n>>> NotConsistent case doesn't already cover.\n>> The NotConsistent case is correctly covered, the sub nodes are not\n>> tested because I know that no child could pass the consistent_test.\n>> The MaybeConsistent case is also correctly covered, all sub nodes are\n>> tested because I don't know which sub nodes will pass the consistent_test.\n>> My problem is with the FullyConsistent, because when I test a node I can\n>> know that all it's childs nodes and leaves will pass the test, so I want\n>> to notify GIST framework that it can't skip consistent test on those\n>> nodes. Like we can notify it when testing a leaf that it could skip\n>> consistent test on the row. Maybe I miss something on the API to do\n>> that. On my tests, the \"recheck_flag\" works only for leaves.\n> Hm ... that doesn't seem like a case that'd come up often enough to be\n> worth complicating the APIs for, unless maybe you are expecting a lot\n> of exact-duplicate index entries. If you are, you might find that GIN\n> is a better fit for your problem than GIST --- it's designed to be\n> efficient for lots-of-duplicates.\n>\n> Another view of this is that if you can make exact satisfaction checks\n> at upper-page entries, you're probably storing too much information in\n> the index entries (and thereby bloating the index). The typical tradeoff\n> in GIST indexes is something like storing bounding boxes for geometric\n> objects --- which is necessarily lossy, but it results in small indexes\n> that are fast to search. 
It's particularly important for upper-page\n> entries to be small, so that fanout is high and you have a better chance\n> of keeping all the upper pages in cache.\n>\n> If you've got a compelling example where this actually makes sense,\n> I'd be curious to hear the details.\n>\n> \t\t\tregards, tom lane\n>\n>\nHi,\nI have a table containing several million rows, and each row \ncontains a unique integer as an identifier. My goal is to select all rows \nwhich have an identifier that is contained in at least one range of a \nlist.\nCREATE TABLE t (id int4 UNIQUE, ...)\nSELECT * FROM t WHERE id @@> ARRAY[...]::range[]\nI use a custom operator to check if an integer is contained in an array \nof ranges (a custom structure defined by my plugin).\nI built my own GIST to speed up those requests. So each node of my GIST \nis represented by a range (like a BVH or octree of 3D boxes). I have no \nduplicated keys in my index.\nWhen I run my tests I am able to quickly discard entire portions of the \nindex, which leads to great performance improvements.\nDuring GIST traversal, when I test the consistency of a node \n(isRangeOverlap(range,range[])) the test returns Fully, Partially, or No. So \nwhen the test returns Fully I know for sure that all subnodes will also \nreturn Fully.\nIn practice my operator is not free in execution time, so if I could \npropagate the information to subnodes it would save a lot of \ncomputation time.\nThis optimization could also be beneficial for the cube extension.\nI don't think that it would complicate the API; we could use the existing \nrecheck_flag. Today this value is used only for leaf nodes. Maybe it \ncould be propagated by the GIST API from a node to its subnodes. So if a \nnode sets it to false, it will be false for its subnodes, allowing the client \nto use it or not. 
So the API could remain the same, without changes for \nexisting plugins, and would need only a small amount of memory to propagate this boolean.\nI have already achieved great performance improvements with my GIST; my \nrequest is to optimize it in use cases that select several rows, to limit the \noverhead of my consistent_operator.\n\nregards,\nmathieu pujol\n",
"msg_date": "Tue, 01 Jul 2014 09:46:50 +0200",
"msg_from": "Pujol Mathieu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIST optimization to limit calls to operator on sub nodes"
}
] |
[
{
"msg_contents": "Hi,\nI’m running a search engine for cars. It’s backed by a postgresql 9.3 installation. \n\nNow I’m unsure about the best approach/strategy on doing index optimization for the frontend search.\n\nThe problem:\n\nThe table containing the cars holds around 1.5 million rows. People searching for cars need different criteria to search by. Some search by brand/model, some by year, some by mileage, some by price and some by special equipment etc. etc. - and often they combine a whole bunch of criteria together. Of course some, like brand/model and price, are used more frequently than others. In total we offer: 9 category criteria like brand/model or body type, plus 5 numeric criteria like price or mileage, plus 12 boolean criteria like equipment. Lastly people can order the results by different columns (year, price, mileage and a score we create about the cars). By default we order by our own generated score.\n\nWhat I’ve done so far:\n\nI have analyzed the usage of the criteria “lightly”, and created a few indexes (10). Among those are, e.g., indexes on price, mileage and a combined index on brand/model. Since we are only interested in showing results for cars which are actually for sale, the indexes are made as partial indexes on a sales state column.\n\nQuestions: \n\n1. How would you go about analyzing and determining what columns should be indexed, and how?\n2. What is the best strategy when optimizing indexes for searches happening on 20+ columns, where the use and the combinations vary a lot? (To just index everything, to index some of the columns, to do combined indexes, to only do single column indexes etc. etc.)\n3. I expect that it does not make sense to index all columns?\n4. I expect it does not make sense to index boolean columns?\n5. Is it better to do a combined index on 5 frequently used columns rather than having individual indexes on each of them?\n6. Would it be a good idea to have all indexes sorted by my default sorting?\n7. 
Do you have some experience with other approaches that could greatly improve performance (e.g. forcing indexes to stay in memory etc.)?\n",
"msg_date": "Wed, 25 Jun 2014 10:48:28 +0200",
"msg_from": "Niels Kristian Schjødt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Guidelines on best indexing strategy for varying searches on 20+ columns"
},
{
"msg_contents": "On Wed, Jun 25, 2014 at 3:48 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Hi,\n> I’m running a search engine for cars. It’s backed by a postgresql 9.3 installation.\n>\n> Now I’m unsure about the best approach/strategy on doing index optimization for the fronted search.\n>\n> The problem:\n>\n> The table containing the cars holds a around 1,5 million rows. People that searches for cars needs different criteria to search by. Some search by brand/model, some by year, some by mileage, some by price and some by special equipment etc. etc. - and often they combine a whole bunch of criteria together. Of cause some, like brand/mode and price, are used more frequently than others. In total we offer: 9 category criteria like brand/model or body type, plus 5 numeric criteria like price or mileage, plus 12 boolean criteria like equipment. Lastly people can order the results by different columns (year, price, mileage and a score we create about the cars). By default we order by our own generated score.\n>\n> What I’ve done so far:\n>\n> I have analyzed the usage of the criteria “lightly”, and created a few indexes (10). Among those, are e.g. indexes on price, mileage and a combined index on brand/model. Since we are only interested in showing results for cars which is actually for sale, the indexes are made as partial indexes on a sales state column.\n>\n> Questions:\n>\n> 1. How would you go about analyzing and determining what columns should be indexed, and how?\n\nmainly frequency of access.\n\n> 2. What is the best strategy when optimizing indexes for searches happening on 20 + columns, where the use and the combinations varies a lot? (To just index everything, to index some of the columns, to do combined indexes, to only do single column indexes etc. etc.)\n\ndon't make 20 indexes. consider installing pg_trgm (for optimized\nLIKE searching) or hstore (for optmized key value searching) and then\nusing GIST/GIN for multiple attribute search. 
with 9.4 we have\nanother fancy technique to explore: jsonb searching via GIST/GIN.\n\n> 3. I expect that it does not make sense to index all columns?\n\nwell, maybe. if you only ever search one column at a time, then it\nmight make sense. but if you need to search arbitrary criteria and\nfrequently combine a large number, then no -- particularly if your\ndataset is very large and individual criteria are not very selective.\n\n> 4. I expect it does not make sense to index boolean columns?\n\nin general, no. an important exception is if you are only interested\nin true or false and the number of records that have that interesting\nvalue is tiny relative to the size of the table. in that case, a\npartial index can be used for massive optimization.\n\n> 5. Is it better to do a combined index on 5 frequently used columns rather than having individual indexes on each of them?\n\nOnly if you search those 5 columns together a significant portion of the time.\n\n> 6. Would it be a goof idea to have all indexes sorted by my default sorting?\n\nindex order rarely matters. if you always search values backwards and\nthe table is very large you may want to consider it. unfortunately\nthis often doesn't work for composite indexes so sometimes we must\nexplore the old school technique of reversing the value.\n\n> 7. Do you have so experiences with other approaches that could greatly improve performance (e.g. forcing indexes to stay in memory etc.)?\n\nas noted above, fancy indexing is the first place to look. 
start\nwith pg_trgm (for like optimization), hstore, and the new json stuff.\nthe big limitation you will hit is that most index strategies, at\nleast for the prepackaged stuff, will support '=', or partial string\n(particularly with pg_trgm like), but not > or <: for range operations\nyou have to post-process the search or try to work the index from\nanother angle.\n\nmerlin\n",
"msg_date": "Wed, 25 Jun 2014 16:48:01 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidelines on best indexing strategy for varying searches on 20+ columns"
},
{
"msg_contents": "Thanks for your suggestions, very useful. See comments inline:\n\nDen 25/06/2014 kl. 23.48 skrev Merlin Moncure <[email protected]>:\n\n> On Wed, Jun 25, 2014 at 3:48 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> Hi,\n>> I’m running a search engine for cars. It’s backed by a postgresql 9.3 installation.\n>> \n>> Now I’m unsure about the best approach/strategy on doing index optimization for the fronted search.\n>> \n>> The problem:\n>> \n>> The table containing the cars holds a around 1,5 million rows. People that searches for cars needs different criteria to search by. Some search by brand/model, some by year, some by mileage, some by price and some by special equipment etc. etc. - and often they combine a whole bunch of criteria together. Of cause some, like brand/mode and price, are used more frequently than others. In total we offer: 9 category criteria like brand/model or body type, plus 5 numeric criteria like price or mileage, plus 12 boolean criteria like equipment. Lastly people can order the results by different columns (year, price, mileage and a score we create about the cars). By default we order by our own generated score.\n>> \n>> What I’ve done so far:\n>> \n>> I have analyzed the usage of the criteria “lightly”, and created a few indexes (10). Among those, are e.g. indexes on price, mileage and a combined index on brand/model. Since we are only interested in showing results for cars which is actually for sale, the indexes are made as partial indexes on a sales state column.\n>> \n>> Questions:\n>> \n>> 1. How would you go about analyzing and determining what columns should be indexed, and how?\n> \n> mainly frequency of access.\n> \n>> 2. What is the best strategy when optimizing indexes for searches happening on 20 + columns, where the use and the combinations varies a lot? (To just index everything, to index some of the columns, to do combined indexes, to only do single column indexes etc. 
etc.)\n> \n> don't make 20 indexes. consider installing pg_trgm (for optimized\n> LIKE searching) or hstore (for optmized key value searching) and then\n> using GIST/GIN for multiple attribute search. with 9.4 we have\n> another fancy technique to explore: jsonb searching via GIST/GIN.\n\nInteresting, do you have any good resources on this approach?\n> \n>> 3. I expect that it does not make sense to index all columns?\n> \n> well, maybe. if you only ever search one column at a time, then it\n> might make sense. but if you need to search arbitrary criteria and\n> frequently combine a large number, then no -- particularly if your\n> dataset is very large and individual criteria are not very selective.\n\nSo, to just clarify: I’m often combining a large number of search criteria and the individual criteria is often not very selective, in that case, are you arguing for or against indexing all columns? :-)\n> \n>> 4. I expect it does not make sense to index boolean columns?\n> \n> in general, no. an important exception is if you are only interested\n> in true or false and the number of records that have that interesting\n> value is tiny relative to the size of the table. in that case, a\n> partial index can be used for massive optimization.\n\nThanks, hadn’t been thinking about using partial indexes here as an option.\n> \n>> 5. Is it better to do a combined index on 5 frequently used columns rather than having individual indexes on each of them?\n> \n> Only if you search those 5 columns together a significant portion of the time.\n> \n>> 6. Would it be a goof idea to have all indexes sorted by my default sorting?\n> \n> index order rarely matters. if you always search values backwards and\n> the table is very large you may want to consider it. unfortunately\n> this often doesn't work for composite indexes so sometimes we must\n> explore the old school technique of reversing the value.\n> \n>> 7. 
Do you have so experiences with other approaches that could greatly improve performance (e.g. forcing indexes to stay in memory etc.)?\n> \n> as noted above, fancy indexing is the first place to look. start\nwith pg_trgm (for like optmization), hstore, and the new json stuff.\nthe big limitation you will hit is that that most index strategies, at\nleast fo the prepackaged stuff will support '=', or partial string\n(particularly with pg_trgm like), but not > or <: for range operations\nyou have to post process the search or try to work the index from\nanother angle.\n> \n> merlin\n",
"msg_date": "Mon, 30 Jun 2014 09:33:10 +0200",
"msg_from": "Niels Kristian Schjødt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidelines on best indexing strategy for varying searches on 20+ columns"
},
{
    "msg_contents": "On Wed, Jun 25, 2014 at 1:48 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Hi,\n> I’m running a search engine for cars. It’s backed by a postgresql 9.3 installation.\n>\n> Now I’m unsure about the best approach/strategy on doing index optimization for the frontend search.\n>\n> The problem:\n>\n> The table containing the cars holds around 1.5 million rows. People that search for cars need different criteria to search by. Some search by brand/model, some by year, some by mileage, some by price and some by special equipment etc. etc. - and often they combine a whole bunch of criteria together. Of course some, like brand/model and price, are used more frequently than others. In total we offer: 9 category criteria like brand/model or body type, plus 5 numeric criteria like price or mileage, plus 12 boolean criteria like equipment. Lastly people can order the results by different columns (year, price, mileage and a score we create about the cars). By default we order by our own generated score.\n>\n> What I’ve done so far:\n>\n> I have analyzed the usage of the criteria “lightly”, and created a few indexes (10). Among those are e.g. indexes on price, mileage and a combined index on brand/model. Since we are only interested in showing results for cars which are actually for sale, the indexes are made as partial indexes on a sales state column.\n\nI'd probably partition the data on whether it is for sale, and then\nsearch only the for-sale partition.\n\n>\n> Questions:\n>\n> 1. How would you go about analyzing and determining what columns should be indexed, and how?\n\nI'd start out with intuition about which columns are likely to be used\nmost often, and in a selective way. And follow up by logging slow\nqueries so they can be dissected at leisure.\n\n> 2. What is the best strategy when optimizing indexes for searches happening on 20+ columns, where the use and the combinations vary a lot? (To just index everything, to index some of the columns, to do combined indexes, to only do single column indexes etc. etc.)\n\nThere is no magic index. Based on your description, you are going to\nbe seq scanning your table a lot. Focus on making it as small as\npossible, by vertically partitioning it so that the not-for-sale\nentries are hived off to an historical table, and horizontally\npartitioning it so that large columns rarely used in the where clause\nare in a separate table (ideally you would tell postgresql to\naggressively toast those columns, but there is no knob with which to\ndo that).\n\n\n> 3. I expect that it does not make sense to index all columns?\n\nYou mean individually, or jointly? Either way, probably not.\n\n> 4. I expect it does not make sense to index boolean columns?\n\nIn some cases it can, for example if the data distribution is very\nlopsided and the value with the smaller side is frequently specified.\n\n> 5. Is it better to do a combined index on 5 frequently used columns rather than having individual indexes on each of them?\n\nHow often are the columns specified together? If they are completely\nindependent it probably makes little sense to index them together.\n\n> 6. Would it be a good idea to have all indexes sorted by my default sorting?\n\nYou don't get to choose. A btree index is sorted by the columns\nspecified in the index, according to the operators specified (or\ndefaulted). Unless you mean that you want to add the default sort\ncolumn as the lead column in each index; that actually might make\nsense.\n\n> 7. Do you have some experience with other approaches that could greatly improve performance (e.g. forcing indexes to stay in memory etc.)?\n\nIf your queries are as unstructured as you imply, I'd forget about\nindexes for the most part, as you will have a hard time finding ones\nthat work. Concentrate on making seq scans as fast as possible. If\nmost of your queries end in something like \"ORDER BY price LIMIT 10\"\nthen concentrate on index scans over price. You will probably want to\ninclude heuristics in your UI such that if people configure queries to\ndownload half your database, you disallow that. You will probably\nfind that 90% of the workload comes from people who are just playing\naround with your website and don't actually intend to do business with\nyou.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 11:04:36 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidelines on best indexing strategy for varying\n searches on 20+ columns"
},
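The partial-index approach Niels describes (indexing only rows whose sales state is "for sale") can be sketched in a few lines. This is a hypothetical, minimal example: the `cars` table and its columns are stand-ins, and SQLite is used here only because it shares PostgreSQL's partial-index syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cars (
        id INTEGER PRIMARY KEY,
        price INTEGER NOT NULL,
        sales_state TEXT NOT NULL  -- e.g. 'for_sale' or 'sold'
    )
""")
# Partial index: only rows actually for sale are indexed, which keeps
# the index small when most rows are historical (not for sale).
conn.execute("""
    CREATE INDEX cars_price_for_sale
        ON cars (price)
        WHERE sales_state = 'for_sale'
""")
conn.executemany(
    "INSERT INTO cars (price, sales_state) VALUES (?, ?)",
    [(10000, "for_sale"), (12000, "sold"), (8000, "for_sale")],
)

# Queries that repeat the index predicate are eligible to use the
# partial index; rows with other sales states are simply not in it.
rows = sorted(conn.execute(
    "SELECT id, price FROM cars "
    "WHERE sales_state = 'for_sale' AND price < 11000"
).fetchall())
print(rows)
```

The same pattern in PostgreSQL (`CREATE INDEX ... ON cars (price) WHERE sales_state = 'for_sale'`) is what the thread means by "partial indexes on a sales state column".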
{
"msg_contents": "Thanks for the answers.\n\n> Den 30/06/2014 kl. 20.04 skrev Jeff Janes <[email protected]>:\n> \n> On Wed, Jun 25, 2014 at 1:48 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> Hi,\n>> I’m running a search engine for cars. It’s backed by a postgresql 9.3 installation.\n>> \n>> Now I’m unsure about the best approach/strategy on doing index optimization for the fronted search.\n>> \n>> The problem:\n>> \n>> The table containing the cars holds a around 1,5 million rows. People that searches for cars needs different criteria to search by. Some search by brand/model, some by year, some by mileage, some by price and some by special equipment etc. etc. - and often they combine a whole bunch of criteria together. Of cause some, like brand/mode and price, are used more frequently than others. In total we offer: 9 category criteria like brand/model or body type, plus 5 numeric criteria like price or mileage, plus 12 boolean criteria like equipment. Lastly people can order the results by different columns (year, price, mileage and a score we create about the cars). By default we order by our own generated score.\n>> \n>> What I’ve done so far:\n>> \n>> I have analyzed the usage of the criteria “lightly”, and created a few indexes (10). Among those, are e.g. indexes on price, mileage and a combined index on brand/model. Since we are only interested in showing results for cars which is actually for sale, the indexes are made as partial indexes on a sales state column.\n> \n> I'd probably partition the data on whether it is for sale, and then\n> search only the for-sale partition.\n\nHmm okay, I already did all the indexes partial based on the for-sales state. If the queries always queries for-sale, and all indexes are partial based on those, will it then still help performance / make sense to partition the tables?\n> \n>> \n>> Questions:\n>> \n>> 1. 
How would you go about analyzing and determining what columns should be indexed, and how?\n> \n> I'd start out with intuition about which columns are likely to be used\n> most often, and in a selective way. And followup by logging slow\n> queries so they can be dissected at leisure.\n> \n>> 2. What is the best strategy when optimizing indexes for searches happening on 20 + columns, where the use and the combinations varies a lot? (To just index everything, to index some of the columns, to do combined indexes, to only do single column indexes etc. etc.)\n> \n> There is no magic index. Based on your description, you are going to\n> be seq scanning your table a lot. Focus on making it as small as\n> possible, but vertical partitioning it so that the not-for-sale\n> entries are hived off to an historical table, and horizontally\n> partitioning it so that large columns rarely used in the where clause\n> are in a separate table (Ideally you would tell postgresql to\n> aggressively toast those columns, but there is no knob with which to\n> do that)\n> \n> \n>> 3. I expect that it does not make sense to index all columns?\n> \n> You mean individually, or jointly? Either way, probably not.\n> \n>> 4. I expect it does not make sense to index boolean columns?\n> \n> In some cases it can, for example if the data distribution is very\n> lopsided and the value with the smaller side is frequently specified.\n> \n>> 5. Is it better to do a combined index on 5 frequently used columns rather than having individual indexes on each of them?\n> \n> How often are the columns specified together? If they are completely\n> independent it probably makes little sense to index them together.\n> \n>> 6. Would it be a goof idea to have all indexes sorted by my default sorting?\n> \n> You don't get to choose. An btree index is sorted by the columns\n> specified in the index, according to the operators specified (or\n> defaulted). 
Unless you mean that you want to add the default sort\n> column to be the lead column in each index, that actually might make\n> sense.\n> \n>> 7. Do you have so experiences with other approaches that could greatly improve performance (e.g. forcing indexes to stay in memory etc.)?\n> \n> If your queries are as unstructured as you imply, I'd forget about\n> indexes for the most part, as you will have a hard time findings ones\n> that work. Concentrate on making seq scans as fast as possible. If\n> most of your queries end in something like \"ORDER by price limit 10\"\n> then concentrate on index scans over price. You will probably want to\n> include heuristics in your UI such that if people configure queries to\n> download half your database, you disallow that. You will probably\n> find that 90% of the workload comes from people who are just playing\n> around with your website and don't actually intend to do business with\n> you.\n> \n> Cheers,\n> \n> Jeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Jul 2014 22:08:32 +0200",
"msg_from": "=?utf-8?Q?Niels_Kristian_Schj=C3=B8dt?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidelines on best indexing strategy for varying searches on 20+\n columns"
}
]
[
{
"msg_contents": "Sorry for the semi-newbie question...\n\nI have a relatively sizable postgresql 9.0.2 DB with a few large tables \n(keep in mind \"large\" is relative, I'm sure there are plenty larger out \nthere).\n\nOne of my queries that seems to be bogging-down performance is a join \nbetween two tables on each of their BIGINT PK's (so they have default \nunique constraint/PK indexes on them). One table is a detail table for \nthe other. The \"master\" has about 6mm rows. The detail table has about \n131mm rows (table size = 17GB, index size = 16GB).\n\nI unfortunately have limited disks, so I can't actually move to multiple \nspindles, but wonder if there is anything I can do (should I partition \nthe data, etc.) to improve performance? Maybe some further tuning to my \n.conf, but I do think that's using as much mem as I can spare right now \n(happy to send it along if it would help).\n\nDB is vacuumed nightly with stats updates enabled. I can send the \nstatistics info listed in pgAdmin tab if that would help.\n\nAny suggestions, tips, tricks, links, etc. are welcomed!\n\nThanks in advance,\nAJ\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Jun 2014 16:10:25 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to improve perf of 131MM row table?"
},
{
"msg_contents": "On 06/25/2014 03:10 PM, AJ Weber wrote:\n\n> I have a relatively sizable postgresql 9.0.2 DB with a few large tables\n> (keep in mind \"large\" is relative, I'm sure there are plenty larger out\n> there).\n\nRegardless of any help we might offer regarding this, you need to \nupgrade your installation to 9.0.17. You are behind by several \nperformance, security, and integrity bugfixes, some of which address \ncritical corruption bugs related to replication.\n\n> One of my queries that seems to be bogging-down performance is a join\n> between two tables on each of their BIGINT PK's (so they have default\n> unique constraint/PK indexes on them). One table is a detail table for\n> the other.\n\nThis isn't enough information. Just knowing the relative sizes of the \ntables doesn't tell us which columns are indexed, whether or not the \nquery is using those indexes, how many rows usually match, which queries \nare performing badly, and so on.\n\nPlease refer to this page to ask performance related questions:\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nWithout much of this information, we'd only be speculating.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Jun 2014 15:49:16 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
    "msg_contents": "I will gather the other data tonight. Thank you. \n\nIn the meantime, I guess I wasn't clear about some other particulars \nThe query's where clause is only an \"IN\", with a list of id's (those I mentioned are the PK), and the join is explicitly on the PK (so, indexed). \n\nThus, there should be only the explicit matches to the in clause returned, and if postgresql isn't using the unique index on that column, I would be very shocked (to the point I would suggest there is a bug somewhere). \n\nAn IN with 50 int values took 23sec to return (by way of example). \n\nThanks again. \n-- \nAaron\n\nOn June 25, 2014 4:49:16 PM EDT, Shaun Thomas <[email protected]> wrote:\n>On 06/25/2014 03:10 PM, AJ Weber wrote:\n>\n>> I have a relatively sizable postgresql 9.0.2 DB with a few large\n>tables\n>> (keep in mind \"large\" is relative, I'm sure there are plenty larger\n>out\n>> there).\n>\n>Regardless of any help we might offer regarding this, you need to \n>upgrade your installation to 9.0.17. You are behind by several \n>performance, security, and integrity bugfixes, some of which address \n>critical corruption bugs related to replication.\n>\n>> One of my queries that seems to be bogging-down performance is a join\n>> between two tables on each of their BIGINT PK's (so they have default\n>> unique constraint/PK indexes on them). One table is a detail table\n>for\n>> the other.\n>\n>This isn't enough information. Just knowing the relative sizes of the \n>tables doesn't tell us which columns are indexed, whether or not the \n>query is using those indexes, how many rows usually match, which\n>queries \n>are performing badly, and so on.\n>\n>Please refer to this page to ask performance related questions:\n>\n>https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n>Without much of this information, we'd only be speculating.\n>\n>-- \n>Shaun Thomas\n>OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL,\n>60604\n>312-676-8870\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer/ for terms and conditions\n>related to this email\n",
"msg_date": "Wed, 25 Jun 2014 17:40:47 -0400",
"msg_from": "Aaron Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On 06/25/2014 04:40 PM, Aaron Weber wrote:\n\n> In the meantime, I guess I wasn't clear about some other particulars\n> The query's where clause is only an \"IN\", with a list of id's (those\n> I mentioned are the PK), and the join is explicitly on the PK (so,\n> indexed).\n\nIndexed doesn't mean indexed if the wrong datatypes are used. We need to \nsee the table and index definitions, and a sample query with EXPLAIN \nANALYZE output.\n\n> An IN with 50 int values took 23sec to return (by way of example).\n\nTo me, this sounds like a sequence scan, or one of your key matches so \nmany rows, the random seeks are throwing off your performance. Of \ncourse, I can't confirm that without EXPLAIN output.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Jun 2014 16:55:29 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
    "msg_contents": "Will get what you asked for ASAP. Thanks for your time. \n-- \nAaron\n\nOn June 25, 2014 5:55:29 PM EDT, Shaun Thomas <[email protected]> wrote:\n>On 06/25/2014 04:40 PM, Aaron Weber wrote:\n>\n>> In the meantime, I guess I wasn't clear about some other particulars\n>> The query's where clause is only an \"IN\", with a list of id's (those\n>> I mentioned are the PK), and the join is explicitly on the PK (so,\n>> indexed).\n>\n>Indexed doesn't mean indexed if the wrong datatypes are used. We need\n>to \n>see the table and index definitions, and a sample query with EXPLAIN \n>ANALYZE output.\n>\n>> An IN with 50 int values took 23sec to return (by way of example).\n>\n>To me, this sounds like a sequence scan, or one of your key matches so \n>many rows, the random seeks are throwing off your performance. Of \n>course, I can't confirm that without EXPLAIN output.\n>\n>-- \n>Shaun Thomas\n>OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL,\n>60604\n>312-676-8870\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer/ for terms and conditions\n>related to this email\n",
"msg_date": "Wed, 25 Jun 2014 18:29:40 -0400",
"msg_from": "Aaron Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "OK, the sample query is attached (hopefully attachments are allowed) as \n\"query.sql\".\nThe \"master table\" definition is attached as \"table1.sql\".\nThe \"detail table\" definition is attached as \"table2.sql\".\nThe EXPLAIN (ANALYZE, BUFFERS) output is here: \nhttp://explain.depesz.com/s/vd5\n\nLet me know if I can provide anything else, and thank you again.\n\n-AJ\n\n\nOn 6/25/2014 5:55 PM, Shaun Thomas wrote:\n> On 06/25/2014 04:40 PM, Aaron Weber wrote:\n>\n>> In the meantime, I guess I wasn't clear about some other particulars\n>> The query's where clause is only an \"IN\", with a list of id's (those\n>> I mentioned are the PK), and the join is explicitly on the PK (so,\n>> indexed).\n>\n> Indexed doesn't mean indexed if the wrong datatypes are used. We need \n> to see the table and index definitions, and a sample query with \n> EXPLAIN ANALYZE output.\n>\n>> An IN with 50 int values took 23sec to return (by way of example).\n>\n> To me, this sounds like a sequence scan, or one of your key matches so \n> many rows, the random seeks are throwing off your performance. Of \n> course, I can't confirm that without EXPLAIN output.\n>\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 26 Jun 2014 09:26:06 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
    "msg_contents": "On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber <[email protected]> wrote:\n\n> OK, the sample query is attached (hopefully attachments are allowed) as\n> \"query.sql\".\n> The \"master table\" definition is attached as \"table1.sql\".\n> The \"detail table\" definition is attached as \"table2.sql\".\n> The EXPLAIN (ANALYZE, BUFFERS) output is here:\n> http://explain.depesz.com/s/vd5\n>\n\nCould you try changing your query and sending the EXPLAIN of the following?\n\nInstead of `node_id in ('175769', '175771', ...)` try `node_id IN\n(VALUES ('175769'), ('175771'), ... )`.\n\n\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n",
"msg_date": "Thu, 26 Jun 2014 10:56:47 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On 06/26/2014 08:26 AM, AJ Weber wrote:\n\n> The \"master table\" definition is attached as \"table1.sql\".\n> The \"detail table\" definition is attached as \"table2.sql\".\n\nI'm not sure what you think a primary key is, but neither of these \ntables have one. Primary keys are declared one of two ways:\n\nCREATE TABLE foo\n(\n id BIGINT PRIMARY KEY,\n col1 VARCHAR,\n col2 INT\n);\n\nOr this:\n\nCREATE TABLE foo\n(\n id BIGINT,\n col1 VARCHAR,\n col2 INT\n);\n\nALTER TABLE foo ADD constraint pk_foo PRIMARY KEY (id);\n\nOn your alf_node_properties table, you only have an index on node_id \nbecause you created one. If you look at your alf_node table, there is no \nindex on the id column at all. This is confirmed by the explain output \nyou attached:\n\nSeq Scan on alf_node node (cost=0.00..227265.29 rows=5733429 width=16) \n(actual time=0.013..2029.649 rows=5733888 loops=1)\n\nSince it has no index, the database is reading the entire table to find \nyour matching values. Then it's using the index on node_id in the other \ntable to find the 'detail' matches, as seen here:\n\nBitmap Index Scan on fk_alf_nprop_n (cost=0.00..1240.00 rows=52790 \nwidth=0) (actual time=0.552..0.552 rows=1071 loops=1)\n\nAdd an actual primary key to your alf_node table, and your query \nperformance should improve substantially. But I also strongly suggest \nyou spend some time learning how to read an EXPLAIN plan, as that would \nhave made your problem obvious immediately.\n\nHere's a link for your version:\n\nhttp://www.postgresql.org/docs/9.0/static/sql-explain.html\n\nYou should still consider upgrading to the latest release of 9.0 too.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 09:05:35 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
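Shaun's point about reading the plan can be illustrated without a PostgreSQL instance. Below is a hypothetical sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in for PostgreSQL's EXPLAIN; the demo table loosely mirrors alf_node but is not the real schema. The idea is the same in both systems: a lookup on an indexed column shows an index access (SEARCH here, "Index Scan" in PostgreSQL), while a lookup on an unindexed column shows a whole-table read (SCAN here, "Seq Scan" in PostgreSQL).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE alf_node_demo (id INTEGER PRIMARY KEY, acl_id INTEGER)"
)
conn.executemany(
    "INSERT INTO alf_node_demo VALUES (?, ?)",
    [(i, i % 7) for i in range(1, 1001)],
)

def plan_for(sql):
    # EXPLAIN QUERY PLAN rows end with a detail string that says
    # SEARCH (index used) or SCAN (every row read).
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

pk_plan = plan_for("SELECT * FROM alf_node_demo WHERE id = 42")
seq_plan = plan_for("SELECT * FROM alf_node_demo WHERE acl_id = 3")
print(pk_plan)   # SEARCH: the primary key index satisfies the lookup
print(seq_plan)  # SCAN: no index on acl_id, so every row is read
```

Reading the plan this way is exactly how Shaun spotted the `Seq Scan on alf_node` line in the attached EXPLAIN output.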
{
    "msg_contents": "I will try this, but can you clarify the syntax? I only know the VALUES \nclause from insert statements, and it would be one set of parens like \nVALUES('175769', '175771', ... )\n\nYou seem to indicate a VALUES clause that has strange parentheses \ncorresponding to it.\n\nThank you for the feedback and offer to help!\n\n-AJ\n\nOn 6/26/2014 9:56 AM, Matheus de Oliveira wrote:\n>\n> On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> OK, the sample query is attached (hopefully attachments are\n> allowed) as \"query.sql\".\n> The \"master table\" definition is attached as \"table1.sql\".\n> The \"detail table\" definition is attached as \"table2.sql\".\n> The EXPLAIN (ANALYZE, BUFFERS) output is here:\n> http://explain.depesz.com/s/vd5\n>\n>\n> Could you try changing your query and sending the EXPLAIN of the following?\n>\n> Instead of `node_id in ('175769', '175771', ...)` try `node_id IN \n> (VALUES ('175769'), ('175771'), ... )`.\n>\n>\n> -- \n> Matheus de Oliveira\n> Analista de Banco de Dados\n> Dextra Sistemas - MPS.Br nível F!\n> www.dextra.com.br/postgres <http://www.dextra.com.br/postgres/>\n>\n",
"msg_date": "Thu, 26 Jun 2014 10:07:45 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
    "msg_contents": "On Thu, Jun 26, 2014 at 11:07 AM, AJ Weber <[email protected]> wrote:\n\n> I will try this, but can you clarify the syntax? I only know the VALUES\n> clause from insert statements, and it would be one set of parens like\n> VALUES('175769', '175771', ... )\n>\n>\nThat is for multiple columns; mine is for multiple rows (and it also works\non INSERT, to insert many rows at once).\n\nThe result would be:\n\n WHERE node_id IN\n (VALUES ('175769'), ('175771'), ('175781'), ('175825'),\n('175881'), ('175893'), ('175919'), ('175932'), ('175963'), ('175999'),\n('176022'), ('176079'), ('176099'), ('176115'), ('176118'), ('176171'),\n('176181'), ('176217'), ('176220'), ('176243'), ('176283'), ('176312'),\n('176326'), ('176335'), ('176377'), ('176441'), ('176444'), ('176475'),\n('176530'), ('176570'), ('176623'), ('176674'), ('176701'), ('176730'),\n('176748'), ('176763'), ('176771'), ('176808'), ('176836'), ('176851'),\n('176864'), ('176881'), ('176929'), ('176945'), ('176947'), ('176960'),\n('177006'), ('177039'), ('177079'), ('177131'), ('177144'))\n\n> You seem to indicate a VALUES clause that has strange parentheses\n> corresponding to it.\n>\n\nNo, nothing strange, you are just not aware of the syntax. See [1], and a\nclearer example (for INSERT) at [2]; look for \"To insert multiple rows\nusing the multirow VALUES syntax:\".\n\n[1] http://www.postgresql.org/docs/current/static/sql-values.html\n[2] http://www.postgresql.org/docs/current/static/sql-insert.html#AEN78471\n\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n",
"msg_date": "Thu, 26 Jun 2014 11:14:27 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
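The rewrite Matheus proposes turns a long literal IN list into a VALUES row set, which PostgreSQL can plan as a join against a small virtual table rather than a long chain of equality tests. A sketch of building that SQL text follows; the column name and ids are illustrative only, and real application code should bind the ids as parameters rather than interpolating them into the string.

```python
def in_values_clause(column, ids):
    """Render "col IN (VALUES ('a'), ('b'), ...)" from a list of ids.

    Purely illustrative string construction: in production, use bound
    parameters to avoid SQL injection.
    """
    rows = ", ".join("('%s')" % i for i in ids)
    return "%s IN (VALUES %s)" % (column, rows)

clause = in_values_clause("node_id", ["175769", "175771", "175781"])
print(clause)  # node_id IN (VALUES ('175769'), ('175771'), ('175781'))
```

Note the paren shape this produces: each id becomes its own one-column row `('175769')`, and the whole VALUES list is wrapped once more so it acts as a subquery for IN, which is exactly the "strange parentheses" AJ asks about above.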
{
"msg_contents": "I sent the details as identified by pgAdmin III.\n\npsql output shows this:\n\\d alf_node\n Table \"public.alf_node\"\n Column | Type | Modifiers\n----------------+------------------------+-----------\n id | bigint | not null\n version | bigint | not null\n store_id | bigint | not null\n uuid | character varying(36) | not null\n transaction_id | bigint | not null\n node_deleted | boolean | not null\n type_qname_id | bigint | not null\n locale_id | bigint | not null\n acl_id | bigint |\n audit_creator | character varying(255) |\n audit_created | character varying(30) |\n audit_modifier | character varying(255) |\n audit_modified | character varying(30) |\n audit_accessed | character varying(30) |\nIndexes:\n \"alf_node_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"store_id\" UNIQUE, btree (store_id, uuid)\n \"fk_alf_node_acl\" btree (acl_id)\n \"fk_alf_node_loc\" btree (locale_id)\n \"fk_alf_node_store\" btree (store_id)\n \"fk_alf_node_tqn\" btree (type_qname_id)\n \"fk_alf_node_txn\" btree (transaction_id)\n \"idx_alf_node_del\" btree (node_deleted)\n \"idx_alf_node_txn_del\" btree (transaction_id, node_deleted)\nForeign-key constraints:\n \"fk_alf_node_acl\" FOREIGN KEY (acl_id) REFERENCES \nalf_access_control_list(id)\n \"fk_alf_node_loc\" FOREIGN KEY (locale_id) REFERENCES alf_locale(id)\n \"fk_alf_node_store\" FOREIGN KEY (store_id) REFERENCES alf_store(id)\n \"fk_alf_node_tqn\" FOREIGN KEY (type_qname_id) REFERENCES alf_qname(id)\n \"fk_alf_node_txn\" FOREIGN KEY (transaction_id) REFERENCES \nalf_transaction(id)\nReferenced by:\n TABLE \"alf_child_assoc\" CONSTRAINT \"fk_alf_cass_cnode\" FOREIGN KEY \n(child_node_id) REFERENCES alf_node(id)\n TABLE \"alf_child_assoc\" CONSTRAINT \"fk_alf_cass_pnode\" FOREIGN KEY \n(parent_node_id) REFERENCES alf_node(id)\n TABLE \"alf_node_aspects\" CONSTRAINT \"fk_alf_nasp_n\" FOREIGN KEY \n(node_id) REFERENCES alf_node(id)\n TABLE \"alf_node_assoc\" CONSTRAINT \"fk_alf_nass_snode\" FOREIGN KEY 
\n(source_node_id) REFERENCES alf_node(id)\n TABLE \"alf_node_assoc\" CONSTRAINT \"fk_alf_nass_tnode\" FOREIGN KEY \n(target_node_id) REFERENCES alf_node(id)\n TABLE \"alf_node_properties\" CONSTRAINT \"fk_alf_nprop_n\" FOREIGN KEY \n(node_id) REFERENCES alf_node(id)\n TABLE \"alf_store\" CONSTRAINT \"fk_alf_store_root\" FOREIGN KEY \n(root_node_id) REFERENCES alf_node(id)\n TABLE \"alf_subscriptions\" CONSTRAINT \"fk_alf_sub_node\" FOREIGN KEY \n(node_id) REFERENCES alf_node(id) ON DELETE CASCADE\n TABLE \"alf_subscriptions\" CONSTRAINT \"fk_alf_sub_user\" FOREIGN KEY \n(user_node_id) REFERENCES alf_node(id) ON DELETE CASCADE\n TABLE \"alf_usage_delta\" CONSTRAINT \"fk_alf_usaged_n\" FOREIGN KEY \n(node_id) REFERENCES alf_node(id)\n\nThis line of the output:\n \"alf_node_pkey\" PRIMARY KEY, btree (id) CLUSTER\nwould indicate to me that there is a PK on alf_node table, it is on \ncolumn \"id\", it is of type btree, and the table is clustered around that \nindex.\n\nAm I reading this totally wrong?\n\nThe supporting table actually seems to have a multi-column PK defined, \nand a separate btree index on node_id as you mentioned.\n\n-AJ\n\n\nOn 6/26/2014 10:05 AM, Shaun Thomas wrote:\n> On 06/26/2014 08:26 AM, AJ Weber wrote:\n>\n>> The \"master table\" definition is attached as \"table1.sql\".\n>> The \"detail table\" definition is attached as \"table2.sql\".\n>\n> I'm not sure what you think a primary key is, but neither of these \n> tables have one. Primary keys are declared one of two ways:\n>\n> CREATE TABLE foo\n> (\n> id BIGINT PRIMARY KEY,\n> col1 VARCHAR,\n> col2 INT\n> );\n>\n> Or this:\n>\n> CREATE TABLE foo\n> (\n> id BIGINT,\n> col1 VARCHAR,\n> col2 INT\n> );\n>\n> ALTER TABLE foo ADD constraint pk_foo PRIMARY KEY (id);\n>\n> On your alf_node_properties table, you only have an index on node_id \n> because you created one. If you look at your alf_node table, there is \n> no index on the id column at all. 
This is confirmed by the explain \n> output you attached:\n>\n> Seq Scan on alf_node node (cost=0.00..227265.29 rows=5733429 \n> width=16) (actual time=0.013..2029.649 rows=5733888 loops=1)\n>\n> Since it has no index, the database is reading the entire table to \n> find your matching values. Then it's using the index on node_id in the \n> other table to find the 'detail' matches, as seen here:\n>\n> Bitmap Index Scan on fk_alf_nprop_n (cost=0.00..1240.00 rows=52790 \n> width=0) (actual time=0.552..0.552 rows=1071 loops=1)\n>\n> Add an actual primary key to your alf_node table, and your query \n> performance should improve substantially. But I also strongly suggest \n> you spend some time learning how to read an EXPLAIN plan, as that \n> would have made your problem obvious immediately.\n>\n> Here's a link for your version:\n>\n> http://www.postgresql.org/docs/9.0/static/sql-explain.html\n>\n> You should still consider upgrading to the latest release of 9.0 too.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 10:22:59 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On 06/26/2014 09:22 AM, AJ Weber wrote:\n\n> I sent the details as identified by pgAdmin III.\n\nInteresting. Either there is a bug in pgAdmin, or you're connecting to a \ndifferent database that is missing the primary key. What is the EXPLAIN \nANALYZE output if you execute the query you sent on a psql prompt?\n\n> \"alf_node_pkey\" PRIMARY KEY, btree (id) CLUSTER\n> would indicate to me that there is a PK on alf_node table, it is on\n> column \"id\", it is of type btree, and the table is clustered around that\n> index.\n>\n> Am I reading this totally wrong?\n\nNo, that's right. But that wasn't in the SQL you sent. In fact, there's \na lot of stuff missing in that output.\n\nTry running the EXPLAIN ANALYZE using the same psql connection you used \nto retrieve the actual table structure just now. I suspect you've \naccidentally connected to the wrong database. If it's still doing the \nsequence scan, we'll have to dig deeper.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 09:37:24 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": " From psql (same session as previous \\d output) --\n\n Hash Join (cost=328182.35..548154.83 rows=52790 width=187) (actual \ntime=4157.886..4965.466 rows=1071 loops=1)\n Hash Cond: (prop.node_id = node.id)\n Buffers: shared hit=146711 read=23498, temp read=23676 written=23646\n -> Bitmap Heap Scan on alf_node_properties prop \n(cost=1253.19..189491.88 rows=52790 width=179) (actual time=0.429..1.154 \nrows=1071 loops=1)\n Recheck Cond: (node_id = ANY \n('{175769,175771,175781,175825,175881,175893,175919,175932,175963,175999,176022,176079,176099,176115,176118,176171,176181,176217,176220,176243,176283,176312,176326,176335,176377,176441,176444,176475,176530,176570,176623,176674,176701,176730,176748,176763,176771,176808,176836,176851,176864,176881,176929,176945,176947,176960,177006,177039,177079,177131,177144}'::bigint[]))\n Buffers: shared hit=278\n -> Bitmap Index Scan on fk_alf_nprop_n (cost=0.00..1240.00 \nrows=52790 width=0) (actual time=0.411..0.411 rows=1071 loops=1)\n Index Cond: (node_id = ANY \n('{175769,175771,175781,175825,175881,175893,175919,175932,175963,175999,176022,176079,176099,176115,176118,176171,176181,176217,176220,176243,176283,176312,176326,176335,176377,176441,176444,176475,176530,176570,176623,176674,176701,176730,176748,176763,176771,176808,176836,176851,176864,176881,176929,176945,176947,176960,177006,177039,177079,177131,177144}'::bigint[]))\n Buffers: shared hit=207\n -> Hash (cost=227265.29..227265.29 rows=5733429 width=16) (actual \ntime=4156.075..4156.075 rows=5734255 loops=1)\n Buckets: 65536 Batches: 16 Memory Usage: 16888kB\n Buffers: shared hit=146433 read=23498, temp written=23609\n -> Seq Scan on alf_node node (cost=0.00..227265.29 \nrows=5733429 width=16) (actual time=0.004..1908.493 rows=5734255 loops=1)\n Buffers: shared hit=146433 read=23498\n Total runtime: 4967.674 ms\n(15 rows)\n\nOn 6/26/2014 10:37 AM, Shaun Thomas wrote:\n> On 06/26/2014 09:22 AM, AJ Weber wrote:\n>\n>> I sent the details as identified by 
pgAdmin III.\n>\n> Interesting. Either there is a bug in pgAdmin, or you're connecting to \n> a different database that is missing the primary key. What is the \n> EXPLAIN ANALYZE output if you execute the query you sent on a psql \n> prompt?\n>\n>> \"alf_node_pkey\" PRIMARY KEY, btree (id) CLUSTER\n>> would indicate to me that there is a PK on alf_node table, it is on\n>> column \"id\", it is of type btree, and the table is clustered around that\n>> index.\n>>\n>> Am I reading this totally wrong?\n>\n> No, that's right. But that wasn't in the SQL you sent. In fact, \n> there's a lot of stuff missing in that output.\n>\n> Try running the EXPLAIN ANALYZE using the same psql connection you \n> used to retrieve the actual table structure just now. I suspect you've \n> accidentally connected to the wrong database. If it's still doing the \n> sequence scan, we'll have to dig deeper.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 10:50:15 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber <[email protected]> wrote:\n> OK, the sample query is attached (hopefully attachments are allowed) as\n> \"query.sql\".\n> The \"master table\" definition is attached as \"table1.sql\".\n> The \"detail table\" definition is attached as \"table2.sql\".\n> The EXPLAIN (ANALYZE, BUFFERS) output is here:\n> http://explain.depesz.com/s/vd5\n\n\nI think the problem is that you're sending strings in the ids, instead\nof integers.\n\nRemove the quotes, leave only the numbers. That will make pg able to\ninfer that node.id = prop.node_id means it can also use an index on\nalf_node_properties.\n\nI think.\n\nTry.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 12:35:00 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "I noticed this too. I am trying to find where the actual SQL is \ngenerated, and I am seeing if this is an artifact of Hibernate.\n\nWill test the same query without the quotes as you recommend. (But I \ndon't know where to fix that, if it is the actual issue, unfortunately.)\n\nOn 6/26/2014 11:35 AM, Claudio Freire wrote:\n> On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber <[email protected]> wrote:\n>> OK, the sample query is attached (hopefully attachments are allowed) as\n>> \"query.sql\".\n>> The \"master table\" definition is attached as \"table1.sql\".\n>> The \"detail table\" definition is attached as \"table2.sql\".\n>> The EXPLAIN (ANALYZE, BUFFERS) output is here:\n>> http://explain.depesz.com/s/vd5\n>\n> I think the problem is that you're sending strings in the ids, instead\n> of integers.\n>\n> Remove the quotes, leave only the numbers. That will make pg able to\n> infer that node.id = prop.node_id means it can also use an index on\n> alf_node_properties.\n>\n> I think.\n>\n> Try.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 11:38:53 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On Thu, Jun 26, 2014 at 12:38 PM, AJ Weber <[email protected]> wrote:\n> I noticed this too. I am trying to find where the actual SQL is generated,\n> and I am seeing if this is an artifact of Hibernate.\n>\n> Will test the same query without the quotes as you recommend. (But I don't\n> know where to fix that, if it is the actual issue, unfortunately.)\n\nLast time I used Hibernate it was an ancient version, but that version\ndidn't handle bigint very well, IIRC.\n\nWhat we did is write a customized dialect that did the proper handling\nof it. I really can't remember the details, nor whether this still\napplies to the latest version. But it's worth a look.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 12:44:18 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "FWIW: I tested removing the quotes around each value, and it did not \nchange the plan (I am a little surprised too, but I guess PG is \"smarter \nthan that\").\n\nThanks for the idea.\n\nOn 6/26/2014 11:38 AM, AJ Weber wrote:\n> I noticed this too. I am trying to find where the actual SQL is \n> generated, and I am seeing if this is an artifact of Hibernate.\n>\n> Will test the same query without the quotes as you recommend. (But I \n> don't know where to fix that, if it is the actual issue, unfortunately.)\n>\n> On 6/26/2014 11:35 AM, Claudio Freire wrote:\n>> On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber <[email protected]> wrote:\n>>> OK, the sample query is attached (hopefully attachments are allowed) as\n>>> \"query.sql\".\n>>> The \"master table\" definition is attached as \"table1.sql\".\n>>> The \"detail table\" definition is attached as \"table2.sql\".\n>>> The EXPLAIN (ANALYZE, BUFFERS) output is here:\n>>> http://explain.depesz.com/s/vd5\n>>\n>> I think the problem is that you're sending strings in the ids, instead\n>> of integers.\n>>\n>> Remove the quotes, leave only the numbers. That will make pg able to\n>> infer that node.id = prop.node_id means it can also use an index on\n>> alf_node_properties.\n>>\n>> I think.\n>>\n>> Try.\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 11:48:27 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On Thu, Jun 26, 2014 at 12:48 PM, AJ Weber <[email protected]> wrote:\n> FWIW: I tested removing the quotes around each value, and it did not change\n> the plan (I am a little surprised too, but I guess PG is \"smarter than\n> that\").\n>\n> Thanks for the idea.\n\n\nOk, second round.\n\nTry changing node_id in (...) into node.id in (...)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 13:19:06 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On 06/26/2014 11:19 AM, Claudio Freire wrote:\n\n> Try changing node_id in (...) into node.id in (...)\n\nWow. How did we not see that earlier? That's probably the issue. If you \nlook at the estimates of his query:\n\nBitmap Heap Scan on alf_node_properties prop (cost=1253.19..189491.87 \nrows=52790 width=179) (actual time=0.571..1.349 rows=1071 loops=1)\n\nThe planner is off by an order of magnitude, and since the matches are \nagainst node_id instead of node.id, it thinks it would have to index \nseek on the alf_node table for over 50k rows. I could easily see it \nopting for a sequence scan in that case, depending on how high \nrandom_page_cost is.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 11:23:54 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On Thu, Jun 26, 2014 at 10:37 AM, Shaun Thomas <[email protected]>\nwrote:\n\n> On 06/26/2014 09:22 AM, AJ Weber wrote:\n>\n> I sent the details as identified by pgAdmin III.\n>>\n>\n> Interesting. Either there is a bug in pgAdmin, or you're connecting to a\n> different database that is missing the primary key. What is the EXPLAIN\n> ANALYZE output if you execute the query you sent on a psql prompt?\n>\n>\n> \"alf_node_pkey\" PRIMARY KEY, btree (id) CLUSTER\n>> would indicate to me that there is a PK on alf_node table, it is on\n>> column \"id\", it is of type btree, and the table is clustered around that\n>> index.\n>>\n>> Am I reading this totally wrong?\n>>\n>\n> No, that's right. But that wasn't in the SQL you sent. In fact, there's a\n> lot of stuff missing in that output.\n>\n> Try running the EXPLAIN ANALYZE using the same psql connection you used to\n> retrieve the actual table structure just now. I suspect you've accidentally\n> connected to the wrong database. If it's still doing the sequence scan,\n> we'll have to dig deeper.\n\n\n\n\nI see \"CONSTRAINT alf_node_pkey PRIMARY KEY (id)\" for table1 and\n\"CONSTRAINT alf_node_properties_pkey PRIMARY KEY (node_id, qname_id,\nlist_index, locale_id)\" for table2. When you say there is not primary key\ndefined, is it based on the execution plan ?\n\nSébastien\n",
"msg_date": "Thu, 26 Jun 2014 13:04:32 -0400",
"msg_from": "Sébastien Lorion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "\nOn 6/26/2014 12:23 PM, Shaun Thomas wrote:\n> On 06/26/2014 11:19 AM, Claudio Freire wrote:\n>\n>> Try changing node_id in (...) into node.id in (...)\n>\nThat looks much better to my untrained eye! (Am I right?)\n\n Nested Loop (cost=218.29..21305.47 rows=53480 width=187) (actual \ntime=42.347..\n43.617 rows=1071 loops=1)\n Buffers: shared hit=487 read=15\n -> Bitmap Heap Scan on alf_node node (cost=218.29..423.40 rows=51 \nwidth=16)\n (actual time=42.334..42.413 rows=51 loops=1)\n Recheck Cond: (id = ANY \n('{175769,175771,175781,175825,175881,175893,17\n5919,175932,175963,175999,176022,176079,176099,176115,176118,176171,176181,17621\n7,176220,176243,176283,176312,176326,176335,176377,176441,176444,176475,176530,1\n76570,176623,176674,176701,176730,176748,176763,176771,176808,176836,176851,1768\n64,176881,176929,176945,176947,176960,177006,177039,177079,177131,177144}'::bigi\nnt[]))\n Buffers: shared hit=159 read=15\n -> Bitmap Index Scan on alf_node_pkey (cost=0.00..218.28 \nrows=51 widt\nh=0) (actual time=42.326..42.326 rows=51 loops=1)\n Index Cond: (id = ANY \n('{175769,175771,175781,175825,175881,17589\n3,175919,175932,175963,175999,176022,176079,176099,176115,176118,176171,176181,1\n76217,176220,176243,176283,176312,176326,176335,176377,176441,176444,176475,1765\n30,176570,176623,176674,176701,176730,176748,176763,176771,176808,176836,176851,\n176864,176881,176929,176945,176947,176960,177006,177039,177079,177131,177144}'::\nbigint[]))\n Buffers: shared hit=146 read=7\n -> Index Scan using fk_alf_nprop_n on alf_node_properties prop \n(cost=0.00..\n396.34 rows=1049 width=179) (actual time=0.006..0.013 rows=21 loops=51)\n Index Cond: (prop.node_id = node.id)\n Buffers: shared hit=328\n Total runtime: 43.747 ms\n\nAM I RIGHT? 
(That it's much better -- I thank Claudio and Shaun for \nbeing right!)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 14:01:59 -0400",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On Wed, Jun 25, 2014 at 2:40 PM, Aaron Weber <[email protected]> wrote:\n> I will gather the other data tonight. Thank you.\n>\n> In the meantime, I guess I wasn't clear about some other particulars\n> The query's where clause is only an \"IN\", with a list of id's (those I\n> mentioned are the PK), and the join is explicitly on the PK (so, indexed).\n\nThe PK of the master table and the PK of the detail table cannot be\nthe same thing, or they would not have a master-detail relationship.\nOne side has to be an FK, not a PK.\n\n>\n> An IN with 50 int values took 23sec to return (by way of example).\n\nIf that is 50 PKs from the master table, it would be about 1000 on the\ndetail table. If you have 5600 rpm drives and every detail row\nrequires one index leaf page and one table page to be read from disk,\nthen 23 seconds is right on the nose. Although they shouldn't require\na different leaf page each because all entries for the same master row\nshould be adjacent in the index, so that does sound a little high if\nthis is the only thing going on.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 13:14:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "On 06/26/2014 03:14 PM, Jeff Janes wrote:\n\n> If that is 50 PKs from the master table, it would be about 1000 on the\n> detail table.\n\nYou're right. But here's the funny part: we solved this after we noticed \nhis where clause was directed at the *detail* table instead of the \nmaster table. This was compounded by the fact the planner incorrectly \nestimated the row match count on the detail table due to the well-known \ncorrelation deficiencies especially present in older versions. The row \ncount went from 1000 to 50,000.\n\nThen it joined against the master table. Since 50,000 index page fetches \nfollowed by 50,000 data page fetches would be pretty damn slow, the \nplanner went for a sequence scan on the master table instead. Clearly \nthe old 9.0 planner does not consider transitive IN equality.\n\nI'm curious to see if Aaron can test his structure on 9.3 with the \noriginal data and WHERE clause and see if the planner still goes for the \nterrible plan. If it does, that would seem like an obvious planner tweak \nto me.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 15:26:43 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "\n\n\n>The PK of the master table and the PK of the detail table cannot be\n>the same thing, or they would not have a master-detail relationship.\n>One side has to be an FK, not a PK.\n>\nOf course this is correct. I was trying to make the point that there should be unique indices (of whatever flavor PG uses for PK's by default) on the relevant columns. Since we're referring to a select statement, the actual integrity constraints should not come into play. \n\nI will remember to be more explicit about the schema next time. \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 16:40:51 -0400",
"msg_from": "Aaron Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
},
{
"msg_contents": "\n\n\n>I'm curious to see if Aaron can test his structure on 9.3 with the \n>original data and WHERE clause and see if the planner still goes for\n>the \n>terrible plan. If it does, that would seem like an obvious planner\n>tweak \n>to me.\n\nI will try to spin up a test 9.3 db and run the same queries to see if this is the case. Least I can do in return for the help. \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 16:43:47 -0400",
"msg_from": "Aaron Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to improve perf of 131MM row table?"
}
] |
[
{
"msg_contents": "I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].\n\nThe context is the current machine in use by the customer is a 32 core \none, and due to growth we are looking at something larger (hence 60 cores).\n\nSome initial tests show similar pgbench read only performance to what \nRobert found here \nhttp://rhaas.blogspot.co.nz/2012/04/did-i-say-32-cores-how-about-64.html \n(actually a bit quicker around 400000 tps).\n\nHowever doing a mixed read-write workload is getting results the same or \nonly marginally quicker than the 32 core machine - particularly at \nhigher number of clients (e.g 200 - 500). I have yet to break out the \nperf toolset, but I'm wondering if any folk has compared 32 and 60 (or \n64) core read write pgbench performance?\n\nregards\n\nMark\n\n[1] Details:\n\n4x E7-4890 15 cores each.\n1 TB ram\n16x Toshiba PX02SS SATA SSD\n4x Samsung NVMe XS1715 PCIe SSD\n\nUbuntu 14.04 (Linux 3.13)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Jun 2014 11:49:36 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "60 core performance with 9.3"
},
{
"msg_contents": "On Thu, Jun 26, 2014 at 5:49 PM, Mark Kirkwood\n<[email protected]> wrote:\n> I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].\n>\n> The context is the current machine in use by the customer is a 32 core one,\n> and due to growth we are looking at something larger (hence 60 cores).\n>\n> Some initial tests show similar pgbench read only performance to what Robert\n> found here\n> http://rhaas.blogspot.co.nz/2012/04/did-i-say-32-cores-how-about-64.html\n> (actually a bit quicker around 400000 tps).\n>\n> However doing a mixed read-write workload is getting results the same or\n> only marginally quicker than the 32 core machine - particularly at higher\n> number of clients (e.g 200 - 500). I have yet to break out the perf toolset,\n> but I'm wondering if any folk has compared 32 and 60 (or 64) core read write\n> pgbench performance?\n\nMy guess is that the read only test is CPU / memory bandwidth limited,\nbut the mixed test is IO bound.\n\nWhat's your iostat / vmstat / iotop etc look like when you're doing\nboth read only and read/write mixed?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Jun 2014 20:01:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 27/06/14 14:01, Scott Marlowe wrote:\n> On Thu, Jun 26, 2014 at 5:49 PM, Mark Kirkwood\n> <[email protected]> wrote:\n>> I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].\n>>\n>> The context is the current machine in use by the customer is a 32 core one,\n>> and due to growth we are looking at something larger (hence 60 cores).\n>>\n>> Some initial tests show similar pgbench read only performance to what Robert\n>> found here\n>> http://rhaas.blogspot.co.nz/2012/04/did-i-say-32-cores-how-about-64.html\n>> (actually a bit quicker around 400000 tps).\n>>\n>> However doing a mixed read-write workload is getting results the same or\n>> only marginally quicker than the 32 core machine - particularly at higher\n>> number of clients (e.g 200 - 500). I have yet to break out the perf toolset,\n>> but I'm wondering if any folk has compared 32 and 60 (or 64) core read write\n>> pgbench performance?\n>\n> My guess is that the read only test is CPU / memory bandwidth limited,\n> but the mixed test is IO bound.\n>\n> What's your iostat / vmstat / iotop etc look like when you're doing\n> both read only and read/write mixed?\n>\n>\n\nThat was what I would have thought too, but it does not appear to be the \ncase, here is a typical iostat:\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await r_await w_await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\nnvme0n1 0.00 0.00 0.00 4448.00 0.00 41.47 \n19.10 0.14 0.03 0.00 0.03 0.03 14.40\nnvme1n1 0.00 0.00 0.00 4448.00 0.00 41.47 \n19.10 0.15 0.03 0.00 0.03 0.03 15.20\nnvme2n1 0.00 0.00 0.00 4549.00 0.00 42.20 \n19.00 0.15 0.03 0.00 0.03 0.03 15.20\nnvme3n1 0.00 0.00 0.00 4548.00 0.00 42.19 \n19.00 0.16 0.04 0.00 0.04 0.04 16.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\nmd0 0.00 0.00 0.00 17961.00 0.00 83.67 \n9.54 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 
0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-4 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n\nMy feeling is spinlock or similar, 'perf top' shows\n\nkernel find_busiest_group\nkernel _raw_spin_lock\n\nas the top time users.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Jun 2014 14:28:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 2014-06-27 14:28:20 +1200, Mark Kirkwood wrote:\n> My feeling is spinlock or similar, 'perf top' shows\n> \n> kernel find_busiest_group\n> kernel _raw_spin_lock\n> \n> as the top time users.\n\nThose don't tell that much by themselves, could you do a hierarchical\nprofile? I.e. perf record -ga? That'll at least give the callers for\nkernel level stuff. For more information compile postgres with\n-fno-omit-frame-pointer.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Jun 2014 11:19:51 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 27/06/14 21:19, Andres Freund wrote:\n> On 2014-06-27 14:28:20 +1200, Mark Kirkwood wrote:\n>> My feeling is spinlock or similar, 'perf top' shows\n>>\n>> kernel find_busiest_group\n>> kernel _raw_spin_lock\n>>\n>> as the top time users.\n>\n> Those don't tell that much by themselves, could you do a hierarchical\n> profile? I.e. perf record -ga? That'll at least give the callers for\n> kernel level stuff. For more information compile postgres with\n> -fno-omit-frame-pointer.\n>\n\nExcellent suggestion, will do next week!\n\nregards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Jun 2014 21:30:59 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 27/06/14 21:19, Andres Freund wrote:\n> On 2014-06-27 14:28:20 +1200, Mark Kirkwood wrote:\n>> My feeling is spinlock or similar, 'perf top' shows\n>>\n>> kernel find_busiest_group\n>> kernel _raw_spin_lock\n>>\n>> as the top time users.\n>\n> Those don't tell that much by themselves, could you do a hierarchical\n> profile? I.e. perf record -ga? That'll at least give the callers for\n> kernel level stuff. For more information compile postgres with\n> -fno-omit-frame-pointer.\n>\n\nUnfortunately this did not help - had lots of unknown symbols from \npostgres in the profile - I'm guessing the Ubuntu postgresql-9.3 package \nneeds either the -dev package or to be rebuilt with the enable profile \noption (debug and no-omit-frame-pointer seem to be there already).\n\nHowever further investigation did uncover *very* interesting things. \nFirstly I had previously said that read only performance looked \nok...this was wrong, purely based on comparison to Robert's blog post. \nRebooting the 60 core box with 32 cores enabled showed that we got \n*better* scaling performance in the read only case and illustrated we \nwere hitting a serious regression with more cores. At this point data is \nneeded:\n\nTest: pgbench\nOptions: scale 500\n read only\nOs: Ubuntu 14.04\nPg: 9.3.4\nPg Options:\n max_connections = 200\n shared_buffers = 10GB\n maintenance_work_mem = 1GB\n effective_io_concurrency = 10\n wal_buffers = 32MB\n checkpoint_segments = 192\n checkpoint_completion_target = 0.8\n\n\nResults\n\nClients | 9.3 tps 32 cores | 9.3 tps 60 cores\n--------+------------------+-----------------\n6 | 70400 | 71028\n12 | 98918 | 129140\n24 | 230345 | 240631\n48 | 324042 | 409510\n96 | 346929 | 120464\n192 | 312621 | 92663\n\nSo we have anti scaling with 60 cores as we increase the client \nconnections. Ouch! 
A level of urgency led to trying out Andres's \n'rwlock' 9.4 branch [1] - cherry picking the last 5 commits into 9.4 \nbranch and building a package from that and retesting:\n\nClients | 9.4 tps 60 cores (rwlock)\n--------+--------------------------\n6 | 70189\n12 | 128894\n24 | 233542\n48 | 422754\n96 | 590796\n192 | 630672\n\nWow - that is more like it! Andres that is some nice work, we definitely \nowe you some beers for that :-) I am aware that I need to retest with an \nunpatched 9.4 src - as it is not clear from this data how much is due to \nAndres's patches and how much to the steady stream of 9.4 development. \nI'll post an update on that later, but figured this was interesting \nenough to note for now.\n\n\nRegards\n\nMark\n\n[1] from git://git.postgresql.org/git/users/andresfreund/postgres.git, \ncommits:\n4b82477dcaf81ad7b0c102f4b66e479a5eb9504a\n10d72b97f108b6002210ea97a414076a62302d4e\n67ffebe50111743975d54782a3a94b15ac4e755f\nfe686ed18fe132021ee5e557c67cc4d7c50a1ada\nf2378dc2fa5b73c688f696704976980bab90c611\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 01 Jul 2014 21:48:35 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
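The read-only numbers above came from pgbench; a minimal sketch of such a run follows. The exact flags, duration, and database name Mark used are not given in the thread, so those are assumptions here (`-S` selects pgbench's built-in select-only script):

```shell
# Hedged sketch of a scale-500 read-only scaling run; the database name
# "bench", the 300 s duration, and the client list are illustrative only.
createdb bench
pgbench -i -s 500 bench                       # one-off: load scale-500 data
for c in 6 12 24 48 96 192; do
    pgbench -S -c "$c" -j "$c" -T 300 bench   # -S = built-in select-only script
done
```

This needs a running PostgreSQL server, so it is a command fragment rather than a self-checking script.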
{
"msg_contents": "On 01/07/14 21:48, Mark Kirkwood wrote:\n\n> [1] from git://git.postgresql.org/git/users/andresfreund/postgres.git,\n> commits:\n> 4b82477dcaf81ad7b0c102f4b66e479a5eb9504a\n> 10d72b97f108b6002210ea97a414076a62302d4e\n> 67ffebe50111743975d54782a3a94b15ac4e755f\n> fe686ed18fe132021ee5e557c67cc4d7c50a1ada\n> f2378dc2fa5b73c688f696704976980bab90c611\n>\n>\n\nHmmm, should read last 5 commits in 'rwlock-contention' and I had pasted \nthe commit nos from my tree not Andres's, sorry, here are the right ones:\n472c87400377a7dc418d8b77e47ba08f5c89b1bb\ne1e549a8e42b753cc7ac60e914a3939584cb1c56\n65c2174469d2e0e7c2894202dc63b8fa6f8d2a7f\n959aa6e0084d1264e5b228e5a055d66e5173db7d\na5c3ddaef0ee679cf5e8e10d59e0a1fe9f0f1893\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 01 Jul 2014 22:04:29 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
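Cherry-picking a short series of commits onto a build branch, as described above, looks roughly like this. The repository and commit here are a toy stand-in, not the real postgresql.git or the listed commit ids:

```shell
# Toy demonstration of replaying a fix commit onto a separate build branch.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo
echo base > f.txt
git add f.txt
git commit -qm "base"
start=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git checkout -qb rwlock-contention       # hypothetical feature branch
echo fix >> f.txt
git commit -qam "rwlock fix"
fix_sha=$(git rev-parse HEAD)
git checkout -q "$start"
git checkout -qb build-9.4               # hypothetical packaging branch
git cherry-pick "$fix_sha" > /dev/null   # replay the commit onto the build branch
tail -n1 f.txt                           # prints: fix
```

For the real case you would repeat `git cherry-pick <sha>` for each of the five commits, in order, on the 9.4 branch before building the package.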
{
"msg_contents": "On 2014-07-01 21:48:35 +1200, Mark Kirkwood wrote:\n> On 27/06/14 21:19, Andres Freund wrote:\n> >On 2014-06-27 14:28:20 +1200, Mark Kirkwood wrote:\n> >>My feeling is spinlock or similar, 'perf top' shows\n> >>\n> >>kernel find_busiest_group\n> >>kernel _raw_spin_lock\n> >>\n> >>as the top time users.\n> >\n> >Those don't tell that much by themselves, could you do a hierarchical\n> >profile? I.e. perf record -ga? That'll at least give the callers for\n> >kernel level stuff. For more information compile postgres with\n> >-fno-omit-frame-pointer.\n> >\n> \n> Unfortunately this did not help - had lots of unknown symbols from postgres\n> in the profile - I'm guessing the Ubuntu postgresql-9.3 package needs either\n> the -dev package or to be rebuilt with the enable profile option (debug and\n> no-omit-frame-pointer seem to be there already).\n\nYou need to install the -dbg package. My bet is you'll see s_lock high\nin the profile, called mainly from the procarray and buffer mapping\nlwlocks.\n\n> Test: pgbench\n> Options: scale 500\n> read only\n> Os: Ubuntu 14.04\n> Pg: 9.3.4\n> Pg Options:\n> max_connections = 200\n\nJust as an experiment I'd suggest increasing max_connections by one and\ntwo and quickly retesting - there's some cacheline alignment issues that\naren't fixed yet that happen to vanish with some max_connections\nsettings.\n\n> shared_buffers = 10GB\n> maintenance_work_mem = 1GB\n> effective_io_concurrency = 10\n> wal_buffers = 32MB\n> checkpoint_segments = 192\n> checkpoint_completion_target = 0.8\n> \n> \n> Results\n> \n> Clients | 9.3 tps 32 cores | 9.3 tps 60 cores\n> --------+------------------+-----------------\n> 6 | 70400 | 71028\n> 12 | 98918 | 129140\n> 24 | 230345 | 240631\n> 48 | 324042 | 409510\n> 96 | 346929 | 120464\n> 192 | 312621 | 92663\n> \n> So we have anti scaling with 60 cores as we increase the client connections.\n> Ouch! 
A level of urgency led to trying out Andres's 'rwlock' 9.4 branch [1]\n> - cherry picking the last 5 commits into 9.4 branch and building a package\n> from that and retesting:\n> \n> Clients | 9.4 tps 60 cores (rwlock)\n> --------+--------------------------\n> 6 | 70189\n> 12 | 128894\n> 24 | 233542\n> 48 | 422754\n> 96 | 590796\n> 192 | 630672\n> \n> Wow - that is more like it! Andres that is some nice work, we definitely owe\n> you some beers for that :-) I am aware that I need to retest with an\n> unpatched 9.4 src - as it is not clear from this data how much is due to\n> Andres's patches and how much to the steady stream of 9.4 development. I'll\n> post an update on that later, but figured this was interesting enough to\n> note for now.\n\nCool. That's what I like (and expect) to see :). I don't think unpatched\n9.4 will show significantly different results than 9.3, but it'd be good\nto validate that. If you do so, could you post the results in the\n-hackers thread I just CCed you on? That'll help the work to get into\n9.5.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Jul 2014 12:13:07 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 01/07/14 22:13, Andres Freund wrote:\n> On 2014-07-01 21:48:35 +1200, Mark Kirkwood wrote:\n>> - cherry picking the last 5 commits into 9.4 branch and building a package\n>> from that and retesting:\n>>\n>> Clients | 9.4 tps 60 cores (rwlock)\n>> --------+--------------------------\n>> 6 | 70189\n>> 12 | 128894\n>> 24 | 233542\n>> 48 | 422754\n>> 96 | 590796\n>> 192 | 630672\n>>\n>> Wow - that is more like it! Andres that is some nice work, we definitely owe\n>> you some beers for that :-) I am aware that I need to retest with an\n>> unpatched 9.4 src - as it is not clear from this data how much is due to\n>> Andres's patches and how much to the steady stream of 9.4 development. I'll\n>> post an update on that later, but figured this was interesting enough to\n>> note for now.\n>\n> Cool. That's what I like (and expect) to see :). I don't think unpatched\n> 9.4 will show significantly different results than 9.3, but it'd be good\n> to validate that. If you do so, could you post the results in the\n> -hackers thread I just CCed you on? That'll help the work to get into\n> 9.5.\n\nSo we seem to have nailed read only performance. Going back and \nrevisiting read write performance finds:\n\nPostgres 9.4 beta\nrwlock patch\npgbench scale = 2000\n\nmax_connections = 200;\nshared_buffers = \"10GB\";\nmaintenance_work_mem = \"1GB\";\neffective_io_concurrency = 10;\nwal_buffers = \"32MB\";\ncheckpoint_segments = 192;\ncheckpoint_completion_target = 0.8;\n\nclients | tps (32 cores) | tps\n---------+----------------+---------\n6 | 8313 | 8175\n12 | 11012 | 14409\n24 | 16151 | 17191\n48 | 21153 | 23122\n96 | 21977 | 22308\n192 | 22917 | 23109\n\n\nSo we are back to not doing significantly better than 32 cores. Hmmm. 
\nDoing quite a few more tweaks gets some better numbers:\n\nkernel.sched_autogroup_enabled=0\nkernel.sched_migration_cost_ns=5000000\nnet.core.somaxconn=1024\n/sys/kernel/mm/transparent_hugepage/enabled [never]\n\n+checkpoint_segments = 1920\n+wal_buffers = \"256MB\";\n\n\nclients | tps\n---------+---------\n6 | 8366\n12 | 15988\n24 | 19828\n48 | 30315\n96 | 31649\n192 | 29497\n\nOne more:\n\n+wal_sync_method = \"open_datasync\"\n\nclients | tps\n---------+---------\n6 | 9566\n12 | 17129\n24 | 22962\n48 | 34564\n96 | 32584\n192 | 28367\n\nSo this looks better - however I suspect 32 core performance would \nimprove with these as well!\n\nThe problem does *not* look to be connected with IO (I will include some \niostat below). So time to get the profiler out (192 clients for 1 minute):\n\nFull report http://paste.ubuntu.com/7777886/\n\n# ========\n# captured on: Fri Jul 11 03:09:06 2014\n# hostname : ncel-prod-db3\n# os release : 3.13.0-24-generic\n# perf version : 3.13.9\n# arch : x86_64\n# nrcpus online : 60\n# nrcpus avail : 60\n# cpudesc : Intel(R) Xeon(R) CPU E7-4890 v2 @ 2.80GHz\n# cpuid : GenuineIntel,6,62,7\n# total memory : 1056692116 kB\n# cmdline : /usr/lib/linux-tools-3.13.0-24/perf record -ag\n# event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 \n= 0x0, excl_usr = 0, excl_kern = 0, excl_host = 0, excl_guest = 1, \nprecise_ip = 0, attr_mmap2 = 0, attr_mmap = 1, attr_mmap_data = 0\n# HEADER_CPU_TOPOLOGY info available, use -I to display\n# HEADER_NUMA_TOPOLOGY info available, use -I to display\n# pmu mappings: cpu = 4, uncore_cbox_10 = 17, uncore_cbox_11 = 18, \nuncore_cbox_12 = 19, uncore_cbox_13 = 20, uncore_cbox_14 = 21, software \n= 1, uncore_irp = 33, uncore_pcu = 22, tracepoint = 2, uncore_imc_0 = \n25, uncore_imc_1 = 26, uncore_imc_2 = 27, uncore_imc_3 = 28, \nuncore_imc_4 = 29, uncore_imc_5 = 30, uncore_imc_6 = 31, uncore_imc_7 = \n32, uncore_qpi_0 = 34, uncore_qpi_1 = 35, uncore_qpi_2 = 36, \nuncore_cbox_0 = 7, uncore_cbox_1 
= 8, uncore_cbox_2 = 9, uncore_cbox_3 = \n10, uncore_cbox_4 = 11, uncore_cbox_5 = 12, uncore_cbox_6 = 13, \nuncore_cbox_7 = 14, uncore_cbox_8 = 15, uncore_cbox_9 = 16, \nuncore_r2pcie = 37, uncore_r3qpi_0 = 38, uncore_r3qpi_1 = 39, breakpoint \n= 5, uncore_ha_0 = 23, uncore_ha_1 = 24, uncore_ubox = 6\n# ========\n#\n# Samples: 1M of event 'cycles'\n# Event count (approx.): 359906321606\n#\n# Overhead Command Shared Object \n Symbol\n# ........ .............. ....................... \n.....................................................\n#\n 8.82% postgres [kernel.kallsyms] [k] \n_raw_spin_lock_irqsave\n |\n --- _raw_spin_lock_irqsave\n |\n |--75.69%-- pagevec_lru_move_fn\n | __lru_cache_add\n | lru_cache_add\n | putback_lru_page\n | migrate_pages\n | migrate_misplaced_page\n | do_numa_page\n | handle_mm_fault\n | __do_page_fault\n | do_page_fault\n | page_fault\n | |\n | |--31.07%-- PinBuffer\n | | |\n | | --100.00%-- ReadBuffer_common\n | | |\n | | --100.00%-- \nReadBufferExtended\n | | | \n\n | | \n|--71.62%-- index_fetch_heap\n | | | \n index_getnext\n | | | \n IndexNext\n | | | \n ExecScan\n | | | \n ExecProcNode\n | | | \n ExecModifyTable\n | | | \n ExecProcNode\n | | | \n standard_ExecutorRun\n | | | \n ProcessQuery\n | | | \n PortalRunMulti\n | | | \n PortalRun\n | | | \n PostgresMain\n | | | \n ServerLoop\n | | | \n\n | | \n|--17.47%-- heap_hot_search\n | | | \n _bt_check_unique\n | | | \n _bt_doinsert\n | | | \n btinsert\n | | | \n FunctionCall6Coll\n | | | \n index_insert\n | | | \n |\n | | | \n --100.00%-- ExecInsertIndexTuples\n | | | \n ExecModifyTable\n | | | \n ExecProcNode\n | | | \n standard_ExecutorRun\n | | | \n ProcessQuery\n | | | \n PortalRunMulti\n | | | \n PortalRun\n | | | \n PostgresMain\n | | | \n ServerLoop\n | | | \n\n | | \n|--3.81%-- RelationGetBufferForTuple\n | | | \n heap_update\n | | | \n ExecModifyTable\n | | | \n ExecProcNode\n | | | \n standard_ExecutorRun\n | | | \n ProcessQuery\n | | | \n PortalRunMulti\n | | | \n 
PortalRun\n | | | \n PostgresMain\n | | | \n ServerLoop\n | | | \n\n | | \n|--3.65%-- _bt_relandgetbuf\n | | | \n _bt_search\n | | | \n _bt_first\n | | | \n |\n | | | \n --100.00%-- btgettuple\n | | | \n FunctionCall2Coll\n | | | \n index_getnext_tid\n | | | \n index_getnext\n | | | \n IndexNext\n | | | \n ExecScan\n | | | \n ExecProcNode\n | | | \n |\n | | | \n |--97.56%-- ExecModifyTable\n | | | \n | ExecProcNode\n | | | \n | standard_ExecutorRun\n | | | \n | ProcessQuery\n | | | \n | PortalRunMulti\n | | | \n | PortalRun\n | | | \n | PostgresMain\n | | | \n | ServerLoop\n | | | \n |\n | | | \n --2.44%-- standard_ExecutorRun\n | | | \n PortalRunSelect\n | | | \n PortalRun\n | | | \n PostgresMain\n | | | \n ServerLoop\n | | | \n\n | | \n|--2.69%-- fsm_readbuf\n | | | \n fsm_set_and_search\n | | | \n RecordPageWithFreeSpace\n | | | \n lazy_vacuum_rel\n | | | \n vacuum_rel\n | | | \n vacuum\n | | | \n do_autovacuum\n | | | \n\n | | \n--0.75%-- lazy_vacuum_rel\n | | \n vacuum_rel\n | | \n vacuum\n | | \n do_autovacuum\n | |\n | |--4.66%-- SearchCatCache\n | | |\n | | |--49.62%-- oper\n | | | make_op\n | | | transformExprRecurse\n | | | transformExpr\n | | | |\n | | | |--90.02%-- \ntransformTargetEntry\n | | | | \ntransformTargetList\n | | | | \ntransformStmt\n | | | | \nparse_analyze\n | | | | \npg_analyze_and_rewrite\n | | | | \nPostgresMain\n | | | | ServerLoop\n | | | |\n | | | --9.98%-- \ntransformWhereClause\n | | | \ntransformStmt\n | | | \nparse_analyze\n | | | \npg_analyze_and_rewrite\n | | | \nPostgresMain\n | | | ServerLoop\n\n\n\nWith respect to IO, here are typical iostat outputs:\n\nsda HW RAID 10 array SAS SSD [data]\nmd0 SW RAID 10 of nvme[0-3]n1 PCie SSD [xlog]\n\nNon Checkpoint\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await r_await w_await svctm %util\nsda 0.00 15.00 0.00 3.00 0.00 0.07 \n50.67 0.00 0.00 0.00 0.00 0.00 0.00\nnvme0n1 0.00 0.00 0.00 4198.00 0.00 146.50 \n71.47 0.18 0.05 0.00 0.05 0.04 18.40\nnvme1n1 0.00 0.00 
0.00 4198.00 0.00 146.50 \n71.47 0.18 0.04 0.00 0.04 0.04 17.20\nnvme2n1 0.00 0.00 0.00 4126.00 0.00 146.08 \n72.51 0.15 0.04 0.00 0.04 0.03 14.00\nnvme3n1 0.00 0.00 0.00 4125.00 0.00 146.03 \n72.50 0.15 0.04 0.00 0.04 0.03 14.40\nmd0 0.00 0.00 0.00 16022.00 0.00 292.53 \n37.39 0.00 0.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 0.00 18.00 0.00 0.07 \n8.44 0.00 0.00 0.00 0.00 0.00 0.00\ndm-4 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n\nCheckpoint\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await r_await w_await svctm %util\nsda 0.00 29.00 1.00 96795.00 0.00 1074.52 \n22.73 133.13 1.38 4.00 1.38 0.01 100.00\nnvme0n1 0.00 0.00 0.00 3564.00 0.00 56.71 \n32.59 0.12 0.03 0.00 0.03 0.03 11.60\nnvme1n1 0.00 0.00 0.00 3564.00 0.00 56.71 \n32.59 0.12 0.03 0.00 0.03 0.03 12.00\nnvme2n1 0.00 0.00 0.00 3884.00 0.00 59.12 \n31.17 0.14 0.04 0.00 0.04 0.04 13.60\nnvme3n1 0.00 0.00 0.00 3884.00 0.00 59.12 \n31.17 0.13 0.03 0.00 0.03 0.03 12.80\nmd0 0.00 0.00 0.00 14779.00 0.00 115.80 \n16.05 0.00 0.00 0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 3.00 0.00 0.01 \n8.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-3 0.00 0.00 1.00 96830.00 0.00 1074.83 \n22.73 134.79 1.38 4.00 1.38 0.01 100.00\ndm-4 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00 0.00 0.00\n\n\nThanks for your patience if you have read this far!\n\nRegards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 12:40:15 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
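The OS-level tweaks listed in the mail above can be applied at runtime like so (run as root; the transparent hugepage path is the standard sysfs location on 3.13-era kernels — these settings revert at reboot unless persisted in sysctl.conf):

```shell
# Scheduler and network tweaks from the mail (run as root).
sysctl -w kernel.sched_autogroup_enabled=0
sysctl -w kernel.sched_migration_cost_ns=5000000
sysctl -w net.core.somaxconn=1024
# Disable transparent hugepages:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

This is a config fragment for a privileged host, so no self-test is attached.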
{
"msg_contents": "On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:\n> On 01/07/14 22:13, Andres Freund wrote:\n> >On 2014-07-01 21:48:35 +1200, Mark Kirkwood wrote:\n> >>- cherry picking the last 5 commits into 9.4 branch and building a package\n> >>from that and retesting:\n> >>\n> >>Clients | 9.4 tps 60 cores (rwlock)\n> >>--------+--------------------------\n> >>6 | 70189\n> >>12 | 128894\n> >>24 | 233542\n> >>48 | 422754\n> >>96 | 590796\n> >>192 | 630672\n> >>\n> >>Wow - that is more like it! Andres that is some nice work, we definitely owe\n> >>you some beers for that :-) I am aware that I need to retest with an\n> >>unpatched 9.4 src - as it is not clear from this data how much is due to\n> >>Andres's patches and how much to the steady stream of 9.4 development. I'll\n> >>post an update on that later, but figured this was interesting enough to\n> >>note for now.\n> >\n> >Cool. That's what I like (and expect) to see :). I don't think unpatched\n> >9.4 will show significantly different results than 9.3, but it'd be good\n> >to validate that. If you do so, could you post the results in the\n> >-hackers thread I just CCed you on? That'll help the work to get into\n> >9.5.\n> \n> So we seem to have nailed read only performance. 
Going back and revisiting\n> read write performance finds:\n> \n> Postgres 9.4 beta\n> rwlock patch\n> pgbench scale = 2000\n> \n> max_connections = 200;\n> shared_buffers = \"10GB\";\n> maintenance_work_mem = \"1GB\";\n> effective_io_concurrency = 10;\n> wal_buffers = \"32MB\";\n> checkpoint_segments = 192;\n> checkpoint_completion_target = 0.8;\n> \n> clients | tps (32 cores) | tps\n> ---------+----------------+---------\n> 6 | 8313 | 8175\n> 12 | 11012 | 14409\n> 24 | 16151 | 17191\n> 48 | 21153 | 23122\n> 96 | 21977 | 22308\n> 192 | 22917 | 23109\n\nOn that scale - that's bigger than shared_buffers IIRC - I'd not expect\nthe patch to make much of a difference.\n\n> kernel.sched_autogroup_enabled=0\n> kernel.sched_migration_cost_ns=5000000\n> net.core.somaxconn=1024\n> /sys/kernel/mm/transparent_hugepage/enabled [never]\n> \n> Full report http://paste.ubuntu.com/7777886/\n\n> #\n> 8.82% postgres [kernel.kallsyms] [k]\n> _raw_spin_lock_irqsave\n> |\n> --- _raw_spin_lock_irqsave\n> |\n> |--75.69%-- pagevec_lru_move_fn\n> | __lru_cache_add\n> | lru_cache_add\n> | putback_lru_page\n> | migrate_pages\n> | migrate_misplaced_page\n> | do_numa_page\n> | handle_mm_fault\n> | __do_page_fault\n> | do_page_fault\n> | page_fault\n\nSo, the majority of the time is spent in numa page migration. Can you\ndisable numa_balancing? I'm not sure if your kernel version does that at\nruntime or whether you need to reboot.\nThe kernel.numa_balancing sysctl might work. Otherwise you probably need\nto boot with numa_balancing=0.\n\nIt'd also be worthwhile to test this with numactl --interleave.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 10:22:07 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
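A sketch of the NUMA knobs Andres mentions (the sysctl exists on 3.13-era kernels; the data directory path is a placeholder; needs root):

```shell
# Check whether automatic NUMA balancing is active (1 = on).
cat /proc/sys/kernel/numa_balancing

# Disable it at runtime...
sysctl -w kernel.numa_balancing=0

# ...or at boot, by appending to the kernel command line:
#   numa_balancing=0

# Alternatively, spread PostgreSQL's allocations across all nodes:
numactl --interleave=all pg_ctl -D /path/to/data start
```

Root-only system configuration, so no self-test is attached.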
{
"msg_contents": "On 11/07/14 20:22, Andres Freund wrote:\n> On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:\n\n>> Postgres 9.4 beta\n>> rwlock patch\n>> pgbench scale = 2000\n>>\n> On that scale - that's bigger than shared_buffers IIRC - I'd not expect\n> the patch to make much of a difference.\n>\n\nRight - we did test with it bigger (can't recall exactly how big), but \nwill retry again after setting the numa parameters below.\n\n>> #\n>> 8.82% postgres [kernel.kallsyms] [k]\n>> _raw_spin_lock_irqsave\n>> |\n>> --- _raw_spin_lock_irqsave\n>> |\n>> |--75.69%-- pagevec_lru_move_fn\n>> | __lru_cache_add\n>> | lru_cache_add\n>> | putback_lru_page\n>> | migrate_pages\n>> | migrate_misplaced_page\n>> | do_numa_page\n>> | handle_mm_fault\n>> | __do_page_fault\n>> | do_page_fault\n>> | page_fault\n>\n> So, the majority of the time is spent in numa page migration. Can you\n> disable numa_balancing? I'm not sure if your kernel version does that at\n> runtime or whether you need to reboot.\n> The kernel.numa_balancing sysctl might work. Otherwise you probably need\n> to boot with numa_balancing=0.\n>\n> It'd also be worthwhile to test this with numactl --interleave.\n>\n\nThat was my feeling too - but I had no idea what the magic switch was to \ntame it (appears to be in 3.13 kernels), will experiment and report \nback. Thanks again!\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 20:54:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> wrote:\n> On 11/07/14 20:22, Andres Freund wrote:\n\n>> So, the majority of the time is spent in numa page migration.\n>> Can you disable numa_balancing? I'm not sure if your kernel\n>> version does that at runtime or whether you need to reboot.\n>> The kernel.numa_balancing sysctl might work. Otherwise you\n>> probably need to boot with numa_balancing=0.\n>>\n>> It'd also be worthwhile to test this with numactl --interleave.\n>\n> That was my feeling too - but I had no idea what the magic switch\n> was to tame it (appears to be in 3.13 kernels), will experiment\n> and report back. Thanks again!\n\nIt might be worth a test using a cpuset to interleave OS cache and\nthe NUMA patch I submitted to the current CF to see whether this is\ngetting into territory where the patch makes a bigger difference. \nI would expect it to do much better than using numactl --interleave\nbecause work_mem and other process-local memory would be allocated\nin \"near\" memory for each process.\n\nhttp://www.postgresql.org/message-id/[email protected]\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 06:19:09 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
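Setting up the cpuset Kevin describes might look like the following sketch. The mount point matches the one used later in the thread; the memory-node range and moving the postmaster by pid are assumptions (a 4-node box is assumed; needs root):

```shell
# Legacy cpuset filesystem; exposes plain "cpus"/"mems"/"tasks" files.
mount -t cpuset none /dev/cpuset
mkdir /dev/cpuset/postgres
echo 0-59 > /dev/cpuset/postgres/cpus    # all cores, per numactl --hardware
echo 0-3  > /dev/cpuset/postgres/mems    # all memory nodes (assumed 4 sockets)
echo "$POSTMASTER_PID" > /dev/cpuset/postgres/tasks  # move postmaster into the set
```

Root-only system configuration, so no self-test is attached.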
{
"msg_contents": "On 11/07/14 20:22, Andres Freund wrote:\n> On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:\n>> Full report http://paste.ubuntu.com/7777886/\n>\n>> #\n>> 8.82% postgres [kernel.kallsyms] [k]\n>> _raw_spin_lock_irqsave\n>> |\n>> --- _raw_spin_lock_irqsave\n>> |\n>> |--75.69%-- pagevec_lru_move_fn\n>> | __lru_cache_add\n>> | lru_cache_add\n>> | putback_lru_page\n>> | migrate_pages\n>> | migrate_misplaced_page\n>> | do_numa_page\n>> | handle_mm_fault\n>> | __do_page_fault\n>> | do_page_fault\n>> | page_fault\n>\n> So, the majority of the time is spent in numa page migration. Can you\n> disable numa_balancing? I'm not sure if your kernel version does that at\n> runtime or whether you need to reboot.\n> The kernel.numa_balancing sysctl might work. Otherwise you probably need\n> to boot with numa_balancing=0.\n>\n> It'd also be worthwhile to test this with numactl --interleave.\n>\n\nTrying out with numa_balancing=0 seemed to get essentially the same \nperformance. Similarly wrapping postgres startup with --interleave.\n\nAll this made me want to try with numa *really* disabled. So rebooted \nthe box with \"numa=off\" appended to the kernel cmdline. Somewhat \nsurprisingly (to me anyway), the numbers were essentially identical. 
The \nprofile, however is quite different:\n\nFull report at http://paste.ubuntu.com/7806285/\n\n\n 4.56% postgres [kernel.kallsyms] [k] \n_raw_spin_lock_irqsave \n \n\n |\n --- _raw_spin_lock_irqsave\n |\n |--41.89%-- try_to_wake_up\n | |\n | |--96.12%-- default_wake_function\n | | |\n | | |--99.96%-- pollwake\n | | | __wake_up_common\n | | | __wake_up_sync_key\n | | | sock_def_readable\n | | | |\n | | | |--99.94%-- \nunix_stream_sendmsg\n | | | | \nsock_sendmsg\n | | | | \nSYSC_sendto\n | | | | \nsys_sendto\n | | | | tracesys\n | | | | \n__libc_send\n | | | | pq_flush\n | | | | \nReadyForQuery\n | | | | \nPostgresMain\n | | | | \nServerLoop\n | | | | \nPostmasterMain\n | | | | main\n | | | | \n__libc_start_main\n | | | --0.06%-- [...]\n | | --0.04%-- [...]\n | |\n | |--2.87%-- wake_up_process\n | | |\n | | |--95.71%-- \nwake_up_sem_queue_do\n | | | SYSC_semtimedop\n | | | sys_semop\n | | | tracesys\n | | | __GI___semop\n | | | |\n | | | |--99.75%-- \nLWLockRelease\n | | | | | \n\n | | | | \n|--25.09%-- RecordTransactionCommit\n | | | | | \n CommitTransaction\n | | | | | \n CommitTransactionCommand\n | | | | | \n finish_xact_command.part.4\n | | | | | \n PostgresMain\n | | | | | \n ServerLoop\n | | | | | \n PostmasterMain\n | | | | | \n main\n | | | | | \n __libc_start_main\n\n\n\nregards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Jul 2014 11:58:46 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 12/07/14 01:19, Kevin Grittner wrote:\n>\n> It might be worth a test using a cpuset to interleave OS cache and\n> the NUMA patch I submitted to the current CF to see whether this is\n> getting into territory where the patch makes a bigger difference.\n> I would expect it to do much better than using numactl --interleave\n> because work_mem and other process-local memory would be allocated\n> in \"near\" memory for each process.\n>\n> http://www.postgresql.org/message-id/[email protected]\n>\n\nThanks Kevin - I did try this out - seemed slightly better than using \n--interleave, but almost identical to the results posted previously.\n\nHowever looking at my postgres binary with ldd, I'm not seeing any link \nto libnuma (despite it demanding the library whilst building), so I \nwonder if my package build has somehow vanilla-ified the result :-(\n\nAlso I am guessing that with 60 cores I do:\n\n$ sudo /bin/bash -c \"echo 0-59 >/dev/cpuset/postgres/cpus\"\n\ni.e cpus are cores not packages...? If I've stuffed it up I'll redo!\n\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Jul 2014 12:09:13 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> wrote:\n> On 12/07/14 01:19, Kevin Grittner wrote:\n>>\n>> It might be worth a test using a cpuset to interleave OS cache and\n>> the NUMA patch I submitted to the current CF to see whether this is\n>> getting into territory where the patch makes a bigger difference.\n>> I would expect it to do much better than using numactl --interleave\n>> because work_mem and other process-local memory would be allocated\n>> in \"near\" memory for each process.\n>>\n> http://www.postgresql.org/message-id/[email protected]\n>\n> Thanks Kevin - I did try this out - seemed slightly better than using\n> --interleave, but almost identical to the results posted previously.\n>\n> However looking at my postgres binary with ldd, I'm not seeing any link\n> to libnuma (despite it demanding the library whilst building), so I\n> wonder if my package build has somehow vanilla-ified the result :-(\n\nThat is odd; not sure what to make of that!\n\n> Also I am guessing that with 60 cores I do:\n>\n> $ sudo /bin/bash -c \"echo 0-59 >/dev/cpuset/postgres/cpus\"\n>\n> i.e cpus are cores not packages...?\n\nRight; basically, as a guide, you can use the output from:\n\n$ numactl --hardware\n\nUse the union of all the \"cpu\" numbers from the \"node n cpus\" lines. The\nabove command is also a good way to see how unbalanced memory usage has\nbecome while running a test.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jul 2014 07:19:13 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 17/07/14 11:58, Mark Kirkwood wrote:\n\n>\n> Trying out with numa_balancing=0 seemed to get essentially the same\n> performance. Similarly wrapping postgres startup with --interleave.\n>\n> All this made me want to try with numa *really* disabled. So rebooted\n> the box with \"numa=off\" appended to the kernel cmdline. Somewhat\n> surprisingly (to me anyway), the numbers were essentially identical. The\n> profile, however is quite different:\n>\n\nA little more tweaking got some further improvement:\n\nrwlocks patch as before\n\nwal_buffers = 256MB\ncheckpoint_segments = 1920\nwal_sync_method = open_datasync\n\nLSI RAID adaptor disable read ahead and write cache for SSD fast path mode\nnuma_balancing = 0\n\n\nPgbench scale 2000 again:\n\nclients | tps (prev) | tps (tweaked config)\n---------+------------+---------\n6 | 8175 | 8281\n12 | 14409 | 15896\n24 | 17191 | 19522\n48 | 23122 | 29776\n96 | 22308 | 32352\n192 | 23109 | 28804\n\n\nNow recall we were seeing no actual tps changes with numa_balancing=0 or \n1 (so the improvement above is from the other changes), but figured it \nmight be informative to try to track down what the non-numa bottlenecks \nlooked like. 
We tried profiling the entire 10 minute run which showed \nthe stats collector as a possible source of contention:\n\n\n 3.86% postgres [kernel.kallsyms] [k] _raw_spin_lock_bh\n |\n --- _raw_spin_lock_bh\n |\n |--95.78%-- lock_sock_nested\n | udpv6_sendmsg\n | inet_sendmsg\n | sock_sendmsg\n | SYSC_sendto\n | sys_sendto\n | tracesys\n | __libc_send\n | |\n | |--99.17%-- pgstat_report_stat\n | | PostgresMain\n | | ServerLoop\n | | PostmasterMain\n | | main\n | | __libc_start_main\n | |\n | |--0.77%-- pgstat_send_bgwriter\n | | BackgroundWriterMain\n | | AuxiliaryProcessMain\n | | 0x7f08efe8d453\n | | reaper\n | | __restore_rt\n | | PostmasterMain\n | | main\n | | __libc_start_main\n | --0.07%-- [...]\n |\n |--2.54%-- __lock_sock\n | |\n | |--91.95%-- lock_sock_nested\n | | udpv6_sendmsg\n | | inet_sendmsg\n | | sock_sendmsg\n | | SYSC_sendto\n | | sys_sendto\n | | tracesys\n | | __libc_send\n | | |\n | | |--99.73%-- pgstat_report_stat\n | | | PostgresMain\n | | | ServerLoop\n\n\n\nDisabling track_counts and rerunning pgbench:\n\nclients | tps (no counts)\n---------+------------\n6 | 9806\n12 | 18000\n24 | 29281\n48 | 43703\n96 | 54539\n192 | 36114\n\n\nWhile these numbers look great in the middle range (12-96 clients), the \nbenefit looks to be tailing off as client numbers increase. 
Also running \nwith no stats (and hence no auto vacuum or analyze) is way too scary!\n\nTrying out less write heavy workloads shows that the stats overhead does \nnot appear to be significant for *read* heavy cases, so this result \nabove is perhaps more of a curiosity than anything (given that read \nheavy is more typical...and our real workload is more similar to read \nheavy).\n\nThe profile for counts off looks like:\n\n 4.79% swapper [kernel.kallsyms] [k] read_hpet\n |\n --- read_hpet\n |\n |--97.10%-- ktime_get\n | |\n | |--35.24%-- clockevents_program_event\n | | tick_program_event\n | | |\n | | |--56.59%-- __hrtimer_start_range_ns\n | | | |\n | | | |--78.12%-- hrtimer_start_range_ns\n | | | | tick_nohz_restart\n | | | | tick_nohz_idle_exit\n | | | | cpu_startup_entry\n | | | | |\n | | | | |--98.84%-- start_secondary\n | | | | --1.16%-- rest_init\n | | | | start_kernel\n | | | | x86_64_start_reservations\n | | | | x86_64_start_kernel\n | | | |\n | | | --21.88%-- hrtimer_start\n | | | tick_nohz_stop_sched_tick\n | | | __tick_nohz_idle_enter\n | | | |\n | | | |--99.89%-- tick_nohz_idle_enter\n | | | | cpu_startup_entry\n | | | | |--98.30%-- start_secondary\n | | | | --1.70%-- rest_init\n | | | | start_kernel\n | | | | x86_64_start_reservations\n | | | | x86_64_start_kernel\n | | | --0.11%-- [...]\n | | |\n | | |--40.25%-- hrtimer_force_reprogram\n | | | __remove_hrtimer\n | | | |\n | | | |--89.68%-- __hrtimer_start_range_ns\n | | | | hrtimer_start\n | | | | tick_nohz_stop_sched_tick\n | | | | __tick_nohz_idle_enter\n | | | | |--99.90%-- tick_nohz_idle_enter\n | | | | | cpu_startup_entry\n | | | | | |--99.04%-- start_secondary\n | | | | | --0.96%-- rest_init\n | | | | | start_kernel\n | | | | | x86_64_start_reservations\n | | | | | x86_64_start_kernel\n | | | | --0.10%-- [...]\n | | | [...]\n\n\nAny thoughts on how to proceed further appreciated!\n\nCheers,\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 13:44:54 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
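Mark's tail-off above can be made concrete by dividing tps by client count. A quick sketch over the posted `track_counts = off` figures (the numbers are copied from the message above; the helper name is illustrative, not from the thread):

```python
# Per-client throughput from the track_counts=off pgbench run posted above.
no_counts = [(6, 9806), (12, 18000), (24, 29281),
             (48, 43703), (96, 54539), (192, 36114)]

def per_client_tps(results):
    """Return tps divided by client count for each data point."""
    return {clients: tps / clients for clients, tps in results}

eff = per_client_tps(no_counts)
# Efficiency falls steadily as clients increase, and raw tps actually
# regresses between 96 and 192 clients - the tail-off Mark describes.
```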
{
    "msg_contents": "On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:\n>\n> While these numbers look great in the middle range (12-96 clients), then\n> benefit looks to be tailing off as client numbers increase. Also running\n> with no stats (and hence no auto vacuum or analyze) is way too scary!\n\nI assume you've disabled the statistics collector, which has nothing to do\nwith vacuum or analyze.\n\nThere are two kinds of statistics in PostgreSQL - data distribution\nstatistics (which are collected by ANALYZE and stored in actual tables\nwithin the database) and runtime statistics (which are collected by the\nstats collector and stored in a file somewhere on the disk).\n\nBy disabling the statistics collector you lose runtime counters - number of\nsequential/index scans on a table, tuples read from a relation etc. But\nit does not influence VACUUM or planning at all.\n\nAlso, it's mostly async (send over UDP and you're done) and shouldn't make\nmuch difference unless you have a large number of objects. There are ways to\nimprove this (e.g. by placing the stat files into a tmpfs).\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 10:42:17 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "\"Tomas Vondra\" <[email protected]> writes:\n> On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:\n>> While these numbers look great in the middle range (12-96 clients), then\n>> benefit looks to be tailing off as client numbers increase. Also running\n>> with no stats (and hence no auto vacuum or analyze) is way too scary!\n\n> By disabling statistics collector you loose runtime counters - number of\n> sequential/index scans on a table, tuples read from a relation aetc. But\n> it does not influence VACUUM or planning at all.\n\nIt does break autovacuum.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 08:39:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
    "msg_contents": "On 30 Červenec 2014, 14:39, Tom Lane wrote:\n> \"Tomas Vondra\" <[email protected]> writes:\n>> On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:\n>>> While these numbers look great in the middle range (12-96 clients),\n>>> then\n>>> benefit looks to be tailing off as client numbers increase. Also\n>>> running\n>>> with no stats (and hence no auto vacuum or analyze) is way too scary!\n>\n>> By disabling statistics collector you loose runtime counters - number of\n>> sequential/index scans on a table, tuples read from a relation aetc. But\n>> it does not influence VACUUM or planning at all.\n>\n> It does break autovacuum.\n\nOf course, you're right. It throws away info about how much data was\nmodified and when the table was last (auto)vacuumed.\n\nThis is clear proof that I really need to drink at least one cup of\ncoffee before doing anything in the morning.\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 14:47:23 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "Hi Tomas,\n\nUnfortunately I think you are mistaken - disabling the stats collector \n(i.e. track_counts = off) means that autovacuum has no idea about \nwhen/if it needs to start a worker (as it uses those counts to decide), \nand hence you lose all automatic vacuum and analyze as a result.\n\nWith respect to comments like \"it shouldn't make difference\" etc etc, \nwell the profile suggests otherwise, and the change in tps numbers \nsupport the observation.\n\nregards\n\nMark\n\nOn 30/07/14 20:42, Tomas Vondra wrote:\n> On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:\n>>\n>> While these numbers look great in the middle range (12-96 clients), then\n>> benefit looks to be tailing off as client numbers increase. Also running\n>> with no stats (and hence no auto vacuum or analyze) is way too scary!\n>\n> I assume you've disabled statistics collector, which has nothing to do\n> with vacuum or analyze.\n>\n> There are two kinds of statistics in PostgreSQL - data distribution\n> statistics (which is collected by ANALYZE and stored in actual tables\n> within the database) and runtime statistics (which is collected by the\n> stats collector and stored in a file somewhere on the dist).\n>\n> By disabling statistics collector you loose runtime counters - number of\n> sequential/index scans on a table, tuples read from a relation aetc. But\n> it does not influence VACUUM or planning at all.\n>\n> Also, it's mostly async (send over UDP and you're done) and shouldn't make\n> much difference unless you have large number of objects. There are ways to\n> improve this (e.g. by placing the stat files into a tmpfs).\n>\n> Tomas\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 31 Jul 2014 11:33:14 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
    "msg_contents": "I've been assisting Mark with the benchmarking of these new servers. \n\nThe drop-off in both throughput and CPU utilisation that we've been\nobserving as the client count increases has led me to investigate which\nlwlocks are dominant at different client counts.\n\nI've recompiled postgres with Andres' LWLock improvements, Kevin's\nlibnuma patch and with LWLOCK_STATS enabled.\n\nThe LWLOCK_STATS below suggest that ProcArrayLock might be the main\nsource of locking that's causing throughput to take a dive as the client\ncount increases beyond the core count.\n\n\nwal_buffers = 256MB\ncheckpoint_segments = 1920\nwal_sync_method = open_datasync\n\npgbench -s 2000 -T 600\n\n\nResults:\n\n clients | tps\n---------+---------\n 6 | 9490\n 12 | 17558\n 24 | 25681\n 48 | 41175\n 96 | 48954\n 192 | 31887\n 384 | 15564\n \n \n\nLWLOCK_STATS at 48 clients\n\n Lock | Blk | SpinDelay | Blk % | SpinDelay % \n--------------------+----------+-----------+-------+-------------\n BufFreelistLock | 31144 | 11 | 1.64 | 1.62\n ShmemIndexLock | 192 | 1 | 0.01 | 0.15\n OidGenLock | 32648 | 14 | 1.72 | 2.06\n XidGenLock | 35731 | 18 | 1.88 | 2.64\n ProcArrayLock | 291121 | 215 | 15.36 | 31.57\n SInvalReadLock | 32136 | 13 | 1.70 | 1.91\n SInvalWriteLock | 32141 | 12 | 1.70 | 1.76\n WALBufMappingLock | 31662 | 15 | 1.67 | 2.20\n WALWriteLock | 825380 | 45 | 36.31 | 6.61\n CLogControlLock | 583458 | 337 | 26.93 | 49.49\n \n \n \nLWLOCK_STATS at 96 clients\n\n Lock | Blk | SpinDelay | Blk % | SpinDelay % \n--------------------+----------+-----------+-------+-------------\n BufFreelistLock | 62954 | 12 | 1.54 | 0.27\n ShmemIndexLock | 62635 | 4 | 1.54 | 0.09\n OidGenLock | 92232 | 22 | 2.26 | 0.50\n XidGenLock | 98326 | 18 | 2.41 | 0.41\n ProcArrayLock | 928871 | 3188 | 22.78 | 72.57\n SInvalReadLock | 58392 | 13 | 1.43 | 0.30\n SInvalWriteLock | 57429 | 14 | 1.41 | 0.32\n WALBufMappingLock | 138375 | 14 | 3.39 | 0.32\n WALWriteLock | 1480707 | 42 | 36.31 | 0.96\n 
CLogControlLock | 1098239 | 1066 | 26.93 | 24.27\n \n \n \nLWLOCK_STATS at 384 clients\n\n Lock | Blk | SpinDelay | Blk % | SpinDelay % \n--------------------+----------+-----------+-------+-------------\n BufFreelistLock | 184298 | 158 | 1.93 | 0.03\n ShmemIndexLock | 183573 | 164 | 1.92 | 0.03\n OidGenLock | 184558 | 173 | 1.93 | 0.03\n XidGenLock | 200239 | 213 | 2.09 | 0.04\n ProcArrayLock | 4035527 | 579666 | 42.22 | 98.62\n SInvalReadLock | 182204 | 152 | 1.91 | 0.03\n SInvalWriteLock | 182898 | 137 | 1.91 | 0.02\n WALBufMappingLock | 219936 | 215 | 2.30 | 0.04\n WALWriteLock | 3172725 | 457 | 24.67 | 0.08\n CLogControlLock | 1012458 | 6423 | 10.59 | 1.09\n \n\nThe same test done with a read-only workload shows virtually no SpinDelay\nat all.\n\n\nAny thoughts or comments on these results are welcome!\n\n\nRegards,\nMatt.\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 31 Jul 2014 11:36:03 +1200",
"msg_from": "Matt Clarkson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
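The percentage columns in Matt's tables can be recomputed from the raw counters. A minimal sketch using the 96-client numbers (raw counts copied from the message above; it assumes the percentages were taken over just the locks shown):

```python
# Raw LWLOCK_STATS counters at 96 clients, copied from the table above:
# lock name -> (blocked acquisitions, spin delays)
stats = {
    "BufFreelistLock":   (62954, 12),
    "ShmemIndexLock":    (62635, 4),
    "OidGenLock":        (92232, 22),
    "XidGenLock":        (98326, 18),
    "ProcArrayLock":     (928871, 3188),
    "SInvalReadLock":    (58392, 13),
    "SInvalWriteLock":   (57429, 14),
    "WALBufMappingLock": (138375, 14),
    "WALWriteLock":      (1480707, 42),
    "CLogControlLock":   (1098239, 1066),
}

total_blk = sum(b for b, _ in stats.values())
total_spin = sum(s for _, s in stats.values())

blk_pct = {k: round(100 * b / total_blk, 2) for k, (b, _) in stats.items()}
spin_pct = {k: round(100 * s / total_spin, 2) for k, (_, s) in stats.items()}
# ProcArrayLock takes roughly 72.6% of the spin delay at this client
# count, matching the contention pattern described in the message.
```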
{
    "msg_contents": "On 31/07/14 00:47, Tomas Vondra wrote:\n> On 30 Červenec 2014, 14:39, Tom Lane wrote:\n>> \"Tomas Vondra\" <[email protected]> writes:\n>>> On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:\n>>>> While these numbers look great in the middle range (12-96 clients),\n>>>> then\n>>>> benefit looks to be tailing off as client numbers increase. Also\n>>>> running\n>>>> with no stats (and hence no auto vacuum or analyze) is way too scary!\n>>\n>>> By disabling statistics collector you loose runtime counters - number of\n>>> sequential/index scans on a table, tuples read from a relation aetc. But\n>>> it does not influence VACUUM or planning at all.\n>>\n>> It does break autovacuum.\n>\n> Of course, you're right. It throws away info about how much data was\n> modified and when the table was last (auto)vacuumed.\n>\n> This is a clear proof that I really need to drink at least one cup of\n> coffee in the morning before doing anything in the morning.\n>\n\nLol - thanks for taking a look anyway. Yes, coffee is often an important \npart of the exercise.\n\nRegards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 31 Jul 2014 11:38:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
    "msg_contents": "Matt Clarkson wrote:\n\n> The LWLOCK_STATS below suggest that ProcArrayLock might be the main\n> source of locking that's causing throughput to take a dive as the client\n> count increases beyond the core count.\n\n> Any thoughts or comments on these results are welcome!\n\nDo these results change if you use Heikki's patch for CSN-based\nsnapshots? See\nhttp://www.postgresql.org/message-id/[email protected] for the\npatch (but note that you need to apply on top of 89cf2d52030 in the\nmaster branch -- maybe it applies to HEAD of the 9.4 branch but I didn't\ntry).\n\n-- \nÁlvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 31 Jul 2014 17:38:08 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "On 01/08/14 09:38, Alvaro Herrera wrote:\n> Matt Clarkson wrote:\n>\n>> The LWLOCK_STATS below suggest that ProcArrayLock might be the main\n>> source of locking that's causing throughput to take a dive as the client\n>> count increases beyond the core count.\n>\n>> Any thoughts or comments on these results are welcome!\n>\n> Do these results change if you use Heikki's patch for CSN-based\n> snapshots? See\n> http://www.postgresql.org/message-id/[email protected] for the\n> patch (but note that you need to apply on top of 89cf2d52030 in the\n> master branch -- maybe it applies to HEAD the 9.4 branch but I didn't\n> try).\n>\n\nHi Alvaro,\n\nApplying the CSN patch on top of the rwlock + numa in 9.4 (bit of a \npatch-fest we have here now) shows modest improvement at highest client \nnumber (but appears to hurt performance in the mid range):\n\n clients | tps\n---------+--------\n6 | 8445\n12 | 14548\n24 | 20043\n48 | 27451\n96 | 27718\n192 | 23614\n384 | 24737\n\n\nInitial runs were quite disappointing, until we moved the csnlog \ndirectory onto the same filesystem that the xlogs are on (PCIe SSD). 
We \ncould potentially look at locating them on their own separate volume if \nthat makes sense.\n\nAdding in LWLOCK stats again shows quite a different picture from the \nprevious:\n\n48 clients\n\n Lock | Blk | SpinDelay | Blk % | SpinDelay %\n--------------------+----------+-----------+-----------+-------------\nWALWriteLock | 25426001 | 1239 | 62.227442 | 14.373550\nCLogControlLock | 1793739 | 1376 | 4.389986 | 15.962877\nProcArrayLock | 1007765 | 1305 | 2.466398 | 15.139211\nCSNLogControlLock | 609556 | 1722 | 1.491824 | 19.976798\nWALInsertLocks 4 | 994170 | 247 | 2.433126 | 2.865429\nWALInsertLocks 7 | 983497 | 243 | 2.407005 | 2.819026\nWALInsertLocks 5 | 993068 | 239 | 2.430429 | 2.772622\nWALInsertLocks 3 | 991446 | 229 | 2.426459 | 2.656613\nWALInsertLocks 0 | 964185 | 235 | 2.359741 | 2.726218\nWALInsertLocks 1 | 995237 | 221 | 2.435737 | 2.563805\nWALInsertLocks 2 | 997593 | 213 | 2.441503 | 2.470998\nWALInsertLocks 6 | 978178 | 201 | 2.393987 | 2.331787\nBufFreelistLock | 887194 | 206 | 2.171313 | 2.389791\nXidGenLock | 327385 | 366 | 0.801240 | 4.245940\nCheckpointerCommLock| 104754 | 151 | 0.256374 | 1.751740\nWALBufMappingLock | 274226 | 7 | 0.671139 | 0.081206\n\n\n96 clients\n\n Lock | Blk | SpinDelay | Blk % | SpinDelay %\n--------------------+----------+-----------+-----------+-------------\nWALWriteLock | 30097625 | 9616 | 48.550747 | 19.068393\nCLogControlLock | 3193429 | 13490 | 5.151349 | 26.750481\nProcArrayLock | 2007103 | 11754 | 3.237676 | 23.308017\nCSNLogControlLock | 1303172 | 5022 | 2.102158 | 9.958556\nBufFreelistLock | 1921625 | 1977 | 3.099790 | 3.920363\nWALInsertLocks 0 | 2011855 | 681 | 3.245341 | 1.350413\nWALInsertLocks 5 | 1829266 | 627 | 2.950805 | 1.243332\nWALInsertLocks 7 | 1806966 | 632 | 2.914833 | 1.253247\nWALInsertLocks 4 | 1847372 | 591 | 2.980012 | 1.171945\nWALInsertLocks 1 | 1948553 | 557 | 3.143228 | 1.104523\nWALInsertLocks 6 | 1818717 | 582 | 2.933789 | 
1.154098\nWALInsertLocks 3 | 1873964 | 552 | 3.022908 | 1.094608\nWALInsertLocks 2 | 1912007 | 523 | 3.084276 | 1.037102\nXidGenLock | 512521 | 699 | 0.826752 | 1.386107\nCheckpointerCommLock| 386853 | 711 | 0.624036 | 1.409903\nWALBufMappingLock | 546462 | 65 | 0.881503 | 0.128894\n\n\n384 clients\n\n Lock | Blk | SpinDelay | Blk % | SpinDelay %\n--------------------+----------+-----------+-----------+-------------\nWALWriteLock | 20703796 | 87265 | 27.749961 | 15.360068\nCLogControlLock | 3273136 | 122616 | 4.387089 | 21.582422\nProcArrayLock | 3969918 | 100730 | 5.321008 | 17.730128\nCSNLogControlLock | 3191989 | 115068 | 4.278325 | 20.253851\nBufFreelistLock | 2014218 | 27952 | 2.699721 | 4.920009\nWALInsertLocks 0 | 2750082 | 5438 | 3.686023 | 0.957177\nWALInsertLocks 1 | 2584155 | 5312 | 3.463626 | 0.934999\nWALInsertLocks 2 | 2477782 | 5497 | 3.321051 | 0.967562\nWALInsertLocks 4 | 2375977 | 5441 | 3.184598 | 0.957705\nWALInsertLocks 5 | 2349769 | 5458 | 3.149471 | 0.960697\nWALInsertLocks 6 | 2329982 | 5367 | 3.122950 | 0.944680\nWALInsertLocks 3 | 2415965 | 4771 | 3.238195 | 0.839774\nWALInsertLocks 7 | 2316144 | 4930 | 3.104402 | 0.867761\nCheckpointerCommLock| 584419 | 10794 | 0.783316 | 1.899921\nXidGenLock | 391212 | 6963 | 0.524354 | 1.225602\nWALBufMappingLock | 484693 | 83 | 0.649650 | 0.014609\n\n\n\nSo we're seeing delay coming fairly equally from 5 lwlocks.\n\nThanks again - any other suggestions welcome!\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Aug 2014 16:56:10 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "Mark,\n\nIs the 60-core machine using some of the Intel chips which have 20\nhyperthreaded virtual cores?\n\nIf so, I've been seeing some performance issues on these processors.\nI'm currently doing a side-by-side hyperthreading on/off test.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Aug 2014 11:18:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
    "msg_contents": "On 15/08/14 06:18, Josh Berkus wrote:\n> Mark,\n>\n> Is the 60-core machine using some of the Intel chips which have 20\n> hyperthreaded virtual cores?\n>\n> If so, I've been seeing some performance issues on these processors.\n> I'm currently doing a side-by-side hyperthreading on/off test.\n>\n\nHi Josh,\n\nThe board has 4 sockets with E7-4890 v2 cpus. They have 15 cores/30 \nthreads. We're running with hyperthreading off (noticed the usual \nsteep/sudden scaling dropoff with it on).\n\nWhat model are your 20-core CPUs?\n\nCheers\n\nMark\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Aug 2014 10:24:29 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 60 core performance with 9.3"
},
{
"msg_contents": "Mark, all:\n\nSo, this is pretty damming:\n\nRead-only test with HT ON:\n\n[pgtest@db ~]$ pgbench -c 20 -j 4 -T 600 -S bench\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 30\nquery mode: simple\nnumber of clients: 20\nnumber of threads: 4\nduration: 600 s\nnumber of transactions actually processed: 47167533\ntps = 78612.471802 (including connections establishing)\ntps = 78614.604352 (excluding connections establishing)\n\nRead-only test with HT Off:\n\n[pgtest@db ~]$ pgbench -c 20 -j 4 -T 600 -S bench\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 30\nquery mode: simple\nnumber of clients: 20\nnumber of threads: 4\nduration: 600 s\nnumber of transactions actually processed: 82457739\ntps = 137429.508196 (including connections establishing)\ntps = 137432.893796 (excluding connections establishing)\n\n\nOn a read-write test, it's 10% faster with HT off as well.\n\nFurther, from their production machine we've seen that having HT on\ncauses the machine to slow down by 5X whenever you get more than 40\ncores (as in 100% of real cores or 50% of HT cores) worth of activity.\n\nSo we're definitely back to \"If you're using PostgreSQL, turn off\nHyperthreading\".\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 12:13:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
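For a rough sense of the size of the effect Josh reports, the HT-off/HT-on ratio can be computed from the posted read-only tps figures:

```python
# Read-only pgbench tps from Josh's message above (connections included).
ht_on = 78612.471802
ht_off = 137429.508196

# Throughput gain from disabling hyperthreading on this workload.
speedup = ht_off / ht_on
# Roughly 1.75x on the read-only test - a far larger swing than the 10%
# difference he reports for the read-write test.
```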
{
"msg_contents": "On 08/20/2014 02:13 PM, Josh Berkus wrote:\n\n> So we're definitely back to \"If you're using PostgreSQL, turn off\n> Hyperthreading\".\n\nThat's so strange. Back when I did my Nehalem tests, we got a very \nstrong 30%+ increase by enabling HT. We only got a hit when we turned \noff turbo, or forgot to disable power saving features.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 15:36:05 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 21/08/14 07:13, Josh Berkus wrote:\n> Mark, all:\n>\n> So, this is pretty damming:\n>\n> Read-only test with HT ON:\n>\n> [pgtest@db ~]$ pgbench -c 20 -j 4 -T 600 -S bench\n> starting vacuum...end.\n> transaction type: SELECT only\n> scaling factor: 30\n> query mode: simple\n> number of clients: 20\n> number of threads: 4\n> duration: 600 s\n> number of transactions actually processed: 47167533\n> tps = 78612.471802 (including connections establishing)\n> tps = 78614.604352 (excluding connections establishing)\n>\n> Read-only test with HT Off:\n>\n> [pgtest@db ~]$ pgbench -c 20 -j 4 -T 600 -S bench\n> starting vacuum...end.\n> transaction type: SELECT only\n> scaling factor: 30\n> query mode: simple\n> number of clients: 20\n> number of threads: 4\n> duration: 600 s\n> number of transactions actually processed: 82457739\n> tps = 137429.508196 (including connections establishing)\n> tps = 137432.893796 (excluding connections establishing)\n>\n>\n> On a read-write test, it's 10% faster with HT off as well.\n>\n> Further, from their production machine we've seen that having HT on\n> causes the machine to slow down by 5X whenever you get more than 40\n> cores (as in 100% of real cores or 50% of HT cores) worth of activity.\n>\n> So we're definitely back to \"If you're using PostgreSQL, turn off\n> Hyperthreading\".\n>\n\n\nHmm - that is interesting - I don't think we compared read only scaling \nfor hyperthreading on and off (only read write). You didn't mention what \ncpu this is for (or how many sockets etc), would be useful to know.\n\nNotwithstanding the above results, my workmate Matt made an interesting \nobservation: the scaling graph for (our) 60 core box (HT off), looks \njust like the one for our 32 core box with HT *on*.\n\nWe are wondering if a lot of the previous analysis of HT performance \nregressions should actually be reevaluated in the light of ...err is it \njust that we have a lot more cores...? 
[1]\n\nRegards\n\nMark\n\n[1] Particularly as in *some* cases (single socket i7 for instance) HT \non seems to scale fine.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 11:14:46 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On Wed, Aug 20, 2014 at 1:36 PM, Shaun Thomas <[email protected]> wrote:\n> That's so strange. Back when I did my Nehalem tests, we got a very strong\n> 30%+ increase by enabling HT. We only got a hit when we turned off turbo, or\n> forgot to disable power saving features.\n\nIn my experience, it is crucially important to consider power saving\nfeatures in most benchmarks these days, where that might not have been\ntrue a few years ago. The CPU scaling governor can alter the outcome\nof many benchmarks quite significantly.\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 16:59:31 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with 9.3"
},
{
"msg_contents": "On Wed, Aug 20, 2014 at 12:13:50PM -0700, Josh Berkus wrote:\n> On a read-write test, it's 10% faster with HT off as well.\n> \n> Further, from their production machine we've seen that having HT on\n> causes the machine to slow down by 5X whenever you get more than 40\n> cores (as in 100% of real cores or 50% of HT cores) worth of activity.\n> \n> So we're definitely back to \"If you're using PostgreSQL, turn off\n> Hyperthreading\".\n\nNot sure how you can make such a blanket statement when so many people\nhave tested and shown the benefits of hyper-threading. I am also\nunclear exactly what you tested, as I didn't see it mentioned in the\nemail --- CPU type, CPU count, and operating system would be the minimal\ninformation required.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 22:40:36 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance\n with 9.3"
},
{
"msg_contents": "> On Wed, Aug 20, 2014 at 12:13:50PM -0700, Josh Berkus wrote:\n>> On a read-write test, it's 10% faster with HT off as well.\n>> \n>> Further, from their production machine we've seen that having HT on\n>> causes the machine to slow down by 5X whenever you get more than 40\n>> cores (as in 100% of real cores or 50% of HT cores) worth of activity.\n>> \n>> So we're definitely back to \"If you're using PostgreSQL, turn off\n>> Hyperthreading\".\n> \n> Not sure how you can make such a blanket statement when so many people\n> have tested and shown the benefits of hyper-threading. I am also\n> unclear exactly what you tested, as I didn't see it mentioned in the\n> email --- CPU type, CPU count, and operating system would be the minimal\n> information required.\n\nHT off is common knowledge for better benchmarking result, at least\nfor me. I've never seen better result with HT on, except POWER.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 11:47:00 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance\n with 9.3"
},
{
"msg_contents": "On 21/08/14 11:14, Mark Kirkwood wrote:\n>\n> You didn't mention what\n> cpu this is for (or how many sockets etc), would be useful to know.\n>\n\nJust to clarify - while you mentioned that the production system was 40 \ncores, it wasn't immediately obvious that the same system was the source \nof the measurements you posted...sorry if I'm being a mixture of \npedantic and dense - just trying to make sure it is clear what \nsystems/cpus etc we are talking about (with this in mind it never hurts \nto quote cpu and mobo model numbers)!\n\nCheers\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 22:02:03 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 08/20/2014 06:14 PM, Mark Kirkwood wrote:\n\n> Notwithstanding the above results, my workmate Matt made an interesting\n> observation: the scaling graph for (our) 60 core box (HT off), looks\n> just like the one for our 32 core box with HT *on*.\n\nHmm. I know this sounds stupid and unlikely, but has anyone actually \ntested PostgreSQL on a system with more than 64 legitimate cores? The \nwork Robert Haas did to fix the CPU locking way back when showed \nsignificant improvements up to 64, but so far as I know, nobody really \ntested beyond that.\n\nI seem to remember similar choking effects when pre-9.2 systems \nencountered high CPU counts. I somehow doubt Intel would allow their HT \narchitecture to regress so badly from Nehalem, which is almost \n3-generations old at this point. This smells like something in the \nsoftware stack, up to and including the Linux kernel.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 09:14:56 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
    "msg_contents": "> HT off is common knowledge for better benchmarking result\n\nIt's wise to use the qualifier 'for better benchmarking results'. \n\nIt's worth keeping in mind here that a benchmark is not the same as normal production use. \n\nFor example, where I work we do lots of long-running queries in parallel over a big range of datasets rather than many short-term transactions as fast as possible. Our biggest DB server is also used for GDAL work and R at the same time*. Pretty far from pgbench; not everyone is constrained by locks.\n\nI suppose that if your code is basically N copies of the same function, hyper-threading isn't likely to help much because it was introduced to allow different parts of the processor to be used in parallel when you're running heterogeneous code. \n\nBut if you're hammering just one part of the CPU... well, adding another layer of logical complexity for your CPU to manage probably isn't going to do much good.\n\nShould HT be on or off when you're running 64 very mixed types of long-term queries which involve variously either heavy use of real number calculations or e.g. logic/string handling, and different data sets? It's a much more complex question than simply maxing out your pgbench scores. \n\nI don't have the data now unfortunately, but I remember seeing a benefit for HT on our 4-core E3 when running GDAL/PostGIS work in parallel last year. It's not surprising though; the GDAL calls are almost certainly using different functions of the processor compared to postgres and there should be very little lock contention. In light of this interesting data I'm now leaning towards proposing HT off for our mapservers (which receive short, similar requests over and over), but for the heterogeneous servers, I think I'll keep it on for now.\n\nGraeme. \n\n\n\n* unrelated. 
There are also huge advantages for us in keeping these different programs running on the same machine, since we found we can get much better transfer rates through unix sockets than with TCP over the network.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 20:03:13 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance\n with 9.3"
},
{
"msg_contents": "On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n> On Wed, Aug 20, 2014 at 12:13:50PM -0700, Josh Berkus wrote:\n>> On a read-write test, it's 10% faster with HT off as well.\n>>\n>> Further, from their production machine we've seen that having HT on\n>> causes the machine to slow down by 5X whenever you get more than 40\n>> cores (as in 100% of real cores or 50% of HT cores) worth of activity.\n>>\n>> So we're definitely back to \"If you're using PostgreSQL, turn off\n>> Hyperthreading\".\n> \n> Not sure how you can make such a blanket statement when so many people\n> have tested and shown the benefits of hyper-threading. \n\nActually, I don't know that anyone has posted the benefits of HT. Link?\n I want to compare results so that we can figure out what's different\nbetween my case and theirs. Also, it makes a big difference if there is\nan advantage to turning HT on for some workloads.\n\n> I am also\n> unclear exactly what you tested, as I didn't see it mentioned in the\n> email --- CPU type, CPU count, and operating system would be the minimal\n> information required.\n\nOoops! I thought I'd posted that earlier, but I didn't.\n\nThe processors in question is the Intel(R) Xeon(R) CPU E7- 4850, with 4\nof them for a total of 40 cores or 80 HT cores.\n\nOS is RHEL with 2.6.32-431.3.1.el6.x86_64.\n\nI've emailed a kernel hacker who works at Intel for comment; for one\nthing, I'm wondering if the older kernel version is a problem for a\nsystem like this.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 14:02:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 02:02:26PM -0700, Josh Berkus wrote:\n> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n> > On Wed, Aug 20, 2014 at 12:13:50PM -0700, Josh Berkus wrote:\n> >> On a read-write test, it's 10% faster with HT off as well.\n> >>\n> >> Further, from their production machine we've seen that having HT on\n> >> causes the machine to slow down by 5X whenever you get more than 40\n> >> cores (as in 100% of real cores or 50% of HT cores) worth of activity.\n> >>\n> >> So we're definitely back to \"If you're using PostgreSQL, turn off\n> >> Hyperthreading\".\n> > \n> > Not sure how you can make such a blanket statement when so many people\n> > have tested and shown the benefits of hyper-threading. \n> \n> Actually, I don't know that anyone has posted the benefits of HT. Link?\n> I want to compare results so that we can figure out what's different\n> between my case and theirs. Also, it makes a big difference if there is\n> an advantage to turning HT on for some workloads.\n\nI had Greg Smith test my system when it was installed, tested it, and\nrecommended hyper-threading. The system is Debian Squeeze\n(2.6.32-5-amd64), CPUs are dual Xeon E5620, 8 cores, 16 virtual cores.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 17:11:13 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance\n with 9.3"
},
{
"msg_contents": "On 08/21/2014 02:11 PM, Bruce Momjian wrote:\n> On Thu, Aug 21, 2014 at 02:02:26PM -0700, Josh Berkus wrote:\n>> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n>>> On Wed, Aug 20, 2014 at 12:13:50PM -0700, Josh Berkus wrote:\n>>>> On a read-write test, it's 10% faster with HT off as well.\n>>>>\n>>>> Further, from their production machine we've seen that having HT on\n>>>> causes the machine to slow down by 5X whenever you get more than 40\n>>>> cores (as in 100% of real cores or 50% of HT cores) worth of activity.\n>>>>\n>>>> So we're definitely back to \"If you're using PostgreSQL, turn off\n>>>> Hyperthreading\".\n>>>\n>>> Not sure how you can make such a blanket statement when so many people\n>>> have tested and shown the benefits of hyper-threading. \n>>\n>> Actually, I don't know that anyone has posted the benefits of HT. Link?\n>> I want to compare results so that we can figure out what's different\n>> between my case and theirs. Also, it makes a big difference if there is\n>> an advantage to turning HT on for some workloads.\n> \n> I had Greg Smith test my system when it was installed, tested it, and\n> recommended hyper-threading. The system is Debian Squeeze\n> (2.6.32-5-amd64), CPUs are dual Xeon E5620, 8 cores, 16 virtual cores.\n\nCan you post some numerical results?\n\nI'm serious. It's obviously easier for our users if we can blanket\nrecommend turning HT off; that's a LOT easier for them than \"you might\nwant to turn HT off if these conditions ...\". So I want to establish\nthat HT is a benefit sometimes if it is.\n\nI personally have never seen HT be a benefit. I've seen it be harmless\n(most of the time) but never beneficial.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 14:17:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 02:17:13PM -0700, Josh Berkus wrote:\n> >> Actually, I don't know that anyone has posted the benefits of HT. Link?\n> >> I want to compare results so that we can figure out what's different\n> >> between my case and theirs. Also, it makes a big difference if there is\n> >> an advantage to turning HT on for some workloads.\n> > \n> > I had Greg Smith test my system when it was installed, tested it, and\n> > recommended hyper-threading. The system is Debian Squeeze\n> > (2.6.32-5-amd64), CPUs are dual Xeon E5620, 8 cores, 16 virtual cores.\n> \n> Can you post some numerical results?\n> \n> I'm serious. It's obviously easier for our users if we can blanket\n> recommend turning HT off; that's a LOT easier for them than \"you might\n> want to turn HT off if these conditions ...\". So I want to establish\n> that HT is a benefit sometimes if it is.\n> \n> I personally have never seen HT be a benefit. I've seen it be harmless\n> (most of the time) but never beneficial.\n\nI know that when hyperthreading was introduced that it was mostly a\nnegative, but then this was improved, and it might have gotten bad\nagain. I am afraid results are based on the type of CPU, so I am not\nsure we can know a general answer.\n\nI know I asked Greg Smith, and I assume he would know.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 17:26:17 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance\n with 9.3"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 3:02 PM, Josh Berkus <[email protected]> wrote:\n> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n>\n>> I am also\n>> unclear exactly what you tested, as I didn't see it mentioned in the\n>> email --- CPU type, CPU count, and operating system would be the minimal\n>> information required.\n>\n> Ooops! I thought I'd posted that earlier, but I didn't.\n>\n> The processors in question is the Intel(R) Xeon(R) CPU E7- 4850, with 4\n> of them for a total of 40 cores or 80 HT cores.\n>\n> OS is RHEL with 2.6.32-431.3.1.el6.x86_64.\n\nI'm running almost the exact same setup in production as a spare. It\nhas 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\nit's a spare node I might be able to do some testing on it as well.\nIt's running a 3.2 kernel right now. I could probably get a later\nmodel kernel on it even.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 15:26:24 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with 9.3"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 3:26 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Aug 21, 2014 at 3:02 PM, Josh Berkus <[email protected]> wrote:\n>> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n>>\n>>> I am also\n>>> unclear exactly what you tested, as I didn't see it mentioned in the\n>>> email --- CPU type, CPU count, and operating system would be the minimal\n>>> information required.\n>>\n>> Ooops! I thought I'd posted that earlier, but I didn't.\n>>\n>> The processors in question is the Intel(R) Xeon(R) CPU E7- 4850, with 4\n>> of them for a total of 40 cores or 80 HT cores.\n>>\n>> OS is RHEL with 2.6.32-431.3.1.el6.x86_64.\n>\n> I'm running almost the exact same setup in production as a spare. It\n> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n> it's a spare node I might be able to do some testing on it as well.\n> It's running a 3.2 kernel right now. I could probably get a later\n> model kernel on it even.\n>\n> --\n> To understand recursion, one must first understand recursion.\n\nTo update this last post, the machine I have is running ubuntu 12.04.1\nright now, and I have kernels 3.2, 3.5, 3.8, 3.11, and 3.13 available\nto put on it. We're looking at removing it from our current production\ncluster so I could likely do all kinds of crazy tests on it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 15:43:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with 9.3"
},
{
"msg_contents": "On 08/21/2014 02:26 PM, Scott Marlowe wrote:\n> I'm running almost the exact same setup in production as a spare. It\n> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n> it's a spare node I might be able to do some testing on it as well.\n> It's running a 3.2 kernel right now. I could probably get a later\n> model kernel on it even.\n\nYou know about the IO performance issues with 3.2, yes?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 15:51:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 08/21/2014 03:51 PM, Josh Berkus wrote:\n> On 08/21/2014 02:26 PM, Scott Marlowe wrote:\n>> I'm running almost the exact same setup in production as a spare. It\n>> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n>> it's a spare node I might be able to do some testing on it as well.\n>> It's running a 3.2 kernel right now. I could probably get a later\n>> model kernel on it even.\n> You know about the IO performance issues with 3.2, yes?\n>\nWere those 3.2 only and since fixed or are there issues persisting in \n3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.\n\nCheers,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 16:08:07 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 08/21/2014 04:08 PM, Steve Crawford wrote:\n> On 08/21/2014 03:51 PM, Josh Berkus wrote:\n>> On 08/21/2014 02:26 PM, Scott Marlowe wrote:\n>>> I'm running almost the exact same setup in production as a spare. It\n>>> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n>>> it's a spare node I might be able to do some testing on it as well.\n>>> It's running a 3.2 kernel right now. I could probably get a later\n>>> model kernel on it even.\n>> You know about the IO performance issues with 3.2, yes?\n>>\n> Were those 3.2 only and since fixed or are there issues persisting in\n> 3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.\n\nThe issues I know of were fixed in 3.9.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 16:29:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 22/08/14 11:29, Josh Berkus wrote:\n> On 08/21/2014 04:08 PM, Steve Crawford wrote:\n>> On 08/21/2014 03:51 PM, Josh Berkus wrote:\n>>> On 08/21/2014 02:26 PM, Scott Marlowe wrote:\n>>>> I'm running almost the exact same setup in production as a spare. It\n>>>> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n>>>> it's a spare node I might be able to do some testing on it as well.\n>>>> It's running a 3.2 kernel right now. I could probably get a later\n>>>> model kernel on it even.\n>>> You know about the IO performance issues with 3.2, yes?\n>>>\n>> Were those 3.2 only and since fixed or are there issues persisting in\n>> 3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.\n>\n> The issues I know of were fixed in 3.9.\n>\n\nThere is a 3.11 kernel series for Ubuntu 12.04 Precise.\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 11:32:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "\nOn 08/21/2014 04:29 PM, Josh Berkus wrote:\n>\n> On 08/21/2014 04:08 PM, Steve Crawford wrote:\n>> On 08/21/2014 03:51 PM, Josh Berkus wrote:\n>>> On 08/21/2014 02:26 PM, Scott Marlowe wrote:\n>>>> I'm running almost the exact same setup in production as a spare. It\n>>>> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n>>>> it's a spare node I might be able to do some testing on it as well.\n>>>> It's running a 3.2 kernel right now. I could probably get a later\n>>>> model kernel on it even.\n>>> You know about the IO performance issues with 3.2, yes?\n>>>\n>> Were those 3.2 only and since fixed or are there issues persisting in\n>> 3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.\n>\n> The issues I know of were fixed in 3.9.\n>\n\nCorrect. If you run trusty backports you are good to go.\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, @cmdpromptinc\n\"If we send our children to Caesar for their education, we should\n not be surprised when they come back as Romans.\"\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 17:04:44 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 5:29 PM, Josh Berkus <[email protected]> wrote:\n> On 08/21/2014 04:08 PM, Steve Crawford wrote:\n>> On 08/21/2014 03:51 PM, Josh Berkus wrote:\n>>> On 08/21/2014 02:26 PM, Scott Marlowe wrote:\n>>>> I'm running almost the exact same setup in production as a spare. It\n>>>> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since\n>>>> it's a spare node I might be able to do some testing on it as well.\n>>>> It's running a 3.2 kernel right now. I could probably get a later\n>>>> model kernel on it even.\n>>> You know about the IO performance issues with 3.2, yes?\n>>>\n>> Were those 3.2 only and since fixed or are there issues persisting in\n>> 3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.\n>\n> The issues I know of were fixed in 3.9.\n>\nI thought they were fixed in 3.8.something? We're running 3.8 on our\nproduction servers but IO is not an issue for us.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 00:37:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with 9.3"
},
{
"msg_contents": "On 08/22/2014 01:37 AM, Scott Marlowe wrote:\n\n> I thought they were fixed in 3.8.something? We're running 3.8 on our\n> production servers but IO is not an issue for us.\n\nYeah. 3.8 fixed a ton of issues that were plaguing us. There were still \na couple patches I wanted that didn't get in until 3.11+, but the worst \nof the behavior was solved before that.\n\nBugs in kernel cache page aging algorithms are bad, m'kay?\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 08:01:03 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 2014-08-21 14:02:26 -0700, Josh Berkus wrote:\n> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n> > Not sure how you can make such a blanket statement when so many people\n> > have tested and shown the benefits of hyper-threading. \n> \n> Actually, I don't know that anyone has posted the benefits of HT.\n> Link?\n\nThere's definitely cases where it can help. But it's highly workload\n*and* hardware dependent.\n\n> OS is RHEL with 2.6.32-431.3.1.el6.x86_64.\n> \n> I've emailed a kernel hacker who works at Intel for comment; for one\n> thing, I'm wondering if the older kernel version is a problem for a\n> system like this.\n\nI'm not sure if it has been backported by redhat, but there\ndefinitely have been significant improvement in SMT aware scheduling\nafter vanilla 2.6.32.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 16:02:23 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 08/22/2014 07:02 AM, Andres Freund wrote:\n> On 2014-08-21 14:02:26 -0700, Josh Berkus wrote:\n>> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n>>> Not sure how you can make such a blanket statement when so many people\n>>> have tested and shown the benefits of hyper-threading. \n>>\n>> Actually, I don't know that anyone has posted the benefits of HT.\n>> Link?\n> \n> There's definitely cases where it can help. But it's highly workload\n> *and* hardware dependent.\n\nThe only cases I've seen where HT can be beneficial is when you have\nlarge numbers of idle connections. Then the idle connections can be\n\"parked\" on the HT virtual cores. However, even in this case I haven't\nseen a head-to-head performance comparison.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Aug 2014 15:13:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 26/08/14 10:13, Josh Berkus wrote:\n> On 08/22/2014 07:02 AM, Andres Freund wrote:\n>> On 2014-08-21 14:02:26 -0700, Josh Berkus wrote:\n>>> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n>>>> Not sure how you can make such a blanket statement when so many people\n>>>> have tested and shown the benefits of hyper-threading.\n>>>\n>>> Actually, I don't know that anyone has posted the benefits of HT.\n>>> Link?\n>>\n>> There's definitely cases where it can help. But it's highly workload\n>> *and* hardware dependent.\n>\n> The only cases I've seen where HT can be beneficial is when you have\n> large numbers of idle connections. Then the idle connections can be\n> \"parked\" on the HT virtual cores. However, even in this case I haven't\n> seen a head-to-head performance comparison.\n>\n\nI recall HT beneficial on a single socket (i3 or i7), using pgbench as \nthe measuring tool. However I didn't save the results at the time. I've \njust got some new ssd's to play with so might run some pgbench tests on \nmy home machine (Haswell i7) with HT on and off.\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Aug 2014 14:03:53 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
},
{
"msg_contents": "On 26/08/14 10:13, Josh Berkus wrote:\n> On 08/22/2014 07:02 AM, Andres Freund wrote:\n>> On 2014-08-21 14:02:26 -0700, Josh Berkus wrote:\n>>> On 08/20/2014 07:40 PM, Bruce Momjian wrote:\n>>>> Not sure how you can make such a blanket statement when so many people\n>>>> have tested and shown the benefits of hyper-threading.\n>>>\n>>> Actually, I don't know that anyone has posted the benefits of HT.\n>>> Link?\n>>\n>> There's definitely cases where it can help. But it's highly workload\n>> *and* hardware dependent.\n>\n> The only cases I've seen where HT can be beneficial is when you have\n> large numbers of idle connections. Then the idle connections can be\n> \"parked\" on the HT virtual cores. However, even in this case I haven't\n> seen a head-to-head performance comparison.\n>\n\nI've just had a pair of Crucial m550's arrive, so a bit of benchmarking \nis in order. The results (below) seem to suggest that HT enabled is \ncertainly not inhibiting scaling performance for single socket i7's. I \nperformed several runs (typical results shown below).\n\nIntel i7-4770 3.4 Ghz, 16G\n2x Crucial m550\nUbuntu 14.04\nPostgres 9.4 beta2\n\nlogging_collector = on\nmax_connections = 600\nshared_buffers = 1GB\nwal_buffers = 32MB\ncheckpoint_segments = 128\neffective_cache_size = 10GB\n\npgbench scale = 300\ntest duration (each) = 600s\n\ndb on 1x m550\nxlog on 1x m550\n\nclients | tps (HT)| tps (no HT)\n--------+----------+-------------\n4 | 517 | 520\n8 | 1013 | 999\n16 | 1938 | 1913\n32 | 3574 | 3560\n64 | 5873 | 5412\n128 | 8351 | 7450\n256 | 9426 | 7840\n512 | 9357 | 7288\n\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Aug 2014 16:24:12 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Turn off Hyperthreading! WAS: 60 core performance with\n 9.3"
}
]
[
{
"msg_contents": "The simplified scene: \nselect slowfunction() from a order by b limit 1\nis slow than\nselect slowfunction() from ( select * from a order by b limit 1)\nif there are many records in table 'a'\n\nThe real scene:\n\nfunction ST_Distance_Sphere is slow than ST_Distance, the query:\n\nSELECT ST_Distance_Sphere(s, ST_GeomFromText('POINT(1 1)')) from road order by ST_Distance(s, ST_GeomFromText('POINT(1 1)')) limit 1\n\nis slow than:\n\nselect ST_Distance_Sphere(s, ST_GeomFromText('POINT(1 1)')) from (SELECT s from road order by ST_Distance(s, ST_GeomFromText('POINT(1 1)')) limit 1) as a\n\nThere are about 7000 records in 'road'. \n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Jun 2014 15:30:52 +0800",
"msg_from": "\"songtebo\"<[email protected]>",
"msg_from_op": true,
"msg_subject": "Can improve 'limit 1' ? with slow function"
}
]
[
{
"msg_contents": "Hello Everyone ...\n\nWe have 6 PG 9.1.12 installation, one master (Ubuntu 10.04), one slony\nslave(Ubuntu 10.04), and four streaming replica (2 on Ubuntu 10.04 and 2 on\nRHEL 6.5 (Santiago) which lies on different datacenter). All Ubuntu is on\nsame datacenter. Master send wal archive to slony slave.\n\nThis is the infrastructure description :\n200Mbit link between data centers, esx 5.5 on hp blade chassis. proliant\ngen 7 blades. postgres servers dedicated to esx hosts (no other vms on\nthose esx hosts). 3par disk backends with 4 and 8 Gbit fiber channel\nconnections. 10Gbit ethernet virtual connects on the hp chassis. cisco\nfabric and network switches.\n\nAll postgres installed from Ubuntu/RHEL package.\n\nEverything works fine until on Thursday we have high load on master, and\nafter that every streaming replica lag further behind the master. Even on\nnight and weekend where all server load is low. But the slony slave is OK\nat all.\n\nWe thought it was due to network, so we decide to copy wal files to local\nof a streaming server, and replaying wal from local. After PG restart, it\nreplays wal on a good speed about 3 seconds per wal file, but as the time\ngoes the speed decreasing. We had 30 seconds per wal file. The worst we get\nis 3 minutes to replay 1 wal file.\n\nThe rate of wal produced from master is normal like usual. 
And also on\nThursday we had wal files on pg_xlog on streaming replica server, but no\nother wal files.\n\nThis is the configuration :\nSELECT name, current_setting(name)\n FROM pg_settings\n WHERE source NOT IN ('default', 'override');\n name | current_setting\n--------------------------------+---------------------------------------------------\n application_name | psql\n archive_command | /var/lib/postgresql/scripts/wal_archive\n\"%p\" \"%f\"\n archive_mode | on\n checkpoint_completion_target | 0.7\n checkpoint_segments | 30\n client_encoding | UTF8\n DateStyle | ISO, MDY\n default_text_search_config | pg_catalog.english\n effective_cache_size | 125GB\n effective_io_concurrency | 3\n external_pid_file | /var/run/postgresql/9.1-main.pid\n hot_standby | on\n hot_standby_feedback | on\n lc_messages | en_US.UTF-8\n lc_monetary | en_US.UTF-8\n lc_numeric | en_US.UTF-8\n lc_time | en_US.UTF-8\n listen_addresses | *\n log_checkpoints | on\n log_connections | on\n log_destination | csvlog\n log_directory | pg_log\n log_disconnections | on\n log_filename | postgresql-%a.log\n log_line_prefix | %t\n log_lock_waits | on\n log_rotation_age | 1d\n log_rotation_size | 0\n log_temp_files | 100kB\n log_timezone | localtime\n log_truncate_on_rotation | on\n logging_collector | on\n maintenance_work_mem | 1GB\n max_connections | 750\n max_locks_per_transaction | 900\n max_pred_locks_per_transaction | 900\n max_stack_depth | 2MB\n max_wal_senders | 6\n port | 5432\n shared_buffers | 8GB\n ssl | on\n temp_buffers | 64MB\n TimeZone | America/Chicago\n unix_socket_directory | /var/run/postgresql\n wal_keep_segments | 50\n wal_level | hot_standby\n work_mem | 256MB\n(47 rows)\n\nThanks for any help\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Sun, 29 Jun 2014 15:14:19 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 06/29/2014 11:14 AM, Soni M wrote:\n> Everything works fine until on Thursday we have high load on master, and\n> after that every streaming replica lag further behind the master. Even on\n> night and weekend where all server load is low. But the slony slave is OK\n> at all.\n\nWhat does 'top' on the standby say? Is the startup process using 100% of \n(one) CPU replaying records, or is it waiting for I/O? How large is the \ndatabase, does it fit in RAM? Any clues in the system or PostgreSQL logs?\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 29 Jun 2014 11:31:31 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "top and sar says 100% cpu usage of one core, no sign of I/O wait. The\ndatabase is 1.5TB in size. RAM in master is 145GB, on slave it's differ,\nsome has about 16GB another has 145GB also.\n\nnothing suspicious on standby's postgres log.\n\non master's postgres log :\nWARNING,01000,\"pgstat wait timeout\",,,,,,,,,\"\"\nERROR,57014,\"canceling autovacuum task\",,,,,\"automatic vacuum of\ntable \"\"consprod._consprod_replication.sl_event\"\"\",,,,\"\"\nERROR,57014,\"canceling statement due to statement timeout\",,,,,,\"\n\"PARSE\",2014-06-26 00:39:35 CDT,91/0,0,ERROR,25P02,\"current transaction is\naborted, commands ignored until end of transaction block\",,,,,,\"select\n1\",,,\"\"\n\"could not receive data from client: Connection reset by peer\",,,,,,,,,\"\"\n\nthe log files is big anyway. if you can specify some pattern to look at the\nlog, that would really help.\n\n\nOn Sun, Jun 29, 2014 at 3:31 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 06/29/2014 11:14 AM, Soni M wrote:\n>\n>> Everything works fine until on Thursday we have high load on master, and\n>> after that every streaming replica lag further behind the master. Even on\n>> night and weekend where all server load is low. But the slony slave is OK\n>> at all.\n>>\n>\n> What does 'top' on the standby say? Is the startup process using 100% of\n> (one) CPU replaying records, or is it waiting for I/O? How large is the\n> database, does it fit in RAM? Any clues in the system or PostgreSQL logs?\n>\n> - Heikki\n>\n>\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Sun, 29 Jun 2014 19:43:52 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Here's some data from standby (this is when replaying take 44seconds per\nwal files):\n\n2014-06-29 00:07:36.513 CDT,,,16682,,53af6f46.412a,44,,2014-06-28 20:43:34\nCDT,,0,LOG,00000,\"restartpoint complete: wrote 63187 buffers (3.0%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=209.170 s,\nsync=0.482 s, total=209.667 s; sync files=644, longest=0.036 s,\naverage=0.000 s\",,,,,,,,,\"\"\n2014-06-29 00:07:36.513 CDT,,,16682,,53af6f46.412a,45,,2014-06-28 20:43:34\nCDT,,0,LOG,00000,\"recovery restart point at 27CE/170056A8\",\"last completed\ntransaction was at log time 2014-06-27 13:39:00.542624-05\",,,,,,,,\"\"\n2014-06-29 00:28:59.678 CDT,,,16682,,53af6f46.412a,47,,2014-06-28 20:43:34\nCDT,,0,LOG,00000,\"restartpoint complete: wrote 70942 buffers (3.4%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=209.981 s,\nsync=0.493 s, total=210.486 s; sync files=723, longest=0.156 s,\naverage=0.000 s\",,,,,,,,,\"\"\n2014-06-29 00:28:59.678 CDT,,,16682,,53af6f46.412a,48,,2014-06-28 20:43:34\nCDT,,0,LOG,00000,\"recovery restart point at 27CE/35002678\",\"last completed\ntransaction was at log time 2014-06-27 13:42:05.121121-05\",,,,,,,,\"\"\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Sun, 29 Jun 2014 20:05:49 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 06/29/2014 03:43 PM, Soni M wrote:\n> top and sar says 100% cpu usage of one core, no sign of I/O wait.\n\nHmm, I wonder what it's doing then... If you have \"perf\" installed on \nthe system, you can do \"perf top\" to get a quick overlook of where the \nCPU time is spent.\n\n- Heikki\n",
"msg_date": "Mon, 30 Jun 2014 10:05:48 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Here's what 'perf top' said on streaming replica :\n\nSamples: 26K of event 'cpu-clock', Event count (approx.): 19781\n 95.97% postgres [.] 0x00000000002210f3\n 0.41% perf [.] 0x000000000005f225\n 0.39% libc-2.12.so [.] __strstr_sse2\n 0.22% libc-2.12.so [.] memchr\n 0.22% [kernel] [k] kallsyms_expand_symbol\n 0.18% perf [.] symbols__insert\n 0.18% [kernel] [k] format_decode\n 0.15% libc-2.12.so [.] __GI___strcmp_ssse3\n 0.13% [kernel] [k] string\n 0.12% [kernel] [k] number\n 0.12% [kernel] [k] vsnprintf\n 0.12% libc-2.12.so [.] _IO_vfscanf\n 0.11% perf [.] dso__find_symbol\n 0.11% [kernel] [k] _spin_unlock_irqrestore\n 0.10% perf [.] hex2u64\n 0.10% postgres [.]\nhash_search_with_hash_value\n 0.09% perf [.] rb_next\n 0.08% libc-2.12.so [.] memcpy\n 0.07% libc-2.12.so [.] __strchr_sse2\n 0.07% [kernel] [k] clear_page\n 0.06% [kernel] [k] strnlen\n 0.05% perf [.] perf_evsel__parse_sample\n 0.05% perf [.] rb_insert_color\n 0.05% [kernel] [k] pointer\n\n\n\nOn Mon, Jun 30, 2014 at 2:05 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 06/29/2014 03:43 PM, Soni M wrote:\n>\n>> top and sar says 100% cpu usage of one core, no sign of I/O wait.\n>>\n>\n> Hmm, I wonder what it's doing then... If you have \"perf\" installed on the\n> system, you can do \"perf top\" to get a quick overlook of where the CPU time\n> is spent.\n>\n> - Heikki\n>\n>\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Mon, 30 Jun 2014 21:46:10 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 06/30/2014 05:46 PM, Soni M wrote:\n> Here's what 'perf top' said on streaming replica :\n>\n> Samples: 26K of event 'cpu-clock', Event count (approx.): 19781\n> 95.97% postgres [.] 0x00000000002210f3\n\nOk, so it's stuck doing something.. Can you get build with debug symbols \ninstalled, so that we could see the function name?\n- Heikki\n",
"msg_date": "Mon, 30 Jun 2014 19:14:24 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "\nOn Jun 30, 2014, at 9:14 AM, Heikki Linnakangas <[email protected]> wrote:\n\n> On 06/30/2014 05:46 PM, Soni M wrote:\n>> Here's what 'perf top' said on streaming replica :\n>> \n>> Samples: 26K of event 'cpu-clock', Event count (approx.): 19781\n>> 95.97% postgres [.] 0x00000000002210f3\n> \n> Ok, so it's stuck doing something.. Can you get build with debug symbols installed, so that we could see the function name?\n> - Heikki\n> \n\nLooks like StandbyReleaseLocks:\n\nSamples: 10K of event 'cpu-clock', Event count (approx.): 8507\n 89.21% postgres [.] StandbyReleaseLocks\n 0.89% libc-2.12.so [.] __strstr_sse2\n 0.83% perf [.] 0x000000000005f1e5\n 0.74% [kernel] [k] kallsyms_expand_symbol\n 0.52% libc-2.12.so [.] memchr\n 0.47% perf [.] symbols__insert\n 0.47% [kernel] [k] format_decode\n",
"msg_date": "Mon, 30 Jun 2014 10:04:44 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 2014-06-30 19:14:24 +0300, Heikki Linnakangas wrote:\n> On 06/30/2014 05:46 PM, Soni M wrote:\n> >Here's what 'perf top' said on streaming replica :\n> >\n> >Samples: 26K of event 'cpu-clock', Event count (approx.): 19781\n> > 95.97% postgres [.] 0x00000000002210f3\n> \n> Ok, so it's stuck doing something.. Can you get build with debug symbols\n> installed, so that we could see the function name?\n\nMy guess it's a spinlock, probably xlogctl->info_lck via\nRecoveryInProgress(). Unfortunately inline assembler doesn't always seem\nto show up correctly in profiles...\n\nWhat worked for me was to build with -fno-omit-frame-pointer - that\nnormally shows the callers, even if it can't generate a proper symbol\nname.\n\nSoni: Do you use Hot Standby? Are there connections active while you\nhave that problem? Any other processes with high cpu load?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Mon, 30 Jun 2014 19:14:16 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Tue, Jul 1, 2014 at 12:14 AM, Andres Freund <[email protected]>\nwrote:\n\n>\n> My guess it's a spinlock, probably xlogctl->info_lck via\n> RecoveryInProgress(). Unfortunately inline assembler doesn't always seem\n> to show up correctly in profiles...\n>\n> What worked for me was to build with -fno-omit-frame-pointer - that\n> normally shows the callers, even if it can't generate a proper symbol\n> name.\n>\n> Soni: Do you use Hot Standby? Are there connections active while you\n> have that problem? Any other processes with high cpu load?\n>\n> Greetings,\n>\n> Andres Freund\n>\n> --\n> Andres Freund http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nIt is\n 96.62% postgres [.] StandbyReleaseLocks\n as Jeff said. It runs quite long time, more than 5 minutes i think\n\ni also use hot standby. we have 4 streaming replica, some of them has\nactive connection some has not. this issue has last more than 4 days. On\none of the standby, above postgres process is the only process that consume\nhigh cpu load.\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Tue, 1 Jul 2014 00:29:45 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 10:29 AM, Soni M <[email protected]> wrote:\n\n> \n> \n> \n> On Tue, Jul 1, 2014 at 12:14 AM, Andres Freund <[email protected]> wrote:\n> \n> My guess it's a spinlock, probably xlogctl->info_lck via\n> RecoveryInProgress(). Unfortunately inline assembler doesn't always seem\n> to show up correctly in profiles...\n> \n> What worked for me was to build with -fno-omit-frame-pointer - that\n> normally shows the callers, even if it can't generate a proper symbol\n> name.\n> \n> Soni: Do you use Hot Standby? Are there connections active while you\n> have that problem? Any other processes with high cpu load?\n> \n> Greetings,\n> \n> Andres Freund\n> \n> --\n> Andres Freund http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n> \n> It is \n> 96.62% postgres [.] StandbyReleaseLocks\n> as Jeff said. It runs quite long time, more than 5 minutes i think\n> \n> i also use hot standby. we have 4 streaming replica, some of them has active connection some has not. this issue has last more than 4 days. On one of the standby, above postgres process is the only process that consume high cpu load.\n\ncompiled with -fno-omit-frame-pointer doesn't yield much more info:\n\n 76.24% postgres [.] StandbyReleaseLocks\n 2.64% libcrypto.so.1.0.1e [.] md5_block_asm_data_order\n 2.19% libcrypto.so.1.0.1e [.] RC4\n 2.17% postgres [.] RecordIsValid\n 1.20% [kernel] [k] copy_user_generic_unrolled\n 1.18% [kernel] [k] _spin_unlock_irqrestore\n 0.97% [vmxnet3] [k] vmxnet3_poll_rx_only\n 0.87% [kernel] [k] __do_softirq\n 0.77% [vmxnet3] [k] vmxnet3_xmit_frame\n 0.69% postgres [.] hash_search_with_hash_value\n 0.68% [kernel] [k] fin\n\nHowever, this server started progressing through the WAL files quite a bit better before I finished compiling, so we'll leave it running with this version and see if there's more info available the next time it starts replaying slowly.\n",
"msg_date": "Mon, 30 Jun 2014 11:34:52 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 2014-06-30 11:34:52 -0700, Jeff Frost wrote:\n> On Jun 30, 2014, at 10:29 AM, Soni M <[email protected]> wrote:\n\n> > It is \n> > 96.62% postgres [.] StandbyReleaseLocks\n> > as Jeff said. It runs quite long time, more than 5 minutes i think\n> > \n> > i also use hot standby. we have 4 streaming replica, some of them has active connection some has not. this issue has last more than 4 days. On one of the standby, above postgres process is the only process that consume high cpu load.\n> \n> compiled with -fno-omit-frame-pointer doesn't yield much more info:\n\nYou'd need to do perf record -ga instead of perf record -a to see\nadditional information.\n\nBut:\n\n> 76.24% postgres [.] StandbyReleaseLocks\n\nalready is quite helpful.\n\nWhat are you doing on that system? Is there anything requiring large\namounts of access exclusive locks on the primary? Possibly large amounts\nof temporary relations?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Mon, 30 Jun 2014 20:39:08 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "2014-06-30 20:34 GMT+02:00 Jeff Frost <[email protected]>:\n\n> On Jun 30, 2014, at 10:29 AM, Soni M <[email protected]> wrote:\n>\n>\n>\n>\n> On Tue, Jul 1, 2014 at 12:14 AM, Andres Freund <[email protected]>\n> wrote:\n>\n>>\n>> My guess it's a spinlock, probably xlogctl->info_lck via\n>> RecoveryInProgress(). Unfortunately inline assembler doesn't always seem\n>> to show up correctly in profiles...\n>>\n>>\nFor this kind of issues a systemtap or dtrace can be useful\n\nhttp://postgres.cz/wiki/Monitorov%C3%A1n%C3%AD_lwlocku_pomoc%C3%AD_systemtapu\n\nyou can identify what locking is a problem - please, use a google translate\n\nRegards\n\nPavel\n\n\n> What worked for me was to build with -fno-omit-frame-pointer - that\n>> normally shows the callers, even if it can't generate a proper symbol\n>> name.\n>>\n>> Soni: Do you use Hot Standby? Are there connections active while you\n>> have that problem? Any other processes with high cpu load?\n>>\n>> Greetings,\n>>\n>> Andres Freund\n>>\n>> --\n>> Andres Freund http://www.2ndQuadrant.com/\n>> <http://www.2ndquadrant.com/>\n>> PostgreSQL Development, 24x7 Support, Training & Services\n>>\n>\n> It is\n> 96.62% postgres [.] StandbyReleaseLocks\n> as Jeff said. It runs quite long time, more than 5 minutes i think\n>\n> i also use hot standby. we have 4 streaming replica, some of them has\n> active connection some has not. this issue has last more than 4 days. On\n> one of the standby, above postgres process is the only process that consume\n> high cpu load.\n>\n>\n> compiled with -fno-omit-frame-pointer doesn't yield much more info:\n>\n> 76.24% postgres [.] StandbyReleaseLocks\n> 2.64% libcrypto.so.1.0.1e [.]\n> md5_block_asm_data_order\n> 2.19% libcrypto.so.1.0.1e [.] RC4\n> 2.17% postgres [.] RecordIsValid\n> 1.20% [kernel] [k]\n> copy_user_generic_unrolled\n> 1.18% [kernel] [k] _spin_unlock_irqrestore\n> 0.97% [vmxnet3] [k] vmxnet3_poll_rx_only\n> 0.87% [kernel] [k] __do_softirq\n> 0.77% [vmxnet3] [k] vmxnet3_xmit_frame\n> 0.69% postgres [.]\n> hash_search_with_hash_value\n> 0.68% [kernel] [k] fin\n>\n> However, this server started progressing through the WAL files quite a bit\n> better before I finished compiling, so we'll leave it running with this\n> version and see if there's more info available the next time it starts\n> replaying slowly.\n>\n>\n>\n",
"msg_date": "Mon, 30 Jun 2014 20:40:15 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 11:39 AM, Andres Freund <[email protected]> wrote:\n\n> On 2014-06-30 11:34:52 -0700, Jeff Frost wrote:\n>> On Jun 30, 2014, at 10:29 AM, Soni M <[email protected]> wrote:\n> \n>>> It is \n>>> 96.62% postgres [.] StandbyReleaseLocks\n>>> as Jeff said. It runs quite long time, more than 5 minutes i think\n>>> \n>>> i also use hot standby. we have 4 streaming replica, some of them has active connection some has not. this issue has last more than 4 days. On one of the standby, above postgres process is the only process that consume high cpu load.\n>> \n>> compiled with -fno-omit-frame-pointer doesn't yield much more info:\n> \n> You'd need to do perf record -ga instead of perf record -a to see\n> additional information.\n> \n\nAh! That's right.\n\nHere's how that looks:\n\nSamples: 473K of event 'cpu-clock', Event count (approx.): 473738\n+ 68.42% init [kernel.kallsyms] [k] native_safe_halt\n+ 26.07% postgres postgres [.] StandbyReleaseLocks\n+ 2.82% swapper [kernel.kallsyms] [k] native_safe_halt\n+ 0.19% ssh libcrypto.so.1.0.1e [.] md5_block_asm_data_order\n+ 0.19% postgres postgres [.] RecordIsValid\n+ 0.16% ssh libcrypto.so.1.0.1e [.] RC4\n+ 0.10% postgres postgres [.] hash_search_with_hash_value\n+ 0.06% postgres [kernel.kallsyms] [k] _spin_unlock_irqrestore\n+ 0.05% init [vmxnet3] [k] vmxnet3_poll_rx_only\n+ 0.04% postgres [kernel.kallsyms] [k] copy_user_generic_unrolled\n+ 0.04% init [kernel.kallsyms] [k] finish_task_switch\n+ 0.04% init [kernel.kallsyms] [k] __do_softirq\n+ 0.04% ssh [kernel.kallsyms] [k] _spin_unlock_irqrestore\n+ 0.04% ssh [vmxnet3] [k] vmxnet3_xmit_frame\n+ 0.03% postgres postgres [.] PinBuffer\n+ 0.03% init [vmxnet3] [k] vmxnet3_xmit_frame\n+ 0.03% ssh [kernel.kallsyms] [k] copy_user_generic_unrolled\n+ 0.03% postgres postgres [.] XLogReadBufferExtended\n+ 0.03% ssh ssh [.] 
0x000000000002aa07\n+ 0.03% init [kernel.kallsyms] [k] _spin_unlock_irqrestore\n+ 0.03% ssh [vmxnet3] [k] vmxnet3_poll_rx_only\n+ 0.02% ssh [kernel.kallsyms] [k] __do_softirq\n+ 0.02% postgres libc-2.12.so [.] _wordcopy_bwd_dest_aligned\n+ 0.02% postgres postgres [.] mdnblocks\n+ 0.02% ssh libcrypto.so.1.0.1e [.] 0x00000000000e25a1\n+ 0.02% scp [kernel.kallsyms] [k] copy_user_generic_unrolled\n+ 0.02% ssh libc-2.12.so [.] memcpy\n+ 0.02% postgres libc-2.12.so [.] memcpy\n\n\n> But:\n> \n>> 76.24% postgres [.] StandbyReleaseLocks\n> \n> already is quite helpful.\n> \n> What are you doing on that system? Is there anything requiring large\n> amounts of access exclusive locks on the primary? Possibly large amounts\n> of temporary relations?\n\n\nThe last time we did a 100% logging run, the peak temp table creation was something like 120k/hr, but the replicas seemed able to keep up with that just fine.\n\nHopefully Soni can answer whether that has increased significantly since May.\n",
"msg_date": "Mon, 30 Jun 2014 12:17:05 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 12:17 PM, Jeff Frost <[email protected]> wrote:\n\n>> \n>> already is quite helpful.\n>> \n>> What are you doing on that system? Is there anything requiring large\n>> amounts of access exclusive locks on the primary? Possibly large amounts\n>> of temporary relations?\n> \n> \n> The last time we did a 100% logging run, the peak temp table creation was something like 120k/hr, but the replicas seemed able to keep up with that just fine.\n> \n\nSampling pg_locks on the primary shows ~50 locks with ExclusiveLock mode:\n\n mode | count\n--------------------------+-------\n AccessExclusiveLock | 11\n AccessShareLock | 2089\n ExclusiveLock | 46\n RowExclusiveLock | 81\n RowShareLock | 17\n ShareLock | 4\n ShareUpdateExclusiveLock | 5\n\nSeems to be relatively consistent. Of course, it's hard to say what it looked like back when the issue began.\n",
"msg_date": "Mon, 30 Jun 2014 12:25:30 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Jeff Frost <[email protected]> writes:\n> Sampling pg_locks on the primary shows ~50 locks with ExclusiveLock mode:\n\n> mode | count\n> --------------------------+-------\n> AccessExclusiveLock | 11\n> AccessShareLock | 2089\n> ExclusiveLock | 46\n> RowExclusiveLock | 81\n> RowShareLock | 17\n> ShareLock | 4\n> ShareUpdateExclusiveLock | 5\n\nThat's not too helpful if you don't pay attention to what the lock is on;\nit's likely that all the ExclusiveLocks are on transactions' own XIDs,\nwhich isn't relevant to the standby's behavior. The AccessExclusiveLocks\nare probably interesting though --- you should look to see what those\nare on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Jun 2014 15:32:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "\nOn Jun 30, 2014, at 12:32 PM, Tom Lane <[email protected]> wrote:\n\n> Jeff Frost <[email protected]> writes:\n>> Sampling pg_locks on the primary shows ~50 locks with ExclusiveLock mode:\n> \n>> mode | count\n>> --------------------------+-------\n>> AccessExclusiveLock | 11\n>> AccessShareLock | 2089\n>> ExclusiveLock | 46\n>> RowExclusiveLock | 81\n>> RowShareLock | 17\n>> ShareLock | 4\n>> ShareUpdateExclusiveLock | 5\n> \n> That's not too helpful if you don't pay attention to what the lock is on;\n> it's likely that all the ExclusiveLocks are on transactions' own XIDs,\n> which isn't relevant to the standby's behavior. The AccessExclusiveLocks\n> are probably interesting though --- you should look to see what those\n> are on.\n\nYou're right about the ExclusiveLocks.\n\nHere's how the AccessExclusiveLocks look:\n\n locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted\n----------+----------+------------+------+-------+------------+---------------+---------+------------+----------+--------------------+-------+---------------------+---------\n relation | 111285 | 3245291551 | | | | | | | | 233/170813 | 23509 | AccessExclusiveLock | t\n relation | 111285 | 3245292820 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292833 | | | | | | | | 173/1723993 | 23407 | AccessExclusiveLock | t\n relation | 111285 | 3245287874 | | | | | | | | 133/3818415 | 23348 | AccessExclusiveLock | t\n relation | 111285 | 3245292836 | | | | | | | | 173/1723993 | 23407 | AccessExclusiveLock | t\n relation | 111285 | 3245292774 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292734 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292827 | | | | | | | | 173/1723993 | 23407 | AccessExclusiveLock | t\n relation | 111285 | 3245288540 | | | | | | | | 133/3818415 | 23348 
| AccessExclusiveLock | t\n relation | 111285 | 3245292773 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292775 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292743 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292751 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245288669 | | | | | | | | 133/3818415 | 23348 | AccessExclusiveLock | t\n relation | 111285 | 3245292817 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245288657 | | | | | | | | 133/3818415 | 23348 | AccessExclusiveLock | t\n object | 111285 | | | | | | 2615 | 1246019760 | 0 | 233/170813 | 23509 | AccessExclusiveLock | t\n relation | 111285 | 3245292746 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245287876 | | | | | | | | 133/3818415 | 23348 | AccessExclusiveLock | t\n relation | 111285 | 3245292739 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292826 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292825 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245292832 | | | | | | | | 173/1723993 | 23407 | AccessExclusiveLock | t\n relation | 111285 | 3245292740 | | | | | | | | 5/22498235 | 23427 | AccessExclusiveLock | t\n relation | 111285 | 3245287871 | | | | | | | | 133/3818415 | 23348 | AccessExclusiveLock | t\n(25 rows)\n\nAnd if you go fishing in pg_class for any of the oids, you don't find anything:\n\nSELECT s.procpid,\n s.query_start,\n n.nspname,\n c.relname,\n l.mode,\n l.granted,\n s.current_query\n FROM pg_locks l,\n pg_class c,\n pg_stat_activity s,\n pg_namespace n\n WHERE l.relation = c.oid\n AND l.pid = s.procpid\n AND c.relnamespace = n.oid\n AND l.mode = 'AccessExclusiveLock';\n procpid | query_start | nspname | relname | mode | granted | 
current_query\n---------+-------------+---------+---------+------+---------+---------------\n(0 rows)\n\nTemp tables maybe?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 12:42:19 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:\n\n> And if you go fishing in pg_class for any of the oids, you don't find\n> anything:\n\n\nThat is probably because you are connected in the wrong database. Once you\nconnect to the database of interest, you don't even need to query pg_class,\njust cast relation attribute to regclass:\n\n SELECT relation::regclass, ...\n FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE\ndatname = current_database());\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:\n\nAnd if you go fishing in pg_class for any of the oids, you don't find anything:That is probably because you are connected in the wrong database. Once you connect to the database of interest, you don't even need to query pg_class, just cast relation attribute to regclass:\n SELECT relation::regclass, ... FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database());\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 30 Jun 2014 16:54:36 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 12:54 PM, Matheus de Oliveira <[email protected]> wrote:\n\n> \n> On Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:\n> And if you go fishing in pg_class for any of the oids, you don't find anything:\n> \n> That is probably because you are connected in the wrong database. Once you connect to the database of interest, you don't even need to query pg_class, just cast relation attribute to regclass:\n> \n> SELECT relation::regclass, ...\n> FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database());\n> \n\nYah, i thought about that too, but verified I am in the correct DB. Just for clarity sake:\n\nSELECT relation::regclass\n FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database()) and mode = 'AccessExclusiveLock';\n\n relation\n------------\n\n\n 3245508214\n 3245508273\n 3245508272\n 3245508257\n 3245508469\n 3245508274\n 3245508373\n 3245508468\n 3245508210\n 3245508463\n 3245508205\n 3245508260\n 3245508265\n 3245508434\n(16 rows)\nOn Jun 30, 2014, at 12:54 PM, Matheus de Oliveira <[email protected]> wrote:On Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:\n\nAnd if you go fishing in pg_class for any of the oids, you don't find anything:That is probably because you are connected in the wrong database. Once you connect to the database of interest, you don't even need to query pg_class, just cast relation attribute to regclass:\n SELECT relation::regclass, ... FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database());\nYah, i thought about that too, but verified I am in the correct DB. 
Just for clarity sake:SELECT relation::regclass FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database()) and mode = 'AccessExclusiveLock'; relation------------ 3245508214 3245508273 3245508272 3245508257 3245508469 3245508274 3245508373 3245508468 3245508210 3245508463 3245508205 3245508260 3245508265 3245508434(16 rows)",
"msg_date": "Mon, 30 Jun 2014 12:57:56 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 2014-06-30 12:57:56 -0700, Jeff Frost wrote:\n> \n> On Jun 30, 2014, at 12:54 PM, Matheus de Oliveira <[email protected]> wrote:\n> \n> > \n> > On Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:\n> > And if you go fishing in pg_class for any of the oids, you don't find anything:\n> > \n> > That is probably because you are connected in the wrong database. Once you connect to the database of interest, you don't even need to query pg_class, just cast relation attribute to regclass:\n> > \n> > SELECT relation::regclass, ...\n> > FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database());\n> > \n> \n> Yah, i thought about that too, but verified I am in the correct DB. Just for clarity sake:\n\nSo these are probably relations created in uncommitted\ntransactions. Possibly ON COMMIT DROP temp tables?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 22:15:23 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 1:15 PM, Andres Freund <[email protected]> wrote:\n\n> On 2014-06-30 12:57:56 -0700, Jeff Frost wrote:\n>> \n>> On Jun 30, 2014, at 12:54 PM, Matheus de Oliveira <[email protected]> wrote:\n>> \n>>> \n>>> On Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:\n>>> And if you go fishing in pg_class for any of the oids, you don't find anything:\n>>> \n>>> That is probably because you are connected in the wrong database. Once you connect to the database of interest, you don't even need to query pg_class, just cast relation attribute to regclass:\n>>> \n>>> SELECT relation::regclass, ...\n>>> FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database());\n>>> \n>> \n>> Yah, i thought about that too, but verified I am in the correct DB. Just for clarity sake:\n> \n> So these are probably relations created in uncommitted\n> transactions. Possibly ON COMMIT DROP temp tables?\n\n\nThat would make sense. There are definitely quite a few of those being used.\n\nAnother item of note is the system catalogs are quite bloated:\n\n schemaname | tablename | tbloat | wastedmb | idxbloat | wastedidxmb\n------------+--------------+--------+----------+----------+-------------\n pg_catalog | pg_attribute | 3945 | 106.51 | 2770 | 611.28\n pg_catalog | pg_class | 8940 | 45.26 | 4420 | 47.89\n pg_catalog | pg_type | 4921 | 18.45 | 5850 | 81.16\n pg_catalog | pg_depend | 933 | 10.23 | 11730 | 274.37\n pg_catalog | pg_index | 3429 | 8.36 | 3920 | 24.24\n pg_catalog | pg_shdepend | 983 | 2.67 | 9360 | 30.56\n(6 rows)\n\nWould that cause the replica to spin on StandbyReleaseLocks?\n\n\n\nOn Jun 30, 2014, at 1:15 PM, Andres Freund <[email protected]> wrote:On 2014-06-30 12:57:56 -0700, Jeff Frost wrote:On Jun 30, 2014, at 12:54 PM, Matheus de Oliveira <[email protected]> wrote:On Mon, Jun 30, 2014 at 4:42 PM, Jeff Frost <[email protected]> wrote:And if you go fishing in pg_class for any of the oids, you 
don't find anything:That is probably because you are connected in the wrong database. Once you connect to the database of interest, you don't even need to query pg_class, just cast relation attribute to regclass: SELECT relation::regclass, ... FROM pg_locks WHERE database = (SELECT oid FROM pg_database WHERE datname = current_database());Yah, i thought about that too, but verified I am in the correct DB. Just for clarity sake:So these are probably relations created in uncommittedtransactions. Possibly ON COMMIT DROP temp tables?That would make sense. There are definitely quite a few of those being used.Another item of note is the system catalogs are quite bloated: schemaname | tablename | tbloat | wastedmb | idxbloat | wastedidxmb------------+--------------+--------+----------+----------+------------- pg_catalog | pg_attribute | 3945 | 106.51 | 2770 | 611.28 pg_catalog | pg_class | 8940 | 45.26 | 4420 | 47.89 pg_catalog | pg_type | 4921 | 18.45 | 5850 | 81.16 pg_catalog | pg_depend | 933 | 10.23 | 11730 | 274.37 pg_catalog | pg_index | 3429 | 8.36 | 3920 | 24.24 pg_catalog | pg_shdepend | 983 | 2.67 | 9360 | 30.56(6 rows)Would that cause the replica to spin on StandbyReleaseLocks?",
"msg_date": "Mon, 30 Jun 2014 13:21:08 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Jeff Frost <[email protected]> writes:\n> On Jun 30, 2014, at 1:15 PM, Andres Freund <[email protected]> wrote:\n>> So these are probably relations created in uncommitted\n>> transactions. Possibly ON COMMIT DROP temp tables?\n\n> That would make sense. There are definitely quite a few of those being used.\n\nUh-huh. I doubt that the mechanism that handles propagation of\nAccessExclusiveLocks to the standby is smart enough to ignore locks\non temp tables :-(\n\n> Another item of note is the system catalogs are quite bloated:\n> Would that cause the replica to spin on StandbyReleaseLocks?\n\nAFAIK, no. It's an unsurprising consequence of heavy use of short-lived\ntemp tables though.\n\nSo it seems like we have a candidate explanation. I'm a bit surprised\nthat StandbyReleaseLocks would get this slow if there are only a dozen\nAccessExclusiveLocks in place at any one time, though. Perhaps that\nwas a low point and there are often many more?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 16:39:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "\nOn Jun 30, 2014, at 1:39 PM, Tom Lane <[email protected]> wrote:\n\n> \n> \n>> Another item of note is the system catalogs are quite bloated:\n>> Would that cause the replica to spin on StandbyReleaseLocks?\n> \n> AFAIK, no. It's an unsurprising consequence of heavy use of short-lived\n> temp tables though.\n> \n\nYah, this has been an issue in the past, so we tend to cluster them regularly during off-hours to minimize the issue.\n\n> So it seems like we have a candidate explanation. I'm a bit surprised\n> that StandbyReleaseLocks would get this slow if there are only a dozen\n> AccessExclusiveLocks in place at any one time, though. Perhaps that\n> was a low point and there are often many more?\n> \n> \t\n\nEntirely possible that it was a low point. We'll set up some monitoring to track the number of AccessExclusiveLocks and see how much variance there is throughout the day.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 13:46:05 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 1:46 PM, Jeff Frost <[email protected]> wrote:\n\n>> So it seems like we have a candidate explanation. I'm a bit surprised\n>> that StandbyReleaseLocks would get this slow if there are only a dozen\n>> AccessExclusiveLocks in place at any one time, though. Perhaps that\n>> was a low point and there are often many more?\n>> \n>> \t\n> \n> Entirely possible that it was a low point. We'll set up some monitoring to track the number of AccessExclusiveLocks and see how much variance there is throughout the day.\n\n\nSince we turned on the monitoring for that, we had a peak of 13,550 AccessExclusiveLocks. So far most of the samples have been in the double digit, with that and two other outliers: 6,118 and 12,747.\nOn Jun 30, 2014, at 1:46 PM, Jeff Frost <[email protected]> wrote:So it seems like we have a candidate explanation. I'm a bit surprisedthat StandbyReleaseLocks would get this slow if there are only a dozenAccessExclusiveLocks in place at any one time, though. Perhaps thatwas a low point and there are often many more? Entirely possible that it was a low point. We'll set up some monitoring to track the number of AccessExclusiveLocks and see how much variance there is throughout the day.Since we turned on the monitoring for that, we had a peak of 13,550 AccessExclusiveLocks. So far most of the samples have been in the double digit, with that and two other outliers: 6,118 and 12,747.",
"msg_date": "Mon, 30 Jun 2014 14:52:22 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Jeff Frost <[email protected]> writes:\n>>> So it seems like we have a candidate explanation. I'm a bit surprised\n>>> that StandbyReleaseLocks would get this slow if there are only a dozen\n>>> AccessExclusiveLocks in place at any one time, though. Perhaps that\n>>> was a low point and there are often many more?\n\n> Since we turned on the monitoring for that, we had a peak of 13,550\n> AccessExclusiveLocks.\n\nAh ... that's more like a number I can believe something would have\ntrouble coping with. Did you see a noticeable slowdown with this?\nNow that we've seen that number, of course it's possible there was an\neven higher peak occurring when you saw the trouble.\n\nPerhaps there's an O(N^2) behavior in StandbyReleaseLocks, or maybe\nit just takes awhile to handle that many locks.\n\nDid you check whether the locks were all on temp tables of the\nON COMMIT DROP persuasion?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 19:04:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 2014-06-30 19:04:20 -0400, Tom Lane wrote:\n> Jeff Frost <[email protected]> writes:\n> >>> So it seems like we have a candidate explanation. I'm a bit surprised\n> >>> that StandbyReleaseLocks would get this slow if there are only a dozen\n> >>> AccessExclusiveLocks in place at any one time, though. Perhaps that\n> >>> was a low point and there are often many more?\n> \n> > Since we turned on the monitoring for that, we had a peak of 13,550\n> > AccessExclusiveLocks.\n\nAny chance the workload also uses lots of subtransactions?\n\n> Ah ... that's more like a number I can believe something would have\n> trouble coping with. Did you see a noticeable slowdown with this?\n> Now that we've seen that number, of course it's possible there was an\n> even higher peak occurring when you saw the trouble.\n> \n> Perhaps there's an O(N^2) behavior in StandbyReleaseLocks, or maybe\n> it just takes awhile to handle that many locks.\n\nI don't think there's a O(n^2) in StandbyReleaseLocks() itself, but in\ncombination with StandbyReleaseLockTree() it looks possibly bad. The\nlatter will call StandbyReleaseLocks() for every xid/subxid, and each of\nthe StandbyReleaseLocks() will then trawl the entire RecoveryLockList...\n\nIt'd probably be better to implement ReleaseLocksTree() by sorting the\nsubxid list and bsearch that while iterating RecoveryLockList.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Jul 2014 01:17:41 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:\n\n> Ah ... that's more like a number I can believe something would have\n> trouble coping with. Did you see a noticeable slowdown with this?\n> Now that we've seen that number, of course it's possible there was an\n> even higher peak occurring when you saw the trouble.\n> \n> Perhaps there's an O(N^2) behavior in StandbyReleaseLocks, or maybe\n> it just takes awhile to handle that many locks.\n> \n> Did you check whether the locks were all on temp tables of the\n> ON COMMIT DROP persuasion?\n\n\nUnfortunately not, because I went for a poor man's: SELECT count(*) FROM pg_locks WHERE mode = 'AccessExclusiveLock' \nrun in cron every minute.\n\nThat said, I'd bet it was mostly ON COMMIT DROP temp tables.\n\nThe unfortunate thing is I wouldn't know how to correlate that spike with the corresponding slowdown because the replica is about 5.5hrs lagged at the moment.\n\nHopefully it will get caught up tonight and we can see if there's a correlation tomorrow.\nOn Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:Ah ... that's more like a number I can believe something would havetrouble coping with. 
Did you see a noticeable slowdown with this?Now that we've seen that number, of course it's possible there was aneven higher peak occurring when you saw the trouble.Perhaps there's an O(N^2) behavior in StandbyReleaseLocks, or maybeit just takes awhile to handle that many locks.Did you check whether the locks were all on temp tables of theON COMMIT DROP persuasion?Unfortunately not, because I went for a poor man's: SELECT count(*) FROM pg_locks WHERE mode = 'AccessExclusiveLock' run in cron every minute.That said, I'd bet it was mostly ON COMMIT DROP temp tables.The unfortunate thing is I wouldn't know how to correlate that spike with the corresponding slowdown because the replica is about 5.5hrs lagged at the moment.Hopefully it will get caught up tonight and we can see if there's a correlation tomorrow.",
"msg_date": "Mon, 30 Jun 2014 16:57:33 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On Jun 30, 2014, at 4:57 PM, Jeff Frost <[email protected]> wrote:\n\n> \n> On Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:\n> \n>> Ah ... that's more like a number I can believe something would have\n>> trouble coping with. Did you see a noticeable slowdown with this?\n>> Now that we've seen that number, of course it's possible there was an\n>> even higher peak occurring when you saw the trouble.\n>> \n>> Perhaps there's an O(N^2) behavior in StandbyReleaseLocks, or maybe\n>> it just takes awhile to handle that many locks.\n>> \n>> Did you check whether the locks were all on temp tables of the\n>> ON COMMIT DROP persuasion?\n> \n> \n> Unfortunately not, because I went for a poor man's: SELECT count(*) FROM pg_locks WHERE mode = 'AccessExclusiveLock' \n> run in cron every minute.\n> \n> That said, I'd bet it was mostly ON COMMIT DROP temp tables.\n> \n> The unfortunate thing is I wouldn't know how to correlate that spike with the corresponding slowdown because the replica is about 5.5hrs lagged at the moment.\n> \n> Hopefully it will get caught up tonight and we can see if there's a correlation tomorrow.\n\nAnd indeed it did catch up overnight and the lag increased shortly after a correlating spike in AccessExclusiveLocks that were generated by temp table creation with on commit drop.\n\n\n\nOn Jun 30, 2014, at 4:57 PM, Jeff Frost <[email protected]> wrote:On Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:Ah ... that's more like a number I can believe something would havetrouble coping with. 
Did you see a noticeable slowdown with this?Now that we've seen that number, of course it's possible there was aneven higher peak occurring when you saw the trouble.Perhaps there's an O(N^2) behavior in StandbyReleaseLocks, or maybeit just takes awhile to handle that many locks.Did you check whether the locks were all on temp tables of theON COMMIT DROP persuasion?Unfortunately not, because I went for a poor man's: SELECT count(*) FROM pg_locks WHERE mode = 'AccessExclusiveLock' run in cron every minute.That said, I'd bet it was mostly ON COMMIT DROP temp tables.The unfortunate thing is I wouldn't know how to correlate that spike with the corresponding slowdown because the replica is about 5.5hrs lagged at the moment.Hopefully it will get caught up tonight and we can see if there's a correlation tomorrow.And indeed it did catch up overnight and the lag increased shortly after a correlating spike in AccessExclusiveLocks that were generated by temp table creation with on commit drop.",
"msg_date": "Tue, 1 Jul 2014 10:12:14 -0700",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Jeff Frost <[email protected]> writes:\n>> On Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:\n>>> Did you check whether the locks were all on temp tables of the\n>>> ON COMMIT DROP persuasion?\n\n> And indeed it did catch up overnight and the lag increased shortly after a correlating spike in AccessExclusiveLocks that were generated by temp table creation with on commit drop.\n\nOK, so we have a pretty clear idea of where the problem is now.\n\nIt seems like there are three, not mutually exclusive, ways we might\naddress this:\n\n1. Local revisions inside StandbyReleaseLocks to make it perform better in\nthe presence of many locks. This would only be likely to improve matters\nmuch if there's a fixable O(N^2) algorithmic issue; but there might well\nbe one.\n\n2. Avoid WAL-logging AccessExclusiveLocks associated with temp tables, on\nthe grounds that no standby should be touching them. I'm not entirely\nsure that that argument is bulletproof though; in particular, even though\na standby couldn't access the table's data, it's possible that it would be\ninterested in seeing consistent catalog entries.\n\n3. Avoid WAL-logging AccessExclusiveLocks associated with\nnew-in-transaction tables, temp or not, on the grounds that no standby\ncould even see such tables until they're committed. We could go a bit\nfurther and not take out any locks on a new-in-transaction table in the\nfirst place, on the grounds that other transactions on the master can't\nsee 'em either.\n\nIt sounded like Andres had taken a preliminary look at #1 and found a\npossible avenue for improvement, which I'd encourage him to pursue.\n\nFor both #2 and the conservative version of #3, the main implementation\nproblem would be whether the lock WAL-logging code has cheap access to\nthe necessary information. I suspect it doesn't.\n\nThe radical version of #3 might be pretty easy to do, at least to the\nextent of removing locks taken out during CREATE TABLE. 
I suspect there\nare some assertions or other consistency checks that would get unhappy if\nwe manipulate relations without locks, though, so those would have to be\ntaught about the exception. Also, we sometimes forget new-in-transaction\nstatus during relcache flush events; it's not clear if that would be a\nproblem for this.\n\nI don't plan to work on this myself, but perhaps someone with more\nmotivation will want to run with these ideas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Jul 2014 15:20:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 2014-07-01 15:20:37 -0400, Tom Lane wrote:\n> Jeff Frost <[email protected]> writes:\n> >> On Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:\n> >>> Did you check whether the locks were all on temp tables of the\n> >>> ON COMMIT DROP persuasion?\n> \n> > And indeed it did catch up overnight and the lag increased shortly after a correlating spike in AccessExclusiveLocks that were generated by temp table creation with on commit drop.\n> \n> OK, so we have a pretty clear idea of where the problem is now.\n> \n> It seems like there are three, not mutually exclusive, ways we might\n> address this:\n> \n> 1. Local revisions inside StandbyReleaseLocks to make it perform better in\n> the presence of many locks. This would only be likely to improve matters\n> much if there's a fixable O(N^2) algorithmic issue; but there might well\n> be one.\n>\n> It sounded like Andres had taken a preliminary look at #1 and found a\n> possible avenue for improvement, which I'd encourage him to pursue.\n> \n\nI don't have the resources to do this right now, but yes, I think we can\nget relatively easily get rid of the O(num_locks * num_subtransactions)\nbehaviour.\n\n> 2. Avoid WAL-logging AccessExclusiveLocks associated with temp tables, on\n> the grounds that no standby should be touching them. I'm not entirely\n> sure that that argument is bulletproof though; in particular, even though\n> a standby couldn't access the table's data, it's possible that it would be\n> interested in seeing consistent catalog entries.\n\nHm. We definitely perform checks surprisingly late for those. It's\npossible to do SELECT * FROM pg_temp_<nn>.whatever; without an error f\nthere's no rows of if the rest of the plan doesn't do accesses to that\ntable. The check prohibiting access is only in bufmgr.c...\nSo yea, I don't think we can do this for at least < 9.4. And there\nit'll still be hard.\n\n> 3. 
Avoid WAL-logging AccessExclusiveLocks associated with\n> new-in-transaction tables, temp or not, on the grounds that no standby\n> could even see such tables until they're committed. We could go a bit\n> further and not take out any locks on a new-in-transaction table in the\n> first place, on the grounds that other transactions on the master can't\n> see 'em either.\n> \n> For both #2 and the conservative version of #3, the main implementation\n> problem would be whether the lock WAL-logging code has cheap access to\n> the necessary information. I suspect it doesn't.\n\nNot trivially. It's logged directly in LockAcquireExtended(). We could\nadd the information into locktags as there's unused fields for relation\nlocktags, but brrr.\n\n> The radical version of #3 might be pretty easy to do, at least to the\n> extent of removing locks taken out during CREATE TABLE. I suspect there\n> are some assertions or other consistency checks that would get unhappy if\n> we manipulate relations without locks, though, so those would have to be\n> taught about the exception.\n>\n> Also, we sometimes forget new-in-transaction\n> status during relcache flush events; it's not clear if that would be a\n> problem for this.\n\nI think that hole is actually pluggable in newer releases - at least\nthere's no code around that assumes rd_createSubid now is persistent,\neven across cache resets.\n\nBut I think more importantly it's probably quite possible to hit a\nsimilar problem without ON COMMIT DROP relations. Say DISCARD TEMP\ninside a transaction (with several subxacts) or so? So we probaly really\nshould fix the bad scaling.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 2 Jul 2014 21:01:11 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2014-07-01 15:20:37 -0400, Tom Lane wrote:\n>> It seems like there are three, not mutually exclusive, ways we might\n>> address this:\n\n> But I think more importantly it's probably quite possible to hit a\n> similar problem without ON COMMIT DROP relations. Say DISCARD TEMP\n> inside a transaction (with several subxacts) or so? So we probaly really\n> should fix the bad scaling.\n\nWell, my thought was that these approaches would address somewhat\ndifferent sets of use-cases, and we might well want to do more than one.\nEven if StandbyReleaseLocks were zero-cost, not emitting the WAL in the\nfirst place is surely considerably cheaper yet.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 02 Jul 2014 15:14:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 1 July 2014 20:20, Tom Lane <[email protected]> wrote:\n\n> I don't plan to work on this myself, but perhaps someone with more\n> motivation will want to run with these ideas.\n\nI was planning to work on improving performance of replication apply\nover the summer, mid July - Aug, so I'll add this to the list.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 3 Jul 2014 10:34:14 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
},
{
"msg_contents": "On 1 July 2014 20:20, Tom Lane <[email protected]> wrote:\n> Jeff Frost <[email protected]> writes:\n>>> On Jun 30, 2014, at 4:04 PM, Tom Lane <[email protected]> wrote:\n>>>> Did you check whether the locks were all on temp tables of the\n>>>> ON COMMIT DROP persuasion?\n>\n>> And indeed it did catch up overnight and the lag increased shortly after a correlating spike in AccessExclusiveLocks that were generated by temp table creation with on commit drop.\n>\n> OK, so we have a pretty clear idea of where the problem is now.\n>\n> It seems like there are three, not mutually exclusive, ways we might\n> address this:\n>\n> 1. Local revisions inside StandbyReleaseLocks to make it perform better in\n> the presence of many locks. This would only be likely to improve matters\n> much if there's a fixable O(N^2) algorithmic issue; but there might well\n> be one.\n>\n> 2. Avoid WAL-logging AccessExclusiveLocks associated with temp tables, on\n> the grounds that no standby should be touching them. I'm not entirely\n> sure that that argument is bulletproof though; in particular, even though\n> a standby couldn't access the table's data, it's possible that it would be\n> interested in seeing consistent catalog entries.\n>\n> 3. Avoid WAL-logging AccessExclusiveLocks associated with\n> new-in-transaction tables, temp or not, on the grounds that no standby\n> could even see such tables until they're committed. We could go a bit\n> further and not take out any locks on a new-in-transaction table in the\n> first place, on the grounds that other transactions on the master can't\n> see 'em either.\n>\n> It sounded like Andres had taken a preliminary look at #1 and found a\n> possible avenue for improvement, which I'd encourage him to pursue.\n>\n> For both #2 and the conservative version of #3, the main implementation\n> problem would be whether the lock WAL-logging code has cheap access to\n> the necessary information. 
I suspect it doesn't.\n>\n> The radical version of #3 might be pretty easy to do, at least to the\n> extent of removing locks taken out during CREATE TABLE. I suspect there\n> are some assertions or other consistency checks that would get unhappy if\n> we manipulate relations without locks, though, so those would have to be\n> taught about the exception. Also, we sometimes forget new-in-transaction\n> status during relcache flush events; it's not clear if that would be a\n> problem for this.\n>\n> I don't plan to work on this myself, but perhaps someone with more\n> motivation will want to run with these ideas.\n\nPatch implements option 2 in the above.\n\nSkipping the locks entirely seems like it opens a can of worms.\n\nSkipping the lock for temp tables is valid since locks don't need to\nexist on the standby. Any catalog entries for them will exist, but the\nrows will show them as temp and nobody would expect them to be valid\noutside of the original session.\n\nPatch implements a special case that takes the lock normally, but\nskips WAL logging the lock info.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Wed, 17 Sep 2014 18:15:32 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replaying WAL slowly"
}
] |
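The O(N^2) suspicion in option 1 of the thread above can be made concrete with a toy model (plain Python, not PostgreSQL source): releasing one transaction's standby locks by scanning the entire lock list each commit, versus keeping the locks hashed by xid. The lock counts here are invented for illustration.

```python
# Toy model (not the real StandbyReleaseLocks code): compare scanning the
# whole standby lock list per commit vs. looking up locks hashed by xid.

def release_by_scan(locks, xid):
    """Scan the full list, keeping entries belonging to other transactions."""
    kept, scanned = [], 0
    for lock in locks:
        scanned += 1
        if lock[0] != xid:
            kept.append(lock)
    return kept, scanned

def release_by_xid_map(lock_map, xid):
    """Drop all locks of one xid with a single dict lookup."""
    released = lock_map.pop(xid, [])
    return len(released)

# 1000 transactions, each holding 10 AccessExclusiveLocks
# (e.g. ON COMMIT DROP temp tables).
locks = [(xid, "rel%d_%d" % (xid, i)) for xid in range(1000) for i in range(10)]

flat = list(locks)
total_scanned = 0
for xid in range(1000):
    flat, scanned = release_by_scan(flat, xid)
    total_scanned += scanned

lock_map = {}
for xid, rel in locks:
    lock_map.setdefault(xid, []).append(rel)
total_released = sum(release_by_xid_map(lock_map, xid) for xid in range(1000))

print(total_scanned)   # millions of comparisons: quadratic in lock volume
print(total_released)  # 10,000: linear in the number of locks held
```

The gap between the two counters is the kind of algorithmic headroom option 1 is after; the hashed variant does work proportional only to the locks actually being released.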
[
{
"msg_contents": "Hi all\n\nThe docs say:\n\n\"For best optimization results, you should label your functions with the\nstrictest volatility category that is valid for them.\"\n\nhttp://www.postgresql.org/docs/current/interactive/xfunc-volatility.html\n\n... but I recall discussion here suggesting that in fact IMMUTABLE\nfunctions may not be inlined where you'd expect, e.g.\n\nhttp://www.postgresql.org/message-id/CAFj8pRBF3Qr7WtQwO1H_WN=hhFGk0semwhdE+ODz3iyv-TroMQ@mail.gmail.com\n\nThat's always seemed counter to my expectations. Am I just\nmisunderstanding? Tom's comment seemed to confirm what Pavel was saying.\n\nI know STRICT can prevent inlining (unfortunately, though necessarily),\nbut it seems inexplicable that IMMUTABLE should. If it can, then the\ndocumentation is wrong.\n\nWhich is it?\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 17:24:55 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Volatility - docs vs behaviour?"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> The docs say:\n\n> \"For best optimization results, you should label your functions with the\n> strictest volatility category that is valid for them.\"\n\nYeah ...\n\n> ... but I recall discussion here suggesting that in fact IMMUTABLE\n> functions may not be inlined where you'd expect, e.g.\n> http://www.postgresql.org/message-id/CAFj8pRBF3Qr7WtQwO1H_WN=hhFGk0semwhdE+ODz3iyv-TroMQ@mail.gmail.com\n\nThe reason that case behaved surprisingly was exactly that the user had\nviolated the above bit of documentation, ie, he'd marked the function\n*incorrectly* as being immutable when in fact its contained functions\nwere only stable.\n\n> I know STRICT can prevent inlining (unfortunately, though necessarily),\n> but it seems inexplicable that IMMUTABLE should.\n\nI don't see why you find that inexplicable. If the planner were to\ninline this function, it would then fail to reduce a call with constant\nargument to a constant, which is presumably what the user desires from\nmarking it immutable (questions of correctness in the face of timezone\nchanges notwithstanding). Just as we \"keep the wrapper on\" when it's\nnecessary to hide possible non-strictness of the body of a function,\nwe must do so when inlining would raise the visible volatility of an\nexpression.\n\nIt's true that the above-quoted bit of advice presumes that you correctly\nidentify the \"strictest volatility category that is valid\" for a given\nfunction. If you're too lazy or uninformed to do that, it might be\nbetter to leave the settings at defaults (volatile/nonstrict) and hope\nthe planner can figure out that it's safe to inline anyway.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 11:49:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatility - docs vs behaviour?"
},
{
"msg_contents": "On 06/30/2014 11:49 PM, Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n>> The docs say:\n> \n>> \"For best optimization results, you should label your functions with the\n>> strictest volatility category that is valid for them.\"\n> \n> Yeah ...\n> \n>> ... but I recall discussion here suggesting that in fact IMMUTABLE\n>> functions may not be inlined where you'd expect, e.g.\n>> http://www.postgresql.org/message-id/CAFj8pRBF3Qr7WtQwO1H_WN=hhFGk0semwhdE+ODz3iyv-TroMQ@mail.gmail.com\n> \n> The reason that case behaved surprisingly was exactly that the user had\n> violated the above bit of documentation, ie, he'd marked the function\n> *incorrectly* as being immutable when in fact its contained functions\n> were only stable.\n\nYes, I realise that's the case with this particular incident. It's the\nmore general case I'm interested in - whether this can be true in\ngeneral, not just when the user does something dumb.\n\nIt sounds like you're saying that the behaviour observed here is\nspecific to cases where the user incorrectly identifies the function\nvolatility. In which case we don't care, that's fine, no problem here.\n\nMy concern was only with whether the advice that the highest volatility\ncategory should be used is always true for *correct* immutable functions\ntoo.\n\n>> I know STRICT can prevent inlining (unfortunately, though necessarily),\n>> but it seems inexplicable that IMMUTABLE should.\n> \n> I don't see why you find that inexplicable. If the planner were to\n> inline this function, it would then fail to reduce a call with constant\n> argument to a constant, which is presumably what the user desires from\n> marking it immutable (questions of correctness in the face of timezone\n> changes notwithstanding). 
Just as we \"keep the wrapper on\" when it's\n> necessary to hide possible non-strictness of the body of a function,\n> we must do so when inlining would raise the visible volatility of an\n> expression.\n\nIf the input is constant, then clearly it should be evaluated and a\nconstant substituted.\n\nIf it _isn't_ a constant input, then why would STRICT inline when\nIMMUTABLE doesn't?\n\n> It's true that the above-quoted bit of advice presumes that you correctly\n> identify the \"strictest volatility category that is valid\" for a given\n> function. If you're too lazy or uninformed to do that, it might be\n> better to leave the settings at defaults (volatile/nonstrict) and hope\n> the planner can figure out that it's safe to inline anyway.\n\nI was unaware that the planner made any attempt to catch users' errors\nin marking the strictness of functions. I thought it pretty much trusted\nthe user not to lie about the mutability of functions invoked\nindirectly. I'm not really sure where in the inlining code to look to\nfigure that out.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 01 Jul 2014 09:11:32 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Volatility - docs vs behaviour?"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> I was unaware that the planner made any attempt to catch users' errors\n> in marking the strictness of functions. I thought it pretty much trusted\n> the user not to lie about the mutability of functions invoked\n> indirectly. I'm not really sure where in the inlining code to look to\n> figure that out.\n\nIt's in optimizer/util/clauses.c:\n\n /*\n * Additional validity checks on the expression. It mustn't return a set,\n * and it mustn't be more volatile than the surrounding function (this is\n * to avoid breaking hacks that involve pretending a function is immutable\n * when it really ain't). If the surrounding function is declared strict,\n * then the expression must contain only strict constructs and must use\n * all of the function parameters (this is overkill, but an exact analysis\n * is hard).\n */\n if (expression_returns_set(newexpr))\n goto fail;\n\n if (funcform->provolatile == PROVOLATILE_IMMUTABLE &&\n contain_mutable_functions(newexpr))\n goto fail;\n else if (funcform->provolatile == PROVOLATILE_STABLE &&\n contain_volatile_functions(newexpr))\n goto fail;\n\nAs the comment says, this wasn't really coded with an eye towards\n\"catching user error\". Rather, there are known use-cases where people\nintentionally use SQL wrapper functions to lie about the mutability\nof some underlying function; inlining would expose the truth of the\nmatter and thus defeat such hacks. Now I'd be the first to agree\nthat this isn't a terribly high-performance way of doing that, but\nthe point here was to not change the behavior that existed before\nSQL inlining did.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jun 2014 22:15:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatility - docs vs behaviour?"
},
{
"msg_contents": "On Mon, Jun 30, 2014 at 9:15 PM, Tom Lane <[email protected]> wrote:\n> Craig Ringer <[email protected]> writes:\n>> I was unaware that the planner made any attempt to catch users' errors\n>> in marking the strictness of functions. I thought it pretty much trusted\n>> the user not to lie about the mutability of functions invoked\n>> indirectly. I'm not really sure where in the inlining code to look to\n>> figure that out.\n>\n> It's in optimizer/util/clauses.c:\n>\n> /*\n> * Additional validity checks on the expression. It mustn't return a set,\n> * and it mustn't be more volatile than the surrounding function (this is\n> * to avoid breaking hacks that involve pretending a function is immutable\n> * when it really ain't). If the surrounding function is declared strict,\n> * then the expression must contain only strict constructs and must use\n> * all of the function parameters (this is overkill, but an exact analysis\n> * is hard).\n> */\n> if (expression_returns_set(newexpr))\n> goto fail;\n>\n> if (funcform->provolatile == PROVOLATILE_IMMUTABLE &&\n> contain_mutable_functions(newexpr))\n> goto fail;\n> else if (funcform->provolatile == PROVOLATILE_STABLE &&\n> contain_volatile_functions(newexpr))\n> goto fail;\n>\n> As the comment says, this wasn't really coded with an eye towards\n> \"catching user error\". Rather, there are known use-cases where people\n> intentionally use SQL wrapper functions to lie about the mutability\n> of some underlying function; inlining would expose the truth of the\n> matter and thus defeat such hacks. Now I'd be the first to agree\n> that this isn't a terribly high-performance way of doing that, but\n> the point here was to not change the behavior that existed before\n> SQL inlining did.\n\nsome points:\n*) there are several cases that look superficially immutable to the\nuser but are really stable. Mostly this comes up with date time\nfunctions because of the database dependency. 
The issue I have with\nthe status quo is that the server punishes you by mis-decorating the\nfunction (it gets treated as volatile).\n\n*) some formulations of functions like to_char() are immutable\ndepending on arguments (julian day for example). so if the user wraps\nthis for purposes of indexing and then uses that same function for\nquerying, the operation is non inlineable.\n\n*) unless you really know your stuff server inlining is completely\nabstracted from you, except in terms of performance\n\nAdding up the above, the way things work today is kind of a pain -- in\nmany ways I feel that if I mark a function IMMUTABLE, the server should\nnot overrule me. If that argument doesn't hold water, then server\nshould at least tell you when a function is inlineable -- perhaps via\n\\df+.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 1 Jul 2014 11:26:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatility - docs vs behaviour?"
}
] |
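The clauses.c check Tom quotes above boils down to a volatility-ordering rule: inlining is refused whenever it would raise the visible volatility of the expression. A minimal Python sketch of that rule (a toy model, not the planner's code):

```python
# Toy model of the inlining validity check quoted from
# optimizer/util/clauses.c: a SQL wrapper function is only inlined if the
# body is no more volatile than the wrapper's declared volatility.

LEVELS = {"immutable": 0, "stable": 1, "volatile": 2}

def can_inline(wrapper_volatility, body_volatility):
    # Inlining must not expose a "more volatile" expression than declared.
    return LEVELS[body_volatility] <= LEVELS[wrapper_volatility]

# Correctly labelled immutable wrapper over immutable body: inlined.
print(can_inline("immutable", "immutable"))  # True
# The surprising case from the thread: wrapper marked IMMUTABLE but body
# calls a STABLE function (e.g. timezone-dependent formatting): not inlined.
print(can_inline("immutable", "stable"))     # False
# A default (volatile) wrapper is never blocked on volatility grounds.
print(can_inline("volatile", "stable"))      # True
```

This is why leaving a mislabelled function at the volatile default can, counterintuitively, inline better than over-promising IMMUTABLE.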
[
{
"msg_contents": "Hello,\n\nHas anyone some experience using defragmentation tools on Linux against tablespaces ?\n\nWe are facing fragmentation problems with postgres instances having a few TB of data.\n( RAID 5 )\n\nI/O throughput decreased from 300MB/s to 160.\n\n\n- We first moved some schemas to separate servers.\n  After that we still have 150'000 tables in 1.5 TB\n  \n- Now we are in the process of vacuuming FULL historical tables which are not written anymore. \n  This seems to improve the I/O considerably\n  \nOur remaining issue is that the free space fragmentation is still suboptimal \nso that fragmentation will probably start again soon.\n\nWould it make sense to use a tool like e4defrag\n(http://www.linux.org/threads/online-defragmentation.4121/)\nin order to defrag the free space ?\nAnd how safe is it to use such a tool against a running postgres instance?\n\n\n\nmany thanks,\n\nMarc Mamin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 2 Jul 2014 08:51:16 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": true,
"msg_subject": "fragmention issue with ext4: e4defrag?"
},
{
"msg_contents": "Marc Mamin <[email protected]> wrote:\n\n> I/O througput decreased from 300MB/s to 160.\n\nI don't have any experience with ext4 defrag tools, but just wanted\nto point out that the difference in performance you cite above is\nabout the same as the difference between accessing data on the\nouter (and usually first-filled) tracks on a disk drive and the\ninner tracks. One of the reasons performance falls as a drive\nfills is that the OS is compelled to use slower and slower portions\nof the disk. Part of the benefit you are seeing might be due to\nfreeing \"fast\" tracks and data being relocated there.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 2 Jul 2014 11:01:29 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fragmention issue with ext4: e4defrag?"
}
] |
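Kevin's track-placement point aside, the throughput drop Marc reports is the kind of thing a crude seek-cost model can illustrate: each extra extent in a fragmented file costs roughly one seek during a sequential read. A back-of-the-envelope sketch with assumed (not measured) disk numbers:

```python
# Back-of-the-envelope model, not a benchmark: effective sequential
# throughput of a file split into N extents, assuming a nominal 300 MB/s
# streaming rate and 8 ms per seek (both numbers are illustrative).

def effective_mb_per_s(file_mb, extents, disk_mb_per_s=300.0, seek_s=0.008):
    transfer = file_mb / disk_mb_per_s   # time spent actually streaming data
    seeking = extents * seek_s           # one head movement per extent
    return file_mb / (transfer + seeking)

# A 1 GB table file, contiguous vs. heavily fragmented:
print(effective_mb_per_s(1024, 1))      # near the disk's sequential rate
print(effective_mb_per_s(1024, 4000))   # seek time dominates, far lower
```

The model is simplistic (no read-ahead, no RAID effects), but it shows why reducing extent counts, whether via VACUUM FULL rewrites or an online defragmenter, can recover a large fraction of sequential bandwidth.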
[
{
"msg_contents": "Hi, I'm in the process of attempting to tune some slow queries. I came \nacross a scenario where I'm not entirely sure how I \nmight figure out why a node is taking awhile to process. I'm not concerned \nwith the query itself, we are working to figure \nout how we can make it faster. But I was hoping someone might be able to \nprovide some insight into why a hash join is \nsometimes slow.\n\nFor example, running explain (analyze, buffers) with the query, 4/5 times we \nwill see the following:\n\n-> Hash Join (cost=16385.76..103974.09 rows=523954 width=64) (actual \ntime=532.634..4018.678 rows=258648 loops=1)\n Hash Cond: (p.a = c.c)\n Buffers: shared hit=4 read=29147, temp read=12943 written=12923\n -> Seq Scan on p (cost=0.00..38496.88 rows=1503188 width=60) (actual \ntime=0.013..1388.205 rows=1503188 loops=1)\n Buffers: shared hit=1 read=23464\n -> Hash (cost=15382.47..15382.47 rows=57703 width=12) (actual \ntime=527.237..527.237 rows=57789 loops=1)\n Buckets: 4096 Batches: 4 Memory Usage: 632kB\n Buffers: shared hit=3 read=5683, temp read=617 written=771\n\nThe other times, we will see something like this:\n\n-> Hash Join (cost=16385.76..103974.09 rows=523954 width=64) (actual \ntime=587.277..15208.621 rows=258648 loops=1)\n Hash Cond: (p.a = c.c)\n Buffers: shared hit=26 read=29125, temp read=12943 written=12923\n -> Seq Scan on p (cost=0.00..38496.88 rows=1503188 width=60) (actual \ntime=0.013..1525.608 rows=1503188 loops=1)\n Buffers: shared hit=22 read=23443\n -> Hash (cost=15382.47..15382.47 rows=57703 width=12) (actual \ntime=581.638..581.638 rows=57789 loops=1)\n Buckets: 4096 Batches: 4 Memory Usage: 632kB\n Buffers: shared hit=4 read=5682, temp read=617 written=771\n\nDoes anyone have ideas on what might be causing the difference in timing for \nthe hash join node?\n\nThanks\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 2 Jul 2014 13:01:47 +0000 (UTC)",
"msg_from": "Dave Roberge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash Join node sometimes slow"
},
{
"msg_contents": "Dave Roberge <[email protected]> writes:\n> For example, running explain (analyze, buffers) with the query, 4/5 times we \n> will see the following:\n\n> -> Hash Join (cost=16385.76..103974.09 rows=523954 width=64) (actual \n> time=532.634..4018.678 rows=258648 loops=1)\n> Hash Cond: (p.a = c.c)\n> Buffers: shared hit=4 read=29147, temp read=12943 written=12923\n> -> Seq Scan on p (cost=0.00..38496.88 rows=1503188 width=60) (actual \n> time=0.013..1388.205 rows=1503188 loops=1)\n> Buffers: shared hit=1 read=23464\n> -> Hash (cost=15382.47..15382.47 rows=57703 width=12) (actual \n> time=527.237..527.237 rows=57789 loops=1)\n> Buckets: 4096 Batches: 4 Memory Usage: 632kB\n> Buffers: shared hit=3 read=5683, temp read=617 written=771\n\n> The other times, we will see something like this:\n\n> -> Hash Join (cost=16385.76..103974.09 rows=523954 width=64) (actual \n> time=587.277..15208.621 rows=258648 loops=1)\n> Hash Cond: (p.a = c.c)\n> Buffers: shared hit=26 read=29125, temp read=12943 written=12923\n> -> Seq Scan on p (cost=0.00..38496.88 rows=1503188 width=60) (actual \n> time=0.013..1525.608 rows=1503188 loops=1)\n> Buffers: shared hit=22 read=23443\n> -> Hash (cost=15382.47..15382.47 rows=57703 width=12) (actual \n> time=581.638..581.638 rows=57789 loops=1)\n> Buckets: 4096 Batches: 4 Memory Usage: 632kB\n> Buffers: shared hit=4 read=5682, temp read=617 written=771\n\n> Does anyone have ideas on what might be causing the difference in timing for \n> the hash join node?\n\nI'd bet on the extra time being in I/O for the per-batch temp files,\nsince it's hard to see what else would be different if the data were\nidentical in each run. Maybe the kernel is under memory pressure and\nis dropping the file data from in-memory disk cache. 
Or maybe it's\ngoing to disk all the time but the slow runs face more I/O congestion.\n\nPersonally, for a problem of this size I'd increase work_mem enough\nso you don't get multiple batches in the first place.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 02 Jul 2014 10:11:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Join node sometimes slow"
},
{
"msg_contents": "Tom Lane writes:\n> I'd bet on the extra time being in I/O for the per-batch temp files, since it's hard\n> to see what else would be different if the data were identical in each run.\n> Maybe the kernel is under memory pressure and is dropping the file data from\n> in-memory disk cache. Or maybe it's going to disk all the time but the slow runs\n> face more I/O congestion.\n> \n> Personally, for a problem of this size I'd increase work_mem enough so you\n> don't get multiple batches in the first place.\n\nTom thanks for the response. I'm very much a novice in this area - what do you mean by problem of this size, i.e. number of rows, hash memory usage? Does 'shared read' mean either 1) it was read from disk or 2) it was read from in-memory disk cache?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 3 Jul 2014 15:14:13 +0000",
"msg_from": "Dave Roberge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Join node sometimes slow"
}
] |
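The "Batches: 4" line in the plans above is the detail behind Tom's advice: the inner side of the hash join is split into a power-of-two number of batches until one batch's hash table fits in work_mem, and every batch past the first is spilled to temp files and re-read. A simplified sketch of that sizing rule (assumed sizes; the real logic in nodeHash.c accounts for much more):

```python
# Simplified sketch of hash-join batch sizing (not the exact nodeHash.c
# algorithm): double the batch count until one batch fits in work_mem.

def estimate_batches(inner_bytes, work_mem_bytes):
    nbatch = 1
    while inner_bytes / nbatch > work_mem_bytes:
        nbatch *= 2
    return nbatch

work_mem = 1 * 1024 * 1024          # 1MB, the old default setting
inner = int(2.2 * 1024 * 1024)      # assumed ~2.2MB total hash table

print(estimate_batches(inner, work_mem))       # multiple batches -> temp I/O
print(estimate_batches(inner, 4 * work_mem))   # 1 batch -> no temp files
```

With a single batch there are no "temp read/written" buffers at all, which removes the temp-file I/O where the 4s vs 15s variance in the plans above most plausibly hides.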
[
{
"msg_contents": "Hi,\n\nWe've experienced a DB issue yesterday and after checked the log found that the peak sessions is 3000 while the peak DB connections is only around 30. The application is having problem of pulling data but no warnings in DB log as it doesn't exceed max_connections.\n\n\nHow could this happen? How does sessions/connections work in Postgres?\n\nThanks,\nSuya",
"msg_date": "Fri, 4 Jul 2014 01:44:02 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB sessions 100 times of DB connections"
},
{
"msg_contents": "BTW, I'm using the pgbadger report to check for peak connections/sessions.\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Huang, Suya\nSent: Friday, July 04, 2014 11:44 AM\nTo: [email protected]\nSubject: [PERFORM] DB sessions 100 times of DB connections\n\nHi,\n\nWe've experienced a DB issue yesterday and after checked the log found that the peak sessions is 3000 while the peak DB connections is only around 30. The application is having problem of pulling data but no warnings in DB log as it doesn't exceed max_connections.\n\n\nHow could this happen? How does sessions/connections work in Postgres?\n\nThanks,\nSuya",
"msg_date": "Fri, 4 Jul 2014 01:59:51 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB sessions 100 times of DB connections"
},
{
"msg_contents": "On 07/03/2014 06:59 PM, Huang, Suya wrote:\n>\n> BTW, I'm using the pgbadger report to check for peak connections/sessions.\n>\n> *From:*[email protected] \n> [mailto:[email protected]] *On Behalf Of *Huang, Suya\n> *Sent:* Friday, July 04, 2014 11:44 AM\n> *To:* [email protected]\n> *Subject:* [PERFORM] DB sessions 100 times of DB connections\n>\n> Hi,\n>\n> We've experienced a DB issue yesterday and after checked the log found \n> that the peak sessions is 3000 while the peak DB connections is only \n> around 30. The application is having problem of pulling data but no \n> warnings in DB log as it doesn't exceed max_connections.\n>\n> How could this happen? How does sessions/connections work in Postgres?\n>\n>\nAs handy as pgbadger is, I have found that its max-connections values \ndon't pass the \"sniff test\" as it generally shows peak values that \nexceed the configured number of connections. I haven't dug in to find \nout why but could conjecture that the fact that log entries are \ngenerally truncated to the nearest second could cause this sort of thing.\n\nUnexpected connection buildup is often a side-effect of something else \nlike a large resource-intensive query, a query holding locks that \nprevent the other connections' queries from completing or a variety of \nother things.\n\nIf you are looking to solve/prevent the undescribed \"issue\", please \nprovide more detail.\n\n-Steve",
"msg_date": "Tue, 08 Jul 2014 09:28:47 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB sessions 100 times of DB connections"
}
] |
[
{
"msg_contents": "We have two database servers running streaming replication between them:\n\nPrimary server\n==============\nOS: Linux version 2.6.18-371.3.1.el5\nPostgreSQL: 9.1.11\n\n\nHOT standby server\n==================\nOS: Linux version 2.6.32-431.11.2.el6.x86_64\nPostgreSQL: 9.1.11\n\nSince July 1, one SP suddenly runs slowly in HOT STANDBY server. After\ninvestigation, I can narrow the problem to one particular query in SP. The\nweird things are:\n\n(1) The SP takes about 25 seconds to run in HOT STANDBY only, but only 0.5\nsecond in primary\n(2) If I extract the query in the SP and run it in a psql session, it runs\nfine even in HOT STANDBY\n(3) The SP is:\nCREATE OR REPLACE FUNCTION tmp_test (p_beacon_id bigint, p_rpt_start_ts\nbigint, p_rpt_end_ts bigint) RETURNS bigint AS $$\nDECLARE\n --\n v_min_locate_id bigint;\n --\nBEGIN\n --\n SELECT MIN(locate_id) INTO v_min_locate_id\n FROM event_startstop\n WHERE beacon_id = p_beacon_id\n AND locate_id IS NOT NULL\n AND network_timestamp BETWEEN p_rpt_start_ts AND p_rpt_end_ts;\n --\n RETURN v_min_locate_id;\n --\nEXCEPTION\n WHEN OTHERS THEN\n RAISE EXCEPTION 'tmp_test %, %', SQLSTATE, SQLERRM;\nEND\n$$ LANGUAGE 'plpgsql' STABLE;\n\n(4) explain analyze buffers in HOT STANDBY:\nDB=# explain (analyze, buffers true) select * from tmp_test (55627,\n1403989199, 1404187199);\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Function Scan on tmp_test (cost=0.25..0.26 rows=1 width=8) (actual\ntime=25300.000..25300.002 rows=1 loops=1)\n Buffers: shared hit=25357218 read=880466 written=4235\n Total runtime: 25300.067 ms\n(3 rows)\n\n(5) if running the SQL from psql:\nDB=# explain (analyze, buffers true) select min(locate_id) from\nevent_startstop where beacon_id=55627 and locate_id is not null and\nnetwork_timestamp between 1403989199 and 1404187199;\n \nQUERY PLAN 
\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=79.11..79.12 rows=1 width=8) (actual time=0.342..0.342\nrows=1 loops=1)\n Buffers: shared hit=8 read=7\n -> Append (cost=0.00..79.06 rows=20 width=8) (actual time=0.190..0.326\nrows=11 loops=1)\n Buffers: shared hit=8 read=7\n -> Seq Scan on event_startstop (cost=0.00..0.00 rows=1 width=8)\n(actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((locate_id IS NOT NULL) AND (network_timestamp >=\n1403989199) AND (network_timestamp <= 1404187199) AND (beacon_id = 55627))\n -> Bitmap Heap Scan on event_startstop_201406_b54to56k\nevent_startstop (cost=4.71..79.06 rows=19 width=8) (actual\ntime=0.186..0.310 rows=11 loops=1)\n Recheck Cond: ((beacon_id = 55627) AND (network_timestamp >=\n1403989199) AND (network_timestamp <= 1404187199))\n Filter: (locate_id IS NOT NULL)\n Buffers: shared hit=8 read=7\n -> Bitmap Index Scan on\nevent_startstop_201406_b54to56k_bidntslid_idx (cost=0.00..4.71 rows=19\nwidth=0) (actual time=0.170..0.170 rows=11 loops=1)\n Index Cond: ((beacon_id = 55627) AND (network_timestamp\n>= 1403989199) AND (network_timestamp <= 1404187199))\n Buffers: shared hit=5 read=1\n Total runtime: 0.485 ms\n(14 rows)\n\nTime: 159.359 ms\n\n(6) the event_startstop is a parent table with 406 children tables\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/stored-procedure-suddenly-runs-slowly-in-HOT-STANDBY-but-fast-in-primary-tp5810599.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jul 2014 14:04:48 -0700 (PDT)",
"msg_from": "piuschan <[email protected]>",
"msg_from_op": true,
"msg_subject": "stored procedure suddenly runs slowly in HOT STANDBY but fast in\n primary"
},
{
"msg_contents": "piuschan <[email protected]> writes:\n> PostgreSQL: 9.1.11\n\n> Since July 1, one SP suddenly runs slowly in HOT STANDBY server. After\n> investigation, I can narrow the problem to one particular query in SP.\n\n> SELECT MIN(locate_id) INTO v_min_locate_id\n> FROM event_startstop\n> WHERE beacon_id = p_beacon_id\n> AND locate_id IS NOT NULL\n> AND network_timestamp BETWEEN p_rpt_start_ts AND p_rpt_end_ts;\n\n> (6) the event_startstop is a parent table with 406 children tables\n\nTBH, the astonishing part of this report is not that it's slow, but\nthat it ever was not slow. 9.1 is not capable of avoiding scanning\nthe other 405 child tables when given a parameterized query such as\nthis one. (You haven't said, but I suppose that the child tables\nare partitioned on beacon_id and/or network_timestamp, so that knowledge\nof the constants these columns are being compared to is essential for\ndoing constraint exclusion.)\n\nYou could work around that by inserting constants into the query with\nEXECUTE, but a better answer would be to update to 9.2 or later.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 05 Jul 2014 00:31:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored procedure suddenly runs slowly in HOT STANDBY but fast in\n primary"
},
{
"msg_contents": "Hi Tom,\n\nThanks for your reply. As you said, the weirdest part is that the SP ran fine\nin primary server and hot standby server. I did another test over the\nweekend:\n\n(1) restore the production DB dump to an internal server with replication\nset up.\n(2) ran the SP in internal hot standby DB\n(3) surprisingly the SP ran fast in internal hot standby DB\n\nSo what are the possible explanations for these test results? Is there a way\nto tell if the SP scans through all children event_startstop tables as you\nsaid?\n\nThanks a lot,\n\nPius\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/stored-procedure-suddenly-runs-slowly-in-HOT-STANDBY-but-fast-in-primary-tp5810599p5810779.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 7 Jul 2014 11:39:20 -0700 (PDT)",
"msg_from": "piuschan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stored procedure suddenly runs slowly in HOT STANDBY but fast\n in primary"
}
] |
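The EXECUTE workaround Tom Lane mentions can be sketched in PL/pgSQL. This is a hedged illustration only: the table and column names come from the thread, but the function body is not the poster's actual stored procedure. Inlining the constants into the query text (here via `format()` with `%L`, available since 9.1) lets the pre-9.2 planner see the values and apply constraint exclusion, so only the matching child tables are scanned.

```sql
-- Hypothetical sketch of the EXECUTE workaround for pre-9.2 servers.
-- Names are taken from the thread; the function itself is illustrative.
CREATE OR REPLACE FUNCTION min_locate_id(p_beacon_id integer,
                                         p_rpt_start_ts timestamp,
                                         p_rpt_end_ts timestamp)
RETURNS integer AS $$
DECLARE
    v_min_locate_id integer;
BEGIN
    -- format() with %L quotes the values as literals, so the planner
    -- sees constants and can exclude non-matching child partitions.
    EXECUTE format(
        'SELECT MIN(locate_id) FROM event_startstop
          WHERE beacon_id = %L
            AND locate_id IS NOT NULL
            AND network_timestamp BETWEEN %L AND %L',
        p_beacon_id, p_rpt_start_ts, p_rpt_end_ts)
    INTO v_min_locate_id;
    RETURN v_min_locate_id;
END;
$$ LANGUAGE plpgsql;
```

On 9.2 and later the planner can replan parameterized queries with the actual parameter values when that looks cheaper, which is why upgrading is the better answer than the EXECUTE indirection.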
[
{
"msg_contents": "\n\n\n\nHello,\n\nI have a fact table ( table and indexes are bellow ) that will probably get\narround 2 billion rows.\n\n- Can postgresql support such table (this table is the fact table of a\ndatamart -> many join query with dimensions tables) ?\n- If yes, I would like to test (say insert 2 billion test rows), what\nserveur configuration do I need ? How much RAM ?\n- If not, would it be better to think about a cluster or other ?\n- (Have you any idea to optimize this table ?)\n\nThanks a lot !\n\n\nCREATE TABLE observation_fact\n(\n encounter_num integer NOT NULL,\n patient_num integer NOT NULL,\n concept_cd character varying(50) NOT NULL,\n provider_id character varying(50) NOT NULL,\n start_date timestamp without time zone NOT NULL,\n modifier_cd character varying(100) NOT NULL DEFAULT '@'::character\nvarying,\n instance_num integer NOT NULL DEFAULT 1,\n valtype_cd character varying(50),\n tval_char character varying(255),\n nval_num numeric(18,5),\n valueflag_cd character varying(50),\n quantity_num numeric(18,5),\n units_cd character varying(50),\n end_date timestamp without time zone,\n location_cd character varying(50),\n observation_blob text,\n confidence_num numeric(18,5),\n update_date timestamp without time zone,\n download_date timestamp without time zone,\n import_date timestamp without time zone,\n sourcesystem_cd character varying(50),\n upload_id integer,\n text_search_index serial NOT NULL,\n CONSTRAINT observation_fact_pk PRIMARY KEY (patient_num, concept_cd,\nmodifier_cd, start_date, encounter_num, instance_num, provider_id)\n)\nWITH (\n OIDS=FALSE\n);\n\n\nCREATE INDEX of_idx_allobservation_fact\n ON i2b2databeta.observation_fact\n USING btree\n (patient_num, encounter_num, concept_cd COLLATE pg_catalog.\"default\",\nstart_date, provider_id COLLATE pg_catalog.\"default\", modifier_cd COLLATE\npg_catalog.\"default\", instance_num, valtype_cd COLLATE\npg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", 
nval_num,\nvalueflag_cd COLLATE pg_catalog.\"default\", quantity_num, units_cd COLLATE\npg_catalog.\"default\", end_date, location_cd COLLATE pg_catalog.\"default\",\nconfidence_num);\n\n\nCREATE INDEX of_idx_clusteredconcept\n ON i2b2databeta.observation_fact\n USING btree\n (concept_cd COLLATE pg_catalog.\"default\");\n\n\nCREATE INDEX of_idx_encounter_patient\n ON i2b2databeta.observation_fact\n USING btree\n (encounter_num, patient_num, instance_num);\n\n\nCREATE INDEX of_idx_modifier\n ON i2b2databeta.observation_fact\n USING btree\n (modifier_cd COLLATE pg_catalog.\"default\");\n\nCREATE INDEX of_idx_sourcesystem_cd\n ON i2b2databeta.observation_fact\n USING btree\n (sourcesystem_cd COLLATE pg_catalog.\"default\");\n\n\nCREATE INDEX of_idx_start_date\n ON i2b2databeta.observation_fact\n USING btree\n (start_date, patient_num);\n\n\nCREATE INDEX of_idx_uploadid\n ON i2b2databeta.observation_fact\n USING btree\n (upload_id);\n\n\nCREATE UNIQUE INDEX of_text_search_unique\n ON i2b2databeta.observation_fact\n USING btree\n (text_search_index);\n\n\n\n\nHello,I have a fact table ( table and indexes are bellow ) that will probably get arround 2 billion rows.\n- Can postgresql support such table (this table is the fact table of a datamart -> many join query with dimensions tables) ?\n- If yes, I would like to test (say insert 2 billion test rows), what serveur configuration do I need ? 
How much RAM ?\n- If not, would it be better to think about a cluster or other ?\n- (Have you any idea to optimize this table ?)Thanks a lot !\nCREATE TABLE observation_fact( encounter_num integer NOT NULL, patient_num integer NOT NULL,\n concept_cd character varying(50) NOT NULL, provider_id character varying(50) NOT NULL, start_date timestamp without time zone NOT NULL, modifier_cd character varying(100) NOT NULL DEFAULT '@'::character varying,\n instance_num integer NOT NULL DEFAULT 1, valtype_cd character varying(50), tval_char character varying(255), nval_num numeric(18,5), valueflag_cd character varying(50), quantity_num numeric(18,5),\n units_cd character varying(50), end_date timestamp without time zone, location_cd character varying(50), observation_blob text, confidence_num numeric(18,5), update_date timestamp without time zone,\n download_date timestamp without time zone, import_date timestamp without time zone, sourcesystem_cd character varying(50), upload_id integer, text_search_index serial NOT NULL, CONSTRAINT observation_fact_pk PRIMARY KEY (patient_num, concept_cd, modifier_cd, start_date, encounter_num, instance_num, provider_id)\n)WITH ( OIDS=FALSE);CREATE INDEX of_idx_allobservation_fact ON i2b2databeta.observation_fact USING btree (patient_num, encounter_num, concept_cd COLLATE pg_catalog.\"default\", start_date, provider_id COLLATE pg_catalog.\"default\", modifier_cd COLLATE pg_catalog.\"default\", instance_num, valtype_cd COLLATE pg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", nval_num, valueflag_cd COLLATE pg_catalog.\"default\", quantity_num, units_cd COLLATE pg_catalog.\"default\", end_date, location_cd COLLATE pg_catalog.\"default\", confidence_num);\nCREATE INDEX of_idx_clusteredconcept ON i2b2databeta.observation_fact USING btree (concept_cd COLLATE pg_catalog.\"default\");CREATE INDEX of_idx_encounter_patient ON i2b2databeta.observation_fact\n USING btree (encounter_num, patient_num, instance_num);CREATE INDEX 
of_idx_modifier ON i2b2databeta.observation_fact USING btree (modifier_cd COLLATE pg_catalog.\"default\");\nCREATE INDEX of_idx_sourcesystem_cd ON i2b2databeta.observation_fact USING btree (sourcesystem_cd COLLATE pg_catalog.\"default\");CREATE INDEX of_idx_start_date ON i2b2databeta.observation_fact\n USING btree (start_date, patient_num);CREATE INDEX of_idx_uploadid ON i2b2databeta.observation_fact USING btree (upload_id);CREATE UNIQUE INDEX of_text_search_unique ON i2b2databeta.observation_fact\n USING btree (text_search_index);",
"msg_date": "Mon, 7 Jul 2014 15:59:59 +0200",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGSQL 9.3 - billion rows"
},
{
"msg_contents": "Hi Nicolas,\n\nI do believe Postgresql can handle that.\n\nI've worked with tables that have 2 millions rows per day, which give us an\naverage of 700 mi/year.\n\nIt's hard to say how much hardware power you will need, but I would say\ntest it with a server in the cloud, since servers in the cloud are usually\neasily to resize to your needs (both up and down).\n\nBeside that, take a look at this link to fine tune your settings:\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nMy final words are about the table itselft. I've used to create partitions\nfor such large tables. The partitions were by day (I had a \"created_date\"\ncolumn), because that was the most used filtering field used by the people\nthat queried the table. Using partitions make Postgresql look at only the\nsubset of data that is being queried, thus increasing querying performance.\n\nIf you can do that, do it. But be sure you are partitioning the right\ncolumn. Creating partitions that are different from the most part of the\nquerying filters may impact the query performance negatively.\n\nGood luck!\n\n\n2014-07-07 10:59 GMT-03:00 Nicolas Paris <[email protected]>:\n\n> \n> \n> \n> \n> Hello,\n>\n> I have a fact table ( table and indexes are bellow ) that will probably\n> get arround 2 billion rows.\n>\n> - Can postgresql support such table (this table is the fact table of a\n> datamart -> many join query with dimensions tables) ?\n> - If yes, I would like to test (say insert 2 billion test rows), what\n> serveur configuration do I need ? 
How much RAM ?\n> - If not, would it be better to think about a cluster or other ?\n> - (Have you any idea to optimize this table ?)\n>\n> Thanks a lot !\n>\n>\n> CREATE TABLE observation_fact\n> (\n> encounter_num integer NOT NULL,\n> patient_num integer NOT NULL,\n> concept_cd character varying(50) NOT NULL,\n> provider_id character varying(50) NOT NULL,\n> start_date timestamp without time zone NOT NULL,\n> modifier_cd character varying(100) NOT NULL DEFAULT '@'::character\n> varying,\n> instance_num integer NOT NULL DEFAULT 1,\n> valtype_cd character varying(50),\n> tval_char character varying(255),\n> nval_num numeric(18,5),\n> valueflag_cd character varying(50),\n> quantity_num numeric(18,5),\n> units_cd character varying(50),\n> end_date timestamp without time zone,\n> location_cd character varying(50),\n> observation_blob text,\n> confidence_num numeric(18,5),\n> update_date timestamp without time zone,\n> download_date timestamp without time zone,\n> import_date timestamp without time zone,\n> sourcesystem_cd character varying(50),\n> upload_id integer,\n> text_search_index serial NOT NULL,\n> CONSTRAINT observation_fact_pk PRIMARY KEY (patient_num, concept_cd,\n> modifier_cd, start_date, encounter_num, instance_num, provider_id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n>\n> CREATE INDEX of_idx_allobservation_fact\n> ON i2b2databeta.observation_fact\n> USING btree\n> (patient_num, encounter_num, concept_cd COLLATE pg_catalog.\"default\",\n> start_date, provider_id COLLATE pg_catalog.\"default\", modifier_cd COLLATE\n> pg_catalog.\"default\", instance_num, valtype_cd COLLATE\n> pg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", nval_num,\n> valueflag_cd COLLATE pg_catalog.\"default\", quantity_num, units_cd COLLATE\n> pg_catalog.\"default\", end_date, location_cd COLLATE pg_catalog.\"default\",\n> confidence_num);\n>\n>\n> CREATE INDEX of_idx_clusteredconcept\n> ON i2b2databeta.observation_fact\n> USING btree\n> (concept_cd COLLATE 
pg_catalog.\"default\");\n>\n>\n> CREATE INDEX of_idx_encounter_patient\n> ON i2b2databeta.observation_fact\n> USING btree\n> (encounter_num, patient_num, instance_num);\n>\n>\n> CREATE INDEX of_idx_modifier\n> ON i2b2databeta.observation_fact\n> USING btree\n> (modifier_cd COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX of_idx_sourcesystem_cd\n> ON i2b2databeta.observation_fact\n> USING btree\n> (sourcesystem_cd COLLATE pg_catalog.\"default\");\n>\n>\n> CREATE INDEX of_idx_start_date\n> ON i2b2databeta.observation_fact\n> USING btree\n> (start_date, patient_num);\n>\n>\n> CREATE INDEX of_idx_uploadid\n> ON i2b2databeta.observation_fact\n> USING btree\n> (upload_id);\n>\n>\n> CREATE UNIQUE INDEX of_text_search_unique\n> ON i2b2databeta.observation_fact\n> USING btree\n> (text_search_index);\n> \n>\n>\n\nHi Nicolas,I do believe Postgresql can handle that.I've worked with tables that have 2 millions rows per day, which give us an average of 700 mi/year.\nIt's hard to say how much hardware power you will need, but I would say test it with a server in the cloud, since servers in the cloud are usually easily to resize to your needs (both up and down).\nBeside that, take a look at this link to fine tune your settings:https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\nMy final words are about the table itselft. I've used to create partitions for such large tables. The partitions were by day (I had a \"created_date\" column), because that was the most used filtering field used by the people that queried the table. Using partitions make Postgresql look at only the subset of data that is being queried, thus increasing querying performance.\nIf you can do that, do it. But be sure you are partitioning the right column. 
Creating partitions that are different from the most part of the querying filters may impact the query performance negatively.\nGood luck!2014-07-07 10:59 GMT-03:00 Nicolas Paris <[email protected]>:\n\n\n\n\nHello,I have a fact table ( table and indexes are bellow ) that will probably get arround 2 billion rows.\n- Can postgresql support such table (this table is the fact table of a datamart -> many join query with dimensions tables) ?\n- If yes, I would like to test (say insert 2 billion test rows), what serveur configuration do I need ? How much RAM ?\n- If not, would it be better to think about a cluster or other ?\n\n- (Have you any idea to optimize this table ?)Thanks a lot !\nCREATE TABLE observation_fact( encounter_num integer NOT NULL, patient_num integer NOT NULL,\n\n concept_cd character varying(50) NOT NULL, provider_id character varying(50) NOT NULL, start_date timestamp without time zone NOT NULL, modifier_cd character varying(100) NOT NULL DEFAULT '@'::character varying,\n\n instance_num integer NOT NULL DEFAULT 1, valtype_cd character varying(50), tval_char character varying(255), nval_num numeric(18,5), valueflag_cd character varying(50), quantity_num numeric(18,5),\n\n units_cd character varying(50), end_date timestamp without time zone, location_cd character varying(50), observation_blob text, confidence_num numeric(18,5), update_date timestamp without time zone,\n\n download_date timestamp without time zone, import_date timestamp without time zone, sourcesystem_cd character varying(50), upload_id integer, text_search_index serial NOT NULL, CONSTRAINT observation_fact_pk PRIMARY KEY (patient_num, concept_cd, modifier_cd, start_date, encounter_num, instance_num, provider_id)\n\n)WITH ( OIDS=FALSE);CREATE INDEX of_idx_allobservation_fact ON i2b2databeta.observation_fact USING btree (patient_num, encounter_num, concept_cd COLLATE pg_catalog.\"default\", start_date, provider_id COLLATE pg_catalog.\"default\", modifier_cd COLLATE 
pg_catalog.\"default\", instance_num, valtype_cd COLLATE pg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", nval_num, valueflag_cd COLLATE pg_catalog.\"default\", quantity_num, units_cd COLLATE pg_catalog.\"default\", end_date, location_cd COLLATE pg_catalog.\"default\", confidence_num);\nCREATE INDEX of_idx_clusteredconcept ON i2b2databeta.observation_fact USING btree (concept_cd COLLATE pg_catalog.\"default\");CREATE INDEX of_idx_encounter_patient ON i2b2databeta.observation_fact\n\n USING btree (encounter_num, patient_num, instance_num);CREATE INDEX of_idx_modifier ON i2b2databeta.observation_fact USING btree (modifier_cd COLLATE pg_catalog.\"default\");\n\nCREATE INDEX of_idx_sourcesystem_cd ON i2b2databeta.observation_fact USING btree (sourcesystem_cd COLLATE pg_catalog.\"default\");CREATE INDEX of_idx_start_date ON i2b2databeta.observation_fact\n\n USING btree (start_date, patient_num);CREATE INDEX of_idx_uploadid ON i2b2databeta.observation_fact USING btree (upload_id);CREATE UNIQUE INDEX of_text_search_unique ON i2b2databeta.observation_fact\n\n USING btree (text_search_index);",
"msg_date": "Mon, 7 Jul 2014 11:27:51 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL 9.3 - billion rows"
},
{
"msg_contents": "PostgreSQL will definitely be able to handle it.\n\nHowever, besides the schema, an important parameter is the kind of request\nyou will be submitting to PostgreSQL. Reporting queries, low-latency\nqueries ?\n\nYou may found PostgreSQL weak if you mainly submit analytical queries (eg.\nSELECT count(1) from observation_fact might need a long time to complete).\nHowever, based on the indexes you showed, a standard \"SELECT * FROM\nobservation_fact WHERE\" will most likely show decent performances.\n\nDo you think the active set will fit your RAM ? If not, it might be\ninteresting to increase memory.\n\nAFAIK, vanilla PostgreSQL can not scale horizontally (yet), and each query\nis not multithreaded (yet). Hence, you could have a look at Postgres-XC or\nPostgres-XL.\n\nSekine\n\n\n2014-07-07 15:59 GMT+02:00 Nicolas Paris <[email protected]>:\n\n> \n> \n> \n> \n> Hello,\n>\n> I have a fact table ( table and indexes are bellow ) that will probably\n> get arround 2 billion rows.\n>\n> - Can postgresql support such table (this table is the fact table of a\n> datamart -> many join query with dimensions tables) ?\n> - If yes, I would like to test (say insert 2 billion test rows), what\n> serveur configuration do I need ? 
How much RAM ?\n> - If not, would it be better to think about a cluster or other ?\n> - (Have you any idea to optimize this table ?)\n>\n> Thanks a lot !\n>\n>\n> CREATE TABLE observation_fact\n> (\n> encounter_num integer NOT NULL,\n> patient_num integer NOT NULL,\n> concept_cd character varying(50) NOT NULL,\n> provider_id character varying(50) NOT NULL,\n> start_date timestamp without time zone NOT NULL,\n> modifier_cd character varying(100) NOT NULL DEFAULT '@'::character\n> varying,\n> instance_num integer NOT NULL DEFAULT 1,\n> valtype_cd character varying(50),\n> tval_char character varying(255),\n> nval_num numeric(18,5),\n> valueflag_cd character varying(50),\n> quantity_num numeric(18,5),\n> units_cd character varying(50),\n> end_date timestamp without time zone,\n> location_cd character varying(50),\n> observation_blob text,\n> confidence_num numeric(18,5),\n> update_date timestamp without time zone,\n> download_date timestamp without time zone,\n> import_date timestamp without time zone,\n> sourcesystem_cd character varying(50),\n> upload_id integer,\n> text_search_index serial NOT NULL,\n> CONSTRAINT observation_fact_pk PRIMARY KEY (patient_num, concept_cd,\n> modifier_cd, start_date, encounter_num, instance_num, provider_id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n>\n> CREATE INDEX of_idx_allobservation_fact\n> ON i2b2databeta.observation_fact\n> USING btree\n> (patient_num, encounter_num, concept_cd COLLATE pg_catalog.\"default\",\n> start_date, provider_id COLLATE pg_catalog.\"default\", modifier_cd COLLATE\n> pg_catalog.\"default\", instance_num, valtype_cd COLLATE\n> pg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", nval_num,\n> valueflag_cd COLLATE pg_catalog.\"default\", quantity_num, units_cd COLLATE\n> pg_catalog.\"default\", end_date, location_cd COLLATE pg_catalog.\"default\",\n> confidence_num);\n>\n>\n> CREATE INDEX of_idx_clusteredconcept\n> ON i2b2databeta.observation_fact\n> USING btree\n> (concept_cd COLLATE 
pg_catalog.\"default\");\n>\n>\n> CREATE INDEX of_idx_encounter_patient\n> ON i2b2databeta.observation_fact\n> USING btree\n> (encounter_num, patient_num, instance_num);\n>\n>\n> CREATE INDEX of_idx_modifier\n> ON i2b2databeta.observation_fact\n> USING btree\n> (modifier_cd COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX of_idx_sourcesystem_cd\n> ON i2b2databeta.observation_fact\n> USING btree\n> (sourcesystem_cd COLLATE pg_catalog.\"default\");\n>\n>\n> CREATE INDEX of_idx_start_date\n> ON i2b2databeta.observation_fact\n> USING btree\n> (start_date, patient_num);\n>\n>\n> CREATE INDEX of_idx_uploadid\n> ON i2b2databeta.observation_fact\n> USING btree\n> (upload_id);\n>\n>\n> CREATE UNIQUE INDEX of_text_search_unique\n> ON i2b2databeta.observation_fact\n> USING btree\n> (text_search_index);\n> \n>\n>\n\nPostgreSQL will definitely be able to handle it.However,\n besides the schema, an important parameter is the kind of request you \nwill be submitting to PostgreSQL. Reporting queries, low-latency queries\n ?\nYou may found PostgreSQL weak if you mainly submit analytical \nqueries (eg. SELECT count(1) from observation_fact might need a long \ntime to complete). However, based on the indexes you showed, a standard \n \"SELECT * FROM observation_fact WHERE\" will most likely show decent \nperformances.\nDo you think the active set will fit your RAM ? If not, it might be interesting to increase memory.AFAIK,\n vanilla PostgreSQL can not scale horizontally (yet), and each query is \nnot multithreaded (yet). 
Hence, you could have a look at Postgres-XC or \nPostgres-XL.\nSekine2014-07-07 15:59 GMT+02:00 Nicolas Paris <[email protected]>:\n\n\n\n\n\nHello,I have a fact table ( table and indexes are bellow ) that will probably get arround 2 billion rows.\n- Can postgresql support such table (this table is the fact table of a datamart -> many join query with dimensions tables) ?\n- If yes, I would like to test (say insert 2 billion test rows), what serveur configuration do I need ? How much RAM ?\n- If not, would it be better to think about a cluster or other ?\n\n\n- (Have you any idea to optimize this table ?)Thanks a lot !\nCREATE TABLE observation_fact( encounter_num integer NOT NULL, patient_num integer NOT NULL,\n\n\n concept_cd character varying(50) NOT NULL, provider_id character varying(50) NOT NULL, start_date timestamp without time zone NOT NULL, modifier_cd character varying(100) NOT NULL DEFAULT '@'::character varying,\n\n\n instance_num integer NOT NULL DEFAULT 1, valtype_cd character varying(50), tval_char character varying(255), nval_num numeric(18,5), valueflag_cd character varying(50), quantity_num numeric(18,5),\n\n\n units_cd character varying(50), end_date timestamp without time zone, location_cd character varying(50), observation_blob text, confidence_num numeric(18,5), update_date timestamp without time zone,\n\n\n download_date timestamp without time zone, import_date timestamp without time zone, sourcesystem_cd character varying(50), upload_id integer, text_search_index serial NOT NULL, CONSTRAINT observation_fact_pk PRIMARY KEY (patient_num, concept_cd, modifier_cd, start_date, encounter_num, instance_num, provider_id)\n\n\n)WITH ( OIDS=FALSE);CREATE INDEX of_idx_allobservation_fact ON i2b2databeta.observation_fact USING btree (patient_num, encounter_num, concept_cd COLLATE pg_catalog.\"default\", start_date, provider_id COLLATE pg_catalog.\"default\", modifier_cd COLLATE pg_catalog.\"default\", instance_num, valtype_cd COLLATE 
pg_catalog.\"default\", tval_char COLLATE pg_catalog.\"default\", nval_num, valueflag_cd COLLATE pg_catalog.\"default\", quantity_num, units_cd COLLATE pg_catalog.\"default\", end_date, location_cd COLLATE pg_catalog.\"default\", confidence_num);\nCREATE INDEX of_idx_clusteredconcept ON i2b2databeta.observation_fact USING btree (concept_cd COLLATE pg_catalog.\"default\");CREATE INDEX of_idx_encounter_patient ON i2b2databeta.observation_fact\n\n\n USING btree (encounter_num, patient_num, instance_num);CREATE INDEX of_idx_modifier ON i2b2databeta.observation_fact USING btree (modifier_cd COLLATE pg_catalog.\"default\");\n\nCREATE INDEX of_idx_sourcesystem_cd ON i2b2databeta.observation_fact USING btree (sourcesystem_cd COLLATE pg_catalog.\"default\");CREATE INDEX of_idx_start_date ON i2b2databeta.observation_fact\n\n\n USING btree (start_date, patient_num);CREATE INDEX of_idx_uploadid ON i2b2databeta.observation_fact USING btree (upload_id);CREATE UNIQUE INDEX of_text_search_unique ON i2b2databeta.observation_fact\n\n\n USING btree (text_search_index);",
"msg_date": "Mon, 7 Jul 2014 21:40:49 +0200",
"msg_from": "=?UTF-8?Q?S=C3=A9kine_Coulibaly?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL 9.3 - billion rows"
},
{
"msg_contents": "On 07/07/2014 06:59 AM, Nicolas Paris wrote:\n> - Can postgresql support such table (this table is the fact table of a\n> datamart -> many join query with dimensions tables) ?\n\nYes, it can.\n\n> - If yes, I would like to test (say insert 2 billion test rows), what\n> serveur configuration do I need ? How much RAM ?\n> - If not, would it be better to think about a cluster or other ?\n> - (Have you any idea to optimize this table ?)\n\nConsider also trying cstore_fdw: https://github.com/citusdata/cstore_fdw\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 08 Jul 2014 17:06:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL 9.3 - billion rows"
}
] |
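The partitioning approach suggested in this thread can be sketched with the inheritance mechanism available in 9.3 (declarative partitioning only arrived in PostgreSQL 10). This is a hedged, illustrative sketch: partition boundaries, and the partition, function, and trigger names, are assumptions, not something the posters wrote. Note that with inheritance partitioning each child table needs its own indexes.

```sql
-- Hypothetical sketch of inheritance-based partitioning of the fact table
-- by start_date, one child per month. Names and ranges are illustrative.
CREATE TABLE observation_fact_2014_07 (
    CHECK (start_date >= DATE '2014-07-01' AND start_date < DATE '2014-08-01')
) INHERITS (observation_fact);

-- Redirect inserts on the parent into the matching child.
CREATE OR REPLACE FUNCTION observation_fact_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.start_date >= DATE '2014-07-01'
       AND NEW.start_date < DATE '2014-08-01' THEN
        INSERT INTO observation_fact_2014_07 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for start_date %', NEW.start_date;
    END IF;
    RETURN NULL;  -- the row went into the child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER observation_fact_insert_trg
    BEFORE INSERT ON observation_fact
    FOR EACH ROW EXECUTE PROCEDURE observation_fact_insert();
```

With `constraint_exclusion = partition` (the default), a query that filters on `start_date` with constants then scans only the children whose CHECK constraints can match, which is the effect Felipe describes.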
[
{
"msg_contents": "Hi,\n\nI've got a table with GIN index on integer[] type. While doing a query with filter criteria on that column has GIN index created, it's not using index at all, still do the full table scan. Wondering why?\n\nTable is analyzed.\n\ndev=# \\d+ booking_weekly\n Table \"booking_weekly\"\n Column | Type | Modifiers | Storage | Stats target | Description\n--------------+------------------------+-----------+----------+--------------+-------------\ndate | date | | plain | |\nid | character varying(256) | | extended | |\nt_wei | double precision | | plain | |\nbooking_ts | integer[] | | extended | |\nIndexes:\n \"idx_booking_weekly_1_1\" btree (id), tablespace \"tbs_data\"\n \"idx_booking_weekly_1_2\" gin (booking_ts), tablespace \"tbs_data\"\n\ndev=# select * from booking_weekly limit 1;\n-[ RECORD 1\ndate | 2014-05-03\nid | 148f8ecbf40\nt_wei | 0.892571268670041\nbooking_ts | {2446685,4365133,5021137,2772581,1304970,6603422,262511,5635455,4637460,5250119,3037711,6273424,3198590,3581767,6612741,5813035,3074851}\n\n\ndev=# explain analyze select * FROM booking_weekly\nWHERE date = '2014-05-03' AND\nbooking_ts@>array[2446685];\n-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Seq Scan on booking_weekly (cost=10000000000.00..10000344953.64 rows=1288 width=1233) (actual time=0.015..1905.657 rows=1 loops=1)\n-[ RECORD 2 ]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Filter: ((booking_ts @> '{2446685}'::integer[]) AND (date = '2014-05-03'::date))\n-[ RECORD 3 ]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Rows Removed by Filter: 1288402\n-[ RECORD 4 
]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Total runtime: 1905.687 ms\n\nThanks,\nSuya\n\n\n\n\n\n\n\n\n\nHi,\n \nI’ve got a table with GIN index on integer[] type. While doing a query with filter criteria on that column has GIN index created, it’s not using index at all, still do the full table scan. Wondering why?\n \nTable is analyzed.\n \ndev=# \\d+ booking_weekly\n Table \"booking_weekly\"\n Column | Type | Modifiers | Storage | Stats target | Description\n--------------+------------------------+-----------+----------+--------------+-------------\ndate | date | | plain | |\nid | character varying(256) | | extended | |\nt_wei | double precision | | plain | |\nbooking_ts | integer[] | | extended | |\nIndexes:\n \"idx_booking_weekly_1_1\" btree (id), tablespace \"tbs_data\"\n \"idx_booking_weekly_1_2\" gin (booking_ts), tablespace \"tbs_data\"\n \ndev=# select * from booking_weekly limit 1;\n-[ RECORD 1 \ndate | 2014-05-03\nid | 148f8ecbf40\nt_wei | 0.892571268670041\nbooking_ts | {2446685,4365133,5021137,2772581,1304970,6603422,262511,5635455,4637460,5250119,3037711,6273424,3198590,3581767,6612741,5813035,3074851}\n \n \ndev=# explain analyze select * FROM booking_weekly\nWHERE date = '2014-05-03' AND\nbooking_ts@>array[2446685];\n-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Seq Scan on booking_weekly (cost=10000000000.00..10000344953.64 rows=1288 width=1233) (actual time=0.015..1905.657 rows=1 loops=1)\n-[ RECORD 2 ]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Filter: ((booking_ts @> '{2446685}'::integer[]) AND (date = '2014-05-03'::date))\n-[ RECORD 3 
]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Rows Removed by Filter: 1288402\n-[ RECORD 4 ]--------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Total runtime: 1905.687 ms\n \nThanks,\nSuya",
"msg_date": "Fri, 11 Jul 2014 04:14:38 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIN index not used"
},
{
"msg_contents": "Huang, Suya <[email protected]> wrote:\n\n> Hi,\n> \n> \n> \n> I’ve got a table with GIN index on integer[] type. While doing a query with\n> filter criteria on that column has GIN index created, it’s not using index at\n> all, still do the full table scan. Wondering why?\n\nTry to add an index on the date-column.\n\nBtw.: works for me:\n\n,----\n| test=*# \\d foo;\n| Table \"public.foo\"\n| Column | Type | Modifiers\n| --------+-----------+-----------\n| id | integer |\n| ts | integer[] |\n| Indexes:\n| \"idx_foo\" gin (ts)\n|\n| test=*# set enable_seqscan to off;\n| SET\n| Time: 0,049 ms\n| test=*# select * from foo;\n| id | ts\n| ----+------------\n| 1 | {1,2,3}\n| 2 | {10,20,30}\n| (2 rows)\n|\n| Time: 0,230 ms\n| test=*# explain select * from foo where ts @> array[2];\n| QUERY PLAN\n| ----------------------------------------------------------------------\n| Bitmap Heap Scan on foo (cost=8.00..12.01 rows=1 width=36)\n| Recheck Cond: (ts @> '{2}'::integer[])\n| -> Bitmap Index Scan on idx_foo (cost=0.00..8.00 rows=1 width=0)\n| Index Cond: (ts @> '{2}'::integer[])\n| (4 rows)\n`----\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 06:44:52 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used"
},
{
"msg_contents": "Andreas Kretschmer <[email protected]> writes:\n> Huang, Suya <[email protected]> wrote:\n>> I’ve got a table with GIN index on integer[] type. While doing a query with\n>> filter criteria on that column has GIN index created, it’s not using index at\n>> all, still do the full table scan. Wondering why?\n\n> Btw.: works for me:\n\nYeah, me too:\n\nregression=# create table booking_weekly(booking_ts int[]);\nCREATE TABLE\nregression=# create index on booking_weekly using gin (booking_ts);\nCREATE INDEX\nregression=# explain select * from booking_weekly where booking_ts@>array[2446685];\n QUERY PLAN \n--------------------------------------------------------------------------------------------\n Bitmap Heap Scan on booking_weekly (cost=8.05..18.20 rows=7 width=32)\n Recheck Cond: (booking_ts @> '{2446685}'::integer[])\n -> Bitmap Index Scan on booking_weekly_booking_ts_idx (cost=0.00..8.05 rows=7 width=0)\n Index Cond: (booking_ts @> '{2446685}'::integer[])\n Planning time: 0.862 ms\n(5 rows)\n\nWhat PG version is this? What non-default planner parameter settings are\nyou using? (Don't say \"none\", because I can see you've got enable_seqscan\nturned off.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 00:56:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> What PG version is this? What non-default planner parameter settings are\n> you using? (Don't say \"none\", because I can see you've got enable_seqscan\n> turned off.)\n\nLOL, right ;-)\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 07:01:23 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used"
},
{
"msg_contents": "-----Original Message-----\r\nFrom: Tom Lane [mailto:[email protected]] \r\nSent: Friday, July 11, 2014 2:56 PM\r\nTo: Andreas Kretschmer\r\nCc: Huang, Suya; [email protected]\r\nSubject: Re: [PERFORM] GIN index not used\r\n\r\nAndreas Kretschmer <[email protected]> writes:\r\n> Huang, Suya <[email protected]> wrote:\r\n>> I’ve got a table with GIN index on integer[] type. While doing a \r\n>> query with filter criteria on that column has GIN index created, it’s \r\n>> not using index at all, still do the full table scan. Wondering why?\r\n\r\n> Btw.: works for me:\r\n\r\nYeah, me too:\r\n\r\nregression=# create table booking_weekly(booking_ts int[]); CREATE TABLE regression=# create index on booking_weekly using gin (booking_ts); CREATE INDEX regression=# explain select * from booking_weekly where booking_ts@>array[2446685];\r\n QUERY PLAN \r\n--------------------------------------------------------------------------------------------\r\n Bitmap Heap Scan on booking_weekly (cost=8.05..18.20 rows=7 width=32)\r\n Recheck Cond: (booking_ts @> '{2446685}'::integer[])\r\n -> Bitmap Index Scan on booking_weekly_booking_ts_idx (cost=0.00..8.05 rows=7 width=0)\r\n Index Cond: (booking_ts @> '{2446685}'::integer[]) Planning time: 0.862 ms\r\n(5 rows)\r\n\r\nWhat PG version is this? What non-default planner parameter settings are you using? 
(Don't say \"none\", because I can see you've got enable_seqscan turned off.)\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\n\r\n\r\n\r\n\r\nJust found out something here http://www.postgresql.org/message-id/[email protected] \r\n\r\nSo I dropped the index and recreated it by specifying: using gin(terms_ts gin__int_ops), and the index works.\r\n\r\nMy PG version is 9.3.4, non-default planner settings:\r\nenable_mergejoin = off\r\nenable_nestloop = off\r\n\r\nenable_seqscan is turned off for the session while trying to figure out why the GIN index is not used.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 05:26:09 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIN index not used"
},
{
"msg_contents": "\"Huang, Suya\" <[email protected]> writes:\n> Just found out something here http://www.postgresql.org/message-id/[email protected] \n> So I dropped the index and recreate it by specifying: using gin(terms_ts gin__int_ops) and the index works.\n\nOh, you're using contrib/intarray?\n\nPursuant to the thread you mention above, we removed intarray's <@ and @>\noperators (commit 65e758a4d3) but then reverted that (commit 156475a589)\nbecause of backwards-compatibility worries. It doesn't look like anything\ngot done about it since then. Perhaps the extension upgrade\ninfrastructure would offer a solution now.\n\n> My PG version is 9.3.4, none-default planner settings:\n> enable_mergejoin = off\n> enable_nestloop = off\n\n[ raised eyebrow... ] It's pretty hard to see how those would be\na good idea. Not all problems are best solved by hash joins.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 01:43:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, July 11, 2014 3:43 PM\nTo: Huang, Suya\nCc: Andreas Kretschmer; [email protected]\nSubject: Re: [PERFORM] GIN index not used\n\n\"Huang, Suya\" <[email protected]> writes:\n> Just found out something here \n> http://www.postgresql.org/message-id/[email protected]\n> So I dropped the index and recreate it by specifying: using gin(terms_ts gin__int_ops) and the index works.\n\nOh, you're using contrib/intarray?\n\nPursuant to the thread you mention above, we removed intarray's <@ and @> operators (commit 65e758a4d3) but then reverted that (commit 156475a589) because of backwards-compatibility worries.  It doesn't look like anything got done about it since then.  Perhaps the extension upgrade infrastructure would offer a solution now.\n\n> My PG version is 9.3.4, none-default planner settings:\n> enable_mergejoin = off\n> enable_nestloop = off\n\n[ raised eyebrow... ]  It's pretty hard to see how those would be a good idea.  Not all problems are best solved by hash joins.\n\n\t\t\tregards, tom lane\n\n\n\nAbout contrib/intarray, do I have other choices besides that one?\n\n\nAbout the join, yeah, in our testing for DW-like queries, hash join did improve the performance greatly...\n\nThanks,\nSuya\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jul 2014 05:47:51 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIN index not used"
},
{
"msg_contents": "> -----Original Message-----\n\nIt is hard to read your message. You should indicate the quoted lines.\nPlease fix your email client.\n\n> About the contrib/intarray, do I have other choices not using that one?\n\ninteger[] and contrib/intarray are two different data types.\n\n> About the join, yeah, in our testing for DW-like queries, hash join does improved the performance greatly...\n\nThen, it should be chosen by the planner. I doubt it is the best\nchoice in all cases. It is not advised to set these parameters\nglobally.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jul 2014 11:31:15 +0300",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used"
}
] |
[
{
"msg_contents": "I am using a Pentaho process to access the database and select the appropriate information to update the DB tables and records. I am trying to select the previous subscription key in order to update the fact table for any records that have the previous key to have the current subscription key. This query is intended to use the current subscription key and subscription info to select the previous subscription key to allow for the information to be updated. I would like to optimize the query to execute more efficiently.\n\nThe database table has about 60K records in it and when I run an explain analyze it indicates that the query optimizer chooses to execute a bitmap heap scan, this seems like an inefficient method for this query.\n\nQuery:\nSelect subscription_key as prev_sub_key\nfrom member_subscription_d\nwhere subscription_value = '[email protected]'\nand newsletter_nme = 'newsletter_member'\nand subscription_platform = 'email'\nand version = (select version -1 as mtch_vers\n               from member_subscription_d\n               where subscription_key = 4037516)\n\nCurrent Data in Database for this address:\n subscription_key | version |       date_from        |          date_to           |  newsletter_nme   | subscription_platform | subscription_value | subscription_status | list_status | current_status | unsubscribetoken |    transaction_date    | newsletter_sts\n------------------+---------+------------------------+----------------------------+-------------------+-----------------------+--------------------+---------------------+-------------+----------------+------------------+------------------------+----------------\n          4001422 |       1 | 2000-02-09 00:00:00-05 | 2014-04-19 09:57:24-04     | newsletter_member | email                 | [email protected]     | VALID               | pending     | f              |                  | 2000-02-09 00:00:00-05 |              2\n          4019339 |       2 | 2014-04-19 09:57:24-04 | 2014-06-04 12:27:34-04     | newsletter_member | email                 | [email protected]     | VALID               | subscribe   | f              |                  | 2014-04-19 09:57:24-04 |              1\n          4037516 |       3 | 2014-06-04 12:27:34-04 | 2199-12-31 23:59:59.999-05 | newsletter_member | email                 | [email protected]     | VALID               | subscribe   | t              |                  | 2014-06-04 12:27:34-04 |              1\n(3 rows)\n\nSystem information:\nPostgres Version: 9.2\nOS : Linux cmprodpgsql1 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\nPentaho: 5.0.1-stable\n\npostgresql.conf\ncheckpoint_segments = '8'\ndata_directory = '/var/lib/postgresql/9.2/main'\ndatestyle = 'iso, mdy'\ndefault_text_search_config = 'pg_catalog.english'\neffective_cache_size = '2GB'\nexternal_pid_file = '/var/run/postgresql/9.2-main.pid'\nhba_file = '/etc/postgresql/9.2/main/pg_hba.conf'\nident_file = '/etc/postgresql/9.2/main/pg_ident.conf'\nlisten_addresses = '*'\nlog_line_prefix = '%t '\nmax_connections = '200'\nmax_wal_senders = '3'\nport = 5432\nshared_buffers = '1024MB'\nssl = off\nssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'\nssl_key_file = '/etc/ssl/certs/ssl-cert-snakeoil.key'\ntimezone = 'localtime'\nunix_socket_directory = '/var/run/postgresql'\nwal_keep_segments = '8'\nwal_level = 'hot_standby'\nwork_mem = '100MB'",
"msg_date": "Sun, 13 Jul 2014 22:55:42 +0000",
"msg_from": "\"Magers, James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Performance question"
},
{
"msg_contents": "On 14.7.2014 00:55, Magers, James wrote:\n> I am using a Pentaho process to access the database and select the\n> appropriate information to update the DB tables and records. I am\n> trying to select the previous subscription key in order to update the\n> factable for any records that have the previous key to have the current\n> subscription key. This query is intended to use the current subscription\n> key and subscription info to select the previous subscription key to\n> allow for the information to be updated. I would like to optimize the\n> query to execute more efficiently.\n> \n> The database table has about 60K records in it and when I run an explain\n> anaylyze it indicates that the query optimizer chooses to execute a\n> bitmap heap scan, this seems like an inefficient method for this query.\n\nWhy do you think it's inefficient? The planner thinks it's efficient,\nfor some reason. And it's impossible to say if that's a good decision,\nbecause we don't know (a) the explain plan, and (b) structure of the\ntable involved (indexes, ...).\n\nPlease post the explain analyze output to explain.depesz.com and post\nthe link here (it's more readable than posting it here directly).\n\nAlso, please do this:\n\n SELECT relname, relpages, reltuples\n FROM pg_class WHERE relname = 'member_subscription_d'\n\nand this\n\n \\d member_subscription_d\n\nand post the results here.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 02:58:08 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Tomas,\n\nThank you for your feedback. I am attaching the requested information. While I do not think the query is necessarily inefficient, I believe a sequence scan would be more efficient. \n\n\\d member_subscription_d\n\n Table \"public.member_subscription_d\"\n Column | Type | Modifiers\n-----------------------+--------------------------+-----------------------------------------------------------------\n subscription_key | bigint | not null default nextval('subscription_id_seq'::regclass)\n version | integer | not null\n date_from | timestamp with time zone |\n date_to | timestamp with time zone |\n newsletter_nme | character varying(50) |\n subscription_platform | character varying(50) |\n subscription_value | character varying(255) |\n subscription_status | character varying(100) |\n list_status | character varying(25) |\n current_status | boolean |\n unsubscribetoken | character varying(200) |\n transaction_date | timestamp with time zone |\n newsletter_sts | integer |\nIndexes:\n \"member_subscription_key\" PRIMARY KEY, btree (subscription_key)\n \"idx_member_subscription_d_list_status\" btree (list_status)\n \"idx_member_subscription_d_newsletter_nme\" btree (newsletter_nme)\n \"idx_member_subscription_d_subscription_status\" btree (subscription_status)\n \"idx_member_subscription_d_subscription_value\" btree (subscription_value)\n \"idx_member_subscription_d_tk\" btree (subscription_key)\nReferenced by:\n TABLE \"member_recipient_f\" CONSTRAINT \"member_subscription_d_recipient_f_fk\" FOREIGN KEY (subscription_key) REFERENCES member_subscription_d(subscription_key)\n\n\n\npgahq_datamart-# FROM pg_class WHERE relname = 'member_subscription_d';\n relname | relpages | reltuples\n-----------------------+----------+-----------\n member_subscription_d | 1383 | 63012\n(1 row)\n\n\nExplain output:\nhttp://explain.depesz.com/s/OVK\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 02:20:49 +0000",
"msg_from": "\"Magers, James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Magers, James, 14.07.2014 04:20:\n\n> Thank you for your feedback. I am attaching the requested information. \n> While I do not think the query is necessarily inefficient, I believe a sequence scan would be more efficient. \n\nYou can try\n\nset enable_indexscan = off;\nset enable_bitmapscan = off;\n\nand then run your query. \n\nBut I would be very surprised if a seq scan (which reads through the whole table) was faster than those 4ms you have now\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 09:07:10 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Thomas,\r\n\r\nThank you. I executed the query this morning after disabling the scan types. I am including links to explain.depesz output for each of the three variations that I executed. \r\n\r\nindexscan and bitmapscan off: http://explain.depesz.com/s/sIx\r\nseqscan and bitmapscan off: http://explain.depesz.com/s/GfM\r\nbitmapscan off: http://explain.depesz.com/s/3wna\r\n\r\n\r\nThank you,\r\nJames\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 13:18:12 +0000",
"msg_from": "\"Magers, James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Magers, James, 14.07.2014 15:18:\n> Thank you. I executed the query this morning after disabling the scan types. \n> I am including links to explain.depesz output for each of the three variations that I executed. \n> \n> indexscan and bitmapscan off: http://explain.depesz.com/s/sIx\n> seqscan and bitmapscan off: http://explain.depesz.com/s/GfM\n> bitmapscan off: http://explain.depesz.com/s/3wna\n> \n\nSo the original query (using an \"Index Scan\" + \"Bitmap Index Scan\") is indeed the most efficient one: 4ms vs. 39ms vs. 64ms \n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 15:48:03 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Thomas,\r\n\r\nI would have to agree that the current results do indicate that. However, I have run this explain analyze multiple times and the timing varies from about 4ms to 35ms using the Bitmap Heap Scan. Here is an explain plan from Thursday of last week that shows about 21ms. Part of the issue in trying to isolate whether the query can be faster is that once the data is cached, any way that the query is executed appears to be quicker.\r\n\r\nhttp://explain.depesz.com/s/SIX1\r\n\r\nThank you,\r\nJames\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 14:00:42 +0000",
"msg_from": "\"Magers, James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "On 14 Červenec 2014, 16:00, Magers, James wrote:\n> Thomas,\n>\n> I would have to agree that the current results do indicate that. However,\n> I have run this explain analyze multiple times and the timing varies from\n> about 4ms to 35ms using the Bitmap Heap Scan.  Here is an explain plan\n> from Thursday of last week that shows about 21ms.  Part of the issue in\n> trying to isolate if the query can be faster is that once the data is\n> cached any way that the query is executed appears to be quicker.\n>\n> http://explain.depesz.com/s/SIX1\n\nI think that judging the performance based on this limited number of\nsamples is futile, especially when the plans are this fast. The\nmeasurements are easily influenced by other tasks running on the system,\nOS process scheduling etc. Or it might be because of memory pressure on\nthe system, causing the important data to be evicted from the page cache\n(and thus I/O for queries accessing it).\n\nThis might be the reason why you saw higher timings, and it's impossible\nto say based solely on an explain plan from a single execution. To get\nmeaningful numbers it's necessary to execute the query repeatedly, to\neliminate caching effects. But the question is whether these caching\neffects will happen on production or not. (Because what if you tweak the\nconfiguration to get the best plan based on the assumption that everything is\ncached, when it won't be in practice?)\n\nThat being said, the only plan that's actually faster than the bitmap\nindex scan (which you believe is inefficient) is this one\n\n  http://explain.depesz.com/s/3wna\n\nThe reason why it's not selected by the optimizer is that the cost is\nestimated to be 20.60, while the bitmap index scan cost is estimated as\n20.38. So the optimizer decides that 20.38 is lower than 20.60, and thus\nchooses the bitmap index scan.\n\nWhat you may do is tweak the cost constants, described here\n\nwww.postgresql.org/docs/9.4/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n\nYou need to increase the bitmap index scan cost estimate, so that it's more\nexpensive than the index scan. I'd guess that increasing the\ncpu_tuple_cost and/or cpu_index_tuple_cost a bit should do the trick.\n\nregards\nTomas\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 16:48:05 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Tomas,\r\n\r\nThank you for the recommendation.  In this case, the bitmap scan runs quite quickly, however in production where data may or may not be cached and at higher volumes I am trying to ensure the process will continue to execute efficiently and reduce the impact of the process on other processes running against the database. \r\n\r\nMy assessment is based on my experiences with the scans.  Does your experience provide you with a different assessment of the scan types and how efficient they may be?\r\n\r\nThank you,\r\nJames\r\n\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 16:02:13 +0000",
"msg_from": "\"Magers, James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "On 14 Červenec 2014, 18:02, Magers, James wrote:\n> Tomas,\n>\n> Thank you for the recommendation.  In this case, The bitmap scan runs\n> quite quickly, however in production were data may or may not be cached\n> and at higher volumes I am trying to ensure the process will continue to\n> execute efficiently and reduce the impact of the process on other\n> processes running against the database.\n\nThat's why it's important to do the testing with a representative amount of\ndata. Testing the queries on a significantly reduced dataset is pointless,\nbecause the optimizer will make different decisions.\n\n> My assessment is based on my experiences with the scans.  Does your\n> experience provide you with a different assessment of the scan types and\n> how efficient they may be?\n\nNo. Because I don't have your data. And it seems that your assessment is\nbased on experience with a dataset that's very different from your expected\nproduction dataset, which means the experience is not directly applicable.\nThe optimizer considers the size of the dataset when choosing the plan.\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 18:28:50 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance question"
},
{
"msg_contents": "Thank you Tomas.  I did execute the queries against a dataset that was representative of what we expect the production dataset to have.  By higher volume I meant more transactions happening against the data.  We would expect the data size to increase over time, and when we executed against a dataset that was about 4x larger, the index scan was selected to perform the lookup versus the bitmap heap scan.  The scans of both the smaller and larger datasets were returning similar times between the two groups of tests.  This is part of the reason that I was thinking that the bitmap heap scan may not be as efficient, since 4 times the data returned in just a little more time using the index scan.\r\n\r\n\r\nThank you,\r\nJames\r\n\r\n-----Original Message-----\r\nFrom: Tomas Vondra [mailto:[email protected]] \r\nSent: Monday, July 14, 2014 12:29 PM\r\nTo: Magers, James\r\nCc: Tomas Vondra; Thomas Kellerer; [email protected]\r\nSubject: RE: [PERFORM] Query Performance question\r\n\r\nOn 14 Červenec 2014, 18:02, Magers, James wrote:\r\n> Tomas,\r\n>\r\n> Thank you for the recommendation.  In this case, The bitmap scan runs\r\n> quite quickly, however in production were data may or may not be cached\r\n> and at higher volumes I am trying to ensure the process will continue to\r\n> execute efficiently and reduce the impact of the process on other\r\n> processes running against the database.\r\n\r\nThat's why it's important to do the testing with representative amount of\r\ndata. Testing the queries on significantly reduced dataset is pointless,\r\nbecause the optimizer will do different decisions.\r\n\r\n> My assessment is based on my experiences with the scans.  Does your\r\n> experience provide you with a different assessment of the scan types and\r\n> how efficient they may be?\r\n\r\nNo. Because I don't have your data. 
And it seems that your assessment is\r\nbased on experience with dataset that's very different from your expected\r\nproduction dataset, which means the experience is not directly applicable.\r\nThe optimizer considers the size of the dataset when choosing the plan.\r\n\r\nregards\r\nTomas\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 14 Jul 2014 17:15:54 +0000",
"msg_from": "\"Magers, James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Performance question"
}
] |
[
{
"msg_contents": "Is there any way that I can build multiple indexes on one table without having to scan the table multiple times? For small tables, that's probably not an issue, but if I have a 500 GB table that I need to create 6 indexes on, I don't want to read that table 6 times.\nNothing I could find in the manual other than reindex, but that's not helping, since it only rebuilds indexes that are already there and I don't know if that reads the table once or multiple times. If I could create indexes inactive and then run reindex, which then reads the table once, I would have a solution. But that doesn't seem to exist either.\n\nbest regards,\nchris\n-- \nchris ruprecht\ndatabase grunt and bit pusher extraordinaíre\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Jul 2014 18:47:58 -0400",
"msg_from": "Chris Ruprecht <[email protected]>",
"msg_from_op": true,
"msg_subject": "Building multiple indexes on one table."
},
{
"msg_contents": "On Thu, Jul 17, 2014 at 7:47 PM, Chris Ruprecht <[email protected]> wrote:\n> Is there any way that I can build multiple indexes on one table without having to scan the table multiple times? For small tables, that's probably not an issue, but if I have a 500 GB table that I need to create 6 indexes on, I don't want to read that table 6 times.\n> Nothing I could find in the manual other than reindex, but that's not helping, since it only rebuilds indexes that are already there and I don't know if that reads the table once or multiple times. If I could create indexes inactive and then run reindex, which then reads the table once, I would have a solution. But that doesn't seem to exist either.\n\nJust build them with separate but concurrent connections, and the\nscans will be synchronized so it will be only one.\n\nBtw, reindex rebuilds one index at a time, so what I do is issue\nseparate reindex for each index in parallel, to avoid the repeated\nscans as well.\n\nJust make sure you've got the I/O and CPU capacity for it (you'll be\nwriting many indexes at once, so there is a lot of I/O).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Jul 2014 20:21:42 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes on one table."
},
{
"msg_contents": ">Von: [email protected] [[email protected]]" im Auftrag von "Claudio Freire [[email protected]]\n>Gesendet: Freitag, 18. Juli 2014 01:21\n>An: Chris Ruprecht\n>Cc: [email protected]\n>Betreff: Re: [PERFORM] Building multiple indexes on one table.\n>\n>On Thu, Jul 17, 2014 at 7:47 PM, Chris Ruprecht <[email protected]> wrote:\n>> Is there any way that I can build multiple indexes on one table without having to scan the table multiple times? For small tables, that's probably not an issue, but if I have a 500 GB table that I need to create 6 indexes on, I don't want to read that table 6 times.\n>> Nothing I could find in the manual other than reindex, but that's not helping, since it only rebuilds indexes that are already there and I don't know if that reads the table once or multiple times. If I could create indexes inactive and then run reindex, which then reads the table once, I would have a solution. But that doesn't seem to exist either.\n>\n>Just build them with separate but concurrent connections, and the\n>scans will be synchronized so it will be only one.\n>\n>Btw, reindex rebuilds one index at a time, so what I do is issue\n>separate reindex for each index in parallel, to avoid the repeated\n>scans as well.\n>\n>Just make sure you've got the I/O and CPU capacity for it (you'll be\n>writing many indexes at once, so there is a lot of I/O).\n\nIndex creation on large tables are mostly CPU bound as long as no swap occurs.\nI/O may be an issue when all your indexes are similar; e.g. all on single int4 columns.\nin other cases the writes will not all take place concurrently.\nTo reduce I/O due to swap, you can consider increasing maintenance_work_mem on the connextions/sessionns\nthat build the indexes.\n\nregards,\n\nMarc Mamin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Jul 2014 19:40:02 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes on one table."
},
{
"msg_contents": "On Wed, Jul 23, 2014 at 4:40 PM, Marc Mamin <[email protected]> wrote:\n>>On Thu, Jul 17, 2014 at 7:47 PM, Chris Ruprecht <[email protected]> wrote:\n>>> Is there any way that I can build multiple indexes on one table without having to scan the table multiple times? For small tables, that's probably not an issue, but if I have a 500 GB table that I need to create 6 indexes on, I don't want to read that table 6 times.\n>>> Nothing I could find in the manual other than reindex, but that's not helping, since it only rebuilds indexes that are already there and I don't know if that reads the table once or multiple times. If I could create indexes inactive and then run reindex, which then reads the table once, I would have a solution. But that doesn't seem to exist either.\n>>\n>>Just build them with separate but concurrent connections, and the\n>>scans will be synchronized so it will be only one.\n>>\n>>Btw, reindex rebuilds one index at a time, so what I do is issue\n>>separate reindex for each index in parallel, to avoid the repeated\n>>scans as well.\n>>\n>>Just make sure you've got the I/O and CPU capacity for it (you'll be\n>>writing many indexes at once, so there is a lot of I/O).\n>\n> Index creation on large tables are mostly CPU bound as long as no swap occurs.\n> I/O may be an issue when all your indexes are similar; e.g. 
all on single int4 columns.\n> in other cases the writes will not all take place concurrently.\n> To reduce I/O due to swap, you can consider increasing maintenance_work_mem on the connextions/sessionns\n> that build the indexes.\n\nUsually there will always be swap, unless you've got toy indexes.\n\nBut swap I/O is all sequential I/O, with a good readahead setting\nthere should be no problem.\n\nIt's the final writing step that can be a bottleneck if you have a\nlame I/O system and try to push 5 or 6 indexes at once.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Jul 2014 16:49:56 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes on one table."
},
{
"msg_contents": "Your question: Is there any way that I can build multiple indexes on one\ntable without having to scan the table multiple times?\n\nMy answer: I don't think so. Since each index has a different indexing\nrule, it will analyze the same table in a different way. I've built indexes\non a 100GB table recently and it didn't take me too much time (Amazon EC2\nwith 8 CPU cores / 70 GB RAM). I don't remember how much time it took, but\nthat's a good sign right ;-) ? Painful jobs are always remembered... (ok,\nthe hardware helped a lot).\n\nSo, my advice is: get yourself a good maintenance window and just build\nindexes, remember that they will help a lot of people querying this table.\n\n\n2014-07-23 16:49 GMT-03:00 Claudio Freire <[email protected]>:\n\n> On Wed, Jul 23, 2014 at 4:40 PM, Marc Mamin <[email protected]> wrote:\n> >>On Thu, Jul 17, 2014 at 7:47 PM, Chris Ruprecht <[email protected]>\n> wrote:\n> >>> Is there any way that I can build multiple indexes on one table\n> without having to scan the table multiple times? For small tables, that's\n> probably not an issue, but if I have a 500 GB table that I need to create 6\n> indexes on, I don't want to read that table 6 times.\n> >>> Nothing I could find in the manual other than reindex, but that's not\n> helping, since it only rebuilds indexes that are already there and I don't\n> know if that reads the table once or multiple times. If I could create\n> indexes inactive and then run reindex, which then reads the table once, I\n> would have a solution. 
But that doesn't seem to exist either.\n> >>\n> >>Just build them with separate but concurrent connections, and the\n> >>scans will be synchronized so it will be only one.\n> >>\n> >>Btw, reindex rebuilds one index at a time, so what I do is issue\n> >>separate reindex for each index in parallel, to avoid the repeated\n> >>scans as well.\n> >>\n> >>Just make sure you've got the I/O and CPU capacity for it (you'll be\n> >>writing many indexes at once, so there is a lot of I/O).\n> >\n> > Index creation on large tables are mostly CPU bound as long as no swap\n> occurs.\n> > I/O may be an issue when all your indexes are similar; e.g. all on\n> single int4 columns.\n> > in other cases the writes will not all take place concurrently.\n> > To reduce I/O due to swap, you can consider increasing\n> maintenance_work_mem on the connextions/sessionns\n> > that build the indexes.\n>\n> Usually there will always be swap, unless you've got toy indexes.\n>\n> But swap I/O is all sequential I/O, with a good readahead setting\n> there should be no problem.\n>\n> It's the final writing step that can be a bottleneck if you have a\n> lame I/O system and try to push 5 or 6 indexes at once.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 23 Jul 2014 17:19:13 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Building multiple indexes on one table."
}
] |
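Claudio's advice in the thread above, building the indexes from separate concurrent connections so that synchronized sequential scans share a single pass over the table, can be sketched as follows. This is a minimal sketch, not code from the thread: `run_sql` is a stand-in for whatever driver call opens a connection and executes one statement, and the table/index names are invented for illustration.

```python
# Issue each CREATE INDEX (or REINDEX) from its own worker, one connection
# per worker, so all builds can ride the same synchronized heap scan.
from concurrent.futures import ThreadPoolExecutor

def build_indexes_in_parallel(statements, run_sql, max_workers=6):
    """Run each DDL statement concurrently; returns run_sql's results in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Each future corresponds to one index build running in its own session.
        futures = [pool.submit(run_sql, stmt) for stmt in statements]
        return [f.result() for f in futures]  # re-raises any failure

# Hypothetical index definitions, for illustration only.
statements = [
    "CREATE INDEX idx_accounts_owner ON accounts (owner_id)",
    "CREATE INDEX idx_accounts_created ON accounts (created_at)",
    "CREATE INDEX idx_accounts_status ON accounts (status)",
]
```

With a real driver, `run_sql` would open its own session per call; as Claudio notes, the concurrent scans are then synchronized so the heap is read roughly once, while write I/O and CPU scale with the number of indexes.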
[
{
"msg_contents": "Hello,\n\nI'm working on Postgres 9.3.4 for a project.\n\nWe are using Scala, Akka and JDBC to insert data in the database, we have\naround 25M insert to do which are basically lines from 5000 files. We issue\na DELETE according to the file (mandatory) and then a COPY each 1000 lines\nof that file.\n\n*DELETE request :* DELETE FROM table WHERE field1 = ? AND field2 = ?;\n*COPY request :* COPY table FROM STDIN WITH CSV\n\nWe have indexes on our database that we can't delete to insert our data.\n\nWhen we insert the data there is some kind of freezes on the databases\nbetween requests. Freezes occur about every 20 seconds.\n\nHere is a screenshot\n<http://tof.canardpc.com/view/c42e69c0-d776-4f93-a8a3-8713794a1a07.jpg>from\nyourkit.\n\nWe tried different solutions:\n\n - 1 table to 5 tables to reduces lock contention\n - fillfactor on indexes\n - commit delay\n - fsync to off (that helped but we can't do this)\n\nWe mainly want to know why this is happening because it slowing the insert\ntoo much for us.\n\nHello,I'm working on Postgres 9.3.4 for a project.We are using Scala, Akka and JDBC to insert data in the database, we have around 25M insert to do which are basically lines from 5000 files. We issue a DELETE according to the file (mandatory) and then a COPY each 1000 lines of that file. \nDELETE request : DELETE FROM table WHERE field1 = ? AND field2 = ?;COPY request : COPY table FROM STDIN WITH CSVWe have indexes on our database that we can't delete to insert our data. \nWhen we insert the data there is some kind of freezes on the databases between requests. Freezes occur about every 20 seconds.Here is a screenshot from yourkit.\nWe tried different solutions:1 table to 5 tables to reduces lock contentionfillfactor on indexescommit delayfsync to off (that helped but we can't do this)\nWe mainly want to know why this is happening because it slowing the insert too much for us.",
"msg_date": "Fri, 18 Jul 2014 12:52:39 +0200",
"msg_from": "Benjamin Dugast <[email protected]>",
"msg_from_op": true,
"msg_subject": "Blocking every 20 sec while mass copying."
},
{
"msg_contents": "Benjamin Dugast wrote:\r\n> I'm working on Postgres 9.3.4 for a project.\r\n> \r\n> \r\n> We are using Scala, Akka and JDBC to insert data in the database, we have around 25M insert to do\r\n> which are basically lines from 5000 files. We issue a DELETE according to the file (mandatory) and\r\n> then a COPY each 1000 lines of that file.\r\n> \r\n> DELETE request : DELETE FROM table WHERE field1 = ? AND field2 = ?;\r\n> \r\n> COPY request : COPY table FROM STDIN WITH CSV\r\n> \r\n> \r\n> We have indexes on our database that we can't delete to insert our data.\r\n> \r\n> \r\n> When we insert the data there is some kind of freezes on the databases between requests. Freezes occur\r\n> about every 20 seconds.\r\n> \r\n> \r\n> Here is a screenshot <http://tof.canardpc.com/view/c42e69c0-d776-4f93-a8a3-8713794a1a07.jpg> from\r\n> yourkit.\r\n> \r\n> \r\n> We tried different solutions:\r\n> \r\n> \r\n> *\t1 table to 5 tables to reduces lock contention\r\n> *\tfillfactor on indexes\r\n> *\tcommit delay\r\n> *\tfsync to off (that helped but we can't do this)\r\n> \r\n> We mainly want to know why this is happening because it slowing the insert too much for us.\r\n\r\nThis sounds a lot like checkpoint I/O spikes.\r\n\r\nCheck with the database server log if the freezes coincide with checkpoints.\r\n\r\nYou can increase checkpoint_segments when you load data to have them occur less often.\r\n\r\nIf you are on Linux and you have a lot of memory, you might hit spikes because too\r\nmuch dirty data are cached; check /proc/sys/vm/dirty_ratio and /proc/sys/dirty_background_ratio.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 18 Jul 2014 11:11:34 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking every 20 sec while mass copying."
},
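The knobs Laurenz mentions live in two places, postgresql.conf and the kernel's VM sysctls. The fragment below is illustrative only; the values are the ones that come up later in this thread (checkpoint_segments 64, dirty ratios 5 and 2), not a general recommendation, and `checkpoint_segments` is the pre-9.5 setting discussed here.

```ini
# postgresql.conf -- illustrative values for a bulk load
checkpoint_segments = 64             # more WAL between checkpoints, so fewer of them
checkpoint_completion_target = 0.9   # spread checkpoint writes over the interval

# /etc/sysctl.conf -- limit how much dirty data the kernel may cache
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2
```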
{
"msg_contents": "Benjamin Dugast <bdugast 'at' excilys.com> writes:\n\n> • fsync to off (that helped but we can't do this)\n\nnot exactly your question, but maybe synchronous_commit=off is a\nnice enough intermediary solution for you (it may give better\nperformances at other places too for only an affordable cost)\n\n-- \nGuillaume Cottenceau\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 18 Jul 2014 17:16:54 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking every 20 sec while mass copying."
},
{
"msg_contents": "On Fri, Jul 18, 2014 at 3:52 AM, Benjamin Dugast <[email protected]>\nwrote:\n\n> Hello,\n>\n> I'm working on Postgres 9.3.4 for a project.\n>\n> We are using Scala, Akka and JDBC to insert data in the database, we have\n> around 25M insert to do which are basically lines from 5000 files. We issue\n> a DELETE according to the file (mandatory) and then a COPY each 1000 lines\n> of that file.\n>\n> *DELETE request :* DELETE FROM table WHERE field1 = ? AND field2 = ?;\n> *COPY request :* COPY table FROM STDIN WITH CSV\n>\n> We have indexes on our database that we can't delete to insert our data.\n>\n\nInserting data into large indexed tables will usually dirty a prodigious\namount of data in a random manner, to maintain those indexes. It will take\na very long time to clear that data down to spinning disks, because the\nwrites cannot be effectively combined into long sequences (sometimes they\ntheoretically could be combined, but the kernel just fails to do a good job\nof doing so).\n\nBuy a good IO system, RAID with lots of disks, or maybe SSD, for your\nindexes.\n\nIf the freezes occur mostly at checkpoint sync time, then you can try\nmaking the checkpoint interval much longer. Checkpoints will still suck\nwhen they do happen, but that happens less often. Depending on the\ndetails of your system of your data and your loading processes, they might\nfreeze for N times longer if you make them N times less frequent, such that\nthe total amount of freezing time is conserved. Or they might freeze for\njust the same period, so that total freezing time is reduced by a factor of\nN. It is hard to know without trying it. You could also try lowering\nthe /proc/sys/vm/dirty_background_bytes setting, so that the kernel starts\nwriting things out *before* the end-of-checkpoint sync calls start landing.\n\nIf the freezes aren't correlated with checkpoints, you could try increasing\nthe shared_buffers to take up most of your RAM. 
This is unconventional\nadvice, but I've seen it do wonders for such loads when the indexes that\nneed maintenance are about the same size as RAM.\n\nIf you can partition your tables so that only one partition is being\nactively loaded at a time, that could be very effective if the indexes for\neach partition would then be small enough to fit in memory.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 18 Jul 2014 10:56:35 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking every 20 sec while mass copying."
},
{
"msg_contents": "Please keep the list on CC: in your responses.\r\n\r\nBenjamin Dugast wrote:\r\n> 2014-07-18 13:11 GMT+02:00 Albe Laurenz <[email protected]>:\r\n>> This sounds a lot like checkpoint I/O spikes.\r\n>>\r\n>> Check with the database server log if the freezes coincide with checkpoints.\r\n>>\r\n>> You can increase checkpoint_segments when you load data to have them occur less often.\r\n>>\r\n>> If you are on Linux and you have a lot of memory, you might hit spikes because too\r\n>> much dirty data are cached; check /proc/sys/vm/dirty_ratio and /proc/sys/dirty_background_ratio.\r\n\r\n> The checkpoint_segments is set to 64 already\r\n> \r\n> the dirty_ration was set by default to 10 i put it down to 5\r\n> the dirty_background_ratio was set to 5 and I changed it to 2\r\n> \r\n> There is less freezes but the insert is so slower than before.\r\n\r\nThat seems to indicate that my suspicion was right.\r\n\r\nI would say that your I/O system is saturated.\r\nHave you checked with \"iostat -mNx 1\"?\r\n\r\nIf you really cannot drop the indexes during loading, there's probably not much more\r\nyou can do to speed up the load.\r\nYou can try to increase checkpoint_segments beyond 64 and see if that buys you anything.\r\n\r\nTuning the file system write cache will not reduce the amount of I/O necessary, but it\r\nshould reduce the spikes (which is what I thought was your problem).\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jul 2014 08:02:37 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Blocking every 20 sec while mass copying."
},
{
"msg_contents": "Finally we solved our problem by using a kind of trick\n\nWe have 2 kind of table : online table for read and temp table to mass\ninsert our data\n\nWe work on the temp tables (5 different tables) to insert every data\nwithout any index that goes really fast compared to the previous method\nthen we create index on these tables simultanously,\nthen we drop online tables(also 5 tables) and rename the temp tables to\nonline (takes less than 1 sec)\n\nThis is the faster way to insert our data that we found.\nOn our config it goes pretty fast, we reduce our execution time to 50% and\nthere is no more need of many maintenance on the database.\n\nThanks for all answer that you give us.\n\n\n\n2014-07-21 10:02 GMT+02:00 Albe Laurenz <[email protected]>:\n\n> Please keep the list on CC: in your responses.\n>\n> Benjamin Dugast wrote:\n> > 2014-07-18 13:11 GMT+02:00 Albe Laurenz <[email protected]>:\n> >> This sounds a lot like checkpoint I/O spikes.\n> >>\n> >> Check with the database server log if the freezes coincide with\n> checkpoints.\n> >>\n> >> You can increase checkpoint_segments when you load data to have them\n> occur less often.\n> >>\n> >> If you are on Linux and you have a lot of memory, you might hit spikes\n> because too\n> >> much dirty data are cached; check /proc/sys/vm/dirty_ratio and\n> /proc/sys/dirty_background_ratio.\n>\n> > The checkpoint_segments is set to 64 already\n> >\n> > the dirty_ration was set by default to 10 i put it down to 5\n> > the dirty_background_ratio was set to 5 and I changed it to 2\n> >\n> > There is less freezes but the insert is so slower than before.\n>\n> That seems to indicate that my suspicion was right.\n>\n> I would say that your I/O system is saturated.\n> Have you checked with \"iostat -mNx 1\"?\n>\n> If you really cannot drop the indexes during loading, there's probably not\n> much more\n> you can do to speed up the load.\n> You can try to increase checkpoint_segments beyond 64 and see if that buys\n> 
you anything.\n>\n> Tuning the file system write cache will not reduce the amount of I/O\n> necessary, but it\n> should reduce the spikes (which is what I thought was your problem).\n>\n> Yours,\n> Laurenz Albe\n>\n\nFinally we solved our problem by using a kind of trickWe have 2 kind of table : online table for read and temp table to mass insert our dataWe work on the temp tables (5 different tables) to insert every data without any index that goes really fast compared to the previous method\nthen we create index on these tables simultanously,then we drop online tables(also 5 tables) and rename the temp tables to online (takes less than 1 sec)This is the faster way to insert our data that we found. \nOn our config it goes pretty fast, we reduce our execution time to 50% and there is no more need of many maintenance on the database.Thanks for all answer that you give us.\n2014-07-21 10:02 GMT+02:00 Albe Laurenz <[email protected]>:\nPlease keep the list on CC: in your responses.\n\nBenjamin Dugast wrote:\n> 2014-07-18 13:11 GMT+02:00 Albe Laurenz <[email protected]>:\n>> This sounds a lot like checkpoint I/O spikes.\n>>\n>> Check with the database server log if the freezes coincide with checkpoints.\n>>\n>> You can increase checkpoint_segments when you load data to have them occur less often.\n>>\n>> If you are on Linux and you have a lot of memory, you might hit spikes because too\n>> much dirty data are cached; check /proc/sys/vm/dirty_ratio and /proc/sys/dirty_background_ratio.\n\n> The checkpoint_segments is set to 64 already\n>\n> the dirty_ration was set by default to 10 i put it down to 5\n> the dirty_background_ratio was set to 5 and I changed it to 2\n>\n> There is less freezes but the insert is so slower than before.\n\nThat seems to indicate that my suspicion was right.\n\nI would say that your I/O system is saturated.\nHave you checked with \"iostat -mNx 1\"?\n\nIf you really cannot drop the indexes during loading, there's probably not much more\nyou 
can do to speed up the load.\nYou can try to increase checkpoint_segments beyond 64 and see if that buys you anything.\n\nTuning the file system write cache will not reduce the amount of I/O necessary, but it\nshould reduce the spikes (which is what I thought was your problem).\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 23 Jul 2014 11:39:20 +0200",
"msg_from": "Benjamin Dugast <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Blocking every 20 sec while mass copying."
}
] |
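The fix Benjamin eventually posts (load into unindexed temp tables, build the indexes afterwards, then drop-and-rename into place) boils down to a short DDL sequence per table pair. A sketch that generates that sequence; the `_load` suffix and the table name are made up for illustration:

```python
def swap_in_statements(table, temp_suffix="_load"):
    """Generate the drop-and-rename sequence for one table pair.

    The temp table `<table><temp_suffix>` is assumed to be fully loaded and
    indexed already; the swap itself is then just a rename, which is why the
    thread reports it taking under a second.
    """
    tmp = table + temp_suffix
    return [
        "BEGIN",
        f"DROP TABLE {table}",
        f"ALTER TABLE {tmp} RENAME TO {table}",
        "COMMIT",
    ]
```

Running the four statements in one transaction means readers see either the old or the new table, never a gap, which is what makes the trick usable on a live system.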
[
{
"msg_contents": "Hi there,\n\nI am trying to optimize a simple query that returns first 100 rows that\nhave been updated since a given timestamp (ordered by timestamp and id\ndesc). If there are several rows with the same timestamp I need to a\nsecond condition, that states that I want to return rows having the given\ntimestamp and id > given id.\n\nThe obvious query is\n\nSELECT * FROM register_uz_accounting_entities\n> WHERE effective_on > '2014-07-11' OR (effective_on = '2014-07-11' AND\n> id > 1459)\n> ORDER BY effective_on, id\n> LIMIT 100\n\n\nWith a composite index on (effective_on, id)\n\nQuery plan\n\n\"Limit (cost=4613.70..4613.95 rows=100 width=1250) (actual\n> time=0.122..0.130 rows=22 loops=1)\"\n> \" Buffers: shared hit=28\"\n> \" -> Sort (cost=4613.70..4617.33 rows=1453 width=1250) (actual\n> time=0.120..0.122 rows=22 loops=1)\"\n> \" Sort Key: effective_on, id\"\n> \" Sort Method: quicksort Memory: 30kB\"\n> \" Buffers: shared hit=28\"\n> \" -> Bitmap Heap Scan on register_uz_accounting_entities\n> (cost=35.42..4558.17 rows=1453 width=1250) (actual time=0.036..0.083\n> rows=22 loops=1)\"\n> \" Recheck Cond: ((effective_on > '2014-07-11'::date) OR\n> ((effective_on = '2014-07-11'::date) AND (id > 1459)))\"\n> \" Buffers: shared hit=28\"\n> \" -> BitmapOr (cost=35.42..35.42 rows=1453 width=0) (actual\n> time=0.026..0.026 rows=0 loops=1)\"\n> \" Buffers: shared hit=6\"\n> \" -> Bitmap Index Scan on idx2 (cost=0.00..6.49\n> rows=275 width=0) (actual time=0.017..0.017 rows=15 loops=1)\"\n> \" Index Cond: (effective_on > '2014-07-11'::date)\"\n> \" Buffers: shared hit=3\"\n> \" -> Bitmap Index Scan on idx2 (cost=0.00..28.21\n> rows=1178 width=0) (actual time=0.007..0.007 rows=7 loops=1)\"\n> \" Index Cond: ((effective_on =\n> '2014-07-11'::date) AND (id > 1459))\"\n> \" Buffers: shared hit=3\"\n> \"Total runtime: 0.204 ms\"\n\n\n\nEverything works as expected. 
However if I change the constraint to a\ntimestamp/date more in the past (resulting in far more matching rows) the\nquery slows down drastically.\n\n>\n> SELECT * FROM register_uz_accounting_entities\n> WHERE effective_on > '2014-06-11' OR (effective_on = '2014-06-11' AND id >\n> 1459)\n> ORDER BY effective_on, id\n> LIMIT 100\n>\n> \"Limit (cost=0.42..649.46 rows=100 width=1250) (actual\n> time=516.125..516.242 rows=100 loops=1)\"\n> \" Buffers: shared hit=576201\"\n> \" -> Index Scan using idx2 on register_uz_accounting_entities\n> (cost=0.42..106006.95 rows=16333 width=1250) (actual time=516.122..516.229\n> rows=100 loops=1)\"\n> \" Filter: ((effective_on > '2014-06-11'::date) OR ((effective_on =\n> '2014-06-11'::date) AND (id > 1459)))\"\n> \" Rows Removed by Filter: 797708\"\n> \" Buffers: shared hit=576201\"\n> \"Total runtime: 516.304 ms\"\n\n\n\nI've tried to optimize this query by pushing down the limit and order by's\ninto explicit subselects.\n\nSELECT * FROM (\n> SELECT * FROM register_uz_accounting_entities\n> WHERE effective_on > '2014-06-11'\n> ORDER BY effective_on, id LIMIT 100\n> ) t1\n> UNION\n> SELECT * FROM (\n> SELECT * FROM register_uz_accounting_entities\n> WHERE effective_on = '2014-06-11' AND id > 1459\n> ORDER BY effective_on, id LIMIT 100\n> ) t2\n> ORDER BY effective_on, id\n> LIMIT 100\n>\n> -- query plan\n> \"Limit (cost=684.29..684.54 rows=100 width=1250) (actual\n> time=2.648..2.708 rows=100 loops=1)\"\n> \" Buffers: shared hit=203\"\n> \" -> Sort (cost=684.29..684.79 rows=200 width=1250) (actual\n> time=2.646..2.672 rows=100 loops=1)\"\n> \" Sort Key: register_uz_accounting_entities.effective_on,\n> register_uz_accounting_entities.id\"\n> \" Sort Method: quicksort Memory: 79kB\"\n> \" Buffers: shared hit=203\"\n> \" -> HashAggregate (cost=674.65..676.65 rows=200 width=1250)\n> (actual time=1.738..1.971 rows=200 loops=1)\"\n> \" Buffers: shared hit=203\"\n> \" -> Append (cost=0.42..661.15 rows=200 width=1250) (actual\n> 
time=0.054..0.601 rows=200 loops=1)\"\n> \"                    Buffers: shared hit=203\"\n> \"                    ->  Limit  (cost=0.42..338.62 rows=100 width=1250)\n> (actual time=0.053..0.293 rows=100 loops=1)\"\n> \"                          Buffers: shared hit=101\"\n> \"                          ->  Index Scan using idx2 on\n> register_uz_accounting_entities  (cost=0.42..22669.36 rows=6703 width=1250)\n> (actual time=0.052..0.260 rows=100 loops=1)\"\n> \"                                Index Cond: (effective_on >\n> '2014-06-11'::date)\"\n> \"                                Buffers: shared hit=101\"\n> \"                    ->  Limit  (cost=0.42..318.53 rows=100 width=1250)\n> (actual time=0.037..0.228 rows=100 loops=1)\"\n> \"                          Buffers: shared hit=102\"\n> \"                          ->  Index Scan using idx2 on\n> register_uz_accounting_entities register_uz_accounting_entities_1\n> (cost=0.42..30888.88 rows=9710 width=1250) (actual time=0.036..0.187\n> rows=100 loops=1)\"\n> \"                                Index Cond: ((effective_on =\n> '2014-06-11'::date) AND (id > 1459))\"\n> \"                                Buffers: shared hit=102\"\n> \"Total runtime: 3.011 ms\"\n\n\n=> Very fast.\n\nThe question is... why is the query planner unable to make this\noptimization for the slow query? What am I missing?\n\nQueries with syntax highlighting\nhttps://gist.github.com/jsuchal/0993fd5a2bfe8e7242d1\n\nThanks in advance.\n",
"msg_date": "Mon, 21 Jul 2014 23:09:19 +0200",
"msg_from": "johno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query with indexed ORDER BY and LIMIT when using OR'd conditions"
},
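The UNION rewrite above relies on an equivalence: taking the top 100 rows of each OR branch and then the top 100 of their union yields the same rows as the top 100 of the plain OR query. A minimal sketch (illustrative toy data, not from the thread) that emulates both query shapes in Python:

```python
# Emulate the two query shapes from the post on toy (day, id) tuples.
# The data and pivot values here are made up for illustration.
import heapq
from itertools import islice

rows = sorted((day, rid) for day in range(1, 6) for rid in range(1, 80))
pivot = (3, 40)  # stands in for (effective_on, id) = ('2014-06-11', 1459)

def or_query(rows, pivot, limit=100):
    # WHERE day > p OR (day = p AND id > i) ORDER BY day, id LIMIT 100
    keep = [r for r in rows
            if r[0] > pivot[0] or (r[0] == pivot[0] and r[1] > pivot[1])]
    return sorted(keep)[:limit]

def union_rewrite(rows, pivot, limit=100):
    # the two LIMITed subselects; their results are disjoint, so the
    # duplicate removal a SQL UNION would do is a no-op here
    t1 = sorted(r for r in rows if r[0] > pivot[0])[:limit]
    t2 = sorted(r for r in rows
                if r[0] == pivot[0] and r[1] > pivot[1])[:limit]
    # outer ORDER BY ... LIMIT over the union of the branches
    return list(islice(heapq.merge(t1, t2), limit))

print(or_query(rows, pivot) == union_rewrite(rows, pivot))  # -> True
```

Each branch alone can satisfy the outer LIMIT, which is why pushing the LIMIT into the branches cannot lose any of the final 100 rows.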
{
"msg_contents": "johno wrote\n> The question is... why is the query planner unable to make this\n> optimization for the slow query? What am I missing?\n\nShort answer - your first and last queries are not relationally equivalent\nand the optimizer cannot change the behavior of the query which it is\noptimizing. i.e. you did not make an optimization but rather chose to\nreformulate the question so that it could be answered more easily while\nstill providing an acceptable answer.\n\nThe main question is better phrased as:\n\nGive me 100 rows updated at t(0) but only those that are subsequent to a\ngiven ID. If there are fewer than 100 such records give me enough\nadditional rows having t > t(0) so that the total number of rows returned\nis equal to 100.\n\nBoth queries give the same answer but only due to the final LIMIT 100. They\narrive there in different ways which necessitates generating different\nplans. At a basic level the planner is unable to push down LIMIT into a\nWHERE clause and it cannot add additional sub-queries that do not exist in\nthe original plan - which includes adding a UNION node.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-with-indexed-ORDER-BY-and-LIMIT-when-using-OR-d-conditions-tp5812282p5812285.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jul 2014 14:31:19 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using OR'd\n conditions"
},
{
"msg_contents": "Thanks for the quick reply David!\n\nHowever I am still unsure how these two queries are not relationally\nequivalent. I am struggling to find a counterexample where the first and\nthird query (in email, not in gist) would yield different results. Any\nideas?\n\nJano\n\n\nOn Mon, Jul 21, 2014 at 11:31 PM, David G Johnston <\[email protected]> wrote:\n\n> johno wrote\n> > The question is... why is the query planner unable to make this\n> > optimization for the slow query? What am I missing?\n>\n> Short answer - your first and last queries are not relationally equivalent\n> and the optimizer cannot change the behavior of the query which it is\n> optimizing. i.e. you did not make an optimization but rather choose to\n> reformulate the question so that it could be answered more easily while\n> still providing an acceptable answer.\n>\n> The question main question is better phrased as:\n>\n> Give me 100 updated at t(0) but only that are subsequent to a given ID. If\n> there are less than 100 such records give me enough additional rows having\n> t\n> > t(0) so that the total number of rows returned is equal to 100.\n>\n> Both queries give the same answer but only due to the final LIMIT 100. They\n> arrive there in different ways which necessitates generating different\n> plans. 
At a basic level it is unable to push down LIMIT into a WHERE\n> clause\n> and it cannot add additional sub-queries that do not exist in the original\n> plan - which includes adding a UNION node.\n>\n> David J.\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Slow-query-with-indexed-ORDER-BY-and-LIMIT-when-using-OR-d-conditions-tp5812282p5812285.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Mon, 21 Jul 2014 23:44:15 +0200",
"msg_from": "johno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Slow query with indexed ORDER BY and LIMIT when\n using OR'd conditions"
},
{
"msg_contents": "johno wrote\n> Thanks for the quick reply David!\n> \n> However I am still unsure how these two queries are not relationally\n> equivalent. I am struggling to find a counterexample where the first and\n> third query (in email, not in gist) would yield different results. Any\n> ideas?\n\nRemove the outer LIMIT 100 from both queries...\n\nThe first query would return a maximal number of rows that meet the OR\ncriteria while the second query would return at most 200 rows since both\nsub-queries would still have their own independent LIMIT 100 clauses.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-with-indexed-ORDER-BY-and-LIMIT-when-using-OR-d-conditions-tp5812282p5812289.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jul 2014 14:54:11 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using OR'd\n conditions"
},
{
"msg_contents": "Oh, yes I do understand that if I remove the outer limit, the semantics of\nthe query would change. However I am looking for the counterexample *with*\nthe limit clauses. Maybe I just don't understand what relationally\nequivalent means, sorry about that.\n\nBTW this is to my understanding a very similar scenario to how partitioned\ntables work and push down limit and where conditions. Why is this not\npossible in this case?\n\nJano\n\n\nOn Mon, Jul 21, 2014 at 11:54 PM, David G Johnston <\[email protected]> wrote:\n\n> johno wrote\n> > Thanks for the quick reply David!\n> >\n> > However I am still unsure how these two queries are not relationally\n> > equivalent. I am struggling to find a counterexample where the first and\n> > third query (in email, not in gist) would yield different results. Any\n> > ideas?\n>\n> Remove the outer LIMIT 100 from both queries...\n>\n> The first query would return a maximal number of rows that meet the OR\n> criteria while the second query would return at most 200 rows since both\n> sub-queries would still have their own independent LIMIT 100 clauses.\n>\n> David J.\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Slow-query-with-indexed-ORDER-BY-and-LIMIT-when-using-OR-d-conditions-tp5812282p5812289.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 22 Jul 2014 00:02:54 +0200",
"msg_from": "johno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Slow query with indexed ORDER BY and LIMIT when\n using OR'd conditions"
},
{
"msg_contents": "johno wrote\n> Oh, yes I do understand that if I remove the outer limit, the semantics of\n> the query would change. However I am looking for the counterexample *with*\n> the limit clauses. Maybe I just don't understand what relationally\n> equivalent means, sorry about that.\n> \n> BTW this is to my understanding a very similar scenario to how partitioned\n> tables work and push down limit and where conditions. Why is this not\n> possible in this case?\n> \n> Jano\n> \n> \n> On Mon, Jul 21, 2014 at 11:54 PM, David G Johnston <\n\n> david.g.johnston@\n\n>> wrote:\n> \n>> johno wrote\n>> > Thanks for the quick reply David!\n>> >\n>> > However I am still unsure how these two queries are not relationally\n>> > equivalent. I am struggling to find a counterexample where the first\n>> and\n>> > third query (in email, not in gist) would yield different results. Any\n>> > ideas?\n>>\n>> Remove the outer LIMIT 100 from both queries...\n>>\n>> The first query would return a maximal number of rows that meet the OR\n>> criteria while the second query would return at most 200 rows since both\n>> sub-queries would still have their own independent LIMIT 100 clauses.\n>>\n>> David J.\n\nTry following my lead and bottom-post, please.\n\nAnyway, the planner has no clue that, because of the final LIMIT 100, the\ntwo different feeding queries are just going to happen to end up providing\nthe same result. Maybe, in this particular instance, it is theoretically\npossible to make such a proof but generally that is not the case and so such\nan optimization has not made it into the codebase even if it theoretically\ncould be done (I'm not convinced it could but do not know enough to explain\nto someone else why I have that impression).\n\nI do not know enough to answer why this situation is any different from a\nsimilar partitioning scenario. An example showing exactly what a similar\npartitioning query looks like would help in this regard.\n\nIf you are looking for considerably more insight into the planner workings\nand why it does or doesn't do something you will need to wait for others. I\ncan, to a reasonable degree, deconstruct a pair of queries and either\nexplain or guess as to why things are happening but that is mostly applied\ndeductive reasoning and not because I have any particular insight into the\ncodebase.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-with-indexed-ORDER-BY-and-LIMIT-when-using-OR-d-conditions-tp5812282p5812291.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jul 2014 15:14:44 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using OR'd\n conditions"
},
{
"msg_contents": ">\n>\n> Try following my lead and bottom-post, please.\n>\n\nSorry for that.\n\n\n>\n> Anyway, the query has no clue that because of the final LIMIT 100 that the\n> two different feeding queries are just going to happen to end up providing\n> the same result. Maybe, in this particular instance, it is theoretically\n> possible to make such a proof but generally that is not the case and so\n> such\n> an optimization has not made into the codebase even if it theoretically\n> could be done (I'm not convinced it could but do not know enough to explain\n> to someone else why I have that impression).\n>\n> I do not know enough to answer why this situation is any different from a\n> similar partitioning scenario. An example showing exactly what a similar\n> partitioning query looks like would help in this regard.\n>\n> If you are looking for considerably more insight into the planner workings\n> and why it does or doesn't do something you will need to wait for others.\n> I\n> can, to a reasonable degree, deconstruct a pair of queries and either\n> explain or guess as to why things are happening but that is mostly applied\n> deductive reasoning and not because I have any particular insight into the\n> codebase.\n>\n\n\nThanks again for your time. Let's just wait for someone else and see where\nthis will end up going.\n\nJano",
"msg_date": "Tue, 22 Jul 2014 00:22:52 +0200",
"msg_from": "johno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Slow query with indexed ORDER BY and LIMIT when\n using OR'd conditions"
},
{
"msg_contents": "johno <[email protected]> writes:\n> I am trying to optimize a simple query that returns first 100 rows that\n> have been updated since a given timestamp (ordered by timestamp and id\n> desc). If there are several rows with the same timestamp I need to a\n> second condition, that states that I want to return rows having the given\n> timestamp and id > given id.\n\n> The obvious query is\n\n> SELECT * FROM register_uz_accounting_entities\n> WHERE effective_on > '2014-07-11' OR (effective_on = '2014-07-11' AND\n> id > 1459)\n> ORDER BY effective_on, id\n> LIMIT 100\n\nA more readily optimizable query is\n\nSELECT * FROM register_uz_accounting_entities\nWHERE (effective_on, id) > ('2014-07-11'::date, 1459)\nORDER BY effective_on, id\nLIMIT 100\n\nThis formulation allows the planner to match both the WHERE and ORDER BY\nclauses directly to the two-column index.\n\n> I've tried to optimize this query by pushing down the limit and order by's\n> into explicit subselects.\n\nAs noted earlier, that's unlikely to be an improvement, because on its\nface it specifies more computation. Postgres is not terribly bright\nabout UNIONs, either.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jul 2014 22:53:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using OR'd\n conditions"
},
{
"msg_contents": "On Tue, Jul 22, 2014 at 4:53 AM, Tom Lane <[email protected]> wrote:\n\n> johno <[email protected]> writes:\n> > I am trying to optimize a simple query that returns first 100 rows that\n> > have been updated since a given timestamp (ordered by timestamp and id\n> > desc). If there are several rows with the same timestamp I need to a\n> > second condition, that states that I want to return rows having the given\n> > timestamp and id > given id.\n>\n> > The obvious query is\n>\n> > SELECT * FROM register_uz_accounting_entities\n> > WHERE effective_on > '2014-07-11' OR (effective_on = '2014-07-11' AND\n> > id > 1459)\n> > ORDER BY effective_on, id\n> > LIMIT 100\n>\n> A more readily optimizable query is\n>\n> SELECT * FROM register_uz_accounting_entities\n> WHERE (effective_on, id) > ('2014-07-11'::date, 1459)\n> ORDER BY effective_on, id\n> LIMIT 100\n>\n\nYes, but that query has completely different semantics - I can't change\nthat.\n\n\n>\n> This formulation allows the planner to match both the WHERE and ORDER BY\n> clauses directly to the two-column index.\n>\n\nAre both fields really used? I was under the impression that only the first\ncolumn from index can be used when there is a range query.\n\n\n>\n> > I've tried to optimize this query by pushing down the limit and order\n> by's\n> > into explicit subselects.\n>\n> As noted earlier, that's unlikely to be an improvement, because on its\n> face it specifies more computation. Postgres is not terribly bright\n> about UNIONs, either.\n>\n\n\nDespite the cost calculation in explain the actual query times are very\ndifferent. 
I get consistent sub 50ms responses from the optimized one\n(union with pushing down the limits) and 500+ms for the plain one (when not\nusing bitmap index scan).\n\nIs this possible optimization considered by query planner or do I have\n\"force\" it?\n\nThanks again for your time and effort, I appreciate it.\n\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 22 Jul 2014 07:57:08 +0200",
"msg_from": "johno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using\n OR'd conditions"
},
{
"msg_contents": "johno <[email protected]> writes:\n> On Tue, Jul 22, 2014 at 4:53 AM, Tom Lane <[email protected]> wrote:\n>> johno <[email protected]> writes:\n>>> The obvious query is\n>>> SELECT * FROM register_uz_accounting_entities\n>>> WHERE effective_on > '2014-07-11' OR (effective_on = '2014-07-11' AND\n>>> id > 1459)\n>>> ORDER BY effective_on, id\n>>> LIMIT 100\n\n>> A more readily optimizable query is\n>> SELECT * FROM register_uz_accounting_entities\n>> WHERE (effective_on, id) > ('2014-07-11'::date, 1459)\n>> ORDER BY effective_on, id\n>> LIMIT 100\n\n> Yes, but that query has completely different semantics - I can't change\n> that.\n\nNo, it doesn't. Read it again ... or read up on row comparisons,\nif you're unfamiliar with that notation. The above queries are\nexactly equivalent per spec.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Jul 2014 02:15:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using OR'd\n conditions"
},
{
"msg_contents": ">\n> No, it doesn't. Read it again ... or read up on row comparisons,\n> if you're unfamiliar with that notation. The above queries are\n> exactly equivalent per spec.\n>\n\nWow, this is great. Thanks.\n\nNo, it doesn't. Read it again ... or read up on row comparisons,\n\nif you're unfamiliar with that notation. The above queries are\nexactly equivalent per spec.Wow, this is great. Thanks.",
"msg_date": "Tue, 22 Jul 2014 12:42:26 +0200",
"msg_from": "johno <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with indexed ORDER BY and LIMIT when using\n OR'd conditions"
}
] |
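A footnote on Tom Lane's row-value rewrite at the end of this thread: for non-NULL operands, SQL row-value comparison `(a, b) > (x, y)` orders values exactly like Python tuple comparison, so the equivalence with the OR form can be brute-forced mechanically. A small illustrative sketch (the sample values are made up):

```python
# Brute-force check (illustrative values only): the OR-form predicate and
# the row-value form (effective_on, id) > (date, id) must agree on every
# non-NULL combination, because tuple ordering is defined exactly as
# "first field greater, or first equal and second greater".
from datetime import date
from itertools import product

def or_form(effective_on, row_id, pivot_on, pivot_id):
    # WHERE effective_on > :d OR (effective_on = :d AND id > :i)
    return effective_on > pivot_on or (effective_on == pivot_on and row_id > pivot_id)

def row_value_form(effective_on, row_id, pivot_on, pivot_id):
    # WHERE (effective_on, id) > (:d, :i)
    return (effective_on, row_id) > (pivot_on, pivot_id)

pivot = (date(2014, 7, 11), 1459)
dates = [date(2014, 7, d) for d in (10, 11, 12)]
ids = [1458, 1459, 1460]

mismatches = [(d, i) for d, i in product(dates, ids)
              if or_form(d, i, *pivot) != row_value_form(d, i, *pivot)]
print(mismatches)  # -> []
```

The caveat is NULLs: SQL row-value comparison involving NULL components yields unknown, which this sketch does not model.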
[
{
"msg_contents": "Dear all,\n\nIs it possible to estimate the btree index size on a text field before\ncreating it? For example, I have a table like this:\n\nfortest=# \\d index_estimate\ni integer\nsomestring text\n\nThis is the average number of bytes in the text field:\nfortest=# select round(avg(octet_length(somestring))) from index_estimate ;\n4\n\nAnd the number of tuples is:\nfortest=# select reltuples from pg_class where relname = 'index_estimate';\n10001\n\nCan I estimate the index size using only this data? Maybe there is a\nformula or something to do this estimation.\n\n-- \nBest Regards,\nSeliavka Evgenii\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Jul 2014 11:38:18 +0400",
"msg_from": "=?UTF-8?B?0JXQstCz0LXQvdC40Lkg0KHQtdC70Y/QstC60LA=?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "estimate btree index size without creating"
}
] |
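No stock formula ships with PostgreSQL for this, but a rough lower bound on the leaf level follows from reltuples and the average key width: each leaf entry costs a 4-byte line pointer plus a MAXALIGNed index tuple (8-byte header plus key data), packed into 8 kB pages at the default btree fillfactor of 90. A hedged back-of-the-envelope sketch (the overhead constants are approximations, and real indexes add inner pages and a meta page, so treat the result as a floor):

```python
# Rough leaf-level size estimate for a btree index, assuming 8 kB pages,
# an 8-byte IndexTuple header, a 4-byte line pointer per entry, MAXALIGN(8)
# padding and ~40 bytes of page header + btree special space. The 1-byte
# short varlena header a small text key carries on disk is absorbed by
# alignment padding for the numbers used here.
import math

PAGE_SIZE = 8192
PAGE_OVERHEAD = 24 + 16      # page header + btree special space (approx.)
FILLFACTOR = 0.90            # default btree fillfactor

def maxalign(n, align=8):
    return (n + align - 1) // align * align

def estimate_btree_size(reltuples, avg_key_bytes):
    # each leaf entry: line pointer (4) + aligned tuple (8-byte header + key)
    entry = 4 + maxalign(8 + avg_key_bytes)
    usable = (PAGE_SIZE - PAGE_OVERHEAD) * FILLFACTOR
    entries_per_page = int(usable // entry)
    leaf_pages = math.ceil(reltuples / entries_per_page)
    return leaf_pages * PAGE_SIZE

# the thread's numbers: 10001 tuples, 4-byte average text keys
print(estimate_btree_size(10001, 4))  # -> 229376 bytes (~224 kB)
```

Comparing the estimate against an actual `pg_relation_size()` after a test build is the only way to calibrate the constants for a given version.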
[
{
"msg_contents": "Hi,\n\nI have a table partitioned with about 60 children tables. Now I found \nthe planning time of simple query with partition key are very slow.\n# explain analyze select count(*) as cnt from article where pid=88 and \nhash_code='2ca3ff8b17b163f0212c2ba01b80a064';\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=16.55..16.56 rows=1 width=0) (actual \ntime=0.259..0.259 rows=1 loops=1)\n -> Append (cost=0.00..16.55 rows=2 width=0) (actual \ntime=0.248..0.250 rows=1 loops=1)\n -> Seq Scan on article (cost=0.00..0.00 rows=1 width=0) \n(actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((pid = 88) AND (hash_code = \n'2ca3ff8b17b163f0212c2ba01b80a064'::bpchar))\n -> Index Scan using article_88_hash_idx on article_88 \narticle (cost=0.00..16.55 rows=1 width=0) (actual time=0.246..0.248 \nrows=1 loops=1)\n Index Cond: (hash_code = \n'2ca3ff8b17b163f0212c2ba01b80a064'::bpchar)\n Filter: (pid = 88)\n Total runtime: 3.816 ms\n(8 rows)\n\nTime: 30999.986 ms\n\nYou can see the timing output that the actual run time of the 'explain \nanalyze' is 30 seconds while the select sql itself takes only 3 ms. My \npartition key is on article.pid and the constraint is simple like this: \nCONSTRAINT article_88_pid_check CHECK (pid = 88). What's wrong and how \ncan I improve the planning performance?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Jul 2014 21:21:00 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow planning performance on partition table"
},
{
"msg_contents": "On Wed, Jul 23, 2014 at 6:21 AM, Rural Hunter <[email protected]> wrote:\n\n> What's wrong and how can I improve the planning performance?\n\n\nWhat is constraint exclusion set to?\n\n\n-- \nDouglas J Hunley ([email protected])",
"msg_date": "Wed, 23 Jul 2014 10:35:21 -0700",
"msg_from": "Douglas J Hunley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "It's the default value (partition):\n\n# grep exclusion postgresql.conf\n#constraint_exclusion = partition    # on, off, or partition\n\nBtw, I'm on PostgreSQL 9.2.4.\n\nOn 2014/7/24 1:35, Douglas J Hunley wrote:\n> On Wed, Jul 23, 2014 at 6:21 AM, Rural Hunter <[email protected]> wrote:\n>> What's wrong and how can I improve the planning performance?\n>\n> What is constraint exclusion set to?\n>\n> -- \n> Douglas J Hunley ([email protected])",
"msg_date": "Thu, 24 Jul 2014 09:30:12 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "Rural Hunter <[email protected]> writes:\n> I have a table partitioned with about 60 children tables. Now I found \n> the planning time of simple query with partition key are very slow.\n> ...\n> You can see the timing output that the actual run time of the 'explain \n> analyze' is 30 seconds while the select sql itself takes only 3 ms. My \n> partition key is on article.pid and the constraint is simple like this: \n> CONSTRAINT article_88_pid_check CHECK (pid = 88). What's wrong and how \n> can I improve the planning performance?\n\n[ shrug... ] Insufficient data. When I try a simple test case based on\nwhat you've told us, I get planning times of a couple of milliseconds.\nI can think of contributing factors that would increase that, but not by\nfour orders of magnitude. So there's something very significant that\nyou've left out. Can you construct a self-contained test case that's\nthis slow?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Jul 2014 21:53:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "2014/7/25 9:53, Tom Lane wrote:\n> Rural Hunter <[email protected]> writes:\n> [ shrug... ] Insufficient data. When I try a simple test case based on\n> what you've told us, I get planning times of a couple of milliseconds.\n> I can think of contributing factors that would increase that, but not by\n> four orders of magnitude. So there's something very significant that\n> you've left out. Can you construct a self-contained test case that's\n> this slow?\n>\n> \t\t\tregards, tom lane\n>\nNo, I can't. I exported the db schema (without data) to another server \nand there is no problem. Is the planning time related to data volume? \nWhat else can I check? I already checked the default statistics \ntarget and it's the default value. I did change the statistics target \non one column of the table, but the column is not involved in the slow \nplanning query.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 11:26:03 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "On 2014/7/25 9:53, Tom Lane wrote:\n> [ shrug... ] Insufficient data. When I try a simple test case based on \n> what you've told us, I get planning times of a couple of milliseconds. \n> I can think of contributing factors that would increase that, but not \n> by four orders of magnitude. So there's something very significant \n> that you've left out. Can you construct a self-contained test case \n> that's this slow? regards, tom lane \n\nI run dbg on the backend process and got this:\n(gdb) bt\n#0 0x00007fc4a1b6cdb7 in semop () from /lib/x86_64-linux-gnu/libc.so.6\n#1 0x00000000005f8703 in PGSemaphoreLock ()\n#2 0x0000000000636703 in LWLockAcquire ()\n#3 0x0000000000632eb3 in LockAcquireExtended ()\n#4 0x000000000062fdfb in LockRelationOid ()\n#5 0x0000000000474e55 in relation_open ()\n#6 0x000000000047b39b in index_open ()\n#7 0x00000000005f3c22 in get_relation_info ()\n#8 0x00000000005f6590 in build_simple_rel ()\n#9 0x00000000005f65db in build_simple_rel ()\n#10 0x00000000005de8c0 in add_base_rels_to_query ()\n#11 0x00000000005df352 in query_planner ()\n#12 0x00000000005e0d51 in grouping_planner ()\n#13 0x00000000005e2bbe in subquery_planner ()\n#14 0x00000000005e2ef9 in standard_planner ()\n#15 0x00000000006426e1 in pg_plan_query ()\n#16 0x000000000064279e in pg_plan_queries ()\n#17 0x00000000006f4b7a in BuildCachedPlan ()\n#18 0x00000000006f4e1e in GetCachedPlan ()\n#19 0x0000000000642259 in exec_bind_message ()\n#20 0x0000000000643561 in PostgresMain ()\n#21 0x000000000060347f in ServerLoop ()\n#22 0x0000000000604121 in PostmasterMain ()\n#23 0x00000000005a5ade in main ()\n\nDoes that indicate something? seems it's waiting for some lock.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 22:23:49 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "Anyone? I can see many pg processes are in BIND status with htop. Some \nof them could be hanging like 30 mins. I tried gdb on the same process \nmany times and the trace shows the same as my previous post. This happened \nafter I partitioned my main tables into 60 child tables. And also, I'm \nexperiencing a cpu peak around 30-60 mins every 1-2 days. During the \npeak, all my cpus (32 cores) are fully utilized while there is no special \nload and the memory and io are fine. Sometimes I had to kill the db \nprocess and restart the db to escape the situation. I tried to upgrade \nto the latest 9.2.9 but it didn't help.\n\nOn 2014/7/25 22:23, Rural Hunter wrote:\n> I ran gdb on the backend process and got this:\n> (gdb) bt\n> #0 0x00007fc4a1b6cdb7 in semop () from /lib/x86_64-linux-gnu/libc.so.6\n> #1 0x00000000005f8703 in PGSemaphoreLock ()\n> #2 0x0000000000636703 in LWLockAcquire ()\n> #3 0x0000000000632eb3 in LockAcquireExtended ()\n> #4 0x000000000062fdfb in LockRelationOid ()\n> #5 0x0000000000474e55 in relation_open ()\n> #6 0x000000000047b39b in index_open ()\n> #7 0x00000000005f3c22 in get_relation_info ()\n> #8 0x00000000005f6590 in build_simple_rel ()\n> #9 0x00000000005f65db in build_simple_rel ()\n> #10 0x00000000005de8c0 in add_base_rels_to_query ()\n> #11 0x00000000005df352 in query_planner ()\n> #12 0x00000000005e0d51 in grouping_planner ()\n> #13 0x00000000005e2bbe in subquery_planner ()\n> #14 0x00000000005e2ef9 in standard_planner ()\n> #15 0x00000000006426e1 in pg_plan_query ()\n> #16 0x000000000064279e in pg_plan_queries ()\n> #17 0x00000000006f4b7a in BuildCachedPlan ()\n> #18 0x00000000006f4e1e in GetCachedPlan ()\n> #19 0x0000000000642259 in exec_bind_message ()\n> #20 0x0000000000643561 in PostgresMain ()\n> #21 0x000000000060347f in ServerLoop ()\n> #22 0x0000000000604121 in PostmasterMain ()\n> #23 0x00000000005a5ade in main ()\n>\n> Does that indicate something? 
seems it's waiting for some lock.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 Jul 2014 23:44:09 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "Rural Hunter <[email protected]> writes:\n>> Does that indicate something? seems it's waiting for some lock.\n\nYeah, that's what the stack trace suggests. Have you looked into pg_locks\nand pg_stat_activity to see which lock it wants and what's holding said\nlock?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 Jul 2014 12:28:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow planning performance on partition table"
},
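Tom Lane's suggestion above — checking pg_locks together with pg_stat_activity — can be sketched as a single query. This is a hedged illustration, not from the thread; it uses the pg_stat_activity column names introduced in PostgreSQL 9.2 (`pid`, `query`), and the join conditions cover only database-, relation- and transactionid-level locks:

```sql
-- For each ungranted lock request, show a session holding a granted
-- lock on the same object (PostgreSQL 9.2+ column names assumed).
SELECT w.pid          AS waiting_pid,
       wa.query       AS waiting_query,
       h.pid          AS holding_pid,
       ha.query       AS holding_query,
       w.locktype,
       w.mode         AS wanted_mode,
       h.mode         AS held_mode
FROM pg_locks w
JOIN pg_locks h
  ON h.granted
 AND NOT w.granted
 AND w.database      IS NOT DISTINCT FROM h.database
 AND w.relation      IS NOT DISTINCT FROM h.relation
 AND w.transactionid IS NOT DISTINCT FROM h.transactionid
JOIN pg_stat_activity wa ON wa.pid = w.pid
JOIN pg_stat_activity ha ON ha.pid = h.pid
WHERE w.pid <> h.pid;
```

Note that a backend blocked on a lightweight lock (LWLock), as in the stack traces in this thread, does not show up as waiting in pg_locks at all — that view only tracks heavyweight locks.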
{
"msg_contents": "Yes I checked. The connection I inspected is the longest running one. \nThere were no other connections blocking it. And I also see all locks are \ngranted for it. Does the planning phase require some internal locks?\n\nOn 2014/7/28 0:28, Tom Lane wrote:\n> Yeah, that's what the stack trace suggests. Have you looked into pg_locks\n> and pg_stat_activity to see which lock it wants and what's holding said\n> lock?\n>\n> \t\t\tregards, tom lane\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Jul 2014 09:18:59 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "This is the vmstat output when the high load peak happens:\n# vmstat 3\nprocs -----------memory---------- ---swap-- -----io---- -system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n54 1 756868 1047128 264572 341573472 0 0 243 272 1 2 51 7 \n39 2\n53 1 756888 924452 264508 341566080 0 7 31379 3623 53110 184808 \n29 55 15 1\n70 1 756892 992416 264408 341530880 0 3 14483 9455 53010 183758 \n23 61 15 1\n93 1 756900 954704 264160 341514208 0 3 20280 3391 66607 304526 \n23 59 17 1\n65 2 756916 998524 263696 341427520 0 5 23295 2084 53748 213259 \n26 60 12 1\n46 0 756924 969036 263636 341421088 0 3 23508 1447 51134 200739 \n22 59 19 1\n123 1 756932 977336 263568 341426016 0 3 21444 2747 48044 174390 \n27 59 13 1\n71 2 756932 975932 263580 341483520 0 0 19328 89629 54321 234718 \n25 59 14 2\n47 5 756932 967004 263676 341502240 0 0 19509 52652 56792 236648 \n21 60 15 4\n70 0 756944 1038464 263660 341468800 0 4 21349 3584 51937 179806 \n25 59 15 1\n70 0 756940 923800 263532 341475712 0 0 15135 1524 58201 236794 \n21 59 19 1\n40 1 756940 1022420 263560 341506560 0 0 9163 4889 34702 130106 \n19 61 19 1\n59 0 756944 939380 263500 341518144 0 1 22809 4024 46398 224644 \n21 60 19 1\n56 1 756956 954656 263464 341469440 0 4 22927 4477 53705 175386 \n28 57 14 1\n39 0 756976 968204 263372 341376576 0 7 24612 2556 61900 262784 \n30 51 18 1\n109 1 756984 1015260 263332 341323776 0 3 16636 4039 29271 \n83699 7 85 7 1\n76 6 756992 980044 263312 341308128 0 3 6949 1848 27496 130478 \n6 90 2 2\n103 0 756992 963540 263308 341352064 0 0 22125 2493 20526 61133 \n4 88 6 2\n\nSeems most of the cpu is used by sys part.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Jul 2014 15:01:19 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "I am now seeing another phenomenon of hanging connections. They are \nshowing 'UPDATE' status in the process list.\n(gdb) bt\n#0 0x00007f783f79d4f7 in semop () from /lib/x86_64-linux-gnu/libc.so.6\n#1 0x00000000005f97d3 in PGSemaphoreLock ()\n#2 0x0000000000638153 in LWLockAcquire ()\n#3 0x00000000004a9239 in ginStepRight ()\n#4 0x00000000004a9c61 in ginFindLeafPage ()\n#5 0x00000000004a8377 in ginInsertItemPointers ()\n#6 0x00000000004a4548 in ginEntryInsert ()\n#7 0x00000000004ae687 in ginInsertCleanup ()\n#8 0x00000000004af3d6 in ginHeapTupleFastInsert ()\n#9 0x00000000004a4ab1 in gininsert ()\n#10 0x0000000000709b15 in FunctionCall6Coll ()\n#11 0x000000000047b6b7 in index_insert ()\n#12 0x000000000057f475 in ExecInsertIndexTuples ()\n#13 0x000000000058bf07 in ExecModifyTable ()\n#14 0x00000000005766e3 in ExecProcNode ()\n#15 0x0000000000575ad4 in standard_ExecutorRun ()\n#16 0x000000000064718f in ProcessQuery ()\n#17 0x00000000006473b7 in PortalRunMulti ()\n#18 0x0000000000647e8a in PortalRun ()\n#19 0x0000000000645160 in PostgresMain ()\n#20 0x000000000060459e in ServerLoop ()\n#21 0x00000000006053bc in PostmasterMain ()\n#22 0x00000000005a686b in main ()\n(gdb) q\n\nThis connection cannot be killed by either pg_cancel_backend or \npg_terminate_backend. It just hangs there and does not respond to a normal \nkill command. I had to kill -9 the process to terminate the whole \npostgresql instance. What happened there and how can I kill these \nconnections safely?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Jul 2014 21:10:32 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "On Sun, Jul 27, 2014 at 9:28 AM, Tom Lane <[email protected]> wrote:\n\n> Rural Hunter <[email protected]> writes:\n> >> Does that indicate something? seems it's waiting for some lock.\n>\n> Yeah, that's what the stack trace suggests. Have you looked into pg_locks\n> and pg_stat_activity to see which lock it wants and what's holding said\n> lock?\n>\n\nIf it were waiting on a pg_locks lock, the semop should be coming\nfrom ProcSleep, not from LWLockAcquire, shouldn't it?\n\nI'm guessing he has a lot of connections, and each connection is locking\neach partition in shared mode in rapid fire, generating spin-lock or\ncache-line contention.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 28 Jul 2014 10:29:25 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "On 2014/7/29 1:29, Jeff Janes wrote:\n> If it were waiting on a pg_locks lock, the semop should be coming\n> from ProcSleep, not from LWLockAcquire, shouldn't it?\n>\n> I'm guessing he has a lot of connections, and each connection is locking\n> each partition in shared mode in rapid fire, generating spin-lock or\n> cache-line contention.\n>\n> Cheers,\n>\n> Jeff\n\nYes. I have a lot of connections, and they may be coming in together and\ndoing the same update statement, without the partition key, on the\npartitioned table.\n",
"msg_date": "Tue, 29 Jul 2014 08:10:35 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
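The pattern admitted above — many concurrent updates on a partitioned table with no partition key in the conditions — is what forces the planner to open and lock every child table and its indexes, as the earlier backtrace (LockRelationOid via get_relation_info) shows. A hypothetical sketch of the difference, assuming a parent table `tbl` whose 60 children carry CHECK constraints on `part_key` and `constraint_exclusion = partition`:

```sql
-- The CHECK constraints let the planner exclude 59 of the 60 children,
-- so only one child table (and its indexes) is opened and locked:
UPDATE tbl SET status = 2 WHERE part_key = 17 AND id = 12345;

-- No partition key: every child table and every index on it must be
-- opened and locked during planning, in every concurrent session:
UPDATE tbl SET status = 2 WHERE id = 12345;
```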
{
"msg_contents": "More information found. After the hang connection appears, I noticed \nthere were several hundred connections of the same user. Since I use \npgbouncer and only set the pool size to 50 for each user, this is very \nstrange. I checked the pgbouncer side, 'show pools' showed the active \nserver connection count is less than 50 (only 35 actually). I also \nchecked the client port which is shown in the pg process list. It is not \nused at the pgbouncer side when I did the check. So I stopped pgbouncer, and \nthe connection count from the user dropped slowly. Finally all those \nconnections disappeared. After that I restarted pgbouncer and it looks \ngood again.\nWith this solution, I at least don't have to kill pg when the problem \nhappens. But does anyone have a clue why this happens? What should I check \nfor the root cause? One thing I forgot to check is the network status of \nthose orphan connections at the pg side. I will check it next time and see \nif they are in an abnormal status.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 29 Jul 2014 16:21:18 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "On Tue, Jul 29, 2014 at 1:21 AM, Rural Hunter <[email protected]> wrote:\n\n> More information found. After the hang connection appears, I noticed there\n> were several hundreds of connections of the same user. Since I use\n> pgbouncer and I only set the pool size to 50 for each user, this is very\n> strange. I checked the pgbouncer side, 'show pools' showed the active\n> server connection count is less than 50(only 35 actually). I also checked\n> the client port which is shown in pg process list. It is not used at\n> pgbouncer side when I did the check. So I stopped pgbouncer then the\n> connection count from the user drops slowly. Finally all those connections\n> disappeared. After that I restarted pgbouncer and it looks good again.\n> With this solution, I at least don't have to kill pg when the problem\n> happens. But anyone has a clue why this happens?\n\n\nIt sounds like someone is bypassing your pgbouncer and connecting directly\nto your database. Maybe they tried to create their own parallelization and\nhave a master connection going through pgbouncer and create many auxiliary\nconnections that go directly to the database (probably because pgbouncer\nwouldn't let them create as many connections as they wanted through it).\n That would explain why the connections slowly drain away once pgbouncer is\nshut down.\n\nCan you change your pg_hba.conf file so that it only allows connections\nfrom pgbouncer's IP address? This should flush out the culprit pretty\nquickly.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 29 Jul 2014 10:27:11 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow planning performance on partition table"
},
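Jeff's pg_hba.conf suggestion would look roughly like the lines below. The database name and the pgbouncer host address are placeholders, and rule order matters because the first matching line wins:

```
# pg_hba.conf sketch (hypothetical names/addresses)
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    appdb     all   10.0.0.5/32    md5      # the pgbouncer host only
host    appdb     all   0.0.0.0/0      reject   # everyone else
```

A reload (`pg_ctl reload` or SELECT pg_reload_conf()) is enough to apply pg_hba.conf changes; direct connection attempts would then fail with a "pg_hba.conf rejects connection" error, identifying the culprit.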
{
"msg_contents": "On 2014/7/30 1:27, Jeff Janes wrote:\n>\n>\n> It sounds like someone is bypassing your pgbouncer and connecting \n> directly to your database. Maybe they tried to create their own \n> parallelization and have a master connection going through pgbouncer \n> and create many auxiliary connections that go directly to the database \n> (probably because pgbouncer wouldn't let them create as many \n> connections as they wanted through it). That would explain why the \n> connections slowly drain away once pgbouncer is shut down.\n>\n> Can you change your pg_hba.conf file so that it only allows \n> connections from pgbouncer's IP address? This should flush out the \n> culprit pretty quickly.\n>\n> Cheers,\n>\n> Jeff\n\nI suspected that first. But after I checked a few things, I am quite \nsure this is not someone bypassing the pgbouncer.\n1. The connections were all from the host of pgbouncer.\n2. The id is an application id and no human has access to it. There were \nno other suspect applications running on the host of pgbouncer when the \nproblem happened.\n3. When I found the problem and checked the connections on the host of \npgbouncer, those network connections actually didn't exist on the client \nside while they were still hanging at the pg server side.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 09:13:40 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "This happened again. This time I got the connection status (between \nthe pgbouncer host and the pgsql host) at the postgresql side. When the problem \nhappens, the connection status is this:\nESTABLISHED: 188\nCLOSE_WAIT: 116\n\nThe count of connections in CLOSE_WAIT is abnormal. In the normal \nsituation there are usually no CLOSE_WAIT connections. The \nconnection status sample is like this:\nESTABLISHED: 117\nCLOSE_WAIT: 0\n\nI have 4 users configured in pgbouncer and the pool_size is 50. So the \nmax number of connections from pgbouncer should be less than 200.\nThe connection spike happens very quickly. I created a script that checks \nthe connections from pgbouncer every 5 mins. This is the log:\n10:55:01 CST pgbouncer is healthy. connection count: 73\n11:00:02 CST pgbouncer is healthy. connection count: 77\n11:05:01 CST pgbouncer is healthy. connection count: 118\n11:10:01 CST pgbouncer is healthy. connection count: 115\n11:15:01 CST pgbouncer is healthy. connection count: 75\n11:20:01 CST pgbouncer is healthy. connection count: 73\n11:25:02 CST pgbouncer is healthy. connection count: 75\n11:30:01 CST pgbouncer is healthy. connection count: 77\n11:35:01 CST pgbouncer is healthy. connection count: 84\n11:40:10 CST Problematic connection count: 292, will restart pgbouncer...\n\nNow I suspect there is some network problem between the hosts of \npgbouncer and pgsql. Will check more.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 12:05:25 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
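A tally like the ESTABLISHED/CLOSE_WAIT counts above can be produced by grouping the state column of `ss` (or `netstat -tan`) output. This is a sketch under assumptions: PostgreSQL is listening on port 5432, and your `ss` prints the state in the first column:

```shell
# Count TCP states for traffic on the assumed PostgreSQL port 5432.
# A growing CLOSE-WAIT count means the peer closed the connection but
# the local process never called close() on its socket.
ss -tan '( sport = :5432 or dport = :5432 )' \
  | awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }' \
  | sort
```

Run from cron every few minutes, the output can be appended to a log to catch the spike as it starts.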
{
"msg_contents": "There was no error in the log of pgbouncer, but there is a sudden drop in \nrequest count when the problem happened:\n2014-07-30 11:36:51.919 25369 LOG Stats: 2394 req/s, in 339478 b/s, out \n1422425 b/s,query 3792 us\n2014-07-30 11:37:51.919 25369 LOG Stats: 2207 req/s, in 314570 b/s, out \n2291440 b/s,query 5344 us\n2014-07-30 11:38:51.919 25369 LOG Stats: 2151 req/s, in 288565 b/s, out \n1945795 b/s,query 10016 us\n[=========problem happens=========]\n2014-07-30 11:39:51.919 25369 LOG Stats: 1061 req/s, in 140077 b/s, out \n2652730 b/s,query 515753 us\n[=========pgbouncer restart=========]\n2014-07-30 11:40:52.780 10640 LOG File descriptor limit: 65535 \n(H:65535), max_client_conn: 5500, max fds possible: 6560\n2014-07-30 11:40:52.781 10640 LOG Stale pidfile, removing\n2014-07-30 11:40:52.782 10642 LOG listening on 0.0.0.0:xxxx\n2014-07-30 11:40:52.782 10642 WARNING Cannot listen on ::/xxxx: bind(): \nAddress already in use\n2014-07-30 11:40:52.782 10642 LOG listening on unix:/tmp/.s.PGSQL.xxxx\n2014-07-30 11:40:52.782 10642 LOG process up: pgbouncer 1.5.4, libevent \n1.4.13-stable (epoll), adns: libc-2.11\n2014-07-30 11:41:52.781 10642 LOG Stats: 2309 req/s, in 331097 b/s, out \n3806033 b/s,query 4671 us\n2014-07-30 11:42:52.782 10642 LOG Stats: 2044 req/s, in 285153 b/s, out \n2932543 b/s,query 4789 us\n2014-07-30 11:43:52.782 10642 LOG Stats: 1969 req/s, in 282697 b/s, out \n560439 b/s,query 4607 us\n2014-07-30 11:44:52.782 10642 LOG Stats: 2551 req/s, in 351589 b/s, out \n3223438 b/s,query 4364 us\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 12:16:36 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "I think I understand what happened now. I have another monitor script \nthat runs periodically and calls pg_cancel_backend and pg_terminate_backend \nfor those hanging update sqls. However, for some unknown reason the cancel \nand terminate commands don't work at the pgsql side for those update sqls.\n\nBut I think the pgbouncer side was notified by the cancel or terminate command. \nIt then drops old connections and creates new ones while those old \nconnections still hang at the pgsql side. That's why the connection status \nshows CLOSE_WAIT and there are more processes at the pgsql side than \npgbouncer defined. So the root cause is still at the pgsql side. It \nshouldn't hang there. What the hanging process was doing is in my \nprevious posts. There are many identical concurrent sqls which update a \npartitioned table without the partition key specified in the conditions. The gdb \ntrace shows this:\n(gdb) bt\n#0 0x00007f8cea310db7 in semop () from /lib/x86_64-linux-gnu/libc.so.6\n#1 0x00000000005f97d3 in PGSemaphoreLock ()\n#2 0x0000000000638153 in LWLockAcquire ()\n#3 0x00000000004a90d0 in ginTraverseLock ()\n#4 0x00000000004a9d0b in ginFindLeafPage ()\n#5 0x00000000004a8377 in ginInsertItemPointers ()\n#6 0x00000000004a4548 in ginEntryInsert ()\n#7 0x00000000004ae687 in ginInsertCleanup ()\n#8 0x00000000004af3d6 in ginHeapTupleFastInsert ()\n#9 0x00000000004a4ab1 in gininsert ()\n#10 0x0000000000709b15 in FunctionCall6Coll ()\n#11 0x000000000047b6b7 in index_insert ()\n#12 0x000000000057f475 in ExecInsertIndexTuples ()\n#13 0x000000000058bf07 in ExecModifyTable ()\n#14 0x00000000005766e3 in ExecProcNode ()\n#15 0x0000000000575ad4 in standard_ExecutorRun ()\n#16 0x000000000064718f in ProcessQuery ()\n#17 0x00000000006473b7 in PortalRunMulti ()\n#18 0x0000000000647e8a in PortalRun ()\n#19 0x0000000000645160 in PostgresMain ()\n#20 0x000000000060459e in ServerLoop ()\n#21 0x00000000006053bc in PostmasterMain ()\n#22 0x00000000005a686b in main ()\n(gdb) q\n\nIt will just hang there forever and finally block all other update \nsqls if I don't stop pgbouncer. When this happens, all the cpus will be \nutilized by those hanging processes and the server load is very very \nhigh. It stays at several hundred, compared with about 20 normally, \nwhich causes the performance problem for all tasks on the server.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 18:03:30 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
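The monitor script described above presumably issues calls along these lines (the column names are PostgreSQL 9.2's; the 10- and 30-minute thresholds are invented for illustration). As this message demonstrates, neither call helps here: cancellation is only honored at interrupt check points, which a backend sleeping inside an LWLock/semop wait never reaches:

```sql
-- Politely cancel long-running, non-idle statements:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '10 minutes';

-- Escalate to termination for those that survive a later pass:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '30 minutes';
```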
{
"msg_contents": "Hi Tom,\n\nCould my problem be a victim of this issue?\nhttp://postgresql.1045698.n5.nabble.com/Planner-performance-extremely-affected-by-an-hanging-transaction-20-30-times-td5771686.html\n\nIs the patch mentioned in that thread applied in 9.2.9?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 04 Aug 2014 15:23:11 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
},
{
"msg_contents": "On 2014/7/30 18:03, Rural Hunter wrote:\n> I think I understand what happened now. I have another monitor script \n> runs periodically and calls pg_cancel_backend and pg_terminate_backend \n> for those hanging update sqls. However for some unkown reason the \n> cancle and termiante command doesn't work at pgsql side for those \n> update sqls.\n>\nWith the log of the monitor & kill script, I can confirm that the \nCLOSE_WAIT is not caused by it. I logged the netstat before actually \ndoing the kill and found the CLOSE_WAIT connections were already there. \nSo it must be something else that caused the CLOSE_WAIT connections.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 04 Aug 2014 18:12:35 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow planning performance on partition table"
}
] |
[
{
"msg_contents": "Hello PGSQL performance community,\n[By way of introduction, we are a TPC subcommittee that is developing a benchmark with cloud-like characteristics for virtualized databases. The end-to-end benchmarking kit will be publicly available, and will run on PGSQL]\n\nI am running into very high failure rates when I run with the Serializable Isolation Level. I have simplified our configuration to a single database with a constant workload, a TPC-E workload if you will, to focus on this problem. We are running with PGSQL 9.2.4, ODBC 2.2.14 (as well as 2.3.3pre, which didn't help), RHEL 6.4, and a 6-way VM with 96GB of memory on a 4-socket Westmere server.\n\nWith our 9 transactions running with a mix of SQL_TXN_READ_COMMITTED and SQL_TXN_REPEATABLE_READ, we get less than 1% deadlocks, all of which occur because each row in one table, BROKER, may be read or written by multiple transactions at the same time. So, there are legitimate conflicts, which we deal with using an exponential backoff algorithm that sleeps for 10ms/30ms/90ms/etc.\n\nWhen we raise the Trade-Result transaction to SQL_TXN_SERIALIZABLE, we face a storm of conflicts. Out of 37,342 Trade-Result transactions, 15,707 hit an error, and have to be rolled back and retried one or more times. The total failure count (due to many transactions failing more than once) is 31,388.\n\nWhat is unusual is that the majority of the failures occur in a statement that should not have any isolation conflicts. 
About 17K of failures are from the statement below:\n2014-07-23 11:27:15 PDT 26085 ERROR: could not serialize access due to read/write dependencies among transactions\n2014-07-23 11:27:15 PDT 26085 DETAIL: Reason code: Canceled on identification as a pivot, during write.\n2014-07-23 11:27:15 PDT 26085 HINT: The transaction might succeed if retried.\n2014-07-23 11:27:15 PDT 26085 CONTEXT: SQL statement \"update TRADE\n set T_COMM = comm_amount,\n T_DTS = trade_dts,\n T_ST_ID = st_completed_id,\n T_TRADE_PRICE = trade_price\n where T_ID = trade_id\"\n PL/pgSQL function traderesultframe5(ident_t,value_t,character,timestamp without time zone,trade_t,s_price_t) line 15 at SQL statement\n\nThis doesn't make sense since at any given time, only one transaction might possibly be accessing the row that is being updated. There should be no conflicts if we have row-level locking/isolation\n\nThe second most common conflict happens 7.6K times in the statement below:\n2014-07-23 11:27:23 PDT 26039 ERROR: could not serialize access due to read/write dependencies among transactions\n2014-07-23 11:27:23 PDT 26039 DETAIL: Reason code: Canceled on identification as a pivot, during conflict in checking.\n2014-07-23 11:27:23 PDT 26039 HINT: The transaction might succeed if retried.\n2014-07-23 11:27:23 PDT 26039 CONTEXT: SQL statement \"insert\n into SETTLEMENT ( SE_T_ID,\n SE_CASH_TYPE,\n SE_CASH_DUE_DATE,\n SE_AMT)\n values ( trade_id,\n cash_type,\n due_date,\n se_amount\n )\"\n PL/pgSQL function traderesultframe6(ident_t,timestamp without time zone,character varying,value_t,timestamp without time zone,trade_t,smallint,s_qty_t,character) line 23 at SQL statement\n\nI don't understand why an insert would hit a serialization conflict\n\nWe also have 4.5K conflicts when we try to commit:\n2014-07-23 11:27:23 PDT 26037 ERROR: could not serialize access due to read/write dependencies among transactions\n2014-07-23 11:27:23 PDT 26037 DETAIL: Reason code: Canceled on identification as a 
pivot, during commit attempt.\n2014-07-23 11:27:23 PDT 26037 HINT: The transaction might succeed if retried.\n2014-07-23 11:27:23 PDT 26037 STATEMENT: COMMIT\n\n\nDoes PGSQL raise locks to page level when we run with SQL_TXN_SERIALIZABLE? Are there any knobs I can play with to alleviate this? FWIW, the same transactions on MS SQL Server see almost no conflicts.\n\nThanks,\nReza\n",
"msg_date": "Thu, 24 Jul 2014 01:18:05 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "High rate of transaction failure with the Serializable Isolation\n Level"
},
{
"msg_contents": "On 07/24/2014 09:18 AM, Reza Taheri wrote:\n> What is unusual is that the majority of the failures occur in a\n> statement that should not have any isolation conflicts. About 17K of\n> failures are from the statement below:\n\nIt's not just that statement that is relevant.\n\nAt SERIALIZABLE isolation the entire transaction's actions must be\nconsidered, as must the conflicting transaction.\n\n> This doesn�t make sense since at any given time, only one transaction\n> might possibly be accessing the row that is being updated. There should\n> be no conflicts if we have row-level locking/isolation.\n\nIs that statement run standalone, or as part of a larger transaction?\n\n> The second most common conflict happens 7.6K times in the statement below:\n...\n> I don�t understand why an insert would hit a serialization conflict\n\nIf the INSERTing transaction previously queried for a key that was\ncreated by a concurrent transaction this can occur as there is no\nserialization execution order of the transactions that could produce the\nsame result.\n\nThis doesn't produce exactly the same error, but demonstrates one such case:\n\n\nregress=> CREATE TABLE demo (id integer primary key, value integer);\nCREATE TABLE\nregress=> INSERT INTO demo(id, value) VALUES (1, 42);\nINSERT 0 1\n\nthen\n\nregress=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nBEGIN\nregress=> SELECT id FROM demo WHERE id = 2;\n id\n----\n(0 rows)\n\n\nsession1=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nBEGIN\nsession2=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nBEGIN\n\nsession1=> SELECT id FROM demo WHERE id = 2;\n id\n----\n(0 rows)\n\nsession2=> SELECT id FROM demo WHERE id = 3;\n id\n----\n(0 rows)\n\n\nsession1=> INSERT INTO demo VALUES (3, 43);\nINSERT 0 1\n\nsession2=> INSERT INTO demo VALUES (2, 43);\nINSERT 0 1\n\nsession2=> COMMIT;\nCOMMIT\n\nsession1=> COMMIT;\nERROR: could not serialize access due to read/write dependencies among\ntransactions\nDETAIL: 
Reason code: Canceled on identification as a pivot, during\ncommit attempt.\nHINT: The transaction might succeed if retried.\n\n> Does PGSQL raise locks to page level when we run with\n> SQL_TXN_SERIALIZABLE?\n\n From the documentation\n(http://www.postgresql.org/docs/current/static/transaction-iso.html):\n\n> Predicate locks in PostgreSQL, like in most other database systems, are based on data actually accessed by a transaction. These will show up in the pg_locks system view with a mode of SIReadLock. The particular locks acquired during execution of a query will depend on the plan used by the query, and multiple finer-grained locks (e.g., tuple locks) may be combined into fewer coarser-grained locks (e.g., page locks) during the course of the transaction to prevent exhaustion of the memory used to track the locks.\n\n... so yes, it may raise locks to page level. That doesn't mean that's\nnecessarily what's happening here.\n\n> Are there any knobs I can play with to alleviate\n> this? \n\nA lower FILLFACTOR can spread data out at the cost of wasted space.\n\n> FWIW, the same transactions on MS SQL Server see almost no conflicts.\n\nMany DBMSs don't detect all serialization anomalies. PostgreSQL doesn't\ndetect all possible anomalies but it detects many that other systems may\nnot.\n\nTo see what's going on and why MS SQL Server (version?) doesn't\ncomplain, it'd be best to boil each case down to a minimal reproducible\ntest case that can be analyzed in isolation.\n\nPostgreSQL's isolationtester tool, in src/test/isolation, can be handy\nfor automating this kind of conflict, and provides some useful examples\nof cases that are detected.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Jul 2014 12:57:31 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "On 07/24/2014 09:18 AM, Reza Taheri wrote:\n> Does PGSQL raise locks to page level when we run with\n> SQL_TXN_SERIALIZABLE? Are there any knobs I can play with to alleviate\n> this? FWIW, the same transactions on MS SQL Server see almost no conflicts.\n> \n\nAlso, in the documentation\n(http://www.postgresql.org/docs/current/static/transaction-iso.html):\n\n> When the system is forced to combine multiple page-level predicate locks into a single relation-level predicate lock because the predicate lock table is short of memory, an increase in the rate of serialization failures may occur. You can avoid this by increasing max_pred_locks_per_transaction.\n\n... so I suggest experimenting with higher\nmax_pred_locks_per_transaction values.\n\nhttp://www.postgresql.org/docs/9.1/static/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-TRANSACTION\n\n... though that should only really affect object level locks (tables,\netc) according to the docs. I'd need to dig further to determine how to\nreduce or eliminate lock combining of row-level to page-level and\npage-level to object-level locks.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Jul 2014 13:01:15 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "Reza Taheri <[email protected]> wrote:\n\n> I am running into very high failure rates when I run with the\n> Serializable Isolation Level. I have simplified our configuration\n> to a single database with a constant workload, a TPC-E workload\n> if you will, to focus on this this problem. We are running with\n> PGSQL 9.2.4\n\nI don't remember any bug fixes that would be directly related to\nwhat you describe in the last 15 months, but it might be better to\ndo any testing with fixes for known bugs:\n\nhttp://www.postgresql.org/support/versioning/\n\n> When we raise the Trade-Result transaction to\n> SQL_TXN_SERIALIZABLE, we face a storm of conflicts. Out of\n> 37,342 Trade-Result transactions, 15,707 hit an error, and have\n> to be rolled back and retired one or more times. The total\n> failure count (due to many transactions failing more than once)\n> is 31,388.\n>\n> What is unusual is that the majority of the failures occur in a\n> statement that should not have any isolation conflicts.\n\nAs already pointed out by Craig, statements don't have\nserialization failures; transactions do. 
In some cases a\ntransaction may become \"doomed to fail\" by the action of a\nconcurrent transaction, but the actual failure cannot occur until\nthe next statement is run on the connection with the doomed\ntransaction; it may have nothing to do with the statement itself.\n\nIf you want to understand the theory of how SERIALIZABLE\ntransactions are implemented in PostgreSQL, these links may help:\n\nhttp://vldb.org/pvldb/vol5/p1850_danrkports_vldb2012.pdf\n\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/storage/lmgr/README-SSI;hb=master\n\nhttp://wiki.postgresql.org/wiki/Serializable\n\nFor a more practical set of examples about the differences in\nusing REPEATABLE READ and SERIALIZABLE transaction isolation levels\nin PostgreSQL, see:\n\nhttp://wiki.postgresql.org/wiki/SSI\n\nIf you are just interested in reducing the number of serialization\nfailures, see the suggestions near the end of this section of the\ndocumentation:\n\nhttp://www.postgresql.org/docs/9.2/interactive/transaction-iso.html#XACT-SERIALIZABLE\n\nAny of these items (or perhaps a combination of them) may\nameliorate the problem. Note that I have seen reports of cases\nwhere max_pred_locks_per_transaction needed to be set to 20x the\ndefault to reduce serialization failures to an acceptable level.\nThe default is intentionally set very low because so many people do\nnot use this isolation level, and this setting reserves shared\nmemory for purposes of tracking serializable transactions; the\nspace is wasted for those who don't choose to use them.\n\nThere is still a lot of work possible to reduce the rate of false\npositives, which has largely gone undone so far due to a general\nlack of problem reports from people which could not be solved\nthrough tuning. 
If you have such a case, it would be interesting\nto have all relevant details, so that we can target which of the\nmany enhancements are relevant to your case.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Jul 2014 07:02:59 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable Isolation\n Level"
},
{
"msg_contents": "Hi Craig,\n> It's not just that statement that is relevant.\n> Is that statement run standalone, or as part of a larger transaction?\n\nYes, the \"size\" of the transaction seems to matter here. It is a complex transaction (attached). Each \"frame\" is one stored procedure, and the 6 frames are called one after the other with no pause. After frame6 returns, we call SQLTransact(..., ..., SQL_COMMIT). Below is the failure rate of the various frames:\n\n 112 tid 18883: SQL Failed: DoTradeResultFrame3\n 102 tid 18883: SQL Failed: DoTradeResultFrame4\n 18188 tid 18883: SQL Failed: DoTradeResultFrame5\n 8566 tid 18883: SQL Failed: DoTradeResultFrame6\n 4492 tid 18883: ERROR: TradeResultDB: commit failed\n\nSo, no failures in frames 1 and 2, and then the failure rate grows as we approach the end of the transaction.\n\n> If the INSERTing transaction previously queried for a key that was created by a concurrent transaction this can occur as there is no serialization\n> execution order of the transactions that could produce the same result.\n\nAs far as the inserts, your point is well-taken. But in this case, I have eliminated the transactions that query or otherwise manipulate the SETTELEMENT table. The only access to it is the single insert in this transaction\n\n> A lower FILLFACTOR can spread data out at the cost of wasted space.\n\nInteresting idea! Let me look into this. Even if this is not practical (our tables are 10s and 100s of GBs), if I can force a single row per page and the problem goes away, then we learn something\n\n> PostgreSQL's isolationtester tool, in src/test/isolation, can be handy for automating this kind of conflict,\n> and provides some useful examples of cases that are detected.\n\nDidn't know about this tool. 
Let me look into it!\n\nThanks again for the reply,\nReza\n\n> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Wednesday, July 23, 2014 9:58 PM\n> To: Reza Taheri; [email protected]\n> Subject: Re: [PERFORM] High rate of transaction failure with the Serializable\n> Isolation Level\n> \n> On 07/24/2014 09:18 AM, Reza Taheri wrote:\n> > What is unusual is that the majority of the failures occur in a\n> > statement that should not have any isolation conflicts. About 17K of\n> > failures are from the statement below:\n> \n> It's not just that statement that is relevant.\n> \n> At SERIALIZABLE isolation the entire transaction's actions must be\n> considered, as must the conflicting transaction.\n> \n> > This doesn't make sense since at any given time, only one transaction\n> > might possibly be accessing the row that is being updated. There\n> > should be no conflicts if we have row-level locking/isolation.\n> \n> Is that statement run standalone, or as part of a larger transaction?\n> \n> > The second most common conflict happens 7.6K times in the statement\n> below:\n> ...\n> > I don't understand why an insert would hit a serialization conflict\n> \n> If the INSERTing transaction previously queried for a key that was created by\n> a concurrent transaction this can occur as there is no serialization execution\n> order of the transactions that could produce the same result.\n> \n> This doesn't produce exactly the same error, but demonstrates one such\n> case:\n> \n> \n> regress=> CREATE TABLE demo (id integer primary key, value integer);\n> CREATE TABLE regress=> INSERT INTO demo(id, value) VALUES (1, 42);\n> INSERT 0 1\n> \n> then\n> \n> regress=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; BEGIN\n> regress=> SELECT id FROM demo WHERE id = 2; id\n> ----\n> (0 rows)\n> \n> \n> session1=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; BEGIN\n> session2=> BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE; BEGIN\n> \n> session1=> 
SELECT id FROM demo WHERE id = 2; id\n> ----\n> (0 rows)\n> \n> session2=> SELECT id FROM demo WHERE id = 3; id\n> ----\n> (0 rows)\n> \n> \n> session1=> INSERT INTO demo VALUES (3, 43); INSERT 0 1\n> \n> session2=> INSERT INTO demo VALUES (2, 43); INSERT 0 1\n> \n> session2=> COMMIT;\n> COMMIT\n> \n> session1=> COMMIT;\n> ERROR: could not serialize access due to read/write dependencies among\n> transactions\n> DETAIL: Reason code: Canceled on identification as a pivot, during commit\n> attempt.\n> HINT: The transaction might succeed if retried.\n> \n> > Does PGSQL raise locks to page level when we run with\n> > SQL_TXN_SERIALIZABLE?\n> \n> From the documentation\n> (https://urldefense.proofpoint.com/v1/url?u=http://www.postgresql.org/d\n> ocs/current/static/transaction-\n> iso.html%29:&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CPj\n> roD2HLPTHU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=TLPOH83mhBZDaYDaC\n> sh%2F8g2qVmFXtdg7HcUqXymxn40%3D%0A&s=32832df25ebb8166a18523bd\n> 9d6ec00f5ad545ea3bc1f8e95808ba65b4766130\n> \n> > Predicate locks in PostgreSQL, like in most other database systems, are\n> based on data actually accessed by a transaction. These will show up in the\n> pg_locks system view with a mode of SIReadLock. The particular locks\n> acquired during execution of a query will depend on the plan used by the\n> query, and multiple finer-grained locks (e.g., tuple locks) may be combined\n> into fewer coarser-grained locks (e.g., page locks) during the course of the\n> transaction to prevent exhaustion of the memory used to track the locks.\n> \n> ... so yes, it may raise locks to page level. That doesn't mean that's\n> necessarily what's happening here.\n> \n> > Are there any knobs I can play with to alleviate this?\n> \n> A lower FILLFACTOR can spread data out at the cost of wasted space.\n> \n> > FWIW, the same transactions on MS SQL Server see almost no conflicts.\n> \n> Many DBMSs don't detect all serialization anomalies. 
PostgreSQL doesn't\n> detect all possible anomalies but it detects many that other systems may not.\n> \n> To see what's going on and why MS SQL Server (version?) doesn't complain,\n> it'd be best to boil each case down to a minimal reproducible test case that\n> can be analyzed in isolation.\n> \n> PostgreSQL's isolationtester tool, in src/test/isolation, can be handy for\n> automating this kind of conflict, and provides some useful examples of cases\n> that are detected.\n> \n> --\n> Craig Ringer\n> https://urldefense.proofpoint.com/v1/url?u=http://www.2ndquadrant.com\n> /&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CPjroD2HLPTH\n> U27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=TLPOH83mhBZDaYDaCsh%2F8g2q\n> VmFXtdg7HcUqXymxn40%3D%0A&s=3b1587fc43a994ddcf59e658e2521e9a9c\n> 847393fb4ab8dc48df009b547cca55\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 24 Jul 2014 19:50:19 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the\n Serializable Isolation Level"
},
{
"msg_contents": "Hi Kevin,\nThanks for the reply\n\n> As already pointed out by Craig, statements don't have serialization failures; transactions do. In some cases a transaction may become\n> \"doomed to fail\" by the action of a concurrent transaction, but the actual failure cannot occur until the next statement is run on the\n> connection with the doomed transaction; it may have nothing to do with the statement itself.\n\nThat's an interesting concept. I suppose I could test it by moving statements around to see what happens.\n\n\n> Note that I have seen reports of cases where max_pred_locks_per_transaction needed to be set to 20x the default to\n> reduce serialization failures to an acceptable level.\n\n\nI was running with the following two parameters set to 640; I then raised them to 6400, and saw no difference\n\nmax_locks_per_transaction = 6400\nmax_pred_locks_per_transaction = 6400\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Kevin Grittner [mailto:[email protected]]\n> Sent: Thursday, July 24, 2014 7:03 AM\n> To: Reza Taheri; [email protected]\n> Subject: Re: [PERFORM] High rate of transaction failure with the Serializable\n> Isolation Level\n> \n> Reza Taheri <[email protected]> wrote:\n> \n> > I am running into very high failure rates when I run with the\n> > Serializable Isolation Level. I have simplified our configuration to a\n> > single database with a constant workload, a TPC-E workload if you\n> > will, to focus on this this problem. 
We are running with PGSQL 9.2.4\n> \n> I don't remember any bug fixes that would be directly related to what you\n> describe in the last 15 months, but it might be better to do any testing with\n> fixes for known bugs:\n> \n> https://urldefense.proofpoint.com/v1/url?u=http://www.postgresql.org/su\n> pport/versioning/&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKm\n> A0CPjroD2HLPTHU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C2sdUv2dsUC\n> 9oH2yzDssdTbCEBF5mbQZbZ871laGw%3D%0A&s=6522bd258d0a034429522b\n> 61239134b07f1cabc086e8c2cb330aa9c9bc4a337d\n> \n> > When we raise the Trade-Result transaction to SQL_TXN_SERIALIZABLE, we\n> > face a storm of conflicts. Out of\n> > 37,342 Trade-Result transactions, 15,707 hit an error, and have to be\n> > rolled back and retired one or more times. The total failure count\n> > (due to many transactions failing more than once) is 31,388.\n> >\n> > What is unusual is that the majority of the failures occur in a\n> > statement that should not have any isolation conflicts.\n> \n> As already pointed out by Craig, statements don't have serialization failures;\n> transactions do. 
In some cases a transaction may become \"doomed to fail\"\n> by the action of a concurrent transaction, but the actual failure cannot occur\n> until the next statement is run on the connection with the doomed\n> transaction; it may have nothing to do with the statement itself.\n> \n> If you want to understand the theory of how SERIALIZABLE transactions are\n> implemented in PostgreSQL, these links may help:\n> \n> https://urldefense.proofpoint.com/v1/url?u=http://vldb.org/pvldb/vol5/p1\n> 850_danrkports_vldb2012.pdf&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0\n> A&r=b9TKmA0CPjroD2HLPTHU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C\n> 2sdUv2dsUC9oH2yzDssdTbCEBF5mbQZbZ871laGw%3D%0A&s=d1b8cd62c431\n> c267124c21d4e639c98eebb650caaf8fd05ba47aa825a9b54a52\n> \n> https://urldefense.proofpoint.com/v1/url?u=http://git.postgresql.org/gitwe\n> b/?p%3Dpostgresql.git%3Ba%3Dblob_plain%3Bf%3Dsrc/backend/storage/lm\n> gr/README-\n> SSI%3Bhb%3Dmaster&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9T\n> KmA0CPjroD2HLPTHU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C2sdUv2d\n> sUC9oH2yzDssdTbCEBF5mbQZbZ871laGw%3D%0A&s=1f60010253b8012dbe5\n> e5a51af48fcb831dae81200708f620438e6afb48c0eef\n> \n> https://urldefense.proofpoint.com/v1/url?u=http://wiki.postgresql.org/wiki\n> /Serializable&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CPj\n> roD2HLPTHU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C2sdUv2dsUC9oH2\n> yzDssdTbCEBF5mbQZbZ871laGw%3D%0A&s=040078780771088975f2abe3668\n> 5b182ca626557ed2cd1c7241c78b9f417d325\n> \n> For a more practical set of examples about the differences in using\n> REPEATABLE READ and SERIALIZABLE transaction isolation levels in\n> PostgreSQL, see:\n> \n> https://urldefense.proofpoint.com/v1/url?u=http://wiki.postgresql.org/wiki\n> /SSI&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CPjroD2HLP\n> THU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C2sdUv2dsUC9oH2yzDssdTb\n> CEBF5mbQZbZ871laGw%3D%0A&s=3c2629d0256b802ed7b701be6bff7443480\n> 5f94beb1c400c772ace91c7204bc5\n> \n> If you are just interested in reducing the number of 
serialization failures, see\n> the suggestions near the end of this section of the\n> documentation:\n> \n> https://urldefense.proofpoint.com/v1/url?u=http://www.postgresql.org/do\n> cs/9.2/interactive/transaction-iso.html%23XACT-\n> SERIALIZABLE&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CP\n> jroD2HLPTHU27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C2sdUv2dsUC9oH2\n> yzDssdTbCEBF5mbQZbZ871laGw%3D%0A&s=ae5349ae3cafcd86c6ba6be9404\n> 990ae800d93d6ccfe892402c2d8d463bd8574\n> \n> Any of these items (or perhaps a combination of them) may ameliorate the\n> problem. Note that I have seen reports of cases where\n> max_pred_locks_per_transaction needed to be set to 20x the default to\n> reduce serialization failures to an acceptable level.\n> The default is intentionally set very low because so many people do not use\n> this isolation level, and this setting reserves shared memory for purposes of\n> tracking serializable transactions; the space is wasted for those who don't\n> choose to use them.\n> \n> There is still a lot of work possible to reduce the rate of false positives, which\n> has largely gone undone so far due to a general lack of problem reports from\n> people which could not be solved through tuning. If you have such a case, it\n> would be interesting to have all relevant details, so that we can target which\n> of the many enhancements are relevant to your case.\n> \n> --\n> Kevin Grittner\n> EDB:\n> https://urldefense.proofpoint.com/v1/url?u=http://www.enterprisedb.com\n> /&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CPjroD2HLPTH\n> U27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=H8C2sdUv2dsUC9oH2yzDssdTbCE\n> BF5mbQZbZ871laGw%3D%0A&s=ca419a3d34bca730a6a153fe027150e4975396\n> 4be76b78a2b81dc84378d091e6\n> The Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 24 Jul 2014 19:50:41 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the\n Serializable Isolation Level"
},
{
"msg_contents": "On 07/25/2014 03:50 AM, Reza Taheri wrote:\n> Hi Craig,\n>> It's not just that statement that is relevant.\n>> Is that statement run standalone, or as part of a larger transaction?\n> \n> Yes, the \"size\" of the transaction seems to matter here. It is a complex transaction (attached). Each \"frame\" is one stored procedure, and the 6 frames are called one after the other with no pause. After frame6 returns, we call SQLTransact(..., ..., SQL_COMMIT). Below is the failure rate of the various frames:\n> \n> 112 tid 18883: SQL Failed: DoTradeResultFrame3\n> 102 tid 18883: SQL Failed: DoTradeResultFrame4\n> 18188 tid 18883: SQL Failed: DoTradeResultFrame5\n> 8566 tid 18883: SQL Failed: DoTradeResultFrame6\n> 4492 tid 18883: ERROR: TradeResultDB: commit failed\n> \n> So, no failures in frames 1 and 2, and then the failure rate grows as we approach the end of the transaction.\n\nAccording to the attached SQL, each frame is a separate phase in the\noperation and performs many different operations.\n\nThere's a *lot* going on here, so identifying possible interdependencies\nisn't something I can do in a ten minute skim read over my morning coffee.\n\nI think the most useful thing to do here is to start cutting and\nsimplifying the case, trying to boil it down to the smallest thing that\nstill causes the problem.\n\nThat'll likely either find a previously unidentified interdependency\nbetween transactions or, if you're unlucky, a Pg bug. Given the\ncomplexity of the operations there I'd be very surprised if it wasn't\nthe former.\n\n>> If the INSERTing transaction previously queried for a key that was created by a concurrent transaction this can occur as there is no serialization\n>> execution order of the transactions that could produce the same result.\n> \n> As far as the inserts, your point is well-taken. But in this case, I have eliminated the transactions that query or otherwise manipulate the SETTELEMENT table. 
The only access to it is the single insert in this transaction\n\nIf there are foreign keys to it from other tables, they count too.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 09:30:03 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "Hi Craig,\n\n> According to the attached SQL, each frame is a separate phase in the operation and performs many different operations.\n> There's a *lot* going on here, so identifying possible interdependencies isn't something I can do in a ten minute skim\n> read over my morning coffee.\n\nYou didn't think I was going to bug you all with a trivial problem, did you? :-) :-)\n\nYes, I am going to have to take an axe to the code and see what pops out. Just to put this in perspective, the transaction flow and its statements are borrowed verbatim from the TPC-E benchmark. There have been dozens of TPC-E disclosures with MS SQL Server, and there are Oracle and DB2 kits that, although not used in public disclosures for various non-technical reasons, are used internally in by the DB and server companies. These 3 products, and perhaps more, were used extensively in the prototyping phase of TPC-E.\n\nSo, my hope is that if there is a \"previously unidentified interdependency between transactions\" as you point out, it will be due to a mistake we made in coding this for PGSQL. Otherwise, we will have a hard time convincing all the council member companies that we need to change the schema or the business logic to make the kit work with PGSQL.\n\nJust pointing out my uphill battle!!\n\n> If there are foreign keys to it from other tables, they count too.\n\nYes, we have a lot of foreign keys. I dropped them all a few weeks ago with no impact. 
But when I start the axing process, they will be one of the first to go\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Thursday, July 24, 2014 6:30 PM\n> To: Reza Taheri; [email protected]\n> Subject: Re: [PERFORM] High rate of transaction failure with the Serializable\n> Isolation Level\n> \n> On 07/25/2014 03:50 AM, Reza Taheri wrote:\n> > Hi Craig,\n> >> It's not just that statement that is relevant.\n> >> Is that statement run standalone, or as part of a larger transaction?\n> >\n> > Yes, the \"size\" of the transaction seems to matter here. It is a complex\n> transaction (attached). Each \"frame\" is one stored procedure, and the 6\n> frames are called one after the other with no pause. After frame6 returns,\n> we call SQLTransact(..., ..., SQL_COMMIT). Below is the failure rate of the\n> various frames:\n> >\n> > 112 tid 18883: SQL Failed: DoTradeResultFrame3\n> > 102 tid 18883: SQL Failed: DoTradeResultFrame4\n> > 18188 tid 18883: SQL Failed: DoTradeResultFrame5\n> > 8566 tid 18883: SQL Failed: DoTradeResultFrame6\n> > 4492 tid 18883: ERROR: TradeResultDB: commit failed\n> >\n> > So, no failures in frames 1 and 2, and then the failure rate grows as we\n> approach the end of the transaction.\n> \n> According to the attached SQL, each frame is a separate phase in the\n> operation and performs many different operations.\n> \n> There's a *lot* going on here, so identifying possible interdependencies isn't\n> something I can do in a ten minute skim read over my morning coffee.\n> \n> I think the most useful thing to do here is to start cutting and simplifying the\n> case, trying to boil it down to the smallest thing that still causes the problem.\n> \n> That'll likely either find a previously unidentified interdependency between\n> transactions or, if you're unlucky, a Pg bug. 
Given the complexity of the\n> operations there I'd be very surprised if it wasn't the former.\n> \n> >> If the INSERTing transaction previously queried for a key that was\n> >> created by a concurrent transaction this can occur as there is no\n> serialization execution order of the transactions that could produce the same\n> result.\n> >\n> > As far as the inserts, your point is well-taken. But in this case, I\n> > have eliminated the transactions that query or otherwise manipulate\n> > the SETTELEMENT table. The only access to it is the single insert in\n> > this transaction\n> \n> If there are foreign keys to it from other tables, they count too.\n> \n> --\n> Craig Ringer\n> https://urldefense.proofpoint.com/v1/url?u=http://www.2ndquadrant.com\n> /&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=b9TKmA0CPjroD2HLPTH\n> U27nI9PJr8wgKO2rU9QZyZZU%3D%0A&m=SLSpdQUFSC%2BXlQIgotLSghfyEB\n> qC7q8Sh1AEizZ3pBw%3D%0A&s=ceb740d5d6686cda7ed9dd31b4dce2de0eda\n> 3cf3a46ffead645c5bb6d9e7ec5c\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 18:58:06 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the\n Serializable Isolation Level"
},
{
"msg_contents": "On 25/07/2014 2:58 PM, Reza Taheri wrote:\n> Hi Craig,\n>\n>> According to the attached SQL, each frame is a separate phase in the operation and performs many different operations.\n>> There's a *lot* going on here, so identifying possible interdependencies isn't something I can do in a ten minute skim\n>> read over my morning coffee.\n> You didn't think I was going to bug you all with a trivial problem, did you? :-) :-)\n>\n> Yes, I am going to have to take an axe to the code and see what pops out. Just to put this in perspective, the transaction flow and its statements are borrowed verbatim from the TPC-E benchmark. There have been dozens of TPC-E disclosures with MS SQL Server, and there are Oracle and DB2 kits that, although not used in public disclosures for various non-technical reasons, are used internally in by the DB and server companies. These 3 products, and perhaps more, were used extensively in the prototyping phase of TPC-E.\n>\n> So, my hope is that if there is a \"previously unidentified interdependency between transactions\" as you point out, it will be due to a mistake we made in coding this for PGSQL. Otherwise, we will have a hard time convincing all the council member companies that we need to change the schema or the business logic to make the kit work with PGSQL.\n>\n> Just pointing out my uphill battle!!\nYou might compare against dbt-5 [1], just to see if the same problem \noccurs. I didn't notice such high abort rates when I ran that workload a \nfew weeks ago. Just make sure to use the latest commit, because the \n\"released\" version has fatal bugs.\n\n[1] https://github.com/petergeoghegan/dbt5\n\nRyan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 17:35:31 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "On 07/23/2014 06:18 PM, Reza Taheri wrote:\n> [By way of introduction, we are a TPC subcommittee that is developing a\n> benchmark with cloud-like characteristics for virtualized databases. The\n> end-to-end benchmarking kit will be publicly available, and will run on\n> PGSQL]\n\nAwesome! Any idea when it will be available? Our community could\nreally use some updated benchmark tooling ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 14:47:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "We are hoping the spec will get wrapped up in the next 6 months, but industry standard councils move very slowly! However, if there is interest in getting involved and helping, the TPC might be receptive to earlier access.\r\n\r\nBTW, just to let folks know how large of an undertaking this has been, this is an estimate of the lines of code developed so far:\r\n\r\n§ 700 lines of run-time shell scripts\r\n§ 700 lines of build-time shell scripts\r\n§ 3.5K lines of DDL and DML\r\n§ 4K lines of C code to test the DML\r\n§ 22K lines of C, C++, and Java code in the benchmark driver\r\n§ 45K lines of C++ code in VGen\r\n\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-\r\n> [email protected]] On Behalf Of Josh Berkus\r\n> Sent: Friday, July 25, 2014 2:47 PM\r\n> To: [email protected]\r\n> Subject: Re: High rate of transaction failure with the Serializable Isolation\r\n> Level\r\n> \r\n> On 07/23/2014 06:18 PM, Reza Taheri wrote:\r\n> > [By way of introduction, we are a TPC subcommittee that is developing\r\n> > a benchmark with cloud-like characteristics for virtualized databases.\r\n> > The end-to-end benchmarking kit will be publicly available, and will\r\n> > run on PGSQL]\r\n> \r\n> Awesome! Any idea when it will be available? 
Our community could really\r\n> use some updated benchmark tooling ...\r\n> \r\n> --\r\n> Josh Berkus\r\n> PostgreSQL Experts Inc.\r\n> http://pgexperts.com\r\n> \r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 23:24:02 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "Hi Ryan,\nThat's a very good point. We are looking at dbt5. One question: what throughput rate, and how many threads of execution did you use for dbt5? The failure rates I reported were at ~120 tps with 15 trade-result threads.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Ryan Johnson\n> Sent: Friday, July 25, 2014 2:36 PM\n> To: [email protected]\n> Subject: Re: High rate of transaction failure with the Serializable Isolation\n> Level\n> \n> On 25/07/2014 2:58 PM, Reza Taheri wrote:\n> > Hi Craig,\n> >\n> >> According to the attached SQL, each frame is a separate phase in the\n> operation and performs many different operations.\n> >> There's a *lot* going on here, so identifying possible\n> >> interdependencies isn't something I can do in a ten minute skim read over\n> my morning coffee.\n> > You didn't think I was going to bug you all with a trivial problem,\n> > did you? :-) :-)\n> >\n> > Yes, I am going to have to take an axe to the code and see what pops out.\n> Just to put this in perspective, the transaction flow and its statements are\n> borrowed verbatim from the TPC-E benchmark. There have been dozens of\n> TPC-E disclosures with MS SQL Server, and there are Oracle and DB2 kits that,\n> although not used in public disclosures for various non-technical reasons, are\n> used internally in by the DB and server companies. These 3 products, and\n> perhaps more, were used extensively in the prototyping phase of TPC-E.\n> >\n> > So, my hope is that if there is a \"previously unidentified interdependency\n> between transactions\" as you point out, it will be due to a mistake we made\n> in coding this for PGSQL. 
Otherwise, we will have a hard time convincing all\n> the council member companies that we need to change the schema or the\n> business logic to make the kit work with PGSQL.\n> >\n> > Just pointing out my uphill battle!!\n> You might compare against dbt-5 [1], just to see if the same problem occurs. I\n> didn't notice such high abort rates when I ran that workload a few weeks\n> ago. Just make sure to use the latest commit, because the \"released\" version\n> has fatal bugs.\n> \n> [1]\n> https://github.com/petergeoghegan/dbt5\n> \n> Ryan\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 26 Jul 2014 19:55:11 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "Dredging through some old run logs, 12 dbt-5 clients gave the following \nwhen everything was run under SSI (fully serializable, even the \ntransactions that allow repeatable read isolation). Not sure how that \ntranslates to your results. Abort rates were admittedly rather high, \nthough perhaps lower than what you report.\n\nTransaction % Average: 90th % Total Rollbacks % Warning Invalid\n----------------- ------- --------------- ------- -------------- ------- -------\nTrade Result 5.568 0.022: 0.056 2118 417 19.69% 0 91\nBroker Volume 5.097 0.009: 0.014 1557 0 0.00% 0 0\nCustomer Position 13.530 0.016: 0.034 4134 1 0.02% 0 0\nMarket Feed 0.547 0.033: 0.065 212 45 21.23% 0 69\nMarket Watch 18.604 0.031: 0.061 5683 0 0.00% 0 0\nSecurity Detail 14.462 0.015: 0.020 4418 0 0.00% 0 0\nTrade Lookup 8.325 0.059: 0.146 2543 0 0.00% 432 0\nTrade Order 9.110 0.006: 0.008 3227 444 13.76% 0 0\nTrade Status 19.795 0.030: 0.046 6047 0 0.00% 0 0\nTrade Update 1.990 0.064: 0.145 608 0 0.00% 432 0\nData Maintenance N/A 0.012: 0.012 1 0 0.00% 0 0\n----------------- ------- --------------- ------- -------------- ------- -------\n28.35 trade-result transactions per second (trtps)\n\nRegards,\nRyan\n\nOn 26/07/2014 3:55 PM, Reza Taheri wrote:\n> Hi Ryan,\n> That's a very good point. We are looking at dbt5. One question: what throughput rate, and how many threads of execution did you use for dbt5? 
The failure rates I reported were at ~120 tps with 15 trade-result threads.\n>\n> Thanks,\n> Reza\n>\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-\n>> [email protected]] On Behalf Of Ryan Johnson\n>> Sent: Friday, July 25, 2014 2:36 PM\n>> To: [email protected]\n>> Subject: Re: High rate of transaction failure with the Serializable Isolation\n>> Level\n>>\n>> On 25/07/2014 2:58 PM, Reza Taheri wrote:\n>>> Hi Craig,\n>>>\n>>>> According to the attached SQL, each frame is a separate phase in the\n>> operation and performs many different operations.\n>>>> There's a *lot* going on here, so identifying possible\n>>>> interdependencies isn't something I can do in a ten minute skim read over\n>> my morning coffee.\n>>> You didn't think I was going to bug you all with a trivial problem,\n>>> did you? :-) :-)\n>>>\n>>> Yes, I am going to have to take an axe to the code and see what pops out.\n>> Just to put this in perspective, the transaction flow and its statements are\n>> borrowed verbatim from the TPC-E benchmark. There have been dozens of\n>> TPC-E disclosures with MS SQL Server, and there are Oracle and DB2 kits that,\n>> although not used in public disclosures for various non-technical reasons, are\n>> used internally in by the DB and server companies. These 3 products, and\n>> perhaps more, were used extensively in the prototyping phase of TPC-E.\n>>> So, my hope is that if there is a \"previously unidentified interdependency\n>> between transactions\" as you point out, it will be due to a mistake we made\n>> in coding this for PGSQL. Otherwise, we will have a hard time convincing all\n>> the council member companies that we need to change the schema or the\n>> business logic to make the kit work with PGSQL.\n>>> Just pointing out my uphill battle!!\n>> You might compare against dbt-5 [1], just to see if the same problem occurs. I\n>> didn't notice such high abort rates when I ran that workload a few weeks\n>> ago. 
Just make sure to use the latest commit, because the \"released\" version\n>> has fatal bugs.\n>>\n>> [1]\n>> https://github.com/petergeoghegan/dbt5\n>>\n>> Ryan\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 26 Jul 2014 17:05:53 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable Isolation\n Level"
},
{
"msg_contents": "Hi Ryan,\nThanks a lot for sharing this. When I run with 12 CE threads and 3-5 MEE threads (how many MEE threads do you have?) @ 80-90 tps, I get something in the 20-30% of trade-result transactions rolled back depending on how I count. E.g., in a 5.5-minute run with 3 MEE threads, I saw 87.5 tps. There were 29200 successful trade-result transactions. Of these, 5800 were rolled back, some more than once for a total of 8450 rollbacks. So I'd say your results and ours tell similar stories!\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-\n> [email protected]] On Behalf Of Ryan Johnson\n> Sent: Saturday, July 26, 2014 2:06 PM\n> To: Reza Taheri\n> Cc: [email protected]\n> Subject: Re: High rate of transaction failure with the Serializable Isolation\n> Level\n> \n> Dredging through some old run logs, 12 dbt-5 clients gave the following when\n> everything was run under SSI (fully serializable, even the transactions that\n> allow repeatable read isolation). 
Not sure how that translates to your results.\n> Abort rates were admittedly rather high, though perhaps lower than what\n> you report.\n> \n> Transaction % Average: 90th % Total Rollbacks % Warning Invalid\n> ----------------- ------- --------------- ------- -------------- ------- -------\n> Trade Result 5.568 0.022: 0.056 2118 417 19.69% 0 91\n> Broker Volume 5.097 0.009: 0.014 1557 0 0.00% 0 0\n> Customer Position 13.530 0.016: 0.034 4134 1 0.02% 0 0\n> Market Feed 0.547 0.033: 0.065 212 45 21.23% 0 69\n> Market Watch 18.604 0.031: 0.061 5683 0 0.00% 0 0\n> Security Detail 14.462 0.015: 0.020 4418 0 0.00% 0 0\n> Trade Lookup 8.325 0.059: 0.146 2543 0 0.00% 432 0\n> Trade Order 9.110 0.006: 0.008 3227 444 13.76% 0 0\n> Trade Status 19.795 0.030: 0.046 6047 0 0.00% 0 0\n> Trade Update 1.990 0.064: 0.145 608 0 0.00% 432 0\n> Data Maintenance N/A 0.012: 0.012 1 0 0.00% 0 0\n> ----------------- ------- --------------- ------- -------------- ------- -------\n> 28.35 trade-result transactions per second (trtps)\n> \n> Regards,\n> Ryan\n> \n> On 26/07/2014 3:55 PM, Reza Taheri wrote:\n> > Hi Ryan,\n> > That's a very good point. We are looking at dbt5. 
One question: what\n> throughput rate, and how many threads of execution did you use for dbt5?\n> The failure rates I reported were at ~120 tps with 15 trade-result threads.\n> >\n> > Thanks,\n> > Reza\n> >\n> >> -----Original Message-----\n> >> From: [email protected] [mailto:pgsql-\n> >> [email protected]] On Behalf Of Ryan Johnson\n> >> Sent: Friday, July 25, 2014 2:36 PM\n> >> To: [email protected]\n> >> Subject: Re: High rate of transaction failure with the Serializable\n> >> Isolation Level\n> >>\n> >> On 25/07/2014 2:58 PM, Reza Taheri wrote:\n> >>> Hi Craig,\n> >>>\n> >>>> According to the attached SQL, each frame is a separate phase in\n> >>>> the\n> >> operation and performs many different operations.\n> >>>> There's a *lot* going on here, so identifying possible\n> >>>> interdependencies isn't something I can do in a ten minute skim\n> >>>> read over\n> >> my morning coffee.\n> >>> You didn't think I was going to bug you all with a trivial problem,\n> >>> did you? :-) :-)\n> >>>\n> >>> Yes, I am going to have to take an axe to the code and see what pops\n> out.\n> >> Just to put this in perspective, the transaction flow and its\n> >> statements are borrowed verbatim from the TPC-E benchmark. There\n> have\n> >> been dozens of TPC-E disclosures with MS SQL Server, and there are\n> >> Oracle and DB2 kits that, although not used in public disclosures for\n> >> various non-technical reasons, are used internally in by the DB and\n> >> server companies. These 3 products, and perhaps more, were used\n> extensively in the prototyping phase of TPC-E.\n> >>> So, my hope is that if there is a \"previously unidentified\n> >>> interdependency\n> >> between transactions\" as you point out, it will be due to a mistake\n> >> we made in coding this for PGSQL. 
Otherwise, we will have a hard time\n> >> convincing all the council member companies that we need to change\n> >> the schema or the business logic to make the kit work with PGSQL.\n> >>> Just pointing out my uphill battle!!\n> >> You might compare against dbt-5 [1], just to see if the same problem\n> >> occurs. I didn't notice such high abort rates when I ran that\n> >> workload a few weeks ago. Just make sure to use the latest commit,\n> >> because the \"released\" version has fatal bugs.\n> >>\n> >> [1]\n> >> https://github.com/petergeoghegan/dbt5\n> >>\n> >> Ryan\n> >>\n> >>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list\n> >> ([email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 26 Jul 2014 23:33:10 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "That does sound pretty similar, modulo the raw performance difference. I \nhave no idea how many MEE threads there were; it was just a quick run \nwith exactly zero tuning, so I use whatever dbt5 does out of the box. \nActually, though, if you have any general tuning tips for TPC-E I'd be \ninterested to learn them (PM if that's off topic for this discussion).\n\nRegards,\nRyan\n\nOn 26/07/2014 7:33 PM, Reza Taheri wrote:\n> Hi Ryan,\n> Thanks a lot for sharing this. When I run with 12 CE threads and 3-5 MEE threads (how many MEE threads do you have?) @ 80-90 tps, I get something in the 20-30% of trade-result transactions rolled back depending on how I count. E.g., in a 5.5-minute run with 3 MEE threads, I saw 87.5 tps. There were 29200 successful trade-result transactions. Of these, 5800 were rolled back, some more than once for a total of 8450 rollbacks. So I'd say your results and ours tell similar stories!\n>\n> Thanks,\n> Reza\n>\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-\n>> [email protected]] On Behalf Of Ryan Johnson\n>> Sent: Saturday, July 26, 2014 2:06 PM\n>> To: Reza Taheri\n>> Cc: [email protected]\n>> Subject: Re: High rate of transaction failure with the Serializable Isolation\n>> Level\n>>\n>> Dredging through some old run logs, 12 dbt-5 clients gave the following when\n>> everything was run under SSI (fully serializable, even the transactions that\n>> allow repeatable read isolation). 
Not sure how that translates to your results.\n>> Abort rates were admittedly rather high, though perhaps lower than what\n>> you report.\n>>\n>> Transaction % Average: 90th % Total Rollbacks % Warning Invalid\n>> ----------------- ------- --------------- ------- -------------- ------- -------\n>> Trade Result 5.568 0.022: 0.056 2118 417 19.69% 0 91\n>> Broker Volume 5.097 0.009: 0.014 1557 0 0.00% 0 0\n>> Customer Position 13.530 0.016: 0.034 4134 1 0.02% 0 0\n>> Market Feed 0.547 0.033: 0.065 212 45 21.23% 0 69\n>> Market Watch 18.604 0.031: 0.061 5683 0 0.00% 0 0\n>> Security Detail 14.462 0.015: 0.020 4418 0 0.00% 0 0\n>> Trade Lookup 8.325 0.059: 0.146 2543 0 0.00% 432 0\n>> Trade Order 9.110 0.006: 0.008 3227 444 13.76% 0 0\n>> Trade Status 19.795 0.030: 0.046 6047 0 0.00% 0 0\n>> Trade Update 1.990 0.064: 0.145 608 0 0.00% 432 0\n>> Data Maintenance N/A 0.012: 0.012 1 0 0.00% 0 0\n>> ----------------- ------- --------------- ------- -------------- ------- -------\n>> 28.35 trade-result transactions per second (trtps)\n>>\n>> Regards,\n>> Ryan\n>>\n>> On 26/07/2014 3:55 PM, Reza Taheri wrote:\n>>> Hi Ryan,\n>>> That's a very good point. We are looking at dbt5. 
One question: what\n>> throughput rate, and how many threads of execution did you use for dbt5?\n>> The failure rates I reported were at ~120 tps with 15 trade-result threads.\n>>> Thanks,\n>>> Reza\n>>>\n>>>> -----Original Message-----\n>>>> From: [email protected] [mailto:pgsql-\n>>>> [email protected]] On Behalf Of Ryan Johnson\n>>>> Sent: Friday, July 25, 2014 2:36 PM\n>>>> To: [email protected]\n>>>> Subject: Re: High rate of transaction failure with the Serializable\n>>>> Isolation Level\n>>>>\n>>>> On 25/07/2014 2:58 PM, Reza Taheri wrote:\n>>>>> Hi Craig,\n>>>>>\n>>>>>> According to the attached SQL, each frame is a separate phase in\n>>>>>> the\n>>>> operation and performs many different operations.\n>>>>>> There's a *lot* going on here, so identifying possible\n>>>>>> interdependencies isn't something I can do in a ten minute skim\n>>>>>> read over\n>>>> my morning coffee.\n>>>>> You didn't think I was going to bug you all with a trivial problem,\n>>>>> did you? :-) :-)\n>>>>>\n>>>>> Yes, I am going to have to take an axe to the code and see what pops\n>> out.\n>>>> Just to put this in perspective, the transaction flow and its\n>>>> statements are borrowed verbatim from the TPC-E benchmark. There\n>> have\n>>>> been dozens of TPC-E disclosures with MS SQL Server, and there are\n>>>> Oracle and DB2 kits that, although not used in public disclosures for\n>>>> various non-technical reasons, are used internally in by the DB and\n>>>> server companies. These 3 products, and perhaps more, were used\n>> extensively in the prototyping phase of TPC-E.\n>>>>> So, my hope is that if there is a \"previously unidentified\n>>>>> interdependency\n>>>> between transactions\" as you point out, it will be due to a mistake\n>>>> we made in coding this for PGSQL. 
Otherwise, we will have a hard time\n>>>> convincing all the council member companies that we need to change\n>>>> the schema or the business logic to make the kit work with PGSQL.\n>>>>> Just pointing out my uphill battle!!\n>>>> You might compare against dbt-5 [1], just to see if the same problem\n>>>> occurs. I didn't notice such high abort rates when I ran that\n>>>> workload a few weeks ago. Just make sure to use the latest commit,\n>>>> because the \"released\" version has fatal bugs.\n>>>>\n>>>> [1]\n>>>> https://github.com/petergeoghegan/dbt5\n>>>>\n>>>> Ryan\n>>>>\n>>>>\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list\n>>>> ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 26 Jul 2014 19:51:03 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable Isolation\n Level"
},
{
"msg_contents": "On 07/26/2014 02:58 AM, Reza Taheri wrote:\n> Hi Craig,\n> \n>> According to the attached SQL, each frame is a separate phase in the operation and performs many different operations.\n>> There's a *lot* going on here, so identifying possible interdependencies isn't something I can do in a ten minute skim\n>> read over my morning coffee.\n> \n> You didn't think I was going to bug you all with a trivial problem, did you? :-) :-)\n\nOne can hope, but usually in vain...\n\n> Yes, I am going to have to take an axe to the code and see what pops out. Just to put this in perspective, the transaction flow and its statements are borrowed verbatim from the TPC-E benchmark. There have been dozens of TPC-E disclosures with MS SQL Server, and there are Oracle and DB2 kits that, although not used in public disclosures for various non-technical reasons, are used internally in by the DB and server companies. These 3 products, and perhaps more, were used extensively in the prototyping phase of TPC-E.\n> \n> So, my hope is that if there is a \"previously unidentified interdependency between transactions\" as you point out, it will be due to a mistake we made in coding this for PGSQL. Otherwise, we will have a hard time convincing all the council member companies that we need to change the schema or the business logic to make the kit work with PGSQL.\n\nHopefully so.\n\nPersonally I think it's moderately likely that PostgreSQL's much\nstricter enforcement of serializable isolation is detecting anomalies\nthat other products do not, so it's potentially preventing errors.\n\nIt would be nice to have the ability to tune this; sometimes there are\nanomalies you wish to ignore or permit. At present it is an all or\nnothing affair - no predicate locking (REPEATABLE READ isolation) or\nstrict predicate locking (SERIALIZABLE isolation).\n\nI recommend running some of the examples in the SERIALIZABLE\ndocumentation on other products. 
If they don't fail where they do in Pg,\nthen the other products either have less strict (and arguably therefor\nless correct) serialisable isolation enforcement or they rely on\nblocking predicate locks. In the latter case it should be easy to tell\nbecause statements will block on locks where no ordinary row or table\nlevel lock could be taken.\n\nIf you do run comparative tests I would be quite interested in seeing\nthe results.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Jul 2014 11:58:22 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable\n Isolation Level"
},
{
"msg_contents": "Hi Ryan,\nI just noticed that the mail alias manager has stalled the post below because of the attachment size. But you should have gotten it directly.\n\nIf anyone else is interested in a copy, let me know, and I will forward it\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Reza Taheri\n> Sent: Monday, July 28, 2014 8:57 PM\n> To: 'Ryan Johnson'\n> Cc: [email protected]\n> Subject: RE: High rate of transaction failure with the Serializable Isolation\n> Level\n> \n> Hi Ryan,\n> We presented a paper at the TPCTC of last year's VLDB (attached). It\n> described the architecture of the kit, and some of the tuning. Another tuning\n> change was setting /proc/sys/vm/dirty_background_bytes to a small value\n> (like 10000000) on very-large memory machines, which was a problem I\n> brought up on this same mailing list a while ago and got great advice. Also,\n> make sure you do a SQLFreeStmt(stmt, SQL_DROP) at the end of\n> transactions, not SQL_CLOSE.\n> \n> Let me know if you have any question about the paper\n> \n> Thanks,\n> Reza\n> \n> > -----Original Message-----\n> > From: Ryan Johnson [mailto:[email protected]]\n> > Sent: Saturday, July 26, 2014 4:51 PM\n> > To: Reza Taheri\n> > Cc: [email protected]\n> > Subject: Re: High rate of transaction failure with the Serializable\n> > Isolation Level\n> >\n> > That does sound pretty similar, modulo the raw performance difference.\n> > I have no idea how many MEE threads there were; it was just a quick\n> > run with exactly zero tuning, so I use whatever dbt5 does out of the box.\n> > Actually, though, if you have any general tuning tips for TPC-E I'd be\n> > interested to learn them (PM if that's off topic for this discussion).\n> >\n> > Regards,\n> > Ryan\n> >\n> > On 26/07/2014 7:33 PM, Reza Taheri wrote:\n> > > Hi Ryan,\n> > > Thanks a lot for sharing this. When I run with 12 CE threads and 3-5\n> > > MEE\n> > threads (how many MEE threads do you have?) 
@ 80-90 tps, I get\n> > something in the 20-30% of trade-result transactions rolled back\n> > depending on how I count. E.g., in a 5.5-minute run with 3 MEE\n> > threads, I saw 87.5 tps. There were 29200 successful trade-result\n> > transactions. Of these, 5800 were rolled back, some more than once for\n> > a total of 8450 rollbacks. So I'd say your results and ours tell similar stories!\n> > >\n> > > Thanks,\n> > > Reza\n> > >\n> > >> -----Original Message-----\n> > >> From: [email protected] [mailto:pgsql-\n> > >> [email protected]] On Behalf Of Ryan Johnson\n> > >> Sent: Saturday, July 26, 2014 2:06 PM\n> > >> To: Reza Taheri\n> > >> Cc: [email protected]\n> > >> Subject: Re: High rate of transaction failure with the Serializable\n> > >> Isolation Level\n> > >>\n> > >> Dredging through some old run logs, 12 dbt-5 clients gave the\n> > >> following when everything was run under SSI (fully serializable,\n> > >> even the transactions that allow repeatable read isolation). Not\n> > >> sure how that\n> > translates to your results.\n> > >> Abort rates were admittedly rather high, though perhaps lower than\n> > >> what you report.\n> > >>\n> > >> Transaction % Average: 90th % Total Rollbacks % Warning Invalid\n> > >> ----------------- ------- --------------- ------- -------------- ------- -------\n> > >> Trade Result 5.568 0.022: 0.056 2118 417 19.69% 0 91\n> > >> Broker Volume 5.097 0.009: 0.014 1557 0 0.00% 0 0\n> > >> Customer Position 13.530 0.016: 0.034 4134 1 0.02% 0 0\n> > >> Market Feed 0.547 0.033: 0.065 212 45 21.23% 0 69\n> > >> Market Watch 18.604 0.031: 0.061 5683 0 0.00% 0 0\n> > >> Security Detail 14.462 0.015: 0.020 4418 0 0.00% 0 0\n> > >> Trade Lookup 8.325 0.059: 0.146 2543 0 0.00% 432 0\n> > >> Trade Order 9.110 0.006: 0.008 3227 444 13.76% 0 0\n> > >> Trade Status 19.795 0.030: 0.046 6047 0 0.00% 0 0\n> > >> Trade Update 1.990 0.064: 0.145 608 0 0.00% 432 0\n> > >> Data Maintenance N/A 0.012: 0.012 1 0 0.00% 0 0\n> > >> ----------------- ------- 
--------------- ------- --------------\n> > >> ------- -------\n> > >> 28.35 trade-result transactions per second (trtps)\n> > >>\n> > >> Regards,\n> > >> Ryan\n> > >>\n> > >> On 26/07/2014 3:55 PM, Reza Taheri wrote:\n> > >>> Hi Ryan,\n> > >>> That's a very good point. We are looking at dbt5. One question:\n> > >>> what\n> > >> throughput rate, and how many threads of execution did you use for\n> > dbt5?\n> > >> The failure rates I reported were at ~120 tps with 15 trade-result\n> threads.\n> > >>> Thanks,\n> > >>> Reza\n> > >>>\n> > >>>> -----Original Message-----\n> > >>>> From: [email protected] [mailto:pgsql-\n> > >>>> [email protected]] On Behalf Of Ryan Johnson\n> > >>>> Sent: Friday, July 25, 2014 2:36 PM\n> > >>>> To: [email protected]\n> > >>>> Subject: Re: High rate of transaction failure with the\n> > >>>> Serializable Isolation Level\n> > >>>>\n> > >>>> On 25/07/2014 2:58 PM, Reza Taheri wrote:\n> > >>>>> Hi Craig,\n> > >>>>>\n> > >>>>>> According to the attached SQL, each frame is a separate phase\n> > >>>>>> in the\n> > >>>> operation and performs many different operations.\n> > >>>>>> There's a *lot* going on here, so identifying possible\n> > >>>>>> interdependencies isn't something I can do in a ten minute skim\n> > >>>>>> read over\n> > >>>> my morning coffee.\n> > >>>>> You didn't think I was going to bug you all with a trivial\n> > >>>>> problem, did you? :-) :-)\n> > >>>>>\n> > >>>>> Yes, I am going to have to take an axe to the code and see what\n> > >>>>> pops\n> > >> out.\n> > >>>> Just to put this in perspective, the transaction flow and its\n> > >>>> statements are borrowed verbatim from the TPC-E benchmark.\n> There\n> > >> have\n> > >>>> been dozens of TPC-E disclosures with MS SQL Server, and there\n> > >>>> are Oracle and DB2 kits that, although not used in public\n> > >>>> disclosures for various non-technical reasons, are used\n> > >>>> internally in by the DB and server companies. 
These 3 products,\n> > >> and perhaps more, were\n> > used\n> > >> extensively in the prototyping phase of TPC-E.\n> > >>>>> So, my hope is that if there is a \"previously unidentified\n> > >>>>> interdependency\n> > >>>> between transactions\" as you point out, it will be due to a\n> > >>>> mistake we made in coding this for PGSQL. Otherwise, we will have\n> > >>>> a hard time convincing all the council member companies that we\n> > >>>> need to change the schema or the business logic to make the kit\n> > >>>> work with\n> > PGSQL.\n> > >>>>> Just pointing out my uphill battle!!\n> > >>>> You might compare against dbt-5 [1], just to see if the same\n> > >>>> problem occurs. I didn't notice such high abort rates when I ran\n> > >>>> that workload a few weeks ago. Just make sure to use the latest\n> > >>>> commit, because the \"released\" version has fatal bugs.\n> > >>>>\n> > >>>> [1] https://github.com/petergeoghegan/dbt5\n> > >>>>\n> > >>>> Ryan\n> > >>>>\n> > >>>>\n> > >>>>\n> > >>>> --\n> > >>>> Sent via pgsql-performance mailing list\n> > >>>> ([email protected])\n> > >>>> To make changes to your subscription:\n> > >>>> http://www.postgresql.org/mailpref/pgsql-performance\n> > >>\n> > >>\n> > >> --\n> > >> Sent via pgsql-performance mailing list\n> > >> ([email 
protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 2 Aug 2014 16:28:55 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the Serializable Isolation Level"
},
{
"msg_contents": "Great, thanks. I'll look into when I get a few minutes.\n\nRyan\n\nOn 28/07/2014 11:57 PM, Reza Taheri wrote:\n> Hi Ryan,\n> We presented a paper at the TPCTC of last year's VLDB (attached). It described the architecture of the kit, and some of the tuning. Another tuning change was setting /proc/sys/vm/dirty_background_bytes to a small value (like 10000000) on very-large memory machines, which was a problem I brought up on this same mailing list a while ago and got great advice. Also, make sure you do a SQLFreeStmt(stmt, SQL_DROP) at the end of transactions, not SQL_CLOSE.\n>\n> Let me know if you have any question about the paper\n>\n> Thanks,\n> Reza\n>\n>> -----Original Message-----\n>> From: Ryan Johnson [mailto:[email protected]]\n>> Sent: Saturday, July 26, 2014 4:51 PM\n>> To: Reza Taheri\n>> Cc: [email protected]\n>> Subject: Re: High rate of transaction failure with the Serializable Isolation\n>> Level\n>>\n>> That does sound pretty similar, modulo the raw performance difference. I\n>> have no idea how many MEE threads there were; it was just a quick run with\n>> exactly zero tuning, so I use whatever dbt5 does out of the box.\n>> Actually, though, if you have any general tuning tips for TPC-E I'd be\n>> interested to learn them (PM if that's off topic for this discussion).\n>>\n>> Regards,\n>> Ryan\n>>\n>> On 26/07/2014 7:33 PM, Reza Taheri wrote:\n>>> Hi Ryan,\n>>> Thanks a lot for sharing this. When I run with 12 CE threads and 3-5 MEE\n>> threads (how many MEE threads do you have?) @ 80-90 tps, I get something\n>> in the 20-30% of trade-result transactions rolled back depending on how I\n>> count. E.g., in a 5.5-minute run with 3 MEE threads, I saw 87.5 tps. There\n>> were 29200 successful trade-result transactions. Of these, 5800 were rolled\n>> back, some more than once for a total of 8450 rollbacks. 
So I'd say your\n>> results and ours tell similar stories!\n>>> Thanks,\n>>> Reza\n>>>\n>>>> -----Original Message-----\n>>>> From: [email protected] [mailto:pgsql-\n>>>> [email protected]] On Behalf Of Ryan Johnson\n>>>> Sent: Saturday, July 26, 2014 2:06 PM\n>>>> To: Reza Taheri\n>>>> Cc: [email protected]\n>>>> Subject: Re: High rate of transaction failure with the Serializable\n>>>> Isolation Level\n>>>>\n>>>> Dredging through some old run logs, 12 dbt-5 clients gave the\n>>>> following when everything was run under SSI (fully serializable, even\n>>>> the transactions that allow repeatable read isolation). Not sure how that\n>> translates to your results.\n>>>> Abort rates were admittedly rather high, though perhaps lower than\n>>>> what you report.\n>>>>\n>>>> Transaction % Average: 90th % Total Rollbacks % Warning Invalid\n>>>> ----------------- ------- --------------- ------- -------------- ------- -------\n>>>> Trade Result 5.568 0.022: 0.056 2118 417 19.69% 0 91\n>>>> Broker Volume 5.097 0.009: 0.014 1557 0 0.00% 0 0\n>>>> Customer Position 13.530 0.016: 0.034 4134 1 0.02% 0 0\n>>>> Market Feed 0.547 0.033: 0.065 212 45 21.23% 0 69\n>>>> Market Watch 18.604 0.031: 0.061 5683 0 0.00% 0 0\n>>>> Security Detail 14.462 0.015: 0.020 4418 0 0.00% 0 0\n>>>> Trade Lookup 8.325 0.059: 0.146 2543 0 0.00% 432 0\n>>>> Trade Order 9.110 0.006: 0.008 3227 444 13.76% 0 0\n>>>> Trade Status 19.795 0.030: 0.046 6047 0 0.00% 0 0\n>>>> Trade Update 1.990 0.064: 0.145 608 0 0.00% 432 0\n>>>> Data Maintenance N/A 0.012: 0.012 1 0 0.00% 0 0\n>>>> ----------------- ------- --------------- ------- --------------\n>>>> ------- -------\n>>>> 28.35 trade-result transactions per second (trtps)\n>>>>\n>>>> Regards,\n>>>> Ryan\n>>>>\n>>>> On 26/07/2014 3:55 PM, Reza Taheri wrote:\n>>>>> Hi Ryan,\n>>>>> That's a very good point. We are looking at dbt5. 
One question: what\n>>>> throughput rate, and how many threads of execution did you use for\n>> dbt5?\n>>>> The failure rates I reported were at ~120 tps with 15 trade-result threads.\n>>>>> Thanks,\n>>>>> Reza\n>>>>>\n>>>>>> -----Original Message-----\n>>>>>> From: [email protected] [mailto:pgsql-\n>>>>>> [email protected]] On Behalf Of Ryan Johnson\n>>>>>> Sent: Friday, July 25, 2014 2:36 PM\n>>>>>> To: [email protected]\n>>>>>> Subject: Re: High rate of transaction failure with the Serializable\n>>>>>> Isolation Level\n>>>>>>\n>>>>>> On 25/07/2014 2:58 PM, Reza Taheri wrote:\n>>>>>>> Hi Craig,\n>>>>>>>\n>>>>>>>> According to the attached SQL, each frame is a separate phase in\n>>>>>>>> the\n>>>>>> operation and performs many different operations.\n>>>>>>>> There's a *lot* going on here, so identifying possible\n>>>>>>>> interdependencies isn't something I can do in a ten minute skim\n>>>>>>>> read over\n>>>>>> my morning coffee.\n>>>>>>> You didn't think I was going to bug you all with a trivial\n>>>>>>> problem, did you? :-) :-)\n>>>>>>>\n>>>>>>> Yes, I am going to have to take an axe to the code and see what\n>>>>>>> pops\n>>>> out.\n>>>>>> Just to put this in perspective, the transaction flow and its\n>>>>>> statements are borrowed verbatim from the TPC-E benchmark. There\n>>>> have\n>>>>>> been dozens of TPC-E disclosures with MS SQL Server, and there are\n>>>>>> Oracle and DB2 kits that, although not used in public disclosures\n>>>>>> for various non-technical reasons, are used internally in by the DB\n>>>>>> and server companies. These 3 products, and perhaps more, were\n>> used\n>>>> extensively in the prototyping phase of TPC-E.\n>>>>>>> So, my hope is that if there is a \"previously unidentified\n>>>>>>> interdependency\n>>>>>> between transactions\" as you point out, it will be due to a mistake\n>>>>>> we made in coding this for PGSQL. 
Otherwise, we will have a hard\n>>>>>> time convincing all the council member companies that we need to\n>>>>>> change the schema or the business logic to make the kit work with\n>> PGSQL.\n>>>>>>> Just pointing out my uphill battle!!\n>>>>>> You might compare against dbt-5 [1], just to see if the same\n>>>>>> problem occurs. I didn't notice such high abort rates when I ran\n>>>>>> that workload a few weeks ago. Just make sure to use the latest\n>>>>>> commit, because the \"released\" version has fatal bugs.\n>>>>>>\n>>>>>> [1] https://github.com/petergeoghegan/dbt5\n>>>>>>\n>>>>>> Ryan\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> --\n>>>>>> Sent via pgsql-performance mailing list\n>>>>>> ([email protected])\n>>>>>> To make changes to your subscription:\n>>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list\n>>>> ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to 
your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 02 Aug 2014 14:12:26 -0400",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High rate of transaction failure with the Serializable Isolation Level"
},
{
"msg_contents": "An update: following the recommendations on this list, I ran a number of experiments:\n\n- I ran with all foreign keys deleted. There was a 4% drop in the rate of deadlocks/transaction, which went from 0.32 per transaction to 0.31. So we still have pretty much the same failure rate. One interesting side effect was that throughput went up by 9%. So maintaining (a lot of) foreign key constraints costs us a lot\n\n- I ran with max_pred_locks_per_transaction as high as 26,400. No difference\n\n- I rebuilt the database with fillfactor=15 for all the tables and indexes that are involved in the transactions that fail. This was to see if the problem is PGSQL upgrading row level locks to page level locks. With a low fillfactor, the chances of two transactions landing on the same page are low. We have around 9 rows per data page instead of the original ~60. (The index pages are more tightly packed). I ran a range of thread counts from 5 to 60 for the threads that issue transactions. The failure rate per transaction dropped to around half for the thread count of 5, but that's misleading since with a fillfactor of 15, our database size went up by around 6X, reducing the effectiveness of PGSQL and OS disk caches, resulting in a throughput of around half of what we used to see. So the reduced failure rate is just a result of fewer threads competing for resources. When I run with enough threads to max out the system with fillfactor=15 or 100, I get the same failure rates\n\n- In case folks hadn't noticed, Ryan Johnson is getting very similar failure rates with dbt-5. So this isn't a case of our having made a silly mistake in our coding of the app or the stored procedures\n\nThe above experiments were the easy ones. 
I am now working on rewriting the app code and the 6 stored procedures to see if I can execute the whole transaction in a single stored procedure\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Sunday, July 27, 2014 8:58 PM\n> To: Reza Taheri; [email protected]\n> Subject: Re: [PERFORM] High rate of transaction failure with the Serializable\n> Isolation Level\n> \n> On 07/26/2014 02:58 AM, Reza Taheri wrote:\n> > Hi Craig,\n> >\n> >> According to the attached SQL, each frame is a separate phase in the\n> operation and performs many different operations.\n> >> There's a *lot* going on here, so identifying possible\n> >> interdependencies isn't something I can do in a ten minute skim read over\n> my morning coffee.\n> >\n> > You didn't think I was going to bug you all with a trivial problem,\n> > did you? :-) :-)\n> \n> One can hope, but usually in vain...\n> \n> > Yes, I am going to have to take an axe to the code and see what pops out.\n> Just to put this in perspective, the transaction flow and its statements are\n> borrowed verbatim from the TPC-E benchmark. There have been dozens of\n> TPC-E disclosures with MS SQL Server, and there are Oracle and DB2 kits that,\n> although not used in public disclosures for various non-technical reasons, are\n> used internally in by the DB and server companies. These 3 products, and\n> perhaps more, were used extensively in the prototyping phase of TPC-E.\n> >\n> > So, my hope is that if there is a \"previously unidentified interdependency\n> between transactions\" as you point out, it will be due to a mistake we made\n> in coding this for PGSQL. 
Otherwise, we will have a hard time convincing all\n> the council member companies that we need to change the schema or the\n> business logic to make the kit work with PGSQL.\n> \n> Hopefully so.\n> \n> Personally I think it's moderately likely that PostgreSQL's much stricter\n> enforcement of serializable isolation is detecting anomalies that other\n> products do not, so it's potentially preventing errors.\n> \n> It would be nice to have the ability to tune this; sometimes there are\n> anomalies you wish to ignore or permit. At present it is an all or nothing affair\n> - no predicate locking (REPEATABLE READ isolation) or strict predicate locking\n> (SERIALIZABLE isolation).\n> \n> I recommend running some of the examples in the SERIALIZABLE\n> documentation on other products. If they don't fail where they do in Pg,\n> then the other products either have less strict (and arguably therefore less\n> correct) serialisable isolation enforcement or they rely on blocking predicate\n> locks. In the latter case it should be easy to tell because statements will block\n> on locks where no ordinary row or table level lock could be taken.\n> \n> If you do run comparative tests I would be quite interested in seeing the\n> results.\n> \n> --\n> Craig Ringer\n> http://www.2ndquadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 04:51:42 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High rate of transaction failure with the Serializable Isolation Level"
}
] |
[
{
"msg_contents": "Hi all.\n\nI have a database with quite heavy writing load (about 20k tps, 5k of which do writes). And I see lots of writing I/O (I mean amount of data, not iops) to this database, much more than I expect. My question is: how can I find out why backend processes do so many writes to the $PGDATA/base directory? Below are some details.\n\nThe database works on a machine with 128 GB of RAM and md raid10 of 8 ssd disks (INTEL SSDSC2BB480G4 480 GB). It runs PostgreSQL 9.3.4 on Red Hat 6.5 with the following postgresql.conf - http://pastebin.com/LNLHppcb. Sysctl parameters for page cache are:\n\n# sysctl -a | grep vm.dirty\nvm.dirty_background_ratio = 0\nvm.dirty_background_bytes = 104857600\nvm.dirty_ratio = 40\nvm.dirty_bytes = 0\nvm.dirty_writeback_centisecs = 100\nvm.dirty_expire_centisecs = 300\n#\n\nTotal database size is now a bit more than 500 GB.\n\nI have different raid10 arrays for PGDATA and the pg_xlog directory (with different mount options). And under load iostat shows that about 20 MB/s is written to the array with xlogs and about 200 MB/s to the array with PGDATA. Iotop shows me that ~ 80-100 MB/s of data is written by pdflush (and that is expected behavior for me). And the other ~100 MB is being written by backend processes (varying from 1 MB/s to 30 MB/s). The checkpointer process, bgwriter process and autovacuum workers do really little work (3-5 MB/s).\n\nLsof on several backend processes shows me that a backend uses just database files (tables and indexes) and the last xlog file. Is there any way to understand why a backend is writing lots of data to the $PGDATA/base directory? I have tried to use pg_stat_statements for it but I haven’t found a good way to understand what is happening. Is there a way to see something like \"this backend process has written these pages to disk while performing this query\"?\n\nWould be very grateful for any help. 
Thanks.\n\n--\nVladimir\n",
"msg_date": "Thu, 24 Jul 2014 18:22:01 +0400",
"msg_from": "Borodin Vladimir <[email protected]>",
"msg_from_op": true,
"msg_subject": "Debugging writing load"
},
{
"msg_contents": "On 07/24/2014 10:22 PM, Borodin Vladimir wrote:\n> Hi all.\n> \n> I have a database with quite heavy writing load (about 20k tps, 5k of\n> which do writes). And I see lots of writing I/O (I mean amount of data,\n> not iops) to this database, much more than I expect. My question is:\n> how can I find out why backend processes do so many writes to the\n> $PGDATA/base directory? Below are some details.\n\nI'd be using perf for this - if you set tracepoints on writes and\nflushes you can associate that with the processes performing the work.\n\nThat said, the bgwriter and checkpointer will accumulate data from many\ndifferent backends. perf can't track that back to the origin backend.\n\nIt's also currently impractical to pluck things like the current_user,\nstatement name, etc from the PostgreSQL process's memory when capturing\nperf events, so it's still hard to drill down to \"this query is causing\nlots of I/O\".\n\n> Lsof on several backend processes shows me that a backend uses just\n> database files (tables and indexes) and the last xlog file. Is there any way\n> to understand why a backend is writing lots of data to the $PGDATA/base\n> directory? I have tried to use pg_stat_statements for it but I haven't\n> found a good way to understand what is happening. Is there a way to see\n> something like \"this backend process has written these pages to disk\n> while performing this query\"?\n\nPg can't keep track of that itself. At most it can track what buffered\nwrites it's issued. 
The OS might write-merge them, or it might not flush\nthem to disk at all if they're superseded by a later write over the same\nrange.\n\nYou really need the kernel's help, and that's where perf comes in.\n\nI wrote a little about this topic a while ago:\n\nhttp://blog.2ndquadrant.com/tracing-postgresql-perf/\n\nbut it's really a bit introductory.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jul 2014 17:19:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Debugging writing load"
}
] |
[
{
"msg_contents": "Hello guys.\n\n \n\nMy issue kind of hits multiple topics, but the main question is about\nperformance. I think you need to understand the background a little bit to\nbe able to help me. So I will first define the problem and my solutions to\nit, and place the questions for you at the end of this message.\n\n \n\nProblem:\n\nI have a table of observations of objects on the sky. The most important\ncolumns are the coordinates (x,y). All other columns in there are just\nadditional information about the observation. The problem is that when I\ntake an image of the same object on the sky twice, the coordinates x,y won't\nbe the same, they will only be close to each other. My task is to generate a\ncommon identifier for all of the observations of the same object and assign\nthe observations to it (N:1 relation). The criterion is that all of the\nobservations which are within 1 arcsec of this identifier are considered as\nthe same object. I keep the identifiers in a separate table (objcat) and\nhave a foreign key in the observations table.\n\nThe reason why I am addressing the performance issues here is that the table of\nobservations currently has ca. 3e8 rows after 1.5 years of gathering the data. The\nnumber grows linearly.\n\n \n\nTechnical background:\n\nI'm trying to keep the whole algorithm in the DB if possible because I have a\ngood PostgreSQL plugin Q3C for indexing the coordinates of the objects on\nthe sky (https://code.google.com/p/q3c/source/browse/README.q3c). It also\nhas quite a few stored procedures to look in that table for near neighbors,\nwhich is what I'm doing. The function q3c_join(x1, x2, y1, y2, radius)\nreturns true if the object y is within radius of the object x. It simply\ngenerates a list of index bitmap or queries with the operators <=, >= which\ndefine the position on the sky. 
Asking for the nearest neighbors then requires only\nindex scans.\n\n \n\nSolution:\n\nAfter a lot of experimentation with clustering the objects and trying to\nprocess them all together in one \"silver-bullet\" SQL query I decided to use\na simpler approach. The main problem with the \"process all at once\napproach\" is that finding the neighbours for each observation is by\ndefinition quadratic, and for 3e8 rows it just runs out of disk space (~TBs of\nspace for the temporary results).\n\nThe simplest approach I could think of is that I process each row of the 3e8\nrows sequentially and ask:\n\nDo I have an identifier in the radius of 1 arcsec?\n\nNo: Generate one and assign me to it.\n\nYes: Update it and assign me to it. The update is done as a weighted average\n- I keep a count of how many observations the identifier has been\ncomputed from. The result is that the identifier will have the average coordinates of\nall the observations it identifies - it will be the center.\n\n \n\nSo here I come with my procedure. It has 3 params. The first two are the range\nof oids to list in the table. They are used for scaling and parallelization of the\nalgorithm. 
The third is the radius in which to search for the neighbours.\n\n \n\nDROP TYPE IF EXISTS coords;\n\nCREATE TYPE coords AS (\n\n raj2000 double precision,\n\n dej2000 double precision\n\n );\n\n \n\n \n\nDROP FUNCTION IF EXISTS build_catalog(int,int,double precision);\n\nCREATE OR REPLACE FUNCTION build_catalog (fromOID int, toOID int, radius\ndouble precision)\n\n RETURNS VOID AS $$\n\nDECLARE \n\n cur1 CURSOR FOR \n\n SELECT \n\n raj2000, dej2000\n\n FROM\n\n \\schema.observation AS obs\n\n WHERE \n\n obs.oid >= fromOID\n\n AND\n\n obs.oid < toOID;\n\n curr_raj2000 double precision;\n\n curr_dej2000 double precision;\n\n curr_coords_cat coords;\n\n cnt int; \n\n \n\nBEGIN \n\n/*SELECT current_setting('transaction_isolation') into tmp;\n\nraise notice 'Isolation level %', tmp;*/\n\nOPEN cur1;\n\ncnt:=0;\n\nLOCK TABLE \\schema.objcat IN SHARE ROW EXCLUSIVE MODE;\n\nLOOP \n\n FETCH cur1 INTO curr_raj2000, curr_dej2000;\n\n EXIT WHEN NOT found;\n\n \n\n WITH \n\n upsert\n\n AS\n\n (UPDATE \n\n \\schema.objcat\n\n SET \n\n ipix_cat=q3c_ang2ipix(\n\n (raj2000 *\nweight + curr_raj2000) / (weight + 1),\n\n (dej2000 *\nweight + curr_dej2000) / (weight + 1)\n\n ),\n\n raj2000 = (raj2000 * weight +\ncurr_raj2000) / (weight + 1),\n\n dej2000 = (dej2000 * weight +\ncurr_dej2000) / (weight + 1),\n\n weight=weight+1 \n\n WHERE \n\n q3c_join(curr_raj2000,\ncurr_dej2000,\n\n raj2000,\ndej2000,\n\n radius)\n\n RETURNING *),\n\n ins AS\n\n (INSERT INTO \n\n \\schema.objcat \n\n (ipix_cat, raj2000, dej2000,\nweight)\n\n SELECT\n\n (q3c_ang2ipix(curr_raj2000,\ncurr_dej2000)),\n\n curr_raj2000,\n\n curr_dej2000,\n\n 1\n\n WHERE NOT EXISTS\n\n (SELECT * FROM upsert)\n\n RETURNING *)\n\n UPDATE \n\n \\schema.observation\n\n SET \n\n id_cat = (SELECT DISTINCT\n\n id_cat\n\n FROM\n\n upsert\n\n UNION \n\n SELECT \n\n id_cat\n\n FROM\n\n ins\n\n WHERE id_cat IS NOT NULL\n\n LIMIT 1)\n\n WHERE CURRENT OF cur1;\n\n cnt:=cnt+1;\n\n \n\n IF ((cnt % 100000 ) = 0) THEN\n\n RAISE NOTICE 
'Processed % entries', cnt;\n\n END IF;\n\n \n\nEND LOOP;\n\nCLOSE cur1;\n\nEND;\n\n$$ LANGUAGE plpgsql;\n\n \n\nResults: When I run the query only once (1 client thread), it runs ca. 1 mil\nrows per hour. Which is days for the whole dataset. When I run it in\nparallel with that lock to ensure pessimistic synchronization, it runs\nsequentially too :) the other threads are just waiting. When I delete that lock\nand hope to solve the resulting conflicts later, the ssd disk serves up to 4\nthreads relatively effectively - which can divide my days of time by 4 -\nstill unacceptable.\n\n \n\nThe reason is quite clear here - I'm trying to write something in one cycle\nof the script to a table - and then in the following cycle I need to read that\ninformation. \n\n \n\nQuestions for you:\n\n1. The first question is whether you can think of a better way to do\nthis, and maybe if SQL is even capable of doing such a thing - or do I have to\ndo it in C? Would rewriting the SQL function in C help?\n\n2. Could I somehow bend the committing during the algorithm to my\nneeds? Ensure that inside one cycle, the whole relevant part of the identifiers\ntable would be kept in memory for faster lookups?\n\n3. Why is this so slow? :) It is comparable to the quadratic algorithm\nin terms of speed - only it does not use any memory.\n\n \n\nI tried to sum this up the best I could - for more information please don't\nhesitate to contact me.\n\n \n\nThank you very much for even reading this far.\n\n \n\nBest Regards,\n\n \n\nJiri Nadvornik\n",
"msg_date": "Sat, 26 Jul 2014 12:46:31 +0200",
"msg_from": "Jiří Nádvorník <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cursor + upsert (astronomical data)"
},
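The find-or-create loop in the procedure above can be read outside SQL as well. A minimal Python sketch of the same semantics — planar coordinates in place of q3c's angular distance, and a brute-force catalog scan in place of the q3c index (both assumptions for illustration only):

```python
import math

def build_catalog(observations, radius):
    """Sequential find-or-create clustering, mirroring the plpgsql loop:
    for each observation, fold it into a close-enough identifier as a
    weighted average, or create a new identifier.
    NOTE: planar distance stands in for q3c_join's angular test."""
    catalog = []      # mutable [x, y, weight] per identifier
    assignments = []  # catalog index chosen for each observation
    for (x, y) in observations:
        hit = None
        for i, (cx, cy, w) in enumerate(catalog):
            if math.hypot(cx - x, cy - y) <= radius:
                hit = i
                break
        if hit is None:
            catalog.append([x, y, 1])
            assignments.append(len(catalog) - 1)
        else:
            cx, cy, w = catalog[hit]
            # weighted average, exactly as the UPDATE branch does
            catalog[hit] = [(cx * w + x) / (w + 1),
                            (cy * w + y) / (w + 1),
                            w + 1]
            assignments.append(hit)
    return catalog, assignments
```

With observations (0, 0), (0.5, 0) and (5, 0) and radius 1, the first two fold into one identifier centred at (0.25, 0) with weight 2, and the third starts a new one — the same behaviour the cursor loop produces row by row.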
{
"msg_contents": "I am not sure I understand the problem fully, e.g. what to do if there are observations A, B and C with A to B and B to C less than the threshold and A to C over the threshold, but anyway.\n\nCould you first apply a kind of grid to your observations? What I mean is to round your coords to, say, 1/2 arcsec on each axis and group the results. I think you will have most observations grouped this way and can then use your regular algorithm to combine the results.\n\nBest regards, Vitalii Tymchyshyn",
"msg_date": "Sun, 27 Jul 2014 02:05:42 -0400",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
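Vitalii's grid suggestion is a single snap-and-group pass with no joins. A minimal Python sketch — planar coordinates and an arbitrary cell size are assumptions for illustration:

```python
from collections import defaultdict

def grid_pass(observations, cell):
    """First pass of the grid idea: snap each observation to a square
    cell of side `cell` (e.g. 0.5 arcsec) and average within each cell.
    Equivalent to one GROUP BY; the resulting 'dirty' identifiers still
    need a merge pass for neighbouring cells."""
    cells = defaultdict(list)
    for (x, y) in observations:
        key = (round(x / cell), round(y / cell))
        cells[key].append((x, y))
    identifiers = []
    for members in cells.values():
        n = len(members)
        identifiers.append((sum(m[0] for m in members) / n,
                            sum(m[1] for m in members) / n,
                            n))  # (avg x, avg y, weight)
    return identifiers
```

Two observations near the origin snap to the same cell and produce one averaged identifier; a distant one produces its own.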
{
"msg_contents": "Hi Vitalii, thank you for your reply.\n\nThe problem you suggested can, in the most pathological case, be that these observations lie on one line. As you described it, B would be in the middle. So A and C are not within 1 arcsec of each other, but they must be within 1 arcsec of their common average coordinates. If the distances A–B and B–C are 1 arcsec each, the right solution is to pick B as the reference identifier and assign A and C to it.\n\nWe already tried the approach you suggest by applying a grid based on the Q3C indexing of the database. We were not just rounding the results, but using the center of the Q3C “square” in which the observation took place. The result was poor, however – 22% of the identifiers were closer to each other than 1 arcsec. That means that when you crossmatch the original observations against them, you don’t know which one to use and you have duplicates. The reason for this is that nearly all of the observations are from the SMC (high density of observations), which means you have more than 2 “rounded” positions in a row and don’t know which ones to join together (compute average coordinates from). If it is not clear enough I can draw it on an image for you.\n\nMaybe a simple round-up would have better results, because the Q3C squares are not all the same size and you can only scale them by factors of 2 (2-times smaller or larger square). We used a square with a side of ca. 0.76 arcsec, which approximately covers the 1 arcsec radius circle.\n\nOh, and one more important thing. The difficulty of our data is not that it is 3e8 rows, but that in the highest-density region there are ca. 1000 images overlapping. That kills you when you try to self-join the observations to find neighbours for each of them – the quadratic complexity is driven by the overlapping on the image (e.g. 10000 observations on one image with another 999 images overlapping it means 10000 * 1000^2).\n\nBest regards,\n\nJiri Nadvornik\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Vitalii Tymchyshyn\nSent: Sunday, July 27, 2014 8:06 AM\nTo: Jiří Nádvorník\nCc: [email protected]\nSubject: Re: [PERFORM] Cursor + upsert (astronomical data)\n\nI am not sure I understand the problem fully, e.g. what to do if there are observations A, B and C with A to B and B to C less than the threshold and A to C over the threshold, but anyway.\n\nCould you first apply a kind of grid to your observations? What I mean is to round your coords to, say, 1/2 arcsec on each axis and group the results. I think you will have most observations grouped this way and can then use your regular algorithm to combine the results.\n\nBest regards, Vitalii Tymchyshyn",
"msg_date": "Sun, 27 Jul 2014 16:35:21 +0200",
"msg_from": "Jiří Nádvorník <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
{
"msg_contents": "Jiri,\n\nIf you haven't looked at clustering algorithms yet, you might want to do so. Your problem is a special case of clustering, where you have a large number of small clusters. A good place to start is the overview on Wikipedia: http://en.wikipedia.org/wiki/Cluster_analysis\n\nA lot of people have worked extensively on this problem, and you might find a good solution, or at least some ideas to guide your own algorithm. In my field (chemistry), researchers often need to cluster 10^6 to 10^7 chemical compounds, and a great deal of research has gone into efficient ways to do so.\n\nCraig\n\nOn Sun, Jul 27, 2014 at 7:35 AM, Jiří Nádvorník <[email protected]> wrote:\n\n> Hi Vitalii, thank you for your reply.\n>\n> The problem you suggested can, in the most pathological case, be that these observations lie on one line. As you described it, B would be in the middle. So A and C are not within 1 arcsec of each other, but they must be within 1 arcsec of their common average coordinates. If the distances A–B and B–C are 1 arcsec each, the right solution is to pick B as the reference identifier and assign A and C to it.\n>\n> We already tried the approach you suggest by applying a grid based on the Q3C indexing of the database. We were not just rounding the results, but using the center of the Q3C “square” in which the observation took place. The result was poor, however – 22% of the identifiers were closer to each other than 1 arcsec. That means that when you crossmatch the original observations against them, you don’t know which one to use and you have duplicates. The reason for this is that nearly all of the observations are from the SMC (high density of observations), which means you have more than 2 “rounded” positions in a row and don’t know which ones to join together (compute average coordinates from). If it is not clear enough I can draw it on an image for you.\n>\n> Maybe a simple round-up would have better results, because the Q3C squares are not all the same size and you can only scale them by factors of 2 (2-times smaller or larger square). We used a square with a side of ca. 0.76 arcsec, which approximately covers the 1 arcsec radius circle.\n>\n> Oh, and one more important thing. The difficulty of our data is not that it is 3e8 rows, but that in the highest-density region there are ca. 1000 images overlapping. That kills you when you try to self-join the observations to find neighbours for each of them – the quadratic complexity is driven by the overlapping on the image (e.g. 10000 observations on one image with another 999 images overlapping it means 10000 * 1000^2).\n>\n> Best regards,\n>\n> Jiri Nadvornik\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------",
"msg_date": "Sun, 27 Jul 2014 08:35:25 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
{
"msg_contents": "[Craig]\n>> If you haven't looked at clustering algorithms yet, you might want to do so.\n>> Your problem is a special case of clustering, where you have a large number\n>> of small clusters. A good place to start is the overview on Wikipedia:\n>> http://en.wikipedia.org/wiki/Cluster_analysis\n\nAccording to this list, your method is similar to http://en.wikipedia.org/wiki/Basic_sequential_algorithmic_scheme, but with what seem to be some logical errors.\n\n> The simplest approach I could think of is that I process each row of the 3e8 rows\n> sequentially and ask:\n> Do I have an identifier in the radius of 1 arcsec?\n> No: Generate one and assign me to it.\n> Yes: Update it and assign me to it. The update is done as a weighted average – I keep\n> a count of how many observations the identifier has been computed from. The result is\n> that the identifier will have the average coordinates of all the observations it\n> identifies – it will be the center.\n\nLet's say you have 2 single measures on a line at arcsec 1 and 2.1, which hence correspond to 2 ipix_cat entries. Now add a new measure at 1.9: as you choose any of the possible adjacent ipix_cat without considering the least distance, you may end up with an ipix_cat at 1.45, which is less than 1 arcsec from the next one. Moreover you have raised the weight of both ipix_cat, which increases the lock probability when trying a multithreaded update.\n\nThe max distance between 2 observations belonging to the same ipix_cat tends to 2 arcsec with your method. If this is OK, you should probably modify your method so that the 2 first points of my example would have merged into a single ipix_cat. You could use your weight for this: increase your search radius to 2 arcsec and then reject the candidates located between 1 and 2 arcsec depending on their weight. The additional workload might be compensated by the smaller number of ipix_cat that you will have.",
"msg_date": "Sun, 27 Jul 2014 19:05:22 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
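Marc's drift argument can be checked numerically. A small sketch (planar 1-D coordinates, radius 1, the same running-average update as the original procedure) in which the identifier drifts until two of its members are farther apart than the search radius:

```python
# Observations arrive in order along a line; the identifier's centre
# is updated as a running weighted average, as in the plpgsql loop.
radius = 1.0
center, weight, members = None, 0, []
for x in [0.0, 0.9, 1.3]:
    if center is None or abs(center - x) > radius:
        # start a new identifier
        center, weight, members = x, 1, [x]
    else:
        # fold into the existing identifier (weighted average)
        center = (center * weight + x) / (weight + 1)
        weight += 1
        members.append(x)
```

All three observations end up on one identifier (centre ≈ 0.733), yet its extreme members 0.0 and 1.3 are 1.3 apart — more than the 1-arcsec radius, drifting toward Marc's 2-arcsec bound.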
{
"msg_contents": "Well, that's why I said to apply the regular algorithm to deduplicate after this step. Basically, what I expect is a first pass with GROUP BY that does not require any joins and produces a \"dirty\" set of identifiers.\n\nIt should do two things:\n1) Provide a working set of dirty identifiers with far lower cardinality than the original observation set.\n2) Most of the identifiers can be used as-is; only for a small fraction do you need to perform an additional merge. 22% is actually a very good number – it means only about 1/5 of the identifiers need to be analyzed for merging.\n\nBest regards, Vitalii Tymchyshyn\n\nOn 27 Jul 2014 10:35, \"Jiří Nádvorník\" <[email protected]> wrote:\n\n> Hi Vitalii, thank you for your reply.\n>\n> The problem you suggested can, in the most pathological case, be that these observations lie on one line. As you described it, B would be in the middle. So A and C are not within 1 arcsec of each other, but they must be within 1 arcsec of their common average coordinates. If the distances A–B and B–C are 1 arcsec each, the right solution is to pick B as the reference identifier and assign A and C to it.\n>\n> We already tried the approach you suggest by applying a grid based on the Q3C indexing of the database. We were not just rounding the results, but using the center of the Q3C “square” in which the observation took place. The result was poor, however – 22% of the identifiers were closer to each other than 1 arcsec. That means that when you crossmatch the original observations against them, you don’t know which one to use and you have duplicates. The reason for this is that nearly all of the observations are from the SMC (high density of observations), which means you have more than 2 “rounded” positions in a row and don’t know which ones to join together (compute average coordinates from). If it is not clear enough I can draw it on an image for you.\n>\n> Maybe a simple round-up would have better results, because the Q3C squares are not all the same size and you can only scale them by factors of 2 (2-times smaller or larger square). We used a square with a side of ca. 0.76 arcsec, which approximately covers the 1 arcsec radius circle.\n>\n> Oh, and one more important thing. The difficulty of our data is not that it is 3e8 rows, but that in the highest-density region there are ca. 1000 images overlapping. That kills you when you try to self-join the observations to find neighbours for each of them – the quadratic complexity is driven by the overlapping on the image (e.g. 10000 observations on one image with another 999 images overlapping it means 10000 * 1000^2).\n>\n> Best regards,\n>\n> Jiri Nadvornik",
"msg_date": "Sun, 27 Jul 2014 16:09:41 -0400",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
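The merge step Vitalii describes for the small colliding fraction of "dirty" identifiers could look like this. A minimal Python sketch — planar coordinates and a weighted-average merge are assumptions; the point is that the quadratic work now runs over ~1e6 identifiers instead of 3e8 observations:

```python
import math

def merge_pass(identifiers, radius):
    """Second pass of the two-step scheme: take the 'dirty' identifiers
    (x, y, weight) from the grid pass and fold any one that lands within
    `radius` of an already-merged identifier into it, combining centres
    by weighted average."""
    merged = []
    for (x, y, w) in identifiers:
        hit = None
        for i, (mx, my, mw) in enumerate(merged):
            if math.hypot(mx - x, my - y) <= radius:
                hit = i
                break
        if hit is None:
            merged.append((x, y, w))
        else:
            mx, my, mw = merged[hit]
            # weight-proportional average of the two centres
            merged[hit] = ((mx * mw + x * w) / (mw + w),
                           (my * mw + y * w) / (mw + w),
                           mw + w)
    return merged
```

Two identifiers 0.5 apart (weights 3 and 1) collapse into one centred at their weighted mean, while a distant one survives untouched.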
{
"msg_contents": "Hi Craig,\n\n \n\nI’m really interested in those algorithms and study them. But I would need somebody to point me directly at a specific algorithm to look at. The main problem with choosing the right one (which couldn’t get over even my university teacher) is that you don’t know the number of clusters (classification problem) and you don’t know the number of objects in one cluster (that isn’t such a big deal), but each cluster has different count of objects – which is a big issue because it actually disqualifies all of k-means based algorithms (correct me if I’m wrong).\n\n \n\nWe were thinking of some kind of Bayesian method or a likelihood ratio computation, but dismissed that as we found an implementation that ran ~hours for ~5e5 records – I just don’t think such an approach would provide better results from a faster algorithm than the simple linear one (checking each row and updating/inserting it to catalog).\n\n \n\nI actually left the algorithm to run for a smaller set of data (~80 mil. Rows) and on 8 cores (cca 4 of them used at a time) it ran 48 hours. That’s not that bad assuming it must run only once for the dataset (on DB population) – and then the additions will be processed fast. The result is actually cca 1.2 mil identifiers and 50 thousand from them are closer to each other than 1 arcsec (collisions) – but that is only <5% error. I think if we worked a bit on this algorithm we could make it faster and reduce that error percentage to maybe 1% which would be acceptable? What do you think?\n\n \n\nThank you very much for your effort.\n\n \n\nKind Regards,\n\n \n\nJiri Nadvornik\n\n \n\nFrom: Craig James [mailto:[email protected]] \nSent: Sunday, July 27, 2014 5:35 PM\nTo: Jiří Nádvorník\nCc: Vitalii Tymchyshyn; [email protected]\nSubject: Re: [PERFORM] Cursor + upsert (astronomical data)\n\n \n\nJiri,\n\n \n\nIf you haven't looked at clustering algorithms yet, you might want to do so. 
Your problem is a special case of clustering, where you have a large number of small clusters. A good place to start is the overview on Wikipedia: http://en.wikipedia.org/wiki/Cluster_analysis\n\n \n\nA lot of people have worked extensively on this problem, and you might find a good solution, or at least some ideas to guide your own algorithm. In my field (chemistry), researchers often need to cluster 10^6 to 10^7 chemical compounds, and a great deal of research has gone into efficient ways to do so.\n\n \n\nCraig\n\n \n\n \n\nOn Sun, Jul 27, 2014 at 7:35 AM, Jiří Nádvorník <[email protected] <mailto:[email protected]> > wrote:\n\nHi Vitalii, thank you for your reply.\n\n \n\nThe problem you suggested can in the most pathological way be, that these observations are on one line. As you suggested it, the B would be in the middle. So A and C are not in 1 arcsec range of each other, but they must be within 1 arcsec of their common average coordinates. If the distances between A,B,C are 1 arcsec for each, the right solution is to pick B as reference identifier and assign A and C to it.\n\n \n\nWe already tried the approach you suggest with applying a grid based on the Q3C indexing of the database. We were not just rounding the results, but using the center of the Q3C “square” in which the observation took place. The result was poor however – 22% of the identifiers were closer to each other than 1 arcsec. That means that when you crossmatch the original observations to them, you don’t know which one to use and you have duplicates. The reason for this is that nearly all of the observations are from SMC (high density of observations), which causes that you have more than 2 “rounded” positions in a row and don’t know which ones to join together (compute average coordinates from it). 
If it is not clear enough I can draw it on an image for you.\n\nMaybe the simple round up would have better results because the squares are not each the same size and you can scale them only by 2 (2-times smaller, or larger square). We used a squre with the side cca 0.76 arcsec which approximately covers the 1 arcsec radius circle.\n\n \n\nOh and one more important thing. The difficulty of our data is not that it is 3e8 rows. But in the highest density, there are cca 1000 images overlapping. Which kills you when you try to self-join the observations to find neighbours for each of them – the quadratic complexity is based on the overlappingon the image (e.g. 10000 observations on one image with another 999 images overlapping it means 10000 *1000^2). \n\n \n\nBest regards,\n\n \n\nJiri Nadvornik\n\n \n\nFrom: <mailto:[email protected]> [email protected] [mailto: <mailto:[email protected]> [email protected]] On Behalf Of Vitalii Tymchyshyn\nSent: Sunday, July 27, 2014 8:06 AM\nTo: Jiří Nádvorník\nCc: <mailto:[email protected]> [email protected]\nSubject: Re: [PERFORM] Cursor + upsert (astronomical data)\n\n \n\nI am not sure I understand the problem fully, e.g. what to do if there are observations A,B and C with A to B and B to C less then treshold and A to C over treshold, but anyway.\n\nCould you first apply a kind of grid to your observations? What I mean is to round your coords to, say, 1/2 arcsec on each axe and group the results. I think you will have most observations grouped this way and then use your regular algorithm to combine the results.\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n\n\n \n\n-- \n\n---------------------------------\nCraig A. James\n\nChief Technology Officer\n\neMolecules, Inc.\n\n---------------------------------\n\n\nHi Craig, I’m really interested in those algorithms and study them. But I would need somebody to point me directly at a specific algorithm to look at. 
The main problem with choosing the right one (which even my university teacher couldn’t get past) is that you don’t know the number of clusters (a classification problem), and you don’t know the number of objects in one cluster (that isn’t such a big deal) – but each cluster has a different count of objects, which is a big issue because it actually disqualifies all k-means-based algorithms (correct me if I’m wrong).\n\nWe were thinking of some kind of Bayesian method or a likelihood-ratio computation, but dismissed that as we found an implementation that ran ~hours for ~5e5 records – I just don’t think such an approach would provide better results than the simple, faster linear one (checking each row and updating/inserting it into the catalog).\n\nI actually left the algorithm to run for a smaller set of data (~80 mil. rows) and on 8 cores (cca 4 of them used at a time) it ran 48 hours. That’s not that bad assuming it must run only once for the dataset (on DB population) – the additions will then be processed fast. The result is actually cca 1.2 mil identifiers, and 50 thousand of them are closer to each other than 1 arcsec (collisions) – but that is only a <5% error. I think if we worked a bit on this algorithm we could make it faster and reduce that error percentage to maybe 1%, which would be acceptable. What do you think?\n\nThank you very much for your effort.\n\nKind Regards,\n\nJiri Nadvornik",
"msg_date": "Tue, 29 Jul 2014 11:44:46 +0200",
"msg_from": "Jiří Nádvorník <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
{
"msg_contents": "Hi Oleg, Sergey,\n\nThe problem would be crossmatch if I had a catalog to crossmatch with. But I am actually trying to build this catalog.\n\nCrossmatching can actually be used to solve that problem, when I crossmatch the observations with themselves via q3c_join with a 1 arcsec radius. But as I said, that crashes on the fact that we have ~a thousand images overlapping. This is the factor of quadratic complexity of self-crossmatching the table (self-joining is the same thing, if I understand the crossmatching term correctly).\n\nI actually managed to write a silver-bullet query, which did the self-joining in the most nested subquery and then worked with the results via analytic functions like count and rank, which found the best candidate of these self-joined tuples to compute the average coordinates on which I grouped them and got my identifier. I can send you the code if you are interested.\n\nIt worked like a charm for smaller data - fast, with a small error (<1%). But when self-joining 3e8 rows, where in the highest-density areas you multiply each observation by a factor of 1000^2, the temporary results run out of disk space (I didn’t have more than 1.5 TB). So I managed to solve this by dividing the dataset into smaller parts (cca 25000 rows per part). When run in parallel on 8 cores, it used them quite well (cca 6 cores fully loaded at a time) and the 24 GB of memory was at ~75% load.\n\nIf the time scaled linearly with the processed rows, the time to process the 3e8 rows would be ~9 days. However, I tried this only once (for obvious reasons) and the RDBMS crashed after cca 4 days of this heavy load. I don't know whether I was stretching Postgres over its limits here, so at that point I tried to find an algorithm with linear complexity. 
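For reference, the usual way to keep such a self-match near-linear is cell hashing: bucket each observation into a cell the size of the match radius and compare it only against the 3×3 neighborhood of cells. A plain-Python sketch of the idea (a toy flat-sky model, not the Q3C implementation):

```python
from collections import defaultdict

ARCSEC = 1.0 / 3600.0

def neighbor_pairs(points, radius=1.0 * ARCSEC):
    # Hash each point into a radius-sized cell; a matching pair can only
    # sit in the same cell or one of the 8 surrounding cells, so the
    # quadratic self-join degrades to ~linear when cell occupancy is bounded.
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // radius), int(y // radius))].append(i)
    pairs = set()
    for (cx, cy), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j:
                            (xi, yi), (xj, yj) = points[i], points[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                                pairs.add((i, j))
    return pairs

pts = [(10.0, -72.0),                    # three observations: the first two
       (10.0 + 0.5 * ARCSEC, -72.0),     # are 0.5 arcsec apart, the third
       (10.0 + 5.0 * ARCSEC, -72.0)]     # is well outside the 1 arcsec radius
print(neighbor_pairs(pts))               # only observations 0 and 1 match
```

The same partitioning also gives natural chunk boundaries for parallel workers, since each cell only ever interacts with its immediate neighbors.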
That's how I got to the point where I am right now.\n\nP.S.: I consulted this with people from Catalina and they are doing this with a Friends-of-Friends algorithm - but they don't have most of their stars in high-density areas like the SMC as we do. That's why I decided not to use it, as I think it would crash horribly when you have distances comparable with the threshold of your astrometry.\n\nThank you very much for your invested effort.\n\nKind Regards,\n\nJiri Nadvornik\n\n-----Original Message-----\nFrom: Oleg Bartunov [mailto:[email protected]] \nSent: Sunday, July 27, 2014 6:57 PM\nTo: Jiří Nádvorník\nCc: [email protected]; Sergey Karpov\nSubject: Re: [PERFORM] Cursor + upsert (astronomical data)\n\nJiri,\n\nas I understand your problem is called crossmatch ? I attach pdf of our work in progress, where we compared several spatial indexing techniques, including postgis, q3c and new pgsphere. Sergey Karpov made these benchmarks.\n\nNew pgsphere you may find here\nhttps://github.com/akorotkov/pgsphere\n\nOleg\n\nOn Sat, Jul 26, 2014 at 1:46 PM, Jiří Nádvorník <[email protected]> wrote:\n> Hello guys.\n>\n>\n>\n> My issue kind of hits multiple topics, but the main question is about \n> performance. I think you need to understand the background a little \n> bit to be able to help me. So I will firstly define the problem and my \n> solutions to it and place the questions for you to the end of this message.\n>\n>\n>\n> Problem:\n>\n> I have a table of observations of objects on the sky. The most \n> important columns are the coordinates (x,y). All other columns in \n> there are just additional information about the observation. The \n> problem is that when I take an image of the same object on the sky \n> twice, the coordinates x,y won’t be the same, they will be only close \n> to each other. My task is to generate a common identifier to all of \n> the observations of the same object and assign the observations to it \n> (N:1 relation). 
The criterium is that all of the observations which \n> are within 1 arcsec of this identifier are considered as the same \n> object. I keep the identifiers in a separate table (objcat) and have a foreign key in the observations table.\n>\n> The reason why I solve the performance issues here is that the table \n> of observations has atm cca 3e8 rows after 1.5 year of gathering the \n> data. The number growth is linear.\n>\n>\n>\n> Technical background:\n>\n> I’m trying to keep the whole algoritm in DB if possible because I have \n> a good PostgreSQL plugin Q3C for indexing the coordinates of the \n> objects on the sky \n> (https://code.google.com/p/q3c/source/browse/README.q3c). It also has \n> quite a few stored procedures to look in that table for near neighbors \n> which is what I’m doing. The function q3c_join(x1, x2, y1, y2, radius) \n> returns true if the object y is within radius of the object x. It \n> simply generates a list of index bitmap or queries with the operators \n> <=, >= which define the position on sky. Asking for the nearest neighbors are then only index scans.\n>\n>\n>\n> Solution:\n>\n> After lot of experimentation with clustering the objects and trying to \n> process them all together in one “silver-bullet” SQL query I decided \n> to use some more simple approach. The main problem with the “process \n> all at once approach” is that the finding neighbours for each \n> observation is in definition quadratic and for 3e8 rows just runs out \n> of disk space (~TBs of memory for the temporary results).\n>\n> The simplest approach I could think of is that I process each row of \n> the 3e8 rows sequentially and ask:\n>\n> Do I have an identifier in the radius of 1 arcsec?\n>\n> No: Generate one and assign me to it.\n>\n> Yes: Update it and assigne me to it. The update is done as weighted \n> average – I keep the number of how many observations the identifier \n> has been computed. 
The result is that the identifier will have average \n> coordinates of all the observations it identifies – it will be the center.\n>\n>\n>\n> So here I come with my procedure. It has 3 params. The first two are \n> range of oids to list in the table. Used for scaling and \n> parallelization of the algorithm. The third is the radius in which to search for the neighbours.\n>\n>\n>\n> DROP TYPE IF EXISTS coords;\n>\n> CREATE TYPE coords AS (\n>\n> raj2000 double precision,\n>\n> dej2000 double precision\n>\n> );\n>\n>\n>\n>\n>\n> DROP FUNCTION IF EXISTS build_catalog(int,int,double precision);\n>\n> CREATE OR REPLACE FUNCTION build_catalog (fromOID int, toOID int, \n> radius double precision)\n>\n> RETURNS VOID AS $$\n>\n> DECLARE\n>\n> cur1 CURSOR FOR\n>\n> SELECT\n>\n> raj2000, dej2000\n>\n> FROM\n>\n> \\schema.observation AS \n> obs\n>\n> WHERE\n>\n> obs.oid >= fromOID\n>\n> AND\n>\n> obs.oid < toOID;\n>\n> curr_raj2000 double precision;\n>\n> curr_dej2000 double precision;\n>\n> curr_coords_cat coords;\n>\n> cnt int;\n>\n>\n>\n> BEGIN\n>\n> /*SELECT current_setting('transaction_isolation') into tmp;\n>\n> raise notice 'Isolation level %', tmp;*/\n>\n> OPEN cur1;\n>\n> cnt:=0;\n>\n> LOCK TABLE \\schema.objcat IN SHARE ROW EXCLUSIVE MODE;\n>\n> LOOP\n>\n> FETCH cur1 INTO curr_raj2000, curr_dej2000;\n>\n> EXIT WHEN NOT found;\n>\n>\n>\n> WITH\n>\n> upsert\n>\n> AS\n>\n> (UPDATE\n>\n> \\schema.objcat\n>\n> SET\n>\n> ipix_cat=q3c_ang2ipix(\n>\n> \n> (raj2000 * weight + curr_raj2000) / (weight + 1),\n>\n> \n> (dej2000 * weight + curr_dej2000) / (weight + 1)\n>\n> ),\n>\n> raj2000 = (raj2000 * \n> weight +\n> curr_raj2000) / (weight + 1),\n>\n> dej2000 = (dej2000 * \n> weight +\n> curr_dej2000) / (weight + 1),\n>\n> weight=weight+1\n>\n> WHERE\n>\n> q3c_join(curr_raj2000, \n> curr_dej2000,\n>\n> \n> raj2000, dej2000,\n>\n> radius)\n>\n> RETURNING *),\n>\n> ins AS\n>\n> (INSERT INTO\n>\n> \\schema.objcat\n>\n> (ipix_cat, raj2000, \n> dej2000,\n> weight)\n>\n> 
SELECT\n>\n> \n> (q3c_ang2ipix(curr_raj2000, curr_dej2000)),\n>\n> curr_raj2000,\n>\n> curr_dej2000,\n>\n> 1\n>\n> WHERE NOT EXISTS\n>\n> (SELECT * FROM upsert)\n>\n> RETURNING *)\n>\n> UPDATE\n>\n> \\schema.observation\n>\n> SET\n>\n> id_cat = (SELECT DISTINCT\n>\n> id_cat\n>\n> FROM\n>\n> upsert\n>\n> UNION\n>\n> SELECT\n>\n> id_cat\n>\n> FROM\n>\n> ins\n>\n> WHERE id_cat IS NOT \n> NULL\n>\n> LIMIT 1)\n>\n> WHERE CURRENT OF cur1;\n>\n> cnt:=cnt+1;\n>\n>\n>\n> IF ((cnt % 100000 ) = 0) THEN\n>\n> RAISE NOTICE 'Processed % entries', \n> cnt;\n>\n> END IF;\n>\n>\n>\n> END LOOP;\n>\n> CLOSE cur1;\n>\n> END;\n>\n> $$ LANGUAGE plpgsql;\n>\n>\n>\n> Results: When I run the query only once (1 client thread), it runs cca \n> 1 mil rows per hour. Which is days for the whole dataset. When I run \n> it in parallel with that lock to ensure pessimistic synchronization, \n> it runs sequentially too J the other threads just waiting. When I \n> delete that lock and hope to solve the resulting conflicts later, the \n> ssd disk serves up to 4 threads relatively effectively – which can \n> divide my days of time by 4 – still inacceptable.\n>\n>\n>\n> The reason is quite clear here – I’m trying to write something in one \n> cycle of the script to a table – then in the following cycle I need to \n> read that information.\n>\n>\n>\n> Questions for you:\n>\n> 1. The first question is if you can think of a better way how to do\n> this and maybe if SQL is even capable of doing such thing – or do I \n> have to do it in C? Would rewriting the SQL function to C help?\n>\n> 2. Could I somehow bend the commiting during the algorithm for my\n> thing? Ensure that inside one cycle, the whole part of the identifiers \n> table would be kept in memory for faster lookups?\n>\n> 3. Why is this so slow? 
J It is comparable to the quadratic algorithm\n> in the terms of speed – only does not use any memory.\n>\n>\n>\n> I tried to sum this up the best I could – for more information please \n> don’t hesitate to contact me.\n>\n>\n>\n> Thank you very much for even reading this far.\n>\n>\n>\n> Best Regards,\n>\n>\n>\n> Jiri Nadvornik\n>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 29 Jul 2014 12:11:23 +0200",
"msg_from": "Jiří Nádvorník <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
{
"msg_contents": "Hi Jiri,\n\n> I’m really interested in those [clustering] algorithms and study them. But\n> I would need somebody to point me directly at a specific algorithm to look\n> at. The main problem with choosing the right one (which couldn’t get over\n> even my university teacher) is that you don’t know the number of clusters\n> (classification problem) and you don’t know the number of objects in one\n> cluster (that isn’t such a big deal), but each cluster has different count\n> of objects – which is a big issue because it actually disqualifies all of\n> k-means based algorithms (correct me if I’m wrong).\n>\n\nI'm not an expert in clustering, I just worked with several people who\ndeveloped clustering packages.\n\nOne algorithm that's widely used in chemistry and genetics is the\nJarvis-Patrick algorithm. Google for \"jarvis patrick clustering\" and you'll\nfind several good descriptions. It has the advantages that it's\ndeterministic, it works with any distance metric, and it chooses the number\nof clusters (you don't have to specify it ahead of time). The drawback to\nJarvis-Patrick clustering is that you have to start with a\nnearest-neighbors list (i.e. for each item in your data set, you must find\nthe nearest N items, where N is something like 10-20.) In a brute-force\napproach, finding nearest neighbors is O(N^2) which is bad, but if you have\nany method of partitioning observations to narrow down the range of\nneighbors that have to be examined, the running time can be dramatically\nreduced.\n\nOne advantage of JP clustering is that once you have the nearest-neighbors\nlist (which is the time-consuming part), the actual JP clustering is fast.\nYou can spend some time up front computing the nearest-neighbors list, and\nthen run JP clustering a number of times with different parameters until you\nget the clusters you want.\n\nA problem with J-P clustering in your situation is that it will tend to\nmerge clusters. 
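For concreteness, a minimal plain-Python sketch of the Jarvis-Patrick scheme just described (my reading of it: given precomputed nearest-neighbor lists, two items join the same cluster when each lists the other and they share at least j_min common neighbors; the merging here is plain union-find):

```python
def jarvis_patrick(neighbors, j_min):
    # neighbors[i] is the precomputed nearest-neighbor set of item i.
    # Two items are merged when each appears in the other's list and the
    # lists share at least j_min entries.
    parent = list(range(len(neighbors)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for a in range(len(neighbors)):
        for b in neighbors[a]:
            if (a < b and a in neighbors[b]
                    and len(neighbors[a] & neighbors[b]) >= j_min):
                parent[find(a)] = find(b)
    clusters = {}
    for a in range(len(neighbors)):
        clusters.setdefault(find(a), set()).add(a)
    return list(clusters.values())

# Toy example: items 0-1-2 are mutual neighbors, 3-4 form a separate pair.
nn = [{1, 2}, {0, 2}, {0, 1}, {4, 0}, {3, 0}]
print(jarvis_patrick(nn, j_min=1))  # two clusters: {0, 1, 2} and {3, 4}
```

Note that item 0 never merges with 3 or 4 even though they list it among their neighbors, because the mutuality test fails; the list length and j_min control how aggressively clusters merge.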
If you have two astronomical objects that have overlapping\nobservations, they'll probably merge into one. You might be able to address\nthis with a post-processing step that identifies \"bimodal\" or \"multimodal\"\nclusters -- say by identifying a cluster with a distinctly asymmetrical or\nelliptical outline and splitting it. But I think any good clustering package\nwould have trouble with these cases.\n\nThere are many other clustering algorithms, but many of them suffer from\nbeing non-deterministic, or from requiring a number-of-clusters parameter\nat the start.\n\nThat pretty much exhausts my knowledge of clustering. I hope this gives you\na little insight.\n\nCraig",
"msg_date": "Tue, 29 Jul 2014 07:46:59 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
{
"msg_contents": "Hello Sergey,\n\nOh dear, I should have written to you several months ago :). I actually did exactly what you suggest - it was actually Markus Demleitner's (GAVO org.) idea. The group-by on the ipix center indeed runs in less than an hour on the whole table and is linear by definition, so no problem with memory. I used, I think, bit depth 11, which means that if I treat the ipix as a square covering the corresponding number of steradians (from one of your functions, I don't remember which), it has a cca 0.7 arcsec side and a cca 1.1 arcsec \"half-diameter\", which means it approximately covers the circle of 1 arcsec radius centered on the square's center - if you understand what I mean.\n\nI actually presented this approach and its quite good results at the IVOA Interop in Madrid this year (don't worry, I pointed out that I use your Q3C plugin for PostgreSQL).\n\nThe problem actually comes when you have more than 2 of these ipix \"squares\" in a line - when joining them, you don't know which ones to use. It causes about 22% of these joined ipix centers to have collisions - when I take the resulting ipix (joined from the ipix centers) and look back at all the original observations from which these ipix centers were constructed, they don't belong to the same object. This is because when you work with the ipix centers instead of the original observations, in the most pathological case you shift the observations from the bottom left corner to the far corner when joining with a square to your upper left. That means up to a 2.2 arcsec shift which you can't backtrack.\n\nBut: it definitely brought us closer to our goal - you don't have 3e8 observations, you have cca 7e7 observations. That is still too much to process with a quadratic algorithm, but it's something.\n\nMaybe I could use it along with my *naïve* sequential algorithm. Could you look at the script attached and see whether there are any obviously inefficient statements?\n\nThank you very much! 
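Incidentally, the worst-case shift quoted above follows from simple cell geometry (a rough flat-square check — real Q3C cells are spherical quadrilaterals, so these are only approximations; the 0.76 arcsec side is the value used in this thread):

```python
import math

side = 0.76                              # cell side in arcsec at the Q3C depth used here
half_diag = side * math.sqrt(2) / 2      # farthest an observation sits from its cell center
worst_shift = 2 * side * math.sqrt(2)    # corner of one cell to the far corner of a
                                         # diagonally adjacent cell: the unrecoverable shift
print(round(half_diag, 2), round(worst_shift, 2))
```

2 × 0.76 × √2 ≈ 2.15 arcsec, i.e. the "up to 2.2 arcsec" shift mentioned above, while a single cell keeps every observation within ~0.54 arcsec of its snapped center.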
You know, even talking with someone about the problem brings you closer to the solution.\n\nP.S.: I am sorry if I got a little confused with your name when addressing you - are you the co-author of the Q3C paper where the name is S. Koposov (+ O. Bartunov)?\n\nBest Regards,\n\nJiri Nadvornik\n\n-----Original Message-----\nFrom: Sergey Karpov [mailto:[email protected]] \nSent: Monday, July 28, 2014 5:03 PM\nTo: Oleg Bartunov\nCc: Jiří Nádvorník; [email protected]\nSubject: Re: [PERFORM] Cursor + upsert (astronomical data)\n\nHi Jiri,\n\nI understand your problem (and I actually have exactly the same one in my sky monitoring experiment). Unfortunately, I have no complete solution for it as of now.\n\nI may just suggest that you look at the q3c_ipixcenter() function from q3c, which returns the Q3C bin number for given coordinates at a given binning depth. You may first group your observations using this function, selecting the binning depth to roughly match your desired angular resolution, and then merge the (much shorter) list of resulting bin numbers (the sort of common identifier you are looking for) for corner cases (when the same cluster of observational points lies between two bins). I did not try this approach myself yet, but it will probably be significantly faster.\n\nGood luck!\n\nSergey\n\n2014-07-27 20:56 GMT+04:00 Oleg Bartunov <[email protected]>:\n> Jiri,\n>\n> as I understand your problem is called crossmatch ? I attach pdf of \n> our work in progress, where we compared several spatial indexing \n> techniques, including postgis, q3c and new pgsphere. Sergey Karpov \n> made these benchmarks.\n>\n> New pgsphere you may find here\n> https://github.com/akorotkov/pgsphere\n>\n> Oleg\n>\n> On Sat, Jul 26, 2014 at 1:46 PM, Jiří Nádvorník <[email protected]> wrote:\n>> Hello guys.\n>>\n>>\n>>\n>> My issue kind of hits multiple topics, but the main question is about \n>> performance. 
I think you need to understand the background a little \n>> bit to be able to help me. So I will firstly define the problem and \n>> my solutions to it and place the questions for you to the end of this message.\n>>\n>>\n>>\n>> Problem:\n>>\n>> I have a table of observations of objects on the sky. The most \n>> important columns are the coordinates (x,y). All other columns in \n>> there are just additional information about the observation. The \n>> problem is that when I take an image of the same object on the sky \n>> twice, the coordinates x,y won’t be the same, they will be only close \n>> to each other. My task is to generate a common identifier to all of \n>> the observations of the same object and assign the observations to it \n>> (N:1 relation). The criterium is that all of the observations which \n>> are within 1 arcsec of this identifier are considered as the same \n>> object. I keep the identifiers in a separate table (objcat) and have a foreign key in the observations table.\n>>\n>> The reason why I solve the performance issues here is that the table \n>> of observations has atm cca 3e8 rows after 1.5 year of gathering the \n>> data. The number growth is linear.\n>>\n>>\n>>\n>> Technical background:\n>>\n>> I’m trying to keep the whole algoritm in DB if possible because I \n>> have a good PostgreSQL plugin Q3C for indexing the coordinates of the \n>> objects on the sky \n>> (https://code.google.com/p/q3c/source/browse/README.q3c). It also has \n>> quite a few stored procedures to look in that table for near \n>> neighbors which is what I’m doing. The function q3c_join(x1, x2, y1, \n>> y2, radius) returns true if the object y is within radius of the \n>> object x. It simply generates a list of index bitmap or queries with \n>> the operators <=, >= which define the position on sky. 
Asking for the nearest neighbors are then only index scans.\n>>\n>>\n>>\n>> Solution:\n>>\n>> After lot of experimentation with clustering the objects and trying \n>> to process them all together in one “silver-bullet” SQL query I \n>> decided to use some more simple approach. The main problem with the \n>> “process all at once approach” is that the finding neighbours for \n>> each observation is in definition quadratic and for 3e8 rows just \n>> runs out of disk space (~TBs of memory for the temporary results).\n>>\n>> The simplest approach I could think of is that I process each row of \n>> the 3e8 rows sequentially and ask:\n>>\n>> Do I have an identifier in the radius of 1 arcsec?\n>>\n>> No: Generate one and assign me to it.\n>>\n>> Yes: Update it and assigne me to it. The update is done as weighted \n>> average – I keep the number of how many observations the identifier \n>> has been computed. The result is that the identifier will have \n>> average coordinates of all the observations it identifies – it will be the center.\n>>\n>>\n>>\n>> So here I come with my procedure. It has 3 params. The first two are \n>> range of oids to list in the table. Used for scaling and \n>> parallelization of the algorithm. 
The third is the radius in which to search for the neighbours.\n>>\n>>\n>>\n>> DROP TYPE IF EXISTS coords;\n>>\n>> CREATE TYPE coords AS (\n>>\n>> raj2000 double precision,\n>>\n>> dej2000 double precision\n>>\n>> );\n>>\n>>\n>>\n>>\n>>\n>> DROP FUNCTION IF EXISTS build_catalog(int,int,double precision);\n>>\n>> CREATE OR REPLACE FUNCTION build_catalog (fromOID int, toOID int, \n>> radius double precision)\n>>\n>> RETURNS VOID AS $$\n>>\n>> DECLARE\n>>\n>> cur1 CURSOR FOR\n>>\n>> SELECT\n>>\n>> raj2000, dej2000\n>>\n>> FROM\n>>\n>> \\schema.observation AS \n>> obs\n>>\n>> WHERE\n>>\n>> obs.oid >= fromOID\n>>\n>> AND\n>>\n>> obs.oid < toOID;\n>>\n>> curr_raj2000 double precision;\n>>\n>> curr_dej2000 double precision;\n>>\n>> curr_coords_cat coords;\n>>\n>> cnt int;\n>>\n>>\n>>\n>> BEGIN\n>>\n>> /*SELECT current_setting('transaction_isolation') into tmp;\n>>\n>> raise notice 'Isolation level %', tmp;*/\n>>\n>> OPEN cur1;\n>>\n>> cnt:=0;\n>>\n>> LOCK TABLE \\schema.objcat IN SHARE ROW EXCLUSIVE MODE;\n>>\n>> LOOP\n>>\n>> FETCH cur1 INTO curr_raj2000, curr_dej2000;\n>>\n>> EXIT WHEN NOT found;\n>>\n>>\n>>\n>> WITH\n>>\n>> upsert\n>>\n>> AS\n>>\n>> (UPDATE\n>>\n>> \\schema.objcat\n>>\n>> SET\n>>\n>> ipix_cat=q3c_ang2ipix(\n>>\n>> \n>> (raj2000 * weight + curr_raj2000) / (weight + 1),\n>>\n>> \n>> (dej2000 * weight + curr_dej2000) / (weight + 1)\n>>\n>> ),\n>>\n>> raj2000 = (raj2000 * \n>> weight +\n>> curr_raj2000) / (weight + 1),\n>>\n>> dej2000 = (dej2000 * \n>> weight +\n>> curr_dej2000) / (weight + 1),\n>>\n>> weight=weight+1\n>>\n>> WHERE\n>>\n>> q3c_join(curr_raj2000, \n>> curr_dej2000,\n>>\n>> \n>> raj2000, dej2000,\n>>\n>> \n>> radius)\n>>\n>> RETURNING *),\n>>\n>> ins AS\n>>\n>> (INSERT INTO\n>>\n>> \\schema.objcat\n>>\n>> (ipix_cat, raj2000, \n>> dej2000,\n>> weight)\n>>\n>> SELECT\n>>\n>> \n>> (q3c_ang2ipix(curr_raj2000, curr_dej2000)),\n>>\n>> curr_raj2000,\n>>\n>> curr_dej2000,\n>>\n>> 1\n>>\n>> WHERE NOT EXISTS\n>>\n>> (SELECT * FROM upsert)\n>>\n>> 
RETURNING *)\n>>\n>> UPDATE\n>>\n>> \\schema.observation\n>>\n>> SET\n>>\n>> id_cat = (SELECT DISTINCT\n>>\n>> id_cat\n>>\n>> FROM\n>>\n>> upsert\n>>\n>> UNION\n>>\n>> SELECT\n>>\n>> id_cat\n>>\n>> FROM\n>>\n>> ins\n>>\n>> WHERE id_cat IS NOT \n>> NULL\n>>\n>> LIMIT 1)\n>>\n>> WHERE CURRENT OF cur1;\n>>\n>> cnt:=cnt+1;\n>>\n>>\n>>\n>> IF ((cnt % 100000 ) = 0) THEN\n>>\n>> RAISE NOTICE 'Processed % entries', \n>> cnt;\n>>\n>> END IF;\n>>\n>>\n>>\n>> END LOOP;\n>>\n>> CLOSE cur1;\n>>\n>> END;\n>>\n>> $$ LANGUAGE plpgsql;\n>>\n>>\n>>\n>> Results: When I run the query only once (1 client thread), it runs \n>> cca 1 mil rows per hour. Which is days for the whole dataset. When I \n>> run it in parallel with that lock to ensure pessimistic \n>> synchronization, it runs sequentially too J the other threads just \n>> waiting. When I delete that lock and hope to solve the resulting \n>> conflicts later, the ssd disk serves up to 4 threads relatively \n>> effectively – which can divide my days of time by 4 – still inacceptable.\n>>\n>>\n>>\n>> The reason is quite clear here – I’m trying to write something in one \n>> cycle of the script to a table – then in the following cycle I need \n>> to read that information.\n>>\n>>\n>>\n>> Questions for you:\n>>\n>> 1. The first question is if you can think of a better way how to do\n>> this and maybe if SQL is even capable of doing such thing – or do I \n>> have to do it in C? Would rewriting the SQL function to C help?\n>>\n>> 2. Could I somehow bend the commiting during the algorithm for my\n>> thing? Ensure that inside one cycle, the whole part of the \n>> identifiers table would be kept in memory for faster lookups?\n>>\n>> 3. Why is this so slow? 
:) It is comparable to the quadratic algorithm\n>> in the terms of speed – only does not use any memory.\n>>\n>>\n>>\n>> I tried to sum this up the best I could – for more information please \n>> don’t hesitate to contact me.\n>>\n>>\n>>\n>> Thank you very much for even reading this far.\n>>\n>>\n>>\n>> Best Regards,\n>>\n>>\n>>\n>> Jiri Nadvornik",
"msg_date": "Tue, 29 Jul 2014 19:50:43 +0200",
"msg_from": "Jiří Nádvorník <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
},
{
"msg_contents": "On Sat, Jul 26, 2014 at 3:46 AM, Jiří Nádvorník <[email protected]>\nwrote:\n\n\n\n> The reason why I solve the performance issues here is that the table of\n> observations has at the moment ca. 3e8 rows after 1.5 years of gathering the data. The\n> number growth is linear.\n>\n\nSo about 500,000 new records a day.\n\n\n\n (UPDATE\n>\n> \\schema.objcat\n>\n> SET\n>\n> ipix_cat=q3c_ang2ipix(\n>\n> (raj2000 *\n> weight + curr_raj2000) / (weight + 1),\n>\n> (dej2000 *\n> weight + curr_dej2000) / (weight + 1)\n>\n> ),\n>\n> raj2000 = (raj2000 * weight\n> + curr_raj2000) / (weight + 1),\n>\n> dej2000 = (dej2000 * weight\n> + curr_dej2000) / (weight + 1),\n>\n> weight=weight+1\n>\n> WHERE\n>\n> q3c_join(curr_raj2000,\n> curr_dej2000,\n>\n> raj2000,\n> dej2000,\n>\n> radius)\n>\n> RETURNING *),\n>\n\n\nDoing all of this (above, plus the other parts I snipped) as a single query\nseems far too clever. How can you identify the slow component when you\nhave them all munged up like that?\n\nTake the above select query and run it with 'explain (analyze, buffers)'\non a random smattering of records, periodically while the load process is\ngoing on.\n\n\n\n>\n> Results: When I run the query only once (1 client thread), it runs ca. 1\n> mil rows per hour.\n>\n\nIs that 1 million, or 1 thousand? I'm assuming million, but...\n\n\n> Which is days for the whole dataset. When I run it in parallel with that\n> lock to ensure pessimistic synchronization, it runs sequentially too :)\n> the other threads just waiting. When I delete that lock and hope to solve\n> the resulting conflicts later, the ssd disk serves up to 4 threads\n> relatively effectively – which can divide my days of time by 4 – still\n> unacceptable.\n>\n\nIt is processing new records 192 times faster than you are generating them.\n Why is that not acceptable?\n\n\n>\n>\n> The reason is quite clear here – I’m trying to write something in one\n> cycle of the script to a table – then in the following cycle I need to read\n> that information.\n>\n\nThat is the reason for concurrency issues, but it is not clear that that is\nthe reason that the performance is not what you desire. If you first\npartition your data into stripes that are a few arc minutes wide, each\nstripe should not interact with anything other than itself and two\nneighbors. That should parallelize nicely.\n\n\n>\n>\n> Questions for you:\n>\n> 1. The first question is if you can think of a better way how to do\n> this and maybe if SQL is even capable of doing such thing – or do I have to\n> do it in C? Would rewriting the SQL function to C help?\n>\n\nSkillfully hand-crafted C will always be faster than SQL, if you don't\ncount the time needed to write and debug it.\n\n\n\n> 2. Could I somehow bend the committing during the algorithm for my\n> thing? Ensure that inside one cycle, the whole part of the identifiers\n> table would be kept in memory for faster lookups?\n>\nIs committing a bottleneck? It looks like you are doing everything in\nlarge transactional chunks already, so it probably isn't. If the\nidentifier table fits in memory, it should probably stay in memory on\nits own just through usage. If it doesn't fit, there isn't much you can do\nother than pre-cluster the data in a coarse-grained way such that only a\nfew parts of the table (and its index) are active at any one time, such\nthat those active parts stay in memory.\n\n\n\n> 3. Why is this so slow? :) It is comparable to the quadratic\n> algorithm in terms of speed – only does not use any memory.\n>\n\nUse 'explain (analyze, buffers)', preferably with track_io_timing on. Use\ntop, strace, gprof, or perf.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 29 Jul 2014 11:36:32 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor + upsert (astronomical data)"
}
] |
[
{
"msg_contents": "Explained here:\nhttps://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf\n\n13 out of 15 tested SSDs had various kinds of corruption on a power-out.\n\n(thanks, Neil!)\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 29 Jul 2014 20:12:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why you should turn on Checksums with SSDs"
},
{
"msg_contents": "On 30 July 2014, 5:12, Josh Berkus wrote:\n> Explained here:\n> https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf\n>\n> 13 out of 15 tested SSD's had various kinds of corruption on a power-out.\n>\n> (thanks, Neil!)\n\nWell, only four of the devices supposedly had power-loss protection\n(battery, capacitor, ...) so I guess it's not really that surprising the\nremaining 11 devices failed in a test like this. Although it really\nshouldn't damage the device, as apparently happened during the tests.\n\nToo bad they haven't mentioned which SSDs they've been testing\nspecifically. While I understand the reason for that (HP Labs can't just\npoint at products from other companies), it significantly limits the\nusefulness of the study. Too many companies are producing crappy\nconsumer-level devices, advertising them as \"enterprise\". I could name a\nfew ...\n\nMaybe it could be deciphered using the information in the paper\n(power-loss protection, year of release, ...).\n\nI'd expect to see the Intel 320/710 there, but that seems not to be the\ncase, because those devices were released in 2011 and all four of the devices\nwith power-loss protection have year=2012. Or maybe it's the year when\nthat particular device was manufactured?\n\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 11:01:55 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why you should turn on Checksums with SSDs"
},
{
"msg_contents": "On Wed, Jul 30, 2014 at 4:01 AM, Tomas Vondra <[email protected]> wrote:\n> On 30 Červenec 2014, 5:12, Josh Berkus wrote:\n>> Explained here:\n>> https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf\n>>\n>> 13 out of 15 tested SSD's had various kinds of corruption on a power-out.\n>>\n>> (thanks, Neil!)\n>\n> Well, only four of the devices supposedly had a power-loss protection\n> (battery, capacitor, ...) so I guess it's not really that surprising the\n> remaining 11 devices failed in a test like this. Although it really\n> shouldn't damage the device, as apparently happened during the tests.\n>\n> Too bad they haven't mentioned which SSDs they've been testing\n> specifically. While I understand the reason for that (HP Labs can't just\n> point at products from other companies), it significantly limits the\n> usefulness of the study. Too many companies are producing crappy\n> consumer-level devices, advertising them as \"enterprise\". I could name a\n> few ...\n>\n> Maybe it could be deciphered using the information in the paper\n> (power-loss protection, year of release, ...).\n>\n> I'd expect to see Intel 320/710 to see there, but that seems not to be the\n> case, because those devices were released in 2011 and all the four devices\n> with power-loss protection have year=2012. Or maybe it's the year when\n> that particular device was manufactured?\n\nTake a look here:\nhttp://hardware.slashdot.org/story/13/12/27/208249/power-loss-protected-ssds-tested-only-intel-s3500-passes\n\n\"Only the end-of-lifed Intel 320 and its newer replacement, the S3500,\nsurvived unscathed. The conclusion: if you care about data even when\npower could be unreliable, only buy Intel SSDs.\"\"\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 13:53:07 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why you should turn on Checksums with SSDs"
}
] |
[
{
"msg_contents": "Greetings,\n\nAny help regarding a sporadic and mysterious issue would be much appreciated.\n\nA pgsql function for loading in data is occasionally taking 12+ hours\nto complete versus its normal 1-2 hours, due to a slow down at the\nCREATE TEMP TABLE step. During slow runs of the function, the temp\ntable data file is being written to at 8192 bytes/second. This rate\nwas consistent at the 5 hour mark up until I canceled the query at 6\nhrs in. An immediate rerunning of the function finished in an hour.\nTemp table file size was 226 MB and was created in ~15 mins.\n\nPostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n\nLinux 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014\nx86_64 x86_64 x86_64 GNU/Linux\n\nProLiant DL380p Gen8, 2 x E5-2620 (hyperthreading on)\n96 GB\npgsql data dir mounted on 25 x ssd storage array, connected via fibre\nchannel, pg_xlog on a RAID 10 hdd array\ndeadline scheduler\n8192 readahead\n\n name | current_setting | source\n------------------------------+---------------------------+--------------------\n application_name | psql | client\n archive_command | ********* | configuration file\n archive_mode | on | configuration file\n autovacuum | on | configuration file\n autovacuum_max_workers | 6 | configuration file\n bgwriter_delay | 40ms | configuration file\n bgwriter_lru_maxpages | 1000 | configuration file\n bgwriter_lru_multiplier | 3 | configuration file\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_segments | 1024 | configuration file\n checkpoint_timeout | 30min | configuration file\n client_encoding | UTF8 | client\n cpu_operator_cost | 0.5 | configuration file\n cpu_tuple_cost | 0.5 | configuration file\n DateStyle | ISO, MDY | configuration file\n deadlock_timeout | 10s | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 70GB | configuration file\n 
effective_io_concurrency | 6 | configuration file\n full_page_writes | on | configuration file\n hot_standby | on | configuration file\n hot_standby_feedback | on | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_autovacuum_min_duration | 1s | configuration file\n log_checkpoints | on | configuration file\n log_destination | csvlog | configuration file\n log_file_mode | 0600 | configuration file\n log_filename | postgresql-%a.log | configuration file\n log_lock_waits | on | configuration file\n log_min_duration_statement | 250ms | configuration file\n log_rotation_age | 1d | configuration file\n log_rotation_size | 0 | configuration file\n log_statement | ddl | configuration file\n log_timezone | America/New_York | configuration file\n log_truncate_on_rotation | on | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 2400MB | configuration file\n max_connections | 1000 | configuration file\n max_stack_depth | 5MB | configuration file\n max_wal_senders | 5 | configuration file\n port | 5432 | command line\n random_page_cost | 4 | session\n seq_page_cost | 1 | configuration file\n shared_buffers | 8GB | configuration file\n shared_preload_libraries | auto_explain | configuration file\n stats_temp_directory | /var/lib/pgsql_stat_tmpfs | configuration file\n TimeZone | America/New_York | configuration file\n track_activities | on | configuration file\n track_counts | on | configuration file\n track_functions | all | configuration file\n track_io_timing | on | configuration file\n update_process_title | on | configuration file\n wal_buffers | 64MB | configuration file\n wal_keep_segments | 2000 | configuration file\n wal_level | hot_standby | configuration file\n work_mem | 32MB | configuration file\n\n\nNumber of 
connections at any one time on the database is 300-400, with\nthe majority idle - there are legacy reasons for that and the high\nmax_connections.\n\nkernel.shmmax = 50701037568\nkernel.shmall = 12378183\nvm.swappiness = 0\nvm.overcommit_memory = 2\nvm.dirty_background_ratio = 2\nvm.dirty_background_bytes = 0\nvm.dirty_ratio = 5\nvm.dirty_bytes = 0\nvm.dirty_writeback_centisecs = 500\nvm.dirty_expire_centisecs = 3000\n\n\nEXPLAIN ANALYZE\nCREATE TEMPORARY TABLE temp AS\n SELECT rxfill.*, betapat.betapatientid, patrx.rxnum,\npatrx.patientrxid as patientrxid, betapat.pharmacypatientid AS\noriginalpharmacypatientid, store.storeid as betastoreid\n FROM rxfilldata_parent rxfill\n JOIN (select MAX(id) id, storeid, rxnbr FROM rxfilldata_parent\ntmp WHERE clientid = 118 AND tmp.pkgfileid = 417995 GROUP BY storeid,\nrxnbr) rxfillmax ON (rxfillmax.id = rxfill.id)\n JOIN client.client client ON (rxfill.clientid = client.clientid)\n JOIN client.chain chain ON (client.clientid = chain.clientid)\n JOIN client.store store ON (chain.chainid = store.chainid AND\nrxfill.storeid = store.clientstoreid)\n LEFT OUTER JOIN patient.patientrx118 patrx ON (rxfill.clientid =\npatrx.clientid AND rxfill.rxnbr = patrx.rxnum AND patrx.storeid =\nstore.storeid)\n LEFT OUTER JOIN patient.betapatient118 betapat ON\n(rxfill.clientid = betapat.clientid AND betapat.storeid =\nstore.storeid AND rxfill.pharmacypatientid =\nbetapat.pharmacypatientid)\n where rxfill.clientid = 118 and rxfill.pkgfileid = 417995;\n\n\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=520598.91..572134.79 rows=7 width=911)\n(actual time=2862.795..90880.990 rows=273566 loops=1)\n Join Filter: (betapat.storeid = store.storeid)\n Rows Removed by Join Filter: 121214\n -> Nested Loop Left Join (cost=520511.91..571468.59 rows=7\nwidth=899) (actual 
time=2862.725..82299.527 rows=273566 loops=1)\n -> Hash Join (cost=520398.91..570601.16 rows=7 width=887)\n(actual time=2859.693..5998.178 rows=273566 loops=1)\n Hash Cond: ((max(tmp.id)) = rxfill.id)\n -> HashAggregate (cost=510782.11..525124.61\nrows=28685 width=15) (actual time=728.322..1014.029 rows=273566\nloops=1)\n -> Bitmap Heap Scan on rxfilldata_parent tmp\n(cost=100172.71..370349.11 rows=93622 width=15) (actual\ntime=44.358..148.145 rows=274808 loops=1)\n Recheck Cond: ((clientid = 118) AND\n(pkgfileid = 417995))\n -> Bitmap Index Scan on ix_neededimport\n(cost=0.00..95491.61 rows=93622 width=0) (actual time=38.742..38.742\nrows=274808 loops=1)\n Index Cond: ((clientid = 118) AND\n(pkgfileid = 417995))\n -> Hash (cost=9206.80..9206.80 rows=410 width=887)\n(actual time=2130.473..2130.473 rows=274808 loops=1)\n Buckets: 1024 Batches: 8 (originally 1) Memory\nUsage: 32769kB\n -> Nested Loop (cost=264.27..9206.80 rows=410\nwidth=887) (actual time=1.239..1216.518 rows=274808 loops=1)\n -> Nested Loop (cost=178.77..261.15\nrows=13 width=13) (actual time=0.343..2.309 rows=966 loops=1)\n -> Nested Loop (cost=110.00..124.51\nrows=1 width=8) (actual time=0.093..0.098 rows=1 loops=1)\n -> Index Only Scan using\npk_client on client (cost=55.00..60.00 rows=1 width=4) (actual\ntime=0.076..0.078 rows=1 loops=1)\n Index Cond: (clientid = 118)\n Heap Fetches: 0\n -> Index Scan using ix_chain\non chain (cost=55.00..64.00 rows=1 width=8) (actual time=0.014..0.016\nrows=1 loops=1)\n Index Cond: (clientid = 118)\n -> Bitmap Heap Scan on store\n(cost=68.77..129.64 rows=14 width=13) (actual time=0.246..1.572\nrows=966 loops=1)\n Recheck Cond: (chainid = chain.chainid)\n -> Bitmap Index Scan on\nix_store (cost=0.00..68.07 rows=14 width=0) (actual time=0.216..0.216\nrows=966 loops=1)\n Index Cond: (chainid =\nchain.chainid)\n -> Index Scan using ix_merge8_daily on\nrxfilldata_parent rxfill (cost=85.50..670.63 rows=35 width=883)\n(actual time=0.120..1.081 rows=284 loops=966)\n 
Index Cond: (((storeid)::text =\n(store.clientstoreid)::text) AND (clientid = 118))\n Filter: (pkgfileid = 417995)\n Rows Removed by Filter: 292\n -> Index Scan using ux2_patientrx118 on patientrx118 patrx\n(cost=113.00..123.42 rows=1 width=20) (actual time=0.275..0.277 rows=1\nloops=273566)\n Index Cond: (((rxfill.rxnbr)::text = (rxnum)::text) AND\n(rxfill.clientid = clientid) AND (clientid = 118) AND (storeid =\nstore.storeid))\n -> Index Scan using ix3_betapatient118 on betapatient118 betapat\n(cost=87.00..94.17 rows=1 width=20) (actual time=0.025..0.029 rows=1\nloops=273566)\n Index Cond: ((rxfill.pharmacypatientid)::text =\n(pharmacypatientid)::text)\n Filter: ((clientid = 118) AND (rxfill.clientid = clientid))\n Total runtime: 90945.841 ms\n\n\npg_bgwriter_snapshots:\n now | checkpoints_timed | checkpoints_req |\ncheckpoint_write_time | checkpoint_sync_time | buffers_checkpoint |\nbuffers_clean | maxwritten_clean | buffers_backend |\nbuffers_backend_fsync | buffers_alloc | stats_reset\n-------------------------------+-------------------+-----------------+-----------------------+----------------------+--------------------+---------------+------------------+-----------------+-----------------------+---------------+-------------------------------\n 2014-07-29 10:00:01.65379-04 | 4 | 0\n| 2108808 | 1586 | 151291 |\n 2207939 | 83 | 67605 |\n 0 | 28932559 | 2014-07-29 08:01:01.623033-04\n 2014-07-29 11:00:01.578856-04 | 2 | 0 |\n 409840 | 618 | 17033 |\n 734718 | 53 | 52256 |\n0 | 30524848 | 2014-07-29 10:01:01.877218-04\n 2014-07-29 12:00:01.420009-04 | 4 | 0 |\n 1939095 | 1120 | 44134 |\n 1515493 | 124 | 114122 |\n0 | 43833671 | 2014-07-29 10:01:01.877218-04\n 2014-07-29 13:00:01.234634-04 | 2 | 0 |\n 1364481 | 169 | 16329 |\n 427183 | 11 | 30784 |\n0 | 21273967 | 2014-07-29 12:01:01.542161-04\n 2014-07-29 14:00:02.007022-04 | 4 | 0 |\n 2493810 | 607 | 43316 |\n 1233492 | 115 | 93203 |\n0 | 42564936 | 2014-07-29 12:01:01.542161-04\n 2014-07-29 15:00:01.713446-04 
| 2 | 0 |\n 854284 | 93 | 15033 |\n 215280 | 28 | 14880 |\n0 | 25766378 | 2014-07-29 14:01:01.119926-04\n 2014-07-29 16:00:01.542989-04 | 4 | 0 |\n 2704023 | 322 | 28730 |\n 330864 | 28 | 22278 |\n0 | 43323265 | 2014-07-29 14:01:01.119926-04\n 2014-07-29 17:00:01.39066-04 | 2 | 0 |\n 809083 | 139 | 8264 |\n 167504 | 0 | 15206 |\n0 | 33998495 | 2014-07-29 16:01:01.671427-04\n 2014-07-29 18:00:01.375621-04 | 4 | 0 |\n 1767296 | 335 | 17826 |\n 252109 | 0 | 29399 |\n0 | 58038221 | 2014-07-29 16:01:01.671427-04\n 2014-07-29 19:00:01.252766-04 | 2 | 0 |\n 851746 | 278 | 30729 |\n 1737370 | 149 | 36806 |\n0 | 41487870 | 2014-07-29 18:01:01.474614-04\n 2014-07-29 20:00:02.07362-04 | 4 | 0 |\n 1026422 | 364 | 44182 |\n 2209503 | 219 | 68020 |\n0 | 63828691 | 2014-07-29 18:01:01.474614-04\n\n\nThe sar output for the timeframe the problem occurred is available as\nwell, if that would be helpful, but does not show any issues of\nunusual load.\n\nThanks for any help,\nClinton\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 30 Jul 2014 14:43:02 -0400",
"msg_from": "Clinton Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow create temp table"
}
] |
[
{
"msg_contents": "Hi All,\n\nUsing PostgreSQL 9.3 on Linux Red-Hat platform.\n\nHow would I go about setting a default format for the timestamp? I've set the default for TimeZone to UTC.\n\nThank you in advance for any information.\n\nDF\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Thu, 31 Jul 2014 20:43:02 +0000",
"msg_from": "\"Ferrell, Denise CTR NSWCDD, Z11\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting a default format for timestamp"
},
{
"msg_contents": "_Where_ do you want that default format? Are you asking about a field? psql output? The O/S? \n\nWhat you're asking might be as simple as \"datestyle,\" if you're just talking about the output mask: \n\nhttp://www.postgresql.org/docs/9.1/static/runtime-config-client.html \n\n\n\nDateStyle (string)\nSets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. For historical reasons, this variable contains two independent components: the output format specification (ISO, Postgres, SQL, or German) and the input/output specification for year/month/day ordering (DMY, MDY, or YMD). These can be set separately or together. The keywords Euro and European are synonyms for DMY; the keywords US, NonEuro, and NonEuropean are synonyms for MDY. See Section 8.5 for more information. The built-in default is ISO, MDY, but initdb will initialize the configuration file with a setting that corresponds to the behavior of the chosen lc_time locale. \n\n----- Original Message ----- \n\n> Hi All,\n\n> Using PostgreSQL 9.3 on Linux Red-Hat platform.\n\n> How would I go about setting a default format for the timestamp? I've set the\n> default for TimeZone to UTC.\n\n> Thank you in advance for any information.\n\n> DF\n\n> --\n> Sent via pgsql-admin mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-admin\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Thu, 31 Jul 2014 15:59:01 -0500 (CDT)",
"msg_from": "Scott Whitney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting a default format for timestamp"
}
] |
[
{
"msg_contents": "Hi all. Running version: on=> select version();\n version\n \n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro \n4.6.3-1ubuntu5) 4.6.3, 64-bit I have this query. prepare \nlist_un_done_in_folder_q AS\n SELECT em.entity_id, substr(em.plain_text_content, 1, 101) as \nplain_text_content, del.entity_id as delivery_id, del.subject,\n coalesce(prop.is_seen, false) AS is_seen,\n coalesce(prop.is_done, false) \nAS is_done,\n del.received_timestamp, del.sent_timestamp, ef.name as from_name, \nef.address as from_address\n , ARRAY(select a.name from origo_email_address a inner join \norigo_email_address_owner o ON o.address_id = a.entity_id AND o.recipient_type \n= 'TO' AND o.message_id = em.entity_id ORDER BY o.address_index ASC) as \nrecipient_to_name\n , ARRAY(select a.address from origo_email_address a inner join \norigo_email_address_owner o ON o.address_id = a.entity_id AND o.recipient_type \n= 'TO' AND o.message_id = em.entity_id ORDER BY o.address_index ASC) as \nrecipient_to_address\n , prop.followup_id, prop.is_forwarded as is_forwarded, prop.is_replied as \nis_replied, fm.folder_id\n , (SELECT\n person_fav.priority\n FROM origo_favourite_owner person_fav\n WHERE person_fav.favourite_for = $1\n AND person_fav.favourite_item = pers.entity_id)\n AS person_favourite_priority\n , (select company_fav.priority FROM origo_favourite_owner company_fav \nWHERE company_fav.favourite_for = $2\n \n \nAND company_fav.favourite_item = comp.entity_id)\n AS company_favourite_priority\n , pers.entity_id as from_person_entity_id, pers.id as from_person_id, \npers.onp_user_id, pers.firstname as from_firstname, pers.lastname as \nfrom_lastname\n , comp.entity_id as from_company_entity_id, comp.companyname as \nfrom_company_name\n , em.attachment_size FROM origo_email_delivery del JOIN \norigo_email_message em ON (del.message_id = 
em.entity_id)\n LEFT OUTER JOIN onp_crm_person pers ON (em.from_entity_id = \npers.entity_id)\n LEFT OUTER JOIN onp_crm_relation comp ON (comp.entity_id = \npers.relation_id)\n JOIN origo_email_message_property prop ON (em.entity_id = prop.message_id \nAND prop.owner_id = $3)\n LEFT OUTER JOIN origo_email_address ef ON em.from_id = ef.entity_id\n LEFT OUTER JOIN origo_email_folder_message fm ON fm.delivery_id = \ndel.entity_id\n WHERE 1 = 1 AND prop.is_done = FALSE\n AND fm.folder_id = $4 ORDER BY del.received_timestamp DESC LIMIT $5 \nOFFSET $6; Which sometimes performs really bad, although all indexes are \nbeing used. Here is the explain plan: on=> explain analyze execute \nlist_un_done_in_folder_q (3,3,3,44961, 101, 0);\n \n \nQUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2.53..52101.30 rows=101 width=641) (actual \ntime=0.343..311765.063 rows=75 loops=1)\n -> Nested Loop Left Join (cost=2.53..1365402.92 rows=2647 width=641) \n(actual time=0.342..311765.012 rows=75 loops=1)\n -> Nested Loop Left Join (cost=2.10..1248967.97 rows=2647 \nwidth=607) (actual time=0.202..311215.044 rows=75 loops=1)\n -> Nested Loop Left Join (cost=1.83..1248018.82 rows=2647 \nwidth=592) (actual time=0.201..311214.888 rows=75 loops=1)\n -> Nested Loop (cost=1.55..1247217.37 rows=2647 \nwidth=565) (actual time=0.199..311214.695 rows=75 loops=1)\n -> Nested Loop (cost=1.13..1240583.78 rows=2647 \nwidth=126) (actual time=0.194..311213.727 rows=75 loops=1)\n -> Nested Loop (cost=0.71..1230153.64 \nrows=20567 width=118) (actual time=0.029..311153.648 rows=12866 loops=1)\n -> Index Scan Backward using \norigo_email_delivery_received_idx on origo_email_delivery del \n(cost=0.42..1102717.48 rows=354038 width=98) (actual time=0.017..309196.670 \nrows=354296 loops=1)\n -> Index Scan 
using \norigo_email_prop_owner_message_not_done_idx on origo_email_message_property \nprop (cost=0.29..0.35 rows=1 width=20) (actual time=0.004..0.004 rows=0 \nloops=354296)\n Index Cond: ((owner_id = \n3::bigint) AND (message_id = del.message_id))\n -> Index Only Scan using \norigo_email_folder_message_delivery_id_folder_id_key on \norigo_email_folder_message fm (cost=0.42..0.50 rows=1 width=16) (actual \ntime=0.004..0.004 rows=0 loops=12866)\n Index Cond: ((delivery_id = \ndel.entity_id) AND (folder_id = 44961::bigint))\n Heap Fetches: 75\n -> Index Scan using origo_email_message_pkey on \norigo_email_message em (cost=0.42..2.50 rows=1 width=455) (actual \ntime=0.010..0.011 rows=1 loops=75)\n Index Cond: (entity_id = del.message_id)\n -> Index Scan using origo_person_entity_id_idx on \nonp_crm_person pers (cost=0.28..0.29 rows=1 width=35) (actual \ntime=0.001..0.001 rows=0 loops=75)\n Index Cond: (em.from_entity_id = entity_id)\n -> Index Scan using onp_crm_relation_pkey on onp_crm_relation \ncomp (cost=0.27..0.35 rows=1 width=19) (actual time=0.001..0.001 rows=0 \nloops=75)\n Index Cond: (entity_id = pers.relation_id)\n -> Index Scan using origo_email_address_pkey on origo_email_address \nef (cost=0.43..0.68 rows=1 width=50) (actual time=3.165..3.166 rows=1 loops=75)\n Index Cond: (em.from_id = entity_id)\n SubPlan 1\n -> Nested Loop (cost=0.85..16.90 rows=1 width=24) (actual \ntime=1.880..1.882 rows=1 loops=75)\n -> Index Scan using \norigo_email_address_owner_message_id_recipient_type_address_key on \norigo_email_address_owner o (cost=0.42..8.45 rows=1 width=12) (actual \ntime=1.759..1.759 rows=1 loops=75)\n Index Cond: ((message_id = em.entity_id) AND \n((recipient_type)::text = 'TO'::text))\n -> Index Scan using origo_email_address_pkey on \norigo_email_address a (cost=0.43..8.45 rows=1 width=28) (actual \ntime=0.095..0.095 rows=1 loops=93)\n Index Cond: (entity_id = o.address_id)\n SubPlan 2\n -> Nested Loop (cost=0.85..16.90 rows=1 width=26) (actual 
\ntime=0.009..0.010 rows=1 loops=75)\n -> Index Scan using \norigo_email_address_owner_message_id_recipient_type_address_key on \norigo_email_address_owner o_1 (cost=0.42..8.45 rows=1 width=12) (actual \ntime=0.005..0.006 rows=1 loops=75)\n Index Cond: ((message_id = em.entity_id) AND \n((recipient_type)::text = 'TO'::text))\n -> Index Scan using origo_email_address_pkey on \norigo_email_address a_1 (cost=0.43..8.45 rows=1 width=30) (actual \ntime=0.002..0.002 rows=1 loops=93)\n Index Cond: (entity_id = o_1.address_id)\n SubPlan 3\n -> Seq Scan on origo_favourite_owner person_fav (cost=0.00..4.75 \nrows=1 width=4) (actual time=0.024..0.024 rows=0 loops=75)\n Filter: ((favourite_for = 3::bigint) AND (favourite_item = \npers.entity_id))\n Rows Removed by Filter: 183\n SubPlan 4\n -> Seq Scan on origo_favourite_owner company_fav \n(cost=0.00..4.75 rows=1 width=4) (actual time=0.021..0.021 rows=0 loops=75)\n Filter: ((favourite_for = 3::bigint) AND (favourite_item = \ncomp.entity_id))\n Rows Removed by Filter: 183\n Total runtime: 311765.351 ms\n (42 rows) Some-times it performs much better (but still not good): on=> \nexplain analyze execute list_un_done_in_folder_q (3,3,3,44961, 101, 0);\n \n \nQUERY PLAN\n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2.53..52101.45 rows=101 width=641) (actual time=0.276..3480.876 \nrows=75 loops=1)\n -> Nested Loop Left Join (cost=2.53..1365406.92 rows=2647 width=641) \n(actual time=0.276..3480.846 rows=75 loops=1)\n -> Nested Loop Left Join (cost=2.10..1248971.97 rows=2647 \nwidth=607) (actual time=0.158..3461.045 rows=75 loops=1)\n -> Nested Loop Left Join (cost=1.83..1248022.82 rows=2647 \nwidth=592) (actual time=0.156..3460.955 rows=75 loops=1)\n -> Nested Loop (cost=1.55..1247221.37 rows=2647 \nwidth=565) (actual 
time=0.155..3449.744 rows=75 loops=1)\n -> Nested Loop (cost=1.13..1240587.78 rows=2647 \nwidth=126) (actual time=0.148..3449.134 rows=75 loops=1)\n -> Nested Loop (cost=0.71..1230157.64 \nrows=20567 width=118) (actual time=0.034..3407.201 rows=12870 loops=1)\n -> Index Scan Backward using \norigo_email_delivery_received_idx on origo_email_delivery del \n(cost=0.42..1102717.48 rows=354038 width=98) (actual time=0.019..2431.773 \nrows=354300 loops=1)\n -> Index Scan using \norigo_email_prop_owner_message_not_done_idx on origo_email_message_property \nprop (cost=0.29..0.35 rows=1 width=20) (actual time=0.002..0.002 rows=0 \nloops=354300)\n Index Cond: ((owner_id = \n3::bigint) AND (message_id = del.message_id))\n -> Index Only Scan using \norigo_email_folder_message_delivery_id_folder_id_key on \norigo_email_folder_message fm (cost=0.42..0.50 rows=1 width=16) (actual \ntime=0.003..0.003 rows=0 loops=12870)\n Index Cond: ((delivery_id = \ndel.entity_id) AND (folder_id = 44961::bigint))\n Heap Fetches: 75\n -> Index Scan using origo_email_message_pkey on \norigo_email_message em (cost=0.42..2.50 rows=1 width=455) (actual \ntime=0.007..0.007 rows=1 loops=75)\n Index Cond: (entity_id = del.message_id)\n -> Index Scan using origo_person_entity_id_idx on \nonp_crm_person pers (cost=0.28..0.29 rows=1 width=35) (actual \ntime=0.000..0.000 rows=0 loops=75)\n Index Cond: (em.from_entity_id = entity_id)\n -> Index Scan using onp_crm_relation_pkey on onp_crm_relation \ncomp (cost=0.27..0.35 rows=1 width=19) (actual time=0.000..0.000 rows=0 \nloops=75)\n Index Cond: (entity_id = pers.relation_id)\n -> Index Scan using origo_email_address_pkey on origo_email_address \nef (cost=0.43..0.68 rows=1 width=50) (actual time=0.007..0.008 rows=1 loops=75)\n Index Cond: (em.from_id = entity_id)\n SubPlan 1\n -> Nested Loop (cost=0.85..16.90 rows=1 width=24) (actual \ntime=0.012..0.014 rows=1 loops=75)\n -> Index Scan using \norigo_email_address_owner_message_id_recipient_type_address_key on 
\norigo_email_address_owner o (cost=0.42..8.45 rows=1 width=12) (actual \ntime=0.008..0.008 rows=1 loops=75)\n Index Cond: ((message_id = em.entity_id) AND \n((recipient_type)::text = 'TO'::text))\n -> Index Scan using origo_email_address_pkey on \norigo_email_address a (cost=0.43..8.45 rows=1 width=28) (actual \ntime=0.002..0.003 rows=1 loops=93)\n Index Cond: (entity_id = o.address_id)\n SubPlan 2\n -> Nested Loop (cost=0.85..16.90 rows=1 width=26) (actual \ntime=0.007..0.008 rows=1 loops=75)\n -> Index Scan using \norigo_email_address_owner_message_id_recipient_type_address_key on \norigo_email_address_owner o_1 (cost=0.42..8.45 rows=1 width=12) (actual \ntime=0.004..0.004 rows=1 loops=75)\n Index Cond: ((message_id = em.entity_id) AND \n((recipient_type)::text = 'TO'::text))\n -> Index Scan using origo_email_address_pkey on \norigo_email_address a_1 (cost=0.43..8.45 rows=1 width=30) (actual \ntime=0.002..0.002 rows=1 loops=93)\n Index Cond: (entity_id = o_1.address_id)\n SubPlan 3\n -> Seq Scan on origo_favourite_owner person_fav (cost=0.00..4.75 \nrows=1 width=4) (actual time=0.027..0.027 rows=0 loops=75)\n Filter: ((favourite_for = 3::bigint) AND (favourite_item = \npers.entity_id))\n Rows Removed by Filter: 183\n SubPlan 4\n -> Seq Scan on origo_favourite_owner company_fav \n(cost=0.00..4.75 rows=1 width=4) (actual time=0.023..0.023 rows=0 loops=75)\n Filter: ((favourite_for = 3::bigint) AND (favourite_item = \ncomp.entity_id))\n Rows Removed by Filter: 183\n Total runtime: 3481.136 ms\n (42 rows) Does anyone see anything obvious or have any hints what to \ninvestigate further? Thanks. -- Andreas Joseph Krogh CTO / Partner - Visena \nAS Mobile: +47 909 56 963 [email protected] <mailto:[email protected]> \nwww.visena.com <https://www.visena.com> <https://www.visena.com>",
"msg_date": "Tue, 5 Aug 2014 23:38:08 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performing very bad and sometimes good"
},
{
"msg_contents": "Andreas Joseph Krogh-2 wrote\n> Hi all. Running version: on=> select version();\n> version\n> \n> ------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc\n> (Ubuntu/Linaro \n> 4.6.3-1ubuntu5) 4.6.3, 64-bit \n\n9.3.2 is not release-worthy....\n\n\n> Bad:\n> Index Scan Backward using origo_email_delivery_received_idx on\n> origo_email_delivery del (cost=0.42..1102717.48 rows=354038 width=98)\n> (actual time=0.017..309196.670 rows=354296 loops=1)\n> \n>>>Add 4 new records<<\n> \n> Good (-ish):\n> Index Scan Backward using origo_email_delivery_received_idx on\n> origo_email_delivery del (cost=0.42..1102717.48 rows=354038 width=98)\n> (actual time=0.019..2431.773 rows=354300 loops=1)\n\nThe plans appear to be basically identical - and the queries/data as well\naside from the addition of 4 more unmatched records.\n\nThe difference between the two is likely attributable to system load\nvariations combined with the effect of caching after running the query the\nfirst (slow) time.\n\nDoing OFFSET/LIMIT pagination can be problematic so I'd be curious what\nwould happen if you got rid of it. In this specific case the result set is\nonly 75 with 101 allowed anyway.\n\nThe left joins seem to be marginal so I'd toss those out and optimize the\ninner joins and, more likely, the correlated subqueries in the select list. \nYou need to avoid nested looping over 300,000+ records somehow - though I'm\nnot going to be that helpful in the actual how part...\n\nNote that in the inner-most loop the actual time for the cached data is half\nof the non-cached data. While both are quite small (0.002/0.004) the\n300,000+ loops do add up. 
The same likely applies to the other planning\nnodes but I didn't dig that deep.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-performing-very-bad-and-sometimes-good-tp5813831p5813847.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Aug 2014 17:16:30 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performing very bad and sometimes good"
},
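David's caution about OFFSET/LIMIT pagination can be sketched as a keyset ("seek") query. This is an illustrative sketch only: the `received` column is assumed from the index name `origo_email_delivery_received_idx` in the plans above, and `$1` stands for the last value shown on the previous page.

```sql
-- Hedged sketch: keyset pagination instead of OFFSET/LIMIT.
-- "received" is an assumed column name; the real query would keep
-- its joins and subplans on top of this scan.
SELECT del.*
FROM origo_email_delivery del
WHERE del.received < $1        -- "received" of the last row already shown
ORDER BY del.received DESC     -- matches origo_email_delivery_received_idx
LIMIT 101;
```

Because the predicate and the ORDER BY follow the same index, each page reads only the rows it returns instead of walking the whole index backward.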
{
"msg_contents": "Andreas Joseph Krogh <[email protected]> wrote:\n\n> Some-times it performs much better (but still not good)\n\nAs has already been suggested, that difference is almost certainly \ndue to differences in how much of the necessary data is cached or \nwhat the query is competing with.\n\n> Does anyone see anything obvious or have any hints what to\n> investigate further?\n\nWe need more information to be able to say much. Please review\nthis page:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nKnowing more about the hardware and the tables (including all\nindexes) would help a lot, as well as all non-default configuration\nsettings. In particular, I'm curious whether there is an index on\nthe message_id column of origo_email_delivery.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Aug 2014 07:04:32 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performing very bad and sometimes good"
}
] |
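Kevin's closing question can be made concrete with a sketch. Table and column names are taken from the plans in the thread; whether such an index already exists is exactly what he is asking:

```sql
-- Hypothetical: if origo_email_delivery.message_id has no index, a join
-- driven from the "not done" messages back to their deliveries cannot
-- use an index scan; a plain b-tree index would allow that plan shape.
CREATE INDEX origo_email_delivery_message_id_idx
    ON origo_email_delivery (message_id);
ANALYZE origo_email_delivery;
```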
[
{
"msg_contents": "Hello,\n\nsuppose you have two very simple tables with fk dependency, by which we join them\nand another attribute for sorting\n\nlike this\nselect * from users join notifications on users.id=notifications.user_id ORDER BY users.priority desc ,notifications.priority desc limit 10;\n\nVery typical web query.\n\nNo matter which composite indexes i try, postgresql can not make efficient nested loop plan using indexes.\nIt chooses all sorts of seq scans and hash joins or merge join and always a sort node and then a limit 10.\n\nNeither plan provides acceptable performance. And tables tend to grow =\\\n\nCan anybody suggest something or explain this behavior?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Aug 2014 02:21:51 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "two table join with order by on both tables attributes"
},
{
"msg_contents": "Evgeniy Shishkin wrote\n> Hello,\n> \n> suppose you have two very simple tables with fk dependency, by which we\n> join them\n> and another attribute for sorting\n> \n> like this\n> select * from users join notifications on users.id=notifications.user_id\n> ORDER BY users.priority desc ,notifications.priority desc limit 10;\n> \n> Very typical web query.\n> \n> No matter which composite indexes i try, postgresql can not make efficient\n> nested loop plan using indexes.\n> It chooses all sorts of seq scans and hash joins or merge join and always\n> a sort node and then a limit 10.\n> \n> Neither plan provides acceptable performance. And tables tend to grow =\\\n> \n> Can anybody suggest something or explain this behavior?\n\nCan you explain why a nested loop is best for your data? Given my\nunderstanding of an expected \"priority\"cardinality I would expect your ORDER\nBY to be extremely inefficient and not all that compatible with a nested\nloop approach.\n\nYou can use the various parameters listed on this page to force the desired\nplan and then provide EXPLAIN ANALYZE results for the various executed plans\nand compare them.\n\nhttp://www.postgresql.org/docs/9.3/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-ENABLE\n\nAnd now for the obligatory \"read this\" link:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nIf you can show that in fact the nested loop (or some other plan) performs\nbetter than the one chosen by the planner - and can provide data that the\ndevelopers can use to replicate the experiment - then improvements can be\nmade. 
At worse you will come to understand why the planner is right and can\nthen explore alternative models.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/two-table-join-with-order-by-on-both-tables-attributes-tp5814135p5814137.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 7 Aug 2014 16:42:45 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two table join with order by on both tables attributes"
},
{
"msg_contents": "My question was about that you can not have fast execution of this kind of query in postgresql.\nWith any runtime configuration you just swith from seq scan and hash join to merge join, and then you have a sort node.\n\nIn my understanding, i need to have two indexes\non users(priority desc, id)\nand notifications(user_id, priority desc)\n\nthen postgresql would choose nested loop and get sorted data from indexes.\nBut it wont. \n\nI don't understand why.\n\nDo you have any schema and GUCs which performs this kind of query well?\n\nSorry for top posting. \n\n> Can you explain why a nested loop is best for your data? Given my\n> understanding of an expected \"priority\"cardinality I would expect your ORDER\n> BY to be extremely inefficient and not all that compatible with a nested\n> loop approach.\n> \n> You can use the various parameters listed on this page to force the desired\n> plan and then provide EXPLAIN ANALYZE results for the various executed plans\n> and compare them.\n> \n> http://www.postgresql.org/docs/9.3/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-ENABLE\n> \n> And now for the obligatory \"read this\" link:\n> \n> https://wiki.postgresql.org/wiki/SlowQueryQuestions\n> \n> If you can show that in fact the nested loop (or some other plan) performs\n> better than the one chosen by the planner - and can provide data that the\n> developers can use to replicate the experiment - then improvements can be\n> made. 
At worse you will come to understand why the planner is right and can\n> then explore alternative models.\n> \n> David J.\n> \n> \n> \n> \n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/two-table-join-with-order-by-on-both-tables-attributes-tp5814135p5814137.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Aug 2014 03:02:54 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two table join with order by on both tables attributes"
},
{
"msg_contents": "Evgeniy Shishkin <[email protected]> writes:\n>> select * from users join notifications on users.id=notifications.user_id ORDER BY users.priority desc ,notifications.priority desc limit 10;\n\n> In my understanding, i need to have two indexes\n> on users(priority desc, id)\n> and notifications(user_id, priority desc)\n> then postgresql would choose nested loop and get sorted data from indexes.\n> But it wont. \n\nIndeed. If you think a bit harder, you'll realize that the plan you\nsuggest would *not* produce the sort order requested by this query.\nIt would (if I'm not confused myself) produce an ordering like\n users.priority desc, id asc, notifications.priority desc\nwhich would only match what the query asks for if there's just a single\nvalue of id per users.priority value.\n\nOffhand I think that the planner will not recognize a nestloop as\nproducing a sort ordering of this kind even if the query did request the\nright ordering. That could perhaps be improved, but I've not seen many\nif any cases where it would be worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 07 Aug 2014 20:19:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two table join with order by on both tables attributes"
},
{
"msg_contents": ">>> select * from users join notifications on users.id=notifications.user_id ORDER BY users.priority desc ,notifications.priority desc limit 10;\n> \n>> In my understanding, i need to have two indexes\n>> on users(priority desc, id)\n>> and notifications(user_id, priority desc)\n>> then postgresql would choose nested loop and get sorted data from indexes.\n>> But it wont. \n> \n> Indeed. If you think a bit harder, you'll realize that the plan you\n> suggest would *not* produce the sort order requested by this query.\n> It would (if I'm not confused myself) produce an ordering like\n> users.priority desc, id asc, notifications.priority desc\n> which would only match what the query asks for if there's just a single\n> value of id per users.priority value.\n> \n> Offhand I think that the planner will not recognize a nestloop as\n> producing a sort ordering of this kind even if the query did request the\n> right ordering. That could perhaps be improved, but I've not seen many\n> if any cases where it would be worth the trouble.\n\n\nThanks Tom, you are right.\n\nBut may be some sort of skip index scan ala loose index scan will help with index on notifications(priority desc,user_id)?\n\nI know that this is currently not handled by native executors.\nMay by i can work around this using WITH RECURSIVE query?\n\nAlso, are there any plans to handle loose index scan in the upcoming release?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Aug 2014 03:43:07 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two table join with order by on both tables attributes"
},
{
"msg_contents": "\n> On 08 Aug 2014, at 03:43, Evgeniy Shishkin <[email protected]> wrote:\n> \n>>>> select * from users join notifications on users.id=notifications.user_id ORDER BY users.priority desc ,notifications.priority desc limit 10;\n>> \n>>> In my understanding, i need to have two indexes\n>>> on users(priority desc, id)\n>>> and notifications(user_id, priority desc)\n>>> then postgresql would choose nested loop and get sorted data from indexes.\n>>> But it wont. \n>> \n>> Indeed. If you think a bit harder, you'll realize that the plan you\n>> suggest would *not* produce the sort order requested by this query.\n>> It would (if I'm not confused myself) produce an ordering like\n>> users.priority desc, id asc, notifications.priority desc\n>> which would only match what the query asks for if there's just a single\n>> value of id per users.priority value.\n>> \n>> Offhand I think that the planner will not recognize a nestloop as\n>> producing a sort ordering of this kind even if the query did request the\n>> right ordering. That could perhaps be improved, but I've not seen many\n>> if any cases where it would be worth the trouble.\n> \n\nAnd actually with this kind of query we really want the most wanted notifications, by the user.\nSo we really can rewrite to order by users.priority desc, id asc, notifications.priority desc according to business logic.\nAnd we will benefit if this case would be improved.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Aug 2014 04:05:45 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two table join with order by on both tables attributes"
},
{
"msg_contents": "On Fri, Aug 8, 2014 at 4:05 AM, Evgeniy Shishkin <[email protected]> wrote:\n>>>>> select * from users join notifications on users.id=notifications.user_id ORDER BY users.priority desc ,notifications.priority desc limit 10;\n\n>>>> In my understanding, i need to have two indexes\n>>>> on users(priority desc, id)\n>>>> and notifications(user_id, priority desc)\n\n> And actually with this kind of query we really want the most wanted notifications, by the user.\n> So we really can rewrite to order by users.priority desc, id asc, notifications.priority desc according to business logic.\n\nYou can rewrite it with LATERAL to trick the planner into sorting each\nuser's notifications separately. This should give you the nestloop\nplan you expect:\n\nSELECT *\nFROM users,\nLATERAL (\n SELECT * FROM notifications WHERE notifications.user_id=users.id\n ORDER BY notifications.priority DESC\n) AS notifications\nORDER BY users.priority DESC, users.id\n\nIt would be great if Postgres could do this transformation automatically.\n\nThere's a \"partial sort\" patch in the current CommitFest, which would\nsolve the problem partially (it could use the index on users, but the\nnotifications sort would have to be done in memory still).\nhttps://commitfest.postgresql.org/action/patch_view?id=1368\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Aug 2014 16:29:39 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two table join with order by on both tables attributes"
},
{
"msg_contents": "\n> On 08 Aug 2014, at 16:29, Marti Raudsepp <[email protected]> wrote:\n> \n> On Fri, Aug 8, 2014 at 4:05 AM, Evgeniy Shishkin <[email protected]> wrote:\n>>>>>> select * from users join notifications on users.id=notifications.user_id ORDER BY users.priority desc ,notifications.priority desc limit 10;\n> \n>>>>> In my understanding, i need to have two indexes\n>>>>> on users(priority desc, id)\n>>>>> and notifications(user_id, priority desc)\n> \n>> And actually with this kind of query we really want the most wanted notifications, by the user.\n>> So we really can rewrite to order by users.priority desc, id asc, notifications.priority desc according to business logic.\n> \n> You can rewrite it with LATERAL to trick the planner into sorting each\n> user's notifications separately. This should give you the nestloop\n> plan you expect:\n> \n> SELECT *\n> FROM users,\n> LATERAL (\n> SELECT * FROM notifications WHERE notifications.user_id=users.id\n> ORDER BY notifications.priority DESC\n> ) AS notifications\n> ORDER BY users.priority DESC, users.id\n> \n\nThank you very much.\n\n\n> It would be great if Postgres could do this transformation automatically.\n> \n> There's a \"partial sort\" patch in the current CommitFest, which would\n> solve the problem partially (it could use the index on users, but the\n> notifications sort would have to be done in memory still).\n> https://commitfest.postgresql.org/action/patch_view?id=1368\n> \n> Regards,\n> Marti\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Aug 2014 16:57:37 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two table join with order by on both tables attributes"
}
] |
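Putting Marti's LATERAL rewrite next to the two composite indexes Evgeniy described gives one self-contained sketch (the schema is assumed, not taken from a real database):

```sql
CREATE INDEX ON users (priority DESC, id);
CREATE INDEX ON notifications (user_id, priority DESC);

-- Each user's notifications are fetched already sorted, so the planner
-- only has to satisfy the outer ordering on users.
SELECT *
FROM users
CROSS JOIN LATERAL (
    SELECT *
    FROM notifications n
    WHERE n.user_id = users.id
    ORDER BY n.priority DESC
) AS n
ORDER BY users.priority DESC, users.id
LIMIT 10;
```

As Tom notes, the ordering this produces is users.priority desc, users.id asc, notifications.priority desc, which Evgeniy accepted as matching the business logic.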
[
{
"msg_contents": "Folks,\n\nSo one thing we tell users who have chronically long IN() lists is that\nthey should create a temporary table and join against that instead.\nOther than not having the code, is there a reason why PostgreSQL\nshouldn't do something like this behind the scenes, automatically?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Aug 2014 12:15:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization idea for long IN() lists"
},
{
"msg_contents": "On Sat, Aug 9, 2014 at 5:15 AM, Josh Berkus <[email protected]> wrote:\n\n> Folks,\n>\n> So one thing we tell users who have chronically long IN() lists is that\n> they should create a temporary table and join against that instead.\n> Other than not having the code, is there a reason why PostgreSQL\n> shouldn't do something like this behind the scenes, automatically?\n>\n>\nHi Josh,\n\nI know that problem for many years.\nThere are some workaround which doesn't require using the temporary tables\n(and I used that approach quite a lot when performance matter):\n\nInstead of using:\nSELECT * FROM sometable\nWHERE\nsomefield IN (val1, val2, ...)\nAND other_filters;\n\nQuery could be written as:\nSELECT * FROM sometable\nJOIN (VALUES ((val1), (val2) ...)) AS v(somefield) ON\nv.somefield=sometable.somefield\nWHERE\nother_filters;\n\nWhen there no index on somefield query plans would look like as:\n\nOriginal query:\n\n Filter: (somefield = ANY ('{...}'::integer[]))\n\nvs optimized query:\n\n Hash Join (cost=0.25..117.89 rows=22 width=59) (actual time=5.332..5.332\nrows=0 loops=1)\n Hash Cond: (sometable.somefield = \"*VALUES*\".somefield)\n...\n -> Hash (cost=0.12..0.12 rows=10 width=4) (actual time=0.010..0.010\nrows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.12 rows=10 width=4)\n(actual time=0.001..0.003 rows=10 loops=1)\n\n\nIn synthetic data I observed the following performance results (fully\nin-memory data with integer values):\n\nList length IN Performance JOIN VALUES Performance\n 10 5.39ms 5.38ms\n 100 9.74ms 5.49ms\n 1000 53.02ms 9.89ms\n 10000 231.10ms 13.14ms\n\nSo starting from 10 elements VALUES/HASH JOIN approach is clear winner.\nIn case of the text literals IN list performance difference even more\nobvious (~2 order of magnitude for 10000 list).\n\nHowever, if IN list used for the primary key lookup - there are no visible\nperformance difference between these two 
approaches.\n\nSo yes there are some space for optimization of \"Filter: (somefield = ANY\n('{...}'::integer[]))\" via hashing.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"",
"msg_date": "Sat, 9 Aug 2014 14:22:00 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization idea for long IN() lists"
}
] |
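Maxim's rewrite, spelled out with concrete literals (note the parenthesization: in a VALUES list each row gets its own parentheses, so `(VALUES (1), (2))` is two rows while `(VALUES ((1), (2)))` is one two-column row):

```sql
-- Long IN () list, planned as Filter: (somefield = ANY ('{...}'))
SELECT * FROM sometable
WHERE somefield IN (1, 2, 3, 4);

-- Equivalent join against an inline VALUES list, which the planner
-- can execute as a hash join:
SELECT s.*
FROM sometable s
JOIN (VALUES (1), (2), (3), (4)) AS v(somefield)
  ON s.somefield = v.somefield;
```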
[
{
"msg_contents": "hi ,everybody\n\nwhy does the planer estimate 200 rows when i use order by and group by .\nevn:postgresql 8.4 and 9.3\n\ntable:\nCREATE TABLE a\n(\n id serial NOT NULL,\n name character varying(20),\n modifytime timestamp without time zone,\n CONSTRAINT a_pk PRIMARY KEY (id)\n)\n\nSQL:\nexplain analyze\nselect * from\n( select id from a order by id ) d\ngroup by id;\n\nQuery plan:\n\"Group (cost=0.15..66.42 rows=200 width=4) (actual time=0.008..0.008\nrows=0 loops=1)\"\n\" -> Index Only Scan using a_pk on a (cost=0.15..56.30 rows=810 width=4)\n(actual time=0.006..0.006 rows=0 loops=1)\"\n\" Heap Fetches: 0\"\n\"Total runtime: 0.046 ms\"\n\nCan anybody suggest something or explain this behavior?",
"msg_date": "Tue, 12 Aug 2014 11:59:02 +0900",
"msg_from": "楊新波 <[email protected]>",
"msg_from_op": true,
"msg_subject": "how does the planer to estimate row when i use order by and group by"
},
{
"msg_contents": "On Tue, Aug 12, 2014 at 5:59 AM, 楊新波 <[email protected]> wrote:\n> why does the planer estimate 200 rows when i use order by and group by .\n> evn:postgresql 8.4 and 9.3\n\n> Can anybody suggest something or explain this behavior?\n\nBecause the table is empty, analyze doesn't store any stats for the\ntable, so the planner uses some default guesses.\n\nThis is actually beneficial for cases where you have done some inserts\nto a new table, and autovacuum hasn't gotten around to analyzing it\nyet. And it rarely hurts because any query plan will be fast when\nthere's no data.\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Aug 2014 15:47:05 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how does the planer to estimate row when i use order by\n and group by"
}
] |
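Marti's explanation is easy to reproduce: the rows=200 is the planner's hard-wired guess for the number of groups on a table that has never been analyzed, and it goes away once statistics exist.

```sql
CREATE TABLE a (id serial PRIMARY KEY, name varchar(20));

EXPLAIN SELECT * FROM (SELECT id FROM a ORDER BY id) d GROUP BY id;
-- Group ... rows=200: default guess, no statistics stored yet.

ANALYZE a;   -- what autovacuum would eventually do on its own

EXPLAIN SELECT * FROM (SELECT id FROM a ORDER BY id) d GROUP BY id;
-- the group estimate is now derived from the gathered statistics.
```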
[
{
"msg_contents": "I create a table and insert some items.\nI create index on every column.\nAnd I execute select, I thought it should use index scan, but it is still seq scan. Why PG do not use index scan?\n\ncreate table v_org_info(\norg_no varchar2(8), org_nm varchar2(80),\norg_no_l1 varchar2(8), org_nm_l1 varchar2(80),\norg_no_l2 varchar2(8), org_nm_l2 varchar2(80)\n);\n\ncreate index idx_v_org_info_org_no on v_org_info(org_no);\ncreate index idx_v_org_info_org_no_l1 on v_org_info(org_no_l1);\ncreate index idx_v_org_info_org_no_l2 on v_org_info(org_no_l2);\n\nbegin\n for i in 1..20000 loop\n insert into v_org_info values(i,'test',i,'test',i,'test');\n insert into adm_org_info values(i);\n end loop;\nend;\n\n\nPOSTGRES=# explain analyze select a.org_nm from v_org_info a where a.org_no = 1000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\nSeq Scan on V_ORG_INFO A (cost=0.00..189.97 rows=9 width=178, batch_size=100) (actual time=0.930..18.034 rows=1 loops=1)\n Filter: (INT4IN(VARCHAROUT(ORG_NO)) = 1000)\n Rows Removed by Filter: 19999\nTotal runtime: 18.099 ms\n(4 rows)",
"msg_date": "Mon, 18 Aug 2014 03:19:22 +0000",
"msg_from": "Xiaoyulei <[email protected]>",
"msg_from_op": true,
"msg_subject": "select on index column,why PG still use seq scan?"
},
{
"msg_contents": "Xiaoyulei <[email protected]> writes:\n> I create a table and insert some items.\n> I create index on every column.\n> And I execute select, I thought it should use index scan, but it is still seq scan. Why PG do not use index scan?\n\n> create table v_org_info(\n> org_no varchar2(8), org_nm varchar2(80),\n> org_no_l1 varchar2(8), org_nm_l1 varchar2(80),\n> org_no_l2 varchar2(8), org_nm_l2 varchar2(80)\n> );\n\nThere is no \"varchar2\" type in Postgres. I tried this example with\n\"varchar\" in place of that, but when I got to\n\n> POSTGRES=# explain analyze select a.org_nm from v_org_info a where a.org_no = 1000;\n\nI got\n\nERROR: operator does not exist: character varying = integer\nLINE 1: ...ze select a.org_nm from v_org_info a where a.org_no = 1000;\n ^\nHINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.\n\nwhich is certainly what I *should* get. I changed it to\n\nexplain analyze select a.org_nm from v_org_info a where a.org_no = '1000';\n\nand then I got\n\n Bitmap Heap Scan on v_org_info a (cost=4.49..74.90 rows=27 width=58) (actual t\nime=0.044..0.044 rows=1 loops=1)\n Recheck Cond: ((org_no)::text = '1000'::text)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_v_org_info_org_no (cost=0.00..4.48 rows=27 widt\nh=0) (actual time=0.020..0.020 rows=1 loops=1)\n Index Cond: ((org_no)::text = '1000'::text)\n Planning time: 0.481 ms\n Execution time: 0.104 ms\n\nwhich is OK, but after \"ANALYZE v_org_info\" I got\n\n Index Scan using idx_v_org_info_org_no on v_org_info a (cost=0.29..8.30 rows=1\n width=5) (actual time=0.019..0.020 rows=1 loops=1)\n Index Cond: ((org_no)::text = '1000'::text)\n Planning time: 0.372 ms\n Execution time: 0.060 ms\n\nwhich is better.\n\n> Seq Scan on V_ORG_INFO A (cost=0.00..189.97 rows=9 width=178, batch_size=100) (actual time=0.930..18.034 rows=1 loops=1)\n> Filter: (INT4IN(VARCHAROUT(ORG_NO)) = 1000)\n> Rows Removed by Filter: 19999\n> Total runtime: 
18.099 ms\n> (4 rows)\n\nTBH, this looks like some incompetently hacked-up variant of Postgres;\ncertainly no version ever shipped by the core project would have done\nthis. It looks like somebody tried to make cross-type comparisons work by\ninserting conversion operations, but they did it in such a way that the\nconversions were applied to the column not the constant. An index on\norg_no isn't going to help you for a query on INT4IN(VARCHAROUT(ORG_NO)).\n(And I wonder why exactly the names are printing as upper case here ...)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 17 Aug 2014 23:49:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select on index column,why PG still use seq scan?"
}
] |
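Tom's diagnosis in the thread above comes down to where the type conversion lands: applied to the constant, the index stays usable; applied to the column, it does not. A minimal sketch reproducing the point in stock PostgreSQL (using `varchar` in place of the non-standard `varchar2`, and keeping only the single indexed column):

```sql
-- Sketch: the same experiment in unmodified PostgreSQL.
CREATE TABLE v_org_info (
    org_no varchar(8),
    org_nm varchar(80)
);
CREATE INDEX idx_v_org_info_org_no ON v_org_info (org_no);
INSERT INTO v_org_info
SELECT i::text, 'test' FROM generate_series(1, 20000) AS i;
ANALYZE v_org_info;

-- Index-friendly: the literal is compared as text, matching the column type.
EXPLAIN SELECT org_nm FROM v_org_info WHERE org_no = '1000';

-- Index-hostile: casting the COLUMN (what the hacked variant did implicitly
-- via INT4IN(VARCHAROUT(org_no))) hides org_no from its btree index.
EXPLAIN SELECT org_nm FROM v_org_info WHERE org_no::integer = 1000;
```

The first EXPLAIN should show an index or bitmap scan after ANALYZE, as in Tom's output; the second falls back to a sequential scan.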
[
{
"msg_contents": "Hi,\n\nI have a tool that is trying to collect stats from postgres (v9.1.13).\npostgres attempts to allocate more memory than is allowed:\n\nSELECT mode, count(mode) AS count FROM pg_locks GROUP BY mode ORDER BY mode;\nERROR: invalid memory alloc request size 1459291560\n\nMemory-related configs from the server:\n\nshared_buffers = 10000MB\nwork_mem = 15MB\nmaintenance_work_mem = 400MB\neffective_cache_size = 50000MB\nmax_locks_per_transaction = 9000\nmax_pred_locks_per_transaction = 40000\n\nThe machine is running CentOS 6, a 32-core AMD 6276 processor, and is\nconfigured with 64GB of memory. Transparent Huge Pages are disabled\n:-)\n\nThanks in advance for your time and expertise.\n\nDave Owens\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Aug 2014 14:01:30 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "On Mon, Aug 18, 2014 at 6:01 PM, Dave Owens <[email protected]> wrote:\n\n> max_locks_per_transaction = 9000\n> max_pred_locks_per_transaction = 40000\n>\n\n\nDo you really need such large values? What is your max_connections value?\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Aug 18, 2014 at 6:01 PM, Dave Owens <[email protected]> wrote:\n\nmax_locks_per_transaction = 9000\nmax_pred_locks_per_transaction = 40000Do you really need such large values? What is your max_connections value?\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 18 Aug 2014 18:21:59 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "On Mon, Aug 18, 2014 at 4:21 PM, Matheus de Oliveira\n<[email protected]> wrote:\n>\n> On Mon, Aug 18, 2014 at 6:01 PM, Dave Owens <[email protected]> wrote:\n>>\n>> max_locks_per_transaction = 9000\n>> max_pred_locks_per_transaction = 40000\n\nperformance of any query to pg_locks is proportional to the setting of\nmax_locks_per_transaction. still, something is awry here. can you\n'explain' that query? also, what's the answer you get when:\n\nSELECT COUNT(*) from pg_locks;\n\n?\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Aug 2014 16:29:10 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "On Mon, Aug 18, 2014 at 2:21 PM, Matheus de Oliveira\n<[email protected]> wrote:\n> Do you really need such large values? What is your max_connections value?\n\nmax_connections = 450 ...we have found that we run out of shared\nmemory when max_pred_locks_per_transaction is less than 30k.\n\nOn Mon, Aug 18, 2014 at 2:29 PM, Merlin Moncure <[email protected]> wrote:\n> performance of any query to pg_locks is proportional to the setting of\n> max_locks_per_transaction. still, something is awry here. can you\n> 'explain' that query?\n\ntudb=# explain SELECT mode, count(mode) AS count FROM pg_locks GROUP\nBY mode ORDER BY mode;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Sort (cost=0.63..0.65 rows=200 width=32)\n Sort Key: l.mode\n -> HashAggregate (cost=0.30..0.32 rows=200 width=32)\n -> Function Scan on pg_lock_status l (cost=0.00..0.10\nrows=1000 width=32)\n(4 rows)\n\n\n> SELECT COUNT(*) from pg_locks;\n\nERROR: invalid memory alloc request size 1562436816\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Aug 2014 14:36:52 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Dave Owens <[email protected]> wrote:\n\n\n\n> max_connections = 450 ...we have found that we run out of shared\n> memory when max_pred_locks_per_transaction is less than 30k.\n\n>> SELECT COUNT(*) from pg_locks;\n>\n> ERROR: invalid memory alloc request size 1562436816\n\nIt gathers the information in memory to return for all those locks\n(I think both the normal heavyweight locks and the predicate locks\ndo that). 450 * 30000 is 13.5 million predicate locks you could\nhave, so they don't need a very big structure per lock to start\nadding up. I guess we should refactor that to use a tuplestore, so\nit can spill to disk when it gets to be more than work_mem.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Aug 2014 15:01:47 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Kevin Grittner <[email protected]> writes:\n> Dave Owens <[email protected]> wrote:\n>> max_connections = 450 ...we have found that we run out of shared\n>> memory when max_pred_locks_per_transaction is less than 30k.\n\n> It gathers the information in memory to return for all those locks\n> (I think both the normal heavyweight locks and the predicate locks\n> do that).� 450 * 30000 is 13.5 million predicate locks you could\n> have, so they don't need a very big structure per lock to start\n> adding up.� I guess we should refactor that to use a tuplestore, so\n> it can spill to disk when it gets to be more than work_mem.\n\nSeems to me the bigger issue is why does he need such a huge\nmax_pred_locks_per_transaction setting? It's hard to believe that\nperformance wouldn't tank with 10 million predicate locks active.\nWhether you can do \"select * from pg_locks\" seems pretty far down\nthe list of concerns about this setting.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Aug 2014 19:24:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n> Kevin Grittner <[email protected]> writes:\n>> Dave Owens <[email protected]> wrote:\n>>> max_connections = 450 ...we have found that we run out of shared\n>>> memory when max_pred_locks_per_transaction is less than 30k.\n>\n>> It gathers the information in memory to return for all those locks\n>> (I think both the normal heavyweight locks and the predicate locks\n>> do that). 450 * 30000 is 13.5 million predicate locks you could\n>> have, so they don't need a very big structure per lock to start\n>> adding up. I guess we should refactor that to use a tuplestore, so\n>> it can spill to disk when it gets to be more than work_mem.\n>\n> Seems to me the bigger issue is why does he need such a huge\n> max_pred_locks_per_transaction setting? It's hard to believe that\n> performance wouldn't tank with 10 million predicate locks active.\n> Whether you can do \"select * from pg_locks\" seems pretty far down\n> the list of concerns about this setting.\n\nIt would be interesting to know more about the workload which is\ncapable of that, but it would be a lot easier to analyze what's\ngoing on if we could look at where those locks are being used (in\nsummary, of course -- nobody can make sense of 10 million detail\nlines). About all I can think to ask at this point is: how many\ntotal tables and indexes are there in all databases in this cluster\n(counting each partition of a partitioned table as a separate\ntable)? With the promotion of finer-grained locks to courser ones\nthis should be pretty hard to hit without a very large number of\ntables.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 04:38:52 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
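Kevin's question about the number of tables and indexes can be answered per database with a catalog query along these lines (a sketch, not taken from the thread; run it in each database, since pg_class is per-database, and note each partition counts as its own table, as he asks):

```sql
-- Count ordinary tables ('r') and indexes ('i') in the current database.
SELECT relkind, count(*)
FROM pg_class
WHERE relkind IN ('r', 'i')
GROUP BY relkind;
```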
{
"msg_contents": "Hi Kevin,\n\nLooking at pg_stat_all_tables and pg_stat_all_indexes on our four\ndatabases we have:\n\n1358 tables\n1808 indexes\n\nThe above totals do not include template1, template0, or postgres\ndatabases. We do not use partitioned tables. Only one database has a\nmeaningful level of concurrency (New Relic reports about 30k calls per\nminute, from our main webapp). That database alone consists of 575\ntables and 732 indexes.\n\nDave Owens\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 08:57:20 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Dave Owens <[email protected]> wrote:\n\n> 1358 tables\n> 1808 indexes\n\nHmm, that's not outrageous. How about long-running transactions?\nPlease check pg_stat_activity and pg_prepared_xacts for xact_start\nor prepared (respectively) values older than a few minutes. Since\npredicate locks may need to be kept until an overlapping\ntransaction completes, a single long-running transaction can bloat\nthe lock count.\n\nAlso, could you show use the output from?:\n\n SELECT version();\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 09:40:41 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "On Tue, Aug 19, 2014 at 9:40 AM, Kevin Grittner <[email protected]> wrote:\n> Hmm, that's not outrageous. How about long-running transactions?\n> Please check pg_stat_activity and pg_prepared_xacts for xact_start\n> or prepared (respectively) values older than a few minutes. Since\n> predicate locks may need to be kept until an overlapping\n> transaction completes, a single long-running transaction can bloat\n> the lock count.\n\nI do see a handful of backends that like to stay IDLE in transaction\nfor minutes at a time. We are refactoring the application responsible\nfor these long IDLE times, which will hopefully reduce the duration of\ntheir connections.\n\n# select backend_start, xact_start, query_start, waiting,\ncurrent_query from pg_stat_activity where xact_start < now() -\ninterval '3 minutes';\n backend_start | xact_start |\n query_start | waiting | current_query\n-------------------------------+-------------------------------+-------------------------------+---------+-----------------------\n 2014-08-19 09:48:00.398498-07 | 2014-08-19 09:49:19.157478-07 |\n2014-08-19 10:03:04.99303-07 | f | <IDLE> in transaction\n 2014-08-19 09:38:00.493924-07 | 2014-08-19 09:53:47.00614-07 |\n2014-08-19 10:03:05.003496-07 | f | <IDLE> in transaction\n(2 rows)\n\n... now() was 2014-08-19 10:03 in the above query. I do not see\nanything in pg_prepared_xacts, we do not use two-phase commit.\n\n\n> Also, could you show use the output from?:\n>\n> SELECT version();\n version\n---------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.1.13 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n(1 row)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 10:14:45 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "On 2014-08-18 14:36:52 -0700, Dave Owens wrote:\n> On Mon, Aug 18, 2014 at 2:21 PM, Matheus de Oliveira\n> <[email protected]> wrote:\n> > Do you really need such large values? What is your max_connections value?\n> \n> max_connections = 450 ...we have found that we run out of shared\n> memory when max_pred_locks_per_transaction is less than 30k.\n\nWhat was the precise error message when that happened?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 19:17:10 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Hi Andres,\n\nOn Tue, Aug 19, 2014 at 10:17 AM, Andres Freund <[email protected]> wrote:\n>> max_connections = 450 ...we have found that we run out of shared\n>> memory when max_pred_locks_per_transaction is less than 30k.\n>\n> What was the precise error message when that happened?\n\n2014-07-31 15:00:25 PDT 53dabbea.29c7ERROR: 53200: out of shared memory\n2014-07-31 15:00:25 PDT 53dabbea.29c7HINT: You might need to increase\nmax_pred_locks_per_transaction.\n2014-07-31 15:00:25 PDT 53dabbea.29c7LOCATION: CreatePredicateLock,\npredicate.c:2247\n2014-07-31 15:00:25 PDT 53dabbea.29c7STATEMENT: SELECT member_id,\nSUM(credit_quarters) FROM ondeck_tallies_x WHERE team_id = $1 AND\ncredit_quarters > 0 AND EXTRACT(day from current_timestamp -\ndt_attendance_taken) <= $2 GROUP BY member_id\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 10:25:50 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "I wonder if it would be helpful to restart the database, then begin\ngathering information pg_locks while it can still respond to queries.\nI speculate that this is possible because the amount of memory needed\nto query pg_locks continues to grow (around 1900MB now).\n\nDave Owens\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 10:50:41 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Dave Owens <[email protected]> wrote:\n\n> I do see a handful of backends that like to stay IDLE in\n> transaction for minutes at a time. We are refactoring the\n> application responsible for these long IDLE times, which will\n> hopefully reduce the duration of their connections.\n\nThat may help some. Other things to consider:\n\n - If you can use a connection pooler in transaction mode to reduce\nthe number of active connections you may be able to improve\nperformance all around, and dodge this problem in the process.\nVery few systems can make efficient use of hundreds of concurrent\nconnections, but for various reasons fixing that with a connection\npooler is sometimes difficult.\n\n - If you have transactions (or SELECT statements that you run\noutside of explicit transactions) which you know will not be\nmodifying any data, flagging them as READ ONLY will help contain\nthe number of predicate locks and will help overall performance.\n(If the SELECT statements are not in explicit transactions, you may\nhave to put them in one to allow the READ ONLY property to be set,\nor set default_transaction_read_only in the session to accomplish\nthis.)\n\n - Due to the heuristics used for thresholds for combining\nfine-grained locks into coarser ones, you might be able to work\naround this by boosting max_connections above the number you are\ngoing to use. Normally when you increase\nmax_pred_locks_per_transaction it increases the number of page\nlocks it will allow in a table or index before it combines them\ninto a relation lock; increasing max_connections doesn't affect the\ngranularity promotion threshold, but it increases the total number\nof predicate locks allowed, so if you boost that and reduce\nmax_pred_locks_per_transaction in proportion, you may be able to\ndodge the problem. It's an ugly workaround, but it might get you\ninto better shape. 
If that does work, it's good evidence that we\nshould tweak those heuristics.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 10:55:26 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
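The second of Kevin's suggestions above, flagging non-modifying transactions as READ ONLY, is plain SQL. A sketch of both forms he describes (table name and column are hypothetical placeholders):

```sql
-- Explicitly read-only serializable transaction: lets SSI apply its
-- read-only optimizations and limits predicate-lock accumulation.
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY;
SELECT count(*) FROM some_report_table;  -- hypothetical table
COMMIT;

-- Or make read-only the session default and opt out only when writing.
SET default_transaction_read_only = on;
BEGIN READ WRITE;  -- explicit opt-out for the rare modifying transaction
UPDATE some_report_table SET refreshed = true;  -- hypothetical
COMMIT;
```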
{
"msg_contents": "Dave Owens <[email protected]> wrote:\n\n> I wonder if it would be helpful to restart the database, then begin\n> gathering information pg_locks while it can still respond to queries.\n> I speculate that this is possible because the amount of memory needed\n> to query pg_locks continues to grow (around 1900MB now).\n\nIf restart is an option, that sounds like a great idea. If you\ncould capture the data into tables where we can summarize to\nanalyze it in a meaningful way, that would be ideal. Something\nlike:\n\nCREATE TABLE activity_snap_1 AS SELECT * FROM pg_stat_activity;\n\nOf course, boost the number for each subsequent run.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 11:01:42 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "On Tue, Aug 19, 2014 at 11:01 AM, Kevin Grittner <[email protected]> wrote:\n> If restart is an option, that sounds like a great idea. If you\n> could capture the data into tables where we can summarize to\n> analyze it in a meaningful way, that would be ideal. Something\n> like:\n>\n> CREATE TABLE activity_snap_1 AS SELECT * FROM pg_stat_activity;\n>\n> Of course, boost the number for each subsequent run.\n\nKevin -\n\nWould the you or the list be interested in snapshots of pg_locks as well?\n\nI can take a restart tonight and get this going on a half-hourly basis\n(unless you think more frequent snaps would be useful).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 11:28:58 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Dave Owens <[email protected]> wrote:\n> On Tue, Aug 19, 2014 at 11:01 AM, Kevin Grittner <[email protected]> wrote:\n\n>> CREATE TABLE activity_snap_1 AS SELECT * FROM pg_stat_activity;\n\n> Would the you or the list be interested in snapshots of pg_locks as well?\n\nMost definitely! I'm sorry that copied/pasted the pg_stat_activity\nexample, I was playing with both. pg_locks is definitely the more\nimportant one, but it might be useful to try to match some of these\nlocks up against process information as we drill down from the\nsummary to see examples of what makes up those numbers.\n\n> I can take a restart tonight and get this going on a half-hourly basis\n> (unless you think more frequent snaps would be useful).\n\nEach half-hour should be fine as long as that gives at least three\nor four samples before you are unable to query pg_locks.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Aug 2014 11:43:30 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
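The collection Kevin asks for can follow his pg_stat_activity pattern for both views. A sketch (the snapped_at column is a convenience addition, not from the thread; bump the numeric suffix on each half-hourly run):

```sql
-- Run once per half hour, incrementing the table suffix each time.
CREATE TABLE locks_snap_1    AS SELECT now() AS snapped_at, * FROM pg_locks;
CREATE TABLE activity_snap_1 AS SELECT now() AS snapped_at, * FROM pg_stat_activity;
```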
{
"msg_contents": "I now have 8 hours worth of snapshots from pg_stat_activity and\npg_locks (16 snapshots from each table/view). I have turned off\ncollection at this point, but I am still able to query pg_locks:\n\n# SELECT mode, count(mode) AS count FROM pg_locks GROUP BY mode ORDER BY mode;\n mode | count\n------------------+---------\n AccessShareLock | 291\n ExclusiveLock | 19\n RowExclusiveLock | 4\n RowShareLock | 1\n SIReadLock | 7287531\n(5 rows)\n\nSIReadLocks continue to grow. It seems, in general, that our\napplication code over uses Serializable... we have produced a patch\nthat demotes some heavy-hitting queries down to Read Committed, and we\nwill see if this makes an impact on the number of SIReadLocks.\n\nIs it interesting that only 101557 out of 7 million SIReadLocks have a\npid associated with them?\n\n-Dave Owens\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 10:15:45 -0700",
"msg_from": "Dave Owens <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Dave Owens <[email protected]> wrote:\n\n> I now have 8 hours worth of snapshots from pg_stat_activity and\n> pg_locks (16 snapshots from each table/view). I have turned off\n> collection at this point, but I am still able to query pg_locks\n\nCould you take the earliest one after activity started, and the\nlatest one before you stopped collecting them, compress them, and\nemail them to me off-list, please?\n\n> SIReadLocks continue to grow. It seems, in general, that our\n> application code over uses Serializable... we have produced a patch\n> that demotes some heavy-hitting queries down to Read Committed, and we\n> will see if this makes an impact on the number of SIReadLocks.\n\nDo all of those modify data? If not, you may get nearly the same\nbenefit from declaring them READ ONLY instead, and that would get\nbetter protection against seeing transient invalid states. One \nexample of that is here:\n\nhttp://wiki.postgresql.org/wiki/SSI#Deposit_Report\n\n> Is it interesting that only 101557 out of 7 million SIReadLocks have a\n> pid associated with them?\n\nI would need to double-check that I'm not forgetting another case,\nbut the two cases I can think of where the pid is NULL are if the\ntransaction is PREPARED (for two phase commit) or if committed\ntransactions are summarized (so they can be combined) to try to\nlimit RAM usage. We might clear the pid if the connection is\nclosed, but (without having checked yet) I don't think we did that.\nSince you don't use prepared transactions, they are probably from\nthe summarization. But you would not normally accumulate much\nthere unless you have a long-running transaction which is not\nflagged as READ ONLY.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 11:15:02 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
},
{
"msg_contents": "Kevin Grittner <[email protected]> wrote:\n> Dave Owens <[email protected]> wrote:\n>\n>> I now have 8 hours worth of snapshots from pg_stat_activity and\n>> pg_locks (16 snapshots from each table/view). I have turned off\n>> collection at this point, but I am still able to query pg_locks\n>\n> Could you take the earliest one after activity started, and the\n> latest one before you stopped collecting them, compress them, and\n> email them to me off-list, please?\n\nDave did this, off-list. There is one transaction which has been\nrunning for over 20 minutes, which seems to be the cause of the\naccumulation. I note that this query does not hold any of the\nlocks it would need to take before modifying data, and it has not\nbeen assigned a transactionid -- both signs that it has (so far)\nnot modified any data. If it is not going to modify any, it would\nnot have caused this accumulation of locks if it was flagged as\nREAD ONLY. This is very important to do if you are using\nserializable transactions in PostgreSQL.\n\nTo quantify that, I show the number of SIReadLocks in total:\n\ntest=# select count(*) from locks_snap_16 where mode = 'SIReadLock';\n count\n---------\n 3910257\n(1 row)\n\n... 
and the number of those which are only around because there is\nan open overlapping transaction, not flagged as read only:\n\ntest=# select count(*) from locks_snap_16 l\ntest-# where mode = 'SIReadLock'\ntest-# and not exists (select * from locks_snap_16 a\ntest(# where a.locktype = 'virtualxid'\ntest(# and a.virtualxid = l.virtualtransaction);\n count\n---------\n 3565155\n(1 row)\n\nI can't stress enough how important it is that the advice near the\nbottom of this section of the documentation is heeded:\n\nhttp://www.postgresql.org/docs/9.2/interactive/transaction-iso.html#XACT-SERIALIZABLE\n\nThose bullet-points are listed roughly in order of importance;\nthere is a reason this one is listed first:\n\n - Declare transactions as READ ONLY when possible.\n\nIn some shops using SERIALIZABLE transactions, I have seen them set\ndefault_transaction_read_only = on, and explicitly set it off for\ntransactions which will (or might) modify data.\n\nIf you have a long-running report that might itself grab a lot of\npredicate locks (a/k/a SIReadLocks), you can avoid that by\ndeclaring the transaction as READ ONLY DEFERRABLE. If you do that,\nthe transaction will wait to begin execution until it can acquire a\nsnapshot guaranteed not to show any anomalies (like the example\nreferenced in an earlier post can show). It then runs without\nacquiring any predicate locks, just like a REPEATABLE READ\ntransaction. In fairly busy benchmarks, we never saw it take more\nthan six seconds to acquire such a snapshot, although the wait time\nis not bounded. Again, getting such a snapshot will be possible\nsooner if you declare transactions as READ ONLY when possible. :-)\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 14:24:22 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query against pg_locks leads to large memory alloc"
}
] |
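Kevin's closing READ ONLY DEFERRABLE suggestion, applied to the kind of report query that appeared in this thread's out-of-shared-memory error, would look like this (a sketch; the BEGIN may wait, without bound, for a snapshot guaranteed free of anomalies, after which the queries run without acquiring SIReadLocks):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
-- Query shape borrowed from the error message earlier in the thread;
-- the team_id value 42 is a placeholder for the original's $1 parameter.
SELECT member_id, sum(credit_quarters)
FROM ondeck_tallies_x
WHERE team_id = 42 AND credit_quarters > 0
GROUP BY member_id;
COMMIT;
```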
[
{
"msg_contents": "Hi,\n\nI have a question about partition table query performance in postgresql, it's an old version 8.3.21, I know it's already out of support. so any words about the reason for the behavior would be very much appreciated.\n\nI have a partition table which name is test_rank_2014_monthly and it has 7 partitions inherited from the parent table, each month with one partition. The weird thing is query out of the parent partition is as slow as query from a non-partitioned table, however, query from child table directly is really fast.\n\nhave no idea... is this an expected behavior of partition table in old releases?\n\n\nhitwise_uk=# explain analyze select * from test_rank_2014_07 r WHERE r.date = 201407 ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1284.622 rows=7444220 loops=1)\n Filter: (date = 201407)\n Total runtime: 1831.379 ms\n(3 rows)\n\n-- query on parent table\nhitwise_uk=# explain analyze select * from test_rank_2014_monthly r WHERE r.date = 201407 ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.009..4484.552 rows=7444220 loops=1)\n -> Append (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.008..2495.457 rows=7444220 loops=1)\n -> Seq Scan on test_rank_2014_monthly r (cost=0.00..22.12 rows=5 width=54) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (date = 201407)\n -> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1406.600 rows=7444220 loops=1)\n Filter: (date = 201407)\n Total runtime: 5036.092 ms\n(7 rows)\n\n--query on non-partitioned table\nhitwise_uk=# explain analyze select * from 
rank_2014_monthly r WHERE r.date = 201407 ;\n                                                              QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on rank_2014_monthly r  (cost=0.00..1042968.85 rows=7424587 width=54) (actual time=3226.983..4537.974 rows=7444220 loops=1)\n   Filter: (date = 201407)\n Total runtime: 5086.096 ms\n(3 rows)\n\n\ncheck constraints on child table is something like below:\n...\nCheck constraints:\n    \"test_rank_2014_07_date_check\" CHECK (date = 201407)\nInherits: test_rank_2014_monthly\n\nThanks,\nSuya",
"msg_date": "Wed, 20 Aug 2014 09:30:27 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query on parent partition table has bad performance"
},
{
"msg_contents": "Huang, Suya wrote\n> Hi,\n> \n> I have a question about partition table query performance in postgresql,\n> it's an old version 8.3.21, I know it's already out of support. so any\n> words about the reason for the behavior would be very much appreciated.\n> \n> I have a partition table which name is test_rank_2014_monthly and it has 7\n> partitions inherited from the parent table, each month with one partition. \n> The weird thing is query out of the parent partition is as slow as query\n> from a non-partitioned table, however, query from child table directly is\n> really fast.\n> \n> have no idea... is this an expected behavior of partition table in old\n> releases?\n> \n> \n> hitwise_uk=# explain analyze select * from test_rank_2014_07 r WHERE\n> r.date = 201407 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 rows=7444220\n> width=54) (actual time=0.007..1284.622 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 1831.379 ms\n> (3 rows)\n> \n> -- query on parent table\n> hitwise_uk=# explain analyze select * from test_rank_2014_monthly r WHERE\n> r.date = 201407 ;\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..169819.88 rows=7444225 width=54) (actual\n> time=0.009..4484.552 rows=7444220 loops=1)\n> -> Append (cost=0.00..169819.88 rows=7444225 width=54) (actual\n> time=0.008..2495.457 rows=7444220 loops=1)\n> -> Seq Scan on test_rank_2014_monthly r (cost=0.00..22.12\n> rows=5 width=54) (actual time=0.000..0.000 rows=0 loops=1)\n> Filter: (date = 201407)\n> -> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75\n> rows=7444220 width=54) (actual time=0.007..1406.600 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 5036.092 
ms\n> (7 rows)\n> \n> --query on non-partitioned table\n> hitwise_uk=# explain analyze select * from rank_2014_monthly r WHERE\n> r.date = 201407 ;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on rank_2014_monthly r (cost=0.00..1042968.85 rows=7424587\n> width=54) (actual time=3226.983..4537.974 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 5086.096 ms\n> (3 rows)\n> \n> \n> check constraints on child table is something like below:\n> ...\n> Check constraints:\n> \"test_rank_2014_07_date_check\" CHECK (date = 201407)\n> Inherits: test_rank_2014_monthly\n> \n> Thanks,\n> Suya\n\nGiven that the 2nd and 3rd queries perform about equal the question is why\nthe first query performs so much better. I suspect you are not taking any\ncare to avoid caching effects and so that it what you are seeing. Its hard\nto know for sure whether you ran the three queries in the order\nlisted...which if so would likely negate this theory somewhat.\n\nAdding (BUFFERS) to your explain would at least give some visibility into\ncaching effects - though since that is only available in supported versions\nthat is not an option for you. Still, it is the most likely explanation for\nwhat you are seeing.\n\nThere is time involved to process the partition constraint exclusion but I'm\ndoubting it accounts for a full 3 seconds...\n\nDavid J.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/query-on-parent-partition-table-has-bad-performance-tp5815523p5815552.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Aug 2014 06:49:38 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query on parent partition table has bad performance"
},
{
"msg_contents": "\"Huang, Suya\" <[email protected]> writes:\n> I have a question about partition table query performance in postgresql, it's an old version 8.3.21, I know it's already out of support. so any words about the reason for the behavior would be very much appreciated.\n\n> I have a partition table which name is test_rank_2014_monthly and it has 7 partitions inherited from the parent table, each month with one partition. The weird thing is query out of the parent partition is as slow as query from a non-partitioned table, however, query from child table directly is really fast.\n\n> hitwise_uk=# explain analyze select * from test_rank_2014_07 r WHERE r.date = 201407 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1284.622 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 1831.379 ms\n> (3 rows)\n\n> -- query on parent table\n> hitwise_uk=# explain analyze select * from test_rank_2014_monthly r WHERE r.date = 201407 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.009..4484.552 rows=7444220 loops=1)\n> -> Append (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.008..2495.457 rows=7444220 loops=1)\n> -> Seq Scan on test_rank_2014_monthly r (cost=0.00..22.12 rows=5 width=54) (actual time=0.000..0.000 rows=0 loops=1)\n> Filter: (date = 201407)\n> -> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1406.600 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 5036.092 ms\n> (7 rows)\n\nThe actual SeqScans are not very different in speed according to this.\nMost of the extra time seems to be 
going into the Append and Result nodes.\nSince those aren't actually doing anything except to return the input\ntuple up to their caller, I suspect what we're looking at here is mostly\nEXPLAIN ANALYZE's measurement overhead.  How much speed difference is\nthere if you just do the query, rather than EXPLAIN ANALYZE'ing it?\n\n\n> --query on non-partitioned table\n> hitwise_uk=# explain analyze select * from rank_2014_monthly r WHERE r.date = 201407 ;\n>                                                               QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n>  Seq Scan on rank_2014_monthly r  (cost=0.00..1042968.85 rows=7424587 width=54) (actual time=3226.983..4537.974 rows=7444220 loops=1)\n>    Filter: (date = 201407)\n>  Total runtime: 5086.096 ms\n> (3 rows)\n\nYou don't appear to be comparing apples to apples here.  Note the larger\ncost estimate, and the odd delay of more than 3 seconds before the first\nrow is returned.  Presumably what is happening is that this table contains\ngigabytes of dead space before the first live tuple.  You don't say how\nyou made this comparison table, but I'll bet it involved deleting data\nand then loading fresh data without a VACUUM or TRUNCATE first.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Aug 2014 10:12:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query on parent partition table has bad performance"
},
{
"msg_contents": "-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, August 21, 2014 12:13 AM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] query on parent partition table has bad performance\n\n\"Huang, Suya\" <[email protected]> writes:\n> I have a question about partition table query performance in postgresql, it's an old version 8.3.21, I know it's already out of support. so any words about the reason for the behavior would be very much appreciated.\n\n> I have a partition table which name is test_rank_2014_monthly and it has 7 partitions inherited from the parent table, each month with one partition. The weird thing is query out of the parent partition is as slow as query from a non-partitioned table, however, query from child table directly is really fast.\n\n> hitwise_uk=# explain analyze select * from test_rank_2014_07 r WHERE r.date = 201407 ;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ------------------------------------------------------------\n> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 rows=7444220 width=54) (actual time=0.007..1284.622 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 1831.379 ms\n> (3 rows)\n\n> -- query on parent table\n> hitwise_uk=# explain analyze select * from test_rank_2014_monthly r WHERE r.date = 201407 ;\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> -- Result (cost=0.00..169819.88 rows=7444225 width=54) (actual \n> time=0.009..4484.552 rows=7444220 loops=1)\n> -> Append (cost=0.00..169819.88 rows=7444225 width=54) (actual time=0.008..2495.457 rows=7444220 loops=1)\n> -> Seq Scan on test_rank_2014_monthly r (cost=0.00..22.12 rows=5 width=54) (actual time=0.000..0.000 rows=0 loops=1)\n> Filter: (date = 201407)\n> -> Seq Scan on test_rank_2014_07 r (cost=0.00..169797.75 
rows=7444220 width=54) (actual time=0.007..1406.600 rows=7444220 loops=1)\n> Filter: (date = 201407) Total runtime: 5036.092 ms\n> (7 rows)\n\nThe actual SeqScans are not very different in speed according to this.\nMost of the extra time seems to be going into the Append and Result nodes.\nSince those aren't actually doing anything except to return the input tuple up to their caller, I suspect what we're looking at here is mostly EXPLAIN ANALYZE's measurement overhead. How much speed difference is there if you just do the query, rather than EXPLAIN ANALYZE'ing it?\n\n\n> --query on non-partitioned table\n> hitwise_uk=# explain analyze select * from rank_2014_monthly r WHERE r.date = 201407 ;\n> QUERY \n> PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------\n> Seq Scan on rank_2014_monthly r (cost=0.00..1042968.85 rows=7424587 width=54) (actual time=3226.983..4537.974 rows=7444220 loops=1)\n> Filter: (date = 201407)\n> Total runtime: 5086.096 ms\n> (3 rows)\n\nYou don't appear to be comparing apples to apples here. Note the larger cost estimate, and the odd delay of more than 3 seconds before the first row is returned. Presumably what is happening is that this table contains gigabytes of dead space before the first live tuple. You don't say how you made this comparison table, but I'll bet it involved deleting data and then loading fresh data without a VACUUM or TRUNCATE first.\n\n\n\t\t\tregards, tom lane\n\n\n===============================================================================================================================================================================\n\nThank you so much Tom for the valuable answer as always!\n\nFor the first point you made, you're right. The real execution time varies a lot from the explain analyze, the query on parent table are just as fast as it is on the child table. is this a bug of explain analyze command? 
While we reading the execution plan, shall we ignore the top Append/Result nodes?\n\nFor the second point, I created the test partition table using CTAS statement so there's no insert/update/delete on the test table. But on the production non-partition table, there might be such operations ran against them. But the reason why it takes 3 seconds to get the first row, might because it's non-partitioned so it has to scan the whole table to get the first correct record? This non-partitioned table has ~ 30 million rows while the partition of the table only has ~ 5 million rows.\n\n\nThanks,\nSuya\n",
"msg_date": "Thu, 21 Aug 2014 04:46:34 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query on parent partition table has bad performance"
},
{
"msg_contents": "\"Huang, Suya\" <[email protected]> writes:\n> For the first point you made, you're right. The real execution time varies a lot from the explain analyze, the query on parent table are just as fast as it is on the child table. is this a bug of explain analyze command? While we reading the execution plan, shall we ignore the top Append/Result nodes?\n\nWell, it's a \"bug\" of gettimeofday(): it takes more than zero time, in\nfact quite a lot more than zero time.  Complain to your local kernel\nhacker, and/or the chief of engineering at Intel.  There aren't any\neasy fixes available for us:\nhttp://www.postgresql.org/message-id/flat/[email protected]\n\n> For the second point, I created the test partition table using CTAS statement so there's no insert/update/delete on the test table. But on the production non-partition table, there might be such operations ran against them. But the reason why it takes 3 seconds to get the first row, might because it's non-partitioned so it has to scan the whole table to get the first correct record? This non-partitioned table has ~ 30 million rows while the partition of the table only has ~ 5 million rows.\n\nOh, so the extra time is going into reading rows that fail the filter\ncondition?  Well, that's not surprising.  That's exactly *why* you\npartition tables, so queries can skip entire child tables rather than\nhaving to look at and reject individual rows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Aug 2014 01:10:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query on parent partition table has bad performance"
}
] |
[
{
"msg_contents": "I have a table called stop_event (a stop event is one bus passing one bus\nstop at a given time for a given route and direction), and I'd like to get\nthe average interval for each stop/route/direction combination.\n\nA few hundred new events are written to the table once every minute. No\nrows are ever updated (or deleted, except in development).\n\nstop_event looks like this:\n\n Table \"public.stop_event\"\n Column | Type | Modifiers\n-----------+-----------------------------+-----------\n stop_time | timestamp without time zone | not null\n stop | integer | not null\n bus | integer | not null\n direction | integer | not null\n route | integer | not null\nForeign-key constraints:\n \"stop_event_direction_id_fkey\" FOREIGN KEY (direction) REFERENCES\ndirection(id)\n \"stop_event_route_fkey\" FOREIGN KEY (route) REFERENCES route(id)\n \"stop_event_stop\" FOREIGN KEY (stop) REFERENCES stop(id)\n\nAnd my query looks like this:\n\nSELECT (floor(date_part(E'epoch', avg(interval))) / 60)::INTEGER,\n route,\n direction,\n name,\n st_asgeojson(stop_location)::JSON\nFROM\n (SELECT (stop_time - (lag(stop_time) OVER w)) AS interval,\n route,\n direction,\n name,\n stop_location\n FROM stop_event\n INNER JOIN stop ON (stop_event.stop = stop.id)\n WINDOW w AS (PARTITION BY route, direction, stop ORDER BY stop_time))\nAS all_intervals\nWHERE (interval IS NOT NULL)\nGROUP BY route,\n direction,\n name,\n stop_location;\n\nWith around 1.2 million rows, this takes 20 seconds to run. 1.2 million\nrows is only about a week's worth of data, so I'd like to figure out a way\nto make this faster. The EXPLAIN ANALYZE is at\nhttp://explain.depesz.com/s/ntC.\n\nClearly the bulk of the time is spent sorting the rows in the original\ntable, and then again sorting the results of the subselect. But I'm afraid\nI don't really know what to do with this information. Is there any way I\ncan speed this up? Is my use of an aggregate key for stop_event causing\nproblems? 
Would using a synthetic key help?\n\nThank you for any help you can provide,\n-Eli",
"msg_date": "Thu, 21 Aug 2014 08:29:55 -0500",
"msg_from": "Eli Naeher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Window functions, partitioning, and sorting performance"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 4:29 PM, Eli Naeher <[email protected]> wrote:\n> Clearly the bulk of the time is spent sorting the rows in the original\n> table, and then again sorting the results of the subselect. But I'm afraid I\n> don't really know what to do with this information. Is there any way I can\n> speed this up?\n\n\"Sort Method: external merge  Disk: 120976kB\"\n\nThe obvious first step is to bump up work_mem to avoid disk-based\nsort. Try setting it to something like 256MB in your session and see\nhow it performs then. This may also allow the planner to choose\nHashAggregate instead of sort.\n\nIt's not always straightforward how to tune correctly. It depends on\nyour hardware, concurrency and query complexity, here's some advice:\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem_maintainance_work_mem\n\nAlso you could create an index on (route, direction, stop, stop_time)\nto avoid the inner sort entirely.\n\nAnd it seems that you can move the \"INNER JOIN stop\" to the outer\nquery as well, not sure if that will change much.\n\nTry these and if it's still problematic, report back with a new EXPLAIN ANALYZE\n\nRegards,\nMarti\n",
"msg_date": "Thu, 21 Aug 2014 17:02:07 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Window functions, partitioning, and sorting performance"
},
{
"msg_contents": "On 08/21/2014 08:29 AM, Eli Naeher wrote:\n\n> With around 1.2 million rows, this takes 20 seconds to run. 1.2 million\n> rows is only about a week's worth of data, so I'd like to figure out a\n> way to make this faster.\n\nWell, you'll probably be able to reduce the run time a bit, but even \nwith really good hardware and all in-memory processing, you're not going \nto see significant run-time improvements with that many rows. This is \none of the reasons reporting-specific structures, such as fact tables, \nwere designed to address.\n\nRepeatedly processing the same week/month/year aggregate worth of \nseveral million rows will just increase linearly with each iteration as \ndata size increases. You need to maintain up-to-date aggregates on the \nmetrics you actually want to measure, so you're only reading the few \nhundred rows you introduce every update period. You can retrieve those \nkind of results in a few milliseconds.\n\n-- \nShaun Thomas\nOptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 21 Aug 2014 09:05:41 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Window functions, partitioning, and sorting performance"
},
{
"msg_contents": "Upping work_mem did roughly halve the time, but after thinking about\nShaun's suggestion, I figured it's better to calculate this stuff once and\nthen store it. So here is how the table looks now:\n\n Table \"public.stop_event\"\n Column | Type |\n Modifiers\n---------------------+-----------------------------+---------------------------------------------------------\n stop_time | timestamp without time zone | not null\n stop | integer | not null\n bus | integer | not null\n direction | integer | not null\n route | integer | not null\n id | bigint | not null default\nnextval('stop_event_id_seq'::regclass)\n previous_stop_event | bigint |\nIndexes:\n \"stop_event_pkey\" PRIMARY KEY, btree (id)\n \"stop_event_previous_stop_event_idx\" btree (previous_stop_event)\nForeign-key constraints:\n \"stop_event_direction_id_fkey\" FOREIGN KEY (direction) REFERENCES\ndirection(id)\n \"stop_event_previous_stop_event_fkey\" FOREIGN KEY (previous_stop_event)\nREFERENCES stop_event(id)\n \"stop_event_route_fkey\" FOREIGN KEY (route) REFERENCES route(id)\n \"stop_event_stop\" FOREIGN KEY (stop) REFERENCES stop(id)\nReferenced by:\n TABLE \"stop_event\" CONSTRAINT \"stop_event_previous_stop_event_fkey\"\nFOREIGN KEY (previous_stop_event) REFERENCES stop_event(id)\n\nprevious_stop_event simply references the previous (by stop_time) stop\nevent for the combination of stop, route, and direction. I have\nsuccessfully populated this column for my existing test data. However, when\nI try to do a test self-join using it, Postgres does two seq scans across\nthe whole table, even though I have indexes on both id and\nprevious_stop_event: http://explain.depesz.com/s/ctck. Any idea why those\nindexes are not being used?\n\nThank you again,\n-Eli\n\nOn Thu, Aug 21, 2014 at 9:05 AM, Shaun Thomas <[email protected]>\n> wrote:\n>\n>> On 08/21/2014 08:29 AM, Eli Naeher wrote:\n>>\n>> With around 1.2 million rows, this takes 20 seconds to run. 
1.2 million\n>>> rows is only about a week's worth of data, so I'd like to figure out a\n>>> way to make this faster.\n>>>\n>>\n>> Well, you'll probably be able to reduce the run time a bit, but even with\n>> really good hardware and all in-memory processing, you're not going to see\n>> significant run-time improvements with that many rows. This is one of the\n>> reasons reporting-specific structures, such as fact tables, were designed\n>> to address.\n>>\n>> Repeatedly processing the same week/month/year aggregate worth of several\n>> million rows will just increase linearly with each iteration as data size\n>> increases. You need to maintain up-to-date aggregates on the metrics you\n>> actually want to measure, so you're only reading the few hundred rows you\n>> introduce every update period. You can retrieve those kind of results in a\n>> few milliseconds.\n>>\n>> --\n>> Shaun Thomas\n>> OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n>> 312-676-8870\n>> [email protected]\n>>\n>> ______________________________________________\n>>\n>> See http://www.peak6.com/email_disclaimer/ for terms and conditions\n>> related to this email\n>>\n>\n",
"msg_date": "Thu, 21 Aug 2014 11:19:17 -0500",
"msg_from": "Eli Naeher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Window functions, partitioning, and sorting performance"
},
{
"msg_contents": "Oops, I forgot to include the test self-join query I'm using. It is simply:\n\nSELECT se1.stop_time AS curr, se2.stop_time AS prev\nFROM stop_event se1\nJOIN stop_event se2 ON se1.previous_stop_event = se2.id;\n\n\n\nOn Thu, Aug 21, 2014 at 11:19 AM, Eli Naeher <[email protected]> wrote:\n\n> Upping work_mem did roughly halve the time, but after thinking about\n> Shaun's suggestion, I figured it's better to calculate this stuff once and\n> then store it. So here is how the table looks now:\n>\n> Table \"public.stop_event\"\n> Column | Type |\n> Modifiers\n>\n> ---------------------+-----------------------------+---------------------------------------------------------\n> stop_time | timestamp without time zone | not null\n> stop | integer | not null\n> bus | integer | not null\n> direction | integer | not null\n> route | integer | not null\n> id | bigint | not null default\n> nextval('stop_event_id_seq'::regclass)\n> previous_stop_event | bigint |\n> Indexes:\n> \"stop_event_pkey\" PRIMARY KEY, btree (id)\n> \"stop_event_previous_stop_event_idx\" btree (previous_stop_event)\n> Foreign-key constraints:\n> \"stop_event_direction_id_fkey\" FOREIGN KEY (direction) REFERENCES\n> direction(id)\n> \"stop_event_previous_stop_event_fkey\" FOREIGN KEY\n> (previous_stop_event) REFERENCES stop_event(id)\n> \"stop_event_route_fkey\" FOREIGN KEY (route) REFERENCES route(id)\n> \"stop_event_stop\" FOREIGN KEY (stop) REFERENCES stop(id)\n> Referenced by:\n> TABLE \"stop_event\" CONSTRAINT \"stop_event_previous_stop_event_fkey\"\n> FOREIGN KEY (previous_stop_event) REFERENCES stop_event(id)\n>\n> previous_stop_event simply references the previous (by stop_time) stop\n> event for the combination of stop, route, and direction. I have\n> successfully populated this column for my existing test data. 
However, when\n> I try to do a test self-join using it, Postgres does two seq scans across\n> the whole table, even though I have indexes on both id and\n> previous_stop_event: http://explain.depesz.com/s/ctck. Any idea why those\n> indexes are not being used?\n>\n> Thank you again,\n> -Eli\n>\n> On Thu, Aug 21, 2014 at 9:05 AM, Shaun Thomas <[email protected]>\n>> wrote:\n>>\n>>> On 08/21/2014 08:29 AM, Eli Naeher wrote:\n>>>\n>>> With around 1.2 million rows, this takes 20 seconds to run. 1.2 million\n>>>> rows is only about a week's worth of data, so I'd like to figure out a\n>>>> way to make this faster.\n>>>>\n>>>\n>>> Well, you'll probably be able to reduce the run time a bit, but even\n>>> with really good hardware and all in-memory processing, you're not going to\n>>> see significant run-time improvements with that many rows. This is one of\n>>> the reasons reporting-specific structures, such as fact tables, were\n>>> designed to address.\n>>>\n>>> Repeatedly processing the same week/month/year aggregate worth of\n>>> several million rows will just increase linearly with each iteration as\n>>> data size increases. You need to maintain up-to-date aggregates on the\n>>> metrics you actually want to measure, so you're only reading the few\n>>> hundred rows you introduce every update period. You can retrieve those kind\n>>> of results in a few milliseconds.\n>>>\n>>> --\n>>> Shaun Thomas\n>>> OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n>>> 312-676-8870\n>>> [email protected]\n>>>\n>>> ______________________________________________\n>>>\n>>> See http://www.peak6.com/email_disclaimer/ for terms and conditions\n>>> related to this email\n>>>\n>>\n>>\n>\n",
"msg_date": "Thu, 21 Aug 2014 11:21:03 -0500",
"msg_from": "Eli Naeher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Window functions, partitioning, and sorting performance"
},
{
"msg_contents": "On Thu, Aug 21, 2014 at 7:19 PM, Eli Naeher <[email protected]> wrote:\n> However, when I try to do a\n> test self-join using it, Postgres does two seq scans across the whole table,\n> even though I have indexes on both id and previous_stop_event:\n> http://explain.depesz.com/s/ctck. Any idea why those indexes are not being\n> used?\n\nBecause the planner thinks seq scan+hash join is going to be faster\nthan incurring the overhead of index scans for other kinds of plans.\n\nYou can try out alternative plan types by running 'set\nenable_hashjoin=off' in your session. If it does turn out to be\nfaster, then it usually means you haven't set planner tunables right\n(random_page_cost, effective_cache_size and possibly cpu_tuple_cost).\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Aug 2014 20:14:31 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Window functions, partitioning, and sorting performance"
}
] |
[
{
"msg_contents": "Hello,\n\nTrying to insert into one table with 1 million records through java JDBC \ninto psql8.3. May I know (1) or (2) is better please?\n\n(1) set autocommit(true)\n(2) set autocommit(false)\n commit every n records (e.g., 100, 500, 1000, etc)\n\nThanks a lot!\nEmi\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 16:49:56 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "Emi Lu-2 wrote\n> Hello,\n> \n> Trying to insert into one table with 1 million records through java JDBC \n> into psql8.3. May I know (1) or (2) is better please?\n> \n> (1) set autocommit(true)\n> (2) set autocommit(false)\n> commit every n records (e.g., 100, 500, 1000, etc)\n> \n> Thanks a lot!\n> Emi\n\nTypically the larger the n the better. Locking and risk of data loss on a\nfailure are the tradeoffs to consider. Other factors, like memory, make\nchoosing too large an n bad so using 500,000 is probably wrong but 500 is\nprobably overly conservative. Better advice depends on context and\nhardware.\n\nYou should also consider upgrading to a newer, supported, version of\nPostgreSQL.\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autocommit-true-false-for-more-than-1-million-records-tp5815943p5815946.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 13:58:34 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "* Emi Lu ([email protected]) wrote:\n> Hello,\n> \n> Trying to insert into one table with 1 million records through java\n> JDBC into psql8.3. May I know (1) or (2) is better please?\n> \n> (1) set autocommit(true)\n> (2) set autocommit(false)\n> commit every n records (e.g., 100, 500, 1000, etc)\n\nIt depends on what you need.\n\nData will be available to concurrent processes earlier with (1), while\n(2) will go faster.\n\n\tThanks,\n\t\n\t\tStephen",
"msg_date": "Fri, 22 Aug 2014 17:00:18 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "> *\n>> Trying to insert into one table with 1 million records through java\n>> JDBC into psql8.3. May I know (1) or (2) is better please?\n>>\n>> (1) set autocommit(true)\n>> (2) set autocommit(false)\n>> commit every n records (e.g., 100, 500, 1000, etc)\n> It depends on what you need.\n>\n> Data will be available to concurrent processes earlier with (1), while\n> (2) will go faster.\nNo need to worry about the lock/loosing records because after data \nloading will do a check. For now, I'd like the fastest way. Would you \nsuggest commit every 1000 or 3000 records?\n\nThanks a lot!\nEmi\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Aug 2014 17:11:01 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "* Emi Lu ([email protected]) wrote:\n> >*\n> >>Trying to insert into one table with 1 million records through java\n> >>JDBC into psql8.3. May I know (1) or (2) is better please?\n> >>\n> >>(1) set autocommit(true)\n> >>(2) set autocommit(false)\n> >> commit every n records (e.g., 100, 500, 1000, etc)\n> >It depends on what you need.\n> >\n> >Data will be available to concurrent processes earlier with (1), while\n> >(2) will go faster.\n> No need to worry about the lock/loosing records because after data\n> loading will do a check. For now, I'd like the fastest way. Would\n> you suggest commit every 1000 or 3000 records?\n\nThe improvement drops off pretty quickly in my experience, but it\ndepends on the size of the records and other things.\n\nTry it and see..? It's almost certainly going to depend on your\nspecific environment.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Fri, 22 Aug 2014 17:21:15 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "Good morning,\n>>>> Trying to insert into one table with 1 million records through java\n>>>> JDBC into psql8.3. May I know (1) or (2) is better please?\n>>>>\n>>>> (1) set autocommit(true)\n>>>> (2) set autocommit(false)\n>>>> commit every n records (e.g., 100, 500, 1000, etc)\n>>> It depends on what you need.\n>>>\n>>> Data will be available to concurrent processes earlier with (1), while\n>>> (2) will go faster.\n>> No need to worry about the lock/loosing records because after data\n>> loading will do a check. For now, I'd like the fastest way. Would\n>> you suggest commit every 1000 or 3000 records?\n> The improvement drops off pretty quickly in my experience, but it\n> depends on the size of the records and other things.\nThe table is huge with almost 170 columns.\n\n> Try it and see..? It's almost certainly going to depend on your\n> specific environment.\nCan you let me know what are the \"specific environment\" please? Such as: \n......\n\nBy the way, could someone let me know why set autocommit(false) is for \nsure faster than true please? Or, some online docs talk about this.\n\nThanks a lot!\nEmi\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Aug 2014 09:40:07 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "On Mon, Aug 25, 2014 at 9:40 AM, Emi Lu <[email protected]> wrote:\n\n>\n> By the way, could someone let me know why set autocommit(false) is for\n> sure faster than true please? Or, some online docs talk about this.\n>\n>\nNot sure about the docs specifically but:\n\nCommit is expensive because as soon as it is issued all of the data has to\nbe guaranteed written. While ultimately the same amount of data is\nguaranteed by doing them in batches there is opportunity to achieve\neconomies of scale.\n\n(I think...)\nWhen you commit you flush data to disk - until then you can make use of\nRAM. Once you exhaust RAM you might as well commit and free up that RAM\nfor the next batch.\n\nDavid J.",
"msg_date": "Mon, 25 Aug 2014 09:51:25 -0400",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "Hi Emi,\n\nDatabases that comply to the ACID standard (\nhttp://en.wikipedia.org/wiki/ACID) ensure that that are no data loss by\nfirst writing the data changes to the database log in opposition to\nupdating the actual data on the filesystem first (on the datafiles).\n\nEach database has its own way of doing it, but it basically consists of\nwriting the data to the logfile at each COMMIT and writing the data to the\ndatafile only when it's necessary.\n\nSo the COMMIT command is a way of telling the database to write the data\nchanges to the logfile.\n\nBoth logfiles and datafiles resides on the filesystem, but why writing to\nthe logfile is faster?\n\nIt is because the logfile is written sequentially, while the datafile is\ntotally dispersed and may even be fragmented.\n\nResuming: autocommit false is faster because you avoid going to the hard\ndisk to write the changes into the logfile, you keep them in RAM memory\nuntil you decide to write them to the logfile (at each 10K rows for\ninstance).\n\nBe aware that, eventually, you will need to write data to the logfile, so\nyou can't avoid that. But usually the performance is better if you write X\nrows at a time to the logfile, rather than writing every and each row one\nby one (because of the hard disk writing overhead).\n\nThe number of rows you need to write to get a better performance will\ndepend on your environment and is pretty much done by blind-testing the\nprocess. For millions of rows, I usually commit at each 10K or 50K rows.\n\nRegards,\n\nFelipe\n\n\n\n\n2014-08-25 10:40 GMT-03:00 Emi Lu <[email protected]>:\n\n> Good morning,\n>\n>> Trying to insert into one table with 1 million records through java\n>>>>> JDBC into psql8.3. May I know (1) or (2) is better please?\n>>>>>\n>>>>> (1) set autocommit(true)\n>>>>> (2) set autocommit(false)\n>>>>> commit every n records (e.g., 100, 500, 1000, etc)\n>>>>>\n>>>> It depends on what you need.\n>>>>\n>>>> Data will be available to concurrent processes earlier with (1), while\n>>>> (2) will go faster.\n>>>>\n>>> No need to worry about the lock/loosing records because after data\n>>> loading will do a check. For now, I'd like the fastest way. Would\n>>> you suggest commit every 1000 or 3000 records?\n>>>\n>> The improvement drops off pretty quickly in my experience, but it\n>> depends on the size of the records and other things.\n>>\n> The table is huge with almost 170 columns.\n>\n> Try it and see..? It's almost certainly going to depend on your\n>> specific environment.\n>>\n> Can you let me know what are the \"specific environment\" please? Such as:\n> ......\n>\n> By the way, could someone let me know why set autocommit(false) is for\n> sure faster than true please? Or, some online docs talk about this.\n>\n> Thanks a lot!\n> Emi\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 25 Aug 2014 11:02:52 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "On Fri, Aug 22, 2014 at 1:49 PM, Emi Lu <[email protected]> wrote:\n\n> Hello,\n>\n> Trying to insert into one table with 1 million records through java JDBC\n> into psql8.3. May I know (1) or (2) is better please?\n>\n> (1) set autocommit(true)\n> (2) set autocommit(false)\n> commit every n records (e.g., 100, 500, 1000, etc)\n>\n\nIn general it is better to use COPY (however JDBC for 8.3. exposes that),\nas that is designed specifically for bulk loading.\n\nThen it doesn't matter whether autocommit is on or off, because the COPY is\na single statement.\n\nCheers,\n\nJeff",
"msg_date": "Mon, 25 Aug 2014 08:48:18 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "On the COPY's atomicity -- looking for a definitive answer from a core\ndeveloper, not a user's guess, please.\n\nSuppose I COPY a huge amount of data, e.g. 100 records.\n\nMy 99 records are fine for the target, and the 100-th is not -- it\ncomes with a wrong record format or a target constraint violation.\n\nThe whole thing is aborted then, and the good 99 records are not\nmaking it into the target table.\n\nMy question is: Where are these 99 records have been living, on the\ndatabase server, while the 100-th one hasn't come yet, and the need to\nthrow the previous data accumulation away has not come yet?\n\nThere have to be some limits to the space and/or counts taken by the\nnew, uncommitted, data, while the COPY operation is still in progress.\nWhat are they?\n\nSay, I am COPYing 100 TB of data and the bad records are close to the\nend of the feed -- how will this all error out?\n\nThanks,\n\n-- Alex\n\n\n\nOn Mon, Aug 25, 2014 at 11:48 AM, Jeff Janes <[email protected]> wrote:\n\n> On Fri, Aug 22, 2014 at 1:49 PM, Emi Lu <[email protected]> wrote:\n>\n>> Hello,\n>>\n>>\n>> Trying to insert into one table with 1 million records through java JDBC\n>> into psql8.3. May I know (1) or (2) is better please?\n>>\n>> (1) set autocommit(true)\n>> (2) set autocommit(false)\n>> commit every n records (e.g., 100, 500, 1000, etc)\n>>\n>\n> In general it is better to use COPY (however JDBC for 8.3. exposes that),\n> as that is designed specifically for bulk loading.\n>\n> Then it doesn't matter whether autocommit is on or off, because the COPY\n> is a single statement.\n>\n> Cheers,\n>\n> Jeff\n>\n",
"msg_date": "Tue, 26 Aug 2014 18:10:18 -0400",
"msg_from": "Alex Goncharov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "Alex Goncharov <[email protected]> wrote:\n\n> Suppose I COPY a huge amount of data, e.g. 100 records.\n>\n> My 99 records are fine for the target, and the 100-th is not --\n> it comes with a wrong record format or a target constraint\n> violation.\n>\n> The whole thing is aborted then, and the good 99 records are not\n> making it into the target table.\n\nRight. This is one reason people often batch such copies or check\nthe data very closely before copying in.\n\n> My question is: Where are these 99 records have been living, on\n> the database server, while the 100-th one hasn't come yet, and\n> the need to throw the previous data accumulation away has not\n> come yet?\n\nThey will have been written into the table. They do not become\nvisible to any other transaction until and unless the inserting\ntransaction successfully commits. These slides may help:\n\nhttp://momjian.us/main/writings/pgsql/mvcc.pdf\n\n> There have to be some limits to the space and/or counts taken by\n> the new, uncommitted, data, while the COPY operation is still in\n> progress. What are they?\n\nPrimarily disk space for the table. If you are not taking\nadvantage of the \"unlogged load\" optimization, you will have\nwritten Write Ahead Log (WAL) records, too -- which (depending on\nyour configuration) you may be archiving. In that case, you may\nneed to be concerned about the archive space required. If you have\nforeign keys defined for the table, you may get into trouble on the\nRAM used to track pending checks for those constraints. I would\nrecommend adding any FKs after you are done with the big bulk load.\n\nPostgreSQL does *not* have a \"rollback log\" which will impose a limit.\n\n> Say, I am COPYing 100 TB of data and the bad records are close\n> to the end of the feed -- how will this all error out?\n\nThe rows will all be in the table, but not visible to any other\ntransaction. 
Autovacuum will clean them out in the background, but\nif you want to restart your load against an empty table it might be\na good idea to TRUNCATE that table; it will be a lot faster.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Aug 2014 15:33:48 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "Thank you, Kevin -- this is helpful.\n\nBut it still leaves questions for me.\n\nKevin Grittner <[email protected]> wrote:\n\n> Alex Goncharov <[email protected]> wrote:\n\n> > The whole thing is aborted then, and the good 99 records are not\n> > making it into the target table.\n>\n> Right. This is one reason people often batch such copies or check\n> the data very closely before copying in.\n\nHow do I decide, before starting a COPY data load, whether such a load\nprotection (\"complexity\") makes sense (\"is necessary\")?\n\nClearly not needed for 1 MB of data in a realistic environment.\n\nClearly is needed for loading 1 TB in a realistic environment.\n\nTo put it differently: If I COPY 1 TB of data, what criteria should I\nuse for choosing the size of the chunks to split the data into?\n\nFor INSERT-loading, for the database client interfaces offering the\narray mode, the performance difference between loading 100 or 1000\nrows at a time is usually negligible if any. Therefore 100- and\n1000-row's array sizes are both reasonable choices.\n\nBut what is a reasonable size for a COPY chunk? It can't even be\nmeasured in rows.\n\nNote, that if you have a 1 TB record-formatted file to load, you can't\njust split it in 1 MB chunks and feed them to COPY -- the file has to\nbe split on the record boundaries.\n\nSo, splitting the data for COPY is not a trivial operation, and if\nsuch splitting can be avoided, a reasonable operator will avoid it.\n\nBut then again: when can it be avoided?\n\n> > My question is: Where are these 99 records have been living, on\n> > the database server, while the 100-th one hasn't come yet, and\n> > the need to throw the previous data accumulation away has not\n> > come yet?\n>\n> They will have been written into the table. They do not become\n> visible to any other transaction until and unless the inserting\n> transaction successfully commits. 
These slides may help:\n>\n> http://momjian.us/main/writings/pgsql/mvcc.pdf\n\nYeah, I know about the MVCC model... The question is about the huge\ndata storage to be reserved without a commitment while the load is not\ncompleted, about the size constrains in effect here.\n\n> > There have to be some limits to the space and/or counts taken by\n> > the new, uncommitted, data, while the COPY operation is still in\n> > progress. What are they?\n>\n> Primarily disk space for the table.\n\nHow can that be found? Is \"df /mount/point\" the deciding factor? Or\nsome 2^32 or 2^64 number?\n\n> If you are not taking advantage of the \"unlogged load\" optimization,\n> you will have written Write Ahead Log (WAL) records, too -- which\n> (depending on your configuration) you may be archiving. In that\n> case, you may need to be concerned about the archive space required.\n\n\"... may need to be concerned ...\" if what? Loading 1 MB? 1 GB? 1 TB?\n\nIf I am always concerned, and check something before a COPY, what\nshould I be checking? What are the \"OK-to-proceed\" criteria?\n\n> If you have foreign keys defined for the table, you may get into\n> trouble on the RAM used to track pending checks for those\n> constraints. I would recommend adding any FKs after you are done\n> with the big bulk load.\n\nI am curious about the simplest case where only the data storage is to\nbe worried about. (As an aside: the CHECK and NOT NULL constrains are\nnot a storage factor, right?)\n\n> PostgreSQL does *not* have a \"rollback log\" which will impose a\n> limit.\n\nSomething will though, right? What would that be? The available disk\nspace on a file system? (I would be surprised.)\n\n> > Say, I am COPYing 100 TB of data and the bad records are close\n> > to the end of the feed -- how will this all error out?\n>\n> The rows will all be in the table, but not visible to any other\n> transaction.\n\nI see. How much data can I fit there while doing COPY? 
Not 1 TB?\n\n-- Alex\n\n\n\nOn Tue, Aug 26, 2014 at 6:33 PM, Kevin Grittner <[email protected]> wrote:\n\n> Alex Goncharov <[email protected]> wrote:\n>\n> > Suppose I COPY a huge amount of data, e.g. 100 records.\n> >\n> > My 99 records are fine for the target, and the 100-th is not --\n> > it comes with a wrong record format or a target constraint\n> > violation.\n> >\n> > The whole thing is aborted then, and the good 99 records are not\n> > making it into the target table.\n>\n> Right. This is one reason people often batch such copies or check\n> the data very closely before copying in.\n>\n> > My question is: Where are these 99 records have been living, on\n> > the database server, while the 100-th one hasn't come yet, and\n> > the need to throw the previous data accumulation away has not\n> > come yet?\n>\n> They will have been written into the table. They do not become\n> visible to any other transaction until and unless the inserting\n> transaction successfully commits. These slides may help:\n>\n> http://momjian.us/main/writings/pgsql/mvcc.pdf\n>\n> > There have to be some limits to the space and/or counts taken by\n> > the new, uncommitted, data, while the COPY operation is still in\n> > progress. What are they?\n>\n> Primarily disk space for the table. If you are not taking\n> advantage of the \"unlogged load\" optimization, you will have\n> written Write Ahead Log (WAL) records, too -- which (depending on\n> your configuration) you may be archiving. In that case, you may\n> need to be concerned about the archive space required. If you have\n> foreign keys defined for the table, you may get into trouble on the\n> RAM used to track pending checks for those constraints. 
I would\n> recommend adding any FKs after you are done with the big bulk load.\n>\n> PostgreSQL does *not* have a \"rollback log\" which will impose a limit.\n>\n> > Say, I am COPYing 100 TB of data and the bad records are close\n> > to the end of the feed -- how will this all error out?\n>\n> The rows will all be in the table, but not visible to any other\n> transaction.  Autovacuum will clean them out in the background, but\n> if you want to restart your load against an empty table it might be\n> a good idea to TRUNCATE that table; it will be a lot faster.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Tue, 26 Aug 2014 21:20:45 -0400",
"msg_from": "Alex Goncharov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "On Tue, Aug 26, 2014 at 9:21 PM, Alex Goncharov-2 [via PostgreSQL] <\[email protected]> wrote:\n\n> Thank you, Kevin -- this is helpful.\n>\n> But it still leaves questions for me.\n>\n>\n> Kevin Grittner <[hidden email]\n> <http://user/SendEmail.jtp?type=node&node=5816426&i=0>> wrote:\n>\n> > Alex Goncharov <[hidden email]\n> <http://user/SendEmail.jtp?type=node&node=5816426&i=1>> wrote:\n>\n> > > The whole thing is aborted then, and the good 99 records are not\n> > > making it into the target table.\n> >\n> > Right. This is one reason people often batch such copies or check\n> > the data very closely before copying in.\n>\n> How do I decide, before starting a COPY data load, whether such a load\n> protection (\"complexity\") makes sense (\"is necessary\")?\n>\n>\nYou should probably consider something like:\n\nhttp://pgloader.io/\n\n(I know there are others, this one apparently has the best marketing\nteam...)\n\nNormal case, with normal COPY, you load a bad file into an empty table, it\nfails, you truncate and get better data for the next attempt.\n\nHow long that will take is system (IOPS/CPU) and data dependent.\n\nThe probability of failure is source dependent - and prior experience plays\na large role here as well.\n\nIf you plan to load directly into a live table the wasted space from a bad\nload could kill you so smaller partial loads are better - if you can afford\nthe implicit system inconsistency such a partial load would cause.\n\nIf you understand how the system works you should be able to evaluate the\ndifferent pieces and come to a conclusion as how best to proceed in a\nspecific situation. No one else on this list has the relevant information\nto make that judgement call. If this is just asking about rules-of-thumb\nI'd say figure out how many records 100MB consumes and COMMIT after that\nmany records. 10,000 records is also a nice round number to pick -\nregardless of the amount of MB consumed. 
Start there and tweak based upon\nexperience.\n\n> If you are not taking advantage of the \"unlogged load\" optimization,\n> > you will have written Write Ahead Log (WAL) records, too -- which\n> > (depending on your configuration) you may be archiving. In that\n> > case, you may need to be concerned about the archive space required.\n>\n> \"... may need to be concerned ...\" if what? Loading 1 MB? 1 GB? 1 TB?\n>\n> If I am always concerned, and check something before a COPY, what\n> should I be checking? What are the \"OK-to-proceed\" criteria?\n>\n>\nIf you only have 500k free in your archive directory that 1MB file will\npose a problem...though if you have 4TB of archive available the 1TB would\nfit easily. Do you compress your WAL files before shipping them off to the\narchive? How compressible is your data?\n\nI'm sure people have decent rules-of-thumb here but in the end your\nspecific environment and data, especially at the TB scale, is going to be\nimportant; and is something that you will only discover through testing.\n\n\n>\n> > If you have foreign keys defined for the table, you may get into\n> > trouble on the RAM used to track pending checks for those\n> > constraints. I would recommend adding any FKs after you are done\n> > with the big bulk load.\n>\n> I am curious about the simplest case where only the data storage is to\n> be worried about. (As an aside: the CHECK and NOT NULL constrains are\n> not a storage factor, right?)\n>\n>\nCorrect\n\n\n>\n> > PostgreSQL does *not* have a \"rollback log\" which will impose a\n> > limit.\n>\n> Something will though, right? What would that be? The available disk\n> space on a file system? (I would be surprised.)\n>\n>\n> > > Say, I am COPYing 100 TB of data and the bad records are close\n> > > to the end of the feed -- how will this all error out?\n> >\n> > The rows will all be in the table, but not visible to any other\n> > transaction.\n>\n> I see. How much data can I fit there while doing COPY? 
Not 1 TB?\n>\n> -- Alex\n>\n\nYou need the same amount of space that you would require if the file\nimported to completion.\n\nPostgreSQL is optimistic in this regard - it assumes you will commit and\nso up until failure there is no difference between a good and bad import.\n The magic is described in Slide 24 of the MVCC link above (\nhttp://momjian.us/main/writings/pgsql/mvcc.pdf) - if the transaction is\naborted then as far as the system is concerned the written data has been\ndeleted and can be cleaned up just like if the following sequence of\ncommands occurred:\n\nBEGIN;\nCOPY tbl FROM ....;\nCOMMIT; ---success\nDELETE FROM tbl ....;\n\nHence the comment to \"TRUNCATE\" after a failed load if at all possible -\nto avoid the unnecessary VACUUM on tbl...\n\nQUESTION: would the vacuum reclaim the disk space in this situation (I\npresume yes) because if not, and another imported was to be attempted,\nideally the allocated space could be reused.\n\nI'm not sure what a reasonable formula would be, especially at the TB\nscale, but roughly 2x the size of the imported (uncompressed) file would be\na good starting point (table + WAL). You likely would want many multiples\nof this unless you are dealing with a one-off event. Indexes and dead\ntuples in particular are likely to be involved. 
You get some leeway\ndepending on compression but that is data specific and thus something you\nhave to test yourself if you are operating at the margin of your system's\nresources.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/autocommit-true-false-for-more-than-1-million-records-tp5815943p5816460.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Tue, 26 Aug 2014 20:40:19 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "> Thank you, Kevin -- this is helpful.\n\nThank you David, too.\n\n> But it still leaves questions for me.\n\nStill...\n\nAlex Goncharov <[email protected]> wrote:\n\n>>> How do I decide, before starting a COPY data load, whether such a load\n>>> protection (\"complexity\") makes sense (\"is necessary\")?\n\nThis is *the* practical question.\n\nDavid G Johnston <[email protected]> wrote:\n\n> You should probably consider something like: http://pgloader.io/\n\nThis is not my question; I want to see if anybody can offer a\nmeaningful situation evaluation strategy for a potential using or not\nusing COPY for loading the \"big data\".\n\nIf nobody can, fine: it'll give me the reason to claim \"Nobody knows\".\n\n> Normal case, with normal COPY,\n\nThis is the case I am asking about: the COPY operation limitations for\nthe \"big data\": until what point a plain COPY can be used.\n\n> you load a bad file into an empty table, it fails, you truncate and\n> get better data for the next attempt.\n\nThis is not how many businesses operate.\n\n> How long that will take is system (IOPS/CPU) and data dependent.\n\n\"How long\", was not the question: my question was originally about the\nbehavior for a bad record at the end of a large data set submitted to\nCOPY; when it was stated that the data \"in process\" becomes an\ninvisible (until committed) part of the target table, it became\nobvious to me that the fundamental question has to be asked: \"How much\ncan fit there, in the temporary operational space (whatever it's\ncalled in PostgreSQL.)?\" \"df /mount -> free\" or \"2^32\"?\n\n> The probability of failure is source dependent - and prior\n> experience plays a large role here as well.\n\nNot the question.\n\n> If you plan to load directly into a live table the wasted space from\n> a bad load could kill you so smaller partial loads are better - if\n> you can afford the implicit system inconsistency such a partial load\n> would cause.\n\nNot the 
question.\n\n> If you understand how the system works\n\nI don't, to the necessary extent, so I asked for an expert opinion :)\n\n> you should be able to evaluate the different pieces and come to a\n> conclusion as how best to proceed in a specific situation. No one\n> else on this list has the relevant information to make that\n> judgement call.\n\nWe'll see; too early to tell yet :)\n\n> If this is just asking about rules-of-thumb\n\nYes.\n\n> I'd say figure out how many records 100MB consumes and COMMIT after that\n> many records.\n\nPardon me: I am running COPY and know how many records are processed\nso far?.. (Hmm... can't be.)\n\n> 10,000 records is also a nice round number to pick - regardless of\n> the amount of MB consumed. Start there and tweak based upon\n> experience.\n\nYou are clearly suggesting to split the large data file into many\nsmall ones. To split very intelligently, on the record boundaries.\n\nAnd since this is very hard and would involve quite another, external\nprocessing machinery, I am trying to understand until what point this\nis safe not to do (subject to what factors.)\n\n> If you are not taking advantage of the \"unlogged load\" optimization,\n\nI don't see any way to control this for COPY only. Are you talking\nabout the 'postgresql.conf' settings?\n\n> If you only have 500k free in your archive directory that 1MB file\n> will pose a problem...though if you have 4TB of archive available\n> the 1TB would fit easily.\n\nSo the answer to the \"How much data can fit in the COPY storage\nareas?\" question is solely a \"df /mount/point\" thing?\n\nI.e. before initiating the COPY, I should:\n\n ls -l DATA-FILE\n df -m /server/db-cluster/pg_data-or-something\n\ncompare the two values and be assured that my COPY will reach the end\nof my DATA-FILE (whether is stumbles in the end or not) if the former\nvalue is meaningfully smaller than the latter?\n\nI would take this for the answer. 
(Let's see if there are other\nevaluation suggestions.)\n\n> Do you compress your WAL files before shipping them off to the\n> archive? How compressible is your data?\n\nTry to give me the upper limit evaluation strategy, when all the\ncompression and archive factors are working in my favor.\n\n> I'm sure people have decent rules-of-thumb here\n\nI would love to hear about them.\n\n> but in the end your specific environment and data, especially at the\n> TB scale, is going to be important; and is something that you will\n> only discover through testing.\n\n\"Don't malloc 2 GB on a system with 100 MB RAM\" is a meaningful rule\nof thumb, not requiring any testing. I am looking for similar simple\nguiding principles for COPY.\n\n>> > > Say, I am COPYing 100 TB of data and the bad records are close\n>> > > to the end of the feed -- how will this all error out?\n>> >\n>> > The rows will all be in the table, but not visible to any other\n>> > transaction.\n>>\n>> I see. How much data can I fit there while doing COPY? Not 1 TB?\n\n> You need the same amount of space that you would require if the file\n> imported to completion.\n\n> PostgreSQL is optimistic in this regard - it assumes you will commit\n> and so up until failure there is no difference between a good and\n> bad import.\n\nI can see it now, thanks.\n\n> I'm not sure what a reasonable formula would be, especially at the TB\n> scale,\n\nMake it 1 GB then :)\n\nCan I load 1 GB (uncompressed) via one COPY?\n\nWhen not -- when \"df\" says that there is less than 10 GB of free disk\nspace in the relevant file systems? Would that be all I need to know?\n\n> but roughly 2x the size of the imported (uncompressed) file would be\n> a good starting point (table + WAL). 
You likely would want many\n> multiples of this unless you are dealing with a one-off event.\n> Indexes and dead tuples in particular are likely to be involved.\n> You get some leeway depending on compression but that is data\n> specific and thus something you have to test yourself if you are\n> operating at the margin of your system's resources.\n\nI am willing to accept any factor -- 2x, 10x. I want to be certain the\nfactor of what over what, though. So far, only the \"df-free-space\" to\n\"data-file-size\" consideration has come up.\n\nThanks,\n\n-- Alex\n\n\n\nOn Tue, Aug 26, 2014 at 11:40 PM, David G Johnston <\[email protected]> wrote:\n\n> On Tue, Aug 26, 2014 at 9:21 PM, Alex Goncharov-2 [via PostgreSQL] <[hidden\n> email] <http://user/SendEmail.jtp?type=node&node=5816460&i=0>> wrote:\n>\n>> Thank you, Kevin -- this is helpful.\n>>\n>> But it still leaves questions for me.\n>>\n>>\n>> Kevin Grittner <[hidden email]\n>> <http://user/SendEmail.jtp?type=node&node=5816426&i=0>> wrote:\n>>\n>>\n>> > Alex Goncharov <[hidden email]\n>> <http://user/SendEmail.jtp?type=node&node=5816426&i=1>> wrote:\n>>\n>> > > The whole thing is aborted then, and the good 99 records are not\n>> > > making it into the target table.\n>> >\n>> > Right. 
This is one reason people often batch such copies or check\n>> > the data very closely before copying in.\n>>\n>> How do I decide, before starting a COPY data load, whether such a load\n>> protection (\"complexity\") makes sense (\"is necessary\")?\n>>\n>>\n> You should probably consider something like:\n>\n> http://pgloader.io/\n>\n> (I know there are others, this one apparently has the best marketing\n> team...)\n>\n> Normal case, with normal COPY, you load a bad file into an empty table,\n> it fails, you truncate and get better data for the next attempt.\n>\n> How long that will take is system (IOPS/CPU) and data dependent.\n>\n> The probability of failure is source dependent - and prior experience\n> plays a large role here as well.\n>\n> If you plan to load directly into a live table the wasted space from a bad\n> load could kill you so smaller partial loads are better - if you can afford\n> the implicit system inconsistency such a partial load would cause.\n>\n> If you understand how the system works you should be able to evaluate the\n> different pieces and come to a conclusion as how best to proceed in a\n> specific situation. No one else on this list has the relevant information\n> to make that judgement call. If this is just asking about rules-of-thumb\n> I'd say figure out how many records 100MB consumes and COMMIT after that\n> many records. 10,000 records is also a nice round number to pick -\n> regardless of the amount of MB consumed. Start there and tweak based upon\n> experience.\n>\n> > If you are not taking advantage of the \"unlogged load\" optimization,\n>> > you will have written Write Ahead Log (WAL) records, too -- which\n>> > (depending on your configuration) you may be archiving. In that\n>> > case, you may need to be concerned about the archive space required.\n>>\n>> \"... may need to be concerned ...\" if what? Loading 1 MB? 1 GB? 1 TB?\n>>\n>> If I am always concerned, and check something before a COPY, what\n>> should I be checking? 
What are the \"OK-to-proceed\" criteria?\n>>\n>>\n> If you only have 500k free in your archive directory that 1MB file will\n> pose a problem...though if you have 4TB of archive available the 1TB would\n> fit easily. Do you compress your WAL files before shipping them off to the\n> archive? How compressible is your data?\n>\n> I'm sure people have decent rules-of-thumb here but in the end your\n> specific environment and data, especially at the TB scale, is going to be\n> important; and is something that you will only discover through testing.\n>\n>\n>>\n>> > If you have foreign keys defined for the table, you may get into\n>> > trouble on the RAM used to track pending checks for those\n>> > constraints. I would recommend adding any FKs after you are done\n>> > with the big bulk load.\n>>\n>> I am curious about the simplest case where only the data storage is to\n>> be worried about. (As an aside: the CHECK and NOT NULL constrains are\n>> not a storage factor, right?)\n>>\n>>\n> Correct\n>\n>\n>>\n>> > PostgreSQL does *not* have a \"rollback log\" which will impose a\n>> > limit.\n>>\n>> Something will though, right? What would that be? The available disk\n>> space on a file system? (I would be surprised.)\n>>\n>>\n>> > > Say, I am COPYing 100 TB of data and the bad records are close\n>> > > to the end of the feed -- how will this all error out?\n>> >\n>> > The rows will all be in the table, but not visible to any other\n>> > transaction.\n>>\n>> I see. How much data can I fit there while doing COPY? 
Not 1 TB?\n>>\n>> -- Alex\n>>\n>\n> You need the same amount of space that you would require if the file\n> imported to completion.\n>\n> PostgreSQL is optimistic in this regard - it assumes you will commit and\n> so up until failure there is no difference between a good and bad import.\n> The magic is described in Slide 24 of the MVCC link above (\n> http://momjian.us/main/writings/pgsql/mvcc.pdf) - if the transaction is\n> aborted then as far as the system is concerned the written data has been\n> deleted and can be cleaned up just like if the following sequence of\n> commands occurred:\n>\n> BEGIN;\n> COPY tbl FROM ....;\n> COMMIT; ---success\n> DELETE FROM tbl ....;\n>\n> Hence the comment to \"TRUNCATE\" after a failed load if at all possible -\n> to avoid the unnecessary VACUUM on tbl...\n>\n> QUESTION: would the vacuum reclaim the disk space in this situation (I\n> presume yes) because if not, and another imported was to be attempted,\n> ideally the allocated space could be reused.\n>\n> I'm not sure what a reasonable formula would be, especially at the TB\n> scale, but roughly 2x the size of the imported (uncompressed) file would be\n> a good starting point (table + WAL). You likely would want many multiples\n> of this unless you are dealing with a one-off event. Indexes and dead\n> tuples in particular are likely to be involved. 
You get some leeway\n> depending on compression but that is data specific and thus something you\n> have to test yourself if you are operating at the margin of your system's\n> resources.\n>\n> David J.\n\n> Thank you, Kevin -- this is helpful.\n\nThank you David, too.\n\n> But it still leaves questions for me.\n\nStill...\n\nAlex Goncharov <[email protected]> wrote:\n\n>>> How do I decide, before starting a COPY data load, whether such a load\n>>> protection (\"complexity\") makes sense (\"is necessary\")?\n\nThis is *the* practical question.\n\nDavid G Johnston <[email protected]> wrote:\n\n> You should probably consider something like: http://pgloader.io/\n\nThis is not my question; I want to see if anybody can offer a\nmeaningful situation evaluation strategy for a potential using or not\nusing COPY for loading the \"big data\".\n\nIf nobody can, fine: it'll give me the reason to claim \"Nobody knows\".\n\n> Normal case, with normal COPY,\n\nThis is the case I am asking about: the COPY operation limitations for\nthe \"big data\": until what point a plain COPY can be used.\n\n> you load a bad file into an empty table, it fails, you truncate and\n> get better data for the next attempt.\n\nThis is not how many businesses operate.\n\n> How long that will take is system (IOPS/CPU) and data dependent.\n\n\"How long\" was not the question: my question was originally about the\nbehavior for a bad record at the end of a large data set submitted to\nCOPY; when it was stated that the data \"in process\" becomes an\ninvisible (until committed) part of the target table, it became\nobvious to me that the fundamental question 
has to be asked: \"How much\ncan fit there, in the temporary operational space (whatever it's\ncalled in PostgreSQL)?\" \"df /mount -> free\" or \"2^32\"?\n\n> The probability of failure is source dependent - and prior\n> experience plays a large role here as well.\n\nNot the question.\n\n> If you plan to load directly into a live table the wasted space from\n> a bad load could kill you so smaller partial loads are better - if\n> you can afford the implicit system inconsistency such a partial load\n> would cause.\n\nNot the question.\n\n> If you understand how the system works\n\nI don't, to the necessary extent, so I asked for an expert opinion :)\n\n> you should be able to evaluate the different pieces and come to a\n> conclusion as how best to proceed in a specific situation. No one\n> else on this list has the relevant information to make that\n> judgement call.\n\nWe'll see; too early to tell yet :)\n\n> If this is just asking about rules-of-thumb\n\nYes.\n\n> I'd say figure out how many records 100MB consumes and COMMIT after that\n> many records.\n\nPardon me: I am running COPY and know how many records are processed\nso far?.. (Hmm... can't be.)\n\n> 10,000 records is also a nice round number to pick - regardless of\n> the amount of MB consumed. Start there and tweak based upon\n> experience.\n\nYou are clearly suggesting to split the large data file into many\nsmall ones. To split very intelligently, on the record boundaries.\nAnd since this is very hard and would involve quite another, external\nprocessing machinery, I am trying to understand until what point this\nis safe not to do (subject to what factors.)\n\n> If you are not taking advantage of the \"unlogged load\" optimization,\n\nI don't see any way to control this for COPY only. 
Are you talking\nabout the 'postgresql.conf' settings?\n\n> If you only have 500k free in your archive directory that 1MB file\n> will pose a problem...though if you have 4TB of archive available\n> the 1TB would fit easily.\n\nSo the answer to the \"How much data can fit in the COPY storage\nareas?\" question is solely a \"df /mount/point\" thing?\n\nI.e. before initiating the COPY, I should:\n\n  ls -l DATA-FILE\n  df -m /server/db-cluster/pg_data-or-something\n\ncompare the two values and be assured that my COPY will reach the end\nof my DATA-FILE (whether it stumbles in the end or not) if the former\nvalue is meaningfully smaller than the latter?\n\nI would take this for the answer. (Let's see if there are other\nevaluation suggestions.)\n\n> Do you compress your WAL files before shipping them off to the\n> archive? How compressible is your data?\n\nTry to give me the upper limit evaluation strategy, when all the\ncompression and archive factors are working in my favor.\n\n> I'm sure people have decent rules-of-thumb here\n\nI would love to hear about them.\n\n> but in the end your specific environment and data, especially at the\n> TB scale, is going to be important; and is something that you will\n> only discover through testing.\n\n\"Don't malloc 2 GB on a system with 100 MB RAM\" is a meaningful rule\nof thumb, not requiring any testing. I am looking for similar simple\nguiding principles for COPY.\n\n>> > > Say, I am COPYing 100 TB of data and the bad records are close\n>> > > to the end of the feed -- how will this all error out?\n>> >\n>> > The rows will all be in the table, but not visible to any other\n>> > transaction.\n>>\n>> I see. How much data can I fit there while doing COPY? 
Not 1 TB?\n\n> You need the same amount of space that you would require if the file\n> imported to completion.\n\n> PostgreSQL is optimistic in this regard - it assumes you will commit\n> and so up until failure there is no difference between a good and\n> bad import.\n\nI can see it now, thanks.\n\n> I'm not sure what a reasonable formula would be, especially at the TB\n> scale,\n\nMake it 1 GB then :)\n\nCan I load 1 GB (uncompressed) via one COPY?\n\nWhen not -- when \"df\" says that there is less than 10 GB of free disk\nspace in the relevant file systems? Would that be all I need to know?\n\n> but roughly 2x the size of the imported (uncompressed) file would be\n> a good starting point (table + WAL). You likely would want many\n> multiples of this unless you are dealing with a one-off event.\n> Indexes and dead tuples in particular are likely to be involved.\n> You get some leeway depending on compression but that is data\n> specific and thus something you have to test yourself if you are\n> operating at the margin of your system's resources.\n\nI am willing to accept any factor -- 2x, 10x. I want to be certain the\nfactor of what over what, though. So far, only the \"df-free-space\" to\n\"data-file-size\" consideration has come up.\n\nThanks,\n\n-- Alex",
"msg_date": "Wed, 27 Aug 2014 01:02:03 -0400",
"msg_from": "Alex Goncharov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
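Alex's proposed pre-flight check above (compare `ls -l DATA-FILE` against `df` free space) combined with the thread's rough "2x for table + WAL" starting point can be sketched in a few lines of shell. This is a sanity check, not a guarantee: `DATA_FILE` and `PGDATA_FS` are placeholder assumptions to point at your real load file and the filesystem holding the cluster, and the 2x factor ignores indexes and archive space.

```shell
# Rough pre-flight disk check before a large COPY, per the thread's
# rule of thumb: require ~2x the uncompressed input size free
# (heap + WAL); add more headroom for indexes and dead tuples.
DATA_FILE=/tmp/copy_preflight_demo.dat   # assumption: your load file
PGDATA_FS=/                              # assumption: fs holding PGDATA

# small stand-in for the real load file so this sketch is self-contained
dd if=/dev/zero of="$DATA_FILE" bs=1024 count=10 2>/dev/null

data_kb=$(du -k "$DATA_FILE" | awk '{print $1}')
free_kb=$(df -Pk "$PGDATA_FS" | awk 'NR==2 {print $4}')
need_kb=$((data_kb * 2))   # 2x: table + WAL

if [ "$free_kb" -ge "$need_kb" ]; then
    echo "OK: ${free_kb} kB free, ~${need_kb} kB needed"
else
    echo "STOP: ${free_kb} kB free, ~${need_kb} kB needed"
fi
rm -f "$DATA_FILE"
```

The same comparison could of course be scripted against the WAL archive filesystem, which the thread notes needs its own budget.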
{
"msg_contents": "On Wed, Aug 27, 2014 at 1:02 AM, Alex Goncharov <\[email protected]> wrote:\n\n> > Thank you, Kevin -- this is helpful.\n>\n> Thank you David, too.\n>\n>\n> > But it still leaves questions for me.\n>\n> Still...\n>\n>\n> Alex Goncharov <[email protected]> wrote:\n>\n> >>> How do I decide, before starting a COPY data load, whether such a load\n> >>> protection (\"complexity\") makes sense (\"is necessary\")?\n>\n> This is *the* practical question.\n>\n>\n> David G Johnston <[email protected]> wrote:\n>\n> > You should probably consider something like: http://pgloader.io/\n>\n> This is not my question; I want to see if anybody can offer a\n> meaningful situation evaluation strategy for a potential using or not\n> using COPY for loading the \"big data\".\n>\n\nOK. Though I presume that given limitations to copy - of which the whole\n\"all-or-nothing\" is one - that pointing out more user-friendly API's would\nbe worthwhile.\n\n\n> If nobody can, fine: it'll give me the reason to claim \"Nobody knows\".\n>\n> > Normal case, with normal COPY,\n>\n> This is the case I am asking about: the COPY operation limitations for\n> the \"big data\": until what point a plain COPY can be used.\n>\n> > you load a bad file into an empty table, it fails, you truncate and\n> > get better data for the next attempt.\n>\n> This is not how many businesses operate.\n>\n>\nYet this is basically what you are asking about....\n\n\n\n> > How long that will take is system (IOPS/CPU) and data dependent.\n>\n> \"How long\", was not the question: my question was originally about the\n> behavior for a bad record at the end of a large data set submitted to\n> COPY; when it was stated that the data \"in process\" becomes an\n> invisible (until committed) part of the target table, it became\n> obvious to me that the fundamental question has to be asked: \"How much\n> can fit there, in the temporary operational space (whatever it's\n> called in PostgreSQL.)?\" \"df /mount -> free\" or 
\"2^32\"?\n>\n> > The probability of failure is source dependent - and prior\n> > experience plays a large role here as well.\n>\n> Not the question.\n>\n> > If you plan to load directly into a live table the wasted space from\n> > a bad load could kill you so smaller partial loads are better - if\n> > you can afford the implicit system inconsistency such a partial load\n> > would cause.\n>\n> Not the question.\n>\n\nThese were things to consider when deciding on whether it is worthwhile to\nsplit the large file into chunks.\n\n\n> > If you understand how the system works\n>\n> I don't, to the necessary extent, so I asked for an expert opinion :)\n>\n> > you should be able to evaluate the different pieces and come to a\n> > conclusion as how best to proceed in a specific situation. No one\n> > else on this list has the relevant information to make that\n> > judgement call.\n>\n> We'll see; too early to tell yet :)\n>\n> > If this is just asking about rules-of-thumb\n>\n> Yes.\n>\n> > I'd say figure out how many records 100MB consumes and COMMIT after that\n> > many records.\n>\n> Pardon me: I am running COPY and know how many records are processed\n> so far?.. (Hmm... can't be.)\n>\n\nTake you 1TB file, extract the first 100MB, count the number of\nrecords-separators. Commit after that many.\n\n\n>\n> > 10,000 records is also a nice round number to pick - regardless of\n> > the amount of MB consumed. Start there and tweak based upon\n> > experience.\n>\n> You are clearly suggesting to split the large data file into many\n> small ones. 
To split very intelligently, on the record boundaries.\n>\n> And since this is very hard and would involve quite another, external\n> processing machinery, I am trying to understand until what point this\n> is safe not to do (subject to what factors.)\n>\n>\nSee thoughts to consider from previous e-mail.\n\n\n> > If you are not taking advantage of the \"unlogged load\" optimization,\n>\n> I don't see any way to control this for COPY only. Are you talking\n> about the 'postgresql.conf' settings?\n>\n\nI am not sure if this is the same thing but I am pretty sure he is\nreferring to creating an unlogged table as the copy target - thus avoiding\nWAL.\n\n\n> > If you only have 500k free in your archive directory that 1MB file\n> > will pose a problem...though if you have 4TB of archive available\n> > the 1TB would fit easily.\n>\n> So the answer to the \"How much data can fit in the COPY storage\n> areas?\" question is solely a \"df /mount/point\" thing?\n>\n> I.e. before initiating the COPY, I should:\n>\n> ls -l DATA-FILE\n> df -m /server/db-cluster/pg_data-or-something\n>\n> compare the two values and be assured that my COPY will reach the end\n> of my DATA-FILE (whether is stumbles in the end or not) if the former\n> value is meaningfully smaller than the latter?\n>\n> I would take this for the answer. (Let's see if there are other\n> evaluation suggestions.)\n>\n\nThat should get the copy to succeed though whether you blow up your\narchives or slaves would not be addressed.\n\n\n> > Do you compress your WAL files before shipping them off to the\n> > archive? How compressible is your data?\n>\n> Try to give me the upper limit evaluation strategy, when all the\n> compression and archive factors are working in my favor.\n>\n\nAssume worse-case unless you know, from experimentation, what an\nappropriate compression factor would be. Keeping in mind I presume you\nexpect other simultaneous activity on the same server. 
If you then fall\ninto a marginal situation you can see whether reducing your estimates to\nhit you goal is worth the risk. Though you can incorporate that into your\noverall planned buffer as well.\n\n\n> > I'm sure people have decent rules-of-thumb here\n>\n> I would love to hear about them.\n>\n> > but in the end your specific environment and data, especially at the\n> > TB scale, is going to be important; and is something that you will\n> > only discover through testing.\n>\n> \"Don't malloc 2 GB on a system with 100 MB RAM\" is a meaningful rule\n> of thumb, not requiring any testing. I am looking for similar simple\n> guiding principles for COPY.\n>\n\n> >> > > Say, I am COPYing 100 TB of data and the bad records are close\n> >> > > to the end of the feed -- how will this all error out?\n> >> >\n> >> > The rows will all be in the table, but not visible to any other\n> >> > transaction.\n> >>\n> >> I see. How much data can I fit there while doing COPY? Not 1 TB?\n>\n> > You need the same amount of space that you would require if the file\n> > imported to completion.\n>\n> > PostgreSQL is optimistic in this regard - it assumes you will commit\n> > and so up until failure there is no difference between a good and\n> > bad import.\n>\n> I can see it now, thanks.\n>\n> > I'm not sure what a reasonable formula would be, especially at the TB\n> > scale,\n>\n> Make it 1 GB then :)\n>\n> Can I load 1 GB (uncompressed) via one COPY?\n>\n\nYou cannot load compressed data via COPY...\n\nWhile I have never done such scale myself my conclusion thus far is that\nwith enough hard drive space and, at least depending on the FK situation\nnoted, RAM you should be able to load any size file with a single copy\nwithout getting any system errors and/or crashing the server (postgres or\nOS).\n\nIn the simple case the question to split depends on the probability of a\ndata error and how much data (and time) you wish to lose should one occur.\n\n\n> When not -- when \"df\" says that 
there is less than 10 GB of free disk\n> space in the relevant file systems? Would that be all I need to know?\n>\n> > but roughly 2x the size of the imported (uncompressed) file would be\n> > a good starting point (table + WAL). You likely would want many\n> > multiples of this unless you are dealing with a one-off event.\n> > Indexes and dead tuples in particular are likely to be involved.\n> > You get some leeway depending on compression but that is data\n> > specific and thus something you have to test yourself if you are\n> > operating at the margin of your system's resources.\n>\n> I am willing to accept any factor -- 2x, 10x. I want to be certain the\n> factor of what over what, though. So far, only the \"df-free-space\" to\n> \"data-file-size\" consideration has come up.\n>\n>\nRAM did as well and that was not enumerated - other than it being optional\nin the case that no FKs are defined.\n\nI'm not sure what kind of overhead there is on WAL and data pages but there\nis going to be some. At scale hopefully compression would wash out the\noverhead so figuring 2x for any stored data seems like reasonable disk free\nspace required for the most basic scenario. 
Indexes count as data - you'd\nlikely want to consider at least one that operates as the primary key.\n\nKevin is a better source for this than I - mostly I'm drawing conclusions\nfrom what I read in his post.\n\nDavid J.",
"msg_date": "Wed, 27 Aug 2014 02:10:00 -0400",
"msg_from": "David Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
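David's batching suggestion ("extract the first 100MB, count the number of record separators, commit after that many") and Alex's worry about splitting on record boundaries can be reconciled for COPY's text format: embedded newlines in the data are backslash-escaped there (unlike CSV format), so one physical line is one record and `split -l` is safe. A minimal sketch, where the file names and the 10-records-per-chunk size are illustrative assumptions:

```shell
# Split a large text-format COPY file into record-aligned chunks.
BIG_FILE=/tmp/bigdata_demo.tsv       # assumption: your dump file
CHUNK_PREFIX=/tmp/bigdata_chunk_
rm -f "$BIG_FILE" ${CHUNK_PREFIX}*

# demo input: 25 fake tab-separated records
seq 1 25 | awk '{print $1 "\tpayload" $1}' > "$BIG_FILE"

# in practice, records_per_chunk = lines in the first 100MB sample:
#   head -c 100000000 "$BIG_FILE" | wc -l
split -l 10 "$BIG_FILE" "$CHUNK_PREFIX"

n_chunks=$(ls ${CHUNK_PREFIX}* | wc -l | tr -d ' ')
total_records=$(cat ${CHUNK_PREFIX}* | wc -l | tr -d ' ')
echo "chunks: $n_chunks, records: $total_records"

# each chunk would then be loaded in its own transaction, e.g. (not run):
#   psql -c "\copy tbl FROM '${CHUNK_PREFIX}aa'"
```

This keeps each failed batch's wasted work bounded by the chunk size, which is the trade-off Laurenz describes as a matter of taste.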
{
"msg_contents": "Alex Goncharov wrote:\r\n> Thank you, Kevin -- this is helpful.\r\n> \r\n> But it still leaves questions for me.\r\n\r\n>> Alex Goncharov <[email protected]> wrote:\r\n> \r\n>>> The whole thing is aborted then, and the good 99 records are not\r\n>>> making it into the target table.\r\n>>\r\n>> Right. This is one reason people often batch such copies or check\r\n>> the data very closely before copying in.\r\n> \r\n> How do I decide, before starting a COPY data load, whether such a load\r\n> protection (\"complexity\") makes sense (\"is necessary\")?\r\n> \r\n> Clearly not needed for 1 MB of data in a realistic environment.\r\n> \r\n> Clearly is needed for loading 1 TB in a realistic environment.\r\n> \r\n> To put it differently: If I COPY 1 TB of data, what criteria should I\r\n> use for choosing the size of the chunks to split the data into?\r\n> \r\n> For INSERT-loading, for the database client interfaces offering the\r\n> array mode, the performance difference between loading 100 or 1000\r\n> rows at a time is usually negligible if any. Therefore 100- and\r\n> 1000-row's array sizes are both reasonable choices.\r\n> \r\n> But what is a reasonable size for a COPY chunk? 
It can't even be\r\n> measured in rows.\r\n> \r\n> Note, that if you have a 1 TB record-formatted file to load, you can't\r\n> just split it in 1 MB chunks and feed them to COPY -- the file has to\r\n> be split on the record boundaries.\r\n> \r\n> So, splitting the data for COPY is not a trivial operation, and if\r\n> such splitting can be avoided, a reasonable operator will avoid it.\r\n> \r\n> But then again: when can it be avoided?\r\n\r\nYou don't need to split the data at all if you make sure that they are\r\ncorrect.\r\n\r\nIf you cannot be certain, and you want to avoid having to restart a huge\r\nload with corrected data, the batch size is pretty much a matter of taste:\r\nHow much overhead does it generate to split the data in N parts?\r\nHow much time are you ready to wait for (re)loading a single part?\r\n\r\nYou'll probably have to experiment to find a solution that fits you.\r\n\r\n>>> My question is: Where are these 99 records have been living, on\r\n>>> the database server, while the 100-th one hasn't come yet, and\r\n>>> the need to throw the previous data accumulation away has not\r\n>>> come yet?\r\n>>\r\n>> They will have been written into the table. They do not become\r\n>> visible to any other transaction until and unless the inserting\r\n>> transaction successfully commits. These slides may help:\r\n>>\r\n>> http://momjian.us/main/writings/pgsql/mvcc.pdf\r\n> \r\n> Yeah, I know about the MVCC model... The question is about the huge\r\n> data storage to be reserved without a commitment while the load is not\r\n> completed, about the size constrains in effect here.\r\n\r\nI don't understand that question.\r\n\r\nYou need the space anyway to complete the load.\r\nIf the load fails, you simply reclaim the space (VACUUM) and reuse it.\r\nThere is no extra storage needed.\r\n\r\n>>> There have to be some limits to the space and/or counts taken by\r\n>>> the new, uncommitted, data, while the COPY operation is still in\r\n>>> progress. 
What are they?\r\n>>\r\n>> Primarily disk space for the table.\r\n> \r\n> How can that be found? Is \"df /mount/point\" the deciding factor? Or\r\n> some 2^32 or 2^64 number?\r\n\r\nDisk space can be measure with \"df\".\r\n\r\n>> If you are not taking advantage of the \"unlogged load\" optimization,\r\n>> you will have written Write Ahead Log (WAL) records, too -- which\r\n>> (depending on your configuration) you may be archiving. In that\r\n>> case, you may need to be concerned about the archive space required.\r\n> \r\n> \"... may need to be concerned ...\" if what? Loading 1 MB? 1 GB? 1 TB?\r\n> \r\n> If I am always concerned, and check something before a COPY, what\r\n> should I be checking? What are the \"OK-to-proceed\" criteria?\r\n\r\nThat means \"you should consider\", not \"you should be worried\".\r\nUnless you are loading into a table created in the same transaction,\r\n\"redo\" information will be generated and stored in \"WAL files\", which\r\nend up in your WAL archive.\r\n\r\nThis needs extra storage, proportional to the storage necessary\r\nfor the data itself.\r\n\r\n>> If you have foreign keys defined for the table, you may get into\r\n>> trouble on the RAM used to track pending checks for those\r\n>> constraints. I would recommend adding any FKs after you are done\r\n>> with the big bulk load.\r\n> \r\n> I am curious about the simplest case where only the data storage is to\r\n> be worried about. (As an aside: the CHECK and NOT NULL constrains are\r\n> not a storage factor, right?)\r\n\r\nRight.\r\n\r\n>> PostgreSQL does *not* have a \"rollback log\" which will impose a\r\n>> limit.\r\n> \r\n> Something will though, right? What would that be? The available disk\r\n> space on a file system? 
(I would be surprised.)\r\n\r\nYou can find something on the limitations here:\r\nhttp://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F\r\n\r\n>>> Say, I am COPYing 100 TB of data and the bad records are close\r\n>>> to the end of the feed -- how will this all error out?\r\n>>\r\n>> The rows will all be in the table, but not visible to any other\r\n>> transaction.\r\n> \r\n> I see. How much data can I fit there while doing COPY? Not 1 TB?\r\n\r\nSure, why not?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Aug 2014 08:42:53 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
    "msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
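Laurenz's point that "redo" storage is roughly proportional to the data itself lends itself to a back-of-envelope calculation. The factors below are illustrative assumptions, not measurements: `wal_factor` supposes fully logged COPY writes about as much WAL as data, and `archive_compress` supposes the archived WAL compresses to 30%. As he notes, loading into a table created in the same transaction can avoid most of the WAL term (configuration permitting).

```shell
# Back-of-envelope storage estimate for a fully WAL-logged COPY.
data_mb=1024          # uncompressed input: the 1 GB example from the thread
wal_factor=100        # assumed WAL volume as a percent of the data (~1x)
archive_compress=30   # assumed archived-WAL size after compression, percent

wal_mb=$((data_mb * wal_factor / 100))
archive_mb=$((wal_mb * archive_compress / 100))
table_plus_wal=$((data_mb + wal_mb))

echo "table + WAL : ${table_plus_wal} MB"   # the ~2x rule of thumb
echo "WAL archive : ${archive_mb} MB"
```

Plugging in measured ratios from a trial load of a representative sample would make the estimate concrete for a given schema and data set.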
{
"msg_contents": "This might also help:\nhttp://www.postgresql.org/docs/9.1/static/populate.html\n\nBulk load tables from text files in almost all RDMS are \"log free\"\n(Postgres' COPY is one of them).\n\nThe reason is that the database doesn't need to waste resources by writing\nthe log because there's no risk of data loss. If the COPY operation fails,\nyour data will still live in the text files you're trying to bulk load from.\n\n\n\n2014-08-27 5:42 GMT-03:00 Albe Laurenz <[email protected]>:\n\n> Alex Goncharov wrote:\n> > Thank you, Kevin -- this is helpful.\n> >\n> > But it still leaves questions for me.\n>\n> >> Alex Goncharov <[email protected]> wrote:\n> >\n> >>> The whole thing is aborted then, and the good 99 records are not\n> >>> making it into the target table.\n> >>\n> >> Right. This is one reason people often batch such copies or check\n> >> the data very closely before copying in.\n> >\n> > How do I decide, before starting a COPY data load, whether such a load\n> > protection (\"complexity\") makes sense (\"is necessary\")?\n> >\n> > Clearly not needed for 1 MB of data in a realistic environment.\n> >\n> > Clearly is needed for loading 1 TB in a realistic environment.\n> >\n> > To put it differently: If I COPY 1 TB of data, what criteria should I\n> > use for choosing the size of the chunks to split the data into?\n> >\n> > For INSERT-loading, for the database client interfaces offering the\n> > array mode, the performance difference between loading 100 or 1000\n> > rows at a time is usually negligible if any. Therefore 100- and\n> > 1000-row's array sizes are both reasonable choices.\n> >\n> > But what is a reasonable size for a COPY chunk? 
It can't even be\n> > measured in rows.\n> >\n> > Note, that if you have a 1 TB record-formatted file to load, you can't\n> > just split it in 1 MB chunks and feed them to COPY -- the file has to\n> > be split on the record boundaries.\n> >\n> > So, splitting the data for COPY is not a trivial operation, and if\n> > such splitting can be avoided, a reasonable operator will avoid it.\n> >\n> > But then again: when can it be avoided?\n>\n> You don't need to split the data at all if you make sure that they are\n> correct.\n>\n> If you cannot be certain, and you want to avoid having to restart a huge\n> load with corrected data, the batch size is pretty much a matter of taste:\n> How much overhead does it generate to split the data in N parts?\n> How much time are you ready to wait for (re)loading a single part?\n>\n> You'll probably have to experiment to find a solution that fits you.\n>\n> >>> My question is: Where are these 99 records have been living, on\n> >>> the database server, while the 100-th one hasn't come yet, and\n> >>> the need to throw the previous data accumulation away has not\n> >>> come yet?\n> >>\n> >> They will have been written into the table. They do not become\n> >> visible to any other transaction until and unless the inserting\n> >> transaction successfully commits. These slides may help:\n> >>\n> >> http://momjian.us/main/writings/pgsql/mvcc.pdf\n> >\n> > Yeah, I know about the MVCC model... The question is about the huge\n> > data storage to be reserved without a commitment while the load is not\n> > completed, about the size constrains in effect here.\n>\n> I don't understand that question.\n>\n> You need the space anyway to complete the load.\n> If the load fails, you simply reclaim the space (VACUUM) and reuse it.\n> There is no extra storage needed.\n>\n> >>> There have to be some limits to the space and/or counts taken by\n> >>> the new, uncommitted, data, while the COPY operation is still in\n> >>> progress. 
What are they?\n> >>\n> >> Primarily disk space for the table.\n> >\n> > How can that be found? Is \"df /mount/point\" the deciding factor? Or\n> > some 2^32 or 2^64 number?\n>\n> Disk space can be measure with \"df\".\n>\n> >> If you are not taking advantage of the \"unlogged load\" optimization,\n> >> you will have written Write Ahead Log (WAL) records, too -- which\n> >> (depending on your configuration) you may be archiving. In that\n> >> case, you may need to be concerned about the archive space required.\n> >\n> > \"... may need to be concerned ...\" if what? Loading 1 MB? 1 GB? 1 TB?\n> >\n> > If I am always concerned, and check something before a COPY, what\n> > should I be checking? What are the \"OK-to-proceed\" criteria?\n>\n> That means \"you should consider\", not \"you should be worried\".\n> Unless you are loading into a table created in the same transaction,\n> \"redo\" information will be generated and stored in \"WAL files\", which\n> end up in your WAL archive.\n>\n> This needs extra storage, proportional to the storage necessary\n> for the data itself.\n>\n> >> If you have foreign keys defined for the table, you may get into\n> >> trouble on the RAM used to track pending checks for those\n> >> constraints. I would recommend adding any FKs after you are done\n> >> with the big bulk load.\n> >\n> > I am curious about the simplest case where only the data storage is to\n> > be worried about. (As an aside: the CHECK and NOT NULL constrains are\n> > not a storage factor, right?)\n>\n> Right.\n>\n> >> PostgreSQL does *not* have a \"rollback log\" which will impose a\n> >> limit.\n> >\n> > Something will though, right? What would that be? The available disk\n> > space on a file system? 
(I would be surprised.)\n>\n> You can find something on the limitations here:\n>\n> http://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F\n>\n> >>> Say, I am COPYing 100 TB of data and the bad records are close\n> >>> to the end of the feed -- how will this all error out?\n> >>\n> >> The rows will all be in the table, but not visible to any other\n> >> transaction.\n> >\n> > I see. How much data can I fit there while doing COPY? Not 1 TB?\n>\n> Sure, why not?\n>\n> Yours,\n> Laurenz Albe\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Wed, 27 Aug 2014 08:50:52 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
    "msg_contents": "[about loading large amounts of data]\r\n\r\nFelipe Santos wrote:\r\n> This might also help:\r\n> http://www.postgresql.org/docs/9.1/static/populate.html\r\n> \r\n> \r\n> Bulk load tables from text files in almost all RDMS are \"log free\" (Postgres' COPY is one of them).\r\n> \r\n> The reason is that the database doesn't need to waste resources by writing the log because there's no\r\n> risk of data loss. If the COPY operation fails, your data will still live in the text files you're\r\n> trying to bulk load from.\r\n\r\nThat is only true if the table was created in the same transaction as the COPY statement.\r\n\r\nOtherwise it could be that recovery starts after CREATE TABLE but before COPY, and\r\nit would have to recover the loaded data.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Aug 2014 12:41:50 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million\n records"
},
{
"msg_contents": "Alex Goncharov <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n\n>> The rows will all be in the table, but not visible to any other\n>> transaction.\n>\n> How much data can I fit there while doing COPY? Not 1 TB?\n\nAs has already been said, why not? This is not some special\nsection of the table -- the data is written to the table. Period.\nCommit or rollback just tells new transactions whether data flagged\nwith that transaction number is visible.\n\nNobody can tell you how much space that will take -- it depends on\nmany factors, including how many columns of what kind of data, how\ncompressible it is, and how it is indexed. But the point is, we\nare not talking about any separate space from what is needed to\nstore the data in the database.\n\nFWIW, I think the largest single COPY statement I ever ran was\ngenerated by pg_dump and piped directly to psql for a major release\nupgrade (before pg_upgrade was available), and it was somewhere in\nthe 2TB to 3TB range. It took a long time, but it \"just worked\".\nThat should be true for 10TB or 100TB, as long as you have sized\nthe machine correctly and are loading clean data. Whether you have\nthat covered, and how you want to \"hedge your bets\" based on your\ndegree of confidence in those things is a judgment call. When I'm \nin the position of needing to make such a call, I like to do some \ntests.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Aug 2014 05:59:29 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
},
{
"msg_contents": "Hello All,\n\nI learned a lot by inputs from all of you. To share one more thing about \njava_JDBC bypassing autocommit that I tried:\n(1) Read/save source data into f1.csv, f2.csv, ......\n(2) Copy/load into dest psql.DB\n CopyManager cm = null;\n FileReader fileReader = null;\n cm = new CopyManager((BaseConnection) conn_psql);\n fileReader = new FileReader(\"f1.csv\");\n cm.copyIn(\"COPY table_name FROM STDIN WITH DELIMITER '|'\", \nfileReader);\n fileReader.close();\n\nEmi\n\nOn 08/27/2014 08:59 AM, Kevin Grittner wrote:\n> Alex Goncharov <[email protected]> wrote:\n>> Kevin Grittner <[email protected]> wrote:\n>>> The rows will all be in the table, but not visible to any other\n>>> transaction.\n>> How much data can I fit there while doing COPY? Not 1 TB?\n> As has already been said, why not? This is not some special\n> section of the table -- the data is written to the table. Period.\n> Commit or rollback just tells new transactions whether data flagged\n> with that transaction number is visible.\n>\n> Nobody can tell you how much space that will take -- it depends on\n> many factors, including how many columns of what kind of data, how\n> compressible it is, and how it is indexed. But the point is, we\n> are not talking about any separate space from what is needed to\n> store the data in the database.\n>\n> FWIW, I think the largest single COPY statement I ever ran was\n> generated by pg_dump and piped directly to psql for a major release\n> upgrade (before pg_upgrade was available), and it was somewhere in\n> the 2TB to 3TB range. It took a long time, but it \"just worked\".\n> That should be true for 10TB or 100TB, as long as you have sized\n> the machine correctly and are loading clean data. Whether you have\n> that covered, and how you want to \"hedge your bets\" based on your\n> degree of confidence in those things is a judgment call. 
When I'm\n> in the position of needing to make such a call, I like to do some\n> tests.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Aug 2014 10:49:08 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autocommit (true/false) for more than 1 million records"
}
] |
[
{
    "msg_contents": "hi, recently i change the hardware of my database 32 cores up to 64 \ncores and 128GB Ram, but the performance is the same. Perhaps i have to \nchange any parameter in the postgresql.conf?.\n\nThanks by your help\n\n-- \nAtentamente,\n\n\nJEISON BEDOYA DELGADO\n.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Aug 2014 13:47:46 -0500",
"msg_from": "Jeison Bedoya Delgado <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuning postgresql 9.3.5 and multiple cores"
},
{
    "msg_contents": "On Monday, August 25, 2014, Jeison Bedoya Delgado <[email protected]>\nwrote:\n\n> hi, recently i change the hardware of my database 32 cores up to 64 cores\n> and 128GB Ram, but the performance is the same. Perhaps i have to change\n> any parameter in the postgresql.conf?.\n>\n\n\nPostgreSQL does not (yet) automatically parallelize queries.\n\nUnless you have more than 32 queries trying to run at the same time,\nincreasing the number of cores from 32 to 64 is unlikely to be useful.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 25 Aug 2014 18:51:35 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning postgresql 9.3.5 and multiple cores"
},
{
    "msg_contents": "Changing to a higher rate CPU would be more helpful if you run less than 32\nqueries at a time.\n\n\nOn Tue, Aug 26, 2014 at 8:51 AM, Jeff Janes <[email protected]> wrote:\n\n> On Monday, August 25, 2014, Jeison Bedoya Delgado <\n> [email protected]> wrote:\n>\n>> hi, recently i change the hardware of my database 32 cores up to 64 cores\n>> and 128GB Ram, but the performance is the same. Perhaps i have to change\n>> any parameter in the postgresql.conf?.\n>>\n>\n>\n> PostgreSQL does not (yet) automatically parallelize queries.\n>\n> Unless you have more than 32 queries trying to run at the same time,\n> increasing the number of cores from 32 to 64 is unlikely to be useful.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \nRegards,\n\nSoni Maula Harriz\n",
"msg_date": "Tue, 26 Aug 2014 16:50:15 +0700",
"msg_from": "Soni M <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning postgresql 9.3.5 and multiple cores"
},
{
"msg_contents": "On 26/08/14 06:47, Jeison Bedoya Delgado wrote:\n> hi, recently i change the hardware of my database 32 cores up to 64\n> cores and 128GB Ram, but the performance is the same. Perhaps i have to\n> change any parameter in the postgresql.conf?.\n>\n\nIn addition to the points that others have made, even if you do have > \n32 active sessions it it not clear that 64 cores will automagically get \nyou twice (or in fact any) better performance than 32. We are seeing \nexactly this effect with a (60 core) machine that gets pretty much the \nsame performance as an older generation 32 core one.\n\nInterestingly while this is *likely* a software issue - it is not \nimmediately obvious where it lies - we tested Postgres (9.3/9.4/9.5) and \nMysql (5.5/5.6/5.7) *all* of which exhibited the the lack of improvement \nwith more cores.\n\nProfiling suggested numa effects - but trying to eliminate these seemed \nto simply throw up new factors to inhibit performance. My *guess* (and \nit is a guess) is that we are seeing 2 (perhaps more) performance \nbottlenecks very close to each other: numa and spinlock contention at least.\n\nRegards\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Aug 2014 22:10:12 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning postgresql 9.3.5 and multiple cores"
}
] |
[
{
    "msg_contents": "Hi all\n\nI have the following table with 10+ million records:\n\ncreate table ddetail (\nddet_id serial,\nco_id integer,\nclient_id integer,\ndoc_no varchar,\nline_id integer,\nbatch_no integer,\namount NUMERIC , \n...,\nconstraint PRIMARY KEY ( co_id , client_id , doc_no , line_id, ddet_id )\n) ;\n\nWhen doing the following query on this table, performance is really slow:\n\nSELECT co_id , client_id , doc_no , line_id , sum( amount )\nFROM ddetail \nGROUP BY co_id , client_id , doc_no , line_id \n\nIt seems as if the planner is not using the PRIMARY KEY as index which was\nmy assumption.\n\nCan somebody please confirm whether aggregate functions such as GROUP BY\nshould use indexes ? \n\n\nThanks in advance\n\ngmb \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-issue-index-not-used-on-GROUP-BY-tp5816702.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Aug 2014 01:50:40 -0700 (PDT)",
"msg_from": "gmb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue: index not used on GROUP BY..."
},
{
    "msg_contents": "2014-08-28 11:50 GMT+03:00 gmb <[email protected]>:\n\n> It seems as if the planner is not using the PRIMARY KEY as index which was\n> my assumption.\n>\n\nCan you send `EXPLAIN (analyze, buffers)` for your query instead?\nIt'll show exactly what's going on.\n\n\n-- \nVictor Y. Yegorov",
"msg_date": "Thu, 28 Aug 2014 11:57:09 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
{
"msg_contents": "\n\n> Can you send `EXPLAIN (analyze, buffers)` for your query instead?\n> It'll show exactly what's going on.\n\nGroupAggregate (cost=303425.31..339014.43 rows=136882 width=48) (actual\ntime=4708.181..6688.699 rows=287268 loops=1)\n Buffers: shared read=23899, temp read=30974 written=30974\n -> Sort (cost=303425.31..306847.34 rows=1368812 width=48) (actual\ntime=4708.170..5319.429 rows=1368744 loops=1)\n Sort Key: co_id, client_id, doc_no, \n Sort Method: external merge Disk: 80304kB\n Buffers: shared read=23899, temp read=30974 written=30974\n -> Seq Scan on ddetail (cost=0.00..37587.12 rows=1368812 width=48)\n(actual time=0.122..492.964 rows=1368744 loops=1)\n Buffers: shared read=23899\nTotal runtime: 6708.244 ms\n\n\nMy initial attempt was this (this is what I actually need):\n\nSELECT co_id , client_id , doc_no , line_id , batch_no , sum( amount )\nFROM ddetail\nGROUP BY co_id , client_id , doc_no , line_id , batch_no ;\n\nbut I removed column batch_no from the query because I thought this was the\ncause of the problem ( batch_no is not part of my PK ).\n\n\nThanks \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-issue-index-not-used-on-GROUP-BY-tp5816702p5816706.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Aug 2014 02:08:59 -0700 (PDT)",
"msg_from": "gmb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
{
    "msg_contents": "2014-08-28 12:08 GMT+03:00 gmb <[email protected]>:\n\n> GroupAggregate (cost=303425.31..339014.43 rows=136882 width=48) (actual\n> time=4708.181..6688.699 rows=287268 loops=1)\n> Buffers: shared read=23899, temp read=30974 written=30974\n> -> Sort (cost=303425.31..306847.34 rows=1368812 width=48) (actual\n> time=4708.170..5319.429 rows=1368744 loops=1)\n> Sort Key: co_id, client_id, doc_no,\n> Sort Method: external merge Disk: 80304kB\n> Buffers: shared read=23899, temp read=30974 written=30974\n> -> Seq Scan on ddetail (cost=0.00..37587.12 rows=1368812\n> width=48)\n> (actual time=0.122..492.964 rows=1368744 loops=1)\n> Buffers: shared read=23899\n> Total runtime: 6708.244 ms\n>\n>\n> My initial attempt was this (this is what I actually need):\n>\n> SELECT co_id , client_id , doc_no , line_id , batch_no , sum( amount )\n> FROM ddetail\n> GROUP BY co_id , client_id , doc_no , line_id , batch_no ;\n>\n\nI think index will be of no help here, as (1) you're reading whole table\nanyway and (2) `amount` is not part of your index.\n\nTry to avoid disk-based sort by increasing `work_mem` for your session, I\nthink value in the range 120MB-150MB should work:\n\n SET work_mem TO '150MB';\n\nCheck `EXPLAIN` output after the change.\n\n-- \nVictor Y. Yegorov",
"msg_date": "Thu, 28 Aug 2014 12:28:07 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
{
"msg_contents": "On Thu, Aug 28, 2014 at 11:50 AM, gmb <[email protected]> wrote:\n> Can somebody please confirm whether aggregate functions such as GROUP BY\n> should use indexes ?\n\nYes, if the planner deems it faster than other approaches. It can make\nwrong choices for many reasons, but usually when your planner tunables\nlike random_page_cost, effective_cache_size aren't set appropriately.\n\nThere's some advice here:\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nJust for the purpose of testing, you could try \"set enable_sort=false\"\nin your session and see if that makes it faster.\n\nOn Thu, Aug 28, 2014 at 12:08 PM, gmb <[email protected]> wrote:\n> Sort Key: co_id, client_id, doc_no,\n\nSomething went missing from this line...\n\n> Sort Method: external merge Disk: 80304kB\n\nDepends on your hardware and workloads, but more work_mem may also\nimprove queries to avoid sorts and hashes needing to use disk. But\nbeware, setting it too high may result in your server running out of\nmemory.\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Aug 2014 12:29:46 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
{
"msg_contents": "\nThanks for these suggestions\n\nUnfortunately , I don't have a lot of memory available ( 65 connections ,\nwork_mem = 64MB in pg conf ).\n\n>> I think index will be of no help here, as (1) you're reading whole table\n>> anyway and (2) `amount` is not part of your index.\n\nI did not think that the the field being used in the agg function should\nalso be part of the index. \nI'll try this and check the result. \n\nMy problem is that dropping / adding indexes on this table takes a LOT of\ntime, so I'm stuck with doing the tests using the indexes as is, or doing\nthe tests on a smaller dataset.\n\nOn the smaller dataset ( 1.5 mill records on that table ) the planner did\nnot take the index into account, even when I omit the amount column:\n\n\nCREATE INDEX ix_1\n ON ddetail\n USING btree\n (co_id , client_id , doc_no , line_id , batch_no);\n\nSELECT co_id , client_id , doc_no , line_id , batch_no \nFROM ddetail\nGROUP BY co_id , client_id , doc_no , line_id , batch_no ;\n\nHashAggregate (cost=54695.74..56064.49 rows=136875 width=22)\n -> Seq Scan on debfdetail (cost=0.00..37586.44 rows=1368744 width=22)\n\nstill does a seq scan instead of the index scan.\nI guess it is possible that on the 1.4 million records, it is faster to do a\nseq scan ? \nSo I guess I'll have to try and do this on the 10 mill table and check the\nresult there.\n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-issue-index-not-used-on-GROUP-BY-tp5816702p5816715.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Aug 2014 04:29:19 -0700 (PDT)",
"msg_from": "gmb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
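[Editor's note: a minimal sketch of the covering-index idea raised above, using the table and column names from this thread; the index name is hypothetical, and whether the planner actually picks the index depends on statistics and visibility-map coverage.]

```sql
-- Sketch only: ddetail and its columns come from the thread; ix_ddetail_grp_amount
-- is a made-up name. With `amount` appended to the key columns, PostgreSQL 9.2+
-- can in principle answer the aggregate with an index-only scan.
CREATE INDEX ix_ddetail_grp_amount
    ON ddetail (co_id, client_id, doc_no, line_id, batch_no, amount);

SELECT co_id, client_id, doc_no, line_id, batch_no, sum(amount)
FROM ddetail
GROUP BY co_id, client_id, doc_no, line_id, batch_no;
```

Note that the planner may still prefer a sequential scan when it expects to read most of the table, so comparing EXPLAIN ANALYZE output before and after is the only reliable check.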
{
"msg_contents": "2014-08-28 14:29 GMT+03:00 gmb <[email protected]>:\n\n> Unfortunately , I don't have a lot of memory available ( 65 connections ,\n> work_mem = 64MB in pg conf ).\n>\n\nYou don't have to change cluster-wide settings here.\n\nYou can issue `SET` command from your client right before running your\nquery, only your session will be affected.\n\n\n-- \nVictor Y. Yegorov",
"msg_date": "Thu, 28 Aug 2014 15:25:56 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
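[Editor's note: the per-session approach suggested above, sketched concretely; the `256MB` value is illustrative only.]

```sql
-- Raise work_mem for this session only; postgresql.conf is untouched.
SET work_mem = '256MB';

SELECT co_id, client_id, doc_no, line_id, batch_no, sum(amount)
FROM ddetail
GROUP BY co_id, client_id, doc_no, line_id, batch_no;

RESET work_mem;  -- back to the configured default

-- Or scope the override to a single transaction with SET LOCAL:
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the report query here ...
COMMIT;
```

`SET LOCAL` reverts automatically at COMMIT or ROLLBACK, which is safer when the session is pooled.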
{
"msg_contents": "On 08/28/2014 01:50 AM, gmb wrote:\n> Can somebody please confirm whether aggregate functions such as GROUP BY\n> should use indexes ? \n\nSometimes. In your case, the index has one more column than the GROUP\nBY, which makes it less likely that Postgres will use it (since\ndepending on the cardinality of ddet_id, it might actually be slower to\nuse the index).\n\nIn addition, other folks on this thread have already pointed out the\nmemory settings issues to you.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 28 Aug 2014 13:48:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
},
{
"msg_contents": "Thanks for the feedback, everybody.\nI spent a couple of days trying to optimise this; \nAs mentioned , the increased memory is not an option for me, as this query\nis part of a report that can be run by any user on an ad hoc basis.\nAllocating the required memory to any session on demand is not feasible in\nthis environment.\n\nIn the end , it seems to me that a more sustainable solution will be to\nintroduce an additional table to carry the summarized values and lookup on\nthat table in this type of scenario.\n\nRegards\n",
"msg_date": "Wed, 3 Sep 2014 12:50:26 -0700 (PDT)",
"msg_from": "gmb <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue: index not used on GROUP BY..."
}
] |
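[Editor's note: a sketch of the pre-summarized lookup table the thread ends on; the table name and refresh strategy are hypothetical, and keeping it current as ddetail changes is left to the application (triggers or scheduled refresh).]

```sql
-- Hypothetical summary table carrying the pre-aggregated values.
CREATE TABLE ddetail_summary AS
SELECT co_id, client_id, doc_no, line_id, batch_no, sum(amount) AS amount
FROM ddetail
GROUP BY co_id, client_id, doc_no, line_id, batch_no;

CREATE INDEX ON ddetail_summary (co_id, client_id, doc_no, line_id, batch_no);

-- Simplest refresh: rebuild periodically. On PostgreSQL 9.3+ a
-- MATERIALIZED VIEW plus REFRESH MATERIALIZED VIEW serves the same purpose.
TRUNCATE ddetail_summary;
INSERT INTO ddetail_summary
SELECT co_id, client_id, doc_no, line_id, batch_no, sum(amount)
FROM ddetail
GROUP BY co_id, client_id, doc_no, line_id, batch_no;
```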
[
{
"msg_contents": "Any suggestions on a query rewrite to speed this poor performing query up.\n\nwork_mem=164MB\n\nThanks\n\nexplain (analyze on, buffers on) select * from SARS_IMPACT_REPORT this_\n where this_.model_uid=1\n and this_.source_date_time between '2014-08-08 19:21:08.212'::timestamp without time zone and '2014-08-09 03:59:19.388'::timestamp without time zone\n and (ST_within (this_.clone_location,'010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F')\n or ST_touches (this_.clone_location,'010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'))\n order by source_date_time asc, source_uid asc, clone_report_uid\n limit 3000;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5574.70..5574.70 rows=1 width=141) (actual time=33998.629..33999.142 rows=3000 loops=1)\n Buffers: shared hit=6 read=39347\n -> Sort (cost=5574.44..5574.70 rows=104 width=141) (actual time=33997.358..33998.246 rows=15000 loops=1)\n Sort Key: this_.source_date_time, this_.source_uid, this_.clone_report_uid\n Sort Method: top-N heapsort Memory: 4753kB\n Buffers: shared hit=6 read=39347\n -> Append (cost=0.00..5570.95 rows=104 width=141) (actual time=8.302..33417.186 rows=710202 loops=1)\n Buffers: shared read=39347\n -> Seq Scan on SARS_IMPACT_REPORT this_ (cost=0.00..0.00 rows=1 width=648) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((source_date_time >= '2014-08-08 19:21:08.212'::timestamp without time zone) AND (source_date_time <= '2014-08-09 03:59:19.388'::timestamp without time zone) AND (clone_location 
&& '01030000000100000005000000\n6787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry) AND (model_uid = 1) AND (_st_contains('010300000001000000\n050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry, clone_location) OR _st_touches(clone_location, '0\n10300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry)))\n -> Seq Scan on SARS_IMPACT_REPORT_overflow this__1 (cost=0.00..72.40 rows=1 width=648) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((source_date_time >= '2014-08-08 19:21:08.212'::timestamp without time zone) AND (source_date_time <= '2014-08-09 03:59:19.388'::timestamp without time zone) AND (clone_location && '01030000000100000005000000\n6787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry) AND (model_uid = 1) AND (_st_contains('010300000001000000\n050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry, clone_location) OR _st_touches(clone_location, '0\n10300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry)))\n -> Index Scan using idx_clone_report_query_y201408 on SARS_IMPACT_REPORT_y2014m08 this__2 (cost=0.57..5570.95 rows=103 width=136) (actual time=8.300..33308.118 rows=710202 loops=1)\n Index Cond: ((model_uid = 1::bigint) AND (source_date_time >= '2014-08-08 19:21:08.212'::timestamp 
without time zone) AND (source_date_time <= '2014-08-09 03:59:19.388'::timestamp without time zone))\n Filter: ((clone_location && '010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry)\n AND _st_contains('010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry, clone_location)\n OR _st_touches (clone_location, '010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry)))\n Rows Removed by Filter: 912821\n Buffers: shared read=39347\n Total runtime: 34000.160 ms <-- Unacceptable runtime\n\nIndexes:\n \"idx_clone_report_y2014m08_pkey\" PRIMARY KEY, btree (clone_report_uid)\n \"idx_clone_report_query_y201408\" btree (model_uid, source_date_time)\n \"sidx_clone_report_y2014m08\" gist (clone_location)",
"msg_date": "Fri, 29 Aug 2014 04:28:58 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow running query PostgreSQL 9.3.4"
},
{
"msg_contents": "2014-08-29 7:28 GMT+03:00 Burgess, Freddie <[email protected]>:\n\n> -> Index Scan using idx_clone_report_query_y201408 on\n> SARS_IMPACT_REPORT_y2014m08 this__2 (cost=0.57..5570.95 rows=103\n> width=136) (actual time=8.300..33308.118 rows=710202 loops=1)\n> Index Cond: ((model_uid = 1::bigint) AND\n> (source_date_time >= '2014-08-08 19:21:08.212'::timestamp without time\n> zone) AND (source_date_time <= '2014-08-09 03:59:19.388'::timestamp without\n> time zone))\n> Filter: ((clone_location &&\n> '010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry)\n>\n> AND\n> _st_contains('010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry,\n> clone_location)\n> OR _st_touches (clone_location,\n> '010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F'::geometry)))\n> Rows Removed by Filter: 912821\n>\n\n\nFirst, I think your stats are off, note this line:\n\n-> Index Scan using idx_clone_report_query_y201408 on\nSARS_IMPACT_REPORT_y2014m08 this__2 (cost=0.57..5570.95 >>>rows=103<<<\nwidth=136) (actual time=8.300..33308.118 >>>rows=710202<<< loops=1)\n\nReal rows returned are 3 orders of magnitude higher than expected.\n\nAlso, given almost a million rows were removed by the filter, it'd be worth\ntrying to select on `clone_location` first.\n\n\nCould you do the following:\n\nVACUUM ANALYZE sars_impact_report_y2014m08;\nVACUUM ANALYZE sars_impact_report;\nexplain (analyze, buffers)\nWITH clone AS (\n SELECT * FROM SARS_IMPACT_REPORT\n 
WHERE\nST_within(clone_location,'010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F')\n OR ST_touches\n(clone_location,'010300000001000000050000006787D1E89889F5BFFBA6BE01196EE53F1AF703F9588EF5BF6D9AC3A07D5FE53F0C2792E0B193F5BF6D9AC3A07D5FE53FC096C4F07198F5BFFBA6BE01196EE53F6787D1E89889F5BFFBA6BE01196EE53F')\n)\nselect * from clone this_\n where this_.model_uid=1\n and this_.source_date_time between '2014-08-08 19:21:08.212'::timestamp\nwithout time zone and '2014-08-09 03:59:19.388'::timestamp without time\nzone\n order by source_date_time asc, source_uid asc, clone_report_uid\n limit 3000;\n\n\n-- \nVictor Y. Yegorov",
"msg_date": "Fri, 29 Aug 2014 09:38:53 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow running query PostgreSQL 9.3.4"
}
] |
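[Editor's note: the estimate-vs-actual gap diagnosed above (103 estimated rows vs. 710,202 actual) can also be attacked by raising the statistics target on the filtered columns of the partition before re-analyzing; the `1000` value below is illustrative, not from the thread.]

```sql
-- Illustrative values; 100 is the default default_statistics_target.
ALTER TABLE sars_impact_report_y2014m08
    ALTER COLUMN source_date_time SET STATISTICS 1000;
ALTER TABLE sars_impact_report_y2014m08
    ALTER COLUMN model_uid SET STATISTICS 1000;
ANALYZE sars_impact_report_y2014m08;
-- Then re-run EXPLAIN ANALYZE and compare estimated vs. actual row counts.
```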
[
{
"msg_contents": "Hi ,\n\nI'm tweaking table layout to get better performance of query. One table doesn't use hstore but expands all metrics of cha_type to different rows. The other table has hstore for metrics column as cha_type->metrics so it has fewer records than the first one.\n\nI would expect the query on the second table to have better performance than the first one. However, it's not the case at all. I'm wondering if there's something wrong with my execution plan? With the hstore table, the optimizer has totally wrong estimation on row counts at hash aggregate stage and it takes 34 seconds on hash-join, 25 seconds on hash-aggregate, 10 seconds on sort. However, with non-hstore table, it takes 17 seconds on hash join, 18 seconds on hashaggregate and 2 seconds on sort.\n\nCan someone help me to explain why this is happening? And is there a way to fine-tune the query?\n\nTable structure\n\ndev=# \\d+ weekly_non_hstore\n Table \"test.weekly_non_hstore\"\n Column | Type | Modifiers | Storage | Stats target | Description\n----------+------------------------+-----------+----------+--------------+-------------\ndate | date | | plain | |\nref_id | character varying(256) | | extended | |\ncha_typel | text | | extended | |\nvisits | double precision | | plain | |\npages | double precision | | plain | |\nduration | double precision | | plain | |\nHas OIDs: no\nTablespace: \"tbs_data\"\n\ndev=# \\d+ weekly_hstore\n Table \"test.weekly_hstore\"\n Column | Type | Modifiers | Storage | Stats target | Description\n----------+------------------------+-----------+----------+--------------+-------------\ndate | date | | plain | |\nref_id | character varying(256) | | extended | |\nvisits | hstore | | extended | |\npages | hstore | | extended | |\nduration | hstore | | extended | |\nHas OIDs: no\nTablespace: \"tbs_data\"\n\ndev=# select count(*) from weekly_non_hstore;\n count\n----------\n71818882\n(1 row)\n\n\ndev=# select count(*) from weekly_hstore;\n 
count\n---------\n1292314\n(1 row)\n\n\nQuery\ndev=# explain analyze select cha_type,sum(visits) from weekly_non_hstore a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits) desc;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=3674073.37..3674431.16 rows=143115 width=27) (actual time=47520.637..47969.658 rows=3639539 loops=1)\n Sort Key: (sum(a.visits))\n Sort Method: quicksort Memory: 391723kB\n -> HashAggregate (cost=3660386.70..3661817.85 rows=143115 width=27) (actual time=43655.637..44989.202 rows=3639539 loops=1)\n -> Hash Join (cost=12029.58..3301286.54 rows=71820032 width=27) (actual time=209.789..26477.652 rows=36962761 loops=1)\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.32 rows=71820032 width=75) (actual time=0.053..8858.594 rows=71818882 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=209.189..209.189 rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.014..64.695 rows=371759 loops=1)\nTotal runtime: 48172.405 ms\n(11 rows)\n\nTime: 48173.569 ms\n\ndev=# explain analyze select cha_type, sum(visits) from (select (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type order by sum(visits) desc;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=7599039.89..7599040.39 rows=200 width=64) (actual time=70424.561..70986.202 rows=3639539 loops=1)\n Sort Key: (sum((((each(a.visits)).value)::numeric)))\n Sort Method: quicksort Memory: 394779kB\n -> HashAggregate (cost=7599030.24..7599032.24 rows=200 
width=64) (actual time=59267.120..60502.647 rows=3639539 loops=1)\n -> Hash Join (cost=12029.58..2022645.24 rows=371759000 width=184) (actual time=186.140..34619.879 rows=36962761 loops=1)\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=230) (actual time=0.107..416.741 rows=1292314 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=185.742..185.742 rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.016..62.123 rows=371759 loops=1)\nTotal runtime: 71177.675 ms",
"msg_date": "Mon, 1 Sep 2014 06:10:35 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query performance with hstore vs. non-hstore"
},
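[Editor's note: one point worth checking in the hstore query itself, not raised in the thread: writing `(each(visits)).key` and `(each(visits)).value` in the select list evaluates the set-returning function `each()` twice per row. If the server is PostgreSQL 9.3 or later, the expansion can be done once with LATERAL; a sketch using the thread's table names:]

```sql
-- Expand each hstore only once via LATERAL instead of calling
-- each(visits) twice in the select list (sketch, PostgreSQL 9.3+).
EXPLAIN ANALYZE
SELECT kv.key AS cha_type, sum(kv.value::numeric) AS visits
FROM weekly_hstore a
JOIN seg1 b ON a.ref_id = b.ref_id
CROSS JOIN LATERAL each(a.visits) AS kv(key, value)
GROUP BY kv.key
ORDER BY visits DESC;
```

This does not remove the per-row unpacking cost Pavel describes below, but it avoids paying it twice.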
{
"msg_contents": "Hi\n\nIn this use case hstore should not help: there is relatively high overhead\nrelated to unpacking hstore, so the classic schema is better.\n\nHstore should not replace a well-normalized schema; it is a replacement\nfor semi-normalized structures such as EAV.\n\nHstore can profit from TOAST (compression, less per-row system overhead),\nbut that advantage only kicks in from a certain data length. You should see\nthis benefit in the table size: when the table with hstore is smaller than\nthe one without, there is a benefit from hstore. The last benefit of hstore\nis indexes over (key, value) tuples, but you don't use those here.\n\nRegards\n\nPavel\n\n\n2014-09-01 8:10 GMT+02:00 Huang, Suya <[email protected]>:\n\n> Hi ,\n>\n>\n>\n> I’m tweaking table layout to get better performance of query. One table\n> doesn’t use hstore but expand all metrics of cha_type to different rows.\n> The other table has hstore for metrics column as cha_type->metrics so it\n> has less records than the first one.\n>\n>\n>\n> I would be expecting the query on seconds table has better performance\n> than the first one. However, it’s not the case at all. I’m wondering if\n> there’s something wrong with my execution plan? With the hstore table, the\n> optimizer has totally wrong estimation on row counts at hash aggregate\n> stage and it takes 34 seconds on hash-join,25 seconds on hash-aggregate, 10\n> seconds on sort. However, with non-hstore table, it takes 17 seconds on\n> hash join, 18 seconds on hashaggregate and 2 seconds on sort.\n>\n>\n>\n> Can someone help me to explain why this is happening? 
And is there a way\n> to fine-tune the query?\n>\n>\n>\n> Table structure\n>\n>\n>\n> dev=# \\d+ weekly_non_hstore\n>\n> Table \"test.weekly_non_hstore\"\n>\n> Column | Type | Modifiers | Storage | Stats target |\n> Description\n>\n>\n> ----------+------------------------+-----------+----------+--------------+-------------\n>\n> date | date | | plain | |\n>\n> ref_id | character varying(256) | | extended | |\n>\n> cha_typel | text | | extended | |\n>\n> visits | double precision | | plain | |\n>\n> pages | double precision | | plain | |\n>\n> duration | double precision | | plain | |\n>\n> Has OIDs: no\n>\n> Tablespace: \"tbs_data\"\n>\n>\n>\n> dev=# \\d+ weekly_hstore\n>\n> Table \"test.weekly_hstore\"\n>\n> Column | Type | Modifiers | Storage | Stats target |\n> Description\n>\n>\n> ----------+------------------------+-----------+----------+--------------+-------------\n>\n> date | date | | plain | |\n>\n> ref_id | character varying(256) | | extended | |\n>\n> visits | hstore | | extended | |\n>\n> pages | hstore | | extended | |\n>\n> duration | hstore | | extended | |\n>\n> Has OIDs: no\n>\n> Tablespace: \"tbs_data\"\n>\n>\n>\n> dev=# select count(*) from weekly_non_hstore;\n>\n> count\n>\n> ----------\n>\n> 71818882\n>\n> (1 row)\n>\n>\n>\n>\n>\n> dev=# select count(*) from weekly_hstore;\n>\n> count\n>\n> ---------\n>\n> 1292314\n>\n> (1 row)\n>\n>\n>\n>\n>\n> Query\n>\n> dev=# explain analyze select cha_type,sum(visits) from weekly_non_hstore\n> a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits)\n> desc;\n>\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Sort (cost=3674073.37..3674431.16 rows=143115 width=27) (actual\n> time=47520.637..47969.658 rows=3639539 loops=1)\n>\n> Sort Key: (sum(a.visits))\n>\n> Sort Method: quicksort Memory: 391723kB\n>\n> -> HashAggregate 
(cost=3660386.70..3661817.85 rows=143115 width=27)\n> (actual time=43655.637..44989.202 rows=3639539 loops=1)\n>\n> -> Hash Join (cost=12029.58..3301286.54 rows=71820032 width=27)\n> (actual time=209.789..26477.652 rows=36962761 loops=1)\n>\n> Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n>\n> -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.32\n> rows=71820032 width=75) (actual time=0.053..8858.594 rows=71818882 loops=1)\n>\n> -> Hash (cost=7382.59..7382.59 rows=371759 width=47)\n> (actual time=209.189..209.189 rows=371759 loops=1)\n>\n> Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n>\n> -> Seq Scan on seg1 b (cost=0.00..7382.59\n> rows=371759 width=47) (actual time=0.014..64.695 rows=371759 loops=1)\n>\n> Total runtime: 48172.405 ms\n>\n> (11 rows)\n>\n>\n>\n> Time: 48173.569 ms\n>\n>\n>\n> dev=# explain analyze select cha_type, sum(visits) from (select\n> (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from\n> weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type\n> order by sum(visits) desc;\n>\n> QUERY\n> PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Sort (cost=7599039.89..7599040.39 rows=200 width=64) (actual\n> time=70424.561..70986.202 rows=3639539 loops=1)\n>\n> Sort Key: (sum((((each(a.visits)).value)::numeric)))\n>\n> Sort Method: quicksort Memory: 394779kB\n>\n> -> HashAggregate (cost=7599030.24..7599032.24 rows=200 width=64)\n> (actual time=59267.120..60502.647 rows=3639539 loops=1)\n>\n> -> Hash Join (cost=12029.58..2022645.24 rows=371759000\n> width=184) (actual time=186.140..34619.879 rows=36962761 loops=1)\n>\n> Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n>\n> -> Seq Scan on weekly_hstore a (cost=0.00..133321.14\n> rows=1292314 width=230) (actual time=0.107..416.741 rows=1292314 loops=1)\n>\n> -> Hash (cost=7382.59..7382.59 rows=371759 width=47)\n> (actual 
time=185.742..185.742 rows=371759 loops=1)\n>\n>                     Buckets: 65536  Batches: 1  Memory Usage: 28951kB\n>\n>                     ->  Seq Scan on seg1 b  (cost=0.00..7382.59\n> rows=371759 width=47) (actual time=0.016..62.123 rows=371759 loops=1)\n>\n> Total runtime: 71177.675 ms",
"msg_date": "Mon, 1 Sep 2014 08:21:58 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance with hstore vs. non-hstore"
},
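Pavel's last point, indexes over (key, value) pairs, refers to the hstore operator classes for GIN/GiST. A minimal sketch of what he means (the index name and the 'social' key are illustrative assumptions, not taken from the thread):

```sql
-- Hypothetical illustration of the (key, value) indexing Pavel mentions.
-- A GIN index on an hstore column (default opclass gin_hstore_ops)
-- accelerates containment and key-existence predicates:
CREATE INDEX weekly_hstore_visits_gin ON weekly_hstore USING gin (visits);

-- Can use the index: rows whose visits hstore contains a key/value pair
SELECT ref_id FROM weekly_hstore WHERE visits @> 'social=>10'::hstore;

-- Can use the index: rows where a given key exists at all
SELECT ref_id FROM weekly_hstore WHERE visits ? 'social';
```

Neither query in the thread filters on the hstore column itself, which is why Pavel notes this benefit goes unused here.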
{
"msg_contents": "Thank you Pavel.\r\n\r\nThe cost of unpacking hstore comparing to non-hstore could be calculated by:\r\nSeq scan on hstore table + hash join with seg1 table:\r\nHstore: 416.741+ 34619.879 =~34 seconds\r\nNon-hstore: 8858.594 +26477.652 =~ 34 seconds\r\n\r\nThe subsequent hash-aggregate and sort operation should be working on the unpacked hstore rows which has same row counts as non-hstore table. however, timing on those operations actually makes the big difference.\r\n\r\nI don’t quite get why…\r\n\r\nThanks,\r\nSuya\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Monday, September 01, 2014 4:22 PM\r\nTo: Huang, Suya\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] query performance with hstore vs. non-hstore\r\n\r\nHi\r\nIn this use case hstore should not help .. there is relative high overhead related with unpacking hstore -- so classic schema is better.\r\nHstore should not to replace well normalized schema - it should be a replace for some semi normalized structures as EAV.\r\nHstore can have some profit from TOAST .. comprimation, less system data overhead, but this advantage started from some length of data. You should to see this benefit on table size. When table with HStore is less than without, then there is benefit of Hstore. Last benefit of Hstore are indexes over tuple (key, value) .. but you don't use it.\r\nRegards\r\n\r\nPavel\r\n\r\n2014-09-01 8:10 GMT+02:00 Huang, Suya <[email protected]<mailto:[email protected]>>:\r\nHi ,\r\n\r\nI’m tweaking table layout to get better performance of query. One table doesn’t use hstore but expand all metrics of cha_type to different rows. The other table has hstore for metrics column as cha_type->metrics so it has less records than the first one.\r\n\r\nI would be expecting the query on seconds table has better performance than the first one. However, it’s not the case at all. I’m wondering if there’s something wrong with my execution plan? 
With the hstore table, the optimizer has totally wrong estimation on row counts at hash aggregate stage and it takes 34 seconds on hash-join,25 seconds on hash-aggregate, 10 seconds on sort. However, with non-hstore table, it takes 17 seconds on hash join, 18 seconds on hashaggregate and 2 seconds on sort.\r\n\r\nCan someone help me to explain why this is happening? And is there a way to fine-tune the query?\r\n\r\nTable structure\r\n\r\ndev=# \\d+ weekly_non_hstore\r\n Table \"test.weekly_non_hstore\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n----------+------------------------+-----------+----------+--------------+-------------\r\ndate | date | | plain | |\r\nref_id | character varying(256) | | extended | |\r\ncha_typel | text | | extended | |\r\nvisits | double precision | | plain | |\r\npages | double precision | | plain | |\r\nduration | double precision | | plain | |\r\nHas OIDs: no\r\nTablespace: \"tbs_data\"\r\n\r\ndev=# \\d+ weekly_hstore\r\n Table \"test.weekly_hstore\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n----------+------------------------+-----------+----------+--------------+-------------\r\ndate | date | | plain | |\r\nref_id | character varying(256) | | extended | |\r\nvisits | hstore | | extended | |\r\npages | hstore | | extended | |\r\nduration | hstore | | extended | |\r\nHas OIDs: no\r\nTablespace: \"tbs_data\"\r\n\r\ndev=# select count(*) from weekly_non_hstore;\r\n count\r\n----------\r\n71818882\r\n(1 row)\r\n\r\n\r\ndev=# select count(*) from weekly_hstore;\r\n count\r\n---------\r\n1292314\r\n(1 row)\r\n\r\n\r\nQuery\r\ndev=# explain analyze select cha_type,sum(visits) from weekly_non_hstore a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits) desc;\r\n QUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nSort 
(cost=3674073.37..3674431.16 rows=143115 width=27) (actual time=47520.637..47969.658 rows=3639539 loops=1)\r\n Sort Key: (sum(a.visits))\r\n Sort Method: quicksort Memory: 391723kB\r\n -> HashAggregate (cost=3660386.70..3661817.85 rows=143115 width=27) (actual time=43655.637..44989.202 rows=3639539 loops=1)\r\n -> Hash Join (cost=12029.58..3301286.54 rows=71820032 width=27) (actual time=209.789..26477.652 rows=36962761 loops=1)\r\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\r\n -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.32 rows=71820032 width=75) (actual time=0.053..8858.594 rows=71818882 loops=1)\r\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=209.189..209.189 rows=371759 loops=1)\r\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\r\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.014..64.695 rows=371759 loops=1)\r\nTotal runtime: 48172.405 ms\r\n(11 rows)\r\n\r\nTime: 48173.569 ms\r\n\r\ndev=# explain analyze select cha_type, sum(visits) from (select (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type order by sum(visits) desc;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------\r\nSort (cost=7599039.89..7599040.39 rows=200 width=64) (actual time=70424.561..70986.202 rows=3639539 loops=1)\r\n Sort Key: (sum((((each(a.visits)).value)::numeric)))\r\n Sort Method: quicksort Memory: 394779kB\r\n -> HashAggregate (cost=7599030.24..7599032.24 rows=200 width=64) (actual time=59267.120..60502.647 rows=3639539 loops=1)\r\n -> Hash Join (cost=12029.58..2022645.24 rows=371759000 width=184) (actual time=186.140..34619.879 rows=36962761 loops=1)\r\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\r\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=230) (actual 
time=0.107..416.741 rows=1292314 loops=1)\r\n         ->  Hash  (cost=7382.59..7382.59 rows=371759 width=47) (actual time=185.742..185.742 rows=371759 loops=1)\r\n               Buckets: 65536  Batches: 1  Memory Usage: 28951kB\r\n               ->  Seq Scan on seg1 b  (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.016..62.123 rows=371759 loops=1)\r\nTotal runtime: 71177.675 ms",
"msg_date": "Mon, 1 Sep 2014 06:54:34 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query performance with hstore vs. non-hstore"
},
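One thing worth checking about Suya's query shape (an assumption to test, not something measured in the thread): writing `(each(visits)).key` and `(each(visits)).value::numeric` in the target list can make the planner expand the set-returning function once per referenced column, so each hstore may be unpacked twice and the output of `each()` flows through the HashAggregate as freshly computed expressions. Moving the call into the FROM list with LATERAL evaluates `each()` exactly once per row:

```sql
-- Hypothetical rewrite of the hstore query from the thread:
-- unpack each visits hstore once per row via LATERAL instead of
-- expanding (each(visits)).key / (each(visits)).value separately.
SELECT kv.key AS cha_type, sum(kv.value::numeric) AS visits
FROM weekly_hstore a
JOIN seg1 b ON a.ref_id = b.ref_id
CROSS JOIN LATERAL each(a.visits) AS kv(key, value)
GROUP BY kv.key
ORDER BY sum(kv.value::numeric) DESC;
```

If the hash-aggregate and sort stages get cheaper with this form, the extra time Suya sees after the join is likely repeated `each()` evaluation plus per-value text-to-numeric casts, not the aggregation itself.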
{
"msg_contents": "2014-09-01 8:54 GMT+02:00 Huang, Suya <[email protected]>:\n\n> Thank you Pavel.\n>\n>\n>\n> The cost of unpacking hstore comparing to non-hstore could be calculated\n> by:\n>\n> Seq scan on hstore table + hash join with seg1 table:\n>\n> Hstore: 416.741+ 34619.879 =~34 seconds\n>\n> Non-hstore: 8858.594 +26477.652 =~ 34 seconds\n>\n>\n>\n> The subsequent hash-aggregate and sort operation should be working on the\n> unpacked hstore rows which has same row counts as non-hstore table.\n> however, timing on those operations actually makes the big difference.\n>\n\n>\n> I don’t quite get why…\n>\n\nThese values can be messy -- timing in EXPLAIN ANALYZE has relative big\nimpact but different for some methods\n\ntry to watch complete time for EXPLAIN (ANALYZE, TIMING OFF)\n\n\n>\n>\n> Thanks,\n>\n> Suya\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* Monday, September 01, 2014 4:22 PM\n> *To:* Huang, Suya\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] query performance with hstore vs. non-hstore\n>\n>\n>\n> Hi\n>\n> In this use case hstore should not help .. there is relative high overhead\n> related with unpacking hstore -- so classic schema is better.\n>\n> Hstore should not to replace well normalized schema - it should be a\n> replace for some semi normalized structures as EAV.\n>\n> Hstore can have some profit from TOAST .. comprimation, less system data\n> overhead, but this advantage started from some length of data. You should\n> to see this benefit on table size. When table with HStore is less than\n> without, then there is benefit of Hstore. Last benefit of Hstore are\n> indexes over tuple (key, value) .. but you don't use it.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n> 2014-09-01 8:10 GMT+02:00 Huang, Suya <[email protected]>:\n>\n> Hi ,\n>\n>\n>\n> I’m tweaking table layout to get better performance of query. 
One table\n> doesn’t use hstore but expand all metrics of cha_type to different rows.\n> The other table has hstore for metrics column as cha_type->metrics so it\n> has less records than the first one.\n>\n>\n>\n> I would be expecting the query on seconds table has better performance\n> than the first one. However, it’s not the case at all. I’m wondering if\n> there’s something wrong with my execution plan? With the hstore table, the\n> optimizer has totally wrong estimation on row counts at hash aggregate\n> stage and it takes 34 seconds on hash-join,25 seconds on hash-aggregate, 10\n> seconds on sort. However, with non-hstore table, it takes 17 seconds on\n> hash join, 18 seconds on hashaggregate and 2 seconds on sort.\n>\n>\n>\n> Can someone help me to explain why this is happening? And is there a way\n> to fine-tune the query?\n>\n>\n>\n> Table structure\n>\n>\n>\n> dev=# \\d+ weekly_non_hstore\n>\n> Table \"test.weekly_non_hstore\"\n>\n> Column | Type | Modifiers | Storage | Stats target |\n> Description\n>\n>\n> ----------+------------------------+-----------+----------+--------------+-------------\n>\n> date | date | | plain | |\n>\n> ref_id | character varying(256) | | extended | |\n>\n> cha_typel | text | | extended | |\n>\n> visits | double precision | | plain | |\n>\n> pages | double precision | | plain | |\n>\n> duration | double precision | | plain | |\n>\n> Has OIDs: no\n>\n> Tablespace: \"tbs_data\"\n>\n>\n>\n> dev=# \\d+ weekly_hstore\n>\n> Table \"test.weekly_hstore\"\n>\n> Column | Type | Modifiers | Storage | Stats target |\n> Description\n>\n>\n> ----------+------------------------+-----------+----------+--------------+-------------\n>\n> date | date | | plain | |\n>\n> ref_id | character varying(256) | | extended | |\n>\n> visits | hstore | | extended | |\n>\n> pages | hstore | | extended | |\n>\n> duration | hstore | | extended | |\n>\n> Has OIDs: no\n>\n> Tablespace: \"tbs_data\"\n>\n>\n>\n> dev=# select count(*) from weekly_non_hstore;\n>\n> 
count\n>\n> ----------\n>\n> 71818882\n>\n> (1 row)\n>\n>\n>\n>\n>\n> dev=# select count(*) from weekly_hstore;\n>\n> count\n>\n> ---------\n>\n> 1292314\n>\n> (1 row)\n>\n>\n>\n>\n>\n> Query\n>\n> dev=# explain analyze select cha_type,sum(visits) from weekly_non_hstore\n> a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits)\n> desc;\n>\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Sort (cost=3674073.37..3674431.16 rows=143115 width=27) (actual\n> time=47520.637..47969.658 rows=3639539 loops=1)\n>\n> Sort Key: (sum(a.visits))\n>\n> Sort Method: quicksort Memory: 391723kB\n>\n> -> HashAggregate (cost=3660386.70..3661817.85 rows=143115 width=27)\n> (actual time=43655.637..44989.202 rows=3639539 loops=1)\n>\n> -> Hash Join (cost=12029.58..3301286.54 rows=71820032 width=27)\n> (actual time=209.789..26477.652 rows=36962761 loops=1)\n>\n> Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n>\n> -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.32\n> rows=71820032 width=75) (actual time=0.053..8858.594 rows=71818882 loops=1)\n>\n> -> Hash (cost=7382.59..7382.59 rows=371759 width=47)\n> (actual time=209.189..209.189 rows=371759 loops=1)\n>\n> Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n>\n> -> Seq Scan on seg1 b (cost=0.00..7382.59\n> rows=371759 width=47) (actual time=0.014..64.695 rows=371759 loops=1)\n>\n> Total runtime: 48172.405 ms\n>\n> (11 rows)\n>\n>\n>\n> Time: 48173.569 ms\n>\n>\n>\n> dev=# explain analyze select cha_type, sum(visits) from (select\n> (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from\n> weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type\n> order by sum(visits) desc;\n>\n> QUERY\n> PLAN\n>\n>\n> 
---------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n>  Sort  (cost=7599039.89..7599040.39 rows=200 width=64) (actual\n> time=70424.561..70986.202 rows=3639539 loops=1)\n>\n>    Sort Key: (sum((((each(a.visits)).value)::numeric)))\n>\n>    Sort Method: quicksort  Memory: 394779kB\n>\n>    ->  HashAggregate  (cost=7599030.24..7599032.24 rows=200 width=64)\n> (actual time=59267.120..60502.647 rows=3639539 loops=1)\n>\n>          ->  Hash Join  (cost=12029.58..2022645.24 rows=371759000\n> width=184) (actual time=186.140..34619.879 rows=36962761 loops=1)\n>\n>                Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n>\n>                ->  Seq Scan on weekly_hstore a  (cost=0.00..133321.14\n> rows=1292314 width=230) (actual time=0.107..416.741 rows=1292314 loops=1)\n>\n>                ->  Hash  (cost=7382.59..7382.59 rows=371759 width=47)\n> (actual time=185.742..185.742 rows=371759 loops=1)\n>\n>                      Buckets: 65536  Batches: 1  Memory Usage: 28951kB\n>\n>                      ->  Seq Scan on seg1 b  (cost=0.00..7382.59\n> rows=371759 width=47) (actual time=0.016..62.123 rows=371759 loops=1)\n>\n> Total runtime: 71177.675 ms",
"msg_date": "Mon, 1 Sep 2014 09:06:42 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance with hstore vs. non-hstore"
},
{
"msg_contents": "Hi Pavel,\r\n\r\nSee output of explain (analyze,timing off), the total runtime is close to the one enable timing.\r\n\r\ndev=# EXPLAIN (ANALYZE, TIMING OFF) select cha_type,sum(visits) from weekly_non_hstore a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits) desc;\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------------------------------\r\nSort (cost=3674118.09..3674476.91 rows=143528 width=27) (actual rows=3639539 loops=1)\r\n Sort Key: (sum(a.visits))\r\n Sort Method: quicksort Memory: 391723kB\r\n -> HashAggregate (cost=3660388.94..3661824.22 rows=143528 width=27) (actual rows=3639539 loops=1)\r\n -> Hash Join (cost=12029.58..3301288.46 rows=71820096 width=27) (actual rows=36962761 loops=1)\r\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\r\n -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.96 rows=71820096 width=75) (actual rows=71818882 loops=1)\r\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\r\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\r\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\r\nTotal runtime: 42914.194 ms\r\n(11 rows)\r\n\r\n\r\ndev=# explain (analyze, timing off) select cha_type, sum(visits) from (select (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type order by sum(visits) desc;\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------------------------\r\nSort (cost=7599039.89..7599040.39 rows=200 width=64) (actual rows=3639539 loops=1)\r\n Sort Key: (sum((((each(a.visits)).value)::numeric)))\r\n Sort Method: quicksort Memory: 394779kB\r\n -> HashAggregate (cost=7599030.24..7599032.24 rows=200 width=64) (actual rows=3639539 loops=1)\r\n -> Hash Join 
(cost=12029.58..2022645.24 rows=371759000 width=186) (actual rows=36962761 loops=1)\r\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\r\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=232) (actual rows=1292314 loops=1)\r\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\r\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\r\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\r\nTotal runtime: 69521.570 ms\r\n(11 rows)\r\n\r\nThanks,\r\nSuya\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: Monday, September 01, 2014 5:07 PM\r\nTo: Huang, Suya\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] query performance with hstore vs. non-hstore\r\n\r\n\r\n\r\n2014-09-01 8:54 GMT+02:00 Huang, Suya <[email protected]<mailto:[email protected]>>:\r\nThank you Pavel.\r\n\r\nThe cost of unpacking hstore comparing to non-hstore could be calculated by:\r\nSeq scan on hstore table + hash join with seg1 table:\r\nHstore: 416.741+ 34619.879 =~34 seconds\r\nNon-hstore: 8858.594 +26477.652 =~ 34 seconds\r\n\r\nThe subsequent hash-aggregate and sort operation should be working on the unpacked hstore rows which has same row counts as non-hstore table. however, timing on those operations actually makes the big difference.\r\n\r\nI don’t quite get why…\r\n\r\nThese values can be messy -- timing in EXPLAIN ANALYZE has relative big impact but different for some methods\r\ntry to watch complete time for EXPLAIN (ANALYZE, TIMING OFF)\r\n\r\n\r\nThanks,\r\nSuya\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Monday, September 01, 2014 4:22 PM\r\nTo: Huang, Suya\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] query performance with hstore vs. non-hstore\r\n\r\nHi\r\nIn this use case hstore should not help .. 
there is relative high overhead related with unpacking hstore -- so classic schema is better.\r\nHstore should not to replace well normalized schema - it should be a replace for some semi normalized structures as EAV.\r\nHstore can have some profit from TOAST .. comprimation, less system data overhead, but this advantage started from some length of data. You should to see this benefit on table size. When table with HStore is less than without, then there is benefit of Hstore. Last benefit of Hstore are indexes over tuple (key, value) .. but you don't use it.\r\nRegards\r\n\r\nPavel\r\n\r\n2014-09-01 8:10 GMT+02:00 Huang, Suya <[email protected]<mailto:[email protected]>>:\r\nHi ,\r\n\r\nI’m tweaking table layout to get better performance of query. One table doesn’t use hstore but expand all metrics of cha_type to different rows. The other table has hstore for metrics column as cha_type->metrics so it has less records than the first one.\r\n\r\nI would be expecting the query on seconds table has better performance than the first one. However, it’s not the case at all. I’m wondering if there’s something wrong with my execution plan? With the hstore table, the optimizer has totally wrong estimation on row counts at hash aggregate stage and it takes 34 seconds on hash-join,25 seconds on hash-aggregate, 10 seconds on sort. However, with non-hstore table, it takes 17 seconds on hash join, 18 seconds on hashaggregate and 2 seconds on sort.\r\n\r\nCan someone help me to explain why this is happening? 
And is there a way to fine-tune the query?\r\n\r\nTable structure\r\n\r\ndev=# \\d+ weekly_non_hstore\r\n Table \"test.weekly_non_hstore\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n----------+------------------------+-----------+----------+--------------+-------------\r\ndate | date | | plain | |\r\nref_id | character varying(256) | | extended | |\r\ncha_typel | text | | extended | |\r\nvisits | double precision | | plain | |\r\npages | double precision | | plain | |\r\nduration | double precision | | plain | |\r\nHas OIDs: no\r\nTablespace: \"tbs_data\"\r\n\r\ndev=# \\d+ weekly_hstore\r\n Table \"test.weekly_hstore\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n----------+------------------------+-----------+----------+--------------+-------------\r\ndate | date | | plain | |\r\nref_id | character varying(256) | | extended | |\r\nvisits | hstore | | extended | |\r\npages | hstore | | extended | |\r\nduration | hstore | | extended | |\r\nHas OIDs: no\r\nTablespace: \"tbs_data\"\r\n\r\ndev=# select count(*) from weekly_non_hstore;\r\n count\r\n----------\r\n71818882\r\n(1 row)\r\n\r\n\r\ndev=# select count(*) from weekly_hstore;\r\n count\r\n---------\r\n1292314\r\n(1 row)\r\n\r\n\r\nQuery\r\ndev=# explain analyze select cha_type,sum(visits) from weekly_non_hstore a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits) desc;\r\n QUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nSort (cost=3674073.37..3674431.16 rows=143115 width=27) (actual time=47520.637..47969.658 rows=3639539 loops=1)\r\n Sort Key: (sum(a.visits))\r\n Sort Method: quicksort Memory: 391723kB\r\n -> HashAggregate (cost=3660386.70..3661817.85 rows=143115 width=27) (actual time=43655.637..44989.202 rows=3639539 loops=1)\r\n -> Hash Join (cost=12029.58..3301286.54 rows=71820032 width=27) (actual 
time=209.789..26477.652 rows=36962761 loops=1)\r\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\r\n -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.32 rows=71820032 width=75) (actual time=0.053..8858.594 rows=71818882 loops=1)\r\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=209.189..209.189 rows=371759 loops=1)\r\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\r\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.014..64.695 rows=371759 loops=1)\r\nTotal runtime: 48172.405 ms\r\n(11 rows)\r\n\r\nTime: 48173.569 ms\r\n\r\ndev=# explain analyze select cha_type, sum(visits) from (select (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type order by sum(visits) desc;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------\r\nSort (cost=7599039.89..7599040.39 rows=200 width=64) (actual time=70424.561..70986.202 rows=3639539 loops=1)\r\n Sort Key: (sum((((each(a.visits)).value)::numeric)))\r\n Sort Method: quicksort Memory: 394779kB\r\n -> HashAggregate (cost=7599030.24..7599032.24 rows=200 width=64) (actual time=59267.120..60502.647 rows=3639539 loops=1)\r\n -> Hash Join (cost=12029.58..2022645.24 rows=371759000 width=184) (actual time=186.140..34619.879 rows=36962761 loops=1)\r\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\r\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=230) (actual time=0.107..416.741 rows=1292314 loops=1)\r\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=185.742..185.742 rows=371759 loops=1)\r\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\r\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.016..62.123 rows=371759 loops=1)\r\nTotal runtime: 71177.675 ms\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nHi Pavel,\n 
\nSee output of explain (analyze,timing off), the total runtime is close to the one enable timing.\n \ndev=# EXPLAIN (ANALYZE, TIMING OFF) select cha_type,sum(visits) from weekly_non_hstore a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits)\r\n desc;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=3674118.09..3674476.91 rows=143528 width=27) (actual rows=3639539 loops=1)\n Sort Key: (sum(a.visits))\n Sort Method: quicksort Memory: 391723kB\n -> HashAggregate (cost=3660388.94..3661824.22 rows=143528 width=27) (actual rows=3639539 loops=1)\n -> Hash Join (cost=12029.58..3301288.46 rows=71820096 width=27) (actual rows=36962761 loops=1)\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.96 rows=71820096 width=75) (actual rows=71818882 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\nTotal runtime: 42914.194 ms\n(11 rows)\n \n \ndev=# explain (analyze, timing off) select cha_type, sum(visits) from (select (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from weekly_hstore\r\n a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type order by sum(visits) desc;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=7599039.89..7599040.39 rows=200 width=64) (actual rows=3639539 loops=1)\n Sort Key: (sum((((each(a.visits)).value)::numeric)))\n Sort Method: quicksort Memory: 394779kB\n -> HashAggregate (cost=7599030.24..7599032.24 rows=200 width=64) (actual rows=3639539 loops=1)\n -> Hash Join (cost=12029.58..2022645.24 rows=371759000 width=186) (actual rows=36962761 
loops=1)\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=232) (actual rows=1292314 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual rows=371759 loops=1)\nTotal runtime: 69521.570 ms\n(11 rows)\n \nThanks,\nSuya\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Monday, September 01, 2014 5:07 PM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] query performance with hstore vs. non-hstore\n \n\n \n\n \n\n2014-09-01 8:54 GMT+02:00 Huang, Suya <[email protected]>:\n\n\nThank you Pavel.\n \nThe cost of unpacking hstore comparing to non-hstore could be calculated by:\nSeq scan on hstore table + hash join with seg1 table:\nHstore: 416.741+ 34619.879 =~34 seconds\nNon-hstore: 8858.594 +26477.652 =~ 34 seconds\n \nThe subsequent hash-aggregate and sort operation should be working on the unpacked hstore rows which\r\n has same row counts as non-hstore table. however, timing on those operations actually makes the big difference.\n\n\n\n\n\n\n \nI don’t quite get why…\n\n\n\n\n \n\n\nThese values can be messy -- timing in EXPLAIN ANALYZE has relative big impact but different for some methods\n\n\ntry to watch complete time for EXPLAIN (ANALYZE, TIMING OFF)\n\n\n \n\n\n\n\n \nThanks,\nSuya\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: Monday, September 01, 2014 4:22 PM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] query performance with hstore vs. non-hstore\n\n\n \n\n\n\n\n\nHi\n\nIn this use case hstore should not help .. 
there is relative high overhead related with unpacking hstore -- so classic schema is better.\r\n\n\nHstore should not to replace well normalized schema - it should be a replace for some semi normalized structures as EAV.\n\nHstore can have some profit from TOAST .. comprimation, less system data overhead, but this advantage started from some length of data. You should to see this benefit on table size. When\r\n table with HStore is less than without, then there is benefit of Hstore. Last benefit of Hstore are indexes over tuple (key, value) .. but you don't use it.\n\nRegards\n\r\nPavel\n\n\n \n\n2014-09-01 8:10 GMT+02:00 Huang, Suya <[email protected]>:\n\n\nHi ,\n \nI’m tweaking table layout to get better performance of query. One table doesn’t use hstore but expand all metrics of cha_type to different rows. The other table has hstore for metrics\r\n column as cha_type->metrics so it has less records than the first one. \n \nI would be expecting the query on seconds table has better performance than the first one. However, it’s not the case at all. I’m wondering if there’s something wrong with my execution\r\n plan? With the hstore table, the optimizer has totally wrong estimation on row counts at hash aggregate stage and it takes 34 seconds on hash-join,25 seconds on hash-aggregate, 10 seconds on sort. However, with non-hstore table, it takes 17 seconds on hash\r\n join, 18 seconds on hashaggregate and 2 seconds on sort.\n \nCan someone help me to explain why this is happening? 
And is there a way to fine-tune the query?\n \nTable structure\n \ndev=# \\d+ weekly_non_hstore\n Table \"test.weekly_non_hstore\"\n Column | Type | Modifiers | Storage | Stats target | Description\n----------+------------------------+-----------+----------+--------------+-------------\ndate | date | | plain | |\nref_id | character varying(256) | | extended | |\ncha_typel | text | | extended | |\nvisits | double precision | | plain | |\npages | double precision | | plain | |\nduration | double precision | | plain | |\nHas OIDs: no\nTablespace: \"tbs_data\"\n \ndev=# \\d+ weekly_hstore\n Table \"test.weekly_hstore\"\n Column | Type | Modifiers | Storage | Stats target | Description\n----------+------------------------+-----------+----------+--------------+-------------\ndate | date | | plain | |\nref_id | character varying(256) | | extended | |\nvisits | hstore | | extended | |\npages | hstore | | extended | |\nduration | hstore | | extended | |\nHas OIDs: no\nTablespace: \"tbs_data\"\n \ndev=# select count(*) from weekly_non_hstore;\n count\n----------\n71818882\n(1 row)\n \n \ndev=# select count(*) from weekly_hstore;\n count\n---------\n1292314\n(1 row)\n \n \nQuery\r\n\ndev=# explain analyze select cha_type,sum(visits) from weekly_non_hstore a join seg1 b on a.ref_id=b.ref_id group by cha_type order by sum(visits) desc;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=3674073.37..3674431.16 rows=143115 width=27) (actual time=47520.637..47969.658 rows=3639539 loops=1)\n Sort Key: (sum(a.visits))\n Sort Method: quicksort Memory: 391723kB\n -> HashAggregate (cost=3660386.70..3661817.85 rows=143115 width=27) (actual time=43655.637..44989.202 rows=3639539 loops=1)\n -> Hash Join (cost=12029.58..3301286.54 rows=71820032 width=27) (actual time=209.789..26477.652 rows=36962761 loops=1)\n Hash Cond: ((a.ref_id)::text = 
(b.ref_id)::text)\n -> Seq Scan on weekly_non_hstore a (cost=0.00..1852856.32 rows=71820032 width=75) (actual time=0.053..8858.594 rows=71818882 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=209.189..209.189 rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.014..64.695 rows=371759 loops=1)\nTotal runtime: 48172.405 ms\n(11 rows)\n \nTime: 48173.569 ms\n \ndev=# explain analyze select cha_type, sum(visits) from (select (each(visits)).key as cha_type,(each(visits)).value::numeric as visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id\r\n )foo group by cha_type order by sum(visits) desc; \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=7599039.89..7599040.39 rows=200 width=64) (actual time=70424.561..70986.202 rows=3639539 loops=1)\n Sort Key: (sum((((each(a.visits)).value)::numeric)))\n Sort Method: quicksort Memory: 394779kB\n -> HashAggregate (cost=7599030.24..7599032.24 rows=200 width=64) (actual time=59267.120..60502.647 rows=3639539 loops=1)\n -> Hash Join (cost=12029.58..2022645.24 rows=371759000 width=184) (actual time=186.140..34619.879 rows=36962761 loops=1)\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=230) (actual time=0.107..416.741 rows=1292314 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=185.742..185.742 rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=0.016..62.123 rows=371759 loops=1)\nTotal runtime: 71177.675 ms",
"msg_date": "Tue, 2 Sep 2014 00:53:27 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query performance with hstore vs. non-hstore"
},
{
"msg_contents": "Huang, Suya wrote\n> See output of explain (analyze,timing off), the total runtime is close to\n> the one enable timing.\n\nCalling 43s \"close to\" 70s doesn't sound right...\n\n\n> dev=# explain (analyze, timing off) select cha_type, sum(visits) from\n> (select (each(visits)).key as cha_type,(each(visits)).value::numeric as\n> visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group\n> by cha_type order by sum(visits) desc;\n\nWhat version of PostgreSQL are you using?\n\nTwo calls to each() and cast to numeric are not free.\n\nYour sequential scan savings is nearly 9 seconds but you lose all of that,\nand more, when PostgreSQL evaluates the result of the scan and has to\nprocess the each() and the cast before it performs the join against the\nexpanded result. There is no planner node for this activity but it does\ncost time - in this case more time than it would take to simply store the\nnative data types in separate rows.\n\nYou really should expand the hstore after the join (i.e., in the top-most\nselect-list) but in this case since the join removed hardly any rows the\ngain from doing so would be minimal. The idea being you should not expand\nthe hstore of any row that fails the join condition since it will not end up\nin the final result anyway.\n\nAlso, in this specific case, the call to each(...).key is pointless - you\nnever use the data.\n\nIf you did need to use both columns, and are using 9.3, you should re-write\nthis to use LATERAL.\n\nIn 9.2- you, possibly using a CTE, could do something like this:\n\nSELECT (each).* FROM (\nSELECT each(hs) FROM ( VALUES('k=>1'::hstore) ) h (hs)\n) src\n\nThis is a single call to each(), in a subquery, which result is then\nexpanded using (col).* notation in the parent query. 
This avoids calling\neach twice - and note that (each(...).*) does not work to avoid the\ndouble-call - you have to use a subquery / a CTE one to ensure that it is\nnot collapsed (offset 0 should work too but I find the CTE one a little\ncleaner personally).\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/query-performance-with-hstore-vs-non-hstore-tp5817109p5817281.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 1 Sep 2014 20:38:21 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query performance with hstore vs. non-hstore"
},
{
"msg_contents": "Hi David,\n\nThanks for the reply.\n\n>Calling 43s \"close to\" 70s doesn't sound right...\n\nOops, I'm not saying 43s close to 70s... I mean that the plan generated by disable timing for explain plan doesn't make obvious difference comparing to the earlier plan I sent out which enabled timing.\n\n>What version of PostgreSQL are you using?\n>\n>Two calls to each() and cast to numeric are not free.\n>\n>Your sequential scan savings is nearly 9 seconds but you lose all of that, and more, when PostgreSQL evaluates the result of the scan and has to process the each() and >the cast before it performs the join against the expanded result. There is no planner node for this activity but it does cost time - in this case more time than it >would take to simply store the native data types in separate rows.\n>\n>You really should expand the hstore after the join (i.e., in the top-most\n>select-list) but in this case since the join removed hardly any rows the gain from doing so would be minimal. The idea being you should not expand the hstore of any row >that fails the join condition since it will not end up in the final result anyway.\n>\n>Also, in this specific case, the call to each(...).key is pointless - you never use the data.\n>\n>If you did need to use both columns, and are using 9.3, you should re-write this to use LATERAL.\n>\n>In 9.2- you, possibly using a CTE, could do something like this:\n>\n>SELECT (each).* FROM (\n>SELECT each(hs) FROM ( VALUES('k=>1'::hstore) ) h (hs)\n>) src\n>\n>This is a single call to each(), in a subquery, which result is then expanded using (col).* notation in the parent query. This avoids calling each twice - and note that >(each(...).*) does not work to avoid the double-call - you have to use a subquery / a CTE one to ensure that it is not collapsed (offset 0 should work too but I find the >CTE one a little cleaner personally).\n>\n\nI'm using Postgresql 9.3.4.\nI changed the query as you suggested. 
The execution time are still similar to the original one.\n\ndev=# explain analyze select (each).key as cha_type, sum((each).value::numeric) as visits from (select each(visits) from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo group by cha_type order by visits desc;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=9455046.69..9455047.19 rows=200 width=32) (actual time=70928.881..71425.833 rows=3639539 loops=1)\n Sort Key: (sum(((foo.each).value)::numeric))\n Sort Method: quicksort Memory: 394779kB\n -> HashAggregate (cost=9455037.05..9455039.05 rows=200 width=32) (actual time=60077.937..61425.469 rows=3639539 loops=1)\n -> Subquery Scan on foo (cost=12029.58..5737447.05 rows=371759000 width=32) (actual time=281.658..23912.400 rows=36962761 loops=1)\n -> Hash Join (cost=12029.58..2019857.05 rows=371759000 width=186) (actual time=281.655..18759.265 rows=36962761 loops=1)\n Hash Cond: ((a.ref_id)::text = (b.ref_id)::text)\n -> Seq Scan on weekly_hstore a (cost=0.00..133321.14 rows=1292314 width=232) (actual time=11.141..857.959 rows=1292314 loops=1)\n -> Hash (cost=7382.59..7382.59 rows=371759 width=47) (actual time=262.722..262.722 rows=371759 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 28951kB\n -> Seq Scan on seg1 b (cost=0.00..7382.59 rows=371759 width=47) (actual time=11.701..113.859 rows=371759 loops=1)\n Total runtime: 71626.871 ms\n(12 rows)\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David G Johnston\nSent: Tuesday, September 02, 2014 1:38 PM\nTo: [email protected]\nSubject: Re: [PERFORM] query performance with hstore vs. 
non-hstore\n\nHuang, Suya wrote\n> See output of explain (analyze,timing off), the total runtime is close \n> to the one enable timing.\n\nCalling 43s \"close to\" 70s doesn't sound right...\n\n\n> dev=# explain (analyze, timing off) select cha_type, sum(visits) from \n> (select (each(visits)).key as cha_type,(each(visits)).value::numeric \n> as visits from weekly_hstore a join seg1 b on a.ref_id=b.ref_id )foo \n> group by cha_type order by sum(visits) desc;\n\nWhat version of PostgreSQL are you using?\n\nTwo calls to each() and cast to numeric are not free.\n\nYour sequential scan savings is nearly 9 seconds but you lose all of that, and more, when PostgreSQL evaluates the result of the scan and has to process the each() and the cast before it performs the join against the expanded result. There is no planner node for this activity but it does cost time - in this case more time than it would take to simply store the native data types in separate rows.\n\nYou really should expand the hstore after the join (i.e., in the top-most\nselect-list) but in this case since the join removed hardly any rows the gain from doing so would be minimal. The idea being you should not expand the hstore of any row that fails the join condition since it will not end up in the final result anyway.\n\nAlso, in this specific case, the call to each(...).key is pointless - you never use the data.\n\nIf you did need to use both columns, and are using 9.3, you should re-write this to use LATERAL.\n\nIn 9.2- you, possibly using a CTE, could do something like this:\n\nSELECT (each).* FROM (\nSELECT each(hs) FROM ( VALUES('k=>1'::hstore) ) h (hs)\n) src\n\nThis is a single call to each(), in a subquery, which result is then expanded using (col).* notation in the parent query. 
This avoids calling each twice - and note that (each(...).*) does not work to avoid the double-call - you have to use a subquery / a CTE one to ensure that it is not collapsed (offset 0 should work too but I find the CTE one a little cleaner personally).\n\nDavid J.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/query-performance-with-hstore-vs-non-hstore-tp5817109p5817281.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Sep 2014 07:33:08 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query performance with hstore vs. non-hstore"
}
] |
[
{
"msg_contents": "We use benchmarksql to start tpcc test in postgresql 9.3.3.\nBefore test we set benchmarksql client number about 800. And we increase the hash partitions from 16 to 1024 , in order to reduce the hash partition locks competition.\nWe expect that after increase the number of partitions, reduces lock competition, TPMC should be increased. But the test results on the contrary, after modified to 1024, TPMC did not increase, but decrease.\nWhy such result?\n\nWe modify the following macro definition:\nNUM_BUFFER_PARTITIONS 1024\nLOG2_NUM_PREDICATELOCK_PARTITIONS 10\nLOG2_NUM_LOCK_PARTITIONS 10\n\n\n\n\n\n\n\n\n\n\n \nWe use benchmarksql to start tpcc test in postgresql 9.3.3.\nBefore test we set benchmarksql client number about 800. And we increase the hash partitions from 16 to 1024 , in order to reduce the hash partition locks competition.\nWe expect that after increase the number of partitions, reduces lock competition, TPMC should be increased. But the test results on the contrary, after modified to 1024, TPMC did not increase, but decrease.\n\nWhy such result?\n \nWe modify the following macro definition:\nNUM_BUFFER_PARTITIONS 1024\nLOG2_NUM_PREDICATELOCK_PARTITIONS 10\nLOG2_NUM_LOCK_PARTITIONS 10",
"msg_date": "Tue, 2 Sep 2014 06:59:39 +0000",
"msg_from": "Xiaoyulei <[email protected]>",
"msg_from_op": true,
"msg_subject": "why after increase the hash table partitions, tpmc decrease"
},
{
"msg_contents": "Hi,\n\n> We modify the following macro definition:\n> NUM_BUFFER_PARTITIONS 1024\n> LOG2_NUM_PREDICATELOCK_PARTITIONS 10\n> LOG2_NUM_LOCK_PARTITIONS 10\n\nIME, increase in NUM_BUFFER_PARTITIONS is effective but that in\nLOG2_NUM_LOCK_PARTITIONS results in performance degradation. Probably\nbecause it leads to an increase in overhead of LockReleaseAll() in\nsrc/backend/storage/lmgr/lock.c. I recommends that LOG2_NUM_LOCK_PARTITIONS\nshould not be increased so much.\n\nBest regards,\nTakashi Horikawa\n--\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Xiaoyulei\n> Sent: Tuesday, September 02, 2014 4:00 PM\n> To: [email protected]\n> Cc: yetao\n> Subject: [PERFORM] why after increase the hash table partitions, tpmc\n> decrease\n> \n> \n> \n> We use benchmarksql to start tpcc test in postgresql 9.3.3.\n> \n> Before test we set benchmarksql client number about 800. And we increase\n> the hash partitions from 16 to 1024 , in order to reduce the hash\npartition\n> locks competition.\n> \n> We expect that after increase the number of partitions, reduces lock\n> competition, TPMC should be increased. But the test results on the\ncontrary,\n> after modified to 1024, TPMC did not increase, but decrease.\n> \n> Why such result?\n> \n> \n> \n> We modify the following macro definition:\n> \n> NUM_BUFFER_PARTITIONS 1024\n> \n> LOG2_NUM_PREDICATELOCK_PARTITIONS 10\n> \n> LOG2_NUM_LOCK_PARTITIONS 10\n> \n>",
"msg_date": "Wed, 3 Sep 2014 00:12:52 +0000",
"msg_from": "Takashi Horikawa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why after increase the hash table partitions, tpmc decrease"
}
] |
[
{
"msg_contents": "Hello there,\r\nI use PostgreSQL 9.1\r\n\n\nThe scenario is:\r\n\n\nI receive heavy insertion into Table1 (about 100 rows a sec). For \neach new entry, I have to check it with the previous and next ones \n(check if those items are inside an area using ST_DWithin). Depending on\n the result, what I need to do is: use the new entry (joining another \ntable) to insert/update into new table. \r\n\n\nMy Questions are: \r\n\n\n\nI used to use Trigger to do the checking and insertion/updating. the header is as follows:CREATE TRIGGER ts_trigger\n AFTER INSERT\n ON table1\n FOR EACH ROW\n EXECUTE PROCEDURE test_trigger();But I don't think it's the efficient way to do it.\r\n\nI'm interested in using batches. I'd like to understand the technique but cannot find good resources for this.\nSome advise me to use temp table, but I don't think it would be useful in my case.\r\n \t\t \t \t\t \n\n\n\nHello there,I use PostgreSQL 9.1\n\nThe scenario is:\n\nI receive heavy insertion into Table1 (about 100 rows a sec). For \neach new entry, I have to check it with the previous and next ones \n(check if those items are inside an area using ST_DWithin). Depending on\n the result, what I need to do is: use the new entry (joining another \ntable) to insert/update into new table. \n\nMy Questions are: \n\nI used to use Trigger to do the checking and insertion/updating. the header is as follows:CREATE TRIGGER ts_trigger\n AFTER INSERT\n ON table1\n FOR EACH ROW\n EXECUTE PROCEDURE test_trigger();But I don't think it's the efficient way to do it.\nI'm interested in using batches. I'd like to understand the technique but cannot find good resources for this.\nSome advise me to use temp table, but I don't think it would be useful in my case.",
"msg_date": "Tue, 2 Sep 2014 17:44:13 +0300",
"msg_from": "Shadin A <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implementing a functionality for processing heavy insertion"
}
] |